that case the discrepancy was not improved. \section{Summary} The formation energy, $\Delta E$, magnetic, $S_{magn}$, and configuration, $S_{conf}$, entropy contributions were calculated and analyzed over the full existence range of the FeV $\sigma$-phase using electronic band structure calculations by means of the KKR method. It was found that the $\Delta E$-values strongly depend on the Fe concentration, and their variation observed for different site occupancies is characteristic of a given lattice site. The changes also strongly depend on the number of Fe atoms on the sites. The $S_{magn}$ term exhibits a similar, increasing dependence on the sample's composition, but for a given composition it depends only weakly on the lattice site. On the other hand, the composition dependence of the $S_{conf}$ term was found to be weak, although for a given composition $S_{conf}$ shows a well-defined dependence on the lattice site, and for each site, a rather strong dependence on the number of Fe atoms occupying the site. The sublattice occupancies were determined for various compositions and temperatures by minimizing the free energy. The occupancies of the A and D sites were found to be almost independent of $T$ and $x$, while
the occupancies of the B, C and E sites showed a higher sensitivity to $T$ and $x$. The KKR calculations combined with the analysis of the entropy contributions clearly showed that Fe atoms preferentially occupy the A and D sites in the FeV $\sigma$-phase. This agrees well with the neutron diffraction measurements, and it confirms the general observation for the $\sigma$-phase in binary alloy systems that the smaller atoms (in this case Fe) tend to reside mostly on the sites having the shortest distances to their nearest neighbors (A and D), while the bigger atoms (here V) prefer to occupy the sites having more free space to accommodate them (B, C, E). \begin{acknowledgments} This work, supported by the European Communities under the contract of Association between EURATOM and IPPLM, was carried out within the framework of the European Fusion Development Agreement. It was also supported by the Ministry of Science and Higher Education, Warsaw (grant No. N N202 228837). We also acknowledge financial support from the Accelerated Metallurgy Project, which is co-funded by the European Commission in the 7th Framework Programme (contract NMP4-LA-2011-263206), by the European Space Agency and by the individual partner organizations. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} Turbulence in galaxy clusters can arise from many sources, ranging from mergers and cosmological structure formation \citep{lau09,vazza09,vazza11,zuhone10}, to galactic wakes \citep{kim07,ruszkowski11}, and AGN feedback (\citet{mcnamara07}, and references therein). It is generally expected to be highly subsonic, with Mach numbers ${\cal M}\sim 0.1-0.5$. Turbulence has wide-ranging and pivotal effects on ICM physics. It could dominate metal transport \citep{rebusco05,simionescu08}, accelerate particles, as required in a prominent model of radio halos \citep{brunetti01,brunetti07}, generate and amplify magnetic fields \citep{subramanian06,ryu08,cho09,ruszkowski11a}, and provide pressure support, thus impacting X-ray mass measurements \citep{lau09}, and Sunyaev-Zeldovich (SZ) measurements of the thermal pressure \citep{shaw10,battaglia11,battaglia11a,parrish12}. The unknown level of non-thermal pressure support introduces systematic deviations in the mass calibration of clusters and could strongly affect their use for cosmology. A particularly interesting effect of turbulence is its impact on the thermal state of the gas, potentially allowing it to stave off catastrophic cooling. It can do so by dissipation of turbulent motions \citep{churazov04,kunz11}, or turbulent diffusion of heat \citep{cho03,kim03,dennis05}. More subtly, it can do so by affecting magnetic field topology; by randomizing the $B$-field, it can restore thermal conduction to $\sim 1/3$ of the Spitzer rate \citep{ruszkowski10,ruszkowski11,parrish10}. Besides turbulence, a variety of bulk motions such as streaming, shocks, and sloshing have been observed\footnote{While rotation has not been directly seen, it is also expected from cosmological simulations \citep{lau11}. Its effects are generally too small to be detected by the methods discussed in this paper.}. Such (often laminar) gas motions are interesting in their own right. For instance, gas sloshing in the potential well of clusters, which produces observed cold fronts---contact discontinuities between gas of very different entropies---has yielded information about hydrodynamic instabilities, magnetic fields, thermal conductivity and viscosity of the ICM \citep[and references therein]{Markevitch2007}. Current observational constraints on ICM turbulence are fairly weak, and mostly indirect. They come from the analysis of pressure maps \citep{schuecker04}, the lack of detection of resonant-line scattering \citep{churazov04,werner10}, Faraday rotation maps \citep{vogt05,enslin06}, and deviations from hydrostatic equilibrium with thermal pressure alone \citep{Churazov2008,churazov10,Zhang2008}. In general, these studies constrain cluster cores and either place upper bounds on turbulence, or indicate (with large uncertainties) that it could be present with energy densities $\sim 5-30\%$ of the thermal value. The energy density in turbulence is expected to increase strongly with radius \citep{shaw10,battaglia11}, though observational evidence for this is indirect. The most direct means to constrain gas motions is through Doppler broadening of strong emission lines, but such broadening remains undetected with current technology. 
By examining the widths of emission lines with XMM RGS, \citet{Sanders2010} found a 90\% upper limit of 274 ${\rm km \, s^{-1}}$ (13\% of the sound speed) on the turbulent velocity in the inner 30 kpc of Abell 1835; analysis of other systems provides much weaker bounds ($~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 500 \, {\rm km \, s^{-1}}$; \citet{sanders11}). Numerical simulations provide further insights \citep[e.g.][]{Lau2009,Vazza2009, Vazza2010}. However, due to limited resolution and frequent exclusion of important physical ingredients such as AGN jets, magnetic fields, radiative cooling, and anisotropic viscosity, it is difficult to draw robust conclusions. The forthcoming Astro-H mission\footnote{http://astro-h.isas.jaxa.jp/} (launch date 2014) represents our best hope of gaining a robust understanding of gas motions in the ICM\footnote{Much farther in the future, the ATHENA mission (http://sci.esa.int/ixo) could significantly advance the same goals.}. With unprecedented spectral resolution (FWHM $\sim$ 4-5 eV), Astro-H could not only measure the widths of emission lines, thereby constraining the turbulent amplitude, but also probe the line shapes. Somewhat surprisingly, very few studies have been conducted to extract velocity information from the shape of emission lines in a realistic observational context. Existing work has focused on a Gaussian approximation to the line \citep{rebusco08}, and interpreting the radial variation of the line width and line center \citep{zhuravleva12}. For instance, \citet{zhuravleva12} show how the radial variation of line width is related to the structure function of the velocity field, and also how the 3D velocity field can be recovered from the projected velocity field. However, such inferences generally require angular resolution comparable to characteristic scale lengths of the velocity field, and are likely feasible only for one or two very nearby clusters such as Perseus (though such studies do represent a very exciting possibility for ATHENA). At the same time, it has long been apparent that turbulence in clusters leads to significant non-Gaussianity in the line shape---indeed, such features were clearly visible in the early simulations of \cite{sunyaev03}. \citet{Inogamov2003} presented a deep and insightful discussion of the origin of line shapes, albeit in an idealized Kolmogorov cascade model for cluster turbulence (for instance, they do not consider the effect of gas sloshing and cold fronts). Heuristically, one can consider non-Gaussianity to arise when the size of the emitting region (heavily weighted toward the center in clusters) is not much larger than the characteristic outer scale of the velocity field. The central limit theorem does not hold as the number of independent emitters is small (and/or, in large-scale bulk flows, the motions of different emitters are highly correlated). In a series of papers, Lazarian and his collaborators considered the relationship between the turbulent spectrum and the spectral line shape in the ISM \citep{Lazarian2000, Lazarian2006,Chepurnov2006}. However, since they focused on the supersonic and compressible turbulence seen in the ISM---a regime where thermal broadening is negligible and density fluctuations are considerable---the methods they employ are not readily suitable for the mildly subsonic turbulence expected in the ICM. We therefore aim to study how velocity information can be recovered from the emission line profile in the ICM context, in a realistic observational setting. 
In particular, we advance the notion that the profile can be separated into different modes, which have a meaningful physical interpretation. As we will discuss in more detail below, many processes in the ICM could give rise to velocity fields composed of distinct components. For instance, the sharp contact discontinuity in velocity in cold fronts will give rise to a bimodal velocity field where one component is significantly offset from another. Another interesting scenario arises if turbulence is not volume-filling (due, for instance, to anisotropic stirring by AGN jets). Then spectral lines of different widths (with and without turbulent broadening) will be superimposed on one another. When seen in the same field of view (FOV), these components correspond to different modes in the line profile, and decomposing the velocity field into dominant modes can yield valuable quantitative information (for instance, the volume filling factor). For the upcoming Astro-H mission, mode separation in the spectrum is necessary and important since the poor angular resolution makes it hard to spatially resolve different components---indeed, the high spectral resolution of Astro-H is our best tool for inferring the complex structure of the velocity field. We use standard mixture modeling techniques and Fisher matrix/Markov chain Monte Carlo error analysis to quantify how well we could separate and constrain different components from a single spectrum, and then establish what we can learn about the underlying velocity field from such a component separation. \begin{table} \caption{Specifications of the Soft X-ray Spectroscopy System onboard the Astro-H telescope.} \label{tbl:specs} \begin{center} \begin{tabular}{l c} \hline \hline Effective area at 6 keV (${\rm cm^2}$) & 225\\ Energy range (keV) & 0.3-12.0 \\ Angular resolution in half power diameter (arcmin) & 1.3\\ Field of view (${\rm arcmin}^2)$ & $3.05\times 3.05$\\ Energy resolution in FWHM (eV) & 5\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Photon counts $N_p$ in the He-like iron line and physical length corresponding to angular resolution in HPD (1.3 arcmin) for a few nearby galaxy clusters. The photons are accumulated from one FOV through the cluster center over $10^6$ seconds. } \label{tbl:clusters} \begin{center} \begin{tabular}{l c c c } \hline \hline Cluster Name& Redshift & $d_{1.3}$ (kpc) & $N_p (\times 10^4~{\rm phot})$ \\ \hline PERSEUS & 0.0183 & 28.81 & 5.8\\ PKS0745 & 0.1028 & 146.76 & 3.8\\ A0478 & 0.0900 & 130.36 & 3.7\\ A2029 & 0.0767 & 112.79 & 3.4\\ A0085 & 0.0556 & 83.78 & 3.1\\ A1795 & 0.0616 & 92.17 & 2.6\\ A0496 & 0.0328 & 50.76 & 1.9\\ A3571 & 0.0397 & 60.94 & 1.7\\ A3112 & 0.0750 & 110.51 & 1.7\\ A2142 & 0.0899 & 130.23 & 1.6\\ 2A0335 & 0.0349 & 53.87 & 1.3\\ HYDRA-A & 0.0538 & 81.23 & 1.3\\ A1651 & 0.0860 & 125.13 & 1.1\\ A3526 & 0.0103 & 16.37 & 0.8\\ \hline \hline \end{tabular} \end{center} \end{table} Before proceeding to the main discussion, we first list a few specifications of the Astro-H mission, on which our discussions are based. Our study mainly takes advantage of the high spectral resolution of the Soft X-ray Spectroscopy System (SXS) onboard the Astro-H telescope. Its properties, taken from the ``Astro-H Quick Reference''\footnote{http://astro-h.isas.jaxa.jp/doc/ahqr.pdf}, are given in Table \ref{tbl:specs}. The energy resolution is 5 eV in FWHM\footnote{It has been shown to be even lower--4 eV--in laboratory tests \citep{porter10}.}, corresponding to a standard deviation of 2.12 eV. 
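As a quick cross-check of the broadening scales quoted here and compared in the next sentences, the following minimal Python sketch evaluates the instrumental, thermal and turbulent widths; the 6.7 keV He-like Fe line, a 5 keV cluster, and a $\gamma=5/3$, $\mu=0.6$ sound speed are assumptions made for illustration, and the snippet is not part of our analysis pipeline.
\begin{verbatim}
import numpy as np

# Assumptions: He-like Fe line at 6.7 keV, kT = 5 keV, A_Fe ~ 55.85
E0_eV, kT_eV, A_Fe, mp_c2_eV = 6.7e3, 5.0e3, 55.85, 938.272e6

# Instrumental: 5 eV FWHM -> Gaussian standard deviation (~2.12 eV)
sigma_instr = 5.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Thermal Doppler broadening: (E0/c) * sqrt(kT / (A m_p))  (~2.07 eV)
sigma_ther = E0_eV * np.sqrt(kT_eV / (A_Fe * mp_c2_eV))

# Turbulent broadening for isotropic Mach ~ 0.2 motions; line-of-sight
# dispersion = v_3D/sqrt(3), sound speed assuming gamma = 5/3, mu = 0.6 (~3 eV)
cs_over_c = np.sqrt(5.0 / 3.0 * kT_eV / (0.6 * mp_c2_eV))
sigma_turb = E0_eV * 0.2 * cs_over_c / np.sqrt(3.0)

print(sigma_instr, sigma_ther, sigma_turb)
\end{verbatim}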
For comparison, the thermal broadening of the Fe 6.7 keV line is 2.07 eV for a 5 keV cluster, while broadening by isotropic Mach number ${\cal M} \sim 0.2$ motions is 2.9 eV. Thus, for the highly subsonic motions in the core with Mach numbers ${\cal M} \sim 0.1-0.3$ generally seen in cosmological simulations, the instrumental, thermal and turbulent contributions to line broadening are all roughly comparable. In contrast to the impressive energy resolution, the angular resolution of Astro-H is poor: 1.3 arcmin in half power diameter (HPD). Therefore, different velocity components are likely to show up in the same spectrum. Based on these specifications, Table \ref{tbl:clusters} shows the expected photon counts in the He-like iron line at 6.7 keV for a few of the brightest nearby clusters ($z \leq 0.1$). The photons are accumulated in one FOV through the cluster center over $10^6$ seconds; the several $\times 10^{4}$ photons collected should allow good statistical separation of mixtures if present. The density distributions and cluster temperatures are taken from \citet{Chen2007}, and metallicity is assumed to be 0.3 ${\rm Z_{\odot}}$. Also shown are the physical lengths corresponding to the angular resolution in HPD, which are $\sim$100 kpc, comparable to the core size. We therefore do not expect Astro-H to spatially resolve many structures. The remainder of the paper is organized as follows. In \S~\ref{sec:motivations}, we discuss possible scenarios that could give rise to multi-component spectra, further motivating the current study. In \S~\ref{sec:methodology}, we develop the methodology to be used in this paper. In \S~\ref{sec:constraints}, we discuss how accurately different components could be recovered in idealized toy models, to build our understanding of the applicability and capabilities of the method. In \S~\ref{sec:application}, we apply our statistical method to realistic simulations of galaxy clusters, where we have full knowledge of the underlying velocity field, and see what information we can recover. In \S~\ref{sec:conclusions}, we conclude by summarizing the main results. \section{Motivation} \label{sec:motivations} In this section, we motivate the current study by giving examples of very common processes operating in the ICM which could give rise to multi-component velocity fields: bulk motions from mergers and sloshing, and AGN feedback. \subsection{Bulk Motions} \label{subsec:example} \begin{figure} \begin{tabular}{c} \rotatebox{-0}{\resizebox{120mm}{!}{\includegraphics{f1.ps}}} \end{tabular} \caption{Density map on a slice through the cluster center. The dashed line shows the direction along which the profiles in Fig. \ref{fig:profiles} are computed, while the perpendicular dotted line--chosen to maximize line of sight velocity shear--indicates the observation direction for the solid red velocity PDF shown in Fig. \ref{fig:spectrum}. The dotted red line shows an alternate viewing direction with much less velocity shear; its velocity PDF is given by the thin red line in Fig. \ref{fig:spectrum}.} \label{fig:density} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{-0}{\resizebox{90mm}{!}{\includegraphics{f2.ps}}} \end{tabular} \caption{Velocity field on the same slice as in Fig. \ref{fig:density}, overlaid with density (solid blue curves) and temperature (dashed red curves) contours. 
The large purple arrow indicates the location of the cold front.} \label{fig:velocity} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{90mm}{!}{\includegraphics{f3.eps}}} \end{tabular} \caption{Density (dashed curve), temperature (dotted curve) and line-of-sight velocity (solid curve in the bottom panel) profiles along the direction perpendicular to the cold front, as indicated in Fig. \ref{fig:density} with a dashed line. Here, the position of the cold front is given by the vertical (cyan) line at 85 kpc. } \label{fig:profiles} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{60mm}{!}{\includegraphics{f4.eps}}} \end{tabular} \caption{The thick solid (red) curve is the normalized emission-weighted velocity PDF from a box centered on the dotted line in Fig. \ref{fig:density}. The box is 100 kpc long, 100 kpc wide and 1 Mpc deep. The thick dashed green curve is the corresponding profile of the He-like iron line at 6.7 keV (see top axis for energy scale), including the effects of thermal broadening, while the thick dot-dashed purple curve also includes instrumental broadening. The thin lines show the same curves for the line of sight given by the dotted red line in Fig. \ref{fig:density}. } \label{fig:spectrum} \end{figure} Thus far, most constraints on gas bulk motions come from observations of sharp density gradients in the plane of the sky. Classic bow shocks have been seen in a handful of violent mergers. Much more common are ``cold fronts'' \citep{markevitch07}: sharp contact discontinuities between gas phases of different entropies, discovered in the last decade thanks to the high angular resolution of the {\it Chandra} X-ray telescope. They are seen both in mergers (where the cold gas arises from the surviving cores of infalling subclusters) and relaxed cool core clusters (where they are produced by the displacement and subsequent sloshing of the low-entropy central gas in the gravitational potential well of the cluster). They are remarkably ubiquitous, even in relaxed cool core clusters with no signs of recent mergers, which often exhibit several such cold fronts at different radii from the density peak. For instance, they are seen in more than half of all cool core clusters; given projection effects, most if not all cool core clusters should exhibit such features. Evidently, coherent gas bulk motions are extremely common if not universal\footnote{Indeed, we show here the very first cluster we simulated from random initial conditions, which already exhibited cold-front-like features.}, and their effects must be taken into account when interpreting Astro-H spectra. Generically, we would expect bulk motions to offset the centroids of emitting regions with significant line-of-sight relative velocity. Cold fronts have been used to probe the amplitude and direction of gas motions in the plane of the sky; combining this with line-of-sight information from the spectrum could prove very powerful indeed. Our example is taken from an adiabatic numerical simulation from cosmological initial conditions with the adaptive mesh code Enzo \citep{Bryan1999,Norman1999,O'Shea2004}. We assume a $\Lambda$CDM cosmology with cosmological parameters consistent with the seventh-year {\it WMAP} results \citep{komatsu11}: $\Omega_m=0.274$, $\Omega_{\Lambda}=0.726$, $\Omega_b=0.045$, $h=0.705$, $\sigma_8=0.810$, $n_s=0.96$. The simulation has a box size of 64 Mpc, and a root grid of $128^3$. 
We picked the most massive cluster ($M \sim 2\times 10^{14}~\rm M_{\odot}$) from the fixed-grid initial run, and re-simulated it with much higher resolution. The highest spatial resolution is 11 kpc in the cluster center. The cluster has a disturbed morphology, and shows a ``cold front''-like feature in the core. Note that our adiabatic simulation necessarily produces a non-cool-core (NCC) cluster. The density and velocity fields on a slice through the cluster center are shown in Fig. \ref{fig:density} and \ref{fig:velocity}, respectively. At the position indicated by the large arrow in Fig. \ref{fig:velocity}, the density, temperature and velocity all change rapidly. This is clearly shown in Fig. \ref{fig:profiles}, which shows density, temperature and velocity profiles along a line perpendicular to the front (indicated in Fig. \ref{fig:density} with a dashed line). At $\sim 85$ kpc from the cluster center, the density decreases while the temperature increases rapidly, as expected in a cold front (for a shock, the sense of the temperature jump would be the opposite). Furthermore, the pressure is continuous across the front, while the tangential velocity changes direction discontinuously across the front---both well-known features of cold fronts \citep{markevitch07}. For an observation direction along the white dotted line in Fig. \ref{fig:density}, Fig. \ref{fig:spectrum} shows the emission-weighted probability distribution function (PDF) of the line-of-sight velocity. Motivated by Table \ref{tbl:clusters}, we extract the emission-weighted PDF from a volume with an area of $100\times100~{\rm kpc}^2$ and a depth of 1 Mpc (this last number represents the line-of-sight depth and is chosen for convenience; our results are insensitive to it as long as it is much larger than the core size, where most of the photons come from). The PDF clearly shows two peaks, centered at $-400 \, {\rm km \, s^{-1}}$ and $250 \, {\rm km \, s^{-1}}$, corresponding to the gas on different sides of the cold front. After convolution with thermal broadening, the dashed line shows the profile of the He-like iron line at 6.7 keV, while the dot-dashed line also includes the instrumental broadening of Astro-H. Both clearly show the double-peaked feature. The above case is a somewhat idealized ``best case'' scenario, where we have assumed the viewing angle to be along the direction of maximum line of sight velocity shear, thus maximizing the separation between the two peaks in the velocity PDF. For a more general viewing angle, the separation would not be so clear, as we show with the thin curves in Fig. \ref{fig:spectrum}. This is the PDF along the red dotted curve in Fig. \ref{fig:density}, which has very small line-of-sight bulk flow. There is only one large peak, but with a long tail. From Fig. \ref{fig:velocity}, we see this long tail comes from the gas surrounding the cold clump, which has shear velocities with large components along the LOS. Therefore the PDF can also be separated into two components -- a narrow component emitted by the cold clump and a broad component from the ambient gas. The offset between the components is a measure of the LOS contact discontinuity in the bulk velocity, while smaller scale shear contributes to the width. Such a decomposition of the line-of-sight velocity, combined with spatially resolved temperature and density information in the plane of the sky from X-ray imaging, could shed more light on the 3D velocity field as well as on physical properties such as the gas viscosity. 
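A schematic Python sketch of how such an emission-weighted line-of-sight velocity PDF can be extracted from gridded simulation data follows; the array names and the simple $n_e^2\sqrt{T}$ emissivity proxy are illustrative assumptions, not the extraction code actually used for Fig. \ref{fig:spectrum}.
\begin{verbatim}
import numpy as np

def emission_weighted_velocity_pdf(ne, T_keV, v_los, bins=100):
    """Emission-weighted PDF of the line-of-sight velocity in a sub-volume.

    ne, T_keV, v_los: 3D arrays of electron density, temperature and
    line-of-sight velocity (e.g. a 100 kpc x 100 kpc x 1 Mpc box).
    The weight ~ ne^2 sqrt(T) is a rough free-free emissivity proxy; a
    line-emissivity table would be needed for proper Fe-line weighting.
    """
    w = ne**2 * np.sqrt(T_keV)
    hist, edges = np.histogram(v_los.ravel(), bins=bins,
                               weights=w.ravel(), density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist

# Toy usage with random fields standing in for simulation output
rng = np.random.default_rng(0)
shape = (32, 32, 320)
ne = rng.lognormal(-6.0, 0.3, size=shape)       # cm^-3 (placeholder)
T = rng.normal(5.0, 0.3, size=shape)            # keV   (placeholder)
v = rng.normal(0.0, 300.0, size=shape)          # km/s  (placeholder)
centers, pdf = emission_weighted_velocity_pdf(ne, T, v)
\end{verbatim}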
\subsection{Volume-filling Factor of Turbulence} \label{subsec:others} The previous section highlighted a situation where strong shear or bulk motion gives rise to different components with offset centroids (``separation driven'' case). Another regime where different components could arise is when the two components have markedly different widths (``width driven'' case). We saw an example of this at the end of the previous section: a narrow component due to a cold, kinematically quiescent clump, and a broader component due to the sheared surrounding ambient gas. More generally, different widths arise when turbulence varies spatially. The situation in which turbulence is only partially volume-filling is particularly interesting. Many of the physical effects of turbulence depend not only on its energy density, but also on its volume filling fraction $f_{\rm V}$, which is often implicitly assumed to be unity. For instance, for turbulence to stave off catastrophic cooling, it must be volume-filling. This is by no means assured: analytic models \citep{subramanian06} of turbulence generation during minor mergers predict turbulence that fills only a small volume fraction, $f_{\rm V}\sim 0.2-0.3$, but is area-filling (i.e., the projection of turbulent wakes on the sky covers a large fraction of the cluster area, $f_{\rm S} \sim O(1)$). Interestingly, cosmological AMR simulations which use vorticity as a diagnostic for turbulence find good agreement; $f_{\rm V} ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 0.3$ and $f_{\rm S} \sim O(1)$ for all runs \citep{iapichino08}. In our own simulations of stirring by galaxies \citep{ruszkowski11}, we have seen both high and low values of $f_{\rm V}$, depending on modeling assumptions. If $g$ modes are excited by orbiting galaxies (which requires the driving orbital frequency $\omega$ to be less than the Brunt-V\"ais\"al\"a \, frequency $\omega_{\rm BV}$--a requirement which depends on both the gravitational potential and temperature/entropy profile of the gas), then volume-filling turbulence is excited; otherwise turbulence excited by dynamical friction is potentially confined to thin ``streaks'' behind galaxies (see also \citet{balbus90,kim07}). If turbulence is patchy, we might expect spectral lines to have a narrow thermal Doppler core (produced in quiescent regions), with turbulently broadened tails. In the context of our mixture model, measuring the fraction of photons in the second component might allow a quantitative measure of $f_{\rm V}$. Yet another context in which strongly spatially varying or partially volume-filling turbulence could result in multiple components in the velocity PDF is AGN feedback, which is ubiquitous in cool core clusters (\citet{Birzan2004}; for a recent review, see \citet{mcnamara12}). AGN jets are launched over a narrow solid angle and are fundamentally anisotropic; thus, their ability to sustain isotropic heating in the core has often been questioned. Isotropization of the injected energy could arise from weak shocks and sound waves \citep{fabian03}, frequent re-orientation of jets by randomly oriented accretion disks \citep{king07}, jet precession \citep{dunn06,gaspari11}, and cavities being blown about by cluster weather \citep{bruggen05,heinz06,morsony10}. As above, AGN could also excite g-mode oscillations; an intriguing example is a cross-like structure on 100 kpc scales in the ICM surrounding 3C 401 \citep{reynolds05}. 
A measurement of $f_{\rm V}$ could thus constrain the efficacy of such mechanisms in isotropizing AGN energy deposition throughout the core. The expansion of AGN-driven cavities can also introduce high bulk velocities and shear (corresponding more to the ``separation-driven'' regime); this is potentially directly measurable with ATHENA's excellent angular and spectral resolution \citep{Heinz2010a}, but would require indirect methods such as mixture modeling given the poor angular resolution of Astro-H. In \S \ref{sec:application}, we analyze an AGN feedback simulation kindly provided to us by M. Br\"{u}ggen. \subsection{Physical Significance of Mixture Model Parameters} \label{section:physical_significance} In \S \ref{sec:methodology}, we lay out our methodology for recovering mixture model parameters, and in subsequent sections we describe how accurately these can be constrained. These parameters are the mixture weights $f_{i}$, and means and variances $\mu_{i}, \sigma_{i}^{2}$, of the fitted Gaussians. Given the results of this section, we can tentatively ascribe physical significance to these parameters. The mixture weights $f_{i}$ represent the emission-weighted fraction of the volume in each distinguishable velocity component. The Gaussian means $\mu_{i}$ represent the bulk velocity of a given component. In particular, the difference between the means is a measure of the LOS shear between these components (e.g., as arises at a cold front). Note that this shear due to bulk motions can be considerably larger than the centroid shift due to variance in the mean, induced by turbulent motion with a finite coherence length. The latter is of order $\sigma_{i}/\sqrt{N}$, where $N\sim L_{\rm emit}/l_{\rm v}$ is the average number of eddies pierced by the line of sight, and $L_{\rm emit},l_{\rm v}$ are the size of the emitting region and the coherence length of the velocity field, respectively \citep{rebusco08,zhuravleva12}. The widths $\sigma_{i}$ represent turbulent broadening or shear due to the small-scale velocity field. \section{Methodology} \label{sec:methodology} We have argued that the X-ray spectrum from galaxy clusters should have multiple distinct components. Uncovering these components is the domain of {\it mixture modeling}, a mature field of statistics with a large body of literature. We will specialize to the case of Gaussian mixture modeling, in which Gaussians are used as the set of basis functions for the different components. This is an obvious choice, since thermal and instrumental broadening are both Gaussian, and turbulent broadening can be well approximated with a Gaussian when the injection scale is much smaller than the size of the emitting regions (\citet{Inogamov2003}; i.e., once coherent bulk motions have been separated out by classification into different mixtures, the remaining small-scale velocity field is well approximated by a Gaussian). It is also by far the best studied case. Mixture modeling has been applied to many problems in astrophysics, such as detecting bimodality in globular cluster metallicities \citep{ashman94, muratov10}, linear regression \citep{kelly07}, background-source separation \citep{guglielmetti09}, and detecting variability in time-series \citep{shin09}, though to our knowledge it has not been applied to analyzing spectra. It should be noted that the specialization to Gaussian mixtures is not necessarily restrictive; for instance, Gaussian mixtures have been used to model quasar luminosity functions \citep{kelly08}. 
For us, the fact that Gaussians are a natural basis function allows us to model the spectra compactly with a small number of mixtures, and assign physical meaning to these different components. Consider a model in which the observations $x_{1},\ldots,x_{n}$ are distributed as a mixture of $k$ Gaussians: \begin{equation} f(x|\theta) = \sum^{k}_{j=1} \omega_{j} f_{j}(x|\mu_{j},\sigma^{2}_{j}), \label{eqn:mixture} \end{equation} where $f_{j}(x|\mu_{j},\sigma^{2}_{j})$ are normal densities with unknown means $\mu_{j}$ and variances $\sigma_{j}^{2}$, and $\omega_{j}$ are the mixture weights. The parameters which must be estimated for each mixture are therefore $\theta_{j}=(\omega_{j},\mu_{j},\sigma_{j}^{2})$, and the function $f$ can be viewed as the probability density of drawing a data point with value $x$ given the model parameters $\theta$. Parameter estimation in this case suffers from the well-known {\it missing data problem}, in the sense that the information on which distribution $j$ a data point $x_{i}$ belongs to has been lost. In addition, the number of mixtures $k$ may not be known a priori\footnote{In this case, the optimal number of mixtures can also be estimated from the data, via simple criteria such as the Bayesian Information Criterion (see equation \ref{eqn:bic}), or more sophisticated techniques in so-called Infinite Gaussian Mixture Models. In this paper, we only investigate separating the two most dominant components of the spectrum, which have the highest signal-to-noise. The data is generally not of sufficient quality to allow solving for more than two mixtures (strong parameter degeneracies develop). Physical interpretation is also most straightforward for the two dominant mixtures.}. Standard techniques for overcoming this are a variant of maximum likelihood techniques known as Expectation Maximization (EM; \citet{dempster77}), or Maximum a Posteriori estimation (MAP; see references in the Appendix), which generally involves Markov Chain Monte Carlo (MCMC) sampling from the posterior. They are both two-step iterative procedures in which parameter estimation and data point membership are considered separately. Since they do not require binning of the data, all information is preserved. We have experimented extensively with both. However, due to the large number of data points ($\sim 10^{4}$ photons) in this application, we have found that the much simpler and faster procedure of fitting to the binned data yields virtually identical results. In the Appendix, we describe our implementation of Gibbs sampling MAP and how it compares with the much simpler method we use in this paper. Here, we simply bin the data and adopt as our log-likelihood the C-statistic \citep{Cash1979}: \begin{eqnarray} -2{\rm ln} {\cal L}(p|d) = -2 \sum_{i=1}^{N_{bin}}\left(n_i \,{\rm ln}\, e_i - e_i - {\rm ln}\, n_i !\right) \label{eqn:cashc} \end{eqnarray} where ${\cal L}(p|d)$ is the likelihood of the parameters $p$ given the data $d$, $N_{bin}$ is the number of bins, $n_i$ and $e_i$ are the observed and expected number of counts in the $i$-th bin; $n_i,e_i$ are obviously functions of the data $d$ and the unknown model parameters $p$ respectively. It assumes that the number of data points in each bin is Poisson distributed (indeed, it is simply the log of the Poisson likelihood). As we describe in the Appendix, maximizing this statistic produces identical results to more rigorous mixture modeling techniques for a large number of data points, when the bin size is sufficiently small. 
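A minimal Python sketch of this binned-likelihood evaluation for the two-Gaussian mixture used below is given here; the binning, parameter ordering and normalization conventions are our own illustrative choices (not our actual analysis code), and the parameter-independent ${\rm ln}\, n_i!$ term is dropped.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def cash_stat(params, counts, edges, n_tot):
    """Cash C-statistic (-2 ln L, up to the parameter-independent
    ln n_i! term) for a two-Gaussian mixture fit to binned photon
    energies.  params = (f1, mu1, s, sigma1, sigma2) as in the text;
    counts are the observed counts per bin, edges the bin edges and
    n_tot the total photon number setting the model normalization."""
    f1, mu1, s, sig1, sig2 = params
    # Expected counts per bin: integrate each Gaussian over each bin
    p_bin = (f1 * np.diff(norm.cdf(edges, mu1, sig1))
             + (1.0 - f1) * np.diff(norm.cdf(edges, mu1 + s, sig2)))
    e = n_tot * np.clip(p_bin, 1e-30, None)   # guard against empty model bins
    return -2.0 * np.sum(counts * np.log(e) - e)

# Usage sketch: bin ~1e4 mock photon energies, then minimize cash_stat
# (e.g. with scipy.optimize.minimize) or feed it to an MCMC sampler.
\end{verbatim}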
Naively, for a large number of data points one might expect $\chi^{2}$ minimization to work equally well. However, in fitting distributions we are sensitive to the wings of the Gaussian basis functions, where the expected number of counts in a bin is small and the data is therefore Poisson rather than Gaussian distributed. With the likelihood specified in Equation \ref{eqn:cashc}, we sample from the posterior using Metropolis-Hastings MCMC, adapted from CosmoMC \citep{Lewis2002}. Each run draws $\sim 10^5$ samples. The first 30\% are regarded as burn-in and are discarded from the subsequent analysis. For all the runs, we visually examine the trace plots to check for convergence. The MCMC analysis yields the best-fit MAP parameters as well as the full posterior distribution of parameters, which allows us to estimate confidence intervals. In all cases, we use non-informative (uniform) priors; the range of possibilities for the turbulent velocity field is sufficiently large that only very weak priors are justifiable. The only obvious prior we use is $0 < f_{i} < 1$. Note that there are two identical modes in the likelihood, since it is invariant under permutation of the mixture indices--the well-known identifiability or ``label-switching'' problem. Generally, in a $k$-component mixture, there are $k!$ identical modes in the likelihood. During the course of a Monte-Carlo simulation, instead of singling out a single mode of the posterior, the simulation may visit portions of multiple modes, resulting in a sample mean which in fact lies in a very low probability region, as well as an unrealistic probability distribution. We enforce identifiability in a very simple manner by demanding $\mu_{1} < \mu_{2}$ and hence $s \equiv \mu_{2}-\mu_{1} >0$. While this is known to sometimes be problematic \citep{celeux00,jasra05}, in practice it suffices for our simple models. For a large number of data points, the distribution of model parameters becomes asymptotically Gaussian, in which case the Fisher matrix can be used to quickly estimate joint parameter uncertainties (e.g., \citet{Tegmark1997}). As a consistency check, we therefore also calculate the Fisher matrix whenever the input model is simple enough to be expressed analytically. It is defined as: \begin{eqnarray} F_{ij}=-\left<\frac{\partial^{2}\ln{\cal{L}}}{\partial p_i \partial p_j}\right>, \label{eqn:fisher} \end{eqnarray} where $p_i$ is the $i$-th model parameter. The best attainable covariance matrix is simply the inverse of the Fisher matrix, \begin{eqnarray} C_{ij}=(F^{-1})_{ij}, \end{eqnarray} and the marginalized error on an individual parameter $p_i$ is $\sqrt{(F^{-1})_{ii}}$. Differences between the MCMC and the Fisher matrix error bars generally indicate the non-Gaussianity of the likelihood surface (or equivalently, that the log-likelihood cannot be truncated at second order in a Taylor expansion). \section{Idealized Models} \label{sec:constraints} \subsection{Two-component Gaussian mixture models: General Results} \label{subsec:general} \begin{figure} \begin{tabular}{c} \rotatebox{-0}{\resizebox{80mm}{!}{\includegraphics{f5.eps}}} \end{tabular} \caption{Constraints on the five model parameters with $N_{d}=10^{4}$ data points, as a function of $s/(\sigma_{1}+\sigma_{2})$, where $s=\mu_{2}-\mu_{1}$ is the separation between the means. The curves and points are the results obtained using the Fisher matrix and MCMC methods, respectively. 
The dashed lines and circles [green], dotted lines and upward triangles [blue], solid lines [red], dot-dashed lines and downward triangles [purple] and dot-dot-dashed lines and diamonds [brown] correspond to $\sigma_{2}=(0.6,0.8,1,1.2,1.4) \sigma_{1}$, respectively.} \label{fig:separation} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{0}{\resizebox{85mm}{!}{\includegraphics{f6.ps}}} \end{tabular} \caption{Error contours for an ``SD'' case ($\sigma_1=1$, $\sigma_2=0.8$, $s = 2 (\sigma_1+\sigma_2)$): contours depict the 68\%, 95\% confidence levels for the marginalized distribution; the shading shows the mean likelihood of the samples; the solid and dashed curves in the 1-D plots are the fully marginalized posterior and relative mean likelihood of the samples, respectively. } \label{fig:corr_sd} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{0}{\resizebox{85mm}{!}{\includegraphics{f7.ps}}} \end{tabular} \caption{Same as Fig. \ref{fig:corr_sd} but for a ``WD'' case: $\sigma_1=1$, $\sigma_2=0.8$, $s=0.25 (\sigma_1+\sigma_2)$. Note the increased parameter degeneracies.} \label{fig:corr_wd} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{60mm}{!}{\includegraphics{f8.eps}}} \end{tabular} \caption{Constraints on the model parameters for a two-Gaussian mixture as a function of the fractional difference in width, when $s=0.2$ and $\sigma_{1}=1$. As in Fig. \ref{fig:separation}, the lines and points show the results obtained with the Fisher matrix and MCMC technique, respectively. The solid lines and squares [red], dashed lines and circles [green], dotted lines and upward triangles [blue], dot-dashed lines and downward triangles [purple] and dot-dot-dashed lines and diamonds [brown] are the constraints on $f_1$, $\mu_1$, $s$, $\sigma_1$ and $\sigma_2$, respectively.} \label{fig:width} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{0}{\resizebox{80mm}{!}{\includegraphics{f9.eps}}} \end{tabular} \caption{Constraints on the five free parameters for different values of $N_d$ and $f_1$. The solid curves and squares are the results for the fiducial case: $N_d=10^4$, $\sigma_1=1$, $\sigma_2=1$ and $f_1=0.4$; results for $N_d=10^5$, $f_1=0.3$, and $f_1=0.5$ are shown with dashed curves and circles, dotted curves and downward triangles, and dot-dashed curves and upward triangles, respectively. } \label{fig:frnp} \end{figure} Before focusing on the specific application to galaxy clusters, we first consider a more general problem: how well two Gaussian profiles can be separated. As mentioned previously, Astro-H data quality is generally only sufficient to allow solving for the two most dominant mixtures. A two-component mixture is likely the most common scenario, with the most straightforward physical interpretation. These results serve to guide and motivate our later discussions. Consider therefore the profile: \begin{eqnarray} p(x)=\sum_{i=1,2}f_i G(x-\mu_i,\sigma_i), \label{eqn:profile} \end{eqnarray} where $f_i$ is the fraction of each component, while $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th Gaussian. Given the constraint $\sum f_i =1$, there are only five model parameters, which we choose to be: $f_1$, $\mu_1$, $s(\equiv \mu_2 - \mu_1)$, $\sigma_1$ and $\sigma_2$. Note that $\mu_2$ has been replaced by $s$ (the separation between the two Gaussians), since the latter, as we will see later, usually carries a clearer physical meaning. 
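The Fisher matrix of equation (\ref{eqn:fisher}) for this five-parameter profile can also be evaluated numerically. The Python sketch below (binned Poisson form, central-difference derivatives, and SD-regime parameter values similar to Fig. \ref{fig:corr_sd}) illustrates one possible implementation; it is not the code used for our forecasts.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def expected_counts(params, edges, n_tot):
    """Expected counts per bin for the two-Gaussian mixture profile."""
    f1, mu1, s, sig1, sig2 = params
    p = (f1 * np.diff(norm.cdf(edges, mu1, sig1))
         + (1.0 - f1) * np.diff(norm.cdf(edges, mu1 + s, sig2)))
    return n_tot * p

def fisher_matrix(params, edges, n_tot, eps=1e-4):
    """F_ij = sum_b (de_b/dp_i)(de_b/dp_j)/e_b for independent Poisson
    bins, with central-difference derivatives."""
    params = np.asarray(params, dtype=float)
    e0 = expected_counts(params, edges, n_tot)
    grads = []
    for i in range(len(params)):
        dp = np.zeros_like(params)
        dp[i] = eps * max(abs(params[i]), 1.0)
        de = (expected_counts(params + dp, edges, n_tot)
              - expected_counts(params - dp, edges, n_tot)) / (2.0 * dp[i])
        grads.append(de)
    good = e0 > 0                        # ignore empty far-tail bins
    G = np.array(grads)[:, good]
    return G @ (G / e0[good]).T

# SD-like example: f1 = 0.4, mu1 = 0, s = 2*(sigma1 + sigma2), N_d = 1e4
F = fisher_matrix([0.4, 0.0, 3.6, 1.0, 0.8],
                  np.linspace(-6.0, 9.0, 301), 1.0e4)
marginalized_errors = np.sqrt(np.diag(np.linalg.inv(F)))  # ~per-cent level
\end{verbatim}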
The constraints, expressed in terms of standard deviations $\Delta$ throughout this paper, are forecasted with both the MCMC and Fisher matrix methods (for the MCMC runs, they correspond to $68\%, 95\%$ confidence intervals for $\Delta,2 \Delta$ respectively, even if the parameter distribution is non-Gaussian). For each model we create a Monte-Carlo realization with $N_{d}$ data points and forecast constraints for this data set. Motivated by Table \ref{tbl:clusters}, we assume $N_d=10^4$. In general, the standard deviation of the model parameters $\Delta p \propto 1/\sqrt{N_{\rm d}}$, though there are some subtleties---see further discussions below. The constraints also depend on how much the two components differ; if they are difficult to distinguish, mixture modeling will fail. For Gaussian components, they may differ in fraction $f_{i}$, mean $\mu_{i}$ or width $\sigma_{i}$. Here, we shall mostly consider a situation in which the mixing fractions are comparable, $f_{1}=0.4, f_{2}=0.6$, and focus on how mixture separation can be driven by differences in mean (``separation dominated'', or SD), or width (``width driven'', or WD). In practice, we care mostly about the case when the mixing fractions are roughly comparable, since then the different components are of comparable importance in reconstructing the (emission-weighted) velocity field. As a practical matter, it also becomes increasingly difficult to perform mixture modeling when one component dominates (though see \S \ref{subsec:prospects}). The results are shown in Figs. \ref{fig:separation}--\ref{fig:frnp}. Fig. \ref{fig:separation} shows the constraints as a function of the separation $s=\mu_{2}-\mu_{1}$, normalized by the sum of the standard deviations: $s/(\sigma_{1}+\sigma_{2})$. Different line types and point types indicate different values of $\sigma_2= (0.6, 0.8, 1, 1.2, 1.4) \, \sigma_1$. Note that the constraints on all five parameters scale with $s/(\sigma_{1}+\sigma_{2})$ in the same way, with fractional errors that are all roughly comparable. We can identify three distinct regimes: \begin{itemize} \item{{\bf Separation Driven} For $s/(\sigma_{1}+\sigma_{2}) ~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 2$, the fractional errors converge to an asymptotic constant value, independent of $s/(\sigma_{1}+\sigma_{2})$. In this regime, the separation is so large that different components could be viewed as individually constrained without mixing from other components. Except for $\Delta(s)$ (which depends on $\Delta (\mu_{2}) \sim \sigma_2/\sqrt{f_2 N_d}$), this asymptotic convergence is also independent of $\sigma_{2}$ (i.e., the relative widths of the distributions don't matter when the separation is large). The asymptotic values for $\Delta(\mu_i)$, $\Delta(\sigma_i)/\sigma_i$ and $\Delta(f_i)$ are $\sigma_i/\sqrt{f_i N_d}$, $1/\sqrt{2 f_i N_d}$ and $\sqrt{{f_i(1-f_i)}/{N_d}}$, respectively; given our $N_{d}=10^{4}$, this corresponds to $\sim 1\%$ accuracy in parameter constraints. } \item{{\bf Hybrid} For $0.3 ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} s/(\sigma_{1}+\sigma_{2}) ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 2$, the separation is comparable to the sum of widths. The mixing between different components becomes severe and the quality of the parameter constraints decreases rapidly with decreasing $s$. Since constraints are increasingly driven by data points in the tails of the respective mixtures (which drive distinguishability), the effective number of data points $N_{\rm eff} < N_{d}$ falls. 
Strong parameter degeneracies also develop.} \item{{\bf Width Driven} When $s/(\sigma_{1}+\sigma_{2}) ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 0.3$, the separation between the distributions becomes negligible, and component separation is driven almost entirely by differences in width (note how parameter uncertainties blow up at low $s$ when $\sigma_{1}=\sigma_{2}$). The errors approach an asymptotic value determined by the effective number of data points in the tails of the mixtures, $N_{\rm eff}$. } \end{itemize} The results obtained with the Fisher matrix (lines) and MCMC (points) agree well with each other when the mixtures are easily distinguishable (when $s/(\sigma_{1}+\sigma_{2})$ is large or $\sigma_2/\sigma_1$ is reasonably far away from 1). Otherwise, discrepancies between these two methods are clear. These discrepancies are caused by the non-Gaussianity of the likelihood surfaces and the priors we placed in the MCMC runs. In this regime, one therefore cannot use the Fisher matrix approximation to the full error distribution. Figs. \ref{fig:corr_sd} and \ref{fig:corr_wd} show the marginalized likelihood distributions and error contours for two example runs. Fig. \ref{fig:corr_sd} is in the SD regime ($s = 2 (\sigma_1+\sigma_2)$). The likelihood distributions are very close to Gaussian, explaining the consistency between the Fisher matrix and MCMC results. The contours allow direct reading of correlations among parameters. The strongest correlation is between $\mu_1$ and $s$. As expected, they are negatively correlated, since $s=\mu_2-\mu_1$ while the $\mu_i$'s are uncorrelated. Fig. \ref{fig:corr_wd} is in the WD regime ($s = 0.25 (\sigma_1+\sigma_2)$). The likelihood distributions now deviate from Gaussians, and the correlations among parameters are much stronger. These are all consistent with the facts that the constraints are worse (due to larger parameter degeneracies) and the Fisher matrix results are no longer in agreement with MCMC results (due to non-Gaussianity of the likelihood surface). In Fig. \ref{fig:width}, we show how the constraints vary with differences in the Gaussian width in the ``width dominated'' regime. The width of the first component is fixed to $\sigma_{1}=1$, while the separation $s=0.2$ (note that $s/(\sigma_{1}+\sigma_{2}) \sim 0.2$ is typically the minimal value expected in cluster turbulence when there are no bulk flows, and is due solely to error in the mean; see \S\ref{section:physical_significance}). In general, the constraints improve as the differences in width increase, consistent with intuitive expectations. However, the errors on $s$ and $\sigma_2$ turn over around $\sigma_2 \sim 2\sigma_1$, beyond which they increase with $\sigma_2$\footnote{The turnover does not appear in the last panel of Fig. \ref{fig:separation}, because there the y-axis is $\Delta(\sigma_2)/\sigma_2$ rather than $\Delta(\sigma_2)$.}. This can be understood as follows: the errors on these quantities receive contributions from confusion error (which dominates at low $\sigma_{2}$) and scaling with $\sigma_{2}$ (since $\Delta(\mu_{i}) \sim \sigma_{i}/\sqrt{f_{i} N_{d}}$ and $\Delta \sigma_{i} \sim \sigma_{i}/\sqrt{2 f_{i} N_{d}}$; this dominates at high $\sigma_{2}$). On the other hand, the error on the mixing fraction $f_{1}$ scales strongly with the difference in widths, since it is driven solely by confusion error. However, for other parameters the scaling is significantly weaker. 
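The separation-driven scalings quoted above can be checked quickly with mock data. The Python sketch below uses scikit-learn's EM-based GaussianMixture as a convenient stand-in for the C-statistic/MCMC machinery of \S\ref{sec:methodology}, with parameter values chosen purely for illustration.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# Well-separated (SD) mock: f1 = 0.4, means (0, 10), widths (1, 0.8)
rng = np.random.default_rng(1)
Nd, f1, mus, sigs = 10000, 0.4, (0.0, 10.0), (1.0, 0.8)

fits = []
for _ in range(50):                        # 50 mock realizations
    n1 = rng.binomial(Nd, f1)
    x = np.concatenate([rng.normal(mus[0], sigs[0], n1),
                        rng.normal(mus[1], sigs[1], Nd - n1)])
    gm = GaussianMixture(n_components=2).fit(x[:, None])
    order = np.argsort(gm.means_.ravel())  # enforce mu_1 < mu_2
    fits.append(np.r_[gm.weights_[order], gm.means_.ravel()[order],
                      np.sqrt(gm.covariances_.ravel()[order])])

scatter = np.std(np.array(fits), axis=0)
# Compare, e.g., the scatter in mu_1 with sigma_1/sqrt(f1*Nd) ~ 0.016
# and the scatter in f_1 with sqrt(f1*(1-f1)/Nd) ~ 0.005.
\end{verbatim}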
For most cluster scenarios, the width-driven regime gives relative errors of $\Delta p/p \sim 10\%$, which is still small. Fig. \ref{fig:frnp} shows how the constraints vary with $N_d$ and $f_1$. The fiducial case (solid curves and squares) is computed assuming $N_d=10^4$, $\sigma_1=1$, $\sigma_2=0.8$ and $f_1=0.4$, exactly the same as the dotted curves and upward triangles in Fig. \ref{fig:separation}. As we increase $N_d$ by a factor of 10, most constraints are improved by a factor of $\sqrt{10}$, consistent with our expectation that $\Delta p_{i}/p_{i} \propto 1/\sqrt{N_{d}}$. This is despite the fact that only in the asymptotic SD case are relative errors quantitatively given by the Poisson limit $\Delta p_{i}/p_{i} \approx 1/\sqrt{f_{i} N_{d}}$. This is because when mixtures overlap and are in the hybrid/WD regimes, results are driven by the distribution tails, where the effective number of data points is still $N_{\rm eff} \propto N_{\rm d}$. For $s$ and $\mu_1$, however, the MCMC results in the WD regime show improvements better than a factor of $\sqrt{10}$. This might be due to reduced parameter degeneracies from the larger number of data points. Varying $f_1$ to 0.3 and 0.5 mildly impacts the results. As $f_1$ approaches 0.5, constraints improve for most parameters, except for $f_1$ and $\sigma_2$. Constraints on $f_1$ are almost unchanged while constraints on $\sigma_2$ are degraded, because fewer data points are available in the second component to constrain $\sigma_2$. Based on Figs. \ref{fig:separation} and \ref{fig:width}, we can already anticipate the constraints from Astro-H: when there is significant bulk flow and the modes have a large relative velocity $v_{\rm bulk} > \sigma_{\rm turb}$, parameters can be constrained to $\sim 1\%$ accuracy (SD regime); when the relative velocity is small but the widths are different by a reasonable (a few tens of percent) amount, the parameter estimates are accurate at the $\sim 10\%$ level (WD regime). Given the modeling uncertainties in the physical interpretation of these parameter estimates, such accuracy is more than adequate. Next, we will consider two specific examples of the SD and WD regimes, respectively. \subsection{Application to Clusters: the Single Line Scenario} \label{subsec:case1} \begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{100mm}{!}{\includegraphics{f10.eps}}} \end{tabular} \caption{The mock spectra (data points) and best-fit models for the WD (upper panel) and SD (lower panel) cases in the single line scenario. The red solid and dot-dashed lines are the input overall spectra and individual components, respectively. The green dashed lines are the recovered components. The recovery is remarkably accurate, even when (as in the top panel) the spectrum is visually indistinguishable from a single Gaussian component.} \label{fig:spec_single} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{100mm}{!}{\includegraphics{f11.eps}}} \end{tabular} \caption{Same as Fig. \ref{fig:spec_single} but for the entire iron line complex. The continuum has also been included, assuming a metallicity of 0.3 ${\rm Z_{\odot}}$.} \label{fig:spec_multiple} \end{figure} \begin{table*} \caption{Input parameters for the WD case and recovered best-fit parameters together with their 1-$\sigma$ errors. Also shown are the predicted uncertainties using the Fisher matrix technique. 
Note that $v_{\rm pec},v_{\rm rel}$ are line of sight quantities, while $v_{\rm tb,1}, v_{\rm tb,2}$ are 3D velocity dispersions (assuming $v_{\rm 3D}^{2}=3 v_{\rm 1D}^{2}$).} \label{tbl:para_wd} \begin{center} \begin{tabular}{c|l| c c c c c} \hline \hline &&$f_1$&$v_{pec}$(km/s)&$v_{rel}$(km/s)&$v_{tb,1}$(km/s)&$v_{tb,2}$(km/s)\\ &Input & 0.4 & 0 & 100 & 150 & 300 \\ \hline \multirow{2}{*}{Single line}&MCMC & $0.46_{-0.12}^{+0.18}$ & $11.34_{-11.32}^{+11.62}$ & $89.48_{-13.85}^{+28.06}$ & $164.89_{-33.97}^{+33.75}$ & $298.82_{-13.34}^{+11.43}$ \\ &Fisher Matrix & (0.12) & (12.99) & (14.06) & (37.03) & (10.75)\\ \hline \multirow{2}{*}{Multiple lines}&MCMC & $0.42_{-0.09}^{+0.25}$ & $-14.24_{-9.79}^{+17.15}$ & $130.72_{-11.19}^{+65.05}$ & $145.48_{-33.65}^{+42.59}$ & $291.64_{-34.19}^{+5.40}$ \\ &Fisher Matrix & (0.12) & (11.28) & (17.37) & (36.75) & (10.65)\\ \hline \multirow{1}{*}{Multiple lines}&MCMC & $0.64_{-0.13}^{+0.22}$ & $5.44_{-9.08}^{+18.10}$ & $147.88_{-34.85}^{+124.25}$ & $180.16_{-21.43}^{+26.58}$ & $308.14_{-83.92}^{+13.44}$ \\ plus continuum&Fisher Matrix & (0.19) & (15.36) & (27.04) & (53.07) & (17.19)\\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \caption{Same as Table \ref{tbl:para_wd} but for the SD case.} \label{tbl:para_sd} \begin{center} \begin{tabular}{c|l| c c c c c} \hline \hline &&$f_1$&$v_{pec}$(km/s)&$v_{rel}$(km/s)&$v_{tb,1}$(km/s)&$v_{tb,2}$(km/s)\\ &Input & 0.4 & 0 & 500 & 150 & 300 \\ \hline \multirow{2}{*}{Single line}&MCMC & $0.40_{-0.02}^{+0.01}$ & $2.17_{-6.40}^{+7.28}$ & $493.65_{-4.82}^{+4.54}$ & $152.71_{-5.93}^{+7.62}$ & $310.98_{-5.33}^{+6.65}$ \\ &Fisher Matrix & (0.01) & (6.40) & (4.52) & (11.45) & (9.77)\\ \hline \multirow{2}{*}{Multiple lines}&MCMC & $0.41_{-0.01}^{+0.01}$ & $2.47_{-4.72}^{+6.18}$ & $505.28_{-4.13}^{+3.96}$ & $148.71_{-9.61}^{+14.19}$ & $294.13_{-9.43}^{+8.84}$ \\ &Fisher Matrix & (0.01) & (5.73) & (4.15) & (12.39) & (9.24)\\ \hline \multirow{1}{*}{Multiple lines}&MCMC & $0.40_{-0.02}^{+0.02}$ & $-0.01_{-8.19}^{+7.88}$ & $486.43_{-4.98}^{+4.90}$ & $167.92_{-16.89}^{+15.33}$ & $309.75_{-14.03}^{+14.04}$ \\ plus continuum&Fisher Matrix & (0.02) & (6.72) & (4.61) & (15.75) & (12.50)\\ \hline \end{tabular} \end{center} \end{table*} We begin our discussion of mixture modeling of cluster emission line spectra with the simplest case. For now we ignore line blending and continuum emission, and only consider one emission line -- the He-like iron line at 6.7 keV. Again, we assume the PDF is composed of two Gaussian components. Most of these assumptions will be relaxed later. We assume the cluster is isothermal with a temperature of 5 keV. The assumption of an isothermal distribution is of course somewhat crude for the entire cluster. However, for nearby clusters, the emission-weighted spectrum is accumulated from a small area where temperature variations are generally mild ($< 0.5$ keV). Moreover, our results are not very sensitive to the temperature distribution. We express our results in terms of the bulk peculiar velocity of the first component ($v_{pec}$), the relative velocity between the two components ($v_{rel}$), and the 3D turbulent velocity dispersions of each component ($v_{\rm tb,1}$ and $v_{\rm tb,2}$). We assume isotropic turbulence, so the line of sight velocity dispersion is $v_{\rm tb,i}/\sqrt{3}$. 
They are related to the Gaussian PDF via: \begin{eqnarray} \mu_1&=&\nu_0+\nu_0\frac{v_{pec}}{c},\nonumber\\ s&=&\nu_0\frac{v_{rel}}{c},\nonumber\\ \sigma_i&=&\sqrt{\sigma_{tb,i}^2+\sigma_{ther}^2+\sigma_{instr}^2},\nonumber\\ \sigma_{tb,i}&=&\nu_0\frac{v_{tb,i}}{\sqrt{3}c},\nonumber\\ \sigma_{ther}&=&\frac{\nu_0}{c}\sqrt{\frac{kT}{Am_p}}, \label{eqn:velocities} \end{eqnarray} where $\nu_0$ is the line frequency in the rest frame, $\sigma_{instr}$ is the standard deviation of the instrumental broadening (FWHM/2.35), $A$ is the atomic weight of iron and $m_p$ is the proton mass. In our WD example, we assume $(v_{\rm tb,1},v_{\rm tb,2})=(150, 300) \, {\rm km \, s^{-1}}$, and $v_{\rm rel}=100 \, {\rm km \, s^{-1}}$. For the SD example, we assume the same $v_{\rm tb,1},v_{\rm tb,2}$, but $v_{\rm rel}=500 \, {\rm km \, s^{-1}}$. In all cases, we assume the bulk velocity zero-point $v_{\rm pec}=0 \, {\rm km \, s^{-1}}$.\footnote{Note that if the redshift of the collisionless component of the cluster (which does not participate in gas bulk motions) can be determined to high accuracy by spectroscopy of numerous galaxies, then $v_{\rm pec}$ and $v_{\rm pec}+v_{\rm rel}$ give the line-of-sight bulk velocities of the two components with respect to the cluster potential well. For instance, for nearby clusters where $N_{\rm gal} \sim 400$ galaxy redshifts have been measured, the error in the center-of-mass velocity is $\sim 1000 \, {\rm km \, s^{-1}}/(\sqrt{3}\sqrt{N_{\rm gal}}) \sim 30 \, {\rm km \, s^{-1}}$. Otherwise, only $v_{\rm rel}$ (the relative bulk velocity) is of physical significance.} Sloshing in the cluster potential well generally results in bulk motions with transonic Mach numbers \citep{markevitch07}, so such a value is realistic for a 5 keV cluster (with sound speed $c_{\rm s} \sim 1000 \, {\rm km \, s^{-1}}$) along an arbitrary line of sight---indeed, such velocities are found in the simulated cluster in \S\ref{subsec:example}. With these assumptions, the widths of the first and second component, including instrumental, thermal and turbulent broadening, are 3.54 and 4.87 eV; the offsets between peaks are 1.12 and 11.17 eV for the WD and SD cases, respectively. These parameter choices correspond to $s/(\sigma_{1}+\sigma_{2})=(0.13,1.3)$, respectively, and thus can be compared to expectations from Fig. \ref{fig:separation}. Note that the SD case is not quite in the asymptotic regime $s/(\sigma_{1}+\sigma_{2}) ~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 3$ yet (where the relative errors would be $\sim 1/\sqrt{f_{i} N_{d}} \sim 1\%$), but it is fairly close. The mock spectra and best-fit models for $10^4$ photons are shown in Fig. \ref{fig:spec_single}. The best-fit parameters and their uncertainties are listed in the first row of Tables \ref{tbl:para_wd} and \ref{tbl:para_sd}. In accordance with expectations from \S\ref{subsec:general}, component recovery is remarkably accurate. Even in the WD case, which is visually indistinguishable from a single Gaussian (see top panel of Fig. \ref{fig:spec_single}), the decomposition into the original mixtures is very good, and most velocities are constrained to within $\sim 10-30 \, {\rm km \, s^{-1}}$, which is significantly higher accuracy than needed to model the physical effects of bulk motions and turbulence in the cluster. This showcases the great potential of high spectral resolution instruments. Of particular interest is the constraint on the mixing fraction, which is a very good indicator of our ability to separate different components. 
A confident detection of multiple components should have $f_1/\Delta(f_1)$ larger than a few, i.e., the best-fit fraction should be at least a few $\sigma$ away from ``non-detection'' ($f_1$ or $f_2$ equal to 0). In the single line scenario, the $1-\sigma$ error of $f_1$ is 0.01 and 0.12 in the SD and WD cases, respectively, consistent with our expectations from discussions in the previous section. However, a large fraction of the constraints in the WD case comes from the tails, and could easily be affected by continuum emission (see discussion below). Also note the general consistency between Fisher matrix and MCMC techniques, indicating that the likelihood surface is close to Gaussian in this scenario.
\subsection{The Impact of Multiple Lines and Continuum}
\label{subsec:case2}
In this section, we consider the impact of multiple lines and continuum emission. Iron lines appear as a line complex between 6.6 and 6.75 keV, and these lines inevitably blend together. Multiple lines have two competing effects. First, taking all lines into account--all of which have identical mixture decompositions--means more photons, which reduces shot noise in parameter estimates. The photon count from the entire line complex is about twice that from the He-like iron line alone. Second, as different lines blend together, information contained in the shape of individual lines is partly lost due to blending in the line wings. The line wings are crucial in driving parameter estimation in the hybrid and WD cases (note, however, from Fig. \ref{fig:spec_multiple} that the lowest- and highest-energy lines in the complex have low- and high-energy wings, respectively, which are unaffected by blending; this is particularly important for the high-energy He-like line, which is by far the strongest line in the complex). These two factors have opposite effects on the constraints. As in the previous sub-section, we run MCMC chains and compute Fisher matrices to estimate the constraints. The properties of the line complex were taken from the ATOMDB database\footnote{http://www.atomdb.org/} (v. 2.0.1). To save computing time, we only included the ten strongest lines. Fisher matrix estimates including more lines show a negligible difference. The results are listed in the second row of Tables \ref{tbl:para_wd} and \ref{tbl:para_sd}. In the SD case, the constraints estimated using both MCMC and Fisher matrix techniques are very close to those in the single line scenario, indicating almost total cancellation between the effects just mentioned. In the WD case, the constraints from the Fisher matrix technique are again close to the single line scenario. However, the results from MCMC runs show asymmetric errors, and in general, the constraints are worse than in the single line scenario. Line blending seems to make the likelihood surface significantly non-Gaussian. Next, we include the effect of continuum emission. Continuum acts as a source of background noise. Even though we can measure and subtract the continuum, doing so introduces shot noise, particularly in the line wings, where Fe line emission and continuum brightness become comparable, or where the continuum even dominates. The relative level of continuum and Fe line emission is controlled by metallicity; larger metallicities imply brighter lines. The mean metallicity of clusters is typically ${\rm Z} \sim 0.3 \, {\rm Z_{\odot}}$, which we shall assume, though the metallicity in the cluster center is often higher due to contributions from the cD galaxy.
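To make the setup concrete, a mock spectrum with line blending and continuum can be sampled along the following lines. This is a minimal sketch under stated assumptions, not the paper's pipeline: the line energies and relative emissivities below are placeholders standing in for the ATOMDB values, and the continuum is simply modeled as a flat component contributing an assumed fixed fraction of the photons rather than being tied to a metallicity.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the Fe line complex near 6.7 keV
# (energy [eV], relative emissivity); the paper uses ATOMDB values.
lines = [(6636.6, 0.3), (6667.6, 0.2), (6700.4, 1.0)]
line_E = np.array([e for e, _ in lines])
line_w = np.array([w for _, w in lines])
line_w = line_w / line_w.sum()

def mock_spectrum(n_photons, f1, v_pec, v_rel, sig1, sig2,
                  cont_frac=0.2, band=(6550.0, 6850.0)):
    """Draw photon energies [eV] from a two-component Gaussian mixture
    applied to every line, plus a flat continuum over `band`."""
    c = 2.998e5                                   # km/s
    n_cont = rng.binomial(n_photons, cont_frac)   # continuum photons
    n_line = n_photons - n_cont
    which_line = rng.choice(len(lines), size=n_line, p=line_w)
    in_comp1 = rng.random(n_line) < f1            # mixture assignment
    E0 = line_E[which_line]
    mu = E0 * (1.0 + v_pec / c) + np.where(in_comp1, 0.0, E0 * v_rel / c)
    sigma = np.where(in_comp1, sig1, sig2)        # total widths [eV]
    e_line = rng.normal(mu, sigma)
    e_cont = rng.uniform(band[0], band[1], size=n_cont)
    return np.concatenate([e_line, e_cont])

# e.g. an SD-like realization with 10^4 photons
energies = mock_spectrum(10_000, f1=0.4, v_pec=0.0, v_rel=500.0,
                         sig1=3.54, sig2=4.87)
\end{verbatim}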
We apply our mixture model incorporating the effects of both line blending and continuum; the results are shown in Tables \ref{tbl:para_wd} and \ref{tbl:para_sd}, and in Fig. \ref{fig:spec_multiple} (for clarity, only 1/3 of the data points are shown in this figure). The results are as one might expect. In the SD case, the constraints are only slightly worsened, since the mixtures are clearly separated, and almost all the $\sim f_{i} N_{d}$ points in a given mixture can be used for parameter estimates; only a small fraction in the line tails are contaminated by line blending and the continuum. The constraints in the WD case are more severely affected, since the constraints in this case are largely drawn from the tails; here, the differences between the MCMC
and Fisher matrix techniques are also further enlarged. The presence of the continuum and line blending limits the domain of the WD regime, which is no longer strictly independent of $s/(\sigma_{1}+\sigma_{2})$. For instance, if we assume $v_{\rm rel} = 50 \, {\rm km \, s^{-1}}$ (corresponding to $s/(\sigma_{1}+\sigma_{2})= 0.067$), the MCMC simulations fail to converge. They thus limit our ability to constrain components with small separations, though in practice such small separations should be rare.
\subsection{Model Selection: When is a Mixture Model Fit Justified?}
\label{subsec:prospects}
\begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{60mm}{!}{\includegraphics{f12.eps}}} \end{tabular} \caption{Model selection: regions of the $f_1$ - $v_{rel}$ plane where the double component model is preferred according to the BIC, for $v_{\rm tb,1}=100 \, {\rm km \, s^{-1}}$, $v_{\rm tb,2}=(200,300,400) \, {\rm km \, s^{-1}}$, and $f_{1}=0.4$, $v_{\rm pec}=0 \, {\rm km \, s^{-1}}$. } \label{fig:select1} \end{figure}
\begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{60mm}{!}{\includegraphics{f13.eps}}} \end{tabular} \caption{Model selection: the shaded regions show the regions of $(v_{\rm rel},v_{\rm tb,2})$ parameter space where the double component model is preferred according to the BIC, while the hatched regions are where the mixing fraction is accurately constrained: $\Delta(f_{1}) < (0.2,0.1)$ (cyan and purple hatches respectively). All other parameters are as in Fig. \ref{fig:select1}.} \label{fig:select2} \end{figure}
Thus far, we have only considered how accurately mixture model parameters can be constrained. However, this raises the question of whether a mixture model approach is justified at all, particularly when (as in the WD case) the observed emission line is visually indistinguishable from a single Gaussian. Introducing additional parameters will always result in an improved fit, even when these parameters are largely irrelevant and of little physical significance. This is essentially a model selection problem. We use information criteria (e.g., see \citet{liddle04}), which penalize models with more parameters, to identify preferred models. While they have solid underpinnings in statistical theory, they fortunately have very simple analytic expressions. In this paper, we use the Bayesian Information Criterion (BIC; \citet{Schwarz1978}):
\begin{eqnarray}
BIC\equiv -2 \ln{\cal L}_\mathrm{max}+k\ln N
\label{eqn:bic}
\end{eqnarray}
where ${\cal L}_\mathrm{max}$ is the maximum likelihood achievable by the model, $k$ is the number of free parameters, and $N$ is the number of data points; the preferred model is the one which minimizes the BIC. The BIC comes from the Bayes factor \citep{jeffreys61}, which gives the posterior odds of one model against another. We use it over the closely related Akaike Information Criterion (AIC; \citet{akaike74}), which places a lower penalty on additional model parameters. Thus, we adopt a conservative criterion for preferring mixture models. The absolute value of the BIC has no significance, only the relative value between models. A difference of 2 is regarded as positive evidence, and of 6 or more as strong evidence, to prefer the model with lower BIC \citep{jeffreys61,mukherjee98}. Note that the BIC does not incorporate prior information.
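As a toy illustration of this criterion (not the paper's full spectral model, which also includes line blending and continuum), one can compare one- and two-component fits to an SD-like sample of $10^4$ mock photon energies. The sketch below uses \texttt{scikit-learn}'s \texttt{GaussianMixture}, whose \texttt{bic} method implements the same $-2\ln{\cal L}_{\rm max}+k\ln N$ form for unbinned Gaussian mixtures; the numbers are the toy widths and offset quoted earlier.

\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# SD-like toy sample: two Gaussian components with the widths and peak
# offset quoted above (illustrative only; no lines or continuum here).
n, f1 = 10_000, 0.4
in_comp1 = rng.random(n) < f1
x = np.where(in_comp1,
             rng.normal(6700.0, 3.54, n),           # component 1
             rng.normal(6700.0 + 11.17, 4.87, n))   # component 2, offset
X = x.reshape(-1, 1)

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in (1, 2)}
print(bic[2] - bic[1])   # strongly negative: the two-component model wins
\end{verbatim}

For WD-like parameters the same comparison is much less decisive, which is the regime mapped out in Figs. \ref{fig:select1} and \ref{fig:select2}.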
Incorporating prior information is possible with the more sophisticated notion of Bayesian evidence (e.g., \citet{mackay03}), but involves expensive integrals over likelihood space, and is unnecessary in our case since we adopt uninformative priors. We aim to distinguish the double component model with $k=5$ (free parameters: $v_{\rm pec}, v_{\rm rel}, f_{1}, v_{\rm tb,1}, v_{\rm tb,2}$) from the single component model with $k=2$ (free parameters: $\mu, \sigma$). We create simulated data sets which have two underlying components, and determine in which regions of parameter space the BIC correctly prefers the two component model. Our simulated line profiles incorporate the additional effects of thermal and instrumental broadening, continuum, and line blending. Rather than exploring the full 5 dimensional space, we explore the most interesting subspace to see where model selection is effective. In Fig. \ref{fig:select1} we explore model selection in the $f_1$ - $v_{\rm rel}$ plane, for $v_{\rm tb,2}=(200,300,400) \, {\rm km \, s^{-1}}$, and $v_{\rm pec}=0 \, {\rm km \, s^{-1}}$, $f_{1}=0.4$, $v_{\rm tb,1}=100 \, {\rm km \, s^{-1}}$. The plot shows where the BIC for the double component fit is smaller than that for the single component fit (note that the BIC is obtained by allowing for variation in all fitted parameters; we are just plotting model selection in a subspace). When $v_{\rm tb,2}=400 \, {\rm km \, s^{-1}}$, all values of $v_{\rm rel}$ and all $0.1 < f_{1} < 0.9$ permit correct selection of the double component model. The result is very similar for $v_{\rm tb,2}=300 \, {\rm km \, s^{-1}}$, but for $v_{\rm tb,2}=200 \, {\rm km \, s^{-1}}$, if both $f_{1}$ and $v_{\rm rel}$ take low values, the double component model is not preferred. Overall, it is reassuring to see that model selection is not very sensitive to $f_{1}$, since we previously restricted our studies to $f_{1}=0.4$. Thus, even if a smaller fraction of the emission weighted volume has a markedly different velocity structure, it will be detectable in the spectrum. In Fig. \ref{fig:select2}, we show the regions of $(v_{\rm rel},v_{\rm tb,2})$ parameter space where the double component model is preferred according to the BIC. Overall, as expected, the mixtures can be distinguished if $v_{\rm rel}$ or $v_{\rm tb,2}$ are large; for Astro-H and with the adopted parameters, this is of order $200 \, {\rm km \, s^{-1}}$. In addition, we show the regions where the mixing fraction $f_{1}$ is accurately constrained to $\Delta (f_{1}) < (0.1,0.2)$, since the error on the mixing fraction should be a good indicator of our ability to distinguish mixtures. We use the Fisher matrix formalism to calculate these constraints. The results are qualitatively similar to those obtained with the BIC, though somewhat more restrictive.
\subsection{Non-Gaussian Mixture Components}
\label{subsec: nongaussianity}
\begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{100mm}{!}{\includegraphics{f14.eps}}} \end{tabular} \caption{The velocity PDFs in the WD (upper panel) and SD (lower panel) cases. The solid (red) and dashed (green) curves are the input and recovered PDFs, respectively. The thick curves are the overall PDFs, while the thin curves show the individual components. } \label{fig:nongaus} \end{figure}
\begin{table*} \caption{Non-Gaussian mixture components: input parameters (obtained by shifting and rescaling the non-Gaussian mixtures) and recovered best-fit parameters together with their 1-$\sigma$ errors, for both the WD and SD cases, as in Fig. \ref{fig:nongaus}.
Note that the Fisher matrix results--which require an analytic likelihood--assume Gaussian mixtures, and hence are the same as in Tables \ref{tbl:para_wd} and \ref{tbl:para_sd}.} \label{tbl:nongaus} \begin{center} \begin{tabular}{c|l| c c c c c} \hline \hline &&$f_1$&$v_{pec}$(km/s)&$v_{rel}$(km/s)&$v_{tb,1}$(km/s)&$v_{tb,2}$(km/s)\\ \hline \multirow{3}{*}{WD}&Input & 0.4 & 0 & 100 & 150 & 300 \\ &MCMC & $0.17_{-0.01}^{+0.49}$ & $-33.18_{-11.54}^{+37.53}$ & $112.49_{-10.96}^{+89.48}$ & $63.80_{-14.55}^{+138.51}$ & $284.99_{-39.82}^{+4.86}$ \\ &Fisher Matrix & (0.19) & (15.36) & (27.04) & (53.07) & (17.19)\\ \hline \multirow{3}{*}{SD}&Input & 0.4 & 0 & 500 & 150 & 300 \\ &MCMC & $0.43_{-0.02}^{+0.01}$ & $8.37_{-7.44}^{+5.83}$ & $509.22_{-4.49}^{+4.49}$ & $162.28_{-17.94}^{+12.09}$ & $284.55_{-10.77}^{+13.81}$ \\ &Fisher Matrix & (0.02) & (6.72) & (4.61) & (15.75) & (12.50)\\ \hline \end{tabular} \end{center} \end{table*} All the preceding discussions are based on the assumption that the PDFs of individual components are Gaussian, which is not true in general. As we see in Fig. \ref{fig:spectrum}, individual mixtures show deviations from Gaussianity, i.e. Gaussians are a good but imperfect set of basis functions. In principle, this can be dealt with by fitting higher order mixture models, but in practice the data quality from Astro-H does not allow this; parameter estimation becomes unstable and large degeneracies develop, particularly since the higher order mixtures generally have low mixing fractions $f_{i}$. Unless their velocity means or widths are very different, the physical interpretation of these additional components is also more difficult. Here we construct a simple toy model to isolate the effects of non-Gaussian components. As there are many flavors of non-Gaussianity, the results we show are meant to be illustrative rather than definitive. To this end, we extract PDFs from a simulated relaxed cluster, use them as the ``basis'' PDFs of individual components, resize and combine them to produce a composite PDF, which is in turn used to generate mock spectra. The ``basis'' PDFs are extracted from a simulation by \citet{Vazza2010}, which the authors kindly made public; we sample different PDFs by looking along different lines of sight. The cluster, labeled as E14, has a mass of ${\rm M} \sim 10^{15}~\rm M_{\odot}$ and experienced its latest major merger at $z>1$. Due to shot noise arising from the finite resolution (25 ${\rm kpc} \, h^{-1}$) of this simulation--which results in a small number of cells--we are forced to extract the emission weighted velocity PDFs from a large volume of $400\times 400 \times 1000~{\rm kpc}^3$. The PDFs are shifted (to match means), linearly rescaled (to match variances) and combined to produce the same WD and SD cases as in \S~\ref{subsec:case2}.\footnote{We emphasize that this procedure is {\it not} meant to simulate what a realistic observation would see, which we treat in \S\ref{sec:application}. It is a toy model in the spirit of the preceding sub-sections, where we use simulations to generate non-Gaussian mixture components.} We then convolve the composite PDFs with thermal broadening and instrumental noise for the entire Fe line complex, and add continuum to produce mock spectra. Finally, we fit the mock spectra to separate and constrain the two components. The results are shown in Fig. \ref{fig:nongaus} and Table \ref{tbl:nongaus}. In Fig.
\ref{fig:nongaus}, the solid (red) curves are the input PDFs while the dashed (green) curves are the best-fit model. The thick and thin curves are the total PDF and individual components, respectively (note that because we display the velocity PDF rather than the spectrum, the multiple lines in the Fe complex, as well as the continuum and thermal/instrumental broadening, are not shown. However, all these effects are included in the simulations). In the SD case (lower panel), the two components are recovered almost perfectly. In the WD case (upper panel), however, there are some discrepancies between the input and output PDFs. The same conclusion can be drawn from Table \ref{tbl:nongaus}; in the WD case, the best-fit values of $f_1$ and $v_{tb,1}$ are somewhat different from the input values. However, they are still within the (large) errors. Comparing Table \ref{tbl:nongaus} with Tables \ref{tbl:para_wd} and \ref{tbl:para_sd}, we see that at least in this case, non-Gaussian components have a limited effect on the results. Note the strong discrepancy between MCMC and Fisher matrix error bars in both cases, and in particular the strong asymmetry in MCMC errors. We repeated the same exercise several times with PDFs randomly drawn along different lines of sight from the same simulation. In most attempts, we are able to recover the input parameter values within the uncertainties. Thus, conclusions based on Gaussian components are still applicable when the true PDFs deviate from Gaussianity by a reasonable amount. Instrumental and thermal broadening, which are {\it Gaussian}, effectively smooth out small scale deviations from Gaussianity.
\section{Results from Numerical Simulations}
\label{sec:application}
\subsection{Cold Front Cluster}
\begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{100mm}{!}{\includegraphics{f15.eps}}} \end{tabular} \caption{``Cold front'' cluster: the solid (red) curves are the same velocity PDFs as in Fig. \ref{fig:spectrum}. The dashed (green) curves are the recovered PDFs from the best-fit models, and the dotted (blue) curves are the individual components. Numerical values of the fit parameters are in Table \ref{tbl:app1}.} \label{fig:app1} \end{figure}
\begin{table*} \caption{``Cold front'' cluster: best-fit parameters and their uncertainties for the PDFs in Fig. \ref{fig:app1}, obtained using the Enzo simulation described in \S\ref{subsec:example}. Cases 1 and 2 are the top and bottom panels of Fig. \ref{fig:app1} respectively.
The ``true values'' are obtained by fitting the PDF directly, while the recovered values are obtained from the mock spectrum of $10^4$ photons, which includes line blending, thermal and instrumental broadening, and continuum emission.} \label{tbl:app1} \begin{center} \begin{tabular}{c | l | c c c c c} \hline \hline &&$f_1$&$v_{pec}$(km/s)&$v_{rel}$(km/s)&$v_{tb,1}$(km/s)&$v_{tb,2}$(km/s)\\ \hline \multirow{2}{*}{Case 1}&True values & $0.40_{-0.01}^{+0.01}$ & $-359.53_{-1.93}^{+1.73}$ & $517.15_{-2.76}^{+2.24}$ & $155.00_{-2.68}^{+2.49}$ & $269.39_{-3.02}^{+4.04}$ \\ &Recovered& $0.43_{-0.01}^{+0.01}$ & $-348.67_{-6.00}^{+6.34}$ & $522.79_{-4.82}^{+3.31}$ & $165.52_{-16.27}^{+13.16}$ & $247.04_{-9.66}^{+12.70}$ \\ \hline \multirow{2}{*}{Case 2}&True values& $0.78_{-0.02}^{+0.02}$ & $-40.49_{-1.52}^{+1.70}$ & $142.10_{-6.40}^{+11.28}$ & $128.32_{-2.01}^{+2.17}$ & $220.24_{-6.93}^{+4.22}$ \\ &Recovered& $0.80_{-0.26}^{+0.11}$ & $-37.14_{-13.42}^{+9.09}$ & $136.29_{-60.12}^{+78.13}$ & $131.28_{-38.74}^{+11.02}$ & $237.73_{-53.31}^{+23.30}$ \\ \hline \end{tabular} \end{center} \end{table*} Finally, we apply our tool to cluster simulations. In the first example, we attempt to recover the velocity PDFs shown in Fig. \ref{fig:spectrum}, which derive from a cosmological Enzo simulation of a cold front cluster. These two cases, which come from different lines of sight through the same cluster, correspond to $s/(\sigma_{1}+\sigma_{2}) = {1.42}$ and $s/(\sigma_{1}+\sigma_{2}) = {0.42}$ respectively, i.e. in the ``separation-driven'' and ``width-driven'' regimes. We first fit the PDFs with a mixture model when no sources of noise or confusion are present, to derive the ``true'' parameter values. We then generate mock spectra by adding thermal and instrumental broadening, continuum emission and line blending to the PDFs, and apply mixture modeling to the results. The results are given in Fig. \ref{fig:app1} and Table \ref{tbl:app1}. Overall, the results are very good. The best-fit models successfully recover the general features of the PDFs, and yield accurate parameter estimates, with uncertainties which are consistent with our estimates from the toy models -- on the order of $\sim 10\%$ for the width-driven case (case 2) and $\sim 1\%$ for the separation-driven case (case 1). No systematic biases appear to be present. As we discussed in \S\ref{section:physical_significance}, these parameters all have physical significance: $v_{\rm tb,i}$ relates to the turbulent energy density in each component, $f_{i}$ to the emission weighted volume fraction of each component, and $v_{\rm rel}$ to the bulk velocity shear between them. We also applied the single component model to the same mock spectra, and compared the BIC values. In both cases, the double component model is preferred (case 1: $BIC_{\rm double}-BIC_{\rm single}= -1002$; case 2: $BIC_{\rm double}-BIC_{\rm single}= -10$).
\subsection{AGN Feedback Cluster}
\begin{figure} \begin{tabular}{c} \rotatebox{0}{\resizebox{85mm}{!}{\includegraphics{f16.eps}}} \end{tabular} \caption{Density map and velocity field on the $y-z$ plane. The size of the figure is 1 Mpc; the bulk motion along the y-direction has been subtracted out. } \label{fig:agnmap} \end{figure}
\begin{figure} \begin{tabular}{c} \rotatebox{-90}{\resizebox{60mm}{!}{\includegraphics{f17.eps}}} \end{tabular} \caption{``AGN feedback'' cluster: the solid (red) curves are the velocity PDFs from the simulation.
The dashed (green) curves are the recovered PDFs from the best-fit models, and the dotted (blue) curves are the individual components. Numerical values of the fits are in Table \ref{tbl:app2}.} \label{fig:agn} \end{figure} \begin{figure} \begin{tabular}{c} \rotatebox{0}{\resizebox{85mm}{!}{\includegraphics{f18.ps}}} \end{tabular} \caption{Error contours for the ``AGN feedback'' case: contours depict the 68\% and 95\% confidence levels for the marginalized distribution; the shading shows the mean likelihood of the samples; the solid and dashed curves in the 1-D plots are the fully marginalized posterior and the relative mean likelihood of the samples, respectively. The stars and vertical lines label the positions of the true values.} \label{fig:agn_2D} \end{figure} \begin{table*} \caption{``AGN feedback'' cluster: best-fit parameters and their uncertainties for the simulated PDF in Fig. \ref{fig:agn}. The ``true values'' are obtained by fitting the PDF directly, while the recovered values are obtained from the mock spectrum of $10^5$ photons, which includes line blending, thermal broadening, instrument noise, and continuum emission.} \label{tbl:app2} \begin{center} \begin{tabular}{ l | c c c c c} \hline \hline &$f_1$&$v_{pec}$(km/s)&$v_{rel}$(km/s)&$v_{tb,1}$(km/s)&$v_{tb,2}$(km/s)\\ \hline True values & $0.28_{-0.01}^{+0.01}$ & $-5.89_{-0.84}^{+0.78}$ & $2.91_{-1.72}^{+2.16}$ & $44.72_{-1.32}^{+1.83}$ & $240.07_{-1.89}^{+2.84}$ \\ Recovered& $0.30_{-0.01}^{+0.08}$ & $-1.58_{-1.96}^{+2.66}$ & $-4.38_{-4.98}^{+2.94}$ & $23.43_{-10.56}^{+50.80}$ & $246.80_{-3.42}^{+10.89}$ \\ \hline \end{tabular} \end{center} \end{table*} The second example is a FLASH simulation with static gravity and radiative cooling of a cluster with an AGN in the center (hereafter denoted as ``AGN feedback''); a simulation snapshot was kindly provided to us by Marcus Br\"{u}ggen. The simulated cluster, meant to mimic Hydra A, is described in \citet{bruggen07} and \citet{simionescu09}; numerous plots of the velocity field can also be found in \citet{vazza12}. Here, we briefly summarize some properties. The box size was 1 Mpc, and the AMR resolution reached 0.5 kpc at its finest in the center, coarsening to a maximum cell size of (1,4,8) kpc outside (16,100,200) kpc, respectively. A bipolar jet 2 kpc in diameter with power $L_{\rm jet} = 3 \times 10^{45} \, {\rm erg \, s^{-1}}$ was then introduced; for the analyzed snapshot the bulk velocity along the jet is $\sim 1500-1800 \, {\rm km \, s^{-1}}$, and a $M\sim 1.3$ shock has been driven into the surrounding ICM. The AGN was also given a bulk velocity of $\sim670 \, {\rm km \, s^{-1}}$ along the direction of (-1,1,0) relative to the ambient ICM, to mimic the observed offset between the shock center and the AGN in Hydra A. In Fig. \ref{fig:agnmap}, we show a density and velocity field map, 1 Mpc on a side, on the y-z plane through the center. The large bulk velocity along the x-direction has been subtracted from the figure. The outflows from the AGN stir the gas in the central region ($\sim 300$ kpc in radius), while the ambient gas is left relatively quiescent\footnote{The small velocity dispersion ($\sim 45 \, {\rm km \, s^{-1}}$) of the quiescent region in this example comes from the fact that, apart from the AGN outburst, this is a relaxed cluster which has not experienced any recent major mergers. Note, however, that the initial conditions come from cosmological GADGET SPH simulations where the small scale gas motions may not have been fully resolved.}.
The velocity field is predominantly radial outside 100 kpc (associated with jet expansion and the running shock), while it is close to isotropic within 100 kpc, indicating that instabilities have efficiently isotropized and distributed the jet power.\footnote{Note, however, as also discussed by \citet{vazza12}, that these simulations are purely hydrodynamic, and magnetohydrodynamic (MHD) effects can strongly affect fluid instabilities and energy transfer from AGN bubbles to the ICM \citep{ruszkowski07,dursi08,oneill09}. For instance, 3D MHD simulations of bipolar jets by \citet{oneill10} find to the contrary that jet energy is not efficiently distributed/isotropized, remaining instead near the jet/cocoon boundary.} In Fig. \ref{fig:agn}, we plot the emission-weighted velocity PDF along the z-direction inside an area of $1\times1~{\rm Mpc}^2$. The division between turbulent and quiescent gas shows up in the velocity PDF as a double Gaussian distribution -- a narrow Gaussian corresponding to the quiescent gas outside the core, and a broad Gaussian corresponding to the turbulent gas in the center. This is an example of non-volume filling turbulence discussed in \S~\ref{subsec:others}. Note that we have pessimistically chosen a viewing direction in which there are no bulk motions (similar to the ``width driven'' case of the preceding example). For other viewing angles, the jet expansion drives bulk motions which result in two clear peaks in the spectrum (similar to the preceding ``separation driven'' case). The scales in this Hydra A example are so large that in this particular instance, the velocity structure could be spatially resolved by Astro-H. However, Hydra A is of course an extremely rare and energetic outburst; for more typical jet luminosities of $L_{\rm jet} \sim 10^{44} \, {\rm erg \, s^{-1}}$, the turbulently stirred region will be at least a factor of $\sim 30^{1/3}\sim 3$ smaller or $\sim 100$ kpc in size, and hence barely resolved by Astro-H. In this instance, mixture modeling will still be required to uncover the filling fraction of turbulence. Also, as previously discussed, MHD simulations show that motions are not efficiently isotropized and distributed within the region of influence of the AGN, so in reality there could be small scale intermittency in turbulence which would be spatially unresolved, but detectable with mixture modeling. To approximate such situations, we analyze the spectrum with the velocity PDF shown in Fig. \ref{fig:agn}, where the effects of line blending, thermal broadening, instrument noise, and continuum emission have been included. This is a clear example of the ``width driven'' scenario, with $\sigma_{1}/\sigma_{2} = {0.70}$. We were unable to recover the velocity PDF from the mock spectrum with $10^4$ photons. The estimated BIC values for the single and double component models using the ``true values'' indeed show that the single component model is preferred for $10^4$ photons ($BIC_{\rm double}-BIC_{\rm single}$=17). However, with $10^5$ photons (which is for instance, possible for Perseus; see Table \ref{tbl:clusters}), the two components could be easily separated (in this case, $BIC_{\rm double}-BIC_{\rm single}$=-102). The results are given in Table \ref{tbl:app2}. Again, here the ``true values'' are obtained by fitting the PDF directly, by generating a Monte-Carlo sample of $10^4$ photons. The corresponding 2-D error contours and marginalized posterior are shown in Fig. \ref{fig:agn_2D}. 
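For reference, the emission-weighted line-of-sight velocity PDFs fitted in this section can be assembled from the simulation fields along the lines of the following sketch. This is our own illustration, with random placeholder arrays standing in for the snapshot data and a simple $\rho^2$ emissivity weighting at roughly fixed temperature; it is not the pipeline used in the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Placeholder fields standing in for the simulation cube; in practice
# rho and vz are read from the snapshot over the chosen aperture.
shape = (64, 64, 64)
rho = rng.lognormal(mean=0.0, sigma=0.5, size=shape)   # gas density
vz = rng.normal(0.0, 200.0, size=shape)                # LOS velocity [km/s]

# Emission weight ~ rho^2 (bremsstrahlung-like scaling at roughly
# constant temperature); a T-dependent emissivity could be folded in.
w = rho**2

bins = np.linspace(-1000.0, 1000.0, 101)
pdf, edges = np.histogram(vz.ravel(), bins=bins, weights=w.ravel(),
                          density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
# (centers, pdf) is the emission-weighted LOS velocity PDF, which is then
# convolved with thermal and instrumental broadening, combined with the
# line complex and continuum, and fit with the mixture model.
\end{verbatim}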
Note the firm lower limit of $\sim 30\%$ on the quiescent component, a clear demonstration that turbulence is not volume-filling. The velocity dispersion, and hence the energy density, in the turbulent component are also accurately recovered.
\section{Conclusions}
\label{sec:conclusions}
Gas motions can have a profound influence on many physical processes in the ICM, but thus far we have lacked a direct measurement of turbulence in clusters. Upcoming X-ray missions--in particular Astro-H--are poised to change that, by directly measuring turbulent broadening of spectral lines. Thus far, most work has focussed on how gas motions can alter the mean and width of X-ray emission lines from galaxy clusters. However, the detailed shape of the line profile has valuable information beyond these first two moments. Exploiting the line shape (and thus the high spectral resolution of upcoming missions such as Astro-H) can in many cases compensate for poor angular resolution in inferring the 3D velocity field. The main point of this paper is that the line-of-sight velocity PDF can often be meaningfully decomposed into multiple distinct and physically significant components. The separation is based on deviations of line profiles from a single Gaussian shape, driven by either the difference in width (``width-driven'', WD) or mean (``separation-driven'', SD) of the components. Such a mixture decomposition yields {\it qualitatively} different results from a single component fit, and the recovered mixture parameters have physical significance. For instance, bulk flows and sloshing produce components with offset means, while partial volume-filling turbulence from AGN or galaxy stirring leads to components with different widths. The offset between components allows us to measure gas bulk motions and separate them from small-scale turbulence, while component fractions and widths constrain the emission weighted volume and turbulent energy density in each component. With MCMC and Fisher matrix techniques, we evaluate the prospects of using Gaussian mixture models to separate and constrain different velocity modes in galaxy clusters from the 6.7 keV Fe line complex. We found that with $10^4$ photons (which is feasible for the $\sim 14$ nearest clusters; see Table \ref{tbl:clusters}), the components could be constrained with $\sim 10$\% accuracy in WD cases and $\sim 1$\% accuracy in SD cases, both in toy models and in simulations of clusters with cold fronts and AGN feedback. Continuum emission degrades the constraints in WD cases, while it has little impact on the SD cases. On the other hand, line blending appears to have little impact. We generally find that Astro-H is effective in separating different components when either the offset between the components or the width of one of the components is larger than $\sim 200$ km/s. Using PDFs taken from numerical simulations as ``basis'' functions, we find that reasonable deviations from Gaussianity in the mixture components do not affect our results. We also study error scalings and use information criteria to determine when a mixture model is preferred. Many extensions of this method are possible. For instance: (i) It would be interesting to compare the separation between bulk/turbulent motions obtained from mock X-ray spectra by mixture modeling, with algorithms for performing this separation for the full 3D velocity field in numerical simulations (e.g., \citet{vazza12}), to see how close the correspondence is.
(ii) In this study, we have assumed that due to Astro-H's poor spatial resolution, only line-of-sight information about the velocity field is available. In principle, it should be possible also to obtain information about the variation of the velocity field in the plane of the sky. For nearby clusters such as Perseus, it should be possible to examine the line shape as a function of projected radial position to obtain a full 3D reconstruction of the velocity field (a more detailed implementation of the suggestion by \citet{zhuravleva12} to study the variation of line center and width with projected radial position). It would be very interesting to study the variation of mixture parameters as a function of position in high-resolution simulations. Even for more distant clusters, a coarse-grained tiling of the cluster should be possible. (iii) High resolution X-ray imaging of cold-front clusters yields information about density/temperature contact discontinuities in the plane of the sky. This has already been used to infer the presence of sloshing and bulk motions, as well as physical properties of the ICM such as viscosity and thermal conductivity. Combining information about the density/temperature contact discontinuity in the plane of the sky with the line of sight information obtained by mixture modeling could enhance our understanding of gas sloshing in clusters, and give more precise constraints on velocities. It would likewise be interesting to employ mixture modeling on spectra of violent merger clusters with classic bow shocks. More generally, mixture modeling of spectra should prove useful whenever there are good reasons to believe that there are multiple components to the thermal or velocity field, and/or the line profile shows significant deviations from Gaussianity. For instance, it might be fruitful to consider applications to the ISM (e.g., \citet{falgarone04,Lazarian2006}), or Ly$\alpha$ emission from galaxies (e.g., \citet{hansen06,dijkstra12}).
\vspace{-0.5\baselineskip}
\section*{Acknowledgments}
We thank Marcus Br\"{u}ggen for kindly providing a simulation snapshot of AGN feedback from \citet{bruggen07}, the authors of \citet{Vazza2010} for making their simulation data publicly available, and Brendon Kelly, Chris Reynolds, Franco Vazza, Sebastian Heinz and Fanesca Young for helpful conversations or correspondence. We acknowledge NSF grant AST0908480 for support. SPO thanks UCLA and KITP (supported in part by the National Science Foundation under Grant No. NSF PHY05-51164) for hospitality. We acknowledge the use of facilities at the Center for Scientific Computing at the CNSI and MRL (supported by NSF MRSEC (DMR-1121053) and NSF CNS-0960316).
\section{Case Study of Narrative Transitions}
In this section, we take clips from one data video as a case to illustrate the effect of transitions in connecting narratives and visualizations. We highlight characteristic clips of this video and mark transitions in \textbf{bold} text. This video is about global wealth inequality\cite{TheRulesOrg2013}. It presents a series of data facts comparing the wealth of the poor and the rich. Most facts are linked through crafted transitions rather than simple fade-in/out effects. We introduce five clips (\autoref{fig:case-1}) in detail. The story starts with a pie chart, which shows the population distribution (\autoref{fig:case-1}(a1)). The red color represents the richest people, who make up only 1\%, while the pewter color represents the other 99\% of the people. Then, the pie chart \includegraphics[width=8.5em, trim=0 0.45em 0 0]{case-UpdatingContent-1.pdf} to present another data fact (\autoref{fig:case-1}(a2-a5)), where the thin red sector expands while the pewter part shrinks. By establishing a contrast of wealth and population between the two groups of people, this transition reveals that the 1\% richest people own much more than the rest. Within the transition, the color encodes the data category (the richest people and the rest). The color encoding also preserves the narrative information during the transition between the two pie charts to keep viewers oriented. The following footage consists of two icons and texts (\autoref{fig:case-1}(b1)). The scene shows the fact that the 3 billion poorest people and the 300 richest people own the same amount of wealth. Afterward, the narrator mentions that the people it takes to fill a mid-size commercial aircraft own more wealth than the combined populations of four countries. In the clip, an aircraft icon flies through the scene, and the previous pictograph leaves the scene under the \textbf{Truck} transition (\autoref{fig:case-1}(b2-b4)). The icon moves from left to right and guides the viewers' attention to the following world map. After this contextual data fact, the topic turns from inequality over the population to inequality over regions (\autoref{fig:case-1}(b5)). The aircraft in this clip is a \includegraphics[width=5em, trim=0 0.5em 0 0]{case-RSTGuide.pdf} that attracts viewers' attention and shifts the presented topic almost imperceptibly. Then, the world map is split into two parts by two colors (\autoref{fig:case-1}(c1)). Developed regions and less developed regions are encoded with red and blue, respectively. These regions first \includegraphics[width=2.4em, trim=0 0.5em 0 0]{case-Splitting.pdf} from the whole world map and then \includegraphics[width=3em, trim=0 0.4em 0 0]{case-Merging.pdf} into two different circular areas (\autoref{fig:case-1}(c2-c4)). Each circular area represents the total wealth of the merged regions. Together, the areas form a proportional area chart of global wealth split between poor and rich regions. The chart then \includegraphics[width=8.5em, trim=0 0.4em 0 0]{case-UpdatingContent-2.pdf} by \includegraphics[width=3.5em, trim=0 0.45em 0 0]{case-Scaling.pdf} to show the comparison of the rich regions and the poor regions over two hundred years (\autoref{fig:case-1}(d1-d5)). During this process, the layout and the color of the two circles remain constant, whereas the sizes of the two circular areas change according to the data. The change in sizes creates a prominent contrast between the wealth of the rich and poor regions.
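To make the mechanics of such a transition concrete, the following is a minimal sketch (ours, not taken from the video's production pipeline) of an \textit{Updating Content}-style transition between two pie charts in matplotlib: the categorical color encoding is held fixed while the data values are interpolated frame by frame. The numerical values are placeholders rather than the video's actual figures.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Placeholder data: shares of the two categories in the previous scene
# (population: 1% vs 99%) and in the coming scene (an illustrative
# wealth split, not the video's actual numbers).
start = np.array([0.01, 0.99])
end = np.array([0.43, 0.57])
colors = ["crimson", "silver"]  # the color encoding is the preserved variable

fig, ax = plt.subplots(figsize=(4, 4))

def draw(frame, n_frames=30):
    """Redraw the pie at interpolation parameter t in [0, 1]."""
    t = frame / n_frames
    ax.clear()
    ax.pie((1 - t) * start + t * end, colors=colors, startangle=90)
    ax.set_title("Updating Content (t = %.2f)" % t)

anim = FuncAnimation(fig, draw, frames=31, interval=50)
anim.save("pie_transition.gif", writer="pillow")  # requires the pillow package
\end{verbatim}

In the actual video the same idea is realized with smoother easing and synchronized narration, but the preserved color channel is what carries the narrative information across the change of data.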
\section{Discussion and Conclusion}
\label{section:discussion-conclusion}
In this study, we investigate the taxonomy of transition designs in data videos. First, we collect a dataset of 284 professional data videos with 3909 clips. These videos cover various visual styles of visualization content and diverse transition designs. Based on a content analysis of these clips, we propose a taxonomy of narrative transitions in data videos. With regard to the change of visual variables, we conduct a more in-depth analysis of data-driven narrative transitions that preserve the narrative meaning of content, namely, \textit{Preserving Guide} and \textit{Narrative Agent}. The proposed taxonomy of narrative transitions takes a first step in this direction, and we hope it encourages future research. First, following the taxonomy, evaluations of specific transition designs are needed to assess their effectiveness in improving the engagement and memorability of data videos. Designers can take transition designs into consideration when building the attention cues of data videos, for example, highlighting data facts or guiding viewers' attention. Second, our taxonomy provides a new way of inspecting the relationship between narrative transitions and story sequences in data videos. Narrative transitions can not only highlight visual changes in presentations but also enrich narrations. We plan to propose a comprehensive model that considers both narrative transitions and story sequences. Third, our taxonomy can inspire transition designs for other genres of narrative visualization, for example, animated long-form web articles or data-GIFs\cite{Shu2021}. Finally, beyond narrative, future work can consider other messages that transitions can convey, for example, visualization rhetoric\cite{Hullman2011} and acquired codes\cite{Byrne2015}.
\section{Design Suggestions}
\label{section:discussion-design-suggestions}
Informed by the content analysis and taxonomy, we summarize a series of design suggestions for transitions in data videos from the following three aspects, \textit{i.e.}, clip forms, content relations, and visualization types. Given that transitions of the \textit{Refresh}, \textit{Halftime}, and \textit{Camera Motion} categories are not specialized for any specific content, we summarize the available \textit{Preserving Guide} and \textit{Narrative Agent} transitions along these three aspects in \autoref{fig:suggestions}. The table presents the available transition choices in different situations, but without an effectiveness evaluation yet. We take typical transitions as examples to explain our design suggestions.
\begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{statistics-transitions.pdf} \caption{ (a) Five transition categories loosely ordered by the relevance of data content between scenes. (b) Result of our content analysis showing the statistics of visualization-to-visualization, visualization-to-non-visualization, and non-visualization-to-visualization transitions in our collected videos. } \label{fig:statistics-transitions} \end{figure*}
\subsection{Clip Form}
A data video has a series of clips, some of which are data clips, while others are not. These clips have different forms, i.e., different styles of presenting video content.
For example, videos such as \textit{We're quitting smoking, so why is big tobacco booming?}\cite{Guardian2019} from \textit{The Guardian} introduce social topics through an arranged multimedia sequence, which includes expert interview scenes and data visualizations. By contrast, videos such as \textit{Global Wealth Inequality}~\cite{TheRulesOrg2013}, shown in \autoref{fig:case-1}, mainly consist of animated graphics. Some videos, such as the famous \textit{Hans Rosling's 200 Countries, 200 Years, 4 Minutes -- The Joy of Stats}~\cite{BBC2010}, are a mixture of animated data visualizations and filmed scenes of the presenter. The transition designs in different clip forms vary. \textbf{Use shared visual variables to articulate the transitions between visualizations.} Videos whose transition clips involve only visualization content account for a large proportion, 94\% (267/285), of our data video dataset. Transitions between these clips are used to transform the video content between different charts. Staged animation in Heer and Robertson's work\cite{Heer2007} is suitable in this scenario. In our taxonomy, this type of staged animation belongs to \textit{Preserving Guide} and \textit{Narrative Agent}. Within these two transitions, the preserved visual variables encode the same narrative meaning in the charts of the previous and coming scenes, thereby keeping the viewers oriented during the change of video context. Transition designs between different visualizations can use \textit{RST Guide}, \textit{Staying Guide}, \textit{Updating Content}, etc., according to the forms of the visualizations. Suggestions on transitions between specific visualization types are discussed in detail in \autoref{section:discussion-vistype}. \textbf{Build the connection between visualization and other video content through transitions.} The transitions of \textit{Preserving Guide} and \textit{Narrative Agent} can fluently connect visualization content with other content in data videos (see \autoref{fig:intro-example}(a)). The transition designs for these clip forms can utilize similar visual variables shared between animated icons and visualization items (e.g., the circular shape of icons and pie charts, or the color of icons and specific encodings in charts). These same or similar visual variables create a bridge between non-data-driven animated icons and data-driven visualization items, enabling a seamless transfer of meaning between them. Moreover, texts and numbers can be reused in the coming scenes as titles or labels of charts without an additional set-up animation (\autoref{fig:teaser}(a1-a3)).
\begin{figure*}[htb] \centering \includegraphics[width=0.88\textwidth]{suggestions.pdf} \caption{ Available transitions in clip forms, content relations and visualization types. } \label{fig:suggestions} \end{figure*}
\subsection{Content Relation}
Data videos are author-driven stories\cite{Segel2010}. They have linear storylines presented in an arranged order of scenes. Thus, authors need to carefully consider the relationship between consecutive scenes and use elaborate transitions. Based on our analysis, specific transitions can help reveal the following content relations in data videos. \textbf{Question \& Answer}: \textbf{Reuse elements of the question when answering.} To lead viewers gradually from the establisher and initial parts of a story to its peak \cite{Amini2015}, designers can use attractive questions as a start and then use visualizations to answer the question by presenting data facts.
In this process, the contents of the questioning scene, such as texts, icons, or even question marks, can be used to create a \textit{Preserving Guide} transition. For example, the text of the question can be used as an \textit{RST Guide}, moving into the visualization as a related annotation of the chart. By doing so, the question and the related answer become a fluent sequence without an abrupt change. \textbf{Whole and Part}: \textbf{Group items, expand colors, or zoom.} This content relation means presenting different granularities of data in context. By grouping partial items together or splitting them, a \textit{Merging/Splitting} transition can reveal the affiliation between the partial items and the whole entity (\autoref{fig:case-1}(c1-c5), (a1-a5)). This content relation also includes topics and subtopics. If content related to a subtopic exists in the scene of the whole topic, then designers can use \textit{Zooming} (\autoref{fig:teaser}(d1-e1)) or an \textit{Expanding Guide} (\autoref{fig:teaser}(f1-f3)) to demonstrate the subtopic in detail after the presentation of the whole topic. Such related content (e.g., the red point in \autoref{fig:teaser}(d1), the blue sector in \autoref{fig:teaser}(f1)) serves as the medium of these types of transitions. A zoom-in camera motion or a background-expanding animation signals that the topic, especially the subtopic represented by the expanding color or the zooming focus, is about to be introduced in more detail. \textbf{Progress} and \textbf{Supplement}: \textbf{Present items one by one.} Telling a story progressively is common in storytelling. If contents are planned on a canvas layout, then designers can use a \textit{Pedestal/Truck} camera motion to present them individually (\autoref{fig:case-2}(b1-b5)). If no canvas layout is created for the scenes, then designers can use \textit{Rack Focus} to blur the previous content and present the supplement overlaid on the blurred background (\autoref{fig:teaser}(l1-l3)). The contents of the previous scene can also act as an \textit{RST Guide} to introduce a supplement in the coming scenes (\autoref{fig:case-2}(c1-c5, c5-c6)). The \textit{Morphing} transition can also be used in a progress narrative to present a changing process of contents (\autoref{fig:case-3}(c1-c5)). \textbf{Contrast}: \textbf{Preserve contexts for comparison.} Contrasts among different data can serve as strong arguments in a narrative. A contrast can be created gradually. Accordingly, the visualization in the previous scene can be regarded as a whole \textit{RST Guide}: it translates to a new position but is preserved in the scene, making space for the content it will be contrasted with. For example, suppose a designer wants to present two pie charts about the sex ratio of two classes for contrast. After presenting the pie chart of class A, she preserves the chart and lays it on the left side of the screen. Then, she presents the pie chart of class B on the right side. Moreover, designers can use \textit{Narrative Agent} transitions (e.g., \textit{Scaling}) to present the contrast of data in different situations by updating the related video content (\autoref{fig:case-1}(d1-d5)).
\subsection{Data Visualization Type}
\label{section:discussion-vistype}
Our content analysis demonstrates design tactics for different visualization types. We summarize these tactics on the basis of our data video dataset and taxonomy, including how to transform \textit{from} and \textit{to} them.
The frequencies of transitions \textit{from} and \textit{to} specific visualizations are presented in our supplemental materials. We first present general transition designs for visualizations and then specific tactics for different visualization types in turn. If details in a visualization are to be presented, designers can use \textit{Camera Motions}, such as \textit{Pedestal/Truck} and \textit{Zooming}, to show them for demonstration. Another way is to use \textit{Updating Content} to present these details in an arranged order. \textit{Updating Content} can also be used to present updates of the data on the basis of the same chart form. Designers can make good use of the same elements in different scenes. For example, the same color area can be used as an \textit{Expanding Guide}, annotation texts or icons can be used as an \textit{RST Guide}, and the same basic geometric shapes, such as circles and rectangles, can be used as a \textit{Staying Guide}. The individual visualization types have different visual components. Accordingly, the transition designs for the various visualization types are also distinguished by these components. We list specific tactics for the following visualization types as a supplement to the general tactics mentioned above. These types of visualization are commonly used in our data video dataset. \textbf{Line Chart}, \textbf{Scatter Plot}, and \textbf{Bar Chart}: These three types of charts have many common components, for example, axes, legends, and label texts, as well as data-encoded lines, points, and bars. Designers can take these components as a medium for the \textit{RST Guide} transition. Besides, the axes can also serve as a \textit{Staying Guide} when successively presenting these three types of charts with different contents. \textbf{Map}: Maps can present spatial data. Both the overview and the detailed views of maps are essential for presenting data patterns. \textit{Camera Motion} is a widely used transition to move between different levels of detail or places on maps. \textit{Zooming} can convey geographical context. For instance, the particular condition of a city in a country can be shown by zooming from the whole map of the country into the city, and vice versa. The horizontal or vertical movement of the camera, for example, \textit{Pedestal/Truck}, creates a sense of space in the scene and reveals the relative locations of different regions. The shape of a specific region on a map is a good choice for an \textit{RST Guide} if it also appears in nearby scenes. The color of one subregion can be an \textit{Expanding Guide}, that is, expanded as a background color when presenting detailed data of this subregion. \textbf{Proportional Area Chart}, \textbf{Pie Chart}, and \textbf{Donut Chart}: These types of charts present the proportional relation of data. The \textit{Merging} transition can gather parts that represent different categories of data from the previous scenes and combine them into a whole chart; the \textit{Splitting} transition does the opposite. Varying colors are used to encode the different proportional areas in these charts. These areas are well suited to presenting whole-and-part levels of information as the medium for an \textit{Expanding Guide} (\autoref{fig:case-3}(a1-a5)). Designers can also use \textit{Scaling} and \textit{Morphing} transitions to present changes in the proportional relation of the data by revising the size or shape of the changing parts.
\textbf{Diagram}: Diagrams can present pipelines. In contrast with statistical charts, this type of visualization presents the progression of a topic rather than data values. Designers can present each procedure in sequence through \textit{Pedestal/Truck} and use \textit{Zooming} to show the whole diagram at the end. The arrows between procedures can be used as the attention cues of the \textit{RST Guide} transition and link up the whole pipeline. As for transforming from and to diagrams, the icons and texts in the procedures of a diagram can be used as the \textit{RST Guide}. \textbf{Pictograph} and \textbf{Number, Icon and Text}: These types of visualizations mainly consist of animated icons. Such icons may have their own animations or interactions with other elements in the scene. Transitions from and to these visualizations can make good use of these icons, for example, taking them as the medium of the \textit{RST Guide} transition. In that way, the rotating, scaling, and translating animations of the icons double as the transition between scenes, thereby making the transformation seamless in the narrative.
\section{Related Work}
In this section, we introduce related work with regard to animated transitions and data videos. Animated transitions are commonly used in visualization presentations. Heer and Robertson\cite{Heer2007} examined the effect of animated transitions between statistical charts and proposed detailed designs concerning the congruence of contexts. On the basis of an evaluation of those designs, they indicated that some animated transitions could keep viewers oriented. This suggests that well-designed transitions between visualizations can improve viewers' perception and cognition. Researchers have recently conducted more specific work on visualization transition designs for different presentation tasks, such as the analogy between visualizations~\cite{Ruchikachorn2015}, aggregation operations\cite{Kim2019}, and visual grouping\cite{Chalbi2020}. As for design details, Thompson et al.\cite{Thompson2020} proposed several dimensions of animated data graphics and classified transitions based on these dimensions. Researchers have also used animated transitions for data-driven storytelling. In Segel and Heer's design space, as a type of visual narrative tactic, \textit{transition guidance} promotes the use of conventional methods to achieve continuity among different shots and keep viewers oriented when old scenes are destroyed and new scenes arrive\cite{Segel2010}. Animated transitions can be a type of transition guidance when telling stories. Hullman et al
.~\cite{Hullman2013} investigated the arrangements of narrative sequences based on transitions. Furthermore, Kim et al.~\cite{Kim2017} developed GraphScape, a model to evaluate the transition cost of visualization sequences. Researchers also use transitions to make stories cohesive in timelines~\cite{Brehmer2016} and slideshows~\cite{Wang2018}. Amini et al.~\cite{Amini2017} include \textit{Transition} in their taxonomy of data clip types. In their proposed authoring tool, DataClips, they expanded a few transition designs from and to pictographs based on Heer and Robertson's work~\cite{Heer2007}. However, the transition designs discussed in previous work do not cover all the transitions that connect contexts in the narrative of data videos. Data videos take a variety of forms, for example, animated graphics that consist of icons and characters, standard charts, and pictographs. Given that data videos contain not only visualizations but also iconic motion graphics, the design space of transitions between scenes is large. In this paper, we expand the taxonomy to include specific designs and usage scenarios of transition clips. Additionally, we pay attention to how they cement context in the linear narrative of data videos.
\section{Taxonomy of Transitions}
\label{section:taxonomy}
This section first presents an overview of our proposed taxonomy for narrative transitions in data videos and then introduces data-driven transitions in detail.
\subsection{Overview}
According to the definition of narrative transitions (Sec. 3.1), we examine the designs from two aspects, namely, data-driven animated transitions (e.g., staged animations~\cite{Heer2007}, \autoref{fig:intro-example}(b), and \autoref{fig:case-1}(a)(c)(d)) and non-data-driven ones (e.g., fade-in/out, \autoref{fig:intro-example}(a), and \autoref{fig:case-1}(b)). We list the five identified transition types in order of their relevance to data, where the first three pertain to non-data-driven transitions and the last two belong to data-driven ones. The frequency of each transition type in the dataset (3909 clips) is also reported. Note that multiple types can be combined to establish a transition in a clip. For example, \autoref{fig:case-1}(b) combines a camera motion and a preserving guide. We provide animated illustrations of these narrative transitions on the website: \url{https://narrativetransitions.github.io/home/}. \textbf{Refresh} (53.0\%, 2070) means a complete update of the previous scene (\autoref{fig:taxonomy}(a)). In this type of transition, no connection exists between the last frame of the previous scene and the first frame of the coming scene. Usually, this mechanism is used to present a new topic or an abrupt turn. We place combinations of the destruction of previous scenes and the creation of coming scenes in this category, for example,~\textit{Hard Cut},~\textit{Fade}, and \textit{Wipe}. \textbf{Halftime} (2.5\%, 98) adds a new scene between two pieces of video context (\autoref{fig:taxonomy}(b)). Such a scene is comparable to a quick half time or a stage curtain between the previous scene and the coming scene. \textbf{Camera Motion} (14.9\%, 583) updates the scene through a change of viewpoint or screen focus. The seven subtypes of camera motion are\cite{Rea2015,Storyblocks2019}: \textit{Pedestal}, \textit{Truck}, \textit{Tilt}, \textit{Pan}, \textit{Dolly}, \textit{Zoom}, and \textit{Rack Focus}. \textit{Pedestal} means moving the camera vertically.
By contrast, \textit{Truck} means moving the camera horizontally. \textit{Tilt} and \textit{Pan} also sweep the view vertically and horizontally, respectively; however, both keep the camera at a fixed position and pivot it in place during the movement. \textit{Dolly} means moving the camera forward or backward. \textit{Zoom} means changing the focal length. \textit{Rack Focus} means shifting the focus within the scene, for example, through bokeh effects. Camera movements can create a sense of space across scenes, and they are usually used to present spatial visualizations, such as maps. Changes in the focus of a scene can highlight the key points of the narration according to the video designer's intention. \textbf{Preserving Guide} (22.9\%, 894) reuses elements of the previous scene as a visual guide that directs attention to the next scene (\autoref{fig:taxonomy}(d)). For example, a preserving guide could be a flying icon (\autoref{fig:case-1}(b)), a colored area, or a stable line that leads the viewers' attention across two consecutive scenes. This guide can be combined with camera motions to construct fluid transformations (\autoref{fig:case-1}(b)). \textbf{Narrative Agent} (19.9\%, 777) uses visual elements as substitutes for data during data-driven storytelling (\autoref{fig:taxonomy}(e)). The transitioned elements can be regarded as agents of data attributes or data values, and the transition illustrates changes in the data through operations such as \textit{Scaling} and \textit{Merging}. Different transition designs are suited to connecting different narrative states. For example, in bar charts, data-encoded bars can be used as \textit{Narrative Agents} to present the change of data. Another example is that \textit{Camera Motion} can be used to move among different places on maps because this transition generates a sense of space. \vspace{-5pt} \begin{figure}[htb] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-10pt} \centering \includegraphics[width=0.9\columnwidth]{taxonomy-grey-new.pdf} \caption{ Taxonomy of transition designs in data videos (a-e). We also present changed and unchanged visual variables of \textit{Preserving Guide} and \textit{Narrative Agent} transitions (f). } \label{fig:taxonomy} \end{figure} \begin{figure*}[htb] \setlength{\abovecaptionskip}{3pt} \setlength{\belowcaptionskip}{-12pt} \centering \includegraphics[width=0.85\textwidth]{case-1-new.pdf} \caption{ Screenshots of the video \textit{Global Wealth Inequality}~\cite{TheRulesOrg2013}. We add green marks on the screenshots to show the animation between two consecutive scenes. We present the transition types, visual content, and corresponding transcripts (gray text) under each clip sequence. } \label{fig:case-1} \end{figure*} \subsection{Data-driven Narrative Transition} Transitions such as \textbf{Refresh}, \textbf{Halftime}, and \textbf{Camera Motion} can be used not only in data videos but also in other video narratives. \textbf{Refresh} is the most popular transition type in the dataset. This kind of transition has been integrated into existing video authoring tools~\cite{Premiere2020,AfterEffects2020} and can be easily employed in videos. \textbf{Preserving Guide} and \textbf{Narrative Agent}, by contrast, require carefully crafted animations to keep the narrative fluent. These two transitions are typically used in data videos to preserve data attributes or encode data-driven insights, thereby carrying the narrative information of the visualization content of data videos.
We paid special attention to \textbf{Preserving Guide} and \textbf{Narrative Agent} transitions to understand the data-driven design patterns of transitions in data videos. They make up a large proportion of the studied clips, especially among the 1644 vis-to-vis transitions, of which 477 are \textbf{Preserving Guide} (29.0\%) and 617 are \textbf{Narrative Agent} (37.5\%). Both changed and unchanged visual elements exist within these transitions. We examine the visual contents within clips in terms of Bertin's variables of the image~\cite{Bertin1983}. The changed variables constitute the animation of the transition, while the unchanged ones preserve the narrative information and maintain congruence between the previous and coming scenes. The narrative information could be data attributes and values encoded in particular visual variables. The constant variables, acting like agents of the data, help viewers follow the ongoing narrative. We list them in \autoref{fig:taxonomy}(f). Note that these agent elements are not isolated from each other in practice. In the following sections, we introduce the narrative transitions of data videos in detail. Illustrations of each type of transition are presented in \autoref{fig:taxonomy}(d) and (e). \subsubsection{Preserving Guide} \includegraphics[width=16em, trim=0 0.45em 0 0]{RSTGuide.pdf} A single visual item or a group of visual items from the previous scene is kept and reused in the coming scene. However, the layout of these visual items, for instance, their orientation, size, and position, may change. These visual elements maintain the narrative information of the previous scene by preserving the same appearance and, at the same time, guide viewers' attention to the new scene by changing the layout. This transition is particularly useful for setting up a supplement, a correlation, or a comparison. We abbreviate this transition as \textbf{RST Guide}. \includegraphics[width=12.5em, trim=0 0.45em 0 0]{ESGuide.pdf} This type covers two related situations. The first is to expand the color of a specific item in the previous scene into the background color of the coming scene. Often, the coming scene introduces details about that item. The other is to shrink the background color of the previous scene into a specific item of the coming scene. This type of transition is usually used when presenting the relation between the whole and its parts. Color is an essential visual variable in this type of transition: the same color across adjacent scenes indicates the same subject. \includegraphics[width=6.5em, trim=0 0.45em 0 0]{StayingGuide.pdf} In this type of transition, the layout of the visual items in the previous scene remains the same, and additional items enter and interact with the existing items. The stable visual items are unchanged cues in the scene, and they can be regarded as the basis for the additional items entering, growing, and leaving. \subsubsection{Narrative Agent} \includegraphics[width=8em, trim=0 0.45em 0 0]{UpdatingContent.pdf} This type of transition updates the visual content without changing the shapes, positions, or colors of the contents in the scene. The shape, position, and color maintain the narrative information, while the number of contents changes to show the differences. Such a mechanism is used in data videos for highlighting or presenting data sequentially. \includegraphics[width=3.7em, trim=0 0.45em 0 0]{Scaling.pdf} This type of transition changes sizes to show changes in value.
Although the size changes, the other visual variables remain the same, keeping the contents coherent across scenes. \includegraphics[width=4.5em, trim=0 0.45em 0 0]{Morphing.pdf} This type of transition morphs icons or shapes from old ones into new ones, conveying insights about transformations in the data. \includegraphics[width=8em, trim=0 0.45em 0 0]{MergingSplitting.pdf} Merging combines separate contents into a group, while splitting separates a group of contents. This type of transition conveys insights about gathering or scattering and clearly illustrates the whole-part relationship of the data. \section{Content Analysis} We first state the definition of a transition in data videos within our research scope and then describe the adopted methodology. \subsection{Definition} Prior work has defined transitions in narrative visualizations: Hullman et al.~\cite{Hullman2013} consider the change between two independent visual expressions to be a transition. In a previous investigation of data videos, Amini et al.~\cite{Amini2017} found that ``most transitions in these videos are a combination of destroy/create clips rather than staged animations described in~\cite{Heer2007}''. However, after collecting and analyzing 284 data videos, we found a much wider variety of transition designs than reported in previous work. For example, the transitions in \autoref{fig:intro-example} connect the contents through visual elements shared by successive scenes. Using shared visual elements is not strictly a staged animation~\cite{Heer2007}, but the two are similar in that both reuse visual elements and keep the audience oriented, syntactically and semantically. In our work, we follow the prior definition of transitions~\cite{Heer2007,Hullman2013,Amini2017} and extend it to a wider scenario. First, we identify \textit{clips} in data videos, where a clip is an elemental unit of the data video sequence~\cite{Amini2017}. Each clip is considered to contain two narrative states and one transition between these two states. A \textit{narrative state} in a clip is defined as an informationally-distinct scene for presenting data facts or other video narrative, following the definition of a \textit{narrative visualization state}~\cite{Hullman2013}. Therefore, we define the \textit{transition} (also called \textit{narrative transition}) in our paper as the change between the two narrative states in a clip, where a narrative state can be \textit{1)} animated content without visualizations or data-driven arguments; or \textit{2)} visualization content that includes standard charts, pictographs, and other data-driven arguments. \subsection{Methodology} We conducted a content analysis~\cite{Krippendorff2018}, which was also used in previous work~\cite{Segel2010,Hullman2011,Byrne2015}, to gain a comprehensive understanding of transition design in data videos. First, we collected 284 data videos from reputable sources, such as leading media outlets, design associations' portfolios, and video sites. We gathered these videos following the same criteria as Amini et al.~\cite{Amini2017}: the videos should convey data-driven arguments and contain related visualizations. The videos cover a wide range of topics, including science, finance, politics, sports, and history. The video dataset includes animated motion graphics, photographic videos, and combinations of the two.
We segmented these videos into individual clips and focused only on clips in which at least one scene contains visualizations or data-driven arguments. Finally, our dataset contains 3909 clips: 1644 vis-to-vis clips (42.1\%), 1104 others-to-vis clips (28.2\%), and 1161 vis-to-others clips (29.7\%). To examine the detailed transition designs in the clips, we first reviewed selected samples and proposed an initial taxonomy based on the visual variables (e.g., position, color, and shape)~\cite{Bertin1983} that change during the transition. We iteratively improved the taxonomy through multiple rounds of discussion among the authors and trial labeling of sampled transitions. Ultimately, we reached agreement on the final taxonomy (Sec. \ref{section:taxonomy}). We mainly considered the following questions when analyzing the transition designs: \begin{compactitem}[$\bullet$] \item What are the narrative states (e.g., visualizations, animated icons) of the transition? \item Which visual variables change, and which remain unchanged, during the transition? \item How are the transitions visually presented? \item What is the narrative relationship between the two states? \end{compactitem} Based on the taxonomy, two authors independently coded all the clips and reached an initial consensus on 87.7\% (3430) of the transitions. The remaining conflicts (12.3\%, 479) were resolved through discussion. Based on the agreed coding, we conducted a quantitative analysis of the visualization types and transition types in each clip. The complete results are provided in the supplemental material. \section{Use Case} In this section, we demonstrate a use case (\autoref{fig:teaser}) of how to leverage our taxonomy to design a data video. The video is about COVID-19; it conveys the severity of the outbreak by presenting a series of data facts connected with elaborate transitions. The data are collected from official notifications, and the whole video can be found in the supplemental material. The video begins with a series of texts indicating that it is about the COVID-19 outbreak (\autoref{fig:teaser}(a1)). With the keyword texts preserved, an \textit{RST Guide} transition connects this scene to a world map that shows the total cases in different regions of the world (\autoref{fig:teaser}(a2-a3)). The color depth on the map represents the severity of the infection in each area and deepens over time in China. A \textit{Zooming} transition brings the viewer to a detailed portrait of the total confirmed cases in China (\autoref{fig:teaser}(b1-b3)). The number increases rapidly over time. Then, the date and number texts serve as the medium of an \textit{RST Guide} transition (\autoref{fig:teaser}(c1)), through which the number seamlessly transforms into a point on a line chart (\autoref{fig:teaser}(c2-d1)). The line grows steeply, and a \textit{Zooming} transition enlarges the highest points of the line chart (\autoref{fig:teaser}(d1-e1)). Then, the timeline pauses, and an \textit{Updating Content} transition shows the details of February 12 (\autoref{fig:teaser}(e1-e2)). The point that represents the total cases on February 12 keeps its circular shape and transforms into a pie chart of the proportion of confirmed cases in Wuhan and other places in China (\autoref{fig:teaser}(e1)). Confirmed cases in Wuhan account for over half of the total cases in China.
To show further detail, an \textit{Expanding Guide} transition expands the color of the pie sector that represents Wuhan into the background of the scene, indicating that the following scene details the situation in Wuhan (\autoref{fig:teaser}(f1-f3)). A group of people icons represents the whole crowd of confirmed cases in Wuhan (\autoref{fig:teaser}(g1)). The icons form a proportional pictograph by \textit{Splitting} into three groups that show the numbers of existing confirmed, cured, and dead cases (\autoref{fig:teaser}(g1-g3)). Thereafter, the icons of each group are \textit{Merged} into a proportional area chart that represents the percentage of each group (\autoref{fig:teaser}(h1-i1)). Then, a \textit{Shrinking Guide} transition transforms the proportional area chart back into the previous pie chart (\autoref{fig:teaser}(i1-i2)), and a \textit{Zooming} transition guides the viewers' attention back to the world map, bringing up the statistical map of total cases over the world (\autoref{fig:teaser}(j1-j2)). As time passes, an \textit{Updating Content} transition shows the global situation worsening (\autoref{fig:teaser}(k1-k3)). Lastly, a \textit{Rack Focus} transition blurs the world map and presents an appeal for the whole world to unite to defeat COVID-19 (\autoref{fig:teaser}(l1-l3)).
\section{Introduction} The limiting behaviour of empirical measures is of paramount importance in percolation theory. The normalized passage time along a path, which is what we ultimately care about, is nothing but the identity function integrated against the empirical measure of that path. In First/Last Passage Percolation we study the minimizing/maximizing paths, called geodesics. A major open problem posed by Hoffman in \cite{american} is whether empirical measures along geodesics in a fixed direction converge weakly. This is partially answered in \cite{bates} for FPP. Bates proves that the sets $\mathcal{R}^q, \mathcal{R}$ of limiting distributions in direction $q$ and of limiting distributions in the direction-free case, respectively, are deterministic, and he derives an explicit variational formula for the limit shape of the first passage time as the minimum value of a linear functional over $\mathcal{R}^q$. When the set of minimizers is a singleton, which Bates argues happens for a dense family of edge-weight distributions, it follows that Hoffman's question is answered in the affirmative. The same argument applied to the LPP model yields analogous conclusions. Generally, there is no known way of computing the limiting distributions along geodesics. \cite{martinAllan} showcases some cutting-edge developments for the solvable Exponential LPP model on $ {\mathbb{Z}} ^2$, including an explicit formula for weak limits of empirical measures along geodesics. For other recent work on geodesics see \cite{ahlberg}, \cite{janjigianRassoul} and \cite{janjigianShen}. One way to extend the work in \cite{bates} is to study the more general question of \emph{how many} empirical measures, in a fixed direction or direction-free, converge weakly to a certain target measure. Grid entropy gives an exact, deterministic answer. In \cite{rassoul2014quenched}, Rassoul-Agha and Sepp{\"a}l{\"a}inen derive this entropy as a large deviation rate function of empirical measures along paths, and they provide variational formulas realizing the point-to-point/point-to-level Gibbs Free Energies as the convex conjugates of this mysterious new entropy. This rate function approach suggests that one might be able to derive formulas for this entropy coming from subadditivity, and furthermore that it is related to a critical exponent of paths. Our main aim is to prove these intuitions correct, working in an LPP setting on $ {\mathbb{Z}} ^D$ which can easily be extended to more general frameworks. Rather than starting from the definitions given in \cite{rassoul2014quenched}, we define grid entropy as a certain critical exponent of paths and show that it is equivalently described as a superadditive ergodic limit. Moreover, we arrive at variational formulas which are analogues of Rassoul-Agha and Sepp{\"a}l{\"a}inen's and which are in fact positive-temperature analogues of Bates' variational formula. Along the way, we establish various properties of this entropy, some of which are already noted in or follow from \cite{rassoul2014quenched} and \cite{bates}, but also some of which are not, such as a partial answer to Hoffman's question in a directed polymer setting and a striking equality relating grid entropy, relative entropy and Shannon entropy. We give versions of these results for both direction-$q$ grid entropy $||(q,\nu)||$ and direction-free grid entropy $||\nu||$ of target measures $\nu$.
By concavity and the lattice symmetries, the direction-free case turns out to simply be the direction-fixed case in the unit direction $(\frac1D,\ldots, \frac1D)$ which maximizes the total number of paths. \bigskip Let us now be more precise. We consider north-east nearest-neighbor paths on $ {\mathbb{Z}} ^D$, and work on $ {\mathbb{R}} ^D$ by taking coordinate-wise floors. We follow a similar approach to \cite{bates}, in that we couple our i.i.d. edge weights $\tau_e$ to i.i.d. Unif[0,1] edge labels $U_e$ via a measurable function $\tau: [0,1] \rightarrow {\mathbb{R}} $ satisfying $\tau_e = \tau(U_e)$. This lets us work on [0,1] at no additional cost, as we can lift everything back to $ {\mathbb{R}} $ via the pushforward of $\tau$. We then ask \emph{how many} of the empirical measures $\frac1n \mu_{\pi} = \frac1n \sum \limits_{e\in \pi} \delta_{U_e}$ for paths $\vec{0} \rightarrow \lfloor nq \rfloor$ converge to some given target measure $\nu$ in $\mathcal{M}_+$, the set of finite non-negative Borel measures on [0,1]. We may keep the direction $q$ fixed, or we may vary $q$ over all points in $ {\mathbb{R}} ^D$ with the same 1-norm $||q||_1$. To perform the counting, we consider the order statistics of the distances of the paths' empirical measures $\frac1n \mu_{\pi}$ to the target $\nu$, where distance is measured via the Levy-Prokhorov metric $\rho$, which metrizes weak convergence of measures. That is, given a direction $q \in {\mathbb{R}} ^D$ and a target measure $\nu$, for every $n \in {\mathbb{N} } $ we let \[ \min_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^1 \rho\bigg(\frac1n \mu_{\pi}, \nu \bigg) \leq \min_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^2 \rho\bigg(\frac1n \mu_{\pi}, \nu \bigg) \leq \ldots \leq \min_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{\# \pi: \vec{0} \rightarrow \lfloor nq \rfloor} \rho\bigg(\frac1n \mu_{\pi}, \nu \bigg) \] denote the order statistics of $ \rho(\frac1n \mu_{\pi}, \nu)$. It is convenient to define \[\min \limits_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^j \rho \bigg(\frac1n \mu_{\pi}, \nu \bigg) := +\infty \ \mbox{for} \ j > \# \pi: \vec{0} \rightarrow \lfloor nq \rfloor\] In the direction-free case, where we count all paths of a certain scaled length from $\vec{0}$, we let \\ $t := ||\nu||_{TV}$ and similarly define $\min \limits_{\pi \ \mbox{s.t.} \ |\pi| = \lfloor nt \rfloor }^j \rho(\frac1n \mu_{\pi}, \nu)$ over paths of length $\lfloor nt \rfloor$ anchored at $\vec{0}$. Of course, these order statistics and the paths corresponding to them are event-dependent. We then define the grid entropy with respect to the target $\nu$ and the direction $q$, denoted $||(q, \nu)||$, and the direction-free grid entropy, denoted $||\nu||$, to be the critical exponent at which the corresponding order statistics change from converging to 0 a.s. to diverging a.s.: \begin{equation} \label{firstDef} \begin{split} ||(q, \nu)|| &:= \sup \bigg\{\alpha \geq 0 \ : \lim_{n \rightarrow \infty} \min_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{\lfloor e^{\alpha n} \rfloor} \rho\bigg(\frac1{n} \mu_{\pi}, \nu \bigg) = 0 \ \mbox{a.s.} \bigg\} \\ ||\nu|| &:= \sup \bigg\{\alpha \geq 0 \ : \lim_{n \rightarrow \infty} \min_{\pi \ \mbox{s.t.} \ |\pi| = \lfloor nt \rfloor}^{\lfloor e^{\alpha n} \rfloor} \rho\bigg(\frac1{n} \mu_{\pi}, \nu \bigg) = 0 \ \mbox{a.s.} \bigg\} \end{split} \end{equation} where these are defined to be $-\infty$ if the set of $\alpha$'s is empty.
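\begin{remark} For orientation, note that $\min\limits_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{j} \rho(\frac1n \mu_{\pi}, \nu)$ is nondecreasing in $j$, so the set of admissible $\alpha$ in \eqref{firstDef} is an interval containing $0$ whenever it is nonempty. In particular, $||(q,\nu)|| \geq 0$ exactly when $\min\limits_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{1} \rho(\frac1n \mu_{\pi}, \nu) \rightarrow 0$ a.s., that is, when almost surely some sequence of paths in direction $q$ has empirical measures converging weakly to $\nu$; positive values of $||(q,\nu)||$ then quantify, on an exponential scale in $n$, how many such paths there are. \end{remark}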
It will turn out that we can replace the limits in this definition with $\liminf$'s and get the same quantity. Observe that these grid entropies lie in $\{-\infty\} \cup [0, H(q)]$, $\{-\infty\} \cup [0, \log D]$ respectively where \[ H(q): =\sum_{i =1}^D -q_i \log \frac{q_i}{||q||_1} \] is the (Shannon) entropy of the total number of paths in direction $q$, in the sense that \[(\# \mbox{paths} \ \vec{0} \rightarrow nq) = e^{H(q)n+o(n)}\] We note that the description \eqref{firstDef} of grid entropy as the critical exponent of these order statistics has been previously shown to hold only for Bernoulli edge labels (see \cite{carmona2010directed}). Over the course of this paper we establish two other equivalent definitions of grid entropy which avoid the annoyance of dealing with these event-dependent orderings of the paths. One of these alternate descriptions is remarkably simple: grid entropy is the negative convex conjugate of Gibbs Free Energy; as a direct consequence, our notion of grid entropy is equivalent to that appearing in \cite{rassoul2014quenched} modulo some normalizations. The other, as the entropy of paths with empirical measure Levy-Prokhorov-close to the target, is completely new though not unexpected. We summarize these characterizations in the following theorem and link them to current literature in subsequent remarks. \begin{manualtheorem}{A} \begin{enumerate}[label=(\roman*)] \item[] \item Let $q \in {\mathbb{R}} ^D_{\geq 0}$ and let $\nu$ be a finite non-negative Borel measure on [0,1]. Then the direction$-q$ grid entropy $||(q,\nu)||$ as defined in \eqref{firstDef} is also given by \begin{equation} \label{newLabel} ||(q, \nu)|| = \inf_{\epsilon > 0} \lim_{n \rightarrow \infty} \frac1n \log \sum_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor} e^{-\frac{n}{\epsilon} \rho(\frac1n \mu_{\pi}, \nu)} \ \mbox{a.s.} \end{equation} The expressions we take an infimum of are each directed metrics with negative sign on $ {\mathbb{R}} ^D \times \mathcal{M}_+$. That is, they take value in $[-\infty, \infty)$, evaluate to 0 when $(q,\nu) = (\vec{0},0)$, and satisfy the triangle inequality with the sign reversed. Moreover, direction-fixed grid entropy is the negative convex conjugate of the point-to-point $\beta$-Gibbs Free Energy in direction $q$ (as a function of the environment-coupling function $\tau$): For $\beta > 0$, \[ ||(q, \nu)|| = -(G_q^{\beta})^{*}(\nu) = -\sup_{\tau} [ \beta \langle \tau, \nu \rangle - G_q^{\beta}(\tau)] \] where the supremum is over bounded measurable $\tau: [0,1] \rightarrow {\mathbb{R}} $, where $\langle \cdot, \cdot \rangle$ is the integration linear functional $\langle \tau, \nu \rangle = \int_0^1 \tau(u) d\nu$ and where the point-to-point $\beta$-Gibbs Free Energy is given by \[ G_{q}^{\beta}(\tau) = \lim_{n \rightarrow \infty} \frac1n \log \sum_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor} e^{\beta T(\pi)} \] \item Analogous results hold in the direction-free case. Let $\nu$ be a finite non-negative Borel measure on [0,1] and let $t := ||\nu||_{TV}$. 
Then the direction-free grid entropy $||\nu||$ as defined in \eqref{firstDef} is also given by \begin{align*} ||\nu|| &= \inf_{\epsilon > 0} \lim_{n \rightarrow \infty} \frac1n \log \sum_{\pi \in \mathcal{P}_{\lfloor nt \rfloor}( \vec{0})} e^{-\frac{n}{\epsilon} \rho(\frac1n \mu_{\pi}, \nu)} \ \mbox{a.s.} \\ &= \sup_{q \in {\mathbb{R}} ^D_{\geq 0}, ||q||_1 = t} ||(q,\nu)|| \\ &= ||(t\ell, \nu)|| \ \mbox{where} \ \ell = \bigg(\frac1D, \ldots, \frac1D \bigg) \end{align*} The expressions we take an infimum of are each directed metrics with negative sign on $\mathcal{M}_t$, the set of finite non-negative Borel measures with total mass $t$. Moreover, direction-free grid entropy is the negative convex conjugate of the point-to-level $\beta$-Gibbs Free Energy: \[ || \nu|| = -(G^{\beta})^{*}(\nu) = -\sup_{\tau} [\beta \langle \tau, \nu \rangle - G^{\beta}(\tau)] \ \mbox{a.s.}\] where the supremum is over bounded measurable $\tau: [0,1] \rightarrow {\mathbb{R}} $ and where the point-to-level $\beta$-Gibbs Free Energy is given by \[ G^{\beta}(\tau) = \lim_{n \rightarrow \infty} \frac1n \log \sum_{\pi \ \mbox{s.t.} \ |\pi| = n} e^{\beta T(\pi)} \] \end{enumerate} \end{manualtheorem} \begin{remark} We can extend these variational formulas for point-to-point/point-to-level Gibbs Free Energy to general, possibly unbounded, measurable $\tau$ by truncating $\tau$ in $\beta \langle \tau, \nu \rangle$ at some constant $C > 0$ we take to $\infty$. For example, \[ ||(q, \nu)|| = - \sup_{C > 0} \sup_{\tau} [ \beta \langle \tau \land C, \nu \rangle - G_q^{\beta}(\tau)] \] \end{remark} \begin{remark} This theorem is partially proved in \cite{carmona2010directed} [Corollary 2] for the case when the edge labels follow a Bernoulli($p$) distribution. In this setting, Carmona shows that the negative convex conjugate of Gibbs Free Energy of measures of the form $\nu_s := s \delta_1 + (1-s) \delta_0$ is given by \begin{equation*} -(G^{1})^{\ast} (\nu_s) = \begin{cases} \lim\limits_{n \rightarrow \infty} \frac1n \log \#(\mbox{length $n$ paths from $\vec{0}$ with $\geq ns$ 1-labels}), & s \geq p \\ \lim \limits_{n \rightarrow \infty} \frac1n \log \#(\mbox{length $n$ paths from $\vec{0}$ with $\leq ns$ 1-labels}), & s < p \\ \end{cases} \end{equation*} The inequalities $\geq, \leq$ can be replaced with equality since the number of paths with $ns$ 1-labels is exponentially decaying in $s$. Using the definition of the Levy-Prokhorov metric we can conclude that this formulation is equivalent to \eqref{firstDef}. \end{remark} \begin{remark} As mentioned, grid entropy first appears in literature as the rate function for the Large Deviation Principle of the empirical measures in \cite{rassoul2014quenched}. The fact that we derive formula \eqref{newLabel} using the Subadditive Ergodic Theorem should therefore not be surprising. Our definitions \eqref{firstDef} for direction-fixed/direction-free grid entropy align with those of Rassoul-Agha and Sepp{\"a}l{\"a}inen as follows. Consider the product space $\Omega \times \mathcal{G}$ of the environment \\ $\Omega := [0,1]^{ {\mathbb{Z}} ^D \times \mathcal{G}}$ consisting of the i.i.d. edge labels and of the $D$ unit NE steps \\ $\mathcal{G} = \{e_1, \ldots, e_D\} $, and let $\phi: \Omega \times \mathcal{G} \rightarrow [0,1]$ map an (environment, unit direction) pair to the edge label of the corresponding unit direction anchored at the origin. 
Then \begin{equation} \label{translationEqn} ||(q,\nu)|| = \log D - \inf_{\mu: \phi_{\ast}(\mu) = \nu, E^{\mu}[Z_1] = q} H_1(\mu), \ ||\nu|| = \log D - \inf_{\mu:\phi_{\ast}(\mu) = \nu} H_1(\mu) \end{equation} where $H_1$ is the relative entropy defined in (5.2) of \cite{rassoul2014quenched}, which can be traced back to Varadhan's paper \cite{varadhan2003large}, and where the infimums are over the measures $\mu$ on $\Omega \times \mathcal{G}$ whose $\phi$-pushforward is $\nu$ and, in the direction-fixed case, for which in addition the $\mu$-mean of the step coordinate $Z_1$ is $q$. See Section \ref{translationSection} for a more detailed derivation and discussion of \eqref{translationEqn}. \end{remark} \bigskip Now grid entropy satisfies some rather interesting properties. The following theorem captures the highlights for direction-$q$ grid entropy; the direction-free analogues hold as well. All but properties (iii) and (v) follow easily from the framework presented previously in \cite{rassoul2014quenched}, yet we will showcase the power of our new approach to grid entropy by proving \emph{all} of these properties directly. \begin{manualtheorem}{B}\label{part1_B} Let $q \in {\mathbb{R}} ^D_{\geq 0}$. Then: \begin{enumerate}[label=(\roman*)] \item Grid entropy $||(q,\nu)||$ is a directed norm with negative sign; it scales with positive factors and it satisfies a reverse triangle inequality: \[ ||(p, \xi)|| + ||(q, \nu)|| \leq ||(p+q, \xi+\nu)|| \] \item Grid entropy $||(q,\nu)||$ is upper semicontinuous. Let \[\mathcal{R}^q := \{\mbox{accumulation points of empirical measures along paths in direction $q$}\}\] \item $\mathcal{R}^q$ is weakly closed, convex and deterministic and coincides almost surely with \[\{\nu \in \mathcal{M}_+: ||(q, \nu)|| > -\infty\}\] \item $\mathcal{R}^q$ consists only of measures $\nu$ with total variation $||\nu||_{TV} = ||q||_1$ that are absolutely continuous with respect to the Lebesgue measure $\Lambda$ on [0,1]. \item Any $\nu \in \mathcal{R}^q$ satisfies the following upper bound on the sum of the grid entropy and the relative entropy with respect to Lebesgue measure on [0,1]: $$D_{KL} (\nu||\Lambda) + ||(q, \nu)|| \leq \sum_{i =1}^D -q_i \log \frac{q_i}{||q||_1} := H(q)$$ where $D_{KL}$ denotes relative entropy (or Kullback-Leibler divergence), and where, \\ again, this upper bound is simply the (Shannon) entropy of the total number of paths in direction $q$. \end{enumerate} \end{manualtheorem} Why do we care? The deterministic set $\mathcal{R}^q$ is nothing more than the LPP analogue of the set Bates takes an infimum over in \cite{bates} in his variational formula for the FPP time constant. In our LPP setting, his formula becomes \begin{align*} \mbox{LPP time constant} &:= \lim_{n \rightarrow \infty} \frac{\mbox{last passage time for paths $\vec{0} \rightarrow \lfloor nq \rfloor$}}n \\ & = \sup_{\nu \in \mathcal{R}^q} \langle \tau, \nu \rangle \ \mbox{a.s.} \end{align*} We thus link \cite{bates} and \cite{rassoul2014quenched} by providing a new, more enlightening description of these sets $\mathcal{R}^q$ in terms of these grid entropies. Furthermore, the bound in (v) improves a version without the grid entropy term, which was proved by Bates and also follows from Rassoul-Agha and Sepp{\"a}l{\"a}inen's work.
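\begin{remark} As a heuristic illustration of the bound in (v), and of when one expects it to be sharp, take $||q||_1 = 1$ and $\nu = \Lambda$, the Unif[0,1] law of the edge labels themselves. Then $D_{KL}(\Lambda||\Lambda) = 0$, and a Sanov-type count suggests that all but an exponentially small fraction of the $e^{H(q)n + o(n)}$ paths $\vec{0} \rightarrow \lfloor nq \rfloor$ have empirical measure within any fixed Levy-Prokhorov distance of $\Lambda$, so that one expects $||(q, \Lambda)|| = H(q)$ and hence equality in (v) for this choice of $\nu$. \end{remark}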
As in \cite{rassoul2014quenched}, the main application of grid entropy we present is a variational formula for the point-to-point/point-to-level Gibbs Free Energies in direction $q$/to the ``level'' $x_1 +\ldots +x_D = nt$ in the directed polymer model. Our variational formula is simply the positive-temperature LPP analogue of Bates' variational formula for the FPP limit shape, and furthermore it can be molded into Rassoul-Agha and Sepp{\"a}l{\"a}inen's variational formula via some normalizations, thus proving that what we call grid entropy really is the same object first developed in \cite{rassoul2014quenched}. \begin{manualtheorem}{C}\label{part1_C} Fix a direction $q \in {\mathbb{R}} ^D_{\geq 0}$ and an inverse temperature $\beta > 0$. For bounded measurable $\tau: [0,1] \rightarrow {\mathbb{R}} $, the point-to-point/point-to-level Gibbs Free Energies are given by \[ G_{q}^{\beta}(\tau) = \sup_{\nu \in \mathcal{R}^q} [ \beta \langle \tau, \nu \rangle + ||(q, \nu)||],\] \[G^{\beta}(\tau) = \sup_{\nu \in \mathcal{R}^1} [ \beta \langle \tau, \nu \rangle + ||\nu||] = G^{\beta}_{\ell} = \sup_{q \in {\mathbb{R}} ^D_{\geq 0}, ||q||_1=1} G^{\beta}_q \] where $\ell = (\frac1D, \ldots, \frac1D)$ is the maximizing direction, and \[\mathcal{R}^1 = \bigcup \limits_{q \in {\mathbb{R}} ^D_{\geq 0}, ||q||_1=1} \mathcal{R}^q\] is the set of Borel probability measures $\nu$ that have finite direction-free grid entropy. Moreover, these supremums are achieved by some $\nu$ in $\mathcal{R}^q, \mathcal{R}^{\ell}$ respectively. \end{manualtheorem} \begin{remark} We may extend these formulas to general, possibly unbounded, measurable $\tau$ by truncating $\tau$ at some $C>0$ and taking a supremum over $C$. \end{remark} As in \cite{bates}, it follows that the directed polymer analogue of Hoffman's question is answered in the affirmative when our variational formula has a unique maximizer, which happens for a dense family of measurable functions $\tau$. \begin{manualtheorem}{D}\label{part1_D} Fix an inverse temperature $\beta > 0$ and a bounded measurable $\tau: [0,1] \rightarrow {\mathbb{R}} $. \begin{enumerate}[label=(\roman*)] \item Fix $q \in {\mathbb{R}} ^D_{\geq 0}$ and suppose $\beta \langle \tau, \nu \rangle + ||(q,\nu)||$ has a unique maximizer $\nu \in \mathcal{R}^q$. For every $n$ pick a path $\pi_n: \vec{0} \rightarrow \lfloor nq \rfloor$ independently and at random according to the probabilities prescribed by the corresponding point-to-point $\beta$-polymer measure \[ \rho_{n,q}^{\beta}(d\pi) = \frac{e^{\beta T(\pi)}}{ \sum \limits_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor} e^{\beta T(\pi)} } \ \mbox{for paths} \ \pi:\vec{0} \rightarrow \lfloor nq \rfloor \] Then the empirical measures $\frac1n \mu_{\pi_n}$ converge weakly to $\nu$ a.s. \item Suppose $\beta \langle \tau, \nu \rangle + ||\nu||$ has a unique maximizer $\nu \in \mathcal{M}_1$. For every $n$ pick a length $n$ path $\pi_n$ from $\vec{0}$ independently and at random according to the probabilities prescribed by the corresponding point-to-level $\beta$-polymer measure \[ \rho_{n}^{\beta}(d\pi) = \frac{e^{\beta T(\pi)}}{ \sum \limits_{\pi \ \mbox{s.t.} \ |\pi| = n} e^{\beta T(\pi)} } \ \mbox{for length $n$ paths $\pi$ from} \ \vec{0} \] Then the empirical measures $\frac1n \mu_{\pi_n}$ converge weakly to $\nu$ a.s.
\end{enumerate} \end{manualtheorem} \begin{remark} In fact, even if there is no unique maximizer, we can show that all accumulation points of the empirical measures along random paths chosen according to the direction-fixed/direction-free $\beta$-polymer measure are among the maximizers of the corresponding variational formula. This partial answer to Hoffman's question in the positive temperature case is immediate from the development of grid entropy by Rassoul-Agha and Sepp{\"a}l{\"a}inen \cite{rassoul2014quenched} as the rate function of the Large Deviation Principle of empirical measures. However, to the best of our knowledge this is the first time it has been formulated explicitly. \end{remark} The plan is as follows. In Section \ref{section2}, we describe the model and setup, and outline various facts and notions we will need over the course of this paper. Section \ref{section3} focuses on developing the second definition of grid entropy \eqref{newLabel} as a directed norm with negative sign and showing that it is equivalent to the original definition with the $\min \limits_{\pi}^j$. Then, in Section \ref{section4}, we investigate what information we can extract from grid entropy and what properties it satisfies (Theorem \ref{part1_B}). We devote Section \ref{section5} to applying our results to establish our variational formula for point-to-point/point-to-level Gibbs Free Energies and to study the consequences of this (namely Theorems \ref{part1_C},\ref{part1_D}) as well as the correspondence to the work in \cite{rassoul2014quenched}. Last but not least, we make some closing remarks about adapting our results to other models. \newpage \section{Preliminaries}\label{section2} \subsection{Empirical Measures on the Lattice} We begin by briefly describing the setup we use in this paper. We restrict ourselves to a directed Last Passage Percolation model. Consider north-east nearest-neighbour paths on the lattice $ {\mathbb{Z}} ^D, D \geq 1$ with i.i.d. edge weights $\tau_e \sim \theta$ for some probability distribution $\theta$ on $ {\mathbb{R}} $. By north-east, we of course mean that the coordinates of points on the path are nondecreasing. For $p, q \in {\mathbb{Z}} ^D$ we denote by $\mathcal{P}(p,q)$ the set of all NE paths $\pi: p \rightarrow q$. Similarly, for $q \in {\mathbb{Z}} ^D$ and $t \in {\mathbb{Z}} _{\geq 0}$ we denote by $\mathcal{P}_t(q)$ the set of all NE paths from $q$ of length $t$ (no restriction on the endpoint). Observe that either $q-p \notin {\mathbb{Z}} _{\geq 0}^D$ and $\mathcal{P}(p,q) = \emptyset$, or $q-p \in {\mathbb{Z}} _{\geq 0}^D$ and \[|\mathcal{P}(p,q) | = \binom{||q-p||_1}{q_1-p_1,q_2-p_2,\cdots, q_D-p_D}\] Here $||\cdot||_1$ is the 1-norm on $ {\mathbb{R}} ^D$ defined by $||p||_1 = \sum \limits_{i=1}^D |p_i|$. On the other hand, $|\mathcal{P}_t(q)| = D^t$ trivially for any $q \in {\mathbb{Z}} ^D$. Note that unlike other recent work (such as \cite{martinAllan}) we do not restrict ourselves to a known solvable model. We do not impose any restrictions on the edge weight distribution $\theta$, so that our results hold with the greatest generality possible. We scale the grid by $n$ and look at the behaviour in the limit. As is standard, we extend our initial inputs $p, q$ to lie in $ {\mathbb{R}} ^D$ by taking coordinatewise floors of the scaled coordinates.
That is, we consider paths $\pi \in \mathcal{P}(\lfloor np \rfloor, \lfloor nq \rfloor)$ where \[\lfloor (x_1,\ldots, x_D) \rfloor := (\lfloor x_1 \rfloor, \ldots, \lfloor x_D \rfloor)\] Our normalized directed metrics will converge almost surely to a translation-invariant limit so it will suffice to consider the case when $p = \vec{0}$. But for now we let $p$ be arbitrary. Various inequalities we derive involve the asymptotics of the number of length $n$ NE paths from the origin in a fixed or unfixed direction. The following lemma, which is easily proved using Stirling's approximation, gives us what we want. \begin{lemma}\label{lemma1} Let $k, n \in {\mathbb{N} } , a_i \in {\mathbb{R}} _{\geq 0}, \sum a_i = a$. Then \[\binom{\lfloor na \rfloor}{\lfloor na_1 \rfloor, \lfloor na_2 \rfloor, \ldots, \lfloor na_k \rfloor} = \bigg(\frac{a^{a}}{ \prod \limits_{1 \leq i \leq k} a_i^{a_i}} + o(1) \bigg)^n \] where we use the convention $0^0 = 1$. \end{lemma} \begin{remark} Recall that for $q \in {\mathbb{R}} ^D_{\geq 0}$ we denote by $\mathcal{P}(\vec{0}, \lfloor nq \rfloor)$ the set of paths $\pi: \vec{0} \rightarrow \lfloor nq \rfloor$. Thus \[ \lim_{n \rightarrow \infty} \frac1n \log |\mathcal{P}(\vec{0}, \lfloor nq \rfloor)| = \sum_{i=1}^D -q_i \log\frac{q_i}{||q||_1} := H(q) \] $H(q)$ is the (Shannon) entropy of the number of paths in direction $q$. Note that $H(q)$ scales with positive scalars and $H(q)$ is maximized among $q \in {\mathbb{R}} ^D_{\geq 0}$ with the same 1-norm by \[q = ||q||_1 \bigg(\frac1D, \ldots, \frac1D \bigg) := ||q||_1 \ell\] in which case $H(q) = ||q||_1 \log D$. \end{remark} On the other hand, for $t \geq 0$ recall that we denote by $\mathcal{P}_{\lfloor nt \rfloor}(\vec{0})$ the set of paths from $\vec{0}$ of length $\lfloor nt \rfloor$. Thus \[ \lim_{n \rightarrow \infty} \frac1n \log |\mathcal{P}_{\lfloor nt \rfloor}(\vec{0})| = t \log D = H(t\ell) \] We study the distribution of weights that we observe along paths $\pi: \lfloor np \rfloor \rightarrow \lfloor nq \rfloor$. For a path $\pi$, let the unnormalized empirical measure along $\pi$ be \[\sigma_{\pi} = \sum_{e\in \pi} \delta_{\tau_e}\] Note that we normalize by $\frac1n$ rather than $\frac1{|\pi|}$. This is simply for convenience in our proofs, as it gives us a certain superadditivity we do not get when we normalize by $\frac1{|\pi|}$. The Glivenko-Cantelli Theorem \cite[Thm.~2.4.7]{durrett} tells us that for any fixed infinite NE path in the grid, empirical measures along the path converge weakly to $\theta$. \begin{theorem}[Glivenko-Cantelli Theorem]\label{thm1} Let $F_{\theta}$ be the cumulative distribution function of $\theta$, let $X_i \sim \theta$ be i.i.d. random variables and let \[F_{n}(x) = \frac1{n} \sum_{i=1}^n \mathbf{1}_{\{X_i \leq x\}} \] be the cumulative distribution functions of the empirical measures. Then \[\sup_x |F_n(x) - F_{\theta}(x)| \rightarrow 0 \ \mbox{a.s. as} \ n \rightarrow \infty \] \end{theorem} However, we are interested in the limiting behavior of the empirical measure of not one path, but of all paths from $\vec{0}$ or all paths with a given direction, as we scale the length of the paths. This allows us to observe more than just the original measure $\theta$. \subsection{Metrics on Measures} To gauge the distance between two measures we use the Levy-Prokhorov metric. We briefly introduce this metric as well as the total variation metric and we outline the relevant properties. Consider a metric space $(X,d)$ with Borel $\sigma$-algebra $\mathcal{B}$.
We denote by $\mathcal{M}$ the set of finite Borel measures on $(X, \mathcal{B})$, by $\mathcal{M}_+$ the set of non-negative finite Borel measures, and by $\mathcal{M}_t$ the set of Borel non-negative finite measures with total mass $t$ for any $t \geq 0$. In this notation, $\mathcal{M}_1$ is the set of Borel probability measures. \begin{definition} The total variation norm on $\mathcal{M}$ is defined by \[||\mu||_{TV} = \sup_{A \in \mathcal{B}} |\mu(A)|\] This of course gives rise to a total variation metric, given by \[d_{TV}(\mu, \nu) = ||\mu-\nu||_{TV}\] \end{definition} For example, the total variation of any measure $\mu \in \mathcal{M}_+$ is its total mass $\mu(X)$. \begin{definition} For any $A \in \mathcal{B}$ and $\epsilon > 0$, the $\epsilon$-neighborhood of $A$ is defined to be $$A^{\epsilon} := \{x \in X: d(x, a) < \epsilon \ \mbox{for some} \ a \in A \}$$ \end{definition} \begin{definition}The Levy-Prokhorov metric on $\mathcal{M}_+$ is defined by \[\rho(\mu, \nu) = \inf \{\epsilon > 0: \mu(A) \leq \nu(A^{\epsilon}) + \epsilon \ \mbox{and} \ \nu(A) \leq \mu(A^{\epsilon}) + \epsilon \ \forall A \in \mathcal{B}\}\] \end{definition} It is a standard result that $\rho$ metrizes the weak convergence of measures and total variation metrizes the strong convergence of measures in $\mathcal{M}_+$. For details, see \cite[Sect.~2.3]{huber}. We now derive two useful inequalities involving the Levy-Prokhorov metric. \begin{lemma}\label{lemma3} For $\mu, \nu \in \mathcal{M}_+$, \[\rho(\mu, \nu) \leq ||\mu - \nu||_{TV}\] \end{lemma} \begin{remark} This lemma establishes that the Levy-Prokhorov metric is weaker than the total variation metric. \end{remark} \begin{remark} It is trivial to see that $\rho(\mu, 0) = ||\mu||_{TV}$. \end{remark} \begin{proof} Let $\epsilon := ||\mu-\nu||_{TV}$. If $\epsilon = 0$ then $\mu = \nu$ so $\rho(\mu,\nu) = 0$. If $\epsilon > 0$, for any $A \in \mathcal{B}$, $$A \subseteq A^{\epsilon} \Rightarrow \mu(A) - \nu(A^{\epsilon}) \leq \mu(A) - \nu(A) \leq \sup_{A' \in \mathcal{B}} |\mu(A') - \nu(A')| = \epsilon$$ and similarly $\nu(A) - \mu(A^{\epsilon}) \leq \epsilon$ so $\rho(\mu, \nu) \leq \epsilon$. \end{proof} We next show that $\rho$ satisfies a kind of subadditivity. \begin{lemma}\label{lemma4} Let $\mu_1, \mu_2, \nu_1, \nu_2 \in \mathcal{M}_+$. Then \[\rho(\mu_1 + \mu_2, \nu_1 + \nu_2) \leq \rho(\mu_1, \nu_1) + \rho(\mu_2, \nu_2)\] \end{lemma} \begin{remark} Note that in the case $\mu_2 = \nu_2$ the inequality becomes \[\rho(\mu_1 + \mu_2, \nu_1 + \mu_2) \leq \rho(\mu_1, \nu_1) \] \end{remark} \begin{proof} For any $\epsilon_1 > \rho(\mu_1, \nu_1), \epsilon_2 > \rho(\mu_2, \nu_2)$ we have for any $A \in \mathcal{B}$, \[\mu_1(A) \leq \nu_1(A^{\epsilon_1}) + \epsilon_1 \leq \nu_1(A^{\epsilon_1 +\epsilon_2}) + \epsilon_1 \ \mbox{and} \ \mu_2(A) \leq \nu_2(A^{\epsilon_2}) + \epsilon_2 \leq \nu_2(A^{\epsilon_1 +\epsilon_2}) + \epsilon_2\] \[\Rightarrow (\mu_1 + \mu_2)(A) \leq (\nu_1 + \nu_2) (A^{\epsilon_1 +\epsilon_2}) + \epsilon_1 + \epsilon_2\] By symmetry, the same inequality holds with $\mu_i, \nu_i$ swapped. Thus \[\rho(\mu_1 + \mu_2, \nu_1 + \nu_2) \leq \rho(\mu_1, \nu_1) + \rho(\mu_2, \nu_2)\] \end{proof} \subsection{A Convenient Coupling of the Edge Weights} We follow \cite[Sect.~2.1]{bates} in coupling the environment to uniform random variables in order to work in a compact space of measures and to connect our results with his. The idea is to write our i.i.d. 
edge weights $\tau_e \sim \theta$ as \[ \tau_e = \tau(U_e)\] for some measurable function $\tau: [0,1] \rightarrow {\mathbb{R}} $ and i.i.d. Unif[0,1]-valued random variables $(U_e)_{e \in E( {\mathbb{Z}} ^D)}$ on the same probability space as $(\tau_e)_{e \in E( {\mathbb{Z}} ^D)}$. For instance, we could take the quantile function \[ \tau(x) = F_{\theta}^-(x) := \inf \{t \in {\mathbb{R}} : F_{\theta}(t) \geq x\}\] But our results (in particular, our definitions of grid entropy) are independent of the $\tau$ chosen so we allow $\tau$ to be arbitrary (with the conditions stated above). This comes into play later in Section \ref{section5}, when we study the Gibbs Free Energy as a function of $\tau$. To distinguish between the $\tau_e$ and the $U_e$, we call the former edge \emph{weights} and the latter edge \emph{labels}. Let $\Lambda$ denote Lebesgue measure on $[0,1]$. We tweak the definition of empirical measures in this new setup: for any NE path $\pi: \lfloor np \rfloor \rightarrow \lfloor nq \rfloor$ in $ {\mathbb{Z}} ^D$, define \[ \mu_{\pi} := \sum_{e \in \pi} \delta_{U_e} \] Then we can relate $\Lambda$ and the $\mu_{\pi}$ to $\theta$ and the $\sigma_{\pi}$ respectively via the pushforward: \[ \theta = \tau_{*}(\Lambda), \sigma_{\pi} = \tau_{*}(\mu_{\pi}) \ \mbox{where} \ \tau_{*}(\xi)(B) = \xi(\tau^{-1} (B)) \ \forall B \in \mathcal{B}( {\mathbb{R}} ) \ \mbox{and $\forall$ measures $\xi$ on $[0,1]$}\] One advantage is of course that the set of probability measures on $[0,1]$ is weakly compact, so for any sequence of paths $\pi_n: \lfloor np \rfloor \rightarrow \lfloor nq \rfloor$ in the grid we get a subsequence for which $\frac1{n_k} \mu_{\pi_{n_k}}$ converges weakly to some measure. In the case of a continuous cumulative distribution function $F_{\theta}$, we can use a lemma proved in \cite{bates} to get a nice duality. \begin{lemma}\label{lemma5Bates} Given a measure $\theta$ on $ {\mathbb{R}} $ with continuous cdf $F_{\theta}$, if we let \\ $\tau = F_{\theta}^{-}: [0,1] \rightarrow {\mathbb{R}} $ be its quantile function then there is a probability 1 event on which $\frac1{n_k} \mu_{\pi_{n_k}} \Rightarrow \nu$ for some subsequence $n_k$ and paths $\pi_{n_k}: \vec{0} \rightarrow \lfloor n_k q \rfloor$ if and only if $ \tau_{*}(\frac1{n_k}\mu_{\pi_{n_k}}) \Rightarrow \tau_{*}(\nu)$. \end{lemma} \begin{proof} \cite[Lemma 6.15]{bates} establishes that there is a probability 1 event on which \\ $\frac1{n_k}\mu_{\pi_{n_k}} \Rightarrow \nu$ implies $ \tau_{*}(\frac1{n_k}\mu_{\pi_{n_k}}) \Rightarrow \tau_{*}(\nu)$. But then on the same event, given a subsequence for which $ \tau_{*}(\frac1{n_k}\mu_{\pi_{n_k}}) \Rightarrow \tau_{*}(\nu)$, compactness gives us a convergent subsubsequence $\frac1{n_{k_j}}\mu_{\pi_{n_{k_j}}} \Rightarrow \xi$ hence \[ \tau_{*}\bigg(\frac1{n_{k_j}}\mu_{\pi_{n_{k_j}}} \bigg) \Rightarrow \tau_{*}(\xi)\] so $\tau_{*}(\xi) = \tau_{*}(\nu)$. The fact that $F_{\theta}$ is continuous and the quantile function $\tau$ satisfies \[ \tau^{-1}((-\infty, x]) = [0,F_{\theta}(x)] \ \forall x \in {\mathbb{R}} \] implies $\xi$ and $\nu$ agree on all sets $[0,F_{\theta}(x)]$ hence $\xi = \nu$. \end{proof} Thus, in the case when $\theta$ has continuous cdf, we lose no generality by doing this coupling and working with measures on [0,1].
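\begin{remark} As a concrete, purely illustrative instance of this coupling: if $\theta$ is the Exponential(1) distribution, then $F_{\theta}(t) = 1 - e^{-t}$ for $t \geq 0$ is continuous, and the quantile function is $\tau(x) = F_{\theta}^-(x) = -\log(1-x)$ for $x \in (0,1)$, so the coupled weights $\tau_e = -\log(1-U_e)$ are i.i.d. Exponential(1) and $\theta = \tau_{*}(\Lambda)$, $\sigma_{\pi} = \tau_{*}(\mu_{\pi})$ as above. As noted earlier, any other measurable $\tau$ with $\tau_{*}(\Lambda) = \theta$ would serve equally well, since our definitions of grid entropy do not depend on the choice of $\tau$. \end{remark}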
However, even in the most general case where $\theta$ may not have continuous cdf or bounded support, our work in developing grid entropy still holds because we only use the compactness of the space of measures in later sections devoted to our variational formula for the Gibbs Free Energy. In short, we lose nothing by restricting ourselves to the compact space of measures on $[0,1]$. A benefit of this coupling is the following amazing result of Bates \cite[Lemma 6.3 and Thm 6.4]{bates}. Here we denote by $\mathcal{M}_+, \mathcal{M}_t$ the sets of finite non-negative Borel measures on [0,1] and finite non-negative Borel measures on [0,1] with total mass $t\geq 0$. \begin{theorem}\label{thm5Bates} \begin{enumerate}[label=(\roman*)] \item[] \item Fix $q \in {\mathbb{R}} ^D_{\geq 0}$. Define $\mathcal{R}_{\infty}^{q}$ to be the (event-dependent) set of measures $\nu \in \mathcal{M}_+$ for which there is a subsequence $\pi_{n_k}$ of paths $\vec{0} \rightarrow \lfloor n_k q \rfloor$ with $\frac1{n_k} \mu_{\pi_{n_k}} \Rightarrow \nu$. Then there exists a deterministic, weakly closed set $\mathcal{R}^{q} \subseteq \mathcal{M}_{||q||_1} $ independent of $\tau$ s.t. \[ P(\mathcal{R}_{\infty}^{q} = \mathcal{R}^{q}) = 1\] \item Fix $t \geq 0$. Define $\mathcal{R}^t_{\infty}$ to be the (event-dependent) set of measures $\nu \in \mathcal{M}_+$ for which there is a subsequence $\pi_{n_k}$ of paths of length $\lfloor n_k t \rfloor$ from $\vec{0}$ with $\frac1{n_k} \mu_{\pi_{n_k}} \Rightarrow \nu$. Then there exists a deterministic, weakly closed set $\mathcal{R}^t \subseteq \mathcal{M}_t$ independent of $\tau$ s.t. \[ P(\mathcal{R}^t_{\infty} = \mathcal{R}^t) = 1\] Moreover, $\mathcal{R}^t = \bigcup \limits_{q \in {\mathbb{R}} ^D_{\geq 0}, ||q||_1 = t} \mathcal{R}^q$. \end{enumerate} \end{theorem} \begin{remark} Bates proves this theorem in the setup of First Passage Percolation, but notes that it holds analogously in the Last Passage model. \end{remark} Instead of looking at \textit{all} empirical measures that have a weakly convergent subsequence we may look at only certain empirical measures with this property and the same result will hold. The proof is almost identical to Bates's original proof except for this change, so we omit it. \begin{corollary}\label{corollary6Bates} \begin{enumerate}[label=(\roman*)] \item[] \item Fix $q \in {\mathbb{R}} ^D$ and $0 \leq \alpha \leq H(q)$. Define $\mathcal{R}_{\infty}^{q, \alpha}$ to be the (event-dependent) set of measures $\nu \in \mathcal{M}_+$ for which there is a subsequence $\pi_{n_k}$ of the paths $\pi_n: \vec{0} \rightarrow \lfloor nq \rfloor$ with the $\lfloor e^{n\alpha}\rfloor$th smallest values of $\rho(\frac1n \mu_{\pi_n}, \nu)$ satisfying \[\frac1{n_k} \mu_{\pi_{n_k}} \Rightarrow \nu \ \mbox{i.e.} \ \liminf_{n \rightarrow \infty} \min_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{\lfloor e^{n\alpha} \rfloor} \rho\bigg(\frac1n \mu_{\pi}, \nu\bigg) = 0 \] Then there exists a deterministic weakly closed set $\mathcal{R}^{q,\alpha} \subseteq \mathcal{M}_{||q||_1}$ s.t. \[ P(\mathcal{R}_{\infty}^{q,\alpha} = \mathcal{R}^{q, \alpha}) = 1\] \item Fix $t \geq 0$ and $0 \leq \alpha \leq t \log D$.
Define $\mathcal{R}_{\infty}^{t, \alpha}$ to be the (event-dependent) set of measures $\nu \in \mathcal{M}_+$ for which there is a subsequence $\pi_{n_k}$ of the length $\lfloor tn \rfloor$ paths $\pi_n$ from $\vec{0}$ with the $\lfloor e^{n\alpha}\rfloor$th smallest values of $\rho(\frac1n \mu_{\pi_n}, \nu)$ satisfying \[\frac1{n_k} \mu_{\pi_{n_k}} \Rightarrow \nu \ \mbox{i.e.} \ \liminf_{n \rightarrow \infty} \min_{\pi: |\pi| = \lfloor nt \rfloor}^{\lfloor e^{n\alpha} \rfloor} \rho\bigg(\frac1n \mu_{\pi}, \nu\bigg) = 0 \] Then there exists a deterministic weakly closed set $\mathcal{R}^{t,\alpha} \subseteq \mathcal{M}_t$ s.t. \[ P(\mathcal{R}_{\infty}^{t,\alpha} = \mathcal{R}^{t,\alpha}) = 1\] \end{enumerate} \end{corollary} \begin{remark} Since the $\min \limits_{\pi: \vec{0} \rightarrow \lfloor nq \rfloor}^{\lfloor e^{n\alpha} \rfloor}$ are increasing in $\alpha$, the sets $\mathcal{R}^{q, \alpha}$ are decreasing in $\alpha$. The same holds for $\mathcal{R}^{t,\alpha}$. \end{remark} Once we develop the concept of grid entropy, we will easily relate these sets $\mathcal{R}^q, \mathcal{R}^{q,\alpha}, \mathcal{R}^t, \mathcal{R}^{t,\alpha}$ to the sets of measures with finite grid entropy in direction $q$, grid entropy at least $\alpha$ in direction $q$, finite direction-free length $t$ grid entropy, and direction-free length $t$ grid entropy at least $\alpha$, respectively. \subsection{Directed Metric Spaces} Grid entropy will turn out to be a directed metric with negative sign. We recall what that entails. \begin{definition} A directed metric space with positive sign is a triple $(M, d, +)$ where $M$ is a vector space, $d: M^2 \rightarrow (-\infty, +\infty]$ is a distance function satisfying $d(x,x) = 0$ and the usual triangle inequality $d(x,y) + d(y,z) \geq d(x,z)$. A directed metric space with negative sign is a triple $(M, d, -)$ such that $(M, -d, +)$ is a directed metric space with positive sign. \end{definition} \begin{remark} Standard metric spaces are clearly examples of directed metric spaces with positive sign. However, directed metric spaces with positive sign might not be metric spaces: the distance $d$ might not be symmetric and might not be positive and finite for non-equal arguments. The ``directed'' in the name indicates the possibility of asymmetry. \end{remark} Certain directed metrics give rise to directed norms in the same way certain metrics give rise to norms. \begin{definition} If $(M,d, \sigma)$ is a directed metric with positive/negative sign such that it is translation-invariant and homogeneous with respect to positive factors, then it induces a directed norm with positive/negative sign given by \[||x|| := d(\vec{0}, x)\] \end{definition} Of particular interest to us are directed norms defined in terms of the empirical measures we observe along paths between points. In the fixed direction case, our directed norms will be defined on the space of tuples consisting of a point in $ {\mathbb{R}} ^D$ (the ``direction'' we are observing) and a finite Borel measure on $ {\mathbb{R}} $ (the target measure we want the empirical measures to be near). In the direction-free case, our directed norms will just be defined on the space $\mathcal{M}_+$ of finite Borel measures on $ {\mathbb{R}} $. \subsection{The Subadditive Ergodic Theorem }\label{section2.5} The key theorem we use to prove the existence of the scaling limit of these directed metrics is Liggett's improved version of Kingman's Subadditive Ergodic Theorem.
Before stating this theorem, we recall the definitions of stationary sequences and ergodicity, as presented in \cite[Sect. 7]{durrett}. \begin{definition} A sequence $(Y_n)_{n \geq 1}$ of random variables is called stationary if the joint distributions of the shifted sequences $\{Y_{k+n}: n \geq 1\}$ do not depend on $k \geq 0$. \end{definition} As it turns out, the sequence of random variables we are interested in is a sequence of i.i.d. $(Y_n)_{n \geq 1}$, which clearly is stationary. \begin{definition} Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $T: \Omega \rightarrow \Omega$ be a map. $T$ is said to be measure-preserving if $P(T^{-1}(A)) = P(A) \ \forall A \in \mathcal{F}$. $T$ is said to be ergodic if it is measure-preserving and if all $T$-invariant measurable sets are trivial, i.e. $P(A) \in \{0,1\}$ whenever $A \in \mathcal{F}$ and $T^{-1}(A) = A$. \end{definition} In the context of sequences, we look at the space $\Omega = {\mathbb{R}} ^{\infty}$ of infinite sequences of real numbers with the $\sigma$-algebra $\mathcal{B}_{\infty}$ generated by \[\{\{(y_1, y_2, \ldots) \in \Omega: y_n \in B\}: n \geq 1, B \in \mathcal{B}( {\mathbb{R}} )\}\] (where $\mathcal{B}( {\mathbb{R}} )$ is the Borel $\sigma$-algebra on $ {\mathbb{R}} $) and with the product probability measure $\mu_{\infty}$ on $\mathcal{B}_{\infty}$ determined by $$\mu_{\infty}(B_1 \times B_2 \times \ldots \times B_n \times {\mathbb{R}} \times \cdots) = \prod_{i=1}^n \mu(B_i) $$ where $B_i \in \mathcal{B}( {\mathbb{R}} )$ and $\mu$ is a Borel probability measure on $ {\mathbb{R}} $. We consider the shift operator $T: \Omega \rightarrow \Omega$ given by $T(y_1, y_2, y_3, \ldots) = (y_2, y_3, \ldots)$. $T$ is easily seen to be measure-preserving with respect to $\mu_{\infty}$ since $\mu_{\infty}(T^{-1}(A)) = \mu_{\infty}(A)$ for the generating sets $A$ of $\mathcal{B}_{\infty}$. When we refer to the ergodicity of a sequence of random variables, we mean the ergodicity of this shift operator. In our case, where we have a sequence of i.i.d. $(Y_n)_{n \geq 1}$, the corresponding shift operator is ergodic. Indeed, if $T^{-1}(A) = A$ then \[(Y_1, Y_2, \ldots) \in A \Leftrightarrow T^{n-1}(Y_1, Y_2, \ldots) = (Y_{n}, Y_{n+1}, \ldots) \in A \ \forall n \geq 1\] so $A$ is in the tail $\sigma$-field $\bigcap \limits_{n \geq 1} \sigma(Y_n, Y_{n+1}, \ldots)$ and thus $\mu_{\infty}(A) \in \{0,1\}$ by Kolmogorov's 0-1 Law. We are now ready to state the Subadditive Ergodic Theorem in the form we need. \begin{theorem}[Kingman's Subadditive Ergodic Theorem, \cite{liggett}] \label{thmKingman} Suppose $(Y_{m,n})_{0 \leq m < n}$ are random variables satisfying \begin{enumerate}[label=(\roman*)] \item $\exists$ constant $C$ s.t. $E|Y_{0,n}| < \infty$ and $EY_{0,n} \geq Cn$ for all $n$ \item $\forall k \geq 1$, $\{Y_{nk, (n+1)k}: n \geq 1\}$ is a stationary process \item The joint distributions of $\{Y_{m, m+k}: k \geq 1\}$ are not dependent on $m$ \item $Y_{0,m+n} \leq Y_{0,m} + Y_{m, m+n} \ \forall m, n > 0$ \end{enumerate} Then \begin{enumerate}[label=(\alph*)] \item $\lim \limits_{n \rightarrow \infty} \frac{EY_{0,n}}n = \inf \limits_{m \geq 0} \frac{EY_{0,m}}m := \gamma$ \item $Y:= \lim \limits_{n \rightarrow \infty} \frac{Y_{0,n}}n$ exists a.s. and in $L^1$, and $EY = \gamma$ \item If the stationary sequences in (ii) are ergodic, then $Y = \gamma$ a.s. \end{enumerate} \end{theorem} \begin{remark} We may replace $Y_{m,n}$ with $-Y_{m,n}$ in the statement of the theorem to obtain a version for superadditive sequences.
\end{remark} \noindent This theorem is the basis for the construction of grid entropy in our paper. \subsection{Relative Entropy and Sanov's Theorem} \label{section2.6} In this last preliminary section, we recall the basics of the Kullback-Leibler divergence (introduced in \cite{kullback}) and Sanov's Theorem for large deviations. We later use this theorem to establish a relationship between our grid entropy and this notion of relative entropy. \begin{definition} Let $P, Q$ be distributions on our inherent metric space $X$. The Kullback-Leibler divergence or relative entropy of $Q$ from $P$ is defined to be \[D_{KL} (P || Q) = \begin{cases} \int_X \log f \ dP = \int_X f \log f \ dQ, & P \ll Q \\ +\infty, & \mbox{otherwise} \end{cases}\] where $f := \frac{dP}{dQ}$ is the Radon-Nikodym derivative and $\log$ is the natural logarithm. \end{definition} \begin{remark} \cite{kullback} also derive several basic properties such as $D_{KL}$ being a pre-metric. \cite{posner} shows that $D_{KL}$ is lower semicontinuous, in the sense that, given probability distributions $P_n \Rightarrow P$ and $Q_n \Rightarrow Q$, we have \[ D_{KL}(P||Q) \leq \liminf_{n \rightarrow \infty} D_{KL}(P_n || Q_n)\] \end{remark} Our main interest in relative entropy is that it is the rate function for large deviations of empirical measures. This is captured by Sanov's Theorem. \begin{theorem}[Sanov's Theorem, \cite{deuschel}]\label{thmSanov} Consider a sequence of i.i.d. random variables $X_i \sim \theta$ taking values in a set $X$. Let $\mu_n = \sum_{i=1}^n \delta_{X_i}$ be their unnormalized empirical measures. Then for any weakly closed set $F \subset \mathcal{M}_1$ we have \[\limsup_{n \rightarrow \infty} \frac1n \log P \bigg(\frac1n \mu_n \in F \bigg) \leq -\inf_{\xi \in F} D_{KL}(\xi||\theta)\] and for any weakly open set $G \subset \mathcal{M}_1$ we have \[\liminf_{n \rightarrow \infty} \frac1n \log P\bigg(\frac1n \mu_n \in G \bigg) \geq -\inf_{\xi \in G} D_{KL}(\xi||\theta)\] \end{theorem} \begin{remark} Since $F$ is closed, the infimum is achieved by some $\xi \in F$. Furthermore, if $\theta \in F$ then the right-hand side of the inequality is 0, which gives us no information; however, if $\theta \notin F$, then the theorem gives an exponential bound on large deviations. \end{remark} \section{Grid Entropy as a Directed Norm}\label{section3} \subsection{The Plan for Deriving Direction-Fixed Grid Entropy } For the purposes of Section \ref{section3}, we temporarily forget our original definition \eqref{firstDef} of grid entropy and rederive it as a limit of scaled directed metrics. To summarize the setting we described in section \ref{section2}, we consider empirical measures $\frac1n \mu_{\pi}$ along NE-paths on the lattice $ {\mathbb{Z}} ^D$, where the edges have weights $\tau_e = \tau(U_e)$ for some measurable $\tau: [0,1] \rightarrow {\mathbb{R}} $ and $U_e$ are i.i.d. Unif[0,1] random variables. $\mathcal{M}, \mathcal{M}_+, \mathcal{M}_t$ denote the spaces of finite Borel measures, finite non-negative Borel measures, and finite non-negative Borel measures with total mass $t \geq 0$, respectively. We begin with direction-fixed grid entropy. We wish to count the number of paths with empirical measure very close to the target $\nu$.
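To get a sense of the scale involved (a simple illustration, not needed for the arguments below): already in $D = 2$ the number of NE paths from $\vec{0}$ to $(n,n)$ is $\binom{2n}{n} \approx 4^n/\sqrt{\pi n}$, so such path counts grow exponentially in $n$. This is why the counts below are measured on a logarithmic scale and normalized by $n$.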
Let us try to define a distance on $ {\mathbb{R}} ^D \times \mathcal{M}_+$ by \[d((p, \xi), (q, \nu)) = \log \# \{ \mbox{paths $\pi: \lfloor p \rfloor \rightarrow \lfloor q \rfloor$ with $ \mu_{\pi}= \nu-\xi$}\}\] Note that this is $-\infty$ if there are no such paths and it is 0 if $p=q, \xi = \nu$ (since there is exactly one path $\pi: \lfloor p \rfloor \rightarrow \lfloor p \rfloor$, and it has empirical measure 0). One glaring issue is that we are only counting paths that have exactly the target empirical measure. Since the Lebesgue measure on $[0,1]$ is continuous, almost surely the $U_e$ have different values, and thus the unnormalized empirical measures uniquely determine the paths $\pi$. It follows that almost surely $d((p, \xi), (q, \nu))$ is always either $-\infty$ or 0. We need to change our definition of $d$ to count paths with an empirical measure ``close'' to $\nu-\xi$ instead. Another problem is that we wish to apply the Subadditive Ergodic Theorem to learn about the behavior of \[\frac{d((np,n\xi), (nq, n\nu))}n\] as $n \rightarrow \infty$. Thus we must also change our definition of $d$ so that it is integrable (and in particular finite a.s.) when $q-p \in {\mathbb{R}} _{\geq 0}^D$. The trick is to replace the counting of the paths exhibiting the exact target empirical measures with a ``cost function'' that attributes an exponential cost to each path based on how far its empirical measure is from the target. We also introduce a parameter $\epsilon$ that, as it decreases to 0, takes the cost function back towards the counting of the paths with the target empirical measure we tried initially. \begin{definition} Fix $\epsilon > 0$. Define a distance on $\mathbb{R}^D \times \mathcal{M}_+$ by \[d^{\epsilon}((p, \xi), (q, \nu)) = \log \sum_{\pi \in \mathcal{P}( \lfloor p \rfloor, \lfloor q \rfloor)} e^{-\frac1{\epsilon} \rho(\mu_{\pi}, \nu-\xi)}\] where $\rho$ is the Levy-Prokhorov metric and where we sum over all NE paths $\pi: \lfloor p \rfloor \rightarrow \lfloor q \rfloor$. \end{definition} \begin{remark} This distance is 0 if $p=q, \xi = \nu$ and $-\infty $ if and only if $q-p \notin {\mathbb{R}} _{\geq 0}^D$. \end{remark} \begin{remark} For any path $\pi: \lfloor p \rfloor \rightarrow \lfloor q \rfloor$, the corresponding empirical measure $\mu_{\pi}$ observed must necessarily be of the form $\mu_{\pi} = \sum \limits_{i=1}^{|\pi|} \delta_{a_i}$ where $|\pi|= ||\lfloor q \rfloor - \lfloor p \rfloor||_1$. \end{remark} \begin{remark} As $\epsilon \rightarrow 0$ the costs $e^{-\frac1{\epsilon} \rho(\mu_{\pi}, \nu - \xi) }$ converge to the indicators \[\mathbf{1}_{\nu-\xi}( \mu_{\pi}) = \mathbf{1}_{\frac{\nu-\xi}{||\lfloor q \rfloor - \lfloor p \rfloor||_1} } \bigg(\frac1{|\pi|} \mu_{\pi} \bigg)\] Hence the sum of the costs approaches the number of paths with empirical measures $\frac1{|\pi|} \mu_{\pi}$ precisely equal to $\frac{\nu-\xi}{||\lfloor q \rfloor - \lfloor p \rfloor||_1}$. \end{remark} With this definition in mind, let us discuss the plan of attack. First, we prove the existence of $\lim \limits_{n \rightarrow \infty} \frac{d^{\epsilon}((np,n\xi), (nq, n\nu))}n$ using the Subadditive Ergodic Theorem and some estimates we derive for the error terms when $p, q$ do not have integer coordinates. Then we take the infimum over $\epsilon > 0$ of these limits and we define the resulting norm to be our grid entropy. \begin{theorem}\label{thm7} Fix $\epsilon > 0, \nu, \xi \in\mathcal{M}_+$ and $p, q \in {\mathbb{R}} ^D$.
Then \[\frac{d^{\epsilon}((np, n\xi), (nq, n\nu))}n\] converges in probability to a constant. When $p = \vec{0}$, the convergence is pointwise a.s. \end{theorem} \begin{remark} The theorem holds trivially, with the limit being $-\infty$, if $q-p \notin {\mathbb{R}} ^D_{\geq 0}$ or if $q = p$ and $\nu \neq \xi$. It also holds trivially, with the limit being 0, if $q=p$ and $\nu = \xi$. \end{remark} The limit given by this theorem is a directed metric with negative sign on $ {\mathbb{R}} ^D \times \mathcal{M}_+$. When we take an infimum over $\epsilon > 0$ we still get a directed metric with negative sign. \begin{theorem}\label{thm8} For $\epsilon > 0$, $\nu, \xi \in \mathcal{M}_+$ and $p, q \in {\mathbb{R}} ^D$ define \[\widetilde{d}^{\epsilon}((p, \xi), (q, \nu)) := \lim \limits_{n \rightarrow \infty} \frac{d^{\epsilon}((np,n\xi), (nq, n\nu))}n \ \mbox{and} \ \widetilde{d}((p, \xi), (q, \nu)) := \inf_{\epsilon > 0} \widetilde{d}^{\epsilon}((p, \xi), (q, \nu))\] Then each $\widetilde{d}^{\epsilon}$ as well as $\widetilde{d}$ are directed metrics with negative sign on $ {\mathbb{R}} ^D \times \mathcal{M}_+$. \end{theorem} We show that this metric $\widetilde{d}$ gives rise to a norm on $ {\mathbb{R}} ^D_{\geq 0} \times \mathcal{M}_+$. This will finish our discussion of the direction-fixed grid entropy and we will move on to the direction-free case. \subsection{The Limit Shape of \texorpdfstring{$d^{\epsilon}$}{de} Starting at \texorpdfstring{$(\vec{0}, 0)$}{Lg}} In this and the following subsection, we focus on the direction-fixed grid entropy. To prove Theorem \ref{thm7}, we first prove a simplified version, which we later generalize easily. \begin{theorem}\label{thm9} Fix $\epsilon > 0, \nu \in \mathcal{M}_+$ and $q \in {\mathbb{R}} ^D_{\geq 0} \setminus \{\vec{0}\}$. Then \[\frac{X_n^{\epsilon,q, \nu}}n:= \frac{d^{\epsilon}((\vec{0}, 0), (nq, n\nu))}n \rightarrow X^{\epsilon,q, \nu} := \sup \limits_n \frac{EX_n^{\epsilon,q, \nu}}n = \lim \limits_{n \rightarrow \infty} \frac{EX_n^{\epsilon,q, \nu}}n \ \mbox{a.s.}\] \end{theorem} \begin{remark} As noted before, Theorem \ref{thm7} holds trivially when $q \notin {\mathbb{R}} ^D_{\geq 0} \setminus \{\vec{0}\}$, so we need not bother with this case. \end{remark} We prove this theorem in stages, starting with the case when $q$ has integer coordinates. But first, we show a useful bound on our random variables $X_n^{\epsilon, q, \nu}$. \begin{lemma}\label{lemma10} Let $\epsilon > 0, \nu, \xi \in \mathcal{M}_+$ and $p, q \in {\mathbb{Z}} ^D$ with $q-p \in {\mathbb{Z}} ^D_{\geq 0} \setminus \{\vec{0}\}$. 
Then \[ d^{\epsilon}((p, \xi), (q, \nu)) \in \bigg[ - \frac{1}{\epsilon} (||q-p||_1 + ||\nu-\xi||_{TV}), ||q-p||_1 \log D \bigg]\] \end{lemma} \begin{proof} \noindent Recall that any path $\pi: p \rightarrow q $ has $|| q-p ||_1 $ edges, so $||\mu_{\pi}||_{TV} = ||q-p||_1$, and that the total number of paths $\pi: p \rightarrow q$ is \[\binom{||q-p||_1}{q_1-p_1, \cdots, q_D - p_D} \in [1, D^{ ||q-p||_1}]\] Also, for any such $\pi$ we have by Lemma \ref{lemma3} \[\rho(\mu_{\pi}, \nu-\xi) \in [0, ||\mu_{\pi} - (\nu-\xi)||_{TV}] \subseteq [0,||q-p||_1 + ||\nu-\xi||_{TV}]\] \[\Rightarrow e^{-\frac1{\epsilon} \rho(\mu_{\pi}, \nu-\xi) } \in [e^{-\frac{1}{\epsilon} (||q-p||_1 + ||\nu-\xi||_{TV})}, 1]\] Thus \[ d^{\epsilon}((p, \xi), (q, \nu)) = \log \sum_{\pi \in \mathcal{P}(p,q)} e^{-\frac1{\epsilon} \rho(\mu_{\pi}, \nu-\xi) } \in \bigg[ - \frac{1}{\epsilon} (||q-p||_1 + ||\nu-\xi||_{TV}), ||q-p||_1 \log D \bigg]\] \end{proof} \begin{lemma}\label{lemma11} Theorem \ref{thm9} holds with $q \in {\mathbb{Z}} ^D_{\geq 0} \setminus \{\vec{0}\}$. \end{lemma} \begin{proof} We wish to use Kingman's Subadditive Ergodic Theorem (Theorem \ref{thmKingman}) with \[Y_{m,n} := -d^{\epsilon}((mq, m\nu), (nq, n\nu)) \ \forall m \leq n\] Let us now check the conditions (i)-(iv). By Lemma \ref{lemma10}, \[Y_{0,n} =-d^{\epsilon}((\vec{0}, 0), (nq, n\nu)) \in \bigg[-n||q||_1 \log D, \frac{1}{\epsilon} (n||q||_1 + n||\nu||_{TV}) \bigg]\] hence (i) holds. Next, for every $k \geq 1$, the sequence \[Y_{nk, (n+1)k} = -d^{\epsilon}((nkq, nk\nu), ((n+1)kq, (n+1)k\nu))\] is i.i.d. because the distribution of the unnormalized empirical measures of paths $\pi: nkq \rightarrow (n+1)kq$ is not dependent on $n$ (since the edge labels $U_e$ are i.i.d.) so the distribution of the cost functions $e^{-\frac1{\epsilon} \rho(\mu_{\pi}, k\nu)}$ for $\pi: nkq \rightarrow (n+1)kq$ is not dependent on $n$. Thus (ii) holds. Furthermore, as discussed in section \ref{section2.5}, $Y_{nk, (n+1)k}$ being i.i.d. implies the sequence is ergodic. Similarly, the joint distributions of $$\{Y_{m, m+k}: k \geq 1\} = \{-d^{\epsilon}((mq, m\nu), ((m+k)q, (m+k)\nu)): k \geq 1\}$$ are not dependent on $m$ since the edge labels are i.i.d. and the distribution of empirical measures of paths in a rectangle on the lattice with the difference between the top right and bottom left corners being $kq$ is independent of the location of the rectangle. Thus we have (iii). It remains to show (iv), namely to show that given $m, n > 0,$ \[d^{\epsilon}((\vec{0}, 0), ((m+n)q, (m+n)\nu)) \geq d^{\epsilon}((\vec{0}, 0), (mq, m\nu)) + d^{\epsilon}((mq, m\nu), ((m+n)q, (m+n)\nu)) \] For any paths $\pi: \vec{0} \rightarrow mq$ and $\pi': mq \rightarrow (m+n)q$, we get a unique concatenation $\pi\cdot \pi': \vec{0} \rightarrow mq \rightarrow (m+n)q$. Its empirical measure satisfies $\mu_{\pi \cdot \pi'} = \mu_{\pi} + \mu_{\pi'}$. But the Levy-Prokhorov metric satisfies subadditivity by Lemma \ref{lemma4}. 
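(That is, in the form used repeatedly below, $\rho(\mu_1 + \mu_2, \nu_1 + \nu_2) \leq \rho(\mu_1, \nu_1) + \rho(\mu_2, \nu_2)$ for finite non-negative Borel measures $\mu_1, \mu_2, \nu_1, \nu_2$.)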
Thus \[\rho(\mu_{\pi \cdot \pi'}, (m+n) \nu) \leq \rho(\mu_{\pi}, m\nu) + \rho(\mu_{\pi'}, n\nu)\] But then \begin{align*} -Y_{0,m} - Y_{m,m+n} & = \log \sum_{\pi : \vec{0} \rightarrow mq} e^{-\frac1{\epsilon} \rho(\mu_{\pi}, m\nu) } + \log \sum_{\pi' : mq \rightarrow (m+n)q} e^{-\frac1{\epsilon} \rho(\mu_{\pi'}, n\nu) }\\ & = \log \sum_{\substack{\pi : \vec{0} \rightarrow mq\\ \pi' : mq \rightarrow (m+n)q}} e^{-\frac{1}{\epsilon} \rho( \mu_{\pi}, m\nu) -\frac{1}{\epsilon} \rho( \mu_{\pi'}, n\nu)} \\ & \leq \log \sum_{\substack{\pi : \vec{0} \rightarrow mq\\ \pi' : mq \rightarrow (m+n)q}} e^{-\frac1{\epsilon} \rho( \mu_{\pi \cdot \pi'}, (m+n)\nu)} \end{align*} Not all paths $\pi''': \vec{0} \rightarrow (m+n)q$ pass through $mq$ so we can upper bound the expression above by removing this condition: \begin{align*} -Y_{0,m} - Y_{m,m+n} & \leq \log \sum_{\pi''': \vec{0} \rightarrow (m+n)q} e^{-\frac1{\epsilon} \rho( \mu_{\pi'''}, (m+n)\nu)} \\ & = -Y_{0, m+n} \end{align*} Thus we can apply the Subadditive Ergodic Theorem (Theorem \ref{thmKingman}) to get that \[\frac{-Y_{0,n}}n = \frac{X_n^{\epsilon, q, \nu}}n\] converges a.s. to the constant \[X^{\epsilon, q, \nu} := \sup \limits_n \frac{EX_n^{\epsilon, q, \nu}}n = \lim \limits_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q, \nu}}n \in \bigg[ - \frac1{\epsilon} (||q||_1 + ||\nu||_{TV}), ||q||_1 \log D \bigg]\] \end{proof} The next order of business is proving the theorem for $q$ with rational coordinates. We will find useful the following error estimate on how $X_n^{\epsilon,q,\nu}$ changes when $q$ is decreased coordinate-wise and $\nu$ is perturbed arbitrarily. \begin{lemma}\label{lemma12} Fix $q \in {\mathbb{R}} ^D_{\geq 0}, \epsilon > 0$ and $\nu, \xi \in \mathcal{M}_+$. Then for any $p \in {\mathbb{R}} ^D_{\geq 0}$ with $q-p \in {\mathbb{R}} ^D_{\geq 0}$, \begin{align*} &X_n^{ \epsilon, p, \xi} - \frac1{\epsilon} (n||q-p||_1 + n||\nu-\xi||_{TV} +D) \\ &\leq X_n^{ \epsilon, p, \xi} - \frac1{\epsilon} (||\lfloor nq \rfloor - \lfloor np \rfloor||_1 + n\rho(\nu,\xi)) \\ & \leq X_n^{\epsilon, q, \nu} \end{align*} \end{lemma} \begin{proof} Fix any such $p$. The inequality holds trivially if $p = q$ so we may assume $||q-p||_1 > 0$. Then there is at least one path $\pi': \lfloor np \rfloor \rightarrow \lfloor nq \rfloor$, and we fix it. For any path $\pi: \vec{0} \rightarrow \lfloor np \rfloor$, we concatenate it with $\pi'$ to get a unique path $\pi \cdot \pi': \vec{0} \rightarrow \lfloor nq\rfloor$. Note that $\pi'$ consists of $||\lfloor nq \rfloor - \lfloor np \rfloor ||_1 \leq n||q-p||_1+D$ edges so its empirical measure $\mu_{\pi'}$ has total variation at most $n||q-p||_1+D$. Thus $\pi \cdot \pi'$ satisfies \begin{align*} \rho(\mu_{\pi \cdot \pi'}, n\nu) &\leq \rho(\mu_{\pi}+\mu_{\pi'}, \mu_{\pi}) + \rho( \mu_{\pi}, n\xi) + \rho(n\xi, n\nu)\\ &\leq ||\mu_{\pi'}||_{TV} +\rho(\mu_{\pi}, n\xi) + n\rho(\nu,\xi) \\ &\leq \rho(\mu_{\pi}, n\xi)+ (||\lfloor nq \rfloor - \lfloor np \rfloor||_1+ n \rho(\nu,\xi)) \end{align*} It follows that \begin{align*} &X_n^{ \epsilon, p, \xi} - \frac1{\epsilon} (n||q-p||_1 + n||\nu-\xi||_{TV} +D) \\ &\leq X_n^{ \epsilon, p, \xi} - \frac1{\epsilon} (||\lfloor nq \rfloor - \lfloor np \rfloor||_1 + n\rho(\nu,\xi)) \\ & \leq X_n^{\epsilon, q, \nu} \end{align*} \end{proof} Using this lemma to approximate $X_n^{\epsilon, q, \nu}$ for $q \in {\mathbb{Q}} ^D_{\geq 0}$ in terms of $X_n^{\epsilon, p, \nu}$ where $p \in {\mathbb{Z}} ^D_{\geq 0}$, we prove our limit theorem for $q$ with rational coordinates.
\begin{lemma}\label{lemma13} Theorem \ref{thm9} holds for $q \in {\mathbb{Q}} ^D_{\geq 0} \setminus \{\vec{0}\}$. \end{lemma} \begin{proof} Let $q = \frac{(s_1, \ldots, s_D)}t$ for $s_i \in {\mathbb{Z}} _{\geq 0}$ and $t \in {\mathbb{Z}} _{> 0}$. First, compare $X_{tr}^{\epsilon, q, \nu}, X_{tr+1}^{\epsilon, q, \nu}, \ldots, X_{tr+ (t-1)}^{\epsilon, q, \nu}, X_{t(r+1)}^{\epsilon, q, \nu}$ for arbitrary $r$ using the previous lemma. The idea is that $trq$ has integer coordinates so $\frac{X_{tr}^{\epsilon, q, \nu}}{tr}, \frac{X_{t(r+1)}^{\epsilon, q, \nu}}{tr}$ have the desired limits by Lemma \ref{lemma11}. The rest of the $X_{tr+j}^{\epsilon, q, \nu}$ are bounded above/below by $X_{t(r+1)}^{\epsilon, q, \nu}/X_{tr}^{\epsilon, q, \nu}$ respectively plus some small error terms which go to 0 as $r \rightarrow \infty$. Consider any $1 \leq j \leq t$. Note that \[X_{tr+j-1}^{\epsilon, q, \nu} = d^{\epsilon}((\vec{0}, 0), ((tr+j-1)q, (tr+j-1)\nu)) = X_{tr+j}^{\epsilon, \frac{tr+j-1}{tr+j} q, \frac{tr+j-1}{tr+j}\nu} \] and \[ \bigg|\bigg|q-\frac{tr+j-1}{tr+j} q \bigg|\bigg|_1 + \bigg|\bigg|\nu-\frac{tr+j-1}{tr+j} \nu \bigg|\bigg|_{TV} = \frac{||q||_1+||\nu||_{TV}}{tr+j}\] By Lemma \ref{lemma12}, \[X_{tr+j-1}^{\epsilon, q, \nu}= X_{tr+j}^{\epsilon, \frac{tr+j-1}{tr+j} q, \frac{tr+j-1}{tr+j}\nu} \leq X_{tr+j}^{\epsilon, q, \nu} + \frac1{\epsilon} (||q||_1+ ||\nu||_{TV} + D)\] Thus \begin{equation} \label{1} \begin{split} \frac{X_{tr}^{\epsilon,q,\nu}}{r} &\leq \frac{X_{tr+1}^{\epsilon,q, \nu}}{r} + \frac{||q||_1+||\nu||_{TV}+D}{\epsilon r} \\ & \leq \ldots \leq \frac{X_{t(r+1)}^{\epsilon,q, \nu}}{r} + \frac{t(||q||_1+||\nu||_{TV}+D)}{\epsilon r} \end{split} \end{equation} But $tq \in {\mathbb{Z}} _{\geq 0}^D$ so by Lemma \ref{lemma11}, our limit theorem holds for $X_{tr}^{\epsilon, q, \nu} = X_r^{\epsilon, tq, t\nu}$. That is, \begin{equation} \label{2} \frac{X_{tr}^{\epsilon, q,\nu}}{r} \rightarrow \sup_r \frac{EX_{tr}^{\epsilon, q, \nu}}{r} = \lim_{r \rightarrow \infty} \frac{EX_{tr}^{\epsilon, q, \nu}}{r} \ \mbox{a.s.} \end{equation} Taking expectations in \eqref{1}, then taking the supremum/limit as $r \rightarrow \infty$ and using \eqref{2} we get \begin{equation} \label{3} \lim_{r \rightarrow \infty} \frac{EX_{tr+j}^{\epsilon,q,\nu}}r = \sup_r \frac{EX_{tr+j}^{\epsilon,q, \nu}}r = \lim_{r \rightarrow \infty} \frac{EX_{tr}^{\epsilon, q, \nu}}{r} = \sup_r \frac{EX_{tr}^{\epsilon, q, \nu}}{r} \ \forall 0 \leq j \leq t-1 \end{equation} since the error terms in \eqref{1} go to 0. Similarly, taking the limit as $r \rightarrow \infty$ in \eqref{2} and using \eqref{1}, \eqref{3} we get $$\lim_{r \rightarrow \infty} \frac{X_{tr+j}^{\epsilon,q, \nu}}r = \sup_r \frac{EX_{tr}^{\epsilon, q, \nu}}{r} = \lim_{r \rightarrow \infty} \frac{EX_{tr+j}^{\epsilon,q, \nu}}r = \sup_r \frac{EX_{tr+j}^{\epsilon,q, \nu}}r \ \mbox{a.s.} \ \forall 0 \leq j \leq t-1$$ Multiplying everything by $\frac1t$ and using the fact that any $n$ can be written as $tr+j$, we get $$\frac{X^{\epsilon,q, \nu}_{n}}{n} \rightarrow \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon,q, \nu}}n = \sup_n \frac{EX_n^{\epsilon,q, \nu}}n \ \mbox{a.s.} $$ \end{proof} Our next objective is to prove the full version of Theorem \ref{thm9}. We will need two short lemmas giving sufficient conditions for a.s. convergence of bounded random variables to the supremum of their expectations. The first of these lemmas looks at the case where we are given a particular lower bound for the liminf of all subsequences of our sequence.
\begin{lemma}\label{lemma14} Let $Y_n$ be uniformly bounded random variables. If a.s. \[\sup_n EY_{n} \leq \liminf_{i \rightarrow \infty} Y_{n_i} \ \mbox{for all subsequences $(n_i)$}\] then $\lim \limits_{n \rightarrow \infty} Y_n = \sup \limits_n EY_n = \lim \limits_{n \rightarrow \infty} EY_n$ a.s. \end{lemma} \begin{proof} First, taking expectations in \[ \sup_n EY_{n} \leq \liminf_{i \rightarrow \infty} Y_{n_i} \ \mbox{a.s.}\] and using Fatou's Lemma, we get \begin{align*} &\sup_n EY_{n} \leq E\liminf_{i \rightarrow \infty} Y_{n_i} \leq \liminf_{i \rightarrow \infty} EY_{n_i} \\ &\leq \limsup_{i \rightarrow \infty} EY_{n_i} \leq \sup_i EY_{n_i} \leq \sup_n EY_n \end{align*} so all the inequalities are equalities and \[ \sup_n EY_{n} = \lim \limits_{i \rightarrow \infty} EY_{n_i} = \liminf_{i \rightarrow \infty} Y_{n_i} \ \mbox{a.s.}\] This holds for all subsequences $(n_i)$, including the full sequence. Thus a.s. \[L:= \sup_n EY_{n} = \lim \limits_{n \rightarrow \infty} EY_{n} = \liminf_{i \rightarrow \infty} Y_{n_i} \ \mbox{ for all subsequences $(n_i)$}\] hence $Y_n \rightarrow L$ a.s. \end{proof} The next lemma is about approximating a sequence of random variables from below by sequences that each converge to the supremum of their expectations. \begin{lemma}\label{lemma15} Let $Y_n, Y_{n,k}$ (for $n, k \geq 1$) be uniformly bounded random variables s.t. \begin{enumerate}[label=(\roman*)] \item For every fixed $k$, $Y_{n,k} \leq Y_n$ for large enough $n$ \item For every fixed $k$, \[\lim_{n \rightarrow \infty} Y_{n,k} = \sup_n EY_{n,k} = \lim_{n \rightarrow \infty} EY_{n,k} \ \mbox{a.s.} \] \item $\sup \limits_n EY_n = \sup \limits_n \sup \limits_k EY_{n,k}$ \end{enumerate} Then $\lim \limits_{n \rightarrow \infty} Y_n = \sup \limits_n EY_n = \lim \limits_{n \rightarrow \infty} EY_n$ a.s. \end{lemma} \begin{proof} We wish to prove the hypothesis of Lemma \ref{lemma14} for $Y_n$. Let \[\mathcal{F} = \bigg\{\lim_{n\rightarrow \infty} Y_{n,k} \neq \sup_n EY_{n,k} \ \mbox{for some $k$}\bigg\}\] which has measure 0 by (ii). We claim that in the event $\mathcal{F}^C$ we have \[\sup_n EY_{n} \leq \liminf_{i \rightarrow \infty} Y_{n_i} \ \mbox{for all subsequences $(n_i)$}\] Consider any subsequence $(n_i)$ and any $k$. By (i), $Y_{n_i, k} \leq Y_{n_i} \ \mbox{for large enough $i$}$. Taking the $\liminf$ over $i$, and using the fact that we are in the event $\mathcal{F}^C$, we get \[ \sup_n EY_{n, k} = \lim_{n \rightarrow \infty} Y_{n, k} = \lim_{i \rightarrow \infty} Y_{n_i, k} \leq \liminf_{i \rightarrow \infty} Y_{n_i} \] This holds for all $k$. Taking the supremum over $k$ and using (iii), we get \[ \sup_n EY_n = \sup_n \sup \limits_{k} EY_{n,k} = \sup \limits_{k} \sup_n EY_{n,k} \leq \liminf_{i \rightarrow \infty} Y_{n_i} \] as desired. Applying Lemma \ref{lemma14}, we get \[ \lim_{n \rightarrow \infty} EY_n = \sup_n EY_n = \lim_{n \rightarrow \infty} Y_n \ \mbox{a.s.}\] \end{proof} We can finally prove Theorem \ref{thm9}. \begin{proof}[Proof of Theorem \ref{thm9}] $ $\newline The case $q \in {\mathbb{Q}} ^D_{\geq 0} \setminus \{\vec{0}\}$ is handled by Lemma \ref{lemma13}. So we may assume $q \notin {\mathbb{Q}} ^D_{\geq 0}$. We construct a sequence $p_k \in {\mathbb{Q}} _{\geq 0}^D$ as follows. For every $1 \leq j \leq D$, either $q_j \in {\mathbb{Q}} $ and we pick $(p_k)_j := q_j$, or $q_j \notin {\mathbb{Q}} $ and we pick $(p_k)_j \in {\mathbb{Q}} _{\geq 0}$ s.t. $(p_k)_j \uparrow q_j$. It follows that $p_k \in {\mathbb{Q}} _{\geq 0}^D$ with $p_{k+1} - p_k \in {\mathbb{Q}} ^D_{\geq 0}$ and $q - p_k \in {\mathbb{R}} ^D_{\geq 0} \ \forall k$.
That is, $p_1, p_2, \ldots, q$ forms a ``staircase'' in $ {\mathbb{R}} ^D$. \begin{center} \begin{tikzpicture} \node (p1) at (0pt,0pt) {}; \node (p5) at (15pt,25pt) {}; \node (p2) at (50pt,25pt) {}; \node (p2) at (50pt,50pt) {}; \node (p3) at (60pt,80pt) {}; \filldraw (0pt,0pt)circle(2pt) (50pt,25pt)circle(2pt) (15pt,20pt)circle(2pt) (50pt,50pt)circle(2pt) (52pt,60pt)circle(1pt) (54pt,65pt)circle(1pt) (56pt,70pt)circle(1pt) (60pt,80pt)circle(2pt); \draw (0pt,0pt) node [ below right] {$p_1$}; \draw (15pt,20pt) node [ below right] {$p_2$}; \draw (50pt,25pt) node [ below right] {$p_3$}; \draw (50pt,50pt) node [ below right] {$p_4$}; \draw (60pt,80pt) node [ below right] {$q$}; \end{tikzpicture} \end{center} The intuition is that $X_n^{\epsilon, p_k, \nu}$ approximates $X_n^{\epsilon, q, \nu}$ from below. Each $\frac{X_n^{\epsilon, p_k, \nu}}n$ converges in $n$ to the right limit a.s. by Lemma \ref{lemma13}, and we use Lemma \ref{lemma15} to prove that $\frac{X_n^{\epsilon,q,\nu}}n$ converges a.s. to the right limit. Let us now prove the hypothesis of Lemma \ref{lemma15} with \[Y_{n,k} := \frac{X_n^{\epsilon, p_k,\nu} - \frac{||\lfloor nq\rfloor - \lfloor np_k \rfloor||_1}{\epsilon}}n, \quad Y_n := \frac{X_n^{\epsilon, q, \nu}}n\] Note that these random variables are bounded by Lemma \ref{lemma10}: \[ Y_{n,k}, Y_n \in \bigg[-\frac1{\epsilon} (||q||_1 + ||\nu||_{TV}) - \frac{||q||_1+D}{\epsilon} , ||q||_1 \log D \bigg] \] First, fix $n$ and consider any $k$. By Lemma \ref{lemma12}, \[X_n^{\epsilon, p_k, \nu} - \frac{||\lfloor nq\rfloor - \lfloor np_k \rfloor||_1}{\epsilon} \leq X_n^{\epsilon,q, \nu} \] so dividing by $n$ gives $Y_{n,k} \leq Y_n$, and we have (i). Next, (ii) follows from Lemma \ref{lemma13}: \begin{align*} &\lim_{n \rightarrow \infty} Y_{n,k} = \lim_{n \rightarrow \infty} \frac{X_n^{\epsilon, p_k,\nu} - \frac{||\lfloor nq\rfloor - \lfloor np_k \rfloor||_1}{\epsilon}}n = \lim_{n \rightarrow \infty} \frac{X_n^{\epsilon, p_k, \nu}}n - \frac{||q-p_k||_1}{\epsilon} \\ &= \sup_n \frac{EX_n^{\epsilon, p_k, \nu}}n - \frac{||q-p_k||_1}{\epsilon} = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, p_k, \nu}}n - \frac{||q-p_k||_1}{\epsilon} \\ &= \lim_{n \rightarrow \infty} EY_{n,k} = \sup_n EY_{n,k} \end{align*} It remains to show (iii). Consider any fixed $n$. For each $1 \leq j \leq D$ either $q_j \in {\mathbb{Q}} $ and $q_j = (p_k)_j \ \forall k$, or $q_j \notin {\mathbb{Q}} $, in which case $nq_j \notin {\mathbb{Z}} $ and $(p_k)_j \uparrow q_j$, hence $\lfloor nq_j \rfloor = \lfloor n(p_k)_j \rfloor$ for large enough $k$. Thus $X_n^{\epsilon, p_k, \nu} = X_n^{\epsilon, q, \nu}$ for large enough $k$. It follows that \[Y_{n,k} = \frac{X_n^{\epsilon, p_k, \nu} - \frac{||\lfloor nq\rfloor - \lfloor np_k \rfloor||_1}{\epsilon}}n \uparrow \frac{X_n^{\epsilon, q, \nu}}n = Y_n \ \mbox{as} \ k \rightarrow \infty\] By the Bounded Convergence Theorem, $EY_{n, k} \uparrow EY_n$. Thus \[\lim_{k \rightarrow \infty} EY_{n,k} = \sup_k EY_{n,k}= EY_n \ \forall n \] so taking the supremum over $n$ we get (iii): \[\sup_n \sup_k EY_{n,k} = \sup_n EY_n\] Applying Lemma \ref{lemma15}, we get \[\lim_{n \rightarrow \infty} Y_n = \lim_{n \rightarrow \infty} EY_n = \sup_n EY_n \ \mbox{a.s.}\] i.e. \[ \lim_{n \rightarrow \infty} \frac{X_n^{\epsilon, q, \nu}}n = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q,\nu}}n = \sup_n \frac{EX_n^{\epsilon, q, \nu}}n \ \mbox{a.s.}\] \end{proof} This completes the proof of the existence of the limit shape of $d^{\epsilon}$ when measuring the distance from $(\vec{0}, 0) \in {\mathbb{R}} ^D \times \mathcal{M}_+$. \subsection{The Limit Shape of \texorpdfstring{$d^{\epsilon}$}{Lg}—The General Case} We use Lemma \ref{lemma15} to generalize Theorem \ref{thm9} to Theorem \ref{thm7}, where $d^{\epsilon}$ measures distance between any two elements of $ {\mathbb{R}} ^D \times \mathcal{M}_+$. \begin{manualtheorem}{\ref{thm7}} Fix $\epsilon > 0, \nu, \xi \in\mathcal{M}_+$ and $p, q \in {\mathbb{R}} ^D$. Then \[\frac{d^{\epsilon}((np, n\xi), (nq, n\nu))}n\] converges in probability to a constant. When $p = \vec{0}$, the convergence is pointwise a.s. \end{manualtheorem} \begin{remark} During the course of the proof, we also show that the limit shape of \\ $\frac{d^{\epsilon}((np, n\xi), (nq, n\nu))}n$ is translation-invariant. This will help us later. \end{remark} \begin{proof} As noted before, the theorem holds trivially if $q - p \notin {\mathbb{R}} ^D_{\geq 0} \setminus \{\vec{0}\}$. Thus we may assume $q - p \in {\mathbb{R}} ^D_{\geq 0} \setminus \{\vec{0}\}$. Observe that \begin{equation}\label{EQ3} \begin{split} d^{\epsilon}((np, n\xi), (nq, n\nu)) &= d^{\epsilon}((np, 0), (nq, n\nu - n\xi)) \\ &=^d d^{\epsilon}((\vec{0}, 0), (\lfloor nq \rfloor - \lfloor np \rfloor, n\nu-n\xi)) \end{split} \end{equation} so it suffices to assume $\xi = 0$ and prove \[\frac{X_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}}n = \frac{d^{\epsilon}((\vec{0}, 0), (\lfloor nq \rfloor - \lfloor np \rfloor, n\nu))}n\] converges a.s. to a constant. Our argument mirrors the one used in the proof of Theorem \ref{thm9}: we approximate these $Y_n$ from below by some $Y_{n,k}$ that converge in $n$ for each fixed $k$, and we apply Lemma \ref{lemma15}. First we prove some inequalities. Fix $n$.
Observe that for $1 \leq j \leq D$, \[n(q_j-p_j) -1 < \lfloor n (q_j-p_j) \rfloor \leq n(q_j-p_j) \] \[n(q_j-p_j) - 1 < \lfloor nq_j \rfloor - \lfloor np_j \rfloor < n(q_j-p_j) + 1\] \begin{equation}\label{4} \Rightarrow (\lfloor nq_j \rfloor - \lfloor np_j \rfloor) - \lfloor n (q_j-p_j) \rfloor \in \{0,1\} \end{equation} and thus \begin{equation}\label{5} (\lfloor nq \rfloor - \lfloor np \rfloor) - \lfloor n (q-p) \rfloor \in {\mathbb{R}} ^D_{\geq 0} \ \mbox{with} \ || (\lfloor nq \rfloor - \lfloor np \rfloor) - \lfloor n (q-p) \rfloor ||_1 \leq D \end{equation} On the other hand, for $1 \leq j \leq D$ s.t. $p_j \neq q_j$, \[ \lfloor nq_j \rfloor - \lfloor np_j \rfloor \leq n(q_j - p_j) + 1 \leq (n+c) (q_j-p_j) \] for $c:= \bigg\lceil \max \limits_{1 \leq j \leq D \ \mbox{s.t.} \ p_j \neq q_j} \bigg(\frac1{q_j-p_j} \bigg) \bigg\rceil$. Thus \[ 0 \leq \lfloor (n+c) (q_j-p_j) \rfloor - (\lfloor nq_j \rfloor - \lfloor np_j \rfloor) \leq c(q_j-p_j) \ \mbox{by} \ \eqref{4}\] This equation also holds trivially for $j$ s.t. $p_j = q_j$. Thus \begin{equation}\label{6} \lfloor (n+c) (q-p) \rfloor - (\lfloor nq \rfloor - \lfloor np \rfloor) \in {\mathbb{R}} ^D_{\geq 0} \ \mbox{with} \ ||\lfloor (n+c) (q-p) \rfloor - (\lfloor nq \rfloor - \lfloor np \rfloor) ||_1 \leq c||q-p||_1 \end{equation} We prove the hypothesis of Lemma \ref{lemma15} with \[Y_{n,k} := \frac{X_n^{\epsilon, q-p, \nu} - \frac{n}{\epsilon k}}n, Y_n := \frac{X_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}}n\] First, by Lemma \ref{lemma12} and \eqref{5}, for every fixed $k$, for large enough $n$ we have \begin{align*} \frac{X_n^{\epsilon, q-p, \nu} - \frac{n}{\epsilon k}}n &\leq \frac{X_n^{\epsilon, q-p, \nu} - \frac{D}{\epsilon}}n \\ &\leq \frac{X_n^{\epsilon, q-p, \nu} - \frac{1}{\epsilon}|| (\lfloor nq \rfloor - \lfloor np \rfloor) - \lfloor n (q-p) \rfloor ||_1}n \\ &\leq \frac{X_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}}n \end{align*} i.e. $Y_{n,k} \leq Y_n$ which is precisely (i). Also, Theorem \ref{thm9} gives (ii): for every fixed $k$, a.s. \[\lim_{n\rightarrow \infty} Y_{n,k} = \lim_{n\rightarrow \infty} \frac{X_n^{\epsilon, q-p, \nu} - \frac{n}{\epsilon k}}n = \sup_n \frac{EX_n^{\epsilon, q-p, \nu}}n - \frac1{\epsilon k}= \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q-p, \nu}}n - \frac1{\epsilon k} \] \[= \sup_n EY_{n,k} = \lim_{n \rightarrow \infty} EY_{n,k} \] Note that this implies \begin{equation}\label{7} \sup_n \frac{EX_n^{\epsilon, q-p, \nu}}n = \sup_n \sup_k EY_{n,k} = \sup_k \sup_n EY_{n,k} = \sup_k \lim_{n \rightarrow \infty} EY_{n,k} = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q-p, \nu}}n \end{equation} It remains to show (iii). Note that taking expectations in (i) immediately gives \begin{equation}\label{8} \sup_n \sup_k EY_{n,k} = \sup_k \sup_n EY_{n,k} \leq \sup EY_n \end{equation} We show this is an equality. 
By Lemma \ref{lemma12} and \eqref{6}, for large enough $n$ we have \begin{align*} X_n^{\epsilon,\frac{\lfloor nq \rfloor - \lfloor np \rfloor}{n}, \nu} & \leq X_n^{\epsilon, \frac{n+c}n (q-p), \frac{n+c}n\nu} + \frac1{\epsilon} \bigg(||\lfloor (n+c) (q-p) \rfloor - (\lfloor nq \rfloor - \lfloor np \rfloor) ||_1+ c||\nu ||_{TV} \bigg) \\ &\leq X_{n+c}^{\epsilon, (q-p), \nu} + \frac{c||q-p||_1+c||\nu||_{TV}}{\epsilon} \end{align*} Dividing by $n$, taking expectations and taking the supremum over $n$ we get by \eqref{7} \begin{equation}\label{eqn9} \sup_n EY_n \leq \sup_n \frac{EX_{n+c}^{\epsilon, q-p, \nu}}n = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q-p, \nu}}n = \sup_n \sup_k EY_{n,k} \end{equation} which when combined with \eqref{8} gives (iii). Applying Lemma \ref{lemma15}, we get \[ \lim_{n \rightarrow \infty} Y_n = \lim_{n \rightarrow \infty} EY_n = \sup_n EY_n \ \mbox{a.s.}\] i.e. \[\lim \limits_{n \rightarrow \infty} \frac{X_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}}n = \sup \limits_{n } \frac{EX_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}} n = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, \frac{\lfloor nq \rfloor - \lfloor np \rfloor}n, \nu}} n \ \mbox{a.s.}\] Furthermore, by \eqref{7}, \eqref{8} and \eqref{eqn9}, we have \[\sup_n EY_n = \sup_n \sup_k EY_{n,k} = \sup_n \frac{EX_n^{\epsilon, q-p, \nu}}n = \lim_{n \rightarrow \infty} \frac{EX_n^{\epsilon, q-p, \nu}}n \] Combining this with \eqref{EQ3}, we get that $\frac{d^{\epsilon}((np, n\xi), (nq, n\nu))}n$ converges in probability to the a.s. limit of $ \frac{d^{\epsilon}((\vec{0}, 0), (nq-np, n\nu - n\xi))}n$. This completes the proof of Theorem \ref{thm7} and of translation-invariance of the limit shape. \end{proof} \subsection{Grid Entropy as a Directed Norm} In the previous sections we showed the existence of the limit shape of $d^{\epsilon}$. We now take the infimum as $\epsilon \downarrow 0$ and we show that the result is a directed metric with negative sign that gives rise to a norm, which we call grid entropy. \begin{manualtheorem}{\ref{thm8}} For $\epsilon > 0$, $\nu, \xi \in \mathcal{M}_+$ and $p, q \in {\mathbb{R}} ^D$ define $$\widetilde{d}^{\epsilon}((p, \xi), (q, \nu)) := \lim \limits_{n \rightarrow \infty} \frac{d^{\epsilon}((np,n\xi), (nq, n\nu))}n, \ \mbox{and} $$ $$\widetilde{d}((p, \xi), (q, \nu)) := \inf_{\epsilon > 0} \widetilde{d}^{\epsilon}((p, \xi), (q, \nu)) \in [-\infty, \infty)$$ Then each $\widetilde{d}^{\epsilon}$ as well as $\widetilde{d}$ are directed metrics with negative sign on $ {\mathbb{R}} ^D \times \mathcal{M}_+$. \end{manualtheorem} \begin{remark} For any $p, q \in {\mathbb{R}} ^D, \epsilon > 0$ and $\nu, \xi \in \mathcal{M}_+$, $d^{\epsilon}((np, n\xi), (nq, n\nu))$ is monotone decreasing as $\epsilon \downarrow 0$ so \[\widetilde{d}((p, \xi), (q, \nu)) = \inf_{\epsilon > 0} \widetilde{d}^{\epsilon}((p, \xi), (q, \nu)) = \lim_{\epsilon \downarrow 0} \widetilde{d}^{\epsilon}((p, \xi), (q, \nu))\] By Lemma \ref{lemma10}, for every $n$ and $\epsilon > 0$, \[d^{\epsilon}((np, n\xi), (nq, n\nu)) \in [-\infty, ||\lfloor nq \rfloor - \lfloor np \rfloor||_1 \log D]\] so it follows that \[ \widetilde{d}((p, \xi), (q, \nu)) \in [-\infty, ||q-p||_1 \log D]\] Once we prove that our two definitions of grid entropy are equivalent, this bound will be improved.
\end{remark} \begin{remark} As was the case with Theorem \ref{thm9}, the limit $\widetilde{d}((p, \xi), (q, \nu))$ is trivially $-\infty$ if $q-p \notin {\mathbb{R}} ^D_{\geq 0}$ or if $q=p$ and $\nu \neq \xi$, and it is trivially 0 if $q=p$ and $\nu = \xi$. \end{remark} \begin{proof} As noted above, \[\widetilde{d}^{\epsilon}((p, \xi), (p, \xi)) = \widetilde{d}((p, \xi), (p, \xi)) = 0 \ \forall \epsilon > 0\] It remains to prove the reverse triangle inequality. Let $p,q,r \in {\mathbb{R}} ^D$ and $\nu,\xi, \eta \in \mathcal{M}_+$ and consider any $\epsilon > 0$ and any $n$. If $q-p \notin {\mathbb{R}} ^D_{\geq 0}$ or $r-q \notin {\mathbb{R}} ^D_{\geq 0}$ then the following inequality holds trivially (because the right-hand side is $-\infty$) \begin{equation}\label{9} d^{ \epsilon} ((np, n\xi),(nr, n\eta)) \geq d^{\epsilon} ((np, n\xi),(nq, n\nu)) + d^{\epsilon} ((nq, n \nu),(nr, n\eta)) \end{equation} Now suppose $r-q, q-p \in {\mathbb{R}} ^D_{\geq 0}$. Given paths $\pi: \lfloor np \rfloor \rightarrow \lfloor nq \rfloor$, $\pi': \lfloor nq \rfloor \rightarrow \lfloor nr \rfloor$, we concatenate them to obtain a unique path $\pi \cdot \pi': \lfloor np \rfloor \rightarrow \lfloor nr \rfloor$ with unnormalized empirical measure $\mu_{\pi \cdot \pi'} = \mu_{\pi} + \mu_{\pi'}$. By the subadditivity of the Levy-Prokhorov metric (Lemma \ref{lemma4}), \[\rho(\mu_{\pi \cdot \pi'}, n(\eta-\xi)) = \rho(\mu_{\pi} + \mu_{\pi'}, n(\nu-\xi) + n(\eta-\nu)) \leq \rho(\mu_{\pi}, n(\nu-\xi)) + \rho(\mu_{\pi'}, n(\eta-\nu))\] so \begin{align*} &\bigg(\sum_{\pi: \lfloor np \rfloor \rightarrow \lfloor nq \rfloor} e^{-\frac1{\epsilon} \rho(\mu_{\pi}, n(\nu - \xi) ) } \bigg) \bigg( \sum_{\pi': \lfloor nq \rfloor \rightarrow \lfloor nr \rfloor} e^{-\frac1{\epsilon} \rho(\mu_{\pi'}, n(\eta - \nu) ) } \bigg) \\ &\leq \bigg(\sum_{\pi''': \lfloor np \rfloor \rightarrow \lfloor nr \rfloor} e^{-\frac1{\epsilon} \rho(\mu_{\pi'''}, n(\eta-\xi)) } \bigg) \end{align*} It follows that \eqref{9} holds. Dividing \eqref{9} by $n$ and taking the limit (in probability) as $n \rightarrow \infty$ we get \[\widetilde{d}^{\epsilon} ((p, \xi),(r, \eta)) \geq \widetilde{d}^{\epsilon} ((p, \xi),(q, \nu)) + \widetilde{d}^{\epsilon} ((q, \nu),(r, \eta))\] so $\widetilde{d}^{\epsilon}$ is a directed metric with negative sign. Taking the limit as $\epsilon \rightarrow 0^+$, we still obtain a directed metric with negative sign. \end{proof} We proceed to show that $\widetilde{d}$ gives rise to a directed norm with negative sign. \begin{theorem}\label{thm16} \begin{enumerate}[label=(\roman*)] \item[] \item Each $\widetilde{d}^{\epsilon}$ is translation-invariant and positive-homogeneous. So is $\widetilde{d}$. \item For $q \in {\mathbb{R}} ^D, \nu \in \mathcal{M}_+$ define the grid entropy with respect to $(q, \nu)$ to be \[|| (q, \nu) || := \widetilde{d}((\vec{0}, 0), (q, \nu)) \] Then this is a directed norm with negative sign on $ {\mathbb{R}} ^D \times \mathcal{M}_+$. \end{enumerate} \end{theorem} \begin{remark} From before, $||(q, \nu)||$ is $-\infty$ if $q \notin {\mathbb{R}} ^D_{\geq 0}$ or if $q = \vec{0}$ and $\nu \neq 0$, and it is 0 if $q = \vec{0}$ and $\nu = 0$. \end{remark} \begin{remark} A directed metric with negative sign is clearly concave. Thus each $ \widetilde{d}^{\epsilon}$ as well as $||(\cdot, \cdot)||$ are concave functions on their respective domains. \end{remark} \begin{remark} These properties of grid entropy also follow from \cite{rassoul2014quenched}. \end{remark} \begin{proof} (i) Fix $\epsilon > 0$.
We already showed that $\widetilde{d}^{\epsilon}$ is translation-invariant while proving Theorem \ref{thm7}. By translation-invariance, it suffices to show that $\widetilde{d}^{\epsilon}((\vec{0},0), (q,\nu))$ is positive-homogeneous. Consider any $c = \frac{a}{b} \in {\mathbb{Q}} _{> 0}$ with $a, b \in {\mathbb{N} }$. Then \[\widetilde{d}^{\epsilon}((\vec{0}, 0) , (cq, c\nu))= \lim_{n \rightarrow \infty} \frac{d^{\epsilon} ((\vec{0}, 0), (cnq, cn\nu))}{n} \ \mbox{a.s.}\] Looking at the subsequence consisting of multiples $n = mb$ of $b$, we get \[\widetilde{d}^{\epsilon}((\vec{0}, 0), (cq, c\nu))= \lim_{m \rightarrow \infty} \frac{d^{\epsilon} ((\vec{0}, 0), (amq, am\nu))}{mb} \ \mbox{a.s.}\] But each $am \in {\mathbb{N} } $ so \[\widetilde{d}^{\epsilon}((\vec{0}, 0), (cq, c\nu))=\frac{a}b \lim_{n \rightarrow \infty} \frac{d^{\epsilon} ((\vec{0}, 0), (nq, n\nu))}{n} = c \widetilde{d}^{\epsilon}((\vec{0}, 0), (q,\nu)) \ \mbox{a.s.}\] Thus $\widetilde{d}^{\epsilon}$ is positive-homogeneous for rational factors. Now consider any $c \in {\mathbb{R}} _{> 0}$ and take sequences $a_k, b_k \in {\mathbb{Q}} _{> 0}, a_k \uparrow c, b_k \downarrow c$. Consider any $n$, and let us look at \[X_n^{\epsilon, cq, c\nu} = d^{\epsilon}((\vec{0},0), (ncq, nc\nu))\] By Lemma \ref{lemma12}, \[ X_n^{\epsilon, a_kq, a_k \nu } - \frac1{\epsilon} (n|c-a_k| \cdot (||q||_1 + ||\nu||_{TV}) ) \leq X_n^{\epsilon, cq, c\nu} \leq X_n^{\epsilon, b_kq, b_k \nu } + \frac1{\epsilon} (n|b_k-c| \cdot (||q||_1 + ||\nu||_{TV}) ) \] Dividing by $n$ and taking a.s. limits, and applying homogeneity for positive rational factors we get \begin{align*} &a_k \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) - \frac{|c-a_k| \cdot (||q||_1 + ||\nu||_{TV})}{\epsilon} \\ & \leq \widetilde{d}^{\epsilon}((\vec{0}, 0), (cq, c \nu)) \\ &\leq b_k \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) + \frac{|b_k-c| \cdot (||q||_1 + ||\nu||_{TV})}{\epsilon} \end{align*} Taking $k \rightarrow \infty$ gives us \[\widetilde{d}^{\epsilon}((\vec{0}, 0), (cq, c \nu)) = c\widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu))\] so $\widetilde{d}^{\epsilon}$ is homogeneous with respect to any positive real factor. Taking the infimum over $\epsilon > 0$ we get that $\widetilde{d}$ is translation-invariant and positive\hyp{}homogeneous. (ii) Follows directly from (i). \end{proof} \subsection{Direction-free Grid Entropy} We now wish to develop a grid entropy for the case where we no longer restrict ourselves to paths $\pi: \vec{0} \rightarrow \lfloor nq \rfloor$ for a given direction $q \in {\mathbb{R}} ^D$, and instead look at all length $\lfloor nt \rfloor$ paths from $\vec{0}$ for a given size parameter $t \geq 0$. Another way of putting this is that we look at paths from the origin to the hyperplane (``level'') $x_1 +x_2+\ldots + x_D = \lfloor nt \rfloor$. Recall that the set of all such paths is denoted $\mathcal{P}_{\lfloor nt \rfloor}(\vec{0})$. If we try to simply repeat our previous argument, we run into a dead end because we are no longer in a superadditive setting. The solution is to observe that the distances $\widetilde{d}^{\epsilon}((\vec{0},0), (q, \nu))$ are maximized over $q \in {\mathbb{R}} ^D_{\geq 0}$ with $||q||_1 = t$ by $q = t(\frac1D, \ldots, \frac1D) := t \ell$. This intuitively makes sense, since this is the direction with the most NE paths. \begin{lemma}\label{maximalLemma} Fix $t \geq 0, \nu \in \mathcal{M}_t$.
Then \[\sup_{q \in {\mathbb{R}} ^D_{\geq 0}: ||q||_1 = t} \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) = \widetilde{d}^{\epsilon}((\vec{0}, 0), (t\ell, \nu)) \ \forall \epsilon > 0 \ \mbox{and}\ \sup_{q \in {\mathbb{R}} ^D_{\geq 0}: ||q||_1 = t} ||(q, \nu)|| = ||(t\ell, \nu)||\] \end{lemma} \begin{remark} In Section \ref{section4} we show that $||q||_1 = ||\nu||_{TV}$ is a necessary condition for $||(q,\nu)||$ to be finite, so it makes sense that we only take the supremum over $q \in {\mathbb{R}} ^D_{\geq 0}$ with $||q||_1 = t$. \end{remark} \begin{proof} This is an easy consequence of the symmetries of the grid and the concavity of $\widetilde{d}^{\epsilon}$ and direction-fixed grid entropy. We focus on the proof for $\widetilde{d}^{\epsilon}$; the argument for grid entropy goes the same way. Fix $\epsilon > 0$. By positive-homogeneity and since the case $t = 0$ is trivial, we may assume $t = 1$. Suppose there exists $q \in {\mathbb{R}} ^D_{\geq 0}$ s.t. $||q||_1 = 1$ and \begin{equation}\label{label10} \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) > \widetilde{d}^{\epsilon}((\vec{0}, 0), (\ell, \nu)) \end{equation} Among such $q$ pick one that maximizes the number of coordinates which are equal to $\frac{1}D$. Thus there are distinct $1 \leq i,j \leq D$ s.t. $q_i < \frac{1}D < q_j$, so we can write $\frac{1}D$ as a convex combination of $q_i, q_j$: \begin{equation}\label{label11} \frac{1}D = wq_i + (1-w) q_j \ \mbox{for some} \ w \in (0,1) \end{equation} Let $\sigma_{ij}(q)$ be $q$ with $q_i, q_j$ swapped. By symmetry of the grid, \[\widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) = \widetilde{d}^{\epsilon}((\vec{0}, 0), (\sigma_{ij}(q), \nu))\] hence by concavity of $\widetilde{d}^{\epsilon}$, \begin{align*} \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) &= w \widetilde{d}^{\epsilon}((\vec{0}, 0), (q, \nu)) + (1-w) \widetilde{d}^{\epsilon}((\vec{0}, 0), (\sigma_{ij}(q), \nu)) \\ & \leq \widetilde{d}^{\epsilon}((\vec{0}, 0), (wq + (1-w)\sigma_{ij}(q), \nu)) \end{align*} But $wq + (1-w)\sigma_{ij}(q)$ only changes the coordinates of $q$ in positions $i,j$, with $q_i$ becoming $\frac{1}D$ by \eqref{label11}. Thus we have found a $q$ satisfying \eqref{label10} that has at least one more coordinate equal to $\frac{1}D$ than our previous $q$, which we had assumed had the maximal number of such coordinates. This is a contradiction, so no $q$ satisfying \eqref{label10} exists and the supremum is attained at $q = \ell$, as desired. \end{proof} We now use this useful fact along with the compactness of $\{q \in {\mathbb{R}} ^D_{\geq 0}: ||q||_1 = t\}$ to show that $||(t\ell, \nu)||$ is the desired direction-free grid entropy of length $t$. \begin{theorem}\label{thmNoDir} Fix $t \geq 0, \nu \in \mathcal{M}_t$. For any $\epsilon > 0$ we have \begin{align*} \widetilde{d}^{\epsilon}((\vec{0},0), (t\ell, \nu)) &= \lim_{n \rightarrow \infty} \sup_{q \in {\mathbb{R}} ^D_{\geq 0}: ||q||_1=t} \frac1n \log \sum_{\pi \in \mathcal{P}(\vec{0}, \lfloor nq \rfloor)} e^{-\frac{n}{\epsilon} \rho(\frac1n \mu_{\pi}, \nu)} \\ &= \lim_{n \rightarrow \infty} \frac1n \log \sum_{\pi \in \mathcal{P}_{ \lfloor nt \rfloor}(\vec{0})} e^{-\frac{n}{\epsilon} \rho(\frac1n \mu_{\pi}, \nu)} \end{align*} a.s. \end{theorem} \begin{proof} The statement is trivial when $t=0$ so we may assume $t > 0$. We focus on the first equality.
By Lemma \ref{maximalLemma} and the trivial fact that $\sup \limsup \leq \limsup \sup$ in general, we immediately get \[\widetilde{d}^{\epsilon}((\vec{0},0), (t\ell, \nu)) \leq \limsup_{n \rightarrow \infty} \sup_{q \in {\mathbb{R}} ^D_{\geq 0}: ||q||_1=t} \frac1n \log \sum_{\pi \in \mathcal{P}(\vec{0}, \lfloor nq \rfloor)} e^{-\frac{n}{\epsilon} \rho(\frac1n \mu_{\pi}, \nu)} \ \mbox{a.s.}\] Suppose equality does not hold on some event of positive probability. Thus there exists $\delta > 0$ s.t. \begin{equation} \label{EQ13} \widetilde{d}^{\epsilon}((\vec{0},0), (t\ell, \nu)) + 7\delta < \limsup_{n \rightarrow \infty} \sup_{q \in {\mathbb{R}} ^D_{\geq
rotating convection systems are used to make predictions about their convective behaviors in Section 2, and numerical models of global ocean convection characterizing the predicted regimes are presented in Section 3. Implications for icy satellites are explored in Section 4, and the challenges of extrapolating to realistic ocean conditions are discussed in Section 5. \section{Rotating Convection Scaling Laws} Convection characteristics depend critically on the relative importance of rotation, which tends to organize the fluid into columns aligned with the rotation axis, increase the critical Rayleigh number, constrain heat transfer efficiency, and drive zonal flows \citep[e.g.,][]{AurnouEA15}. \citet{ChengEA18} combine asymptotic predictions, laboratory experiments, and numerical simulations to review the behavior of rotating thermal convection as a function of the dimensionless Ekman, Rayleigh, and Prandtl numbers. The Ekman number, $E=\nu/(2\Omega D^2)$, represents the ratio of rotational to viscous timescales; thus, low $E$ signifies rapid rotation rates in planetary interiors. The Rayleigh number, $Ra = \alpha g \Delta T D^3 / (\nu \kappa)$, is the ratio of the thermal diffusion time to the viscous buoyant rise time; large $Ra$ denotes strong buoyancy forcing. The Prandtl number, $Pr = \nu / \kappa$, defines the ratio of thermal to viscous diffusion timescales. Here, $\nu$ is kinematic viscosity, $\Omega$ is rotation rate, $D$ is ocean thickness, $\alpha$ is thermal expansivity, $g$ is gravitational acceleration, $\Delta T$ is superadiabatic temperature contrast, and $\kappa$ is thermal diffusivity. \citet{ChengEA18} identify five rotating convection regimes: columnar, plumes, geostrophic turbulence (GT), unbalanced boundary layer (UBL), and nonrotating heat transfer (NR) (see Fig.~\ref{fig:regimes}). Near onset, convection in the bulk fluid manifests as Taylor columns aligned with the rotation axis (``columnar'' regime). With increased buoyancy forcing, the columns begin to deteriorate such that they no longer extend fully across the fluid layer (``plumes'' regime). Convection eventually becomes vigorous enough for strong mixing in the bulk fluid (``geostrophic turbulence'' regime). Despite the disappearance of coherent vertical structures, the Coriolis force still imposes a vertical stiffness on the flow field. These regimes are shown collectively in Figure~\ref{fig:regimes}. The influence of rotation is lost locally at Rayleigh numbers exceeding $Ra_{GTU}$, which corresponds to the breakdown of geostrophy (balance between Coriolis and pressure gradient forces) in the thermal boundary layers (``unbalanced boundary layers'' regime). For Rayleigh numbers greater than $Ra_{UNR}$, the influence of rotation is lost globally (``nonrotating heat transfer'' regime). As reviewed by \citet{ChengEA18}, significant debate exists in the community on the scaling laws for $Ra_{GTU}$ and $Ra_{UNR}$. Rather than assume a single scaling law for each transition, I consider upper and lower bound scaling laws for each regime and highlight the resulting range of parameter space for each regime transition in Figure~\ref{fig:regimes}. Using this regime diagram, one can predict the convective regime of a system if the Ekman, Rayleigh, and Prandtl numbers can be estimated (see Table~\ref{tab:physparam}). The Prandtl number depends only on fluid properties and is estimated to be $Pr \sim 10$ for the satellite oceans \citep{AbramsonEA01,NayarEA16}.
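As a rough illustration (using representative values for cold water rather than numbers from the references above), $\nu \approx 1.8 \times 10^{-6}$ m$^2$ s$^{-1}$ and $\kappa \approx 1.4 \times 10^{-7}$ m$^2$ s$^{-1}$ give $Pr = \nu/\kappa \approx 13$, consistent with this estimate.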
The Ekman number is also relatively easy to calculate since it only requires assumptions about the fluid viscosity, rotation rate, and ocean thickness. I use the internal structure models of \citet{VanceEA18} (see their Tables 5-8) to obtain ocean thicknesses $D_{ocean}$ for six combinations of possible outer ice shell thicknesses and ocean compositions for each satellite. Enceladus' ocean has the largest Ekman number of $E \sim \mathcal{O}(10^{-10})$, while the ocean of Ganymede has the lowest at $E \sim \mathcal{O}(10^{-13})$. The Rayleigh number is more difficult to estimate because it requires knowledge of the superadiabatic temperature contrast $\Delta T$. One can derive an estimate, however, using the relationship between the Rayleigh number and the convective heat transfer efficiency as measured by the Nusselt number, $Nu = q D / (\rho C_p \kappa \Delta T)$. Following \citet{SoderlundEA14}, I leverage $Nu$--$Ra$ scalings to solve for $\Delta T$ algebraically and consider both non-rotating and rapidly rotating scaling laws to give end-member estimates. More recent scaling laws for rotating spherical shells are used here, however. In the non-rotating regime, heat transfer is expected to be independent of the Ekman number and follow the theoretical limit of $Nu = 0.07 Ra^{1/3}$ \citep[e.g.,][]{GastineEA15}. Conversely, in the rapidly rotating limit, heat transfer is predicted to follow $Nu = 0.15 Ra^{3/2} (2E)^2$ \citep{GastineEA16}. As a result, the temperature contrast is given by \begin{equation} \Delta T = 7.3 \left (\frac{\nu}{\alpha g \kappa^2} \right )^{1/4} \left (\frac{q}{\rho C_p} \right )^{3/4} \end{equation} in the non-rotating regime and by \begin{equation} \Delta T = 2.1 \left ( \frac{\Omega^4 \kappa}{\rho^2 C_p^2 \nu \alpha^3 g^3} \right )^{1/5} (q^2 D)^{1/5} \end{equation} in the rapidly-rotating regime. Here, I assume the heat flux $q$ from each of the six interior models per satellite, noting that the lower $q$ estimates are associated with thicker ice Ih shells \citep{VanceEA18}. Although these values are generally consistent with the literature, the minimum heat fluxes tend to exceed those predicted for radiogenic heating in the mantle at present day \citep[e.g.,][]{BlandEA09} and the upper bound for Ganymede is appropriate for a past active period \citep[e.g.,][]{DombardMcKinnon01}. If $q$ is decreased by an order of magnitude, the lower bounds for $\Delta T$, and therefore $Ra$, only decrease by a factor of 2.5 via the rapidly-rotating scaling. Our global estimates also neglect spatial variations in heat flow that may be locally strong at Enceladus \citep{ChobletEA17}, for example, where narrow mantle upwellings can reach $1-5$ W/m$^2$ (the global average, however, is in line with \citet{VanceEA18}). If $q$ is increased by an order of magnitude, the $Ra$ upper bounds increase by a factor of 5.6 via the non-rotating scaling. As shown in Table~\ref{tab:physparam}, Rayleigh numbers span from $Ra \sim \mathcal{O}(10^{16})$ for the lower Enceladus limit to $Ra \sim \mathcal{O}(10^{24})$ for the upper Ganymede limit. An important caveat to note here, however, is that these estimates do not include compositional contributions due to salinity gradients. This simplification may be especially significant for Titan since the ocean is hypothesized to have a high concentration of dissolved salts \citep[][]{BalandEA14, MitriEA14}. Figures~\ref{fig:regimes} and S1 plot the resulting estimates of the Ekman and Rayleigh numbers on the convective regime diagram.
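For orientation, an order-of-magnitude example with illustrative round numbers (not the \citet{VanceEA18} model values): taking $\nu \approx 10^{-6}$ m$^2$ s$^{-1}$, a rotation period of a few days ($\Omega \approx 2 \times 10^{-5}$ s$^{-1}$), and $D \approx 100$ km gives $E = \nu/(2\Omega D^2) \approx 2 \times 10^{-12}$, within the $10^{-13}$--$10^{-10}$ range quoted above.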
The oceans of Titan, Europa, and Ganymede are predicted to behave similarly since their estimated parameter spaces have considerable overlap. Since these estimates fall near the lower boundary between the UBL and NR regimes, I hypothesize that rotational effects do not dominate the turbulent local-scale convective flows. Conversely, rotation likely has a stronger influence on the ocean of Enceladus, which is also predicted to be primarily in the UBL regime, although extending into the GT transition. \section{Numerical Convection Models} Numerical models of global ocean convection are next used to characterize the currents and heat flow patterns. I use the pseudospectral code MagIC, version 5.6 \citep[e.g.,][]{Wicht02,GastineWicht12} to simulate 3D, time-dependent, thermal convection of a Boussinesq fluid in a rotating spherical shell with geometry characterized by the ratio of inner to outer shell radii, $\chi = r_i/r_o = 0.9$. The system is further defined by the Ekman, Rayleigh, and Prandtl numbers. Following \citet{SoderlundEA14}, the boundaries are impenetrable, stress-free, and isothermal. Compositional buoyancy, spatial variations in mantle heat flow, and mechanically driven flows are neglected for simplicity. Seven models that span a convective regime space consistent with the icy satellite ocean predictions are considered (Fig.~\ref{fig:regimes}). In the first series, the Rayleigh and Prandtl numbers are fixed to $Ra = 3.4 \times 10^7$ and $Pr=1$, and the Ekman number is increased from $E = 3.0 \times 10^{-5}$ to $E = 7.5 \times 10^{-4}$. The second series of models increases the Rayleigh number from $Ra = 2.4 \times 10^6$ to $Ra = 3.4 \times 10^7$ for fixed $E=1.5 \times 10^{-4}$ and $Pr=1$; higher $Ra$ values were not pursued due to computational limitations. Hyperdiffusivities are not employed \citep[cf.][]{ZhangSchubert00}. The numerical grids have 73 radial points, 320 latitudinal points, and 640 longitudinal points for cases with $E \geq 7.5 \times 10^{-5}$ and 65 radial points, 640 latitudinal points, and 1280 longitudinal points for the $E=3.0 \times 10^{-5}$ case. Each model was initiated with a random temperature perturbation or restarted from a lower $E$ or $Ra$ case. Figure~\ref{fig:Vel} shows the mean velocity and temperature fields of each model across the $E$ parameter sweep, while Figure~\ref{fig:HF} shows the normalized heat flux along the outer boundary. In the highest Ekman number case (Fig.~\ref{fig:Vel}a), the zonal and radial flows have comparable magnitudes reminiscent of non-rotating convection. The radial flows have no preferred spatial orientation, while the zonal flows are concentric due to viscous transport of angular momentum \citep{BrunPalacios09}. Ocean temperatures are nearly uniform away from the boundaries, leading to localized heat flux perturbations along the ice-ocean interface (Fig.~\ref{fig:HF}a). When the Ekman number is decreased (Fig.~\ref{fig:Vel}b-c), homogenization of absolute angular momentum leads to zonal flows that are retrograde (westward) at large cylindrical radii and prograde (eastward) closer to the rotation axis \citep[e.g.,][]{Gilman78,AurnouEA07,GastineEA13}. The mean radial flows become more organized with a pronounced upwelling near the equator and downwellings at mid-latitudes, essentially forming Hadley-like meridional circulation cells in each hemisphere.
Upon further decreasing of the Ekman number (Fig.~\ref{fig:Vel}d), multiple zonal jets that alternate in direction develop, and the mean radial flows retain an equatorial upwelling that becomes more aligned with the rotation axis. Both mean zonal and radial flow speeds decrease by a factor of five compared to the $E=[3.0, 1.5] \times 10^{-4}$ cases. In all three of these models (Fig.~\ref{fig:Vel}b-d), ocean temperatures are characterized by thin thermal boundary layers and warmer equatorial waters. Heat flux peaks at low latitudes (with minima at mid-latitudes) due to the mean overturning circulations with secondary peaks forming at high latitudes due to turbulent heat transfer associated with vertically ascending plumes (Fig.~\ref{fig:HF}b-d). In the lowest Ekman number case (Fig.~\ref{fig:Vel}e), Coriolis forces organize the flow into narrow structures that are aligned with the rotation axis. Reynolds stresses associated with these columns drive prograde equatorial flow with jets that alternate in direction at higher latitudes due to correlation locally between the azimuthal and cylindrically radial flow components \citep[e.g.,][]{AurnouOlson01,Christensen01,HeimpelEA05,GastineEA14}. Ocean temperatures are not well-mixed, especially at low latitudes, due to the axialized convective flows and strong equatorial jet \citep[e.g.,][]{AurnouEA08}. Consequently, heat flow along the ice-ocean interface peaks at high latitudes with minima near the equator (Fig.~\ref{fig:HF}e). A similar trend from three-jet zonal flows, equatorial upwelling, and peak low latitude heat flux to multiple zonal jets, axialized convective flows, and peak high latitude heat flux is found as $Ra$ is decreased (Figs.~S2 and S3). \section{Implications for Icy Satellites} In order to apply these models to icy satellite oceans, I assume that the velocity and temperature patterns extrapolate to more extreme parameters following the relative distance between regime boundaries (Fig.~S1). Enceladus' ocean may then be represented by the $E=[3.0 \times 10^{-5}, 7.5 \times 10^{-5}]$ models since both fall approximately between the GT-UBL regime transition and the lower bound of the UBL-NR transition. In contrast, the oceans of Europa, Ganymede, and Titan depend on the UBL-NR transition scaling used. If $Ra^{RoC=1}_{UNR}=E^{-2}Pr$ \citep[e.g.,][]{Gilman77} is assumed, then all of these oceans are near the center of UBL regime such that the $E=[7.5 \times 10^{-5},1.5 \times 10^{-4}]$ models would be most appropriate for these satellites. If the transition instead follows $Ra^{Ga16}_{UN
R}=100(2E)^{-12/7}$ \citep[][]{GastineEA16}, then the $E=3.0 \times 10^{-4}$ model would best characterize Europa and the $E=[3.0 \times 10^{-4}, 7.5 \times 10^{-4}]$ models would be most pertinent to Titan and Ganymede. Below, I discuss the implications for each satellite. Regions with high heat flow are presumed to undergo enhanced melting, leading to ice shell thickness variations. However, large thickness disparities can set up a phenomenon known as an ice pump \citep[e.g.,][]{LewisPerkin86}, where pressure-induced melting occurs where the ice shell is thick and the resulting meltwater re-accretes where the ice shell is thin, effectively reducing topography along the base of the ice shell. Since the accretion process is very efficient at excluding impurities in low temperature environments \citep[e.g.,][]{MooreEA94,EickenEA94}, this marine ice may be salt-depleted compared to the overlying ice. The ice may, therefore, have positive buoyancy due to the associated thermal and compositional density anomalies and rise toward the surface in the form of convective diapirs \citep[e.g.,][]{PappalardoBarr04,SoderlundEA14}. Alternatively, if the ice pump mechanism is not efficient, the ice shell may be more unstable to convection where it is relatively thick \citep{TravisEA12,Goodman14}. For Enceladus, I predict the zonal flows to be characterized by multiple jets that alternate in direction (Fig.~\ref{fig:Vel}A d-e). Converting model velocities to dimensional units, $U = \Omega D Ro$, I expect peak zonal speeds of nearly 1 m/s, depending on the ocean thickness assumed. Meridional circulations are predicted to either be strongly aligned with the rotation axis with speeds up to a few mm/s (Fig.~\ref{fig:Vel}B e) or be concentrated in a low latitude upwelling with speeds up to a few cm/s (Fig.~\ref{fig:Vel}B d). As a result, heat flow along the ice-ocean interface has distinct peaks either at the poles (Figs.~\ref{fig:Vel}C e, \ref{fig:HF}e) or at the equator with secondary peaks at the poles (Figs.~\ref{fig:Vel}C d, \ref{fig:HF}d). Measurements of Enceladus' shape, gravitational field, and librational motions show that the ice shell is thin below the south pole and thick at the equator, with an intermediate thickness at the north pole \citep[e.g.,][]{CadekEA16,BeutheEA16}. Inverting these measurements to infer the oceanic heat flux along the ice-ocean interface, \citet[][]{CadekEA19} find peak flux near the poles with a minimum at the equator. This pattern implies upwelling of warm water at the poles and downwelling of cool water at low latitudes, which may be caused by ocean convection (Fig.~\ref{fig:Vel}e) and/or be a consequence of the pattern of tidal heating in the mantle \citep{ChobletEA17}. Considering the former, the $E = 3.0 \times 10^{-5}$ model is appropriate if a low internal heat flux is assumed \citep[in contrast to][]{ChobletEA17} or if the thermal expansion coefficient in our calculations is overestimated, since $\alpha$ trends towards zero with decreasing salinity and becomes negative for freshwater \citep[e.g.,][see also Table~\ref{tab:physparam}]{NayarEA16,Feistel10}; both effects reduce the effective Rayleigh number and make rotational effects more important. Europa's ocean is predicted to have three zonal jets with retrograde equatorial flow that can reach m/s speeds (Fig.~\ref{fig:Vel}A b-c) or multiple zonal jets with retrograde equatorial flow and reduced speeds (Fig.~\ref{fig:Vel}A d), depending on the scaling law assumed. 
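As an illustrative conversion for Europa (the numbers here are representative assumptions rather than model outputs), a rotation rate $\Omega \approx 2.1 \times 10^{-5}$ s$^{-1}$ (rotation period of $\approx$3.55 days) and an assumed ocean thickness $D \approx 100$ km give, for a Rossby number of order $Ro \sim 0.5$,
\[
U = \Omega D Ro \approx \left(2.1 \times 10^{-5}~\mathrm{s^{-1}}\right)\left(10^{5}~\mathrm{m}\right)\left(0.5\right) \approx 1~\mathrm{m~s^{-1}},
\]
of the same order as the peak speeds quoted above; a thinner ocean or smaller $Ro$ would reduce this estimate proportionally.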
All Europa-relevant models, however, have an equatorial upwelling of warm water with peak speeds of roughly a few cm/s and enhanced heat transfer at low latitudes (Figs.~\ref{fig:Vel}B-C b-d, \ref{fig:HF}b-d). The surface of Europa is riddled with geologic features indicating recent activity and the potential for ocean-derived materials \citep[e.g.,][]{FigueredoGreeley03,FischerEA15}. Chaos terrains, for example, appear to be located preferentially at low latitudes with a secondary prevalence near the poles \citep{LeonardEA18}, and formation models suggest that they may be associated with upwelling diapirs \citep[e.g.,][]{SotinEA02,CollinsNimmo09,SchmidtEA11} and marine ice accretion \citep[][]{SoderlundEA14}. No large gradients in ice shell thickness have been detected \citep{NimmoEA07}, suggesting an efficient ice pump \citep[c.f.][]{Nimmo04}. Given our robust model predictions of high oceanic heat flux at low latitudes with relatively low flux at mid-latitudes, our new calculations continue to support the thermocompositional diapirism hypothesis. Given the similarities in regime predictions for Titan and Ganymede, they are considered together here. Assuming the $Ra^{Ga16}_{UNR}$ scaling and upper $Ra$ estimates, the oceans would behave akin to a non-rotating system with no coherent heat transfer patterns (Figs.~\ref{fig:Vel} C a, \ref{fig:HF}a). If the lower $Ra$ estimates are instead assumed, these satellites are predicted to have three-jet zonal flows with peak speeds up to a few m/s (depending on ocean thickness), Hadley-like circulation cells with peak speeds up to tens of cm/s (depending on ocean thickness), and maximum heat transfer near the equator (Figs.~\ref{fig:Vel} A-C b, \ref{fig:HF}b). Alternatively, for the $Ra^{RoC=1}_{UNR}$ scaling, these satellites are predicted to behave similarly to Enceladus and Europa as discussed above (Figs.~\ref{fig:Vel} A-C c-d, \ref{fig:HF}c-d), except with respect to dimensionalized flow speeds that could be considerably faster due to the larger ocean thicknesses. Looking to Titan, the satellite's surface topography shows polar depressions compared to relatively elevated low latitudes \citep[e.g.,][]{DuranteEA19} that are likely explained by ice shell thickness variations \citep{NimmoBills10,HemingwayEA13,LefevreEA14} or ice shell density variations \citep{ChoukrounSotin12}. As for Enceladus, geophysical measurements by {\it Cassini} have been used to infer the oceanic heat flux along the ice-ocean interface \citep{KvorkaEA18}. The pattern is spatially complex, but simplifies to peaks near the poles when only axisymmetric components are considered. In contrast, the ocean convection models predicted to be relevant for Titan have either no coherent heat flux pattern (Fig.~\ref{fig:HF}a), peak flux and enhanced melting near the equator (Fig.~\ref{fig:HF}b-c), or peak flux near both the equator and the poles (Fig.~\ref{fig:HF}d), none of which are consistent with the observed long-wavelength topography assuming Airy isostasy. If ocean dynamics alone are responsible, this difference implies that either (1) the melted equatorial region in the intermediate scenario was infilled with less dense marine ice to form the equatorial bulge through Pratt isostasy or (2) the ocean has a stably-stratified salinity gradient that reduces the effective buoyancy forcing of the ocean ($Ra$) such that rotational effects become sufficient to maximize heat flow and melting at the poles (Fig.~\ref{fig:HF}e). Observational constraints for Ganymede are much more limited. 
The satellite's ancient grooved terrains indicate a likely period of geologic activity in its early history \citep{Lucchita80} and the detection of hydrated salts suggests a subsurface briny layer of fluid \citep{McCordEA01}, but no clear patterns are present. Mass anomalies were measured in the northern hemisphere during a single Galileo flyby \citep{PalgutaEA06}, but the sparsity of data prohibits both characterization on a global scale and unique determination of their depth of origin. Consequently, there is no clear link at present between observations and the underlying ocean dynamics. \section{Discussion} Our results are broadly consistent with the literature. Moreover, by comparing our numerical models against those with different input parameters, we are able to assess their sensitivity to these choices. For example, the satellite oceans are predicted to have geometries characterized by $\chi$ values ranging from $0.74$ to $0.99$ (Table~\ref{tab:physparam}), compared to our models with fixed $\chi=0.90$. \cite{GastineEA13} found that anelastic columnar convection in thicker spherical shell geometries ($\chi=0.6$) is also characterized by a prograde equatorial jet with multiple, small-scale meridional circulations aligned with the rotation axis, which transitions to a regime with a retrograde equatorial jet and Hadley cell-like meridional circulations and ultimately a weakening of zonal flow speeds as the influence of rotation is decreased. Similarly, \cite{AurnouEA08} showed that heat transfer is inhibited at low latitudes and generally increases towards the poles for columnar convection in spherical shells with $\chi=[0.85, 0.9]$; this result is different from \cite{MiquelEA18}, who obtained peak equatorial heat transfer in their asymptotic models of rapidly rotating convection near onset, where the polar regions are subcritical \citep[e.g.,][]{DormyEA04}. Conversely, \cite{BrunPalacios09} and \cite{SoderlundEA13} showed that the equatorial heat transfer enhancement for less vertically stiff convection is robust for thicker shells ($\chi \leq 0.75$) and different thermal boundary conditions. Simulations with thinner spherical shells are computationally demanding and uncommon \citep[c.f.][]{DeRosaEA02}. Furthermore, studies with a thin layer of stable stratification below the outer boundary, which may be expected in regions where thermal expansivity is negative \citep[][see also Table~\ref{tab:physparam}]{MeloshEA04}, generally show similar trends \citep[e.g.,][]{HeimpelEA15}. Thus, these studies suggest that our results are not strongly sensitive to variations in ocean thickness or fluid properties with depth. Large spatial variations in ice shell thickness may, however, enhance mechanically driven flows \citep[e.g.,][]{LemasquerierEA17}, which are not considered here. The effects of different boundary conditions should also be considered. For example, we assumed stress-free mechanical boundaries in order to reduce the effects of viscosity (e.g., artificially large Ekman boundary layers) due to the large $E$ values of our models compared to the satellites \citep[Table~\ref{tab:physparam};][]{KuangBloxham97}. In models with no-slip boundary conditions, inertial effects tend to be reduced substantially and strong zonal flows can be inhibited, which can disrupt convection \citep[e.g.,][]{AurnouHeimpel04,JonesTOG_2015}. For sufficiently driven and rapidly rotating convection, however, no-slip boundaries do not necessarily have this inhibiting effect \citep{MannevilleOlson96,AubertEA01}. 
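A rough scaling estimate illustrates the Ekman layer concern (the numbers are illustrative): with the standard boundary layer scaling $\delta_{E} \sim E^{1/2} D$, no-slip boundaries at the model values $E \sim 10^{-4}$ would produce Ekman layers of relative thickness
\[
\frac{\delta_{E}}{D} \sim E^{1/2} \approx \sqrt{10^{-4}} = 10^{-2},
\]
i.e. about 1\% of the ocean depth, which is far thicker relative to $D$ than expected at the much smaller Ekman numbers of the satellite oceans (Table~\ref{tab:physparam}); stress-free boundaries avoid devoting resolution to these artificially thick layers.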
Uniform fixed temperature boundary conditions were assumed because they enable a broader comparison with the literature and across the satellites, although fixed heat flux boundary conditions may be more appropriate along the seafloor. At moderate parameters, fixed flux conditions tend to promote larger convective scales \citep{SakurabaRoberts09,HoriEA12}, and spatial variations along the boundary can influence the flow and efficiency of heat transfer, especially near the interface \citep[e.g.,][for anomalies along the outer boundary]{DietrichEA16,MoundDavies17}. At extreme (i.e. realistic) parameters, however, the solutions for both thermal conditions appear to converge for rapidly rotating convection \citep{CalkinsEA15_BC} as well as Rayleigh-B\'enard convection \citep{JohnstonDoering09}. This convergence implies that convective-scale spatial variations in boundary heat flow have a secondary influence on the interior convection \citep{CalkinsEA15_BC}. While significant effects may occur if the spatial scale of the thermal anomaly is comparable to the vertical scale of convection \citep[e.g.,][]{DaviesEA09}, it is unclear whether these effects will persist across the entire fluid depth \citep{DaviesMound19}. Future numerical work should (1) strive for more realistic Ekman and Rayleigh numbers, (2) tackle the effect of boundary conditions, especially the Stefan-type condition at the top boundary due to melting/freezing of water along the interface and fixed heat flux along the bottom boundary, (3) consider both temperature and salinity buoyancy sources \citep[e.g.,][]{VanceBrown05,Jansen16}, and (4) couple convectively and mechanically driven flows \citep[e.g.,][]{LeBarsEA15}. Future missions to the outer solar system may be able to better constrain the ocean flows and test the predictions of our calculations and convection models. Looking specifically to the Jovian system, the {\it Europa Clipper} and {\it JUICE} missions will determine the ocean thickness and salinity and may be able to place constraints on spatial variations of ice shell thickness \citep[e.g.,][]{PhillipsPappalardo14,GrassetEA13}. Ice penetrating radar will provide information on ice shell thermophysical structure and constrain ice-ocean exchange processes \citep[e.g.,][]{KalousovaEA17}, while magnetometer measurements may allow probing of ocean currents through their induction of magnetic fields \citep[e.g.,][]{Tyler11}. \acknowledgments I thank Jonathan Aurnou, Baptiste Journaux, and Steve Vance for their helpful comments as well as Gabriel Tobie and Christophe Sotin for their constructive reviews. This work was supported by NASA Grant NNX14AR28G. Computational resources were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. The MagIC code is publicly available at https://magic-sph.github.io/contents.html. All data are provided within the publication pages.
\section{Introduction} The $n$-dimensional hypercube $Q_{n}$ has vertex-set the power set ${\cal P}\left(\left\{ 1,\dots,n\right\} \right)$ with metric $d\left(x,y\right)=\left|x\Delta y\right|$. For a subset $A$ of the hypercube $Q_{n}$ define the \textit{neighbourhood} of $A$ to be the set $N\left(A\right)=\left\{ x\in Q_{n}:\,d\left(x,A\right)\leq1\right\} $, where $d\left(x,A\right)=\min_{y\in A}d\left(x,y\right)$. Also, more generally, for each $t>0$ define $N^{t}\left(A\right)=\left\{ x\in Q_{n}:\,d(x,A)\leq t\right\} $. In order to state Harper's vertex-isoperimetric theorem we need a few definitions. For any $n$ and $0\leq r\leq n$ define the \textit{lexicographic order} on $\left[n\right]{}^{(r)}=\left\{ A:\,A\subseteq\left\{ 1,\dots,n\right\} ,\,|A|=r\right\} $ to be given by $A<_{lex}B$ if $\min\left(A\Delta B\right)\in A$ and define the \textit{simplicial order} on $Q_{n}$ to be given by $A<_{sim}B$ if \[ |A|<|B|\text{ or }\left(|A|=|B|\text{ and }A<_{lex}B\right) \] \textbf{Theorem 1 (Harper, \cite{key-4}). }Let $A$ be a subset of $Q_{n}$ and let $B$ be an initial segment of the simplicial order with $\left|A\right|=\left|B\right|$. Then $\left|N\left(A\right)\right|\geq\left|N\left(B\right)\right|$. $\square$\\ It turns out that the sets for which Harper's theorem holds with equality are not in general unique. As a trivial example, any subset of $Q_{2}$ of size $2$ has minimal vertex boundary and not all such sets are isomorphic. There are more interesting and less trivial examples as well. It is easy to verify that if $A$ is an initial segment of the simplicial order, then so is $N\left(A\right)$. Hence Harper's theorem implies that an initial segment of the simplicial order minimises $\left|N^{t}\left(A\right)\right|$ for all $t>0$. For a general introduction to the vertex-isoperimetric theorem, see e.g. Bollob\'{a}s (Chapter 16 in \cite{key-3}). In this paper we will consider the following question of Aubrun and Szarek {[}1, Exercise 5.66{]}: if $A\subseteq Q_{n}$ is such that $N^{t}\left(A\right)$ and $N^{t}\left(A^{c}\right)$ are minimal for all $t>0$, does it follow that $A$ is isomorphic to an initial segment of the simplicial order? For convenience, we say that $A$ is \textit{extremal }if $N^{t}\left(A\right)$ and $N^{t}\left(A^{c}\right)$ are minimal for all $t>0$. Define the \textit{exact Hamming ball} of radius $r$ centred at $x$ to be $B\left(x,r\right)=\left\{ y\in Q_{n}:\,d\left(x,y\right)\leq r\right\} $, and define a set $A$ to be a \textit{Hamming ball} if there exist $x$ and $r$ such that $B\left(x,r\right)\subset A\subseteq B\left(x,r+1\right)$. Note that $B\left(\emptyset,r\right)$ is the initial segment of the simplicial order of length $\sum_{i=0}^{r}{n \choose i}$, and every initial segment of the simplicial order is a Hamming ball. If $A$ is an initial segment of the simplicial order then $N(A)$ is also an initial segment of the simplicial order, and $A^{c}$ is isomorphic to an initial segment of the simplicial order. Hence initial segments of the simplicial order are always extremal. On the other hand, requiring only $N^{t}\left(A\right)$ to be minimal for all $t>0$ is not a strong enough condition to guarantee that $A$ is isomorphic to an initial segment of the simplicial order. Indeed, one could take for example $A=B\left(x,r\right)\setminus\left\{ x\right\} $ for $r\geq1$. Then $N^{t}\left(A\right)=B(x,r+t)$ for all $t>0$ and hence $N^{t}\left(A\right)$ is always minimal, yet $A$ is not isomorphic to an initial segment of the simplicial order. 
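To see this failure explicitly in the smallest case, take $n=3$, $x=\emptyset$ and $r=1$, so that $A=B\left(\emptyset,1\right)\setminus\left\{ \emptyset\right\} =\left\{ \left\{ 1\right\} ,\left\{ 2\right\} ,\left\{ 3\right\} \right\} $. Then $N\left(A\right)=B\left(\emptyset,2\right)$ and
\[
\left|N\left(A\right)\right|=7=\left|N\left(\left\{ \emptyset,\left\{ 1\right\} ,\left\{ 2\right\} \right\} \right)\right|,
\]
so $N^{t}\left(A\right)$ is minimal for every $t>0$, yet the three points of $A$ are pairwise at distance $2$, while any initial segment of size $3$ contains two points at distance $1$; hence $A$ is not isomorphic to an initial segment of the simplicial order.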
It turns out that the answer to the question is negative, and we will present a counterexample in Section 2. Rather surprisingly, it turns out that the only Hamming balls which are extremal \textit{are} the initial segments of the simplicial order. However, it turns out that all the extremal sets are contained between two exact Hamming balls with same centre and radius differing by 2, i.e. there exists $x$ and $r$ such that $B\left(x,r\right)\subseteq A\subseteq B\left(x,r+2\right)$. The second aim of this paper is to classify all the extremal sets $A$ up to isomorphism. In order to state the result, we need some notation. We write $X=\left[n\right]=\left\{ 1,\dots,n\right\} $, $\left[n\right]{}^{(r)}=\left\{ A\subseteq\left[n\right]\,:\,\left|A\right|=r\right\} $, $\left[n\right]^{(\geq r)}=\left\{ A\subseteq\left[n\right]\,:\,\left|A\right|\geq r\right\} $, $X_{i}=\left\{ 1,\dots,n\right\} \setminus\left\{ i\right\} $ and $X_{i,j}=\left\{ 1,\dots,n\right\} \setminus\left\{ i,j\right\} $. Define the \textit{colexicographic order} on $\left[n\right]^{(r)}:=\left\{ A:\,A\subseteq\left\{ 1,\dots,n\right\} ,\,|A|=r\right\} $ to be given by $A<_{colex}B$ if $\text{max}\left(A\Delta B\right)\in B$. For $k\le{n-1 \choose r}$ let ${\cal A}\subseteq\left[n\right]{}^{(r)}$ be the initial segment of the colexicographic order of size $k$. For each $1\leq i\leq n$, let ${\cal A}_{i,0}=\left\{ B\in X_{i}^{(r-1)}\,:\,B\cup\left\{ i\right\} \in{\cal A}\right\} $ and ${\cal A}_{i,1}=\left\{ B\in X_{i}^{(r)}\,:\,i\not\in B,\,B\in{\cal A}\right\} $ be the \textit{$i$-sections} of ${\cal A}$. For each $i$ set $A_{i}=X^{(\ge r+1)}\cup{\cal A}_{i,1}\cup{\cal A}_{i,0}$. Note that $A_{n}=X^{(\geq r+1)}\cup{\cal A}$ is isomorphic to an initial segment of the simplicial order, and that some of $A_{i}$ might be isomorphic to each other. Now we are ready to give the classification of all extremal sets.\\ \textbf{Theorem 2 (Classification of extremal sets).} Let $A\subseteq Q_{n}$ with $\left|X^{(\leq r)}\right|<\left|A\right|\leq\left|X^{(\leq r)}\cup\left\{ B\in X^{(r+1)}\,:\,1\in B\right\} \right|$ for some $r$. Let $A_{1},\dots,A_{n}$ be defined as above with $\left|A_{i}\right|=\left|A\right|$. Then $A$ is extremal if and only if $A$ is isomorphic to some $A_{i}$. \\ It is known that the exact Hamming balls are uniquely extremal sets for Harper's inequality. That is, for $\left|A\right|=\left|X^{(\leq r)}\right|$, if $N\left(A\right)$ is minimal then $A=B\left(x,r\right)$ for some $x$. Thus if $\left|A\right|=\left|X^{(\leq r)}\right|$ for some $r$, and $A$ is extremal, it certainly follows that $A$ has to be isomorphic to an initial segment of the simplicial order. Note that if $A$ is extremal then so is $A^{c}$ as the conditions in the definition of extremality are symmetric under taking complements. Set $G_{r}=X^{(\leq r)}\cup\left\{ B\in X^{(r+1)}\,:\,1\in B\right\} $. It is easy to check that $\left|G_{r}\right|+\left|G_{n-r-2}\right|=2^{n}$. Thus provided $\left|A\right|\neq\left|X^{(\leq r)}\right|$ for all $r$, at least one of $A$ and $A^{c}$ satisfies $\left|X^{(\leq r)}\right|<\left|A\right|\leq\left|G_{r}\right|$ or $\left|X^{(\leq r)}\right|<\left|A^{c}\right|\leq\left|G_{r}\right|$ for some $r$. Hence Theorem 2 together with these observations covers the classification of all extremal sets. The plan of the paper is as follows. In Section 2 we will construct an extremal set which is not isomorphic to an initial segment of the simplicial order. In Section 3 we will prove Theorem 2. 
In Section 4 we will discuss how the results presented in Section 3 will change if the conditions of extremality are weakened to requiring only $N\left(A\right)$ and $N\left(A^{c}\right)$ to be minimal. In this case there are extremal sets $A$ for which there do not exist $x$ and $r$ with $B\left(x,r\right)\subseteq A\subseteq B\left(x,r+2\right)$. In fact the situation is not even bounded, as it turns out that the constant 2 cannot be replaced by any finite number. However, it remains true in the weaker version as well that all extremal Hamming balls are isomorphic to the initial segment. Recall that exact Hamming balls are uniquely extremal sets for Harper's inequality. In Section 5 we will prove another near-uniqueness result: we will show that there exists only one set $B_{r}$ of size $\left|G_{r}\right|$, apart from the initial segment, which is extremal for Harper's inequality. In fact the set $B_{r}$ is also an extremal set, and we will describe it in Section 2. For convenience we will write $f_{r}=f_{n,r}=\left|X^{(\le r)}\right|=\sum_{j=0}^{r}{n \choose j}$ and $g_{r}=g_{n,r}=\left|G_{r}\right|=\sum_{j=0}^{r}{n \choose j}+{n-1 \choose r}$. In both cases the dependence on $n$ will not be highlighted if $n$ is clear from the context. \section{Construction of an example} In this section we will give a family of counterexamples $B_{r}\subseteq Q_{n}$, with $\left|B_{r}\right|=g_{r}$ and $B_{r}$ extremal for all $r$. The initial segment of the simplicial order of size $g_{r}$ is $C_{r}=X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right)$ and hence it follows that $N^{t}\left(C_{r}\right)=C_{r+t}$, which has size $g_{r+t}$ for all $t$. Also $C_{r}^{c}=\left[n\right]{}^{(\geq r+2)}\cup X_{1}^{(r+1)}$ and hence $N^{t}\left(C_{r}^{c}\right)=C_{r-t}^{c}$, which has size $g_{n-2-(r-t)}=g_{n-2-r+t}$ as $g_{r}+g_{n-r-2}=2^{n}$. For $i\in\left[n\right]$ and $A\subseteq Q_{n}$ define $A_{+}=\left\{ B\subseteq\left[n\right]\,:\,i\not\in B,\,B\cup\left\{ i\right\} \in A\right\} $ and $A_{-}=\left\{ B\subseteq\left[n\right]\,:\,i\not\in B,\,B\in A\right\} $ to be the $i$-sections of $A$. Note that $A_{\pm}$ depends on the choice of $i$, but since the choice of $i$ is usually clear this dependence will not be highlighted in the notation. Now $A_{+}$ and $A_{-}$ are subsets of $Q_{n-1}={\cal P}\left(\left[n\right]\setminus\left\{ i\right\} \right)$, and it is easy to verify that $N\left(A\right){}_{+}=N\left(A_{+}\right)\cup A_{-}$ and $N\left(A\right){}_{-}=N\left(A_{-}\right)\cup A_{+}$. Thus it follows that \[ \left|N\left(A\right)\right|=\left|A_{+}\cup N\left(A_{-}\right)\right|+\left|A_{-}\cup N\left(A_{+}\right)\right| \] and it can be deduced in a similar way that more generally \begin{equation} \left|N^{t}\left(A\right)\right|=\left|N^{t}\left(A_{+}\right)\cup N^{t-1}\left(A_{-}\right)\right|+\left|N^{t}\left(A_{-}\right)\cup N^{t-1}\left(A_{+}\right)\right| \end{equation} Define $A$ by taking $i=1$, $A_{+}=B\left(\left\{ 2\right\} ,r\right)$ and $A_{-}=B\left(\emptyset,r\right)$, i.e. $A=\left(\left\{ 1\right\} +A_{+}\right)\cup A_{-}$. Note that the set $A$ constructed in this way is the union of two exact Hamming balls of the same radius $r$ with centres at $\emptyset$ and $\left\{ 1,2\right\} $, which are at distance 2 from each other. Now $\left|A\right|=2f_{n-1,r}=g_{n,r}$. 
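Before verifying minimality, it may help to record the smallest interesting case for orientation: for $n=4$ and $r=1$ the construction gives
\[
A=B\left(\emptyset,1\right)\cup B\left(\left\{ 1,2\right\} ,1\right)=\left\{ \emptyset,\left\{ 1\right\} ,\left\{ 2\right\} ,\left\{ 3\right\} ,\left\{ 4\right\} ,\left\{ 1,2\right\} ,\left\{ 1,2,3\right\} ,\left\{ 1,2,4\right\} \right\} ,
\]
a set of size $8=g_{4,1}$ with $\left|N\left(A\right)\right|=14=g_{4,2}$ (only $\left\{ 1,3,4\right\} $ and $\left\{ 2,3,4\right\} $ are missed), and one can check that no point of $Q_{4}$ sees the distance profile $\left(1,4,3\right)$ that $\emptyset$ sees in the initial segment of size $8$, so this $A$ is not isomorphic to an initial segment of the simplicial order.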
Since $d\left(\emptyset,\left\{ 2\right\} \right)=1$ it follows that \[ N^{t-1}\left(A_{+}\right)=B\left(\left\{ 2\right\} ,r+t-1\right)\subseteq B\left(\emptyset,r+t\right)=N^{t}\left(A_{-}\right) \] and also $N^{t-1}\left(A_{-}\right)\subseteq N^{t}\left(A_{+}\right)$. Thus $\left|N^{t}\left(A\right)\right|=2f_{n-1,r+t}=g_{n,r+t}$, which proves that $N^{t}\left(A\right)$ is minimal for all $t>0$. The minimality of $N^{t}\left(A^{c}\right)$ for all $t>0$ follows similarly by observing that $\left(A^{c}\right)_{+}=B\left(\left\{ 3,\dots,n\right\} ,\left(n-1\right)-r-1\right)$, $\left(A^{c}\right)_{-}=B\left(\left\{ 2,\dots,n\right\} ,\left(n-1\right)-r-1\right)$ and $d\left(\left\{ 3,\dots,n\right\} ,\left\{ 2,\dots,n\right\} \right)=1$. Thus we can take $B_{r}=A$. Note that it can be checked that the $B_{r}$ obtained in this way is \[ B_{r}=X^{(\leq r)}\cup\left\{ B\,:\,B\in X^{(r+1)}\cup X^{(r+2)},\left\{ 1,2\right\} \subseteq B\right\} \] \section{Classifying all extremal sets} Recall that $f_{r}=\sum_{i=0}^{r}{n \choose i}$ is the size of an exact Hamming ball of radius $r$ and $g_{r}=\sum_{i=0}^{r}{n \choose i}+{n-1 \choose r}$ is the size of the initial segment $X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right)$, where $X_{i}=\left[n\right]\setminus\left\{ i\right\} $. It is convenient to exclude sets of size $f_{r}$ from the classification, and this is possible due to the following much stronger result. \\ \textbf{Proposition 3. }Let $\left|A\right|=f_{r}$ and suppose that $N\left(A\right)$ is minimal. Then $A=B\left(x,r\right)$ for some $x\in Q_{n}$. $\square$ Since this is a well-known fact, the proof is omitted. It can be deduced by induction on $n$ and applying Lemma 6 of Katona from \cite{key-5}. A similar technique will be used in the proof of Claim 1 in Theorem 13 in Section 5. \\ Since the case $\left|A\right|=f_{r}$ is covered by Proposition 3, it is enough to consider the case $f_{r}<\left|A\right|<f_{r+1}$. Furthermore, since $g_{r}+g_{n-2-r}=2^{n}$ and $f_{r}+f_{n-1-r}=2^{n}$, by considering $A^{c}$ if necessary it is enough to classify just those $A$ with $f_{r}<\left|A\right|\leq g_{r}$ for some $r$. Hence from now on we will assume that $f_{r}<\left|A\right|\leq g_{r}$.\\ \textbf{Lemma 4. }Let $A$ be an extremal set with $f_{r}<\left|A\right|\leq g_{r}$. Then there exist distinct points $x,y,z\in Q_{n}$ such that $B\left(x,r\right)\subseteq A$, $B\left(y,n-r-2\right)\subseteq A^{c}$ and $B\left(z,n-r-2\right)\subseteq A^{c}$. Furthermore, it follows that $B\left(x,r\right)\subseteq A\subseteq B\left(x,r+2\right)$ and $d\left(y,z\right)\leq2$.\\ The aim of this Lemma is to show that the structure of extremal sets is quite restricted, as the interesting behaviour occurs only on two layers of the cube, namely on those at distance $r+1$ and $r+2$ from $x$. It also gives some insight into why it is convenient to assume that $f_{r}<\left|A\right|\leq g_{r}$ rather than $f_{r}<\left|A\right|<f_{r+1}$, as the condition $f_{r}<\left|A\right|<f_{r+1}$ would not be strong enough to guarantee the existence of both $y$ and $z$. \\ \textbf{Proof. }Since $\left|A\right|\leq g_{r}$, it follows from the minimality of $N^{t}\left(A\right)$ that $\left|N^{n-r-2}\left(A\right)\right|\leq g_{n-2}=2^{n}-2$. Thus there exist distinct points $y$ and $z$ such that $B\left(y,n-r-2\right)\subseteq A^{c}$ and $B\left(z,n-r-2\right)\subseteq A^{c}$. 
Since $\left|A^{c}\right|<f_{n-r-1}$ it follows that $\left|N^{r}\left(A^{c}\right)\right|\leq f_{n-1}=2^{n}-1$, and hence there exists $x$ with $B\left(x,r\right)\subseteq A$. Since $B\left(x,r\right)\cap B\left(y,n-r-2\right)\subseteq A\cap A^{c}=\emptyset$ we must have $d\left(x,y\right)\geq r+\left(n-r-2\right)+1=n-1$ and thus $d\left(x^{c},y\right)=d\left(x,y^{c}\right)\leq1$. Similarly $d\left(x^{c},z\right)\leq1$. Thus $B\left(x,r\right)\subseteq A\subseteq B\left(y^{c},r+1\right)\subseteq B\left(x,r+2\right)$ and the triangle inequality implies that $d\left(y,z\right)\leq d\left(y,x^{c}\right)+d\left(x^{c},z\right)\leq2$ as required. $\square$\\ Given this result, we can split the rest of the classification into two parts: considering those $A$ which are Hamming balls, i.e. for which $B\left(x,r\right)\subseteq A\subseteq B\left(x,r+1\right)$, and considering those $A$ for which no such $x$ and $r$ exist. It turns out that all the examples apart from the initial segment appear in the second case. This is proved in Proposition 6, but before that we need a short preliminary lemma. \\ \textbf{Lemma 5. }For all $r\geq1$ and $x\neq y$ we have $\left|B\left(x,r\right)\cup B\left(y,r\right)\right|\geq g_{r}$, with equality if and only if $d\left(x,y\right)\leq2$. \\ \textbf{Proof. }Let $A=B\left(x,r\right)\cup B\left(y,r\right)$; we may assume that $x=\emptyset$. For any $i\in y$ we have $A_{-}=B\left(\emptyset,r\right)\cup B\left(y\setminus\left\{ i\right\} ,r-1\right)$ and $A_{+}=B\left(\emptyset,r-1\right)\cup B\left(y\setminus\left\{ i\right\} ,r\right)$. Hence $\left|A\right|=\left|A_{-}\right|+\left|A_{+}\right|\geq2f_{n-1,r}=g_{r}$ and the equality holds if and only if $B\left(y\setminus\left\{ i\right\} ,r-1\right)\subseteq B\left(\emptyset,r\right)$ and $B\left(\emptyset,r-1\right)\subseteq B\left(y\setminus\left\{ i\right\} ,r\right)$. Thus the equality holds if and only if $d\left(\emptyset,y\setminus\left\{ i\right\} \right)\leq1$, i.e. if and only if $d\left(x,y\right)\leq2$.$\square$\\ \textbf{Proposition 6. }Suppose that $A\subseteq Q_{n}$ is an extremal set for which there exist $t\in Q_{n}$ and $r$ such that $B\left(t,r\right)\subseteq A\subseteq B\left(t,r+1\right)$. Then $A$ is isomorphic to an initial segment of the simplicial order. \\ \textbf{Proof. }The proof is by induction on $n$. When $n=2$ it is easy to verify that the claim is true. Suppose that the claim holds for $n-1$. If $\left|A\right|=f_{r}$ then the claim follows from Proposition 3. Otherwise, by taking complements if necessary, we may assume that $\left|A\right|\leq g_{r}$. Hence by Lemma 4 there exist distinct $x,\,y,\,z\in Q_{n}$ with $d\left(z,y\right)\le2$, $B\left(x,r\right)\subseteq A$, $B\left(y,n-r-2\right)\subseteq A^{c}$ and $B\left(z,n-r-2\right)\subseteq A^{c}$. \\ \textbf{Case 1.} $\left|A\right|=g_{r}$.\\ By Lemma 5 $\left|A^{c}\right|\geq\left|B\left(y,n-r-2\right)\cup B\left(z,n-r-2\right)\right|\geq g_{n-r-2}=\left|A^{c}\right|$ so the equality must hold throughout, and hence $A^{c}=B\left(y,n-r-2\right)\cup B\left(z,n-r-2\right)$. Thus $A=B\left(y^{c},r+1\right)\cap B\left(z^{c},r+1\right)$, which is easily checked to be the union of two exact Hamming balls of radius $r$ whose centres are at distance $d\left(y,z\right)$ from each other. Without loss of generality we may assume that these centres are $\emptyset$ and $\left\{ 1\right\} $, or $\emptyset$ and $\left\{ 1,2\right\} $ (corresponding to $d\left(y,z\right)=1$ and $d\left(y,z\right)=2$ respectively). In the first case $A=X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right)$ which is an initial segment of the simplicial order. 
In the second case \[ A=X^{(\le r)}\cup\left(\left\{ 1,2\right\} +\left(X_{1,2}^{(r-1)}\cup X_{1,2}^{(r)}\right)\right) \] It is straightforward to check that $B\left(w,r\right)\subseteq A$ implies $w=\emptyset$ or $w=\left\{ 1,2\right\} $ and in both cases $A\not\subseteq B\left(w,r+1\right)$ contradicting the assumption on the existence of $t$ with $B\left(t,r\right)\subseteq A\subseteq B\left(t,r+1\right)$. This completes the proof of the case $\left|A\right|=g_{r}$. \\ \textbf{Case 2. $\left|A\right|<g_{r}$}\\ \textbf{Case 2.1}. Suppose there exists $y_{1}$ and $z_{1}$ with $B\left(y_{1},n-r-2\right)\subseteq A^{c}$, $B\left(z_{1},n-r-2\right)\subseteq A^{c}$ and $d\left(y_{1},z_{1}\right)=1$. \\ Recall from the proof of Lemma 4 that these points also satisfy $d\left(x^{c},y_{1}\right)\leq1$ and $d\left(x^{c},z_{1}\right)\leq1$. Together with $d\left(y_{1},z_{1}\right)=1$ it follows that $x^{c}\in\left\{ y_{1},z_{1}\right\} $ as $Q_{n}$ is triangle free. Hence we may assume that $x=\emptyset$, $y_{1}=\left\{ 1,\dots,n\right\} $ and $z_{1}=\left\{ 2,\dots,n\right\} $. Thus $B\left(x,r\right)\subseteq A$ implies that $X^{(\leq r)}\subseteq A$. Also $A\subseteq B\left(y_{1}^{c},r+1\right)\cap B\left(z_{1}^{c},r+1\right)=B\left(\emptyset,r+1\right)\cap B\left(\left\{ 1\right\} ,r+1\right)$ and hence $A\subseteq X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right)$. Let $A=X^{(\leq r)}\cup\left(\left\{ 1\right\} +{\cal A}\right)$ with ${\cal A}\subseteq X_{1}^{(r)}$. Consider $A_{\pm}$ in the direction $i=1$. Recall from (1) that $\left|N^{t}\left(A\right)\right|$ is given by \[ \left|N^{t}\left(A\right)\right|=\left|N^{t}\left(A_{+}\right)\cup N^{t-1}\left(A_{-}\right)\right|+\left|N^{t}\left(A_{-}\right)\cup N^{t-1}\left(A_{+}\right)\right| \] Following Bollob\'{a}s and Leader \cite{key-1}, let $C_{+}$ and $C_{-}$ be initial segments of the simplicial order of same sizes as $A_{+}$ and $A_{-}$ respectively, and set $C=\left(C_{+}+\left\{ i\right\} \right)\cup C_{-}$. Note that $\left|C_{+}\right|=f_{n-1,r-1}+\left|{\cal A}\right|\in[f_{n-1,r-1},\,f_{n-1,r})$ and $\left|C_{-}\right|=f_{n-1,r}$. Note that initial segments are nested, and the $t$-neighbourhood of an initial segment is also an initial segment and therefore the $t-$neighbourhood of an initial segment is also minimal. Hence it follows that \[ \left|N^{t}\left(C_{\pm}\right)\right|=\left|N^{t}\left(C_{\pm}\right)\cup N^{t-1}\left(C_{\mp}\right)\right|\leq\left|N^{t}\left(A_{\pm}\right)\cup N^{t-1}\left(A_{\mp}\right)\right| \] Adding up the inequalities corresponding to both choices of $+$ and $-$ yields \[ \begin{array}{c} \left|N^{t}\left(C\right)\right|=\left|N^{t}\left(C_{+}\right)\cup N^{t-1}\left(C_{-}\right)\right|+\left|N^{t}\left(C_{-}\right)\cup N^{t-1}\left(C_{+}\right)\right|\\ \leq\left|N^{t}\left(A_{+}\right)\cup N^{t-1}\left(A_{-}\right)\right|+\left|N^{t}\left(A_{-}\right)\cup N^{t-1}\left(A_{+}\right)\right|=\left|N^{t}\left(A\right)\right| \end{array} \] By the minimality of $\left|N^{t}\left(A\right)\right|$ the equality has to hold throughout and hence $\left|N^{t}\left(C_{\pm}\right)\right|=\left|N^{t}\left(A_{\pm}\right)\cup N^{t-1}\left(A_{\mp}\right)\right|$ for all $t$. 
Since $C_{+}$ and $C_{-}$ are initial segments it follows that \[ \left|N^{t}\left(A_{\pm}\right)\right|\geq\left|N^{t}\left(C_{\pm}\right)\right|=\left|N^{t}\left(A_{\pm}\right)\cup N^{t-1}\left(A_{\mp}\right)\right|\geq\left|N^{t}\left(A_{\pm}\right)\right| \] Hence $\left|N^{t}\left(A_{\pm}\right)\right|=\left|N^{t}\left(C_{\pm}\right)\right|$ for all $t$ and in particular both $N^{t}\left(A_{+}\right)$ and $N^{t}\left(A_{-}\right)$ are minimal for all $t>0$. By similar argument $N^{t}\left(A_{+}^{c}\right)$ and $N^{t}\left(A_{-}^{c}\right)$ are minimal for all $t>0$ as well. Note that $A_{+}=X_{1}^{(\leq r-1)}\cup{\cal A}$ and hence $B\left(\emptyset,r-1\right)\subseteq A_{+}\subseteq B\left(\emptyset,r\right)$. Since $N^{t}\left(A_{+}\right)$ and $N^{t}\left(A_{+}^{c}\right)$ are minimal for all $t>0$, it follows by induction that $A_{+}$ is isomorphic to an initial segment of the simplicial order. Hence ${\cal A}$ is isomorphic to an initial segment of the lexicographic order in $X_{1}^{(r)}$ and thus $\left\{ 1\right\} +{\cal A}$ is also isomorphic to an initial segment of the lexicographic order in $\left[n\right]{}^{(r+1)}$. Thus $A$ is isomorphic to an initial segment of the simplicial order which completes the proof of Case 2.1.\\ \textbf{Case 2.2. }Suppose that every $y_{1},z_{1}$ with $B\left(y_{1},n-r-2\right)\subseteq A^{c}$ and $B\left(z_{1},n-r-2\right)\subseteq A^{c}$ satisfies $d\left(y_{1},z_{1}\right)=2$. \\ Since $d\left(y_{1},x^{c}\right)\leq1$ and $d\left(z_{1},x^{c}\right)\leq1$ it follows that $d\left(y_{1},x^{c}\right)\neq0$ as otherwise $d\left(z_{1},y_{1}\right)\leq d\left(z_{1},x^{c}\right)+d\left(y_{1},x^{c}\right)=1$ which is a contradiction. Thus $x^{c}\neq y_{1}$ and similarly $x^{c}\neq z_{1}$. Without loss of generality let $x=\emptyset$, $y=\left\{ 2,\dots n\right\} $ and $z=\left\{ 1,3,\dots,n\right\} $. Note that if there exists $w_{1}\neq w_{2}$ with $B\left(w_{1},r\right)\cup B\left(w_{2},r\right)\subseteq A$ then Lemma 5 would imply that $|A|\geq g_{r}$, contradicting the assumption of Case 2. Thus it follows that $t=x$ is the unique point of $Q_{n}$ for which $B\left(t,r\right)\subseteq A$. Recall that by assumption there exists $t$ for which $B\left(t,r\right)\subseteq A\subseteq B\left(t,r+1\right)$. Therefore we have $A\subseteq B\left(\emptyset,r+1\right)$ and thus $B\left(\left\{ 1,\dots,n\right\} ,n-r-2\right)\subseteq A^{c}$. But $d\left(\left\{ 1,\dots,n\right\} ,y_{1}\right)=1$ so in fact Case 2.2 cannot ever occur, which completes the proof.$\square$\\ As usual we define the lower shadow of ${\cal A}$ by $\partial{\cal A}=\left\{ B\,:\,B\cup\left\{ i\right\} \in{\cal A}\text{ for some }i\right\} $, and the iterated lower shadow by $\partial^{-t}{\cal A}=\partial\left(\partial^{-(t-1)}{\cal A}\right)$. Similarly define the upper shadow of ${\cal A}$ by $\partial^{+}{\cal A}=\left\{ B\cup\left\{ i\right\} \,:\,i\in\left[n\right],\,B\in{\cal A}\right\} $, and the iterated upper shadow by $\partial^{+t}{\cal A}=\partial^{+}\left(\partial^{+(t-1)}{\cal A}\right)$. Note that $\partial^{+}$ depends on the ground set, which will be $\left[n\right]$ unless otherwise highlighted in the notation. Now Proposition 6 has the following straightforward corollary. \\ \textbf{Corollary 7. }Let ${\cal A}\subseteq X^{(r)}$ and set ${\cal B}=X^{(r)}\setminus{\cal A}$. Suppose that $\partial^{-t}{\cal A}$ and $\partial^{+t}{\cal B}$ are minimal for all $t>0$. Then ${\cal A}$ is isomorphic to an initial segment of the colexicographic order. \\ \textbf{Proof. 
}By considering ${\cal A}'=\left\{ A^{c}\,:\,A\in{\cal B}\right\} $ if necessary, and using $\left|\partial^{-t}{\cal A}'\right|=\left|\partial^{+t}{\cal B}\right|$ and $\left|\partial^{+t}\left(X^{(r)}\setminus{\cal A}'\right)\right|=\left|\partial^{-t}{\cal A}\right|$, we may assume that $\left|{\cal A}\right|\leq{n-1 \choose r-1}$. Set $A=X^{(\geq r+1)}\cup{\cal A}$, then $f_{n-r}<\left|A\right|\leq g_{n-r+1}$. Since $\partial^{-t}{\cal A}$ and $\partial^{+t}{\cal B}$ are minimal for all $t>0$ it follows that $N^{t}\left(A\right)$ and $N^{t}\left(A^{c}\right)$ are minimal for all $t>0$, and hence $A$ is extremal. Thus Proposition 6 implies that $A$ is isomorphic to an initial segment of the order given by $A<B$ if and only if \[ \left|A\right|>\left|B\right|\text{ or }\left(\left|A\right|=\left|B\right|\text{ and }A<B\text{ in colexicographic order}\right) \] Indeed this follows from the fact that the order defined above is isomorphic to the simplicial order in $Q_{n}$ via taking complements and permuting the ground set. Denote the isomorphism by $\theta$. If $\left|A\right|<g_{n-r+1}$ then $B\left(\left\{ 1,\dots,n\right\} ,n-r-1\right)$ is the unique exact Hamming ball of radius $n-r-1$ inside $A$, so the isomorphism must fix $\left\{ 1,\dots,n\right\} $ and hence $\theta\left({\cal A}\right)=\theta\left(A\right)\cap X^{(r)}$, which is an initial segment of the colexicographic order. If $\left|A\right|=g_{n-r+1}$ it follows that $\theta\left(\left\{ 1,\dots,n\right\} \right)=\left\{ 1,\dots,n\right\} $ or $\theta\left(\left\{ 1,\dots,n\right\} \right)=\left\{ 1,\dots,n-1\right\} $. In the first case we're done as above. Note that $\theta\left(A\right)=X^{(\geq r+1)}\cup\left\{ 1,\dots,n-1\right\} {}^{(r)}$ and hence $\theta\left(A\right)$ is preserved under $\tau\left(X\right)=X\Delta\left\{ n\right\} $, which is an isomorphism of $Q_{n}$. Also $\tau\theta\left(\left\{ 1,\dots,n\right\} \right)=\left\{ 1,\dots,n\right\} $ so replacing $\theta$ by $\tau\theta$ we obtain that ${\cal A}$ is isomorphic to an initial segment of the colexicographic order.$\square$\\ Recall that the sets $A_{1},\dots,A_{n}$ were defined in the introduction as follows. For $k\le{n-1 \choose r}$ let ${\cal A}\subseteq\left[n\right]{}^{(r)}$ be the initial segment of the colexicographic order of size $k$. For each $1\leq i\leq n$ define ${\cal A}_{i,0}=\left\{ B\in X_{i}^{(r-1)}\,:\,B\cup\left\{ i\right\} \in{\cal A}\right\} $ and ${\cal A}_{i,1}=\left\{ B\in X_{i}^{(r)}\,:\,i\not\in B,\,B\in{\cal A}\right\} $. For each $i$ set $A_{i}=X^{(\ge r+1)}\cup{\cal A}_{i,1}\cup{\cal A}_{i,0}$. The motivation behind $k\leq{n-1 \choose r}$ follows from the fact that $g_{n-r-1}=\sum_{i=0}^{n-r-1}{n \choose i}+{n-1 \choose n-r-1}=\sum_{i=r+1}^{n}{n \choose i}+{n-1 \choose r}$ so $k\leq{n-1 \choose r}$ corresponds exactly to $f_{n-r-1}<\left|A\right|\leq g_{n-r-1}$. Note that we have turned our attention to sets of the form $X^{(\geq r)}\cup{\cal A}$ instead, the reason being that the notation is slightly simpler in terms of lower shadows. For convenience, we restate Theorem 2.\\ \textbf{Theorem 2 (Classification of extremal sets).} Let $A\subseteq Q_{n}$ with $\left|X^{(\leq r)}\right|<\left|A\right|\leq\left|X^{(\leq r)}\cup\left\{ B\in X^{(r+1)}\,:\,1\in B\right\} \right|$ for some $r$. Let $A_{1},\dots,A_{n}$ be defined as above with $\left|A_{i}\right|=\left|A\right|$. Then $A$ is extremal if and only if $A$ is isomorphic to some $A_{i}$.\\ \textbf{Proof.} \textbf{Case 1. 
$\left|A\right|=g_{r}$}\\ As noticed in the proof of Proposition 6, such set $A$ has to be of the form \[ A=X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right) \] or \[ A=X^{(\leq r)}\cup\left(\left\{ 1,2\right\} +\left(X_{1,2}^{(r-1)}\cup X_{1,2}^{(r)}\right)\right) \] In the first case it can be checked that $A$ is isomorphic to $A_{n}=X^{(\geq n-r)}\cup X_{n}^{(n-r-1)}$, and in the second case $A$ is isomorphic to $A_{1}=X^{(\geq n-r)}\cup X_{1,n}^{(n-r-1)}\cup X_{1,n}^{(n-r-2)}$, which completes the proof of Case 1.\\ \textbf{Case 2. $\left|A\right|<g_{r}$}\\ Set $k=n-r$ and let such $A$ be given. By Lemma 4 there exists $x,y,z$ with $d\left(y,z\right)\leq2$, $d\left(x^{c},y\right)\le1$, $d\left(x^{c},z\right)\le1$, $B\left(x,r\right)\subseteq A$, $B\left(y,k-2\right)\subseteq A^{c}$ and $B\left(z,k-2\right)\subseteq A^{c}$ . If $d\left(y,z\right)=1$ then the Case 2.1 in the proof of Proposition 6 implies that $A$ is isomorphic to an initial segment of the simplicial order. Hence it suffices to only consider the case $d\left(y,z\right)=2$. Without loss of generality let $y=\left\{ n-1\right\} $ and $z=\left\{ n\right\} $. Then $d\left(x^{c},y\right)\leq1$ and $d\left(x^{c},z\right)\leq1$ implies that $x=\left[n\right]$ or $x=\left\{ 1,\dots,n-2\right\} $. Since everything up to this point is preserved under the map $A\rightarrow A\Delta\left\{ n-1,n\right\} $, we may assume that $x=\left[n\right]$. Taking complements from $B\left(y,k-2\right)\subseteq A^{c}$ and $B\left(z,k-2\right)\subseteq A^{c}$ it follows that \[ \begin{array}{c} A\subseteq B\left(\left\{ 1,\dots n-1\right\} ,r+1\right)\cap B\left(\left\{ 1,\dots n-2,n\right\} ,r+1\right)\\ =X^{(\geq k)}\cup\left\{ B:\,|B|\in\left\{ k-1,k-2\right\} ,\,B\cap\left\{ n-1,n\right\} =\emptyset\right\} \end{array} \] Hence $A=X^{(\geq k)}\cup{\cal A}_{1}\cup{\cal A}_{0}$ with ${\cal A}_{i}\subseteq\left[n-1\right]{}^{(k-i-1)}$ for $i\in\left\{ 0,1\right\} $ (in fact these are subsets of $\left[n-2\right]{}^{(k-i-1)}$ but them being subsets of $\left[n-1\right]{}^{(k-i-1)}$ is enough). Set ${\cal A}=\left({\cal A}_{0}+\left\{ n\right\} \right)\cup{\cal A}_{1}\subseteq\left[n\right]{}^{(k-1)}$. Now $\left|{\cal A}\right|=\left|{\cal A}_{0}\right|+\left|{\cal A}_{1}\right|$ and \begin{equation} \partial^{-t}{\cal A}=\left(\partial^{-t}{\cal A}_{0}+\left\{ n\right\} \right)\cup\left(\partial^{-(t-1)}{\cal A}_{0}\cup\partial^{-t}{\cal A}_{1}\right) \end{equation} On the other hand \begin{equation} N^{t}\left(A\right)=X^{(\geq k-t)}\cup\left(\partial^{-t}{\cal A}_{1}\cup\partial^{-(t-1)}{\cal A}_{0}\right)\cup\partial^{-t}{\cal A}_{0} \end{equation} and hence combining (2) and (3) yields \begin{equation} \left|N^{t}\left(A\right)\right|=\left|X^{(\geq k-t)}\right|+\left|\partial^{-t}{\cal A}\right| \end{equation} Let ${\cal B}_{i}=\left[n-1\right]{}^{(k-1-i)}\setminus{\cal A}_{i}$ and ${\cal B}=\left[n\right]{}^{(k-1)}\setminus{\cal A}=\left({\cal {\cal B}}_{0}+\left\{ n\right\} \right)\cup{\cal B}_{1}$. In this notation \[ A^{c}=X^{(\leq k-3)}\cup{\cal B}_{0}\cup{\cal B}_{1}\cup\left\{ B:\,\left|B\right|\in\left\{ k-2,k-1\right\} ,\,n\in B\right\} \] Let $\partial_{n}^{+}$ be the upper shadow operator with respect to the ground set $\left\{ 1,\dots,n-1\right\} $ and $\partial^{+}$ be the usual upper shadow operator (i.e. with ground set $\left[n\right]$). 
Note that \begin{equation} \partial^{+t}{\cal A}=\left(\left(\partial_{n}^{+t}{\cal B}_{0}\cup\partial_{n}^{+(t-1)}{\cal B}_{1}\right)+\left\{ n\right\} \right)\cup\partial_{n}^{+t}{\cal B}_{1} \end{equation} Now \begin{equation} \begin{array}{c} N^{t}\left(A^{c}\right)=X^{(\leq k+t-3)}\cup\left(\partial_{n}^{+t}{\cal B}_{0}\cup\partial_{n}^{+(t-1)}{\cal B}_{1}\right)\cup\partial_{n}^{+t}{\cal B}_{1}\\ \cup\left\{ B:\,\left|B\right|\in\left\{ k+t-2,k+t-1\right\} ,\,n\in B\right\} \end{array} \end{equation} and note that each of \[ X^{(\leq k+t-3)},\,\left(\partial_{n}^{+t}{\cal B}_{0}\cup\partial_{n}^{+(t-1)}{\cal B}_{1}\right),\,\partial_{n}^{+t}{\cal B}_{1} \] and \[ \left\{ B:\,\left|B\right|\in\left\{ k+t-2,k+t-1\right\} ,\,n\in B\right\} \] are pairwisely disjoint set systems. Hence it follows from (6) that \begin{equation} \begin{array}{c} \left|N^{t}\left(A^{c}\right)\right|=\left|X^{(\leq k+t-3)}\right|+\left|\partial_{n}^{+t}{\cal B}_{0}\cup\partial_{n}^{+(t-1)}{\cal B}_{1}\right|+\left|\partial_{n}^{+t}{\cal B}_{1}\right|\\ +\left|\left\{ B:\,\left|B\right|\in\left\{ k+t-2,k+t-1\right\} ,\,n\in B\right\} \right| \end{array} \end{equation} From (5) it can be deduced that \begin{equation} \left|\partial_{n}^{+t}{\cal B}_{0}\cup\partial_{n}^{+(t-1)}{\cal B}_{1}\right|+\left|\partial_{n}^{+t}{\cal B}_{1}\right|=\left|\partial^{+t}{\cal {\cal B}}\right| \end{equation} and finally by counting we have \[ \left|\left\{ B:\,\left|B\right|\in\left\{ k+t-2,k+t-1\right\} ,\,n\in B\right\} \right| \] \begin{equation} ={n-1 \choose k+t-3}+{n-1 \choose k+t-2}={n \choose k+t-2} \end{equation} By using (8) and (9), (7) simplifies to \begin{equation} \left|N^{t}\left(A^{c}\right)\right|=\left|X^{(\leq k+t-3)}\right|+\left|\partial^{+t}{\cal B}\right|+{n \choose k+t-2}=\left|X^{(\leq k+t-2)}\right|+\left|\partial^{+t}{\cal B}\right| \end{equation} Let ${\cal C}\subseteq X^{(k-1)}$ be the
initial segment of the colexicographic order of size $\left|{\cal A}\right|$, and set $C=X^{(\geq k)}\cup{\cal C}$. Since $\left|C\right|=\left|A\right|$ and $C$ is isomorphic to an initial segment of the simplicial order, we must have $\left|N^{t}\left(C\right)\right|=\left|N^{t}\left(A\right)\right|$ and $\left|N^{t}\left(C^{c}\right)\right|=\left|N^{t}\left(A^{c}\right)\right|$ for all $t>0$, as $N^{t}\left(A\right)$ and $N^{t}\left(A^{c}\right)$ are minimal. Setting ${\cal D}=X^{(k-1)}\setminus{\cal C}$ it is straightforward to verify that $N^{t}\left(C\right)=X^{(\geq k-t)}\cup\partial^{-t}{\cal C}$ and $N^{t}\left(C^{c}\right)=X^{(\le k+t-2)}\cup\partial^{+t}{\cal D}$. Hence it follows that $\left|\partial^{-t}{\cal A}\right|=\left|\partial^{-t}{\cal C}\right|$ and $\left|\partial^{+t}{\cal B}\right|=\left|\partial^{+t}{\cal D}\right|$ for all $t>0$. Thus Corollary 7 implies that ${\cal A}$ has to be isomorphic to an initial segment of the colexicographic order. Hence $A$ is isomorphic to some $A_{i}$. The extremality of the $A_{i}$'s follows immediately from (4) and (10), since when $A=A_{i}$ the corresponding ${\cal A}$ is by definition isomorphic to an initial segment of the colexicographic order. $\square$ \\ \textbf{Corollary 8. }For all $n$ and $k\not\in\left\{ f_{0},\dots,f_{n}\right\} $ there exists an extremal set $A\subseteq Q_{n}$ of size $k$ which is not isomorphic to an initial segment of the simplicial order. \\ \textbf{Proof. }When $k=g_{r}$ for some $r$ the claim is true, so by the same argument as before we may assume that $f_{r}<k<g_{r}$. Let ${\cal A}\subseteq X^{(n-r-1)}$ be the initial segment of the colexicographic order of size $k-f_{r}$ and take $i\in\bigcup_{A\in{\cal A}}A$. Then ${\cal A}=\left({\cal A}_{0}+\left\{ i\right\} \right)\cup{\cal A}_{1}$ with ${\cal A}_{0}\neq\emptyset$. Note that since $k<g_{r}$ it follows from Lemma 5 that $B\left(x,r\right)\subseteq A_{i}$ implies $x=\left[n\right]$ and thus it is easy to see that $A_{i}$ is not isomorphic to an initial segment of the simplicial order as $B\left(\left[n\right],r\right)\subseteq A_{i}$ but $A_{i}\not\subseteq B\left(\left[n\right],r+1\right)$. $\square$\\ It is natural to ask: when are $A_{i}$ and $A_{j}$ isomorphic as subsets of $Q_{n}$? Let ${\cal A}$ be the initial segment from which the $A_{i}$'s are obtained. If $\sigma=\left(ij\right)\in S_{n}$ satisfies $\sigma\left({\cal A}\right)={\cal A}$ then certainly $A_{i}$ and $A_{j}$ are isomorphic, where $\sigma\left({\cal A}\right)=\left\{ \left\{ \sigma(b_{1}),\dots,\sigma(b_{t})\right\} \,:\,\left\{ b_{1},\dots,b_{t}\right\} \in{\cal A}\right\} $. The aim of the following lemma is to prove that this is the only way the isomorphism can occur.\\ \textbf{Lemma 9. }$A_{i}$ and $A_{j}$ are isomorphic if and only if $\sigma\left({\cal A}\right)={\cal A}$ for $\sigma=\left(ij\right)$.\\ \textbf{Proof. }If $\left|A\right|=g_{r}$ then ${\cal A}=\left\{ 1,\dots,n-1\right\} {}^{(n-r-1)}$ and clearly $A_{i}$ and $A_{j}$ are isomorphic for all $i,j\in\left[n-1\right]$. Also note that $A_{n-1}=B\left(\left[n\right],r\right)\cup B\left(\left[n-2\right],r\right)$ and $A_{n}=B\left(\left[n\right],r\right)\cup B\left(\left[n-1\right],r\right)$, so in particular $A_{n-1}$ is the union of two exact Hamming balls of radius $r$ whose centres are distance 2 apart, and $A_{n}$ is the union of two exact Hamming balls of radius $r$ whose centres are distance 1 apart. Thus they are not isomorphic. 
Now suppose that $f_{r}<\left|A\right|<g_{r}$. Thus each $A_{i}$ contains a unique exact Hamming ball of radius $r$, which is by construction centred at $\left[n\right]$. Suppose $i<j$ and that $\theta:A_{i}\rightarrow A_{j}$ is an isomorphism. Since $\theta$ must fix the centre of the unique exact Hamming ball of radius $r$, we must have $\theta\left(\left[n\right]\right)=\left[n\right]$ and hence $\theta\left(\emptyset\right)=\emptyset$. It is easy to verify that $\text{Stab}\left(\emptyset\right)=S_{n}$ is given by $\theta_{\sigma}\left(A\right)=\left\{ \sigma\left(a\right)\,:\,a\in A\right\} $ for $\sigma\in S_{n}$. Hence $\theta$ maps ${\cal A}_{i,0}$ to ${\cal A}_{j,0}$ and ${\cal A}_{i,1}$ to ${\cal A}_{j,1}$, so in particular $\left|\left\{ A\in{\cal A}:i\in A\right\} \right|=\left|\left\{ A\in{\cal A}:j\in A\right\} \right|$. Since ${\cal A}$ is an initial segment, it is left compressed, so for all $A\in{\cal A}$ with $i\not\in A$, $j\in A$ we must have $\left(A\setminus\left\{ j\right\} \right)\cup\left\{ i\right\} \in{\cal A}$. Thus $\left|\left\{ A\in{\cal A}:i\in A\right\} \right|=\left|\left\{ A\in{\cal A}:j\in A\right\} \right|$ implies that the converse must hold as well, that is, for all $A\in{\cal A}$ with $i\in A$, $j\not\in A$ we must have $\left(A\setminus\left\{ i\right\} \right)\cup\left\{ j\right\} \in{\cal A}$, and hence $\sigma\left({\cal A}\right)={\cal A}$ for $\sigma=\left(ij\right)$. $\square$ \\ From Lemma 9 it follows that for all $s$ there exist $n,\,k$ such that there are at least $s$ pairwise non-isomorphic extremal sets $A_{1},\dots,A_{s}$ of size $k$ in $Q_{n}$. Indeed, this follows by taking $n=2s+3$, $k=\left|X^{(\geq s+2)}\right|+\sum_{i=2}^{s+1}{2(i-1) \choose i}$. If ${\cal A}\subseteq X^{(s+1)}$ is the initial segment of the colexicographic order of size $\left|{\cal A}\right|=\sum_{i=2}^{s+1}{2(i-1) \choose i}$ it is clear that $\left(ij\right){\cal A}\neq{\cal A}$ for any distinct even integers $1\leq i,j\leq2s$. \section{The weak version} In this section we consider how the results in Section 3 change if we only require $N\left(A\right)$ and $N\left(A^{c}\right)$ to be minimal. First of all we will prove that no such result as Lemma 4 can hold in the weak version. That is, we will prove that there is no constant $k$ such that the extremal sets are contained between two exact Hamming balls with the same centre and whose radii differ by at most $k$. \\ \textbf{Proposition 10}. For any positive integer $s$ there exist $n$ and a set $A\subseteq Q_{n}$ for which $N^{t}\left(A\right)$ is minimal for all $t>0$, $N\left(A^{c}\right)$ is minimal, and for all $x\in Q_{n}$, $t\in\mathbb{Z}_{+}$ at least one of $B\left(x,t\right)\subseteq A$ or $A\subseteq B\left(x,t+s\right)$ is violated. \\ \textbf{Proof. }Let $n=2s+8$, $r=s+4$, $k=s+2$ and \[ A=X^{(\geq r)}\setminus\left\{ \left\{ 1,\dots,r+i\right\} :\,0\leq i\le k-1\right\} \] That is, we take $A$ to be $X^{(\geq r)}$ but we exclude the set $\left\{ 1,\dots,r+i\right\} \in X^{(r+i)}$ for all $0\le i\leq k-1$. Now $A^{c}=X^{(\leq r-1)}\cup\left\{ \left\{ 1,\dots,r+i\right\} :\,0\leq i\leq k-1\right\} $. Thus \[ N(A^{c})=X^{(\leq r)}\cup\left\{ \left\{ 1,\dots,r+i,a_{i}\right\} \,:\,0\leq i\leq k-1,r+i+1\le a_{i}\leq n\right\} \] and hence $\left|N(A^{c})\right|=\left|X^{(\leq r)}\right|+\sum_{i=0}^{k-1}\left(n-r-i\right)$. Let ${\cal C}\subseteq X^{(r)}$ be the initial segment of the lexicographic order of size $k$. 
Since $k+r=2s+6<n+1$, it follows that ${\cal C}=\left\{ \left\{ 1,\dots,r-1,i\right\} \,:\,r\leq i\leq r+k-1\right\} $. By definition $B=X^{(\leq r-1)}\cup{\cal C}$ is an initial segment of the lexicographic order with $\left|B\right|=\left|A^{c}\right|$, and $N\left(B\right)=X^{(\leq r)}\cup\partial^{+}{\cal C}$, so in order to verify that $N\left(A^{c}\right)$ is minimal it suffices to show that $\left|\partial^{+}{\cal C}\right|=\sum_{i=0}^{k-1}\left(n-r-i\right)$. But $\partial^{+}{\cal C}=\left\{ A\in X^{(r+1)}:\left\{ 1,\dots,r-1\right\} \subseteq A,\,\left\{ r,\dots,r+k-1\right\} \cap A\neq\emptyset\right\} $ and hence we can identify $\partial^{+}{\cal C}$ as the set $\left\{ r,\dots,n\right\} {}^{(2)}\setminus\left\{ r+k,\dots,n\right\} {}^{(2)}$ via $A\rightarrow A\setminus\left\{ 1,\dots,r-1\right\} $. Thus \[ \left|\partial^{+}{\cal C}\right|={n-r+1 \choose 2}-{n-r-k+1 \choose 2} \] \begin{equation} =\sum_{i=1}^{n-r}i-\sum_{i=1}^{n-r-k}i=\sum_{i=n-r-(k-1)}^{n-r}i=\sum_{i=0}^{k-1}\left(n-r-i\right) \end{equation} as required. Hence (11) shows that $N\left(A^{c}\right)$ is minimal. Let ${\cal D}=X^{(r)}\setminus{\cal C}$. Note that $\left|{\cal D}\right|=\left|X^{(r)}\right|-k$. For any given $A\in X^{(r-1)}$, there are $n-\left(r-1\right)=n-r+1$ sets $B\in X^{(r)}$ such that $A\subseteq B$. Since $k<n-r+1$ it follows that for all $A\in X^{(r-1)}$ there exists $B\in{\cal D}$ such that $A\subseteq B$, so in particular $\partial^{-}{\cal D}=X^{(r-1)}$. Thus $N\left(B^{c}\right)=X^{(\geq r-1)}$. Since $N\left(B^{c}\right)$ is minimal, $\left|B^{c}\right|=\left|A\right|$, and $N\left(A\right)\subseteq X^{(\geq r-1)}$ it follows that $N\left(A\right)=X^{(\geq r-1)}$ and hence $N\left(A\right)$ is minimal. But since $N\left(A\right)$ is isomorphic to an initial segment of the simplicial order, it follows that $N^{t}\left(A\right)$ is minimal for all $t>0$. To finish the proof, note that it suffices to prove that if $B\left(x,d\right)\subseteq A^{c}$ and $A^{c}\subseteq B\left(x,f\right)$ then $f-d\geq k-1$. Indeed, supposing that this holds, then $B\left(x^{c},n-f-1\right)\subseteq A$ and $A\subseteq B\left(x^{c},n-d-1\right)$ with $\left(n-d-1\right)-\left(n-f-1\right)=f-d\ge k-1>s$ which completes the proof of the claim. Suppose that $B\left(x,d\right)\subseteq A^{c}$ with $\left|x\right|=t$. Since $\left\{ 1,\dots,r+1\right\} $ is the only $r+1$-set in $A^{c}$ it follows that $d+t\leq r$ (as $x$ is contained as a subset in strictly more than 1 set in $X^{(r+1)}$). Also $A^{c}$ contains a $r+k-1$-set $y=\left\{ 1,\dots,r+k-1\right\} $ so $y\in B\left(x,f\right)$ implies that $f\geq\left|y\Delta x\right|\geq\left|y\right|-\left|x\right|=r+k-1-t$. Thus $f-d\geq\left(r+k-1-t\right)-\left(r-t\right)=k-1>s$ as required. $\square$\\ Recall that Proposition 6 states that if $A$ is extremal and is contained between two consecutive layers of the cube, i.e. $X^{(\leq r)}\subseteq A\subseteq X^{(\leq r+1)}$, it follows that $A$ has to be isomorphic to an initial segment of the simplicial order. It turns out that this still remains true in the weak version, and in fact the following theorem by F\"{u}redi and Griggs reduces the proof of this fact to Corollary 7.\\ \textbf{Theorem 11 (F\"{u}redi, Griggs - Theorem 2.1 in \cite{key-6}). }Suppose ${\cal A}\in X^{(r)}$ for which $\left|\partial{\cal A}\right|$ is minimal. Then $\left|\partial^{t}{\cal A}\right|$is minimal for all $t>0$. $\square$\\ \textbf{Corollary 12 (Proposition 6 for the weak version). 
}Suppose $A\subseteq Q_{n}$ satisfies $X^{(\geq r)}\subseteq A\subseteq X^{(\geq r-1)}$ for some $r$, and suppose that $N\left(A\right)$ and $N\left(A^{c}\right)$ are minimal. Then $A$ is isomorphic to an initial segment of the simplicial order. \\ \textbf{Proof. }Set $A=X^{(\geq r)}\cup{\cal A}$ with ${\cal A}\subseteq X^{(r-1)}$ and let ${\cal Q}=X^{(r-1)}\setminus{\cal A}$, and ${\cal B}=\left\{ T^{c}\,:\,T\in{\cal Q}\right\} \subseteq X^{(n-r+1)}$. Since $N\left(A\right)$ and $N\left(A^{c}\right)$ are minimal, it follows that $\partial^{-}{\cal A}$ and $\partial^{+}{\cal Q}$ are both minimal. Note that $\left|{\cal B}\right|=\left|{\cal Q}\right|$ and $\left|\partial^{-t}{\cal B}\right|=\left|\partial^{+t}{\cal Q}\right|$ for all $t>0$. Since $\partial^{+}{\cal Q}$ is minimal, so is $\partial^{-}{\cal B}$, as $\left|\partial^{-}{\cal B}\right|=\left|\partial^{+}{\cal Q}\right|$. Thus Theorem 11 implies that $\partial^{-t}{\cal A}$ and $\partial^{-t}{\cal B}$ are minimal for all $t>0$. Hence $\partial^{+t}{\cal Q}$ is minimal for all $t>0$, as $\left|\partial^{-t}{\cal B}\right|=\left|\partial^{+t}{\cal Q}\right|$ for all $t>0$. Thus Corollary 7 implies that ${\cal A}$ is isomorphic to an initial segment of the colexicographic order and hence $A$ is isomorphic to an initial segment of the simplicial order. $\square$\\ Since the classification of all extremal sets was done for the stronger version, in which we required $N^{t}\left(A\right)$ and $N^{t}\left(A^{c}\right)$ to be minimal for all $t>0$, one could also ask whether it could be done when we require only $N\left(A\right)$ and $N\left(A^{c}\right)$ to be minimal. This seems to be much harder: in the stronger version one of the key observations was that any extremal set $A$ satisfies $B\left(x,r\right)\subseteq A\subseteq B\left(x,r+2\right)$ for some $x\in Q_{n}$ and $r$, which already restricts the structure of $A$; but, as was shown in Proposition 10, a similar result cannot be proved with the weaker conditions on $N\left(A\right)$ and $N\left(A^{c}\right)$. However, we are able to show that for sets of size $g_{r}$ the sets presented in Section 2, together with the initial segment, are the only extremal sets for the weak version as well. In fact, the proof even shows that, together with the initial segment, the sets $B_{r}$ introduced in Section 2 are the only sets of size $g_{r}$ for which $N\left(A\right)$ is minimal. This result is presented in the following section. \section{A uniqueness result for certain sizes} Recall that $f_{n,r}$ and $g_{n,r}$ are defined by $f_{n,r}=\sum_{i=0}^{r}{n \choose i}$ and $g_{n,r}=f_{n,r}+{n-1 \choose r}$. It is easy to verify that $g_{n,r}=g_{n-1,r-1}+g_{n-1,r}$ and $g_{n,r}=2f_{n-1,r}$. For $k\in\mathbb{Z}_{+}$ let $C$ be the initial segment of the simplicial order of size $k$ in $Q_{n}$. Set $N\left(k\right)=\left|N\left(C\right)\right|$ for convenience; note that this depends on $n$, but the dependence will not be highlighted in the notation as the value of $n$ is clear from the context.\\ \textbf{Theorem 13. }Let $A\subseteq Q_{n}$ be a set of size $\left|A\right|=g_{r}$ for which $N\left(A\right)$ is minimal. Then either $A$ is isomorphic to an initial segment of the simplicial order or $A$ is isomorphic to $B_{r}$. \\ Note that this proves that the only extremal families of size $g_{r}$ for the weak version are $B_{r}$ and the initial segment of size $g_{r}$.\\ \textbf{Proof. }The main idea of the proof is to carefully analyse the codimension-1 compressions. 
Let $A$ be a set of size $\left|A\right|=g_{r}$ for which $N\left(A\right)$ is minimal. Let $I=\left\{ i\,:\,\left|A_{i,+}\right|>\left|A_{i,-}\right|\right\} $. By considering $A\Delta I=\left\{ B\Delta I\,:\,B\in A\right\} $ if necessary, we may assume that $\left|A_{+}\right|\leq\left|A_{-}\right|$ for all directions $i$; note that clearly $A$ is isomorphic to $A\Delta I$. Choose a direction $i$ and, again as in \cite{key-1}, let $C_{+}$ and $C_{-}$ be initial segments of the simplicial order with $\left|C_{+}\right|=\left|A_{+}\right|$ and $\left|C_{-}\right|=\left|A_{-}\right|$, and define $C=C_{-}\cup\left(\left\{ i\right\} +C_{+}\right)$. Since initial segments are nested, we have $\left|C_{\pm}\cup N\left(C_{\mp}\right)\right|=\max\left(\left|C_{\pm}\right|,\left|N\left(C_{\mp}\right)\right|\right)$. Also recall that for all sets $A\subseteq Q_{n}$, by (1) we have \[ \left|N\left(A\right)\right|=\left|N\left(A_{+}\right)\cup A_{-}\right|+\left|A_{+}\cup N\left(A_{-}\right)\right| \] and thus, as in the proof of Proposition 6, it follows that $\left|N\left(C\right)\right|=\left|N\left(A\right)\right|$ and so $N\left(C\right)$ is also minimal. \textbf{Claim 1. }$\left|A_{+}\right|=g_{n-1,r-1}$ or $\left|A_{+}\right|=f_{n-1,r}$. \textbf{Proof of Claim 1. }By the definition of $C_{+}$ it is equivalent to prove the same assertion for $\left|C_{+}\right|$ instead of $\left|A_{+}\right|$. If $\left|C_{+}\right|<g_{n-1,r-1}$ then $\left|C_{-}\right|>g_{n,r}-g_{n-1,r-1}=g_{n-1,r}$ and also $\left|N\left(C_{-}\right)\right|\geq N\left(g_{n-1,r}\right)=g_{n-1,r+1}$, so $\left|N\left(C\right)\right|>g_{n-1,r}+g_{n-1,r+1}=g_{n,r+1}$, which contradicts the minimality of $N\left(C\right)$ as $N\left(g_{n,r}\right)=g_{n,r+1}$. Thus $\left|C_{+}\right|\ge g_{n-1,r-1}$ and on the other hand $\left|C_{+}\right|\leq\frac{1}{2}g_{n,r}=f_{n-1,r}$. Similarly $\left|C_{-}\right|\ge f_{n-1,r}$ and $\left|C_{-}\right|\leq g_{n-1,r}$. Note that an initial segment of size $g_{n-1,r-1}$ in ${\cal P}\left(X_{i}\right)$ is $X_{i}^{(\leq r-1)}\cup\left\{ A\,:\,\left|A\right|=r,\,s\in A\right\} $ where $s$ is the smallest element of $X_{i}$ (i.e. $s=1$ if $i\neq1$ and $s=2$ if $i=1$). Hence it follows from $g_{n-1,r-1}\leq\left|C_{+}\right|\leq f_{n-1,r}$ that $C_{+}=X_{i}^{(\leq r-1)}\cup\left(\left\{ 1\right\} +X_{1,i}^{(r-1)}\right)\cup{\cal A}_{+}$ where ${\cal A}_{+}\subseteq X_{1,i}^{(r)}$. Similarly, the initial segment of size $g_{n-1,r}$ is $X_{i}^{(\leq r)}\cup\left\{ A\,:\,\left|A\right|=r+1,\,1\in A\right\} $, and it follows from $f_{n-1,r}\leq\left|C_{-}\right|\leq g_{n-1,r}$ that $C_{-}=X_{i}^{(\leq r)}\cup\left(\left\{ 1\right\} +{\cal A}_{-}\right)$ where ${\cal A}_{-}\subseteq X_{1,i}^{(r)}$, and \begin{equation} \left|{\cal A}_{+}\right|+\left|{\cal A}_{-}\right|=g_{n,r}-f_{n-1,r}-f_{n-1,r-1}-{n-2 \choose r-1}={n-2 \choose r} \end{equation} We have \[ N\left(C_{-}\right)=X_{i}^{(\leq r+1)}\cup\left(\left\{ 1\right\} +\partial_{1,i}^{+}{\cal A}_{-}\right) \] and \[ N\left(C_{+}\right)=X_{i}^{(\le r)}\cup\left(\left\{ 1\right\} +X_{1,i}^{(r)}\right)\cup\partial_{1,i}^{+}{\cal A}_{+} \] where $\partial_{1,i}^{+}$ is the upper shadow operator with respect to the ground set $X_{1,i}$. The Local LYM inequality for upper shadows states that if ${\cal A}\subseteq X^{(r)}$ then \begin{equation} \frac{\left|\partial^{+}{\cal A}\right|}{{n \choose r+1}}\ge\frac{\left|{\cal A}\right|}{{n \choose r}} \end{equation} and the equality holds if and only if ${\cal A}=X^{(r)}$ or ${\cal A}=\emptyset$. 
Applying (13) to ${\cal A}_{\pm}\subseteq X_{1,i}^{(r)}$, adding these inequalities together and using (12) yields \begin{equation} \left|\partial_{1,i}^{+}{\cal A}_{-}\right|+\left|\partial_{1,i}^{+}{\cal A}_{+}\right|\geq\frac{{n-2 \choose r+1}}{{n-2 \choose r}}\left(\left|{\cal A}_{+}\right|+\left|{\cal A}_{-}\right|\right)={n-2 \choose r+1} \end{equation} It follows from (14) that \[ \left|N\left(C\right)\right|=\left|N\left(C_{+}\right)\right|+\left|N\left(C_{-}\right)\right| \] \begin{equation} \ge f_{n-1,r+1}+f_{n-1,r}+{n-2 \choose r}+{n-2 \choose r+1}=2f_{n-1,r+1}=g_{n,r+1} \end{equation} Since $N\left(C\right)$ is minimal, it follows that equality must hold in (15) and hence equality must hold in both applications of (13); that is, ${\cal A}_{\pm}=\emptyset$ or ${\cal A}_{\pm}=X_{1,i}^{(r)}$. Since $\left|{\cal A}_{+}\right|+\left|{\cal A}_{-}\right|={n-2 \choose r}$, it follows that exactly one of ${\cal A}_{\pm}$ is $\emptyset$ and the other one is $X_{1,i}^{(r)}$. Thus $\left|C_{+}\right|=g_{n-1,r-1}$ (if ${\cal A}_{+}=\emptyset$) or $\left|C_{+}\right|=f_{n-1,r}$ (if ${\cal A}_{+}=X_{1,i}^{(r)}$), which completes the proof of the claim. $\square$ \textbf{Claim 2. }There exists $i$ for which $\left|A_{i,+}\right|=f_{n-1,r}$. \textbf{Proof of Claim 2. }Suppose that such $i$ does not exist. From Claim 1 it follows that $\left|A_{i,+}\right|=g_{n-1,r-1}$ for all $i$. Note that by definition $\left|A_{i,+}\right|=\left|\left\{ B\in A\,:\,i\in B\right\} \right|$, and hence by double counting \begin{equation} \sum_{B\in A}\left|B\right|=\sum_{i=1}^{n}\left|A_{i,+}\right|=ng_{n-1,r-1} \end{equation} For given $\left|A\right|$, the quantity $\sum_{B\in A}\left|B\right|$ is minimal when $A$ is an initial segment of the simplicial order (or indeed any set of the form $A=X^{(\le r)}\cup{\cal A}$ for suitable $r$ and for any ${\cal A}\subseteq X^{(r+1)}$ of appropriate size). Hence if $\left|C\right|=g_{n,r}$ then $\sum_{B\in C}\left|B\right|$ is minimal for $C=X^{(\leq r)}\cup\left(\left\{ 1\right\} +X_{1}^{(r)}\right)$. Note that $\left|C_{1,+}\right|=f_{n-1,r}$ and $\left|C_{i,+}\right|=g_{n-1,r-1}$ for all $i\neq1$. Thus \begin{equation} \sum_{B\in C}\left|B\right|=\left(n-1\right)g_{n-1,r-1}+f_{n-1,r} \end{equation} But $f_{n-1,r}>g_{n-1,r-1}$ for all $n>2$, so (17), together with the fact that $C$ minimises $\sum_{B\in C}\left|B\right|$, contradicts (16). Thus there exists $i$ with $\left|A_{i,+}\right|=f_{n-1,r}$. $\square$ In order to finish the proof, choose $i$ such that $\left|A_{i,+}\right|=f_{n-1,r}$; since $\left|A_{+}\right|\leq\left|A_{-}\right|$ and $\left|A_{+}\right|+\left|A_{-}\right|=g_{n,r}=2f_{n-1,r}$, this gives $\left|A_{\pm}\right|=f_{n-1,r}$. Since $\left|A_{\pm}\cup N\left(A_{\mp}\right)\right|=\left|C_{\pm}\cup N\left(C_{\mp}\right)\right|$ and initial segments are nested, it follows that $A_{\pm}\subseteq N\left(A_{\mp}\right)$ and $N\left(A_{\pm}\right)$ are minimal. But since exact Hamming balls are uniquely minimal for the vertex isoperimetric problem (Proposition 3), it follows that $A_{\pm}=B\left(x_{\pm},r\right)$ for some $x_{\pm}\in Q_{n-1}$, say with $x_{-}=\emptyset$. Now $A_{\pm}\subseteq N\left(A_{\mp}\right)$ implies that $d\left(x_{+},x_{-}\right)\leq1$. If $x_{+}=x_{-}=\emptyset$ then $A$ is isomorphic to an initial segment of the simplicial order (and the isomorphism is given by any $f_{\sigma}$ with $\sigma\left(i\right)=1$). If $x_{+}\neq x_{-}$, we have $x_{+}=\left\{ j\right\} $ for some $j\neq i$. 
It is easy to verify that $A=X^{(\leq r)}\cup\left(\left\{ i,j\right\} +\left(X_{i,j}^{(r-1)}\cup X_{i,j}^{(r)}\right)\right)$, which is isomorphic to $B_{r}$ (via $f_{\sigma}$ for any $\sigma$ with $\sigma\left(i\right)=1$, $\sigma\left(j\right)=2$). $\square$
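\\ \textbf{Remark. }The identities $g_{n,r}=g_{n-1,r-1}+g_{n-1,r}$ and $g_{n,r}=2f_{n-1,r}$ used above, as well as the value $N\left(g_{n,r}\right)=g_{n,r+1}$, are easy to confirm numerically. The short Python sketch below is purely an illustration of ours and forms no part of the argument; it checks the identities for small $n$ and, assuming that $N\left(A\right)$ consists of $A$ together with all points of $Q_{n}$ at Hamming distance at most $1$ from $A$, verifies by brute force that the minimum of $\left|N\left(A\right)\right|$ over all $A\subseteq Q_{4}$ with $\left|A\right|=g_{4,1}$ equals $g_{4,2}$.
\begin{verbatim}
from itertools import combinations
from math import comb

def f(n, r):                      # f_{n,r} = sum_{i<=r} C(n,i)
    return sum(comb(n, i) for i in range(r + 1))

def g(n, r):                      # g_{n,r} = f_{n,r} + C(n-1,r)
    return f(n, r) + comb(n - 1, r)

for n in range(2, 12):            # the two identities quoted above
    for r in range(1, n):
        assert g(n, r) == g(n - 1, r - 1) + g(n - 1, r)
        assert g(n, r) == 2 * f(n - 1, r)

n, r = 4, 1                       # brute force: min |N(A)| over |A| = g_{4,1}
verts = range(1 << n)             # vertices of Q_4 encoded as bitmasks
nbrs = {v: [v ^ (1 << i) for i in range(n)] for v in verts}

def N(A):                         # A together with all its Hamming neighbours
    return set(A) | {w for v in A for w in nbrs[v]}

best = min(len(N(A)) for A in combinations(verts, g(n, r)))
print(best, g(n, r + 1))          # prints: 14 14
\end{verbatim}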
\section{Appendix} \label{sec:appendix} We calculate all of the actions of $\mathbb{A}^{0|1}\rtimes \mathbb{A}^1$ on $\mathbb{A}^{1|1}$. Table \ref{table:twisted_Field-Theories} is the result of this calculation. The goal is to compute all functions \[ R[y, \epsilon] \lra{\mu^*} R[y,\epsilon,x,\delta] \] such that the following diagram commutes \[ \xymatrix{R[y,\epsilon,x,\delta] \ar[r]^-{\mu^* \otimes 1} & R[y,\epsilon,x_1,\delta_1,x_2,\delta_2] \\ R[y,\epsilon] \ar[u]^-{\mu^*} \ar[r]^-{\mu^*} & R[y,\epsilon,x,\delta]. \ar[u]_-{1\otimes m^*}} \] We can describe an arbitrary map $\mu^*$ by the pair of values \[ y \mapsto f_0(x,y)+f_1(x,y)\delta \epsilon \] and \[ \epsilon \mapsto g_0(x,y)\epsilon + g_1(x,y)\delta. \] Going around the diagram the two ways for $y$ gives \[ y \mapsto f_0(x_1x_2,y) +\epsilon(f_1(x_1x_2,y)x_1\delta_2+f_1(x_1x_2,y)\delta_1) \] and \begin{align*} y \mapsto f_0(x_2,f_0(x_1,y)+\epsilon f_1(x_1,y)\delta_1)+ \\(g_0(x_1,y)\epsilon + g_1(x_1,y)\delta_1)(f_1(x_2,f_0(x_1,y)+\epsilon f_1(x_1,y)\delta_1))\delta_2. \end{align*} For $\epsilon$ we get \[ \epsilon \mapsto g_0(x_1x_2,y)\epsilon+g_1(x_1x_2,y)(x_1\delta_2+\delta_1) \] and \begin{align*} \epsilon \mapsto g_0(x_2,f_0(x_1,y)+f_1(x_1,y)\epsilon \delta_1)(g_0(x_1,y)\epsilon +g_1(x_1,y)\delta_1)\\+g_1(x_2,f_0(x_1,y)+f_1(x_1,y)\epsilon \delta_1)\delta_2. \end{align*} By setting $\delta$, $\delta_0$, and $\delta_1$ to zero we recover the (multiplicative) $\mathbb{A}^1$-action (the grading by $\mathbb{N}$) and this puts some strong structural restrictions on $f_0$ and $g_0$. Going around the diagram the two ways we get \[ y \lra{\mu^*} f_0(x,y) \lra{1 \otimes m^*} f_0(x_1x_2,y) \] and \[ y \lra{\mu^*} f_0(x,y) \lra{\mu^* \otimes 1} f_0(x_1,f_0(x_2,y)). \] \begin{lemma} \label{polylemma} Let $R$ be a connected ring of characteristic $0$ and $p(x) \in R[x]$. If \[ p(x_1x_2) = p(x_1)p(x_2) \] then $p(x) = x^n$ for some $n$. \end{lemma} \begin{proof} It follows immediately that the non-zero coefficients of $p(x)$ are idempotents and thus $1$. Now set $x_1 = x_2$ and it is clear that $p(x)$ must be a monomial. \end{proof} \begin{prop} Over a connected ring $R$ \[ f_0(x,y) = x^ky+cx^k-c \] for some $c \in R$. \end{prop} \begin{proof} The unit implies that $f_0(1,y) = y$. We set \[ f_0(x,y) = f_{0}^{0}(x) + f_{0}^{1}(x)y + f_{0}^{2}(x)y^2 + \ldots. \] However, \[ f_0(x_1x_2,y) = f_0(x_1,f_0(x_2,y)) \] implies that $f_{0}^{k}(x) = 0$ for $k>1$ by looking at the highest power of $y$. This gives the equality \[ f_{0}^{1}(x_1x_2)y+f_{0}^{0}(x_1x_2) = f_{0}^{1}(x_1)f_{0}^{1}(x_2)y+f_{0}^{1}(x_1)f_{0}^{0}(x_2)+f_{0}^{0}(x_1). \] We see that $f_{0}^{1}(1)=1$ and $f_{0}^{0}(1) = 0$. Since $f_{0}^{1}(x_1x_2) = f_{0}^{1}(x_1)f_{0}^{1}(x_2)$, Lemma \ref{polylemma} implies that $f_{0}^{1}(x) = x^k$ for some $k$. \begin{comment} We write \[ f_{0}^{1}(x) = a_ix^i + a_jx^j + \ldots, \] where $i$ is the smallest power of $x$ that occurs (of course it may be zero). Now $f_{0}^{1}(x_1x_2) = f_{0}^{1}(x_1)f_{0}^{1}(x_2)$ implies that \[ a_i(x_1x_2)^i+ a_j(x_1x_2)^j = a_{i}^{2}x_{1}^ix_{2}^i+a_{i}a_jx_{1}^ix_{2}^j +a_{i}a_jx_{2}^ix_{1}^j + a_{j}^{2}x_{1}^jx_{2}^j \ldots. \] This gives $a_{k}^{2}=a_k$ for all $k$, $a_ia_j = 0$ when $i \neq j$, and $\sum_k a_k = 1$. This is a system of orthogonal idempotents in the ring. Since $R$ is connected and $a_i \neq 0$, this gives $a_i = 1$ and $a_j = 0$. Now in we may write \[ f_{0}^{1}(x) = x^i + a_kx^k + \ldots, \] for $k > j$. 
Using this method inductively and the finiteness of the polynomial, we find that $f_{0}^{1}(x) = x^i$. \end{comment} Now we write $f_{0}^{0}(x) = c + ax^l+\ldots,$ where $l$ is the smallest nonzero power of $x$ to appear. Now \[ f_{0}^{0}(x_1x_2) = f_{0}^{1}(x_1)f_{0}^{0}(x_2)+f_{0}^{0}(x_1) \] gives \[ c+a(x_1x_2)^l + \ldots = x_{1}^k(c+ax_{2}^l+\ldots)+(c+ax_{1}^l+\ldots). \] Since the polynomial on the left is in terms of $x_1x_2$ we see that all of the higher terms must vanish, $l=k$, and $a=-c$. \end{proof} By conjugating the action by the automorphism of the base given by $y \mapsto y+c$ and $\epsilon \mapsto \epsilon$, we move this by an isomorphism into the form \[ f_0(x,y) = x^ky. \] \begin{prop} Over a connected ring $R$, \[ g_0(x,y) = x^n \] for some $n \in \mathbb{N}$. \end{prop} \begin{proof} Using the result of the previous proposition, we have the relation \[ g_0(x_1x_2,y) = g_0(x_2,x_{1}^{k}y)g_0(x_1,y). \] Writing \[ g_0(x,y) = bx^ly^m + \text{lower terms in $y$} \] makes it clear that $m=0$ is the only solution. Now we have that $g_0(x,y) = q(x)$ and \[ q(x_1x_2) = q(x_1)q(x_2) \] so Lemma \ref{polylemma} implies that $q(x) = g_0(x,y) = x^n$ for some $n$. \end{proof} Using these substitutions, the large formulas above give \[ y \mapsto (x_1x_2)^ky +\epsilon \delta_2 x_1 f_1(x_1x_2,y)+\epsilon \delta_1 f_1(x_1x_2,y) \] and \begin{align*} y \mapsto x_{2}^{k}x_{1}^{k}y + x_{2}^{k} \epsilon \delta_1 f_1(x_1,y) + \\(x_{1}^{n} \epsilon +g_1(x_1,y)\delta_1)\delta_2(f_1(x_2,x_{1}^{k}y+\epsilon f_1(x_1,y) \delta_1)). \end{align*} Comparing like terms and using the fact that $\epsilon^2=\delta_{1}^{2}=0$ gives the relations \begin{align*} x_1f_1(x_1x_2,y) &= x_{1}^{n}f_1(x_2,x_{1}^{k}y)\\ f_1(x_1x_2,y) &= x_{2}^kf_1(x_1,y) \\ g_1(x_1,y)f_1(x_2,x_{1}^{k}y) &= 0. \end{align*} The second relation implies that \[ f_1(x,y) = x^kp(y), \] for some polynomial $p(y)$. Now setting $x_2=1$ in the first relation gives \[ x_{1}^{k+1}p(y) = x_{1}^np(x_{1}^ky). \] By looking at the highest power of $y$ in $p(y)$ it is clear that the polynomial cannot have lower powers; thus we must have $p(y) = ay^m$ for some $a \in R$ and $k+1 = n+mk$. We conclude that \[ f_1(x,y) = ax^ky^m. \] Using similar reasoning with the relations coming from $\epsilon$, it is easy to conclude that one of $f_1(x,y)$ or $g_1(x,y)$ must be zero (a refinement of the third relation above) and that \[ g_1(x,y) = bx^ny^m, \] where $b\in R$ and $n+1=km$. Finally, this allows us to state a general form for actions of $\mathbb{A}^{0|1} \rtimes \mathbb{A}^1$ on $\mathbb{A}^{1|1}$. If $f_1(x,y) \neq 0$, then \[ y \mapsto x^ky+ax^ky^m\epsilon \delta \] and \[ \epsilon \mapsto x^n \epsilon, \] where $k+1 = n+mk$. If $g_1(x,y) \neq 0$, then \[ y \mapsto x^ky \] and \[ \epsilon \mapsto x^n\epsilon + bx^ny^m\delta, \] where $n+1 = km$. One can easily check that both of these maps do indeed give actions. Below is a table containing the forms of the actions of $\mathbb{M}$ on $\mathbb{A}^{1|1}$ for each of the geometries $\mathbb{M} = \mathbb{A}^{0|1}\rtimes \mathbb{Z}/2$ and $\mathbb{M}=\mathbb{A}^{0|1}$. These calculations are similar to the above but easier. Again, it is easy to check that each of the formulas below does in fact give an action. 
\begin{table}[h] \begin{center} { \small \begin{tabular}{|l|l|} \hline Geometry & Coaction \\ \hline \hline $\mathbb{A}^{0|1} \rtimes \mathbb{Z}/2$ & $y \mapsto y+f(y)\epsilon \delta$ \\ $R[x,\delta]/(x^2-1,\delta^2)$ & $\epsilon \mapsto x\epsilon + ax\delta$ \\ & $a \in R$, $af(y) = 0$ \\ \cline{2-2} & $y \mapsto y$ \\ & $\epsilon \mapsto x\epsilon + f(y)x\delta$ \\ & $f(y) \in R[y]$ \\ \cline{2-2} & $y \mapsto yx+f(y)x\epsilon\delta$ \\ & $\epsilon \mapsto \epsilon$\\ & $f(y) = f(xy)$ \\ \cline{2-2} & $y \mapsto yx$\\ & $\epsilon \mapsto \epsilon+f(y)\delta$ \\ & $f(xy) = xf(y)$ \\ \hline $\mathbb{A}^{0|1}$ & $y \mapsto y+f(y)\epsilon \delta$ \\ & $\epsilon \mapsto \epsilon+g(y) \delta$ \\ & $f(y) = 0$ or ($g(y)=a \in R$ and $af(y) = 0$) \\ \hline \end{tabular} } \end{center} \end{table} \begin{comment} $y \mapsto x^ky+ax^ky^m\epsilon\delta$, $\epsilon \mapsto x^n\epsilon$, $k+1=n+mk $, $y\mapsto x^ky$, $\epsilon \mapsto x^n \epsilon+ax^ny^m\delta$, $n+1 = km$ \end{comment} \section{The Action Of The Endomorphisms Of The Super Point} \label{sec:endos} The superalgebraic cartesian set~$\uEnd(\mathbb{A}^{0|1})$ is an internal monoid and consequently $\Sect(\uEnd(\mathbb{A}^{0|1}))$ is a coalgebra. We begin this section by describing this coalgebra explicitly with generators and relations. After this we describe it qualitatively by showing that a coaction by this coalgebra on a supercommutative ring is the same information as a connective super cdga structure on the ring. This is a supercommutative algebra which is equipped with an additional grading by the natural numbers together with an odd, degree-one differential. These results are, to a large extent, well-known, and have appeared in a variety of guises throughout the literature. A very general and closely related version appears in the context of super Fermat theories \cite{Carchedi:2012aa}. Our treatment is heavily influenced by \cite{Stolz-lecture_notes} and \cite{MR2763085}. The superalgebraic cartesian set~$\uEnd(\mathbb{A}^{0|1})$ of endomorphisms of the superpoint (a monoid object) naturally acts on the internal mapping superalgebraic cartesian set~$\usCart(\mathbb{A}^{0|1}, X)$ for any superalgebraic cartesian set~$X$. In the second part of this section we provide conditions on $X$ under which this action leads to a coaction by $\Sect(\uEnd(\mathbb{A}^{0|1}))$. These results do not seem to have appeared in the literature. \begin{comment} In some cases this induces a coaction on the superalgebra of functions $\mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1}, X))$ by the coalgebra $\Sect(\uEnd(\mathbb{A}^{0|1}))$, which may equivalently be described as a connective super cdga structure. What this means is that we have a supercommutative algebra which is equipped with an additional grading by the natural numbers together with an odd, degree-one differential. More generally we will only get a {\em pro}-connective super cdga. In general pro-objects in a category are formal cofiltered limits of objects in that category. \CSP{get ref for pro-object} However for the purposes of this paper we will only need to consider a very restricted subclass of pro-connective super cdgas. First we will only need limits indexed by the poset of natural numbers, rather than general cofiltered categories. Moreover it will be sufficient to consider only those limits of towers where $A_n = \tau_{\leq n} A_n+1$. 
\CSP{Wait, are these categories really different?} \end{comment} The most direct approach to the monoid structure on $\uEnd(\mathbb{A}^{0|1})$ is via the $S$-point formalism. Here $S \in \mathsf{sA}} %{\text{s}\mathcal{A}$ is some unspecified representable superalgebraic cartesian set. The $S$-points of $\uEnd(\mathbb{A}^{0|1}) = \usCart(\mathbb{A}^{0|1}, \mathbb{A}^{0|1})$ are, by construction, the maps $S \times \mathbb{A}^{0|1} \to \mathbb{A}^{0|1}$. This, in turn, is equivalent to a map $S \times \mathbb{A}^{0|1} \to S \times \mathbb{A}^{0|1}$, which commutes with the projection to $S$. This latter description is convenient, as the monoid structure on $\uEnd(\mathbb{A}^{0|1})$ is given by composition. Since $S$ and $S \times \mathbb{A}^{0|1}$ are representable, we may equivalently describe such data by passing to the rings of global functions. We see that an $S$-point of $\uEnd(\mathbb{A}^{0|1})$ is given by an $\Sect(S)$-algebra map \begin{align*} \Sect(S)[\varepsilon] &\to \Sect(S)[\varepsilon] \\ \varepsilon & \mapsto s_{\text{ev}} \cdot \varepsilon + s_{\text{odd}}. \end{align*} where $s_{\text{ev}}$ and $s_{\text{odd}}$ are even, respectively odd, elements of $\Sect(S)$. This description makes explicit the identification $\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1}) \cong \mathbb{A}^{1|1}$ from Lemma \ref{lma:affine}. Moreover we have \begin{align*} &\Sect(\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1}) \times \underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1})) \\ & \cong \Sect(\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1})) \otimes_R \Sect(\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1})) \end{align*} and hence the monoid structure of $\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1})$ induces a comonoid structure for $\Sect(\underline{\sCart}(\mathbb{A}^{0|1}, \mathbb{A}^{0|1}))$. It follows immediately from the formula for composition of affine transformations in one variable that the global functions of the multiplication map for the monoid $\uEnd(\mathbb{A}^{0|1})$ are given by the map \begin{align*} R[x,\epsilon] & \lra{m^*} R[x_1,x_2,\epsilon_1,\epsilon_2] \\ x & \mapsto x_1x_2 \\ \epsilon & \mapsto \epsilon_1+x_1\epsilon_2. \end{align*} This implies: \begin{prop} There is an isomorphism of monoidal superalgebraic cartesian sets \[ \uEnd(\mathbb{A}^{0|1}) \cong \mathbb{A}^{0|1} \rtimes \mathbb{A}^1, \] where $\mathbb{A}^{1}$ acts on $\mathbb{A}^{0|1}$ by scalar multiplication. \end{prop} \begin{comment} In Lemma \ref{affine}, we identified $\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{0|1})$ with $\mathbb{A}^{1|1}$. However, this does not identify the monoidal structure on the endomorphisms. The following result follows immediately from the proof in the smooth case: \end{comment} \begin{definition}\label{def:supercdga} A {\em supercommutative differential graded algebra} (super cdga) is a supercommutative algebra $A$ equipped with \begin{itemize} \item a {\em grading}, i.e. a collection of $R$-module direct summands $A_n \subseteq A$ for each $n \in \mathbb{Z}$ such that $A_p \cdot A_q \subseteq A_{p+q}$, and as $R$-modules $A \cong \bigoplus_n A_n$; and \item a {\em differential}, i.e. an odd derivation $d: A \to A$, which squares to zero, $d^2 = 0$. \end{itemize} We require that the derivation has {\em degree one}, which means $d(A_p) \subseteq A_{p+1}$. A super cdga is {\em connective} if $A_p = 0$ for $p < 0$. 
A {\em weakly graded super cdga} $B$ is defined identically to a super cdga except that we require $B \cong \prod_n B_n$, the direct product of isotypical factors, rather than the direct sum. \end{definition} \begin{prop} \label{cdgastructure} Let $A$ be a supercommutative algebra (such as $\mathcal{O}(Y)$). A coaction by $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$ is equivalent to a connective super cdga structure on $A$. \end{prop} \begin{proof} The semidirect product $\mathbb{A}^{0|1} \rtimes \mathbb{A}^1$ admits a canonical section \[ \mathbb{A}^{0|1} \lra{} \mathbb{A}^{0|1} \rtimes \mathbb{A}^1 \longleftarrow \mathbb{A}^1, \] which allows us to break the action into its constituent parts. A coaction of $\mathcal{O}(\mathbb{A}^1) \cong R[x]$ is equivalent to a connective grading by $\mathbb{N}$, with the elements $a \in A$ of degree $k$ being those where the coaction map sends $a \mapsto a\otimes x^k$. Here by a grading we mean, as above, that $A \cong \oplus A_n$, the direct sum of factors, and not the direct product. A coaction of $\mathcal{O}(\mathbb{A}^{0|1})$ is equivalent to an odd differential. Again, for $a \in A$ the coaction map sends $a \mapsto a + a_1\epsilon$. We set $da = a_1$. The fact that $d$ is a differential follows from the associativity of the action. Now we combine these actions in a twisted way in $\mathbb{A}^{0|1} \rtimes \mathbb{A}^1$. We can check that this tells us that the differential increases degree. The associativity diagram for the coaction is the following: \begin{center} \begin{tikzpicture} \node (LT) at (0, 1.5) {$A$}; \node (LB) at (0, 0) {$A \otimes \mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$}; \node (RT) at (6, 1.5) {$A\otimes \mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$}; \node (RB) at (6, 0) {$A\otimes \mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1) \otimes \mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$.}; \draw [->] (LT) -- node [left] {$\mu^*$} (LB); \draw [->] (LT) -- node [above] {$\mu^*$} (RT); \draw [->] (RT) -- node [right] {$1 \otimes m^*$} (RB); \draw [->] (LB) -- node [below] {$\mu^* \otimes 1$} (RB); \end{tikzpicture} \end{center} Now if $a \in A$ is degree $k$, we have \begin{center} \begin{tikzpicture} \node (LT) at (0, 1.5) {$a$}; \node (RT) at (4.5, 1.5) {$ax^k+(da)x^k\epsilon$}; \node (RB) at (4.5, 0) {$ax_{1}^{k}x_{2}^{k}+(da)x_{1}^{k}x_{2}^{k}\epsilon_1 - (da)x_{1}^{k+1}x_{2}^{k}\epsilon_2$}; \draw [|->] (LT) -- node [above] {$$} (RT); \draw [|->] (RT) -- node [right] {$$} (RB); \end{tikzpicture}, \end{center} which is completely determined by the formula for $m^*$. Going around the other way we discover that there must be an equality $\mu^*(da) = (da)x^{k+1}$. That is, the differential increases degree by $1$. \end{proof} \begin{cor} There is an equivalence of categories between the category of supercommutative algebras with coactions by $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$ and the category of super cdga's. \qed \end{cor} Now we calculate the effect of $\Sect(-)$ on the action map \[ \mu:\uEnd(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q}) \lra{} \usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q}). \] Note that, since everything involved is affine, this gives a coaction of $\Sect(\uEnd(\mathbb{A}^{0|1}))$ on $\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q}))$. 
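The calculation that follows repeatedly uses the comonoid structure on $\Sect(\uEnd(\mathbb{A}^{0|1}))$, that is, the coassociativity of $m^*$ ($x \mapsto x_1x_2$, $\epsilon \mapsto \epsilon_1 + x_1\epsilon_2$), which also underlies the associativity square in the proof of Proposition \ref{cdgastructure}. As a purely illustrative sanity check, not needed for any argument in this paper, coassociativity on the generators can be confirmed symbolically; the sketch below uses SymPy (our choice, as are the variable names), and since the odd generators enter these particular formulas only linearly they may be treated as ordinary commuting symbols for the purpose of the check.
\begin{verbatim}
import sympy as sp

x, e = sp.symbols('x e')                       # generators of R[x, eps]
y1, y2, y3, f1, f2, f3 = sp.symbols('y1 y2 y3 f1 f2 f3')

def m(expr, xa, ea, xb, eb):
    # m*: x |-> xa*xb, eps |-> ea + xa*eb  (the comultiplication above)
    return expr.subs([(x, xa * xb), (e, ea + xa * eb)], simultaneous=True)

for gen in (x, e):
    way1 = m(m(gen, x, e, y3, f3), y1, f1, y2, f2)   # (m* tensor 1) after m*
    way2 = m(m(gen, y1, f1, x, e), y2, f2, y3, f3)   # (1 tensor m*) after m*
    assert sp.expand(way1 - way2) == 0               # the two ways agree
\end{verbatim}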
We write \[ \Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q})) \cong R[x_1,\ldots,x_n,\overline{\e}_1,\ldots,\overline{\e}_q,\overline{x}_1,\ldots,\overline{x}_n,\epsilon_1,\ldots,\epsilon_q], \] where $x_i$ and $\overline{\e}_i$ are even and $\overline{x}_i$ and $\epsilon_i$ are odd. We use this notation because $\overline{x}_i$ is induced by $x_i \in \Sect(\mathbb{A}^{n|q})$ and $\overline{\e}_i$ is induced by $\epsilon_i \in \Sect(\mathbb{A}^{n|q})$. \begin{prop} \label{prop:calc} The coaction of $\Sect(\uEnd(\mathbb{A}^{0|1})) \cong R[x,\epsilon]$ on $\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q}))$ maps \begin{align*} x_i &\mapsto x_i+\overline{x}_i\epsilon \\ \overline{\e}_i &\mapsto \overline{\e}_ix \\ \overline{x}_i &\mapsto \overline{x}_ix \\ \epsilon_i &\mapsto \epsilon_i+\overline{\e}_i\epsilon. \end{align*} \end{prop} \begin{proof} Since $\mathbb{A}^{n|q} \cong (\mathbb{A}^1)^n\times (\mathbb{A}^{0|1})^q$ it suffices to check this on $\mathbb{A}^1$ and $\mathbb{A}^{0|1}$. We prove this for $\mathbb{A}^1$, the case of $\mathbb{A}^{0|1}$ having already been treated during our explicit description of the coalgebra structure of $\Sect(\uEnd(\mathbb{A}^{0|1}))$. Let $T$ be a $\mathbb{Z}/2$-graded commutative $R$-algebra. Using the functor of points a map \[ R[x,\epsilon] \lra{} T \] mapping $x \mapsto t_x$ and $\epsilon \mapsto t_{\epsilon}$ corresponds to the map \[ T[\epsilon] \lra{} T[\epsilon]: \epsilon \mapsto t_x\epsilon + t_{\epsilon}. \] Now $\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^1)) \cong R[x_1,\overline{x}_1]$. A map \[ R[x_1, \overline{x}_1] \lra{} T \] mapping $x_1 \mapsto t_{x_1}$ and $\overline{x}_1 \mapsto t_{\overline{x}_1}$ corresponds to the map \[ T[x] \lra{} T[\epsilon]: x \mapsto t_{x_1} + t_{\overline{x}_1} \epsilon. \] Composing these gives the map \[ T[x] \lra{} T[\epsilon]: x \mapsto (t_{x_1}+t_{\overline{x}_1}t_{\epsilon}) + t_{\overline{x}_1}t_x \epsilon. \] Thus the coaction is the map \[ R[x_1,\overline{x}_1] \lra{} R[x,\epsilon]\otimes_R R[x_1,\overline{x}_1] \] mapping $x_1 \mapsto x_1+\overline{x}_1\epsilon$ and $\overline{x}_1 \mapsto \overline{x}_1x$. \end{proof} The super cdga structure on $\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|q}))$ thus has $x_i$ even of degree $0$, $\overline{x}_i$ odd of degree $1$, $\epsilon_i$ odd of degree $0$, and $\overline{\e}_i$ even of degree $1$. Now let $Y$ be a superalgebraic cartesian set~with an action of $\uEnd(\mathbb{A}^{0|1})$: \[ \uEnd(\mathbb{A}^{0|1}) \times Y \lra{\mu} Y. \] Applying global functions gives a map \begin{equation*} \mu^*: \Sect(Y) \to \Sect(\uEnd(\mathbb{A}^{0|1}) \times Y). \end{equation*} When $Y$ is representable, the codomain decomposes as a tensor product \[ \Sect(\uEnd(\mathbb{A}^{0|1}) \times Y) \cong \Sect(\uEnd(\mathbb{A}^{0|1})) \otimes_R \Sect(Y), \] and hence in this case $\mathcal{O}(Y)$ becomes a supercomodule for the supercoalgebra $R[x,\epsilon]$ described above. In general taking global functions fails to turn products into tensor products, and so we would not generally expect such a coaction. We begin by analyzing the general case in order to see just how bad things can get. Then we show that for a superalgebraic cartesian set~$X$ that is a finite colimit of representables there is a genuine coaction on \[ \Sect(\usCart(\mathbb{A}^{0|1},X)). 
\] Also we show that in the case when there exists $N \in \mathbb{N}$ such that $X$ is a colimit of representables of the form $\mathbb{A}^{n|0}$ where $n < N$ there is an induced coaction on $\Sect(\usCart(\mathbb{A}^{0|1},X))$. Let $Y$ be a superalgebraic cartesian set~with an action by $\uEnd(\mathbb{A}^{0|1})$. Then $Y = \Colim{I} \text{ } \mathbb{A}^{n|q}$ and we have the sequence of isomorphisms \begin{align*} \Sect(\uEnd(\mathbb{A}^{0|1}) \times Y) &\cong \Sect(\uEnd(\mathbb{A}^{0|1}) \times \Colim{I} \text{ } \mathbb{A}^{n|q}) \\ &\cong \Sect(\Colim{I} \text{ }(\uEnd(\mathbb{A}^{0|1}) \times \mathbb{A}^{n|q})) \\ &\cong \Lim{I} \text{ }\Sect((\uEnd(\mathbb{A}^{0|1}) \times \mathbb{A}^{n|q})). \end{align*} Since the action map $\mu$ is natural in $Y$ and $\mathcal{O}(\mathbb{A}^{n|q})$ admits an coaction map, this implies that the map of rings \[ \Sect(Y) \lra{\mu^*} \Sect(\uEnd(\mathbb{A}^{0|1}) \times Y) \] is a limit of coaction maps \[ \Lim{I} \text{ }\Sect(\mathbb{A}^{n|q}) \lra{} \Lim{I} \text{ }\Sect((\uEnd(\mathbb{A}^{0|1}) \times \mathbb{A}^{n|q})). \] Thus Proposition \ref{prop:calc} provides a formula for this map. An element of the ring $\Sect(Y)$ is a compatible family of polynomials in $\Sect(\mathbb{A}^{n|q})$ as $n$ and $q$ vary. Let $\mathbf{x} = \{x_1, \ldots, x_n\}$ and $\boldsymbol{\epsilon} = \{\epsilon_1,\ldots,\epsilon_q\}$. If $(f_i(\mathbf{x},\boldsymbol{\epsilon}))_{i \in I}$ is a compatible family then \[ \mu^*((f_i(\mathbf{x},\boldsymbol{\epsilon}))_{i \in I}) = (\mu^*f_i(\mathbf{x},\boldsymbol{\epsilon}))_{i \in I}, \] where the $\mu^*$ on the right is the coaction of $\Sect(\uEnd(\mathbb{A}^{0|1}))$ on $\Sect(\mathbb{A}^{n|q})$. Now we explain two cases in which the limit of coaction maps induces a coaction by $\Sect(\uEnd(\mathbb{A}^{0|1}))$. \begin{prop} Let $X = \Colim{I} \text{ } \mathbb{A}^{n|q}$ where $I$ is a finite category. Then there is an isomorphism \[ \Sect(\uEnd(\mathbb{A}^{0|1}))\otimes_R \Sect(\usCart(\mathbb{A}^{0|1},X)) \cong \Sect(\uEnd(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},X)). \] This implies that the ring of functions applied to the action map gives a coaction for ``finite" superalgebraic cartesian sets. \end{prop} \begin{proof} The key point here is that $\Sect(\uEnd(\mathbb{A}^{0|1}))$ is flat as an $R$-module. The underlying module is an (infinitely generated) free $R$-module. Now we have the sequence of isomorphisms \begin{align*} \Sect(\uEnd(\mathbb{A}^{0|1}) \times &\usCart(\mathbb{A}^{0|1},X)) \\ &\cong \Sect(\uEnd(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},\Colim{I}\text{ }\mathbb{A}^{n|0})) \\ &\cong \Sect(\uEnd(\mathbb{A}^{0|1}) \times \Colim{I}\text{ }\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0})) \\ &\cong \Sect(\Colim{I}\text{ }(\uEnd(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}))) \\ &\cong \Lim{I}\text{ }\Sect(\uEnd(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0})) \\ &\cong \Lim{I}\text{ }\Sect(\uEnd(\mathbb{A}^{0|1})) \otimes_R \Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0})) \\ &\cong \Sect(\uEnd(\mathbb{A}^{0|1})) \otimes_R \Lim{I}\text{ }\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0})) \\ &\cong \Sect(\uEnd(\mathbb{A}^{0|1})) \otimes_R \Sect(\usCart(\mathbb{A}^{0|1},X)). \end{align*} The second isomorphism follows from the fact that $\mathbb{A}^{0|1}$ is cartesian tiny. The third is because colimits distribute with products in a topos. The fifth is because the objects are affine and the sixth uses the fact that $\Sect(\uEnd(\mathbb{A}^{0|1}))$ is flat. 
\end{proof} \begin{prop} \label{prop:finitedim} Assume that there exists $N \in \mathbb{N}$ such that $X = \Colim{I} \text{ } \mathbb{A}^{n|0}$ with $n < N$. Then the ring of functions on the action map factors through the tensor product. Thus the action map of $\uEnd(\mathbb{A}^{0|1})$ on $\usCart(\mathbb{A}^{0|1},X)$ induces a coaction on global functions. \end{prop} \begin{proof} The functor $\Sect(-)$ applied to the action map gives \[ \Lim{I}\text{ }\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}) \lra{} \Lim{I}\text{ }\Sect(\End(\mathbb{A}^{0|1}) \times \usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0})), \] which is an inverse limit of coactions. We see from Proposition \ref{prop:calc} that the action of $\End(\mathbb{A}^{0|1})$ on $\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}))$ induces a grading on $\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}))) \cong \Sect(\mathbb{A}^{n|n})$. The maximal element in the grading has degree $n$. We claim that this implies that there is a factorization of the map above through \[ \Lim{I}\text{ }\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}) \lra{} \Sect(\End(\mathbb{A}^{0|1})) \otimes_R \left(\Lim{I}\text{ }\Sect(\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}))\right). \] Since $\Sect(\uEnd(\mathbb{A}^{0|1}))$ is a polynomial ring (and not a power series ring), a factorization exists as long as there is no element in $\Lim{I}\text{ }\usCart(\mathbb{A}^{0|1},\mathbb{A}^{n|0}))$ that has unbounded degree. \end{proof} \begin{example} Consider the superalgebraic cartesian set $\Coprod{n \geq 0} \mathbb{A}^{n|0}$, which is the disjoint union of (non-super) affine spaces, one of each dimension. There is an isomorphism \[ \Sect(\usCart(\mathbb{A}^{0|1},\Coprod{n \geq 0} \mathbb{A}^{n|0})) \cong \Prod{n}\Sect(\mathbb{A}^{n|n}) \] and so this ring has unbounded degree. In particular it contains the element $(1,\varepsilon_1,\varepsilon_1\varepsilon_2,\varepsilon_1\varepsilon_2\varepsilon_3, \ldots)$. Thus no factorization as above could exist for this ring and so the action of $\uEnd(\mathbb{A}^{0|1})$ on $\usCart(\mathbb{A}^{0|1},\Coprod{n \geq 0} \mathbb{A}^{n|0})$ does not induce a coaction after taking the ring of functions. \end{example} \begin{comment} \begin{prop} Let $X$ be an arbitrary simplicial set. Then there is some way to make sense of a coaction. \end{prop} \begin{lemma}[\cite{Stolz-lecture_notes}] \label{lemma:stolz} If $Y = \usCart(\mathbb{A}^{0|1}, X)$ for some superalgebraic cartesian set~$X$ with its natural action by $\uEnd(\mathbb{A}^{0|1})$, then on global functions we have a factorization \begin{equation*} \mu^*: \Sect(Y) \to \Sect(\uEnd(\mathbb{A}^{0|1})) \otimes_R \Sect(Y) \to \Sect( \uEnd(\mathbb{A}^{0|1}) \times Y) \end{equation*} and hence $\Sect(Y)$ is a supercomodule for the supercoalgebra $R[x,\epsilon] \cong \Sect( \uEnd(\mathbb{A}^{0|1}))$. \end{lemma} \begin{proof} \CSPcomm{To be copied from Stephan's notes} \end{proof} \end{comment} For a superalgebraic cartesian set~$X$ satisfying the hypotheses of either of the propositions above, let $\Omega^*(X)$ be the super cdga associated to $\mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},X))$ with the coaction by $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$. So if $u$ is the forgetful functor from super cdga's to $\sAlg$ we have $u \Omega^*(X) = \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},X)) = \Omega(X)$. \section{Geometries On The Superpoint} \label{sec:geometries} In this section we will study several possible geometries that can be placed on the superpoint. 
Each of these geometries will give rise to a slightly different breed of supersymmetric $0|1$-dimensional quantum field theory. Following the lead of HKST we will define a geometry in the spirit of Felix Klein's Erlangen program. That is to say a {geometry} is completely specified by its group symmetries, which is a subgroup of $\uAut(\mathbb{A}^{0|1})$. In fact the most natural thing which acts on $\mathbb{A}^{0|1}$ is the {\em monoid} of endomorphisms; we don't see a compelling reason to limit ourselves to sub{\em groups}. \begin{definition}\label{def:geometry} A {\em geometry} on $\mathbb{A}^{0|1}$ is a submonoid $\mathbb{M}$ of the monoid $\uEnd(\mathbb{A}^{0|1})$ of endomorphisms (in superalgebraic cartesian sets). \end{definition} \noindent There are five geometries that we will explore below: \begin{enumerate} \item $\mathbb{M} = \uEnd(\mathbb{A}^{0|1}) \cong \mathbb{A}^{0|1} \rtimes \mathbb{A}^1$ is the full endomorphism monoid. We call this geometry {\em pre-topological}. \item $\mathbb{M} = \uAut(\mathbb{A}^{0|1}) \cong \mathbb{A}^{0|1} \rtimes \mathbb{G}_m$ is the maximal subgroup. We call this geometry {\em topological}. \item $\mathbb{M} = \mathbb{A}^{0|1} \rtimes \mathbb{Z}/2\mathbb{Z}$. Following HKST we call this geometry {\em Euclidean}. \item $\mathbb{M} = \mathbb{A}^{0|1} \times 1$. We call this geometry {\em oriented Euclidean}. \item $\mathbb{M} = 1$. We call this geometry {\em fully-rigid}. \end{enumerate} The geometries (submonoids) include into each other in the following way: \[ 1 \subset \mathbb{A}^{0|1} \times 1 \subset \mathbb{A}^{0|1} \rtimes \mathbb{Z}/2\mathbb{Z} \subset \mathbb{A}^{0|1} \rtimes \mathbb{G}_m \subset \mathbb{A}^{0|1} \rtimes \mathbb{A}^1, \] where we have abused notation and written $\mathbb{G}_m$ for $\mathcal{O}^*(\mathbb{G}_m)$. On global functions these inclusions correspond to the maps of supercommutative bialgebras \[ R[x,\epsilon] \lra{x \mapsto x} R[x,x^{-1},\epsilon] \lra{x \mapsto (1,-1)} (R\times R)[\epsilon] \lra{\pi_1} R[\epsilon] \lra{\epsilon \mapsto 0} R. \] \begin{comment} Let $\mathbb{M}$ be any of the monoids above. We write $\mathcal{O}(\underline{\sCart}(\mathbb{R}^{0|1},X)/\mkern-3mu / \mathbb{M})$ for the $\mathcal{O}(\mathbb{M})$-coinvariant functions in $\mathcal{O}(\underline{\sCart}(\mathbb{R}^{0|1},X))$. \end{comment} \begin{cor} The following are consequences of the proof of Proposition \ref{cdgastructure}: \begin{enumerate} \item A supercommutative algebra with a coaction by $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{G}_m)$ is a $\mathbb{Z}$-graded super cdga. \item A supercommutative algebra with a coaction by $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{Z}/2)$ is a $\mathbb{Z}/2$-graded super cdga. \item A supercommutative algebra with a coaction by $\mathcal{O}(\mathbb{A}^{0|1})$ is a supercommutative algebra with an odd differential. \end{enumerate} \end{cor} Next we define the notion of $\mathbb{M}$-structure on a superalgebraic cartesian set~that is abstractly isomorphic to the superpoint. \begin{definition} Let $X$ be a superalgebraic cartesian set~that is abstractly isomorphic to the superpoint $\mathbb{A}^{0|1}$. An \emph{$\mathbb{M}$-prestructure} on $X$ is a subfunctor \[ \Gamma \subseteq \underline{\sCart}(\mathbb{A}^{0|1},X) \] with the property that $\Gamma$ is closed under the action of $\mathbb{M}$: \[ \Gamma \cdot \mathbb{M} = \Gamma. 
\] An $\mathbb{M}$-isometry between two superalgebraic cartesian sets~equipped with $\mathbb{M}$-prestructures, $(X,\Gamma)$ and $(X',\Gamma ')$, is a map $X \lra{f} X'$ such that $f_*\Gamma \subseteq \Gamma '$. Thus $(X, \Gamma)$ is isomorphic to $(X',\Gamma ')$ if there is an isomorphism $X \lra{f} X'$ such that $f_*\Gamma = \Gamma '$. \end{definition} \begin{example} The superpoint $\mathbb{A}^{0|1}$ has a canonical $\mathbb{M}$-prestructure given by \[ \mathbb{M} \subseteq \underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{0|1}). \] \end{example} \begin{definition} An $\mathbb{M}$-prestructure $(X,\Gamma)$ is an \emph{$\mathbb{M}$-structure} if there exists an isomorphism \[ (X,\Gamma) \cong (\mathbb{A}^{0|1}, \mathbb{M}). \] \end{definition} There is an action of $\mathbb{M}$ on the superalgebraic cartesian set \[ \underline{\sCart}(\mathbb{A}^{0|1},X) \] given by precomposition. We may consider the (categorical) quotient superalgebraic cartesian set \[ \underline{\sCart}(\mathbb{A}^{0|1},X)/\mathbb{M}. \] When $\mathbb{M}$ is a group and thus a subgroup of $\mathbb{A}^{0|1}\rtimes \mathbb{G}_m$, we may consider its normalizer in $\mathbb{A}^{0|1}\rtimes \mathbb{G}_m$ defined by the formula \[ N(\mathbb{M})(\mathbb{A}^{n|q}) = \{g \in (\mathbb{A}^{0|1}\rtimes \mathbb{G}_m)(\mathbb{A}^{n|q}) | g\mathbb{M}(\mathbb{A}^{n|q})g^{-1} = \mathbb{M}(\mathbb{A}^{n|q}). \] In each of the above cases the normalizer is the whole of $\mathbb{A}^{0|1}\rtimes \mathbb{G}_m$. It is clear that $N(\mathbb{M})$ acts on \[ \underline{\sCart}(\mathbb{A}^{0|1},X)/\mathbb{M}. \] When $\mathbb{M}$ is normal in $N(\mathbb{M})$, the action factors through $N(\mathbb{M})/\mathbb{M}$. \begin{example} The most interesting example of this is the Euclidean geometry \[ \mathbb{M} = \mathbb{A}^{0|1}\rtimes \mathbb{Z}/2. \] We have $N(\mathbb{M}) = \mathbb{A}^{0|1}\rtimes \mathbb{G}_m$ and when $2 \in R^{\times}$, $\mathbb{Z}/2 \cong \mathbb{G}_m[2]$, and hence \[ N(\mathbb{M})/\mathbb{M} \cong \mathbb{G}_m/\mathbb{G}_m[2] \cong \mathbb{G}_m. \] Note that an action of $\mathbb{G}_m$ on $\Spec(R)$ that factors through $\mathbb{G}_m/\mathbb{G}_m[2]$ is equivalent to a grading by the even integers. \end{example} \begin{remark} For each of the other geometries this is quite elementary. When $\mathbb{M} = 1$, there is an action of $\mathbb{A}^{0|1}\rtimes \mathbb{G}_m$. When $\mathbb{M} = \mathbb{A}^{0|1}$, there is an action of $\mathbb{G}_m$. \end{remark} \begin{comment} Next we develop the notion of $\mathbb{M}$-isometry. It plays an important role in the definition of the bordism category. Given the mapping superalgebraic cartesian set~from a finite collection of superpoints mapping into a superalgebraic cartesian set~$X$ \[ \underline{\sCart}(\Coprod{k} \mathbb{A}^{0|1},X) \] we may permute the superpoints in the domain and also precompose with a collection of endomorphisms of the superpoint. Recall from Example \ref{Groups} that there is a fully faithful embedding of the category of groups into the category of superalgebraic cartesian set. Thus there is an action of the superalgebraic cartesian set~$\mathbb{M} \wr \Sigma_k$ on $\underline{\sCart}(\Coprod{k} \mathbb{A}^{0|1},X)$. 
\begin{definition} An $S$-family of \emph{$\mathbb{M}$-isometries} of the $S$-family of bordisms $\coprod_k \mathbb{A}^{0|1}$ is given by a map \begin{equation*} S \to \left(\prod_k \mathbb{M} \right) \rtimes \Sigma_k = \coprod_{\Sigma_k} \prod_k \mathbb{M} \end{equation*} where the monoid is the wreath product of $\mathbb{M}$ and the symmetric group on $k$ elements. \end{definition} \end{comment} \begin{comment} An $\mathbb{M}$-isometry of $\underline{\sCart}(\Coprod{k} \mathbb{A}^{0|1},X)$ is a morphism which permutes the superpoint factors and on individual factors is an element of $\mathbb{M}$. There are two (for the purposes of this paper) important subfunctors of the endomorphisms. These are the automorphisms $\mathbb{A}^{0|1} \rtimes \mathbb{G}_m$, ``corepresented" by \[ R[x,x^{-1},\epsilon] \] and the Euclidean automorphisms $\mathbb{A}^{0|1} \rtimes \mathbb{Z}/2$ (following the nomenclature of Stolz-Teichner), ``corepresented" by \[ (R\times R)[\epsilon]. \] These are the subfunctors induced by the following maps (of supercommutative bialgebras) \[ R[x,\epsilon] \lra{x \mapsto x} R[x,x^{-1},\epsilon] \lra{x \mapsto (1,-1)} (R\times R)[\epsilon]. \] \end{comment} \section{Polynomial Forms via Superalgebraic Cartesian Sets} \label{sec:forms} In \cite{Sull}, Sullivan introduced a simplicial commutative differential graded algebra (cdga) called $\Omega^{*}_{\bullet}$. It is defined on $n$-simplices by the formula \[ \sSet(\Delta^n,\Omega^{*}_{\bullet}) \cong R[x_1, \ldots, x_n, dx_1, \ldots, dx_n], \] where $|x_i| = 0$. This is the cdga of K\"ahler differential forms on the polynomial algebra $R[x_1, \ldots, x_n]$. The simplicial maps are built just as in the functor $i$ introduced in Subsection~\ref{sec:SACS_simplicial}. The $n$-simplices of the simplicial cdga in fact have the structure of a super cdga (a cdga with a $\mathbb{Z}/2$ grading and an odd, degree-one differential). The elements $x_i$ are even of degree $0$ and the elements $dx_i$ are odd of degree $1$. For any simplicial set $X$ the set of maps $\sSet(X, \Omega^{*}_{\bullet}) =: \Omega_R^*(X)$ is a commutative differential graded $R$-algebra, which is only weakly graded if $X$ is infinite dimensional (cf. Def.~\ref{def:supercdga}). This cdga has a concrete description. An element consists of a compatible choice $\{\omega_\sigma\}_{\sigma \in X}$ of polynomial K\"ahler differential forms, one for each simplex $\sigma$ of $X$. This collection is required to be compatible with the restriction maps in the obvious way. When $R$ is a $\mathbb{Q}$-algebra, the simplicial cdga $\Omega^{*}_{\bullet}$ has the property that, for a simplicial set $X$, \[ H^*(\sSet(X, \Omega^{*}_{\bullet})) \cong H^*(X,R), \] where $H^*(X,R)$ is the singular cohomology of $X$ with coefficients in the ring $R$ (cf. \cite[Thm~7.1]{Sull}). The cdga $\Omega_\mathbb{Q}^*(X)$ is Sullivan's cdga of rational polynomial differential forms. There is a forgetful functor $u$ from the category of super cdga's to $\sAlg$. The simplicial supercommutative algebra \[ u\Omega^{*}_{\bullet}: \Delta^{op} \lra{} \sAlg \] will play an important role in this section, where we prove that for any simplicial set $X$ there is a natural isomorphism of supercommutative algebras \[ \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1}, i_!X)) \cong u\sSet(X, \Omega^{*}_{\bullet}). \] We begin by studying the relationship between $\Omega$ and $\Omega^{*}_{\bullet}$. 
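Before comparing $\Omega$ and $\Omega^{*}_{\bullet}$, it may help to spell out the smallest case. On the $1$-simplex we have $\Omega^{*}_{\bullet}(\Delta^1)\cong R[x_1,dx_1]$, with the differential of K\"ahler forms sending $p(x_1)+q(x_1)\,dx_1$ to $p'(x_1)\,dx_1$; the assumption that $R$ is a $\mathbb{Q}$-algebra enters, for instance, when one integrates polynomial coefficients to show that every $1$-form on $\Delta^1$ is exact. The following SymPy sketch is an illustration of ours (with its own variable names); it encodes a form as the pair $(p,q)$ and checks these two statements.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')                      # plays the role of the even generator x_1

def d(form):
    # a polynomial form p + q dt is encoded as the pair (p, q); d sends it to p' dt
    p, q = form
    return (sp.Integer(0), sp.diff(p, t))

omega = (t**3 + 2*t, 5*t**2 - 1)
assert d(d(omega)) == (0, 0)             # the differential squares to zero

q = 7*t**4 - t + 3                       # an arbitrary polynomial coefficient
P = sp.integrate(q, t)                   # needs denominators: here Q-coefficients enter
assert sp.simplify(d((P, sp.Integer(0)))[1] - q) == 0   # q dt is exact: q dt = dP
\end{verbatim}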
\begin{prop} There is a natural isomorphism of simplicial supercommutative algebras (where $\Omega$ is the superalgebraic cartesian set~from Example~\ref{ex:Omega}) \[ u\Omega^{*}_{\bullet} \cong i^* \Omega. \] \end{prop} \begin{proof} Evaluating on $\Delta^n$ gives \begin{align*} \sSet(\Delta^n, i^* \Omega) &\cong \sCart(i_!\Delta^n, \Omega) \\ &\cong \sCart(\mathbb{A}^n, \Omega) \\&\cong \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^n)) \\ &\cong \mathcal{O}(\mathbb{A}^{n|n}) \\ &\cong R[x_1,\ldots,x_n,\epsilon_1, \ldots, \epsilon_n] \\ &\cong u\Omega^{*}_{\bullet}(\Delta^n). \end{align*} \end{proof} As a special case we get the following corollary: \begin{cor} \label{maincor} For any simplicial set $X$ there is an isomophism \[ \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1}, i_!X)) \cong u\sSet(X, \Omega^{*}_{\bullet}). \] \end{cor} \begin{proof} There are isomorphisms \begin{align*} u\sSet(X, \Omega^{*}_{\bullet}) &\cong \sSet(X, u\Omega^{*}_{\bullet}) \\ &\cong \sSet(X, i^*\Omega) \\ &\cong \sCart(i_!X,\Omega) \\ &\cong \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1}, i_!X)). \end{align*} The last isomorphism is an application of Corollary \ref{cor:functions_on_superpoints}. \end{proof} In other words, for any simplicial set $X$ the supercommutative algebra underlying the commutative differential graded algebra $\Omega^*_R(X)$ of polynomial differential forms over the ring $R$ on $X$ is naturally isomorphic to the ring of functions on the internal mapping object $\usCart(\mathbb{A}^{0|1}, i_!X)$. \begin{prop} For $X$ a simplicial set, there is an isomorphism of (weakly graded) super cdga's \[ \Omega^*(i_!X) \cong \sSet(X,\Omega^{*}_{\bullet}). \] \end{prop} \begin{proof} We will first consider the case of $X$ a finite dimensional simplicial set. In Corollary \ref{cor:functions_on_superpoints}, we showed that the above is an isomorphism of the underlying supercommutative algebras. Here we lift this to the category of super cdga's. The forgetful functor $u$ creates limits and $i_!$ preserves colimits. This implies that $X = \Colim{\Delta^k \rightarrow X}\Delta^k$ satisfies the conditions of Proposition \ref{prop:finitedim}. Now there are isomorphisms \begin{align*} \Omega^*(i_!X) &\cong \Omega^*(\Colim{i_!(\Delta^k \rightarrow X)}i_!\Delta^k) \\ &\cong \Omega^*(\Colim{i_!(\Delta^k \rightarrow X)}\mathbb{A}^k) \\ &\cong \Lim{i_!(\Delta^k \rightarrow X)}\Omega^*(\mathbb{A}^k). \end{align*} Thus it suffices to prove the result for $\mathbb{A}^k \cong i_!\Delta^k$. Now this follows from Proposition \ref{prop:calc}. Thus $\overline{x}_i$ corresponds to $dx_i$. \begin{comment} The coaction of $\mathcal{O}(\mathbb{A}^{0|1} \rtimes \mathbb{A}^1)$ on $\mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^k)) \cong \mathcal{O}(\mathbb{A}^{k|k})$ is given by a map \[ R[y_1, \ldots, y_k,\delta_1, \ldots, \delta_k] \lra{} R[y_1, \ldots, y_k,\delta_1, \ldots, \delta_k] \otimes_R R[x, \epsilon], \] sending \[ y_i \mapsto y_i + \delta_i \epsilon \] and \[ \delta_i \mapsto \delta_ix. \] We see that the degree of $y_i$ is zero, the degree of $\delta_i$ is one, and $dy_i = \delta_i$. This is the same as the definition of $\Omega^{*}_{\bullet}$ given at the beginning of Section~\ref{sec:forms}. \end{comment} Now let $X$ be an infinite dimensional simplicial set. We may write it as a colimit of its finite dimensional skeleta. This implies that $\Omega(X)$ admits a sequential inverse limit of coactions. We have \begin{equation*} \Omega (X) \cong \Prod{i \in \mathbb{N}} \Omega^i(X). 
\end{equation*} This fails to have a super cdga structure only in that it is the direct product over isotypical factors, rather than the direct sum. Hence it is weakly graded. \end{proof} \section{Superalgebraic Cartesian Sets} \label{sec:sacs} Let $\sAlg$ be the category of $\mathbb{Z}/2$-graded commutative rings and grading preserving maps. We will refer to objects in this category as supercommutative rings. In this section we introduce superalgebraic cartesian sets. These are a species of space which are a primordial mixture of the concepts of supermanifold, (super)scheme, and simplicial set. While everything we will explain in this section is super (i.e. $\mathbb{Z}/2$-graded commutative), one could just as well form an ungraded analogue called algebraic cartesian sets. \begin{definition} Fix a commutative ring $R$. The superalgebraic cartesian category $\mathsf{sA}} %{\text{s}\mathcal{A}$ has objects $\mathbb{A}^{n|q}_{R}$ for $n,q \in \mathbb{N}$ and morphisms the polynomial maps \[ \mathsf{sA}} %{\text{s}\mathcal{A}(\mathbb{A}^{n|q}_{R}, \mathbb{A}^{m|p}_{R}) = \sAlg^{\text{op}}(R[x_1,\ldots,x_n,\epsilon_1,\ldots,\epsilon_q],R[x_1,\ldots,x_m,\epsilon_1,\ldots,\epsilon_p]). \] \end{definition} \begin{remark} Here and throughout the paper the degree of anything called $\epsilon$ or $\delta$ will be odd. Thus these are square zero elements of the supercommutative ring. \end{remark} \noindent Hence $\mathsf{sA}} %{\text{s}\mathcal{A}$ is a full subcategory of the opposite of the category of supercommutative $R$-algebras. \begin{definition} The category of superalgebraic cartesian sets~is the category of presheaves $\sCart := \Pre(\mathsf{sA}} %{\text{s}\mathcal{A})$. A superalgebraic cartesian set~is an object of $\sCart$. \end{definition} \begin{example} We will often abuse notation and write $\mathbb{A}^{n|q}$ instead of $\mathbb{A}^{n|q}_{R}$ for the {\em representable} superalgebraic cartesian sets, however, everything that we do will be functorial in the ring $R$. Note that $\mathbb{A}^{n|q} \cong (\mathbb{A}^1)^{n}\times(\mathbb{A}^{0|1})^{q}$. The {\em superpoint} is the superalgebraic cartesian set~$\mathbb{A}^{0|1}$. \end{example} \subsection[as a presheaf topos]{Superalgebraic cartesian sets as a presheaf topos} \label{subsec:topos} The category of superalgebraic cartesian sets~is, by definition, a presheaf topos and consequently it enjoys the nicest possible categorical properties. \begin{example} \label{ex:innerhom} The category $\sCart$ is cartesian closed. The categorical product of two superalgebraic cartesian sets~$X$ and $Y$ is computed pointwise, and for each superalgebraic cartesian set~$X$, the right adjoint to $X \times (-)$ is given by the internal mapping functor $\underline{\sCart}(X, -)$. The internal mapping superalgebraic cartesian set~is given as the presheaf \[ \underline{\sCart}(X,Y): \sCart \lra{} \Set \] mapping \[ \mathbb{A}^{n|q} \mapsto \sCart(\mathbb{A}^{n|q} \times X, Y). \] \end{example} The category of superalgebraic cartesian sets~is complete and cocomplete with both limits and colimits computed pointwise \[ (\colim X_\alpha)(\mathbb{A}^{m|p}) = \colim (X_\alpha(\mathbb{A}^{m|p})) \] and \[ (\lim X_\alpha)(\mathbb{A}^{m|p}) = \lim (X_\alpha(\mathbb{A}^{m|p})). \] As a topos, superalgebraic cartesian sets~are also a context in which to carryout mathematics. We can almost effortlessly study the theories of groups, monoids, commutative rings, modules, categories, and even supercommutative rings, internally to superalgebraic cartesian sets. 
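To see the definition in completely concrete terms: a morphism $\mathbb{A}^{1|1}_{R}\to\mathbb{A}^{1|1}_{R}$ is, by definition, a grading-preserving algebra map $R[x,\epsilon]\to R[x,\epsilon]$, and with a single odd generator this is forced to be a substitution $x\mapsto p(x)$, $\epsilon\mapsto q(x)\epsilon$; composition in the superalgebraic cartesian category corresponds contravariantly to composition of substitutions. The small SymPy sketch below is an illustration of ours (the particular polynomials are arbitrary); it computes the functions-level composite of two such morphisms. No relation $\epsilon^{2}=0$ needs to be imposed here because every expression that occurs is linear in $\epsilon$.
\begin{verbatim}
import sympy as sp

x, e = sp.symbols('x e')

def pullback(p, q):
    # the algebra map R[x,eps] -> R[x,eps] determined by x |-> p, eps |-> q*eps
    return lambda expr: sp.expand(expr.subs([(x, p), (e, q * e)],
                                            simultaneous=True))

fstar = pullback(x**2 + 1, 3 * x)   # pullback of a morphism f: A^{1|1} -> A^{1|1}
gstar = pullback(2 * x, x**2)       # pullback of a morphism g: A^{1|1} -> A^{1|1}

# (g o f)* = f* o g*, and the composite is again a substitution of the same shape:
print(fstar(gstar(x)))              # 2*x**2 + 2
print(fstar(gstar(e)))              # 3*x**5*e + 6*x**3*e + 3*x*e
\end{verbatim}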
\begin{example}\label{example:the_ring_opbject_O} There is an important supercommutative algebra object $\Sect \in \sCart$. As a superalgebraic cartesian set~we have $\Sect = \mathbb{A}^{1|1}$. Addition is given by \[ R[x,\epsilon] \lra{} R[x_1,x_2,\epsilon_1,\epsilon_2]:(x \mapsto x_1+x_2, \epsilon \mapsto \epsilon_1 + \epsilon_2) \] and multiplication is given by \[ R[x,\epsilon] \lra{} R[x_1,x_2,\epsilon_1,\epsilon_2]: (x \mapsto x_1x_2+\epsilon_1\epsilon_2, \epsilon \mapsto x_1\epsilon_2+x_2\epsilon_1), \] where we have used the embedding of $\mathsf{sA}} %{\text{s}\mathcal{A}$ into $\sAlg^{\text{op}}$ to write down these maps. \end{example} Every topos has a {\em global sections} functor $\Gamma$ which is given by evaluation on the terminal object. In the language of topos theory this is a geometric morphism to the terminal topos, the category of sets. In the case at hand, we have even more structure. Since the category $\mathsf{sA}} %{\text{s}\mathcal{A}$ has all finite products the category of superalgebraic cartesian sets~is a {\em cohesive topos} \cite{MR2125786, MR2369017} (just like the category of simplicial sets). This means that we have a series of adjunctions: \begin{equation*} \pi_0 \dashv \const \dashv \Gamma \dashv \codis \end{equation*} and moreover the functor $\pi_0$ commutes with finite products. In more detail these functors are given by: \begin{align*} \codis: \Set & \to \sCart \\ S & \mapsto (\mathbb{A}^{m|p} \mapsto S^{R^{\times p}} = S^{\Gamma(\mathbb{A}^{m|p})}) \\ \Gamma: \sCart & \to \Set \\ X & \mapsto X(\mathbb{A}^0) \\ \const: \Set & \to \sCart \\ S & \mapsto \coprod_S \mathbb{A}^0 \\ \pi_0 : \sCart & \to \Set \\ X & \mapsto \colim_{\mathsf{sA}} %{\text{s}\mathcal{A}^{\textrm{op}}} X \end{align*} The functor $\pi_0$ sends a superalgebraic cartesian set, viewed as diagram of sets indexed on $\mathsf{sA}} %{\text{s}\mathcal{A}^{\textrm{op}}$, to its colimit. The functor $\Gamma$ evaluates a superalgebraic cartesian set~on the terminal object $\mathbb{A}^0$. The functor $\const$ sends a set to the constant presheaf on that set, and $\codis$ sends a set to the {\em codiscrete} superalgebraic cartesian set~on that set. These functors allow us to pass back and forth between set based mathematical concepts and those same concepts developed internally to superalgebraic cartesian sets. For example every ring object in superalgebraic cartesian sets~has, via the functor $\Gamma$, an underlying ordinary ring. For example $\Gamma(\Sect) = R$ is our chosen base ring. Similarly every ordinary ring may be augmented, via the functor $\const$, to a ring object internal to superalgebraic cartesian sets. The counit map \begin{equation*} \const(R) \to \Sect \end{equation*} is automatically a map of ring objects. These observations will be used in Section~\ref{sec:scommalg-in-sacs}. As we mentioned above, all of these considerations are functorial in the base ring $R$. A ring homomorphism $R' \to R$ induces a functor $\mathsf{sA}} %{\text{s}\mathcal{A}_{R'} \to \mathsf{sA}} %{\text{s}\mathcal{A}_R$ and hence gives rise to a geometric morphism of topoi \begin{equation*} f^*: \sCart_{R'} \leftrightarrows \sCart_R: f_* \end{equation*} where the {\em restriction of scalars} $f_*$ is given by precomposition with $\mathsf{sA}} %{\text{s}\mathcal{A}_{R'}^{\textrm{op}} \to \mathsf{sA}} %{\text{s}\mathcal{A}_R^{\textrm{op}}$. 
In particular since this is a geometric morphism of topoi the left-adjoint, which is given by left Kan extension along the Yoneda embedding, commutes with finite limits. Moreover this morphism of topoi is {\em local}, that is the functor $f_*$ admits a further right adjoint $f^!: \sCart_{R'} \to \sCart_R$. \subsection[and superalgebras]{Superalgebraic cartesian sets and superalgebras} \label{subsec:superalg} Superalgebraic cartesian sets have a close connection to superalgebras and superschemes. The category $\mathsf{sA}} %{\text{s}\mathcal{A}$ is the multisorted Lawvere theory\footnote{In fact it is a {\em super Fermat theory} \cite{Carchedi:2012ab}.} for supercommutative $R$-algebras, which means that supercommutative $R$-algebras in any category $\mathcal{C}$ with finite products are the same as product preserving functors $\mathsf{sA}} %{\text{s}\mathcal{A} \to \mathcal{C}$. The {\em generic object} of $\mathsf{sA}} %{\text{s}\mathcal{A}$ is the supercommutative $R$-algebra $\Sect$ from Example~\ref{example:the_ring_opbject_O}. \begin{example} The Yoneda embedding $\mathsf{sA}} %{\text{s}\mathcal{A} \to \sCart$ preserves products and corresponds to the supercommutative $R$-algebra object $\Sect$ in superalgebraic cartesian sets~as in Example~\ref{example:the_ring_opbject_O}. \end{example} Recall that $\mathsf{sA}} %{\text{s}\mathcal{A}$ is a full subcategory of $\sAlg^{\text{op}}$. The embedding of $\mathsf{sA}} %{\text{s}\mathcal{A}$ into $\sAlg^{\text{op}}$ is via the functor $\Sect(-) = \sCart(-, \Sect)$. This formula extends the functor $\Sect$ to all of $\sCart$, and for a superalgebraic cartesian set~$X$ we will refer to $\Sect(X)$ as the {\em ring of global functions} on $X$. \begin{example} \label{ex:superspec} The functor $\Sect: \sCart \to \sAlg^{\textrm{op}} $ from superalgebraic cartesian sets~to the opposite category of supercommutative $R$-algebras is easily seen to commute with colimits. It follows that it is given by left Kan extension of its restriction to $\mathsf{sA}} %{\text{s}\mathcal{A}$ along the Yoneda embedding. \begin{center} \begin{tikzpicture} \node (LT) at (0, 1.5) {$\mathsf{sA}} %{\text{s}\mathcal{A}$}; \node (LB) at (0, 0) {$\sCart$}; \node (RT) at (4, 1.5) {$\sAlg^{\text{op}}$}; \draw [->] (LT) -- node [left] {$y$} (LB); \draw [->] (LT) -- node [above] {$\Sect$} (RT); \draw [->] (LB) -- node [above left] {$\Sect$} (RT); \draw [transform canvas={yshift=-1ex},->] (RT) -- node [below right] {$\Sect^*$} (LB); \end{tikzpicture} \end{center} We obtain an adjunction: \begin{equation*} \Sect: \sCart \rightleftarrows \sAlg^{\text{op}}: \Sect^*. \end{equation*} The right adjoint $\Sect^*$ is the functor sending a supercommutative algebra $A$ to the superalgebraic cartesian set~defined via \begin{equation*} \sCart(\mathbb{A}^{n|q}, \Sect^*(A)) \cong \sAlg^{\text{op}}(\Sect(\mathbb{A}^{n|q}), A) = \sAlg(A, \Sect(\mathbb{A}^{n|q})). \end{equation*} Thus every supercommutative algebra gives rise to a superalgebraic cartesian set. \end{example} \begin{example} \label{ex:Omega} We define a superalgebraic cartesian set~called $\Omega$ (purposefully similar to Sullivan's $\Omega^{*}_{\bullet}$ introduced in Section~\ref{sec:forms}) which sends $\mathbb{A}^{n|q}$ to the supercommutative ring $\mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{n|q}))$. Thus $\Omega$ is another supercommutative ring object in superalgebraic cartesian sets. 
It is an algebra over the supercommutative ring $\Sect$, and we will see in Section~\ref{sec:endos} that $\Omega(\mathbb{A}^{n|q})$ is isomorphic to the ring of K\"ahler differential forms on $\Sect(\mathbb{A}^{n|q})$. \end{example} \subsection[and simplicial sets]{Superalgebraic cartesian sets and simplicial sets}\label{sec:SACS_simplicial} Let $\Delta$ be the category of combinatorial simplices (i.e. the category of finite non-empty totally ordered sets and order preserving maps). There is an important faithful functor (which factors through the category of (non-super) algebraic cartesian sets) \[ i: \Delta \lra{} \mathsf{sA}} %{\text{s}\mathcal{A}. \] The functor $i$ sends $[n]$ to $\mathbb{A}^{n} = \mathbb{A}^{n|0}$ and we use the isomorphism \[ \mathcal{O}(\mathbb{A}^n) = R[x_1, \ldots, x_n] \cong R[x_0,\ldots,x_n]/(\Sigma_i x_i - 1) \] to exhibit the simplicial structure maps and verify the simplicial identities. The $\mathbb{A}^{n|0}$ may be viewed as {\em extended simplices}. \begin{example}\label{example:sSetasSACS} Let $\sSet = \Pre(\Delta)$ be the category of simplicial sets. We apologize for the use of the letter ``s" for both simplicial and super. Given a simplicial set $X$, we can form a superalgebraic cartesian set~by left Kan extension. We have the following diagram \begin{center} \begin{tikzpicture} \node (LT) at (0, 1.5) {$\Delta$}; \node (LB) at (0, 0) {$\sSet$}; \node (MT) at (2, 1.5) {$\mathsf{sA}} %{\text{s}\mathcal{A}$}; \node (RT) at (4, 1.5) {$\sCart$}; \draw [->] (LT) -- node [left] {$y$} (LB); \draw [->] (LT) -- node [above] {$i$} (MT); \draw [->] (MT) -- node [above] {$y$} (RT); \draw [->] (LB) -- node [above left] {$i_!$} (RT); \draw [transform canvas={yshift=-1ex},->] (RT) -- node [below right] {$i^*$} (LB); \end{tikzpicture} \end{center} and the superalgebraic cartesian set~associated to $X$ is $i_{!}X$, the left Kan extension along the Yoneda embedding. We will call this the {\em algebraic realization} of $X$, in analogy with the geometric realization. This fits into an adjunction with the restriction functor $i^*$ that takes a superalgebraic cartesian set~to its underlying simplicial set. \begin{equation*} i_!: \sSet \rightleftarrows \sCart: i^* \end{equation*} Given a superalgebraic cartesian set~$Y$ and a simplicial set $X$, there is a natural isomorphism \[ \sSet(X, i^* Y) \cong \sCart(i_! X, Y). \] As a left adjoint, $i_!$ commutes with colimits. Furthermore, $i^*$ also commutes with colimits, hence it admits a further right adjoint $i_*$, given by right Kan extension. The triple $(i_!, i^*, i_*)$ constitutes an {\em essential morphism of topoi} \cite{MR2369017} from $\sSet$ to $\sCart$. \end{example} \begin{prop} Recall the functor $\pi_0: \sCart \to \Set$ introduced previously. There is a natural isomorphism $\pi_0 \cong \pi_0 \circ i^*$; in other words, the functor $\pi_0$ applied to a superalgebraic cartesian set~may be computed as the set of path components of the underlying simplicial set. Similarly $\pi_0 \cong \pi_0 \circ i_!$: the path components of a simplicial set may be computed as the value of $\pi_0$ applied to its algebraic realization. \end{prop} \begin{proof} Recall that $\pi_0 X = \colim_{\mathsf{sA}} %{\text{s}\mathcal{A}^{\textrm{op}}}X$ and that $\pi_0 i^*(X) = \colim_{\Delta^{\textrm{op}}} X \circ i$. Thus one way to see this is to show directly that $\Delta^{\textrm{op}}$ is cofinal in $\mathsf{sA}} %{\text{s}\mathcal{A}^{\textrm{op}}$.
Alternatively, first observe that $i_*$ sends discrete simplicial sets $\coprod_S \Delta^0$ to constant superalgebraic cartesian sets~$\coprod_S \mathbb{A}^0 = \const(S)$. This follows formally from the observation that $i^* \mathbb{A}^{m|p}$ is a connected simplicial set for each $m|p$. From this the above proposition follows immediately since for any set $S$ and any superalgebraic cartesian set~$X$ we have \begin{align*} \Set( \pi_0 X, S) &\cong \sCart(X, \const(S)) \\ & \cong \sCart(X, i_* \coprod_S \Delta^0) \\ & \cong \sSet(i^* X, \coprod_S \Delta^0) \\ & \cong \Set( \pi_0 i^* X, S). \end{align*} The second statement is easier: \begin{align*} \pi_0 X & \cong \pi_0 \Colim{(\Delta^k \rightarrow X)} \Delta^k \\ & \cong \Colim{(\Delta^k \rightarrow X)} \pi_0 \Delta^k \\ & \cong \Colim{(\Delta^k \rightarrow X)} \pi_0 i_!(\Delta^k) \\ & \cong \pi_0 i_! \Colim{(\Delta^k \rightarrow X)} \Delta^k \\ & \cong \pi_0 i_! X. \end{align*} The first and last isomorphisms just rewrite $X$ as a colimit over its simplices, the second and fourth isomorphisms follow from the fact that the functors $\pi_0$ and $i_!$ commute with colimits (they are left adjoints), and the third isomorphism is the fact that $\pi_0 \mathbb{A}^{k|0} \cong pt \cong \pi_0 \Delta^k$. \end{proof} \begin{example} The functor $i$ from Example~\ref{example:sSetasSACS} factors through the category $\mathsf{F}$ of finite non-empty sets. Thus there is a situation which is entirely analogous to the previous one with simplicial sets replaced with the category $\Pre(\mathsf{F})$ of presheaves on $\mathsf{F}$. The latter is sometimes called the category of {\em symmetric simplicial sets}. In fact the category of symmetric simplicial sets should be regarded as a special case of our notion of superalgebraic cartesian sets; it is the case where the base ring is $\mathbb{F}_1$, the {\em ``field with one element''}. The functors corresponding to $i_!$ and $i^*$ above are then just `base change' and `restriction of scalars' between $\mathbb{F}_1$ and $R$. \end{example} These observations suggest that we should regard superalgebraic cartesian sets~as an enhanced version of simplicial sets. They are symmetric simplicial sets equipped with additional `face' and `degeneracy' operators which depend on the base ring $R$. \section{The Picard Category} \label{sec:scommalg-in-sacs} Recall that we have a series of adjunctions \begin{equation*} \pi_0 \dashv \const \dashv \Gamma \dashv \codis \end{equation*} which relate the topos of sets to the topos of superalgebraic cartesian sets. The global sections functor $\Gamma$ is a left inverse to the constant presheaf functor, $\Gamma \circ \const \cong id_{\Set}$. Hence we can identify the category of sets with the full subcategory of constant presheaves. For example, the ground ring $R$ induces a ring object $\const(R)$ in $\sCart$, the constant presheaf with value $R$, which we will denote by $R$ to simplify notation. Recall that the object $\Sect$ is a supercommutative $R$-algebra in $\sCart$ and that $\Gamma(\Sect) = \mathbb{A}^{1|1}(\mathbb{A}^0) = R$. The $R$-algebra structure may be viewed as coming from the counit map $R = \const \circ \Gamma(\Sect) \to \Sect$. In this section we develop the internal theory of $\Sect$-modules in order to study the invertible $\Sect$-modules. An $\Sect$-module will be defined in the usual internal manner: an $\Sect$-module is a $\mathbb{Z}/2\mathbb{Z}$-graded superalgebraic cartesian abelian group $M$ with an action by $\Sect$.
Equivalently, $M$ is a superalgebraic cartesian set~such that $M(\mathbb{A}^{n|q})$ is an $\Sect(\mathbb{A}^{n|q})$-module for each $\mathbb{A}^{n|q} \in \mathsf{sA}} %{\text{s}\mathcal{A}$. Here, since $\Sect$ is a {\em super}commutative ring, we mean `module' in the $\mathbb{Z}/2\mathbb{Z}$-graded sense. The category $\Mod_\Sect$ of $\Sect$-modules is a symmetric monoidal abelian category with tensor product $\otimes_{\Sect}$ given pointwise: \begin{equation*} (M \otimes_{\Sect} N)(\mathbb{A}^{n|q}) := M(\mathbb{A}^{n|q}) \otimes_{\Sect(\mathbb{A}^{n|q})} N(\mathbb{A}^{n|q}) \end{equation*} for $\mathbb{A}^{n|q} \in \mathsf{sA}} %{\text{s}\mathcal{A}$. The forgetful functor from $\Mod_\Sect$ to $\sCart$ has a left-adjoint which takes the superalgebraic cartesian set~$X$ to the free $\Sect$-module $F_\Sect(X)$. The value of $F_\Sect(X)$ on $\mathbb{A}^{n|q}\in \mathsf{sA}} %{\text{s}\mathcal{A}$ is given by $F_{\Sect(\mathbb{A}^{n|q})}(X(\mathbb{A}^{n|q}))$, the free $\Sect(\mathbb{A}^{n|q})$-module on the set $X(\mathbb{A}^{n|q})$. In addition $\Mod_\Sect$ has an enrichment in $\sCart$. A map $\mathbb{A}^{n|q} \to \uHom_{\Sect}(M, N)$ is defined via \begin{align*} \sCart(\mathbb{A}^{n|q}, \uHom_{\Sect}(M, N)) \cong \Hom_{\Sect}(F_\Sect(\mathbb{A}^{n|q}) \otimes_{\Sect}M, N). \end{align*} This makes $\Mod_\Sect$ into a category enriched in $\sCart$. In fact this enrichment extends to one in the symmetric monoidal category of $\Sect$-modules; $\Mod_\Sect$ is a closed symmetric monoidal category. To distinguish between the ordinary category of $\Sect$-modules and the $\Sect$-linear category (enriched in $\Sect$-modules) we will denote the former by $\Mod_\Sect$ and the latter by $\MMod_\Sect$. Let $\Mod_R$ denote the ordinary category of $R$-modules (in sets). This is a closed symmetric monoidal category and thus an $R$-linear category. Since $\Gamma(\Sect) = R$, we obtain an adjunction: \begin{equation*} \Sect \otimes_R (-): \Mod_R \rightleftarrows \Mod_\Sect: \Gamma, \end{equation*} where the right-adjoint simply applies $\Gamma$ to both the module and ring structure (it is evaluation at $\mathbb{A}^0 \in \mathsf{sA}} %{\text{s}\mathcal{A}$). The left-adjoint is given by first viewing a set theoretical $R$-module as a constant (discrete) superalgebraic cartesian set~and then tensoring up to obtain an $\Sect$-module. As expected, this is a monoidal adjunction with respect to the two symmetric monoidal structures $\otimes_R$ and $\otimes_\Sect$, and moreover $\Gamma \circ (\Sect \otimes_R (-)) \cong id$ is the identity functor. We can do slightly better. Since the above adjunction is monoidal, the functor $\Sect \otimes_R(-)$ may be used to enhance the enrichment of $\Mod_R$ in itself into an enrichment in $\Mod_\Sect$. Thus for ordinary $R$-modules $M$ and $N$, there exists an $\Sect$-module (hence a superalgebraic cartesian set) of homomorphisms between them, given by: \begin{equation*} \Sect \otimes_R \Hom_R(M,N). \end{equation*} We will denote this new $\Mod_\Sect$-enriched category as $\MMod_R$. It has the same objects as $\Mod_R$. The above adjunction now gives rise to a $\Mod_\Sect$-enriched functor: \begin{equation*} \Sect \otimes_R (-): \MMod_R \to \MMod_\Sect, \end{equation*} which sends an $R$-module $M$ to the $\Sect$-module $\Sect \otimes_R \const(M)$. Note that the functor $\Gamma$ will not automatically be an enriched functor. \begin{lemma}\label{lem:fully-faithful-enriched-inclusion} Let $\MMod_R^\textrm{f.g. 
proj}$ denote the full subcategory of the $\Mod_\Sect$-enriched category $\MMod_R$ consisting of those $R$-modules which are finitely generated and projective. Then the restricted $\Mod_\Sect$-enriched functor \begin{equation*} \Sect \otimes_R (-):\MMod_R^\textrm{f.g. proj} \to \MMod_\Sect \end{equation*} is fully-faithful (in the enriched sense). \end{lemma} \begin{proof} We must show that the canonical map of $\Sect$-modules \begin{equation*} \Sect \otimes_R \Hom_R(M,N) \to \uHom_\Sect( \Sect \otimes_R M, \Sect \otimes_R N) \end{equation*} is an isomorphism if $M$ and $N$ are finitely generated and projective. Note that this is certainly the case if both $M$ and $N$ are finitely generated free $R$-modules. The modules $M= M_0$ and $N=N_0$ are finitely generated and projective if and only if there exist $R$-modules $M_1$ and $N_1$ such that both $M_0 \oplus M_1$ and $N_0 \oplus N_1$ are finitely generated free $R$-modules. Thus the sum of the canonical maps (which is the canonical map of the sums): \begin{equation*} \bigoplus_{i,j=0,1} \Sect \otimes_R \Hom_R(M_i,N_j) \to \bigoplus_{i,j=0,1} \uHom_\Sect( \Sect \otimes_R M_i, \Sect \otimes_R N_j) \end{equation*} is an isomorphism. The lemma now follows from the observation that in an abelian category a finite collection of maps is a collection of isomorphisms if and only if the direct sum of the collection is an isomorphism. \end{proof} Let $\Pic_\Sect$ be the {\em Picard category} of $\Sect$. It is the full subcategory of $\Mod_\Sect$ consisting of the {\em invertible} $\Sect$-modules, those $\Sect$-modules $M$ such that there exists an $\Sect$-module $M'$ with the property that $M \otimes_\Sect M' \cong M' \otimes_\Sect M \cong \Sect$. Let $\PPic_\Sect$ denote the corresponding $\Mod_\Sect$-enriched subcategory. Similarly $\Pic_R$ will denote the category of invertible $R$-modules and $\PPic_R$ the corresponding $\Mod_\Sect$-enriched category. Since $\Sect \otimes_R (-)$ is a monoidal functor, it sends invertible objects to invertible objects. Hence we have an induced $\Mod_\Sect$-enriched functor: \begin{equation*} \Sect \otimes_R (-): \PPic_R \to \PPic_\Sect. \end{equation*} The following theorem is the main result of this section. \begin{thm}\label{thm:Picard-equivalence} The functor $\Sect \otimes_R (-): \PPic_R \to \PPic_\Sect$ induces an equivalence of $\Mod_\Sect$-enriched symmetric monoidal categories. \end{thm} \noindent We will prove this theorem after a few lemmas. \begin{lemma}\label{lem:Objects-of-pic-fgproj} The objects of $\Pic_R$ and $\Pic_\Sect$ are finitely generated and projective. \end{lemma} \begin{proof} This lemma is classical. We will only need the finite generation in the case of $R$-modules. A categorical proof of this notes that for an object $M \in \Pic_R$ the functor $M\otimes_R (-)$ is an equivalence of categories. Any equivalence of categories preserves the finitely generated projective objects (these are characterized by categorical properties) and moreover the trivial $R$-module $R$ is a finitely generated projective module. Hence the image of $R$ under $M \otimes_R (-)$, that is to say the module $M$, is also a finitely generated projective module. \end{proof} \begin{cor}\label{cor:pic_fullyfaithful} The functor $\Sect \otimes_R (-): \PPic_R \to \PPic_\Sect$ is fully-faithful. \qed \end{cor} \begin{lemma}\label{lem:gen-ident-dection} Let $f: \Sect \to \Sect$ be any $\Sect$-module map. Assume that $\Gamma(f): \Gamma(\Sect) \to \Gamma(\Sect)$ is the identity map, then $f$ is the identity map. 
\end{lemma} \begin{proof} For each $S \in \mathsf{sA}} %{\text{s}\mathcal{A}$ we have a commuting diagram \begin{center} \begin{tikzpicture}[thick] \node (LT) at (0, 1.5) {$\Gamma(\Sect) = R$}; \node (LB) at (0, 0) {$\Sect (S)$}; \node (RT) at (4, 1.5) {$\Gamma(\Sect) = R$}; \node (RB) at (4, 0) {${\Sect} (S)$}; \draw [->] (LT) -- node [left] {$$} (LB); \draw [->] (LT) -- node [above] {$id$} (RT); \draw [->] (RT) -- node [right] {$$} (RB); \draw [->] (LB) -- node [below] {$f_S$} (RB); \end{tikzpicture} \end{center} and thus $f_S$ sends $1 \in \Sect(S)$ to $1 \in \Sect(S)$. As this is a generator of $\Sect(S)$ as an $\Sect(S)$-module it follows that $f_S$ is the identity for all $S \in \mathsf{sA}} %{\text{s}\mathcal{A}$. \end{proof} \begin{proof}[Proof of Thm.~\ref{thm:Picard-equivalence}] The functor $\Sect \otimes_R (-): \PPic_R \to \PPic_\Sect$ is monoidal and, by Corollary~\ref{cor:pic_fullyfaithful}, fully-faithful. It remains to show essential surjectivity, i.e. that every invertible module $M \in \Pic_\Sect$ is of the form $\Sect \otimes_R L$ for some invertible $R$-module $L$. First note that it is sufficient to prove this under the assumption that $\Gamma(M) \cong R$ is the trivial invertible $R$-module. If $M$ is a general $\Sect$-module we may instead consider $N = M \otimes_\Sect ( \Sect \otimes_R \Gamma(M^{-1}))$, if $N$ is in the essential image then so is $M$. Thus without loss of generality assume we have chosen an isomorphism of $R$-modules $\Gamma(M) \cong R$. Let $M'$ be an inverse to $M$. We may also choose an isomorphism $\Gamma(M') \cong R$. Next we will make a few observations. First, from the adjunction $\Sect \otimes_R (-) \dashv \Gamma$, we have canonical $\Sect$-module homomorphisms \begin{align*} \Sect \cong \Sect \otimes_R \Gamma(M) &\to M \\ \Sect \cong \Sect \otimes_R \Gamma(M') &\to M'. \end{align*} Applying $\Gamma$ to either of these yields the identity map of $R$. Next observe that we have a canonical map of $R$-modules in $\sCart$, $R = \Gamma(M) \to M$ and $R = \Gamma(M') \to M'$, where the targets of these are viewed as $R$-modules via the inclusion $R \to \Sect$. Again these reduce to the identity maps after applying $\Gamma$. Third, we may tensor together the maps \[ R \lra{} M' \text{ and } M \lra{\text{id}} M, \text{ over } R \lra{} \Sect \] to get a map of $\Sect$-modules \[ M \otimes_{R} R \lra{} M \otimes_{\Sect} M' \cong \Sect. \] Precomposing with the map $\Sect \lra{} M$ gives a map of $\Sect$-modules \[ \Sect \lra{} M \lra{} \Sect. \] Since this map reduces to the identity map after applying $\Gamma$, Lemma~\ref{lem:gen-ident-dection} implies that this is the identity map of $\Sect$-modules. In particular the map $\Sect \to M$ is a monomorphism (injective when evaluated on each $S \in \mathsf{sA}} %{\text{s}\mathcal{A}$). Tensoring with $M'$ gives a new map \begin{equation*} M' \to M \otimes_\Sect M' \cong \Sect \end{equation*} which remains a monomorphism since $M'$ is projective (and hence flat). Again this map reduces to the identity after applying $\Gamma$. By symmetry, there exists a monomorphism $M \to \Sect$ of $\Sect$-modules with the same property (it reduces to the identity after applying $\Gamma$). Finally we observe that since the $\Sect$-module map $M \to \Sect$ is the identity after applying $\Gamma$, it follows that for each $S \in \mathsf{sA}} %{\text{s}\mathcal{A}$ the component $M(S) \to \Sect(S)$ contains $1 \in \Sect(S)$ in its image. 
Since $1$ is a generator of $\Sect(S)$ as an $\Sect(S)$-module, it follows that $M(S) \to \Sect(S)$ is a surjective map of $\Sect(S)$-modules. Consequently the map $M \to \Sect$ is both a monomorphism and an epimorphism, hence an isomorphism of $\Sect$-modules. In particular $M \cong \Sect \otimes_R R$ is in the image of $\PPic_R$. \end{proof} Since sets may be regarded as superalgebraic cartesian sets~(via the functor $\const$), we may try to regard the $\Mod_\Sect$-enriched category $\MMod_\Sect$ as a category internal to superalgebraic cartesian sets. However $\Mod_\Sect$ has a large set (or class) of objects, and so is not technically a superalgebraic cartesian set. This problem can be avoided for $\PPic_\Sect$ since it is essentially small. We will tacitly assume that we have chosen a small set of representative invertible $R$-modules to serve as the set of objects of $\PPic_\Sect$. In particular we will regard $\PPic_\Sect$ as a symmetric monoidal category internal to superalgebraic cartesian sets. \section{Superalgebraic Cartesian Quantum Field Theories} \label{sec:SQFT2} We are now in a position to define supersymmetric $0|1$-dimensional quantum field theories over an arbitrary superalgebraic cartesian set. For each geometry $\mathbb{M}$ (discussed in the last section) and each superalgebraic cartesian set~$X$, we construct $\Bord_{(\mathbb{M}, X)}^{0|1}$, the symmetric monoidal 0-category (internal to superalgebraic cartesian sets) consisting of $0|1$-dimensional bordisms equipped with $\mathbb{M}$-structures and maps to $X$. As a symmetric monoidal 0-category is just a commutative monoid, $\Bord_{(\mathbb{M}, X)}^{0|1}$ is simply a commutative monoid object in superalgebraic cartesian sets. We will describe it in more detail in just a moment. The target of a field theory is another symmetric monoidal category, which in this 0-dimensional case means another commutative monoid (internal to superalgebraic cartesian sets). The target of a {\em quantum} field theory (as opposed to a classical field theory or other variety of field theory) should have some further mechanism implementing the physical concept of {\em superposition}. This can be accomplished by requiring the target category to have not just a multiplicative (i.e. monoidal) structure, but to also have an additive structure. In the classical context of the Atiyah-Segal axioms this is the direct sum operation on the target category of vector spaces. In the case at hand it means that our target should be a ring (or at least a rig). A natural choice is the ring $\Sect = \mathbb{A}^{1|1}$. A supersymmetric $0|1$-dimensional $\mathbb{M}$-quantum field theory over a superalgebraic cartesian set~$X$ is then defined to be a homomorphism \begin{equation*} Z: \Bord_{(\mathbb{M}, X)}^{0|1} \to \Sect \end{equation*} of commutative monoids in superalgebraic cartesian sets. As usual, the easiest way to describe $\Bord_{(\mathbb{M}, X)}^{0|1}$ and the homomorphism $Z$ is via the formalism of $S$-points. For each representable $\mathbb{A}^{n|q} \in \mathsf{sA}} %{\text{s}\mathcal{A}$ and each map $f:\mathbb{A}^{n|q} \to \Bord_{(\mathbb{M}, X)}^{0|1}$, the field theory $Z$ associates a map $Z(f): \mathbb{A}^{n|q} \to \Sect$. That is to say we obtain a function $Z(f) \in \Sect(\mathbb{A}^{n|q})$. Maps $\mathbb{A}^{n|q} \to \Bord_{(\mathbb{M}, X)}^{0|1}$ are obtained by considering $\mathbb{A}^{n|q}$-families of bordisms.
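Concretely, and under the convention (as in the Hohnhold-Kreck-Stolz-Teichner setting \cite{MR2763085}) that the relevant monoid structure on the target $\Sect$ is the multiplicative one, being a homomorphism of commutative monoids simply says that the empty family goes to the constant function $1$ and that disjoint unions of families go to products of functions: \begin{equation*} Z(\emptyset) = 1 \in \Sect(\mathbb{A}^{n|q}), \qquad Z(\Sigma \sqcup \Sigma') = Z(\Sigma)\, Z(\Sigma') \in \Sect(\mathbb{A}^{n|q}) \end{equation*} for $\mathbb{A}^{n|q}$-families of bordisms $\Sigma$ and $\Sigma'$. This is only an unpacking of the definition above; the monoid structure on $\Bord_{(\mathbb{M}, X)}^{0|1}$ itself (disjoint union) is described below.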
Each of the structures we place on our bordisms, the $\mathbb{M}$-structure, the map to $X$, the supercommutative ring of functions, indeed even the enrichment in superalgebraic cartesian sets, should be thought of as enhancements we give to an underlying topological 0-bordism. The category $\Bord_{(\mathbb{M}, X)}^{0|1}$ should have a forgetful functor to $\Bord^0$. Hence every $0|1$-dimensional bordism is equivalent to a finite disjoint union of superpoints, and we will say that this bordism is equipped with an $\mathbb{M}$-structure if each component superpoint has an $\mathbb{M}$-structure. Two such bordisms will be equivalent if they are related by $\mathbb{M}$-isometries, where an $\mathbb{M}$-isometry between bordisms is a permutation followed by $\mathbb{M}$-isometries on each factor. This definition ensures there is a forgetful functor to $\Bord^0$. \begin{definition} The superalgebraic cartesian commutative monoid $\Bord_{(\mathbb{M}, X)}^{0|1}$ associates to $\mathbb{A}^{n|q} \in \mathsf{sA}} %{\text{s}\mathcal{A}$ the set of equivalence classes of $\mathbb{A}^{n|q}$-families of $0|1$-dimensional bordisms equipped with $\mathbb{M}$-structure, with the equivalence relation of $\mathbb{M}$-isometry. \end{definition} Hence an $\mathbb{A}^{n|q}$-family of such bordisms induces a map $\mathbb{A}^{n|q} \to \Bord_{(\mathbb{M}, X)}^{0|1}$. Thus a field theory $Z$ will associate to each such family \begin{equation*} f:\mathbb{A}^{n|q} \times Y^{0|1} \to X \end{equation*} a function $Z(f) \in \Sect(\mathbb{A}^{n|q})$ (where $Y^{0|1} \cong \coprod_k \mathbb{A}^{0|1}$ for some $k$, and is equipped with an $\mathbb{M}$-structure). If two $\mathbb{A}^{n|q}$-families of bordisms are related by an $\mathbb{A}^{n|q}$-family of $\mathbb{M}$-isometries, then the associated functions will be the same. This gives rise to the following explicit description of $\Bord_{(\mathbb{M}, X)}^{0|1}$: \begin{prop} \label{prop:Bord_as_free_comm_monoid} The superalgebraic cartesian set~$\Bord_{(\mathbb{M}, X)}^{0|1}$ is given by the quotient \begin{align*} \Bord_{(\mathbb{M}, X)}^{0|1} &\cong \coprod_{k \in \mathbb{N}} \left( \usCart(\coprod_k\mathbb{A}^{0|1}, X) / \mathbb{M} \wr \Sigma_k \right) \\ & \cong \coprod_{k \in \mathbb{N}} \left( \prod_k [\usCart(\mathbb{A}^{0|1}, X)/ \mathbb{M}] / \Sigma_k \right). \end{align*} \end{prop} The commutative monoid structure on $\Bord_{(\mathbb{M}, X)}^{0|1}$ is induced by the disjoint union operation on bordisms, and combining this with the above explicit description we obtain: \begin{cor} $\Bord_{(\mathbb{M}, X)}^{0|1}$ is the free commutative monoid generated by the superalgebraic cartesian set \begin{equation*} \usCart(\mathbb{A}^{0|1}, X)/ \mathbb{M}. \end{equation*} \end{cor} \begin{cor} The supersymmetric $0|1$-dimensional $\mathbb{M}$-quantum field theories over a superalgebraic cartesian set~$X$ (with values in $\Sect$) are in natural bijection with the set of $\mathbb{M}$-invariant functions on $\usCart(\mathbb{A}^{0|1}, X)$. \end{cor} Note that this implies that the field theories naturally have the structure of a commutative ring. Using the description from this last corollary and our previous calculations we may now identify the supersymmetric $0|1$-dimensional $\mathbb{M}$-quantum field theories over a simplicial set. \begin{thm} \label{thm:field-theories} Let $X$ be a finite dimensional simplicial set. Then the set of supersymmetric $0|1$-dimensional $\mathbb{M}$-quantum field theories over $i_! 
X$ is naturally isomorphic (as supercommutative algebras with a coaction of $\mathcal{O}(N(\mathbb{M})/\mathbb{M})$) to... \begin{enumerate} \item $\mathbb{M} = \uEnd(\mathbb{A}^{0|1}) \cong \mathbb{A}^{0|1} \rtimes \mathbb{A}^1$ {(pre-topological)} \begin{equation*} \pTFT{}(X) \cong \Omega^0_{R,\text{cl}}(X) \end{equation*} closed, degree zero polynomial differential forms on $X$ over $R$. \item $\mathbb{M} = \uAut(\mathbb{A}^{0|1}) \cong \mathbb{A}^{0|1} \rtimes \mathbb{G}_m$ (topological) \begin{equation*} \TFT{}(X) \cong \Omega^0_{R,\text{cl}}(X) \end{equation*} closed, degree zero polynomial differential forms on $X$ over $R$. \item $\mathbb{M} = \mathbb{A}^{0|1} \rtimes \mathbb{Z}/2\mathbb{Z}$ (Euclidean) \begin{equation*} \EFT{}(X) \cong \Omega^{\text{ev}}_{R,\text{cl}}(X) \end{equation*} closed polynomial differential forms on $X$ over $R$ of even degree. \item $\mathbb{M} = \mathbb{A}^{0|1} \times 1$ (oriented Euclidean) \begin{equation*} \EFT{}_{\text{or}}{}(X) \cong \Omega^{*}_{R,\text{cl}}(X) \end{equation*} closed polynomial differential forms on $X$ over $R$ of arbitrary degree. \item $\mathbb{M} = 1$ (fully-rigid) \begin{equation*} \QFT{}_{\text{f-r}}{}(X) \cong \Omega^{*}_{R}(X) \end{equation*} all polynomial differential forms on $X$ over $R$. \end{enumerate} \end{thm} \begin{remark} When $X$ is an infinite dimensional simplicial set we may write it as the colimit over finite dimensional skeleta and then the theorem still holds as long as $\Omega^{\text{ev}}_{R,\text{cl}}(X)$ means the product over even closed polynomial forms instead of the sum (and likewise for the last two cases). \end{remark} \begin{proof} Recall that Proposition \ref{cdgastructure} gives an explicit description of the coaction of $\mathcal{O}(\underline \End(\mathbb{A}^{0|1}))$ on the supercommutative algebra of rational differential forms $u\Omega^{*}_{R}(X)$. In all of the cases above we compute the coinvariants for the respective coaction. For all of the following, let $a \in u\Omega^{*}_{R}(X)$. \begin{enumerate} \item Let $a$ be a $k$-form, then \[ a \mapsto ax^k+(da)x^k\epsilon. \] To be coinvariant $k=0$ and $da =0$. \item This follows from 1. \item Let $a$ be a $k$-form, then \[ a \mapsto a (1,-1)^k + (da)(1,-1)^k\epsilon. \] To be coinvariant, $k \in 2\mathbb{Z}$ and $da =0 $. \item Let $a$ be any form, then \[ a \mapsto a+(da)\epsilon. \] To be coinvariant, $da=0$. \item This is Corollary \ref{maincor}. \end{enumerate} \end{proof} \begin{comment} \begin{cor} There is a bijection \[ \mathcal{O}(\underline{\sCart}(\mathbb{R}^{0|1},X)/\mkern-3mu / \mathbb{A}^{0|1}\rtimes \mathbb{A}^1) \cong \Omega^{0}_{\text{cl}}(X). \] \end{cor} \begin{proof} Compute the coinvariants. For $s \in \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1},X))$ a $k$-form, \[ s \mapsto sx^k+(ds)x^k\epsilon. \] To be a coinvariant $k=0$ and $ds =0$. \end{proof} \begin{cor} There is a bijection \[ \mathcal{O}(\underline{\sCart}(\mathbb{R}^{0|1},X)/\mkern-3mu / \mathbb{A}^{0|1}\rtimes \mathbb{Z}/2) \cong \Omega^{\text{ev}}_{\text{cl}}(X). \] \end{cor} \begin{proof} As above, compute the coinvariants. Let $s$ be a $k$-form, then \[ s \mapsto s (1,-1)^k + (ds)(1,-1)^k\epsilon. \] To be coinvariant, $k \in 2\mathbb{Z}$ and $ds =0 $. \end{proof} \end{comment} \section{The Super Point over $X$} \section{Tiny Objects And Internal Homs} \label{sec:tiny} In this section we explore some of the properties of the internal hom functor from Example~\ref{ex:innerhom}. 
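As a warm-up to the lemma below (this is only a coordinate-level unwinding of the definitions), consider the simplest case. An $\mathbb{A}^{m|p}$-point of $\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{0|1})$ is a map $\mathbb{A}^{m|p}\times \mathbb{A}^{0|1} \to \mathbb{A}^{0|1}$, that is, an odd element of $\mathcal{O}(\mathbb{A}^{m|p}\times \mathbb{A}^{0|1}) = \mathcal{O}(\mathbb{A}^{m|p})[\delta]$. Every such element can be written uniquely as \begin{equation*} \psi + a\delta, \qquad \psi \in \mathcal{O}(\mathbb{A}^{m|p})_{\text{odd}}, \quad a \in \mathcal{O}(\mathbb{A}^{m|p})_{\text{ev}}, \end{equation*} so these points are in natural bijection with pairs consisting of an even and an odd function on $\mathbb{A}^{m|p}$, i.e. with $\mathbb{A}^{1|1}(\mathbb{A}^{m|p})$. This is the case $n=0$, $q=1$ of the following lemma.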
\begin{lemma} \label{lma:affine} There is a natural isomorphism \[ \underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{n|q}) \cong \mathbb{A}^{n+q|n+q}. \] \end{lemma} \begin{proof} Because $\mathbb{A}^{n|q} \cong (\mathbb{A}^1)^{\times n}\times(\mathbb{A}^{0|1})^{q}$, we need only check this on $\mathbb{A}^1$ and $\mathbb{A}^{0|1}$. There are two functors $(-)_{\text{ev}},(-)_{\text{odd}}:\sAlg \lra{} \Set$ given by taking the homogeneous parts. These functors are representable and in fact represented by $\mathcal{O}(\mathbb{A}^1)$ and $\mathcal{O}(\mathbb{A}^{0|1})$ respectively. Now we have \begin{align*} \sCart(\mathbb{A}^{m|p},\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{1})) &\cong \sCart(\mathbb{A}^{m|p}\times \mathbb{A}^{0|1}, \mathbb{A}^1) \\ &\cong \mathcal{O}(\mathbb{A}^{m|p}\times \mathbb{A}^{0|1})_{\text{ev}} \\ &\cong \mathcal{O}(\mathbb{A}^{m|p}) = \mathbb{A}^{1|1}(\mathbb{A}^{m|p}) \end{align*} and \begin{align*} \sCart(\mathbb{A}^{m|p},\underline{\sCart}(\mathbb{A}^{0|1},\mathbb{A}^{0|1})) &\cong \sCart(\mathbb{A}^{m|p}\times \mathbb{A}^{0|1}, \mathbb{A}^{0|1}) \\ &\cong \mathcal{O}(\mathbb{A}^{m|p}\times \mathbb{A}^{0|1})_{\text{odd}} \\& \cong \mathcal{O}(\mathbb{A}^{m|p}) = \mathbb{A}^{1|1}(\mathbb{A}^{m|p}). \qedhere \end{align*} \end{proof} \begin{definition} Let $D$ be a category. An object $x \in D$ is called {\em compact} if $D(x,-): D \to \Set$ commutes with filtered colimits and called {\em tiny} if $D(x,-): D \to \Set$ commutes with \emph{all} small colimits. Let $D$ be a cartesian closed category. We call an object $x \in D$ {\em cartesian tiny} if $\underline{D}(x,-): D \to D$ commutes with all small colimits, where $\underline{D}(x,-)$ is the internal hom functor. \end{definition} In a presheaf category the tiny objects are precisely those presheaves which are retracts of representables \cite[Prop.2]{MR850528}. \begin{prop} \label{tiny} If $C$ is a (small) category with finite products, then every tiny object of $\Pre(C)$ is a cartesian tiny object. \end{prop} \begin{proof} We will show this for representable presheaves. Let $a$ and $b$ be objects of $C$ (also viewed as representable objects of $\Pre(C)$) and let $I \lra{} \Pre(C)$ be a small diagram in $\Pre(C)$ mapping $i \in I$ to $x_i$. Colimits in $\Pre(C)$ are computed objectwise, and so we can show that the internal hom out of $a$ commutes with arbitrary (small) colimits by evaluation on an object $b$: \begin{align*} \underline{\Pre(C)}(a,\Colim{I} \text{ } x_i)(b) &\cong \Pre(C)(a\times b,\Colim{I} \text{ } x_i) \\ &\cong \Colim{I} \text{ } x_i(a\times b) \\ &\cong \Colim{I} \Pre(C)(a \times b,x_i) \\ &\cong \Colim{I} \text{ } \underline{\Pre(C)}(a, x_i)(b). \end{align*} The second isomorphism uses the fact that the object $a \times b \in C$ is representable. \end{proof} \begin{cor-def}\label{cor:right-adjoint-to-innerhom} The internal hom $\usCart(\mathbb{A}^{0|1}, -)$ functor admits a further right adjoint, which we denote $\Omega_{(-)}$. \qed \end{cor-def} For any superalgebraic cartesian set~$Y$, the superalgebraic cartesian set~$\Omega_Y$ has an elementary description: \begin{equation*} \sCart(\mathbb{A}^{n|q}, \Omega_Y) \cong \sCart( \usCart(\mathbb{A}^{0|1}, \mathbb{A}^{n|q}), Y) \cong Y( \mathbb{A}^{n+q| n+q}). \end{equation*} In general we will denote the mapping set $\sCart(X, \Omega_Y) =: \Omega(X ; Y)$. The case $Y = \Sect$ is especially important for this paper and in this case we will drop the $\Sect$ from our notation; $\Omega := \Omega_\Sect$ (see also Example~\ref{ex:Omega}). 
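For orientation (and anticipating the identification with K\"ahler forms mentioned in Example~\ref{ex:Omega}), the smallest nontrivial case is completely explicit: \begin{equation*} \Omega(\mathbb{A}^{1|0}) = \sCart(\mathbb{A}^{1|0},\Omega) \cong \Sect(\mathbb{A}^{1|1}) \cong R[x] \oplus R[x]\epsilon, \end{equation*} which, with $\epsilon$ playing the role of $dx$, is the polynomial de Rham complex $R[x]\oplus R[x]\,dx$ of the affine line. We record this only as a sanity check on the formula above; the general identification is taken up in Section~\ref{sec:endos}.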
Another example that will be important later is $\Omega(X ; M)$ for $M$ an invertible $\Sect$-module. Recall that $M$ may be viewed as an invertible $R$-module due to Theorem~\ref{thm:Picard-equivalence}. In this case we have the isomorphism \[ \Omega(X ; M) \cong \Omega(X;\Sect) \otimes_R M. \]
To prove this it suffices to check it on representables, for which it is clear. \begin{cor} \label{cor:functions_on_superpoints} For any superalgebraic cartesian set~$X$ there is an isomorphism of supercommutative rings \[ \mathcal{O}(\underline{\sCart}(\mathbb{A}^{0|1}, X)) \cong \sCart(X, \Omega). \qed \] \end{cor} \section{Concordance} \label{sec:concordance} Theorem \ref{thm:field-theories} and Table \ref{table:twisted_Field-Theories} show that the twisted superalgebraic cartesian quantum field theories with a geometry over a simplicial set $X$ correspond to important subsets of Sullivan's rational differential forms on $X$. To recover the rational cohomology groups of $X$ from the field theories we study a notion of equivalence of field theories called concordance. In this algebraic setting we uncover three notions of concordance. We prove that they are all equivalent. In each case two closed differential forms are concordant if and only if they are cohomologous. \begin{comment} We have shown that the $0|1$-dimensional supersymmetric (algebraic Euclidean) field theories over a space $X$ are in correspondence with Sullivan's rational differential forms on $X$. To recover the rational cohomology groups of $X$ from the field theories we must understand when two field theories are concordant. In this algebraic setting we discover three notions of concordance and prove that, in each case, two closed differential forms are concordant if and only if they are cohomologous. \end{comment} Given a simplicial set $X$, we may consider the two inclusions \[ f_0,f_1:X \lra{} X \times \Delta^1 \] induced by the coface maps of $\Delta^1$. Now, using the canonical map \[ i_!(X \times \Delta^1) \lra{} i_!X \times \mathbb{A}^1 \] and the canonical map $\Omega^*(i_!X) \otimes \Omega^*(\mathbb{A}^1) \lra{} \Omega^*(i_!X \times \mathbb{A}^1)$ we build the commutative diagram: \[ \xymatrix{ & \Omega^*(i_!X) \otimes \Omega^*(\mathbb{A}^1) \ar[d] \ar@/^2pc/[dddr] \ar@/_2pc/[dddl] & \\ & \Omega^*(i_!X \times \mathbb{A}^1) \ar[d] & \\ & \Omega^{*}(i_!(X \times \Delta^1)) \ar[dr]^{f_0} \ar[dl]_{f_1} & \\ \Omega^{*}(i_!X) && \Omega^{*}(i_!X).} \] Note that the downward arrows need not be isomorphisms. We use this diagram to describe the three notions of concordance for two differential forms $\omega_0, \omega_1 \in \Omega^{*}_{\text{cl}}(i_!X)$. They fit nicely into a table: \begin{center} \begin{tabular}{|l|l|} \hline Cohomologous & $\exists \alpha, \text{ }\omega_0 - \omega_1 = d\alpha$ \\ \hline Cochain Concordance & $\exists \omega \in \left(\Omega^{*}(i_!X) \otimes \Omega^{*}(\mathbb{A}^1)\right)_{\text{cl}}, \text{ } f_j \omega = \omega_j$\\ \hline Algebraic Concordance & $\exists \omega \in \Omega^{*}_{\text{cl}}(i_!X \times \mathbb{A}^1), \text{ } f_j \omega = \omega_j$ \\ \hline Simplicial Concordance & $\exists \omega \in \Omega^{*}_{\text{cl}}(i_!(X \times \Delta^1)), \text{ } f_j \omega = \omega_j$ \\ \hline \end{tabular} \end{center} It is immediate that Cochain Concordance implies Algebraic Concordance implies Simplicial Concordance. \begin{prop} Cohomologous implies Cochain Concordance. \end{prop} \begin{proof} The element $\omega_1 t + \omega_0(1-t) + \alpha dt$ is a closed element of $\Omega^{*}(i_!X) \otimes \Omega^{*}(\mathbb{A}^1)$ satisfying $f_j \omega = \omega_j$, so it does the job. \end{proof} \begin{prop} Let $R$ be a $\mathbb{Q}$-algebra. Then Simplicial Concordance implies Cohomologous.
\end{prop} \begin{proof} It suffices to take $\omega \in \Omega^{*}(i_!(X \times \Delta^1))$ such that $f_0 \omega = 0$ and $f_1 \omega = \omega_1$. We must show that there exists $\alpha$ such that $d\alpha = \omega_1$. However, because $X \times \Delta^1 \simeq X$, by Sullivan's theorem $f_0$ and $f_1$ are quasi-isomorphisms. Thus the cohomology class of $\omega_1$ equals the cohomology class of $0$. \end{proof} The differential twists provide an alternative to the traditional notion of concordance. In the case of the pretopological geometry we can form \[ \pTFT{}^{n-1}_{\text{diff}}(X) \cong \Omega^{n-1}(X), \] which contains the information of the differential. There is a map \[ \pTFT{}^{n-1}_{\text{diff}}(X) \lra{} \pTFT{}^{n}(X) \cong \Omega^{n}_{\text{cl}}(X) \] that has the effect of taking $\omega$ to $d\omega$. The analogous construction can be made for Euclidean field theories as well. Thus we may avoid concordance entirely; the differential itself can be constructed from field theories alone. We use square brackets to denote the set of twisted field theories with a given geometry taken up to concordance. Thus $\pTFT{}^{n}[X]$ denotes the degree $n$ pretopological field theories up to concordance. \begin{thm} Let $R$ be a $\mathbb{Q}$-algebra, $HR$ be cohomology with coefficients in $R$, and $X$ be a simplicial set. There are natural isomorphisms \[ \pTFT{}^n[X] \cong \TFT{}^n[X] \cong HR^n(X) \] and \[ \EFT{}^n[X] \cong PHR^n(X), \] where $PHR$ is periodic cohomology with coefficients in $R$. \end{thm} \begin{remark} There are other choices of twists for which a similar theorem is true. For instance the ``twisted twists" can be used to produce a similar statement. The reader is invited to see what variations on the usual notions of cohomology can be constructed from the twists and geometries. \end{remark} \begin{remark} Because periodic cohomology is defined using the \emph{product}, $X$ may be taken to be an infinite dimensional simplicial set. \end{remark} \subsection*[SUSY field theories and de Rham cohomology]{Review of the literature: supersymmetric field theories and de Rham cohomology} In this section we give a rapid summary of the work of Hohnhold-Kreck-Stolz-Teichner \cite{MR2763085} relating smooth differential forms to $0|1$-dimensional supersymmetric quantum field theories. This material serves as a conceptual blueprint for the theory developed in later sections. The Atiyah-Segal axioms define quantum field theories as symmetric monoidal functors from a bordism category, $\Bord$, to a target symmetric monoidal category $\mathcal{V}$, such as the category of Hilbert spaces. This can be generalized in many ways. One important way is that the theory can be made to satisfy an enhanced form of locality by replacing ordinary 1-categories with $d$-categories. Thus $\Bord^d$ is to be a symmetric monoidal $d$-category. The objects are to be 0-dimensional bordisms, the morphisms are 1-dimensional bordisms, the 2-morphisms are 2-dimensional bordisms between bordisms, etc. all the way up through dimension $d$. A fully local quantum field theory is then a symmetric monoidal functor from this symmetric monoidal $d$-category $\Bord^d$ to another target symmetric monoidal $d$-category $\mathcal{V}$. The work of HKST \cite{MR2763085} considers a degenerate case where $d=0$. Thus the bordism $n$-category becomes a symmetric monoidal 0-category. That is to say it becomes a commutative monoid. 
The target category $\mathcal{V}$ will also be replaced by a commutative monoid, and field theories become commutative monoid homomorphisms. Furthermore, these theories are supersymmetric. We refer the reader to \cite{MR1701597} for the necessary background material on supermanifolds. In this case the bordisms are (closed) compact\footnote{A supermanifold will be considered compact if the underlying reduced manifold is compact.} supermanifolds of dimension~$0|1$. Each such bordism is a finite disjoint union of copies of the {\em superpoint} $\mathbb{R}^{0|1}$. Each of these bordisms could also be equipped with some kind of geometry, which determines the kind of quantum field theory, and HKST consider two possibilities: topological and Euclidean. For simplicity in the remainder of this section we will focus on the topological case (with no geometry). However even in the topological case we will equip the bordisms with a further structure. For each supermanifold $X$, HKST consider a category of bordisms {\em over $X$} where each $0|1$-dimensional bordism is equipped with the structure of a map to $X$. Finally, as mentioned in the introduction, the bordism category $\Bord_X^{0|1}$ is constructed internally to the category of stacks on the Grothendieck site of supermanifolds. In this case this means that $\Bord_X^{0|1}$ is a commutative monoid object in stacks on supermanifolds. The easiest way to describe this object is using the {\em $S$-point formalism}, which is essentially the same as the fibered category approach to stacks (see \cite{MR2223406} for an excellent introduction to fibered categories and stacks). Given an arbitrary test supermanifold $S$, we will describe the symmetric monoidal groupoid of maps from $S$ into $\Bord_X^{0|1}$. Its objects consist of {\em $S$-families} of $0|1$-dimensional bordisms equipped with a map to $X$. More precisely, an object consists of a pair $(E,f)$ where $E$ is a bundle of ${0|1}$-dimensional supermanifolds over $S$ equipped with a map $f:E \to X$ from the total space to $X$. There is an obvious notion of automorphism making this a groupoid, and the symmetric monoidal structure is given by fiberwise disjoint union over $S$. The definition of quantum field theory is incomplete without the target category, which will be a 0-category analog of the category of vector spaces. In this case HKST use the categorical looping of the category of vector spaces, which is the representable stack $\mathbb{R}$ (endomorphisms of the unit vector space). To indicate that we are thinking of $\mathbb{R}$ as a representable stack we will write it with an underline $\underline{\mathbb{R}}$. Multiplication makes $\underline{\mathbb{R}}$ into a commutative monoid object in supermanifolds (and hence also in stacks on supermanifolds), and $0|1$-dimensional topological quantum field theories over $X$ are then defined \cite[Def.~5.1]{MR2763085} to be the set \begin{equation*} \TFT{}(X) = \Hom(\Bord_X^{0|1}, \underline{\mathbb{R}}) \end{equation*} of homomorphisms of commutative monoids in stacks over supermanifolds. As a commutative monoid $\Bord_X^{0|1}$ is freely generated by the stack quotient \begin{equation*} \usMan(\mathbb{R}^{0|1}, X) /\mkern-3mu / \uAut(\mathbb{R}^{0|1}).
\end{equation*} Here $\usMan(\mathbb{R}^{0|1}, X)$ is the internal mapping object from $\mathbb{R}^{0|1}$ into $X$, and $\uAut(\mathbb{R}^{0|1})$ is the internal automorphism object; an $S$-point of $\usMan(\mathbb{R}^{0|1}, X)$ is a map $S \times \mathbb{R}^{0|1} \to X$ and the effect of taking the stack quotient is that, locally in $S$, we may glue these trivial $\mathbb{R}^{0|1}$-bundles together to form non-trivial bundles. Both $\usMan(\mathbb{R}^{0|1}, X)$ and $\uAut(\mathbb{R}^{0|1})$ turn out to be representable by supermanifolds. The former is given by \begin{equation*} \usMan(\mathbb{R}^{0|1}, X) \cong \pi TX \end{equation*} and the latter is a super Lie group $\mathbb{R}^{\times} \ltimes \mathbb{R}^{0|1}$ \cite[Prop.~3.1 and Lma.~3.5]{MR2763085}. The supermanifold $\pi TX$ has the surprising property that its algebra of functions is the superalgebra of differential forms on $X$. With this description of the bordism category, it is straightforward to calculate the topological field theories explicitly. Since the commutative monoid $\Bord_X^{0|1}$ is freely generated by $\pi TX /\mkern-3mu / \uAut(\mathbb{R}^{0|1})$, the commutative monoid homomorphisms from $\Bord_X^{0|1}$ into $\underline{\mathbb{R}}$ are exactly the same as the maps from $\pi TX /\mkern-3mu / \uAut(\mathbb{R}^{0|1})$ into $\underline{\mathbb{R}}$. These in turn may be identified with the even functions on $\pi TX$ (i.e. even differential forms) which are $\uAut(\mathbb{R}^{0|1})$-invariant. These are exactly the locally constant functions on $X$ (i.e. the closed degree zero differential forms on $X$) \cite[Prop.~5.5]{MR2763085}. {\em Twisted} quantum field theories generalize the quantum field theories just described. For a given geometry there is a symmetric monoidal category of twists, and for each twist, $\tau$, there is a corresponding notion of $\tau$-twisted quantum field theory over $X$. Let $\QFT{}^\tau(X)$ denote the set of these. In the above situation there is a natural family of {\em degree $n$ twists} parametrized by the integers (with the untwisted case corresponding to degree zero). A similar calculation to the above \cite[Prop.~6.3]{MR2763085} yields \begin{equation*} \TFT{}^n(X) \cong \Omega^n_\textrm{cl}(X), \end{equation*} that is $0|1$-dimensional degree $n$ supersymmetric topological field theories over $X$ are in natural bijection with closed smooth differential forms of degree $n$ on $X$. As a consequence concordance classes of these degree $n$ topological field theories are in bijection with de Rham cohomology classes on $X$. More generally the category of twists depends on the manifold $X$ and twisted quantum field theories give a model of twisted cohomology \cite{SST}. We will make the definition of twist more precise in our specific context in Section~\ref{sec:SQFT2}. \subsection*{Outline of the paper} In Section \ref{sec:sacs} we define the category of superalgebraic cartesian sets. The category is a presheaf topos and we develop basic properties of the category from that perspective. The category has a distinguished supercommutative algebra object $\Sect$. In Section \ref{sec:scommalg-in-sacs} we study the Picard category of invertible modules for $\Sect$. This is important when studying twisted field theories in Section \ref{sec:twists}. 
In Sections \ref{sec:tiny} and \ref{sec:endos} we study the mapping space from the superpoint into a superalgebraic cartesian set and show that under certain conditions the action of the endomorphisms of the superpoint on the mapping space produces a cdga structure. In Section \ref{sec:forms} we examine more closely the case where the superalgebraic cartesian set~comes from a simplicial set $X$: we show that the ring of functions on this mapping superalgebraic cartesian set~is precisely Sullivan's rational differential forms on $X$, and that the endomorphisms of the superpoint reproduce the grading and differential on Sullivan's rational differential forms. Section \ref{sec:geometries} explores the structure induced by submonoids of the endomorphisms of the superpoint. These are called geometries. In Section \ref{sec:SQFT2} we define and study $0|1$-dimensional supersymmetric quantum field theories in the context of superalgebraic cartesian sets. In analogy to the smooth setting, we define a bordism (0-)category over an arbitrary superalgebraic cartesian set~$X$. The bordisms in this case consist of finite disjoint unions of copies of the superpoint $\mathbb{A}^{0|1}$ and they are equipped with maps to the superalgebraic cartesian set~$X$. For each geometry we describe the collection of $0|1$-dimensional supersymmetric quantum field theories over $X$ in terms of Sullivan's rational differential forms. In Section \ref{sec:twists} we define twisted field theories and describe the twisted field theories in terms of rational differential forms. Various natural notions of concordance are defined in Section \ref{sec:concordance} and we show that they are all equivalent. This gives the main theorem. \begin{comment} As before the internal mapping space $\usCart(\mathbb{A}^{0|1}, X)$, or more specifically the functions on this superalgebraic cartesian set, play a key role in calculating these field theories. The next two sections are devoted to understanding the functions on this mapping space and their natural symmetries. \end{comment} \subsection*{Acknowledgments} We would like to thank Peter Teichner for several useful conversations and for suggesting that we look at Sullivan's work as a geometric model of rational cohomology. We would like to thank the Max Planck Institute for Mathematics in Bonn for their generous hospitality; this work was carried out at the MPIM. \subsection*{Further motivations} One tool that aids in the study of higher height cohomology theories is a form of character theory \cite{hkr, tgcm}. It provides a character map that approximates a high height cohomology theory by a form of rational cohomology whose coefficients are a ring extension of the rationalization of the coefficients of the original theory. These coefficient rings are often algebras over the $p$-adic rationals. Many features of these character maps are reminiscent of {\em dimensional reduction} maps between field theories. In fact there is a quantum field theoretic interpretation of the (Bismut) Chern character map which arises precisely as a dimensional reduction \cite{Han:2007aa}. This geometric construction yields a character map from K-theory taking values in periodic de Rham cohomology. Periodic de Rham cohomology cannot be a suitable target for the higher height character maps that take place at a prime $p$. This is essentially because there is no (interesting) map from the real numbers $\mathbb{R}$ to the $p$-adic rationals $\mathbb{Q}_p$.
For example the $p$-adic Chern character may be obtained as the completion of the ordinary Chern character, but only once it is factored through periodic {\em rational} cohomology. This project grew out of a desire to explore the relationship between higher character theory and quantum field theory, which remains an ongoing project. This paper achieves a crucial first step, which is to construct a geometric and quantum field theoretic construction of the cohomology theories which serve as targets of these higher character maps. \section*{Introduction} Cohomology theories such as real cohomology, $K$-theory, and cobordism theories have the distinct advantage of a geometric description. They are built out of geometric cochains such as differential forms, vector bundles, or cobordism classes of manifolds. This significantly aids our ability to compute with these theories while also allowing methods from algebraic topology to be used to solve geometric problems. Chromatic homotopy theory organizes cohomology theories according to their height, which is a measure of the complexity of the theory. Real cohomology and $K$-theory are at heights $0$ and $1$, respectively. The theory of {\em topological modular forms} $TMF$ introduced by Hopkins and Miller is of height $2$, while there are numerous theories, such as Morava $E_n$-theory and $K(n)$-theory, which exist for arbitrary heights $n$. In contrast to real cohomology and $K$-theory, there are no known geometric descriptions of these latter theories. In fact, aside from bordism theories (which are manifestly geometric), to our knowledge the only known geometric construction of a cohomology theory of complexity greater than K-theory is via the Baas-Dundas-Richter-Rognes theory of `2-vector bundles' \cite{MR3010546, MR2832571}; it produces $K(ku)$, the algebraic K-theory of topological K-theory, a theory of telescopic complexity two. Nevertheless, several years ago the enticing idea was put forward that quantum field theories could provide some of the best candidates for geometric cochains for higher height cohomology theories. This idea was pioneered by Graeme Segal \cite{MR992209} who proposed to use 2-dimensional conformal field theories to give geometric cocycles for elliptic cohomology. This idea has been further developed in the work of Stolz-Teichner \cite{MR2079378, Stolz-Teichner-Survery2}. While the primary goal of the Stolz-Teichner program has been to use quantum field theories to construct a geometric model of $TMF$, a goal which has not yet been fully realized, as an offshoot they have been very successful in constructing new geometric models of K-theory and de Rham cohomology based entirely on the formalism of quantum field theory. See \cite{MR2763085} for the latter case. There are two categories which go into the Atiyah-Segal formulation of quantum field theories. \begin{itemize} \item A symmetric monoidal category (or more generally $n$-category) $\Bord$ of bordisms. Here the objects are manifolds (say of dimension $d-1$) and the morphisms are isomorphism classes of bordisms between these. In the context relevant to cohomology theories these manifolds will typically be equipped with some geometric structure such as metrics or conformal structures, though the purely topological case is also of interest. \item A target symmetric monoidal category $\mathcal{V}$. This is often the category $\Vect$ of vector spaces (or Hilbert spaces). In higher categorical contexts a suitable higher categorical analog of vector spaces should be used. 
\end{itemize} A quantum field theory is then defined to be a symmetric monoidal functor: \begin{equation*} Z: \Bord \to \mathcal{V}. \end{equation*} When there is geometry involved the set of all choices of that geometry (on a given bordism) will form a kind of `space', and our quantum field theory should restrict to give a function (continuous, smooth, holomorphic, etc.) on that space. In certain degenerate cases these `spaces' will actually themselves be represented by manifolds, but more generally we will need to use `generalized manifolds' (i.e. concrete sheaves) or stacks. It is important that quantum field theories respect this structure. One way to accomplish this (following \cite[\S2]{Stolz-Teichner-Survery2}) is to regard $\Bord$ as an internal category, internal to stacks or generalized manifolds. The target category $\mathcal{V}$ will be of the same kind and our field theory is required to be an internal functor. There are several other key ideas which play a role in the Stolz-Teichner program. One of them is the use of {\em supersymmetric} quantum field theories. The theory of supermanifolds and the resulting supergeometry are used extensively in their work. Another key idea is that it is possible to form {\em twisted field theories}, and in particular field theories of a fixed {\em degree} $n \in \mathbb{Z}$. A third ingredient is that it is possible to consider field theories {\em over a (super) manifold} $X$, in which the relevant cobordisms are equipped with maps to $X$. This will be (contravariantly) functorial in $X$ and hence one obtains a series of (pointed) presheaves: \begin{equation*} X \mapsto \QFT{}^n(X) \end{equation*} Here $\QFT{}^n(X)$ denotes the set of isomorphism classes of degree $n$ quantum field theories over $X$. By varying the dimension of the bordisms, the geometry, and the target category one obtains a plethora of varieties of quantum field theories. This flexibility is part of the appeal of the subject. Two quantum field theories over $X$ are defined to be {\em concordant} if there exists a quantum field theory over $X \times \mathbb{R}$ which restricts to the two given field theories on $X \times \{i\}$, $i=0,1$. Concordance induces an equivalence relation, and we denote the set of concordance classes of quantum field theories over $X$ by $\QFT{}^n[X]$. It is automatically homotopy invariant. In very favorable situations this construction yields a cohomology theory; this is the case for de Rham cohomology \cite{MR2763085}, K-theory \cite{stolz-teichner-unpublished}, Tate K-theory \cite{Cheung:2008aa}, and complexified $TMF$ \cite{Berwick-Evans:2013aa}. In the current work we build on these ideas. We were particularly influenced by the results of Hohnhold-Kreck-Stolz-Teichner \cite{MR2763085}. The first major departure from previous results is a move away from (generalized) supermanifolds. In Section \ref{sec:sacs} we introduce the notion of {\em superalgebraic cartesian sets}. One way to view manifolds, and also more exotic `generalized manifolds', is as certain sheaves on the category of smooth cartesian spaces, i.e. the category with objects $\mathbb{R}^n$ for $n \in \mathbb{N}_{\geq 0}$ and morphisms $\hom(\mathbb{R}^n, \mathbb{R}^m)$ the set of smooth maps from $\mathbb{R}^n$ to $\mathbb{R}^m$. Similarly generalized supermanifolds may be viewed as certain sheaves on the category of smooth supercartesian spaces $\mathbb{R}^{n|q}$.
Superalgebraic cartesian sets~are defined analogously but with the following changes: \begin{itemize} \item We drop the sheaf requirement, allowing ourselves to consider arbitrary presheaves (and indeed arbitrary prestacks); \item Instead of all smooth maps between $\mathbb{R}^{n|q}$ and $\mathbb{R}^{m|p}$, we restrict to functions which are polynomials in the standard coordinates; \item We allow these polynomials to be defined over an arbitrary base ring. \end{itemize} Consequently we find it more appropriate to denote the representable superalgebraic cartesian sets as $\mathbb{A}^{n|q}$. The term `superalgebraic cartesian set' is supposed to remind us that this notion of space is based on the polynomial algebra over an arbitrary ring, while also being evocative of the term `simplicial set'. Indeed any simplicial set has an {\em algebraic realization} as a superalgebraic cartesian set, and any superalgebraic cartesian set~has a corresponding singular simplicial set (see Section~\ref{sec:sacs}). They also have several aspects reminiscent of schemes in algebraic geometry, though the theory of superalgebraic cartesian sets~is considerably simpler. Everything we do is functorial in the base ring. Given this new notion of space, we may then mimic the usual definition of quantum field theory. In this paper we will focus on the simplest species of supersymmetric quantum field theories, those of superdimension $0|1$. The bordisms in this case consist of finite disjoint unions of the representable {\em superpoint} $\mathbb{A}^{0|1}$. A second departure from previous work is that instead of working over a supermanifold, we define these quantum field theories over an arbitrary simplicial set. We consider a variety of geometries on the superpoint, each of which gives rise to a notion of supersymmetric ${0|1}$-dimensional quantum field theory. We classify the possible global twists for these theories. In each case there is always a {\em degree $n$ twist}, where the degree $n$ now takes values in the natural numbers $\mathbb{N}$. We also examine some more exotic twists. When the base ring is the field $\mathbb{Q}$ of rational numbers, the supersymmetric ${0|1}$-dimensional quantum field theories over a simplicial set~$X$ have a familiar interpretation. They coincide precisely with Sullivan's model of rational polynomial differential forms on $X$ \cite{Sull}. More precisely, the most interesting geometries we consider are: fully-rigid, Euclidean, and topological (no geometry). In these cases we obtain the following result: \begin{thm*}\label{thm:mainthm} Let $R$ be a rational algebra, and consider the category of superalgebraic cartesian sets~defined over $R$. Let $X$ be a simplicial set, regarded as a superalgebraic cartesian set. Then: \begin{enumerate} \item For each of the following geometries the set of supersymmetric ${0|1}$-dimensional quantum field theories of degree $n$ over $X$ may be identified as: \begin{enumerate} \item (topological) closed degree $n$ polynomial forms over $R$ \begin{equation*} \TFT{}^n(X) \cong \Omega^n_{R; cl}(X); \end{equation*} \item (Euclidean) closed periodic polynomial forms over $R$ \begin{equation*} \EFT{}^n(X) = \begin{cases} \Omega^\textrm{ev}_{R; cl}(X) & n \textrm{ even} \\ \Omega^\textrm{odd}_{R; cl}(X) & n \textrm{ odd} \end{cases} \end{equation*} \item (fully-rigid) all polynomial forms over $R$ \[ \QFT{}_\textrm{f-r}(X) \cong \Omega^*_{R}(X).
\] \end{enumerate} \item For each of the following geometries the set of concordance classes of supersymmetric ${0|1}$-dimensional quantum field theories of degree $n$ over $X$ may be identified as: \begin{enumerate} \item (topological) $\TFT{}^n[X] \cong HR^n(X)$, degree $n$ $R$-cohomology; \item (Euclidean) $\EFT{}^n[X]\cong PHR^n(X)$, periodic $R$-cohomology. \end{enumerate} \end{enumerate} Moreover in the case of fully-rigid geometry the natural symmetries of the supersymmetric quantum field theory recover the commutative differential graded algebra structure on $\Omega^*_{R}(X)$. \end{thm*} \section{Twisted Field Theories} \label{sec:twists} In Section~\ref{sec:SQFT2} we saw how a $0|1$-dimensional supersymmetric $\mathbb{M}$-quantum field theory assigned to each $S$-family of $\mathbb{M}$-bordisms over $X$ a function on $S \in \mathsf{sA}$. A function on $S$ is a map from $S$ to $\Sect$, a section of the trivial $\Sect$-line bundle over $S$. A {\em twisted} field theory is similar, except that we allow the $\Sect$-line bundles to be non-trivial. Following \cite[\S5]{Stolz-Teichner-Survery2} a {\em twisted field theory} is defined to be a natural transformation between certain functors, the {\em twist functors}. Moreover these twist functors are functors of symmetric monoidal categories internal to superalgebraic cartesian sets, and this natural transformation is a transformation in the internal sense. The target symmetric monoidal category, $\PPic_\Sect$, was introduced in Section \ref{sec:scommalg-in-sacs}. The source category is an enhancement of $\Bord_{(\mathbb{M}, X)}^{0|1}$ to be an internal symmetric monoidal category in superalgebraic cartesian sets. \subsection{The bordism category} In Section~\ref{sec:SQFT2} we introduced the bordism category $\Bord_{(\mathbb{M}, X)}^{0|1}$ as a commutative monoid internal to superalgebraic cartesian sets. We will now promote this to a category internal to superalgebraic cartesian sets, which we denote by $\BBord_{(\mathbb{M}, X)}^{0|1}$ to distinguish it from our previous definition. For $S \in \mathsf{sA}$, the $S$-points of $\Bord_{(\mathbb{M}, X)}^{0|1}$ consisted of the equivalence classes of $S$-families of $0|1$-dimensional bordisms equipped with $\mathbb{M}$-structures and maps to $X$. The equivalence relation was determined by $S$-families of $\mathbb{M}$-isometries. A similar description applies to $\BBord_{(\mathbb{M}, X)}^{0|1}$, only now, instead of forming the quotient by the $\mathbb{M}$-isometries, the $\mathbb{M}$-isometries form the morphisms between the objects of $\BBord_{(\mathbb{M}, X)}^{0|1}$.
Let $N$ be the product \[ N = ( \# \mathcal{M}_1 ) \cdot ( \# \mathcal{M}_2 ) \cdot ( \# \mathcal{M}_3) \] of the numbers of elements of $\mathcal{M}_m$. The assumption that the Seiberg-Witten invariants are odd and Remark \ref{remark almost} mean that $N$ is odd. As in the case where $n=2$, what we must show is that the spin structure on each torus is the Lie group spin structure. To prove this, we need to show the following lemma, which is proved in a similar way to Lemma \ref{lem spin V E}. Note that there are natural actions of $T^2_q:=T^3/S^1_d$ on $T \bar{V}$, $E$. \begin{lem} The restrictions of the spin structures on $T \bar{V}$, $E$ induced by $L$ to $\mathcal{M}_X$ are the spin structures induced by the natural $T^2_q:=T^3/S^1_d$-actions on $TV|_{\mathcal{M}_X}$, $E|_{\mathcal{M}_X}$. \end{lem} We leave the details of the proof of this lemma to the interested reader. \par Since $f$ is $T^3$-equivariant, the section $s$ of $E$ defined by $f$ is $T^2_q$-equivariant. The spin structure on $\mathcal{M}_X$ is defined by the spin structures on $T \bar{V}$, $E$ and the section $s$, and these are compatible with the $T^2_q$-actions. Hence the spin structure on each component of $\mathcal{M}_X$ is the Lie group spin structure. Therefore the spin cobordism class of $\mathcal{M}_X$ is non-trivial by Fact \ref{1, 2 dim spin}. This completes the proof of Theorem \ref{thm-spin-non-v}. \vspace{5mm} In particular, Theorem \ref{thm-spin-non-v} implies the following result: \begin{thm}\label{cor-B} For $m=1,2,3$, let $X_m$ be \begin{itemize} \item a closed oriented almost complex 4-manifold with ${b}_{1}({X}_{m})=0$, ${b}^{+}({X}_{m}) \equiv 3 \ (\bmod \ 4)$ and $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{m}}$ is a spin${}^c$ structure compatible with the almost complex structure, or \item a closed oriented almost complex 4-manifold with ${b}^{+}(X_m)>1$, $c_{1}(X_{m}) \equiv 0 \ (\bmod \ 4)$ and $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{m}}$ is a spin${}^c$ structure compatible with the almost complex structure. \end{itemize} Let $X:=\displaystyle\#_{m=1}^{n}{X}_{m}$, where $n=2, 3$, and $\Gamma_X = \#_{m=1}^n ( \pm \Gamma_{X_m})$. Here the signs $\pm$ are arbitrary. Fix an orientation $\mathcal{O}$ on $\mathcal{H}_g^1(X) \oplus \mathcal{H}_g^+(X)$ and choose a square root $L$ of $\operatorname{det}_{\mathbb{C}} (\operatorname{Ind} D)$. Then ${\mathcal{M}}$ associated with the spin${}^{c}$ structure $\Gamma_{X}$ defines a non-trivial spin cobordism class: \begin{eqnarray*} {SW}^{spin}(\Gamma_{X}, L) \not\equiv 0 \in \Omega^{spin}_{n-1}. \end{eqnarray*} \end{thm} \begin{proof} By Theorem \ref{thm-spin-non-v}, it is sufficient to check that the conditions (\ref{condition-11}) and (\ref{condition-22}) hold for each $X_m$. \par When $X_m$ is an almost complex $4$-manifold with $b_1(X_m)=0$, $b^+(X_m) \equiv 3 \bmod 4$, (\ref{condition-11}), (\ref{condition-22}) hold clearly. Let $X_m$ be an almost complex 4-manifold with $c_1(X_m) \equiv 0 \bmod 4$. Since $X_m$ is spin, it follows from Rochlin's theorem that the signature of $X_m$ is divisible by $16$. The numerical index of the spin${}^c$ Dirac operator is given by \[ \frac{c_1^2(X_m) - \tau(X_m)}{8}. \] Since $c_1^2(X_m)$ and $\tau(X_m)$ are divisible by $16$, the index is even. Hence Lemma \ref{simple-com} implies that (\ref{condition-11}) holds. Moreover it follows from the definition of $\frak{S}^{ij}$ that (\ref{condition-22}) is satisfied. \end{proof} We give examples of 4-manifolds which satisfy the conditions in Theorem \ref{cor-B}.
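Before stating them, we record a quick illustration of the parity argument in the proof above: a $K3$ surface has $c_1 = 0$ and $\tau = -16$, so the numerical index of its spin${}^c$ Dirac operator equals
\[
\frac{c_1^2 - \tau}{8} = \frac{0-(-16)}{8} = 2,
\]
which is indeed even.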
\begin{cor}\label{sym-cor-1} For $m=1,2,3$, let $X_{m}$ be \begin{itemize} \item a product $\Sigma_{g} \times \Sigma_{h}$ of oriented closed surfaces of odd genus $g, h \geq 1$, or \item a closed symplectic 4-manifold with ${b}_{1}({X}_{m})=0$ and ${b}^{+}({X}_{m}) \equiv 3 \ (\bmod \ 4)$, or \item a primary Kodaira surface. \end{itemize} Let $\Gamma_{X_m}$ be the spin$^c$ structures on $X_m$ induced by an almost complex structure compatible with the symplectic structure. And let $X$ be the connected sum $X:=\displaystyle\#_{m=1}^{n}{X}_{m}$ for $n=2, 3$ and we denote by $\Gamma_{X}$ a spin${}^{c}$ structure on $X$ defined by $\Gamma_X = \#_{m=1}^n (\pm \Gamma_{X_m})$. Here the signs $\pm$ are arbitrary. Fix an orientation $\mathcal{O}$ on $\mathcal{H}_g^1(X) \oplus \mathcal{H}_g^+(X)$ and choose a square root $L$ of $\operatorname{det}_{\mathbb{C}} (\operatorname{Ind} D)$. Then ${\mathcal{M}}$ associated with the spin${}^{c}$ structure $\Gamma_{X}$ defines a non-trivial spin cobordism class: \begin{eqnarray*} {SW}^{spin}(\Gamma_{X}, L) \not\equiv 0 \in \Omega^{spin}_{n-1}. \end{eqnarray*} \end{cor} \begin{proof} By Taubes's theorem in \cite{t-1}, $SW(\Gamma_{X_m})$ is equal to $\pm 1$; in particular, it is odd. Let $X_m$ be $\Sigma_g \times \Sigma_h$ with $g, h$ odd. Then $c_1( \mathcal{L}_{\Gamma_{X_m} })$ is \[ 2(g-1) \alpha + 2(h-1) \beta \in H^2(X_m;\mathbb{Z}). \] Here $\alpha$, $\beta$ are the generators of $H^2(\Sigma_g;\mathbb{Z})$, $H^2(\Sigma_h;\mathbb{Z})$ respectively. Since $g$ and $h$ are odd, we have \[ c_1( \mathcal{L}_{ \Gamma_{X_m} }) \equiv 0 \bmod 4. \] Hence $(X_m, \Gamma_{X_m})$ satisfies the second condition in Theorem \ref{cor-B}. Let $X_m$ be a symplectic 4-manifold with $b_1=0$ and $b^+ \equiv 3 \bmod 4$. Then $(X_m, \Gamma_{X_m})$ clearly satisfies the first condition in Theorem \ref{cor-B}. Let $X_m$ be a primary Kodaira surface. This is a symplectic 4-manifold with $c_1(X_m) = 0$. Hence $(X_m, \Gamma_{X_m})$ satisfies the second condition in Theorem \ref{cor-B}. \end{proof} Here we give remarks on almost complex 4-manifolds with $c_1 = 0$, with $b^+ > 1$ and with $SW(\Gamma_X) \equiv 1 \bmod 2$. \begin{rem} \label{remark-thm25} Let $X$ be a closed oriented almost complex 4-manifold with vanishing first Chern class, with ${b}^{+}(X)>1$ and with $SW(\Gamma_{X}) \equiv 1 \bmod 2$. Then this satisfies the second condition in Theorem \ref{cor-B}. We are able to deduce that there are constraints on the Betti numbers of $X$ as follows. The 4-manifold $X$ must be spin and must satisfy $c^2_{1}({\cal L}_{\Gamma_{X}})=2\chi(X) + 3\tau(X)=0$. Rochlin's theorem tells us that the signature $\tau(X)={b}^{+}(X) - {b}^{-}(X)$ of $X$ is divisible by $16$. Hence, $\tau(X)=16k$ holds for some integer $k \in {\mathbb Z}$. Namely, we have ${b}^{-}(X)={b}^{+}(X)-16k$. By a direct computation, we have $0 = 2\chi(X) + 3\tau(X)=4-4{b}_{1}(X)+5{b}^+(X)-{b}^{-}(X) = 4-4{b}_{1}(X)+5{b}^+(X)-({b}^{+}(X)-16k) = 4(1-{b}_{1}(X)+{b}^{+}(X) + 4k)$. Hence we get \begin{eqnarray}\label{tau-1} {b}_{1}(X)=1+{b}^{+}(X) + 4k. \end{eqnarray} The assumption that $SW_X(\Gamma_X) \equiv 1 \bmod 2$ implies that $b^+(X) \leq 3$. In fact, Bauer \cite{b-06} proved that $SW_{M}(\Gamma_{M}) \equiv 0 \ (\bmod \ 2)$ for all almost complex 4-manifolds $M$ with vanishing first Chern class and with $b^+(M) \geq 4$ (cf. \cite{Li-1, Li-2}).
Since we assume that ${b}^{+}(X)>1$ and $SW_{X}(\Gamma_{X}) \equiv 1 \ (\bmod \ 2)$, we have ${b}^{+}(X)=2$ or ${b}^{+}(X)=3$. Notice also that $\tau(X)=16k \leq 0$ since ${b}^{+}(X) \leq 3$. \par Suppose that ${b}^{+}(X)=2$. Then, (\ref{tau-1}) tells us that ${b}_{1}(X)=3 + 4k$. Since $k \leq 0$ and ${b}_{1}(X) \geq 0$, we have ${b}_{1}(X)=3$ and $k=0$. Equivalently, we have ${b}_{1}(X)=3$ and $\tau(X)=0$. In particular, ${b}^{+}(X)-{b}_{1}(X)=2-3=-1 \equiv 3 \ (\bmod \ 4)$. \par On the other hand, suppose that ${b}^{+}(X)=3$. By (\ref{tau-1}), we have ${b}_{1}(X)=4 + 4k$. Since $\tau(X)=16k \leq 0$ and ${b}_{1}(X) \geq 0$, we have $k=0$ or $k=-1$. In the case of $k=0$, we have ${b}_{1}(X)=4$ and $\tau(X)=0$. Hence we have ${b}^{+}(X)-{b}_{1}(X)=3-4=-1 \equiv 3 \ (\bmod \ 4)$. On the other hand, in the case of $k=-1$, we have ${b}_{1}(X)=0$ and $\tau(X)=-16$. \par We have proved that there are three cases. Indeed, we have $({b}^{+}(X), {b}_{1}(X), \tau(X))=(2,3,0)$, $(3,4,0)$, or $(3,0,-16)$. A primary Kodaira surface is an example of the first case $(2,3,0)$. A 4-torus is an example of the second case $(3,4,0)$. A $K3$ surface is an example of the third case $(3,0,-16)$. \end{rem} \subsection{Non-vanishing theorem}\label{sub-32} Proposition \ref{spin-BF} and Theorem \ref{thm-spin-non-v} immediately imply the following result which is nothing but Theorem \ref{main-A}: \begin{thm}\label{thm-A} For $m=1,2,3$, let $X_m$ be a closed oriented almost complex 4-manifold with ${b}^{+}(X_m)>1$ and satisfying \begin{eqnarray*} {b}^{+}(X_{m})-{b}_{1}(X_{m}) \equiv 3 \ (\bmod \ 4). \end{eqnarray*} Let $\Gamma_{X_{m}}$ be a spin${}^{c}$ structure on $X_m$ which is induced by the almost complex structure and assume that $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$. Under Definition \ref{def-1}, moreover assume that the following condition holds for each $m$: \begin{eqnarray*} \frak{S}^{ij}(\Gamma_{X_{m}}) \equiv 0 \bmod 2 & \text{for all $i, j$}. \end{eqnarray*} Let $X=\#_{m=1}^n X_m$ and $\Gamma_X = \#_{m=1}^n (\pm \Gamma_{X_m})$ for $n=2, 3$. Here the signs $\pm$ are arbitrary. Then the connected sum $X$ has a non-trivial stable cohomotopy Seiberg-Witten invariant for the spin$^c$ structure $\Gamma_X$. \end{thm} Similarly, it is also clear that Proposition \ref{spin-BF} and Theorem \ref{cor-B} tell us that the following result holds, i.e., Theorem \ref{main-B}: \begin{thm}\label{cor-1} For $m=1,2,3$, let $X_m$ be \begin{itemize} \item a product $\Sigma_{g} \times \Sigma_{h}$ of oriented closed surfaces of odd genus $h, g \geq 1$, or \item a closed oriented almost complex 4-manifold with ${b}_{1}({X}_{m})=0$, ${b}^{+}({X}_{m}) \equiv 3 \ (\bmod \ 4)$ and $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{m}}$ is a spin${}^c$ structure compatible with the almost complex structure, or \item a closed oriented almost complex 4-manifold with vanishing first Chern class, ${b}^{+}(X_m)>1$ and $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{m}}$ is a spin${}^c$ structure compatible with the almost complex structure. \end{itemize} And let $X = \#_{m=1}^n X_m$ and $\Gamma_{X}=\#_{m=1}^n (\pm \Gamma_{X_m})$ for $n=2, 3$. Here the signs $\pm$ are arbitrary. Then the connected sum $X$ has a non-trivial stable cohomotopy Seiberg-Witten invariant for the spin${}^{c}$ structure $\Gamma_{X}$. In particular, Conjecture \ref{conj-1} in the case where $\ell=2$ is true.
\end{thm} Finally, Proposition \ref{spin-BF} and Corollary \ref{sym-cor-1} also imply \begin{cor}\label{key-cor-1} For $m=1,2,3$, let $X_{m}$ be \begin{itemize} \item a product $\Sigma_{g} \times \Sigma_{h}$ of oriented closed surfaces of odd genus $h, g \geq 1$, or \item a closed symplectic 4-manifold with ${b}_{1}({X}_{m})=0$ and ${b}^{+}({X}_{m}) \equiv 3 \ (\bmod \ 4)$, or \item a primary Kodaira surface. \end{itemize} Let $\Gamma_{X_m}$ be the spin$^c$ structures on $X_m$ induced by an almost complex structure compatible with the symplectic structure. And let $X= \#_{m=1}^n X_m$ and $\Gamma_{X}= \#_{m=1}^n ( \pm \Gamma_{X_m})$ for $n=2, 3$. Here the signs $\pm$ are arbitrary. Then the connected sum $X$ has a non-trivial stable cohomotopy Seiberg-Witten invariant for the spin${}^{c}$ structure $\Gamma_{X}$. \end{cor} \section{Various applications of Theorem \ref{main-A}}\label{sec-4} In this section, we shall give various applications of Theorem \ref{main-A} and Theorem \ref{main-B}. \subsection{Decompositions and exotic smooth structures of connected sums of $4$-manifolds} \label{subsec-exotic} We will give proofs of Theorem \ref{thm decomp} and Theorem \ref{thm exotic-B}. The key to the proofs is the following lemma: \begin{lem} \label{lem moduli dim} Let $Z_{l}$ be closed, oriented, 4-manifolds with $b^+ > 0$ for $l=1, 2, \dots, N$ and $\Gamma_{l}$ be spin$^c$ structures on $Z_l$. Put $Z=\#_{l=1}^N Z_l$, $\Gamma_{Z}:=\#_{l=1}^N \Gamma_{l}$ for some $N \geq 1$. Assume that the moduli space $\mathcal{M}^{SW}_{\Gamma_{Z}}(g, \eta)$ is not empty for all Riemannian metrics $g$ and self-dual 2-forms $\eta$ on $Z$. Then the virtual dimension of $\mathcal{M}^{SW}_{\Gamma_Z}(g, \eta)$ is larger than or equal to $N-1$. \end{lem} \begin{proof} To simplify notation we consider the case $N=3$. Take points $z_1 \in Z_1$, $z_2, z_2' \in Z_2$, $z_3 \in Z_3$ and small open disks $D_1, D_2, D_2', D_3$ centered at these points. We put \[ \begin{split} \hat{Z}_1 &= (Z_1 \backslash D_1) \cup S^3 \times \mathbb{R}_{ \geq 0}, \\ \hat{Z}_2 &= S^3 \times \mathbb{R}_{\leq 0} \cup ( Z_2 \backslash (D_2 \cup D_2') ) \cup S^3 \times \mathbb{R}_{\geq 0}, \\ \hat{Z}_3 &= S^3 \times \mathbb{R}_{ \leq 0} \cup (Z_3 \backslash D_3), \end{split} \] and for each $T>0$ we define \[ \begin{split} \hat{Z}_1(T) &= \hat{Z}_1 \backslash S^3 \times [2T, \infty) \\ \hat{Z}_2(T) &= \hat{Z}_2 \backslash \big( S^3 \times ( -\infty, -2T) \cup S^3 \times [2T, \infty) \big) \\ \hat{Z}_3(T) &= \hat{Z}_3 \backslash S^3 \times (-\infty, -2T]. \end{split} \] There is an identification \[ \begin{array}{rccc} \varphi_T: & S^3 \times (T, 2T) & \cong & S^3 \times (-2T, -T) \\ &(y, t) & \longmapsto & (y, t-3T). \end{array} \] Gluing $\hat{Z}_1(T), \hat{Z}_2(T), \hat{Z}_3(T)$ by using $\varphi_{T}$, we have a manifold $Z(T)$ which is diffeomorphic to the connected sum $Z=\#_{l=1}^3 Z_l$. We take Riemannian metrics $\hat{g}_l$ on $\hat{Z}_l$ which coincide with $g_{S^3} + dt^2$ on the ends. Here $g_{S^3}$ is the standard metric on $S^3$. These metrics naturally induce a Riemannian metric $g(T)$ on $Z(T)$. Let $\mathcal{M}^{SW}_{ \hat{ \Gamma }_l }(\hat{g}_l, \hat{\eta}_l)$ be the moduli spaces of monopoles on $\hat{Z}_l$ which converge to the trivial monopole on $S^3$ for all $l$. Here $\hat{\Gamma}_{ l }$ are spin$^c$ structures on $\hat{Z}_l$ induced by $\Gamma_{l}$.
Since $b^+(\hat{Z}_l)>0$, we can choose self-dual 2-forms $\hat{\eta}_l$ such that $\mathcal{M}^{SW}_{ \hat{\Gamma}_{ l }}(\hat{g}_l, \hat{\eta}_l)$ contain no reducible monopoles and are smooth of the expected dimension or empty. Moreover we may suppose that the supports of $\hat{\eta}_l$ do not intersect the ends of $\hat{Z}_l$. (See Proposition 4.4.1 in \cite{nico}.) Extending $\hat{\eta}_l$ trivially, we consider $\hat{\eta}_l$ as self-dual 2-forms on $Z(T)$, and we get a self-dual $2$-form $\eta(T) := \hat{\eta}_1 + \hat{\eta}_2 + \hat{\eta}_3$ on $Z(T)$. Then we have \begin{equation} \label{eq dim sum} \operatorname{dim} \mathcal{M}^{SW}_{\Gamma_Z}(g(T), \eta(T))= \sum_{l=1}^3 \operatorname{dim} \mathcal{M}^{SW}_{ \hat{\Gamma}_l }(\hat{g}_l, \hat{\eta}_l) + 2. \end{equation} Here $\operatorname{dim} \mathcal{M}^{SW}_{\Gamma_Z}(g(T), \eta(T))$, $\operatorname{dim} \mathcal{M}^{SW}_{ \hat{\Gamma}_l }(\hat{g}_l, \hat{\eta}_l)$ are the virtual dimensions of the moduli spaces. This is derived from the excision principle of index of elliptic differential operators. We can also see this from the theory of gluing of monopoles. For large $T$, coordinates of $\mathcal{M}^{SW}_{ \Gamma_Z }(g(T), \eta(T))$ are given by coordinates of $\mathcal{M}^{SW}_{ \hat{\Gamma}_{l}}( \hat{g}_l, \hat{\eta}_l)$ and gluing parameters. Since $Z(T)$ has two necks, the space of gluing parameters is $U(1) \times U(1)$ and it is 2-dimensional. Hence we have the formula (\ref{eq dim sum}). Let $\{ T^{\alpha} \}_{\alpha=1}^{\infty}$ be a sequence of positive numbers which diverges to infinity. By the assumption, $\mathcal{M}^{SW}_{\Gamma_Z}(g(T^{\alpha}), \eta(T^{\alpha}))$ are non-empty. Hence we can take elements $[\phi^{\alpha}, A^{\alpha}] \in \mathcal{M}^{SW}_{ \Gamma_Z }(g(T^{\alpha}), \eta(T^{\alpha}))$ for all $\alpha$. There is a subsequence $\{ [\phi^{\alpha'}, A^{\alpha'}] \}_{\alpha'}$ which converges to some $([\phi^{\infty}_1, A^{\infty}_1], [\phi^{\infty}_2, A^{\infty}_2], [\phi^{\infty}_3, A^{\infty}_3]) \in \mathcal{M}^{SW}_{\hat{\Gamma}_1}( \hat{g}_1, \hat{\eta}_1 ) \times \mathcal{M}^{SW}_{\hat{\Gamma}_2}( \hat{g}_2, \hat{\eta}_2 ) \times \mathcal{M}^{SW}_{\hat{\Gamma}_3}( \hat{g}_3, \hat{\eta}_3 )$. In particular, $\mathcal{M}^{SW}_{\hat{\Gamma}_l}( \hat{g}_l, \hat{\eta}_l )$ are non-empty. Since $\mathcal{M}^{SW}_{\hat{\Gamma}_l}( \hat{g}_l, \hat{\eta}_l )$ are non-empty and smooth of the expected dimension, their virtual dimensions are at least zero. From (\ref{eq dim sum}), we have \[ \operatorname{dim} \mathcal{M}^{SW}_{\Gamma_Z}(g(T), \eta(T)) \geq 2. \] The virtual dimension of $\mathcal{M}^{SW}_{\Gamma_Z}(g, \eta)$ is independent of $g, \eta$, so we have obtained the required result. The proof for the general case is similar. \end{proof} We restate Theorem \ref{thm decomp} here. \begin{thm} Let $X_m$ be closed symplectic 4-manifolds with $c_1(X_m) \equiv 0 \ (\bmod \ 4)$ for $m=1, 2, 3$, and $X$ be a connected sum $\#_{m=1}^n X_m$, where $n=2, 3$. Then $X$ can not be written as a connected sum $\#_{m=1}^{N} Y_{m}$ with $b^+(Y_m) >0 $ and with $N>n$. \end{thm} \begin{proof} It follows from Theorem \ref{main-B} that $\mathcal{M}^{SW}_{\Gamma_X}(g,\eta)$ are non-empty for all $g, \eta$. Here $\Gamma_X$ is the connected sum of the spin$^c$ structures $\Gamma_{X_m}$ of $X_m$ induced by the almost complex structures. Suppose that $X$ has a decomposition $X = \#_{m=1}^N Y_{m}$ with $b^+(Y_m) > 0$ and $N > n$. 
Then Lemma \ref{lem moduli dim} implies \[ \operatorname{dim} \mathcal{M}^{SW}_{\Gamma_X}(g,\eta) \geq N - 1. \] On the other hand, the dimension of $\mathcal{M}^{SW}_{\Gamma_X}(g,\eta)$ is $n-1$. Therefore we have $N \leq n$. Since we assumed that $N > n$, this is a contradiction. \end{proof} Next we show Theorem \ref{thm exotic-B}. More precisely we prove the following. \begin{thm} \label{thm exotic} Let $X$ be a closed, simply connected, non-spin, symplectic 4-manifold with $b^+ \equiv 3 \bmod 4$. We denote $b^+(X)$, $b^-(X)$ by $p$, $q$, and put $Y= p \mathbb C \mathbb P^2 \# q \overline{\mathbb C \mathbb P}^2 $. Let $X_m$ be almost complex 4-manifolds which have the properties in Theorem \ref{main-A} for $m=1, 2$, and let $X'$ be $X_1$ or the connected sum $\#_{m=1}^2 X_m$. Then the connected sum $X \# X'$ is homeomorphic to $Y \# X'$, but not diffeomorphic to $Y \# X'$. \end{thm} \begin{proof} First we show that $X \# X'$ is homeomorphic to $Y \# X'$. Let $Q_X$ be the intersection form on $H_2(X,\mathbb{Z})$. If $q$ is zero, by Donaldson's theorem \cite{d}, $Q_X$ is isomorphic to the intersection form of $Y = p \mathbb C \mathbb P^{2}$. If $q$ is not zero, then $Q_{X}$ is an odd, indefinite, unimodular form. In general, an odd, indefinite, unimodular form is diagonalizable. (See, for example, \cite{serre}.) Hence $Q_X$ is isomorphic to the intersection form of $Y= p \mathbb C \mathbb P^2 \# q \overline{\mathbb C \mathbb P}^2 $. Therefore $X$ is homeomorphic to $Y$ by Freedman's theorem \cite{freedman}, and so $X \# X'$ is homeomorphic to $Y \# X'$. Next we prove that $X \# X'$ is not diffeomorphic to $Y \# X'$. Let $\Gamma$ be the spin$^c$ structure on $X \# X'$ induced by almost complex structures on $X$, $X_m$. Then the dimension of moduli space associated with $\Gamma$ is $1$ or $2$ and it follows from Theorem \ref{main-A} that $\mathcal{M}^{SW}_{\Gamma}(g, \eta)$ is non-empty for any $g, \eta$. On the other hand, by the assumption that $b^+(X) \equiv 3 \bmod 4$, $Y$ is the connected sum $p \mathbb C \mathbb P^2 \# q \overline{\mathbb C \mathbb P}^2 $ with $p \geq 3$. Hence we can write $Y \# X'$ as $\#_{l=1}^4 Z_l$ with $b^+(Z_l) > 0$. Suppose that $X \# X'$ is diffeomorphic to $Y \# X'$. Then it follows from Lemma \ref{lem moduli dim} that the dimension of the moduli space $\mathcal{M}^{SW}_{\Gamma}(g, \eta)$ is at least $3$. Hence we have a contradiction since the dimension of the moduli space is 1 or 2. \end{proof} \subsection{Adjunction inequality}\label{subsec-adj} Let $X_i$, $X$ be as in Theorem \ref{bau-1}. Then the non-vanishing result of the stable cohomotopy Seiberg-Witten invariants for $X$ implies that the Seiberg-Witten equations on $X$ have solutions for any Riemannian metrics and perturbations (see also subsection \ref{4.22} below). Hence we are able to apply the arguments of Kronheimer-Mrowka \cite{K-M} to $X$ and we obtain an estimate for the genus of embedded surfaces in $X$ as a corollary of Bauer's non-vanishing theorem: \begin{cor} \label{thm adjunction-bau} Let $X_i$ and $X$ be as in Theorem \ref{bau-1} and let $\Gamma_{X_i}$ be a spin$^c$ structure on $X_i$ induced by the complex structure. Let $\Gamma_X$ be a spin$^c$ structure on $X$ defined by $\Gamma_{X}=\#_{i=1}^n (\pm \Gamma_{X_i})$.Here $n= 2, 3$ and the signs $\pm$ are arbitrary. Assume that $\Sigma$ is an embedded surface in $X$ with $[\Sigma] \cdot [\Sigma] \geq 0$ and $g(\Sigma)>0$, where $g(\Sigma)$ is the genus of $\Sigma$. 
Then, \[ [\Sigma] \cdot [\Sigma] - \< c_1(\mathcal{L}_{\Gamma_{X}}), [\Sigma] \> \leq 2g(\Sigma) - 2. \] \end{cor} The above inequality is called the adjunction inequality. Notice that we assume that $b_{1}(X_i)=0$. In the case where $b_{1} \not=0$, Theorem \ref{main-A} implies the following result: \begin{thm} \label{thm adjunction} Let $X_m$ and $X$ be as in Theorem \ref{main-A} and let $\Gamma_{X_m}$ be a spin$^c$ structure on $X_m$ induced by the complex structure. Let $\Gamma_X$ be a spin$^c$ structure on $X$ defined by $\Gamma_{X}=\#_{m=1}^n (\pm \Gamma_{X_m})$. Here $n= 2, 3$ and the signs $\pm$ are arbitrary. Assume that $\Sigma$ is an embedded surface in $X$ with $[\Sigma] \cdot [\Sigma] \geq 0$ and $g(\Sigma)>0$. Then, \[ [\Sigma] \cdot [\Sigma] - \< c_1(\mathcal{L}_{\Gamma_{X}}), [\Sigma] \> \leq 2g(\Sigma) - 2. \] \end{thm} See also \cite{furuta-k-m-2, furuta-k-m-3, furuta-k-m-1, sasa} for related results. In particular, we notice that Theorem \ref{thm adjunction} never follows from the adjunction inequalities proved in \cite{furuta-k-m-2, furuta-k-m-3, furuta-k-m-1, sasa}. \par By Theorem \ref{thm adjunction} and Corollary \ref{key-cor-1}, we obtain \begin{cor} For $m=1,2,3$, let $X_{m}$ be \begin{itemize} \item a product $\Sigma_{h} \times \Sigma_{g}$ of oriented closed surfaces of odd genus $h, g \geq 1$, or \item a closed symplectic 4-manifold with ${b}_{1}({X}_{m})=0$ and ${b}^{+}({X}_{m}) \equiv 3 \ (\bmod \ 4)$, or \item a primary Kodaira surface. \end{itemize} Let $\Gamma_{X_m}$ be a spin$^c$ structure on $X_m$ induced by the complex structure. Let $\Gamma_X$ be a spin$^c$ structure on $X$ defined by $\Gamma_{X}=\#_{m=1}^n (\pm \Gamma_{X_m})$. Here $n= 2, 3$ and the signs $\pm$ are arbitrary. Assume that $\Sigma$ is an embedded surface in $X$ with $[\Sigma] \cdot [\Sigma] \geq 0$ and $g(\Sigma)>0$. Then, \[ [\Sigma] \cdot [\Sigma] - \< c_1(\mathcal{L}_{\Gamma_{X}}), [\Sigma] \> \leq 2g(\Sigma) - 2. \] \end{cor} \subsection{Monopole classes and curvature bounds}\label{4.22} In this subsection, for the convenience of the reader, following a recent beautiful article \cite{leb-17} of LeBrun, we shall first recall curvature estimates arising from the Seiberg-Witten monopole equations in terms of the convex hull of the set of all monopole classes on 4-manifolds. We shall use these estimates in the rest of this article. The main results in this subsection are Theorems \ref{mono-key-bounds} and \ref{bf-ricci} below. \par First of all, let us recall the definition of a monopole class \cite{kro, leb-11, ishi-leb-2, leb-17}. \begin{defn}\label{ishi-leb-2-key} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. An element $\frak{a} \in H^2(X, {\mathbb Z})$/torsion $\subset H^2(X, {\mathbb R})$ is called a monopole class of $X$ if there exists a spin${}^c$ structure $\Gamma_{X}$ with \begin{eqnarray*} {c}^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}}) = \frak{a} \end{eqnarray*} which has the property that the corresponding Seiberg-Witten monopole equations have a solution for every Riemannian metric on $X$. Here ${c}^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}})$ is the image of the first Chern class ${c}_{1}({\cal L}_{\Gamma_{X}})$ of the complex line bundle ${\cal L}_{\Gamma_{X}}$ in $H^2(X, {\mathbb R})$. We shall denote the set of all monopole classes on $X$ by ${\frak C}(X)$. \end{defn} Crucial properties of the set ${\frak C}(X)$ are summarized as follows \cite{leb-17, ishi-leb-2}: \begin{prop}[\cite{leb-17}]\label{mono} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$.
Then ${\frak C}(X)$ is a finite set. Moreover ${\frak C}(X) = -{\frak C}(X)$ holds, i.e., $\frak{a} \in H^2(X, {\mathbb R})$ is a monopole class if and only if $-\frak{a} \in H^2(X, {\mathbb R})$ is a monopole class, too. \end{prop} Recall that, for any subset $W$ of a real vector space $V$, one can consider the convex hull ${\bf{Hull}}(W) \subset V$, meaning the smallest convex subset of $V$ containing $W$. Then, Proposition \ref{mono} implies \begin{prop} [\cite{leb-17}] \label{mono-leb} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Then the convex hull ${\bf{Hull}}({\frak C}(X)) \subset H^2(X, {\mathbb R})$ of ${\frak C}(X)$ is compact, and symmetric, i.e., ${\bf{Hull}}({\frak C}(X)) = -{\bf{Hull}}({\frak C}(X))$. \end{prop} Since ${\frak C}(X)$ is a finite set, we may write ${\frak C}(X)=\{{\frak{a}}_{1},{\frak{a}}_{2}, \cdots, {\frak{a}}_{n} \}$. The convex hull ${\bf{Hull}}({\frak C}(X))$ is then expressed as follows: \begin{eqnarray}\label{hull} {\bf{Hull}}({\frak C}(X))= \{ \sum^{n}_{i=1}t_{i} {\frak{a}}_{i} \ | \ t_{i} \in [0,1], \ \sum^{n}_{i=1}t_{i}=1 \}. \end{eqnarray} Notice that the symmetric property tells us that ${\bf{Hull}}({\frak C}(X))$ contains the zero element. On the other hand, consider the following self-intersection function: \begin{eqnarray*} {\cal Q} : H^2(X, {\mathbb R}) \rightarrow {\mathbb R} \end{eqnarray*} which is defined by $x \mapsto x^2:=\< x \cup x, [X] \>$, where $[X]$ is the fundamental class of $X$. This function ${\cal Q}$ is a polynomial function and hence continuous on $H^2(X, {\mathbb R})$. We can therefore conclude that the restriction ${\cal Q} |_{{\bf{Hull}}({\frak C}(X))}$ to the compact subset ${\bf{Hull}}({\frak C}(X))$ of $H^2(X, {\mathbb R})$ achieves its maximum. Then we introduce \begin{defn}[\cite{leb-17}]\label{beta} Suppose that $X$ is a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Let ${\bf{Hull}}({\frak C}(X)) \subset H^2(X, {\mathbb R})$ be the convex hull of the set ${\frak C}(X)$ of all monopole classes on $X$. If ${\frak C}(X) \not= \emptyset$, define \begin{eqnarray*} {\beta}^2(X):= \max \{ {\cal Q}(x):=x^2 \ | \ x \in {\bf{Hull}}({\frak C}(X)) \}. \end{eqnarray*} On the other hand, if ${\frak C}(X) = \emptyset$ holds, define simply ${\beta}^2(X):=0$. \end{defn} Notice again that ${\bf{Hull}}({\frak C}(X))$ contains the zero element if $\frak{C}(X)$ is not empty. Hence the above definition with this fact implies that ${\beta}^2(X) \geq 0$ holds. \par The existence of monopole classes gives a constraint on the existence of Riemannian metrics of some type: \begin{prop}[\cite{leb-17}]\label{beta-ine-key-0} Let $X$ be a closed oriented smooth 4-manifold with ${b}^+(X) \geq 2$. If there is a non-zero monopole class $\frak{a} \in H^2(X, {\mathbb R})-\{0\}$, then $X$ cannot admit a Riemannian metric $g$ of scalar curvature $s_{g} \geq 0$. \end{prop} On the other hand, it is also known that the existence of monopole classes implies the following family of integral inequalities: \begin{thm}[\cite{leb-17}]\label{beta-ine-key} Suppose that $X$ is a closed oriented smooth 4-manifold with $b^+(X) \geq 2$.
Then any Riemannian metric $g$ on $X$ satisfies the following curvature estimates: \begin{eqnarray*} {\int}_{X}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}\beta^2(X), \end{eqnarray*} \begin{eqnarray*} {\int}_{X}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 72{\pi}^{2}\beta^2(X), \end{eqnarray*} where $s_{g}$ and $W^{+}_{g}$ denote respectively the scalar curvature and the self-dual Weyl curvature of $g$. If $X$ has a non-zero monopole class, then, moreover, equality occurs in either the first or the second estimate if and only if $g$ is a K{\"{a}}hler-Einstein metric with negative scalar curvature. \end{thm} Notice that if $X$ has no monopole class, we set $\beta^2(X):=0$ (see Definition \ref{beta} above). On the other hand, notice also that the left-hand side of these two curvature estimates in Theorem \ref{beta-ine-key} is always non-negative. Therefore the result of Theorem \ref{beta-ine-key} holds trivially when $X$ has no monopole class. Thus, the main problem is to detect the existence of monopole classes. It is known that the non-triviality of the stable cohomotopy Seiberg-Witten invariant implies the existence of monopole classes: \begin{prop}[\cite{ishi-leb-2}]\label{BF-mono} Let $X$ be a closed oriented smooth 4-manifold with ${b}^+(X) \geq 2$ and a spin$^c$ structure $\Gamma_{X}$. Suppose that $BF_{X}(\Gamma_{X})$ is non-trivial. Then ${c}^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}})$ is a monopole class. \end{prop} On the other hand, it is also known that Bauer's connected sum formula \cite{b-1} implies \begin{prop}[\cite{ishi-leb-2, b}]\label{BF-mono-ne} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$ and a spin$^c$ structure $\Gamma_{X}$. Suppose that $BF_{X}(\Gamma_{X})$ is non-trivial. Let $N$ be a closed oriented smooth 4-manifold with ${b}^+(N)=0$ and let $\Gamma_{N}$ be any spin${}^c$ structure on $N$ with ${c}^{2}_{1}({\cal L}_{\Gamma_{N}})=-b_{2}(N)$. Then the stable cohomotopy Seiberg-Witten invariant is also non-trivial for the spin${}^c$ structure $\Gamma_{X} \# \Gamma_{N}$ on the connected sum $X \# N$. \end{prop} Theorem \ref{thm-A} with Propositions \ref{BF-mono} and \ref{BF-mono-ne} tells us that the following holds (cf. Proposition 10 in \cite{ishi-leb-2}): \begin{thm}\label{con-mono} Let ${X}_{m}$ be as in Theorem \ref{thm-A} and suppose that $N$ is a closed oriented smooth 4-manifold with $b^{+}(N)=0$ and let $E_{1}, E_{2}, \cdots, E_{k}$ be a set of generators for $H^2(N, {\mathbb Z})$/torsion relative to which the intersection form is diagonal. (We can take such generators by Donaldson's theorem \cite{d}.) Then, for any $n=2,3$, \begin{eqnarray}\label{mono-cone} \sum^{n}_{m=1} \pm {c}_{1}(X_{m}) + \sum^{k}_{r=1} \pm{E}_{r} \end{eqnarray} is a monopole class of $M:=\Big(\#^{n}_{m=1}{X}_{m} \Big) \# N$, where ${c}_{1}(X_{m})$ is the first Chern class of the canonical bundle of the almost-complex 4-manifold $X_{m}$ and the $\pm$ signs are arbitrary, and are independent of one another. \end{thm} As a corollary of Theorem \ref{con-mono}, we obtain \begin{cor}\label{mono-cor} Let ${X}_{m}$, $N$ and $M$ be as in Theorem \ref{con-mono} above. Then, for any $n=2,3$, \begin{eqnarray}\label{monopole-123446} \beta^2(M) \geq \sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray} \end{cor} \begin{proof} First of all, by the very definition, we have \begin{eqnarray*} {\beta}^2(M):= \max \{ {\cal Q}(x):=x^2 \ | \ x \in {\bf{Hull}}({\frak C}(M)) \}.
\end{eqnarray*} On the other hand, by (\ref{mono-cone}), we particularly have the following two monopole classes of $M$: \begin{eqnarray*} {\frak{a}}_{1}:=\sum^{n}_{m=1} {c}_{1}(X_{m}) + \sum^{k}_{r=1} {E}_{r}, \ {\frak{a}}_{2}:=\sum^{n}_{m=1} {c}_{1}(X_{m}) - \sum^{k}_{r=1} {E}_{r}. \end{eqnarray*} By (\ref{hull}), we are able to conclude that \begin{eqnarray*} \sum^{n}_{m=1} {c}_{1}(X_{m})= \frac{1}{2}{\frak{a}}_{1}+\frac{1}{2}{\frak{a}}_{2} \in {\bf{Hull}}({\frak C}(M)). \end{eqnarray*} We therefore obtain the desired bound: \begin{eqnarray*} {\beta}^2(M) \geq \Big( \sum^{n}_{m=1} {c}_{1}(X_{m})\Big)^2=\sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray*} \end{proof} Theorem \ref{beta-ine-key} and Corollary \ref{mono-cor} imply an important result for our purpose: \begin{thm}\label{mono-key-bounds} Let ${X}_{m}$ be as in Theorem \ref{thm-A} and suppose that $N$ is a closed oriented smooth 4-manifold with $b^{+}(N)=0$. Consider a connected sum $M:=\Big(\#^{n}_{m=1}{X}_{m} \Big) \# N$, where $n=2,3$. Then any Riemannian metric $g$ on $M$ satisfies the following curvature estimates: \begin{eqnarray}\label{weyl-leb-sca-1} {\int}_{M}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}), \end{eqnarray} \begin{eqnarray}\label{weyl-leb-sca-2} {\int}_{M}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 72{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray} \end{thm} On the other hand, recall that, for any closed oriented Riemannian 4-manifold $(X, g)$, we have the following Gauss-Bonnet-type formula \cite{hit, be, thor}: \begin{eqnarray}\label{ein-gauss} 2\chi(X)+3\tau(X)=\frac{1}{4\pi^{2}}\int_{X}\Big(2|W^{+}_{g}|^{2} + \frac{s^{2}_{g}}{24} - \frac{| \stackrel {\circ}{r}_{g} |^2}{2}\Big)d{\mu}_{g}, \end{eqnarray} where $W^{+}_{g}$ is the self-dual part of the Weyl curvature of $g$ and $\stackrel {\circ}{r}_{g}$ is the trace-free part of the Ricci curvature $r_{g}$ of $g$. We also have \begin{eqnarray*} {\int}_{X}|r_{g}|^2 d{\mu}_{g} = {\int}_{X} \Big(\frac{s^{2}_{g}}{4} + {|\stackrel {\circ}{r}_{g}|^2} \Big)d{\mu}_{g}. \end{eqnarray*} We therefore have the following equality for any Riemannian metric $g$ on $X$: \begin{eqnarray}\label{r-w-1} {\int}_{X}|r_{g}|^2 d{\mu}_{g} = {\int}_{X} \Big(\frac{s^{2}_{g}}{3} + 4|W^{+}_{g}|^{2} \Big)d{\mu}_{g} - 8{\pi}^2 \Big( 2\chi(X) + 3\tau(X) \Big). \end{eqnarray} On the other hand, the Cauchy-Schwarz and triangle inequalities \cite{leb-11} tell us that the following inequality holds: \begin{eqnarray}\label{r-w-2} {\int}_{X} \Big(\frac{s^{2}_{g}}{3} + 4|W^{+}_{g}|^{2} \Big)d{\mu}_{g} \geq \frac{2}{9}{\int}_{X}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g}. \end{eqnarray} By using (\ref{weyl-leb-sca-2}), (\ref{r-w-1}) and (\ref{r-w-2}), we are able to prove \begin{thm}\label{bf-ricci} Let ${X}_{m}$ be as in Theorem \ref{thm-A} and suppose that $N$ is a closed oriented smooth 4-manifold with $b^{+}(N)=0$. Consider a connected sum $M:=\Big(\#^{n}_{m=1}{X}_{m} \Big) \# N$, where $n=2,3$. Then any Riemannian metric $g$ on $M$ satisfies \begin{eqnarray}\label{r-w-3} {\int}_{M}|r_{g}|^2 d{\mu}_{g} \geq 8{\pi}^{2} \Big[4n-\Big(2\chi(N)+3\tau(N)\Big)+\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big]. \end{eqnarray} \end{thm} \begin{proof} A direct computation tells us that \begin{eqnarray*} 2\chi(M)+3\tau(M) &=& -4{n}+\Big( 2\chi(N)+3\tau(N) \Big)+\sum_{m=1}^{n}\Big( 2\chi(X_{m})+3\tau(X_{m}) \Big) \\ &=& -4{n}+\Big( 2\chi(N)+3\tau(N) \Big)+\sum_{m=1}^{n}{c}^2_{1}(X_{m}).
\end{eqnarray*} This formula, (\ref{r-w-1}) and (\ref{r-w-2}) imply \begin{eqnarray*} {\int}_{M}|r_{g}|^2 d{\mu}_{g} &\geq& \frac{2}{9}{\int}_{M}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \\ &-& 8{\pi}^2\Big[-4{n}+\Big( 2\chi(N)+3\tau(N) \Big)+\sum_{m=1}^{n}{c}^2_{1}(X_{m}) \Big]. \end{eqnarray*} By this bound and (\ref{weyl-leb-sca-2}), we have \begin{eqnarray*} {\int}_{M}|r_{g}|^2 d{\mu}_{g} &\geq& \frac{2}{9} \Big(72{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big) \\ &+& 8{\pi}^2\Big[4{n}-\Big( 2\chi(N)+3\tau(N) \Big)-\sum_{m=1}^{n}{c}^2_{1}(X_{m}) \Big]. \end{eqnarray*} This immediately implies the desired bound. \end{proof} \subsection{Computation of several differential geometric invariants}\label{4.2} In this subsection, we shall compute the values of several differential geometric invariants. The main results in this subsection are Theorems \ref{compu-scalar-invariant} and \ref{compu-Ricci-invariant} stated below. \par As one of the interesting differential geometric invariants, there exists a natural diffeomorphism invariant arising from a variational problem for the total scalar curvature of Riemannian metrics on a closed oriented manifold $X$ of dimension $n\geq 3$. As was conjectured by Yamabe \cite{yam}, and later proved by Trudinger, Aubin, and Schoen \cite{aubyam,lp,rick,trud}, every conformal class on a smooth compact manifold contains a Riemannian metric of constant scalar curvature. Hence, for each conformal class $[g]=\{ vg ~|~v: X\to {\Bbb R}^+\}$, we are able to consider an associated number $Y_{[g]}$, which is the so-called Yamabe constant of the conformal class $[g]$ and is defined by \begin{eqnarray*} Y_{[g]} = \inf_{h \in [g]} \frac{\int_X s_{{h}}~d\mu_{{h}}}{\left(\int_X d\mu_{{h}}\right)^{\frac{n-2}{n}}}, \end{eqnarray*} where $s_{h}$ is the scalar curvature of the metric $h$ and $d\mu_{{h}}$ is the volume form with respect to the metric $h$. The Trudinger-Aubin-Schoen theorem tells us that this number is actually realized as the constant scalar curvature of some unit volume metric in the conformal class $[g]$. Then, Kobayashi \cite{kob} and Schoen \cite{sch} independently introduced the following interesting invariant of $X$: \begin{eqnarray*} {\mathcal Y}(X) = \sup_{[g] \in \mathcal{C}}Y_{[g]}, \end{eqnarray*} where $\mathcal{C}$ is the set of all conformal classes on $X$. This is now commonly known as the Yamabe invariant of $X$. It is known that ${\mathcal Y}(X) \leq 0$ if and only if $X$ does not admit a metric of positive scalar curvature. \par The Yamabe invariant ${\mathcal Y}(X)$ is closely related to the diffeomorphism invariant defined \cite{BCG, leb-11} by \begin{eqnarray}\label{scalar-yama-def} {\mathcal I}_{s}(X):=\inf_{g \in {\mathcal R}_{X}} {\int}_{X}|s_{g}|^{n/2}d{\mu}_{g}, \end{eqnarray} where the space of all Riemannian metrics on $X$ is denoted by ${\cal R}_{X}$. It is known that the invariant ${\mathcal I}_{s}$ vanishes for every simply connected $n$-manifold with $n \geq 5$. Moreover, for every closed $n$-manifold with $n \geq 3$ admitting non-negative scalar curvature, i.e., ${\mathcal Y}(X) \geq 0$, we have the following \cite{leb-1}: \begin{eqnarray}\label{vani-scal} {\mathcal I}_{s}(X)=0. \end{eqnarray} On the other hand, Proposition 12 in \cite{ishi-leb-2} tells us that the following equality holds whenever ${\mathcal Y}(X) \leq 0$: \begin{eqnarray}\label{s-y-1} {\mathcal I}_{s}(X)=|{\mathcal Y}(X)|^{n/2}.
\end{eqnarray} Hence, the invariant ${\mathcal I}_{s}(X)$ of a closed 4-manifold $X$ with ${\mathcal Y}(X) \leq 0$ is just \begin{eqnarray}\label{scalar-yama} {\mathcal I}_{s}(X)=|{\mathcal Y}(X)|^2=\inf_{g \in {\mathcal R}_{X}} {\int}_{X}s^{2}_{g}d{\mu}_{g}. \end{eqnarray} On the other hand, consider the following quantity: \begin{eqnarray}\label{scalar-yama-def-2} {\mathcal K}(X):=\sup_{g \in {\mathcal R}_{X}}\Big( (\min_{x \in X}{s}_{g})(vol_{g})^{2/n} \Big), \end{eqnarray} where $vol_{g}={\int}_{X}d{\mu}_{g}$ is the total volume with respect to $g$. Kobayashi \cite{kob} pointed out that the following equality holds whenever ${\mathcal Y}(X) \leq 0$: \begin{eqnarray}\label{scalar-yama-3} {\mathcal K}(X)={\mathcal Y}(X). \end{eqnarray} It is now clear that the scalar curvature bound (\ref{weyl-leb-sca-1}) in Theorem \ref{mono-key-bounds}, (\ref{scalar-yama}) and (\ref{scalar-yama-3}) imply \begin{prop}\label{prop-scal} Let ${X}_{m}$, $N$ and $M$ be as in Theorem \ref{mono-key-bounds}. Then, for $n=2,3$, \begin{eqnarray*} {\mathcal I}_{s}(M)=|{\mathcal Y}(M)|^2=|{\mathcal K}(M)|^2 \geq {32}{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray*} \end{prop} \begin{proof} Notice that there is nothing to prove when $\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \leq 0$. Hence we may assume that $\sum^{n}_{m=1}{c}^2_{1}(X_{m}) >0$. Then the connected sum $M$ has non-zero monopole classes by Theorem \ref{con-mono}. This fact and Proposition \ref{beta-ine-key-0} force that the connected sum $M$ cannot admit any Riemannian metric $g$ of scalar curvature $s_{g} \geq 0$. Thanks to a result of Kobayashi \cite{kob}, it is known that a closed $n$-manifold $X$ with $n \geq 3$ has ${\mathcal Y}(X)>0$ if and only if $X$ admits a metric of positive scalar curvature. Hence, we are able to conclude that the connected sum $M$ in question must satisfy ${\mathcal Y}(M) \leq 0$. This fact, (\ref{weyl-leb-sca-1}) in Theorem \ref{mono-key-bounds}, (\ref{scalar-yama}) and (\ref{scalar-yama-3}) imply the desired result. \end{proof} On the other hand, Proposition 13 in \cite{ishi-leb-2} tells us that \begin{eqnarray}\label{bound-conn} {\mathcal I}_{s}(X \# Y) \leq {\mathcal I}_{s}(X) + {\mathcal I}_{s}(Y), \end{eqnarray} where $X$ and $Y$ are any closed smooth manifolds of dimension $n \geq 3$. Proposition \ref{prop-scal}, (\ref{vani-scal}) and (\ref{bound-conn}) imply the following result which can be seen as a generalization of both Theorems A and B in \cite{ishi-leb-2} to the case where $b_{1} \not=0$: \begin{main}\label{compu-scalar-invariant} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$ and with a Riemannian metric of non-negative scalar curvature. For $m=1,2,3$, let $X_m$ be a minimal K{\"{a}}hler surface with ${b}^{+}(X_m)>1$ and satisfying \begin{eqnarray*} {b}^{+}(X_{m})-{b}_{1}(X_{m}) \equiv 3 \ (\bmod \ 4). \end{eqnarray*} Let $\Gamma_{X_{m}}$ be a spin${}^{c}$ structure on $X_m$ which is induced by the K{\"{a}}hler structure. Under Definition \ref{def-1}, moreover assume that the following condition holds for each $m$: \begin{eqnarray*} \frak{S}^{ij}(\Gamma_{X_{m}}) \equiv 0 \bmod 2 & \text{for all $i, j$}. \end{eqnarray*} Then, for $n=2,3$, a connected sum $M:=(\#^{n}_{m=1}{X}_{m} ) \# N$ satisfies \begin{eqnarray}\label{scalar-com} {\mathcal I}_{s}(M) = |{\mathcal Y}(M)|^2 = |{\mathcal K}(M)|^2 = {32}{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray} In particular, the Yamabe invariant of $M$ is given by \begin{eqnarray*} {\mathcal Y}(M)={-4{\pi}}\sqrt{2\sum^n_{m=1}c^2_{1}(X_{m})}.
\end{eqnarray*} \end{main} \begin{proof} First of all, notice that we have \begin{eqnarray}\label{n-vani} {\mathcal I}_{s}(N)=0 \end{eqnarray} by the assumption that $N$ admits a metric of non-negative scalar curvature and (\ref{vani-scal}). Moreover, LeBrun \cite{leb-44, leb-1} showed that, for any minimal compact K{\"{a}}hler surface $X$ with ${b}^{+}(X)>1$, the following holds: \begin{eqnarray*} {\mathcal I}_{s}(X)= |{\mathcal Y}(X)|^2 = |{\mathcal K}(X)|^2 = {32}{\pi}^{2}{c}^2_{1}(X). \end{eqnarray*} This fact with the bounds (\ref{n-vani}) and (\ref{bound-conn}) implies that \begin{eqnarray*} {\mathcal I}_{s}(M)=|{\mathcal Y}(M)|^2 = |{\mathcal K}(M)|^2 \leq {32}{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}). \end{eqnarray*} Proposition \ref{prop-scal} with this bound tells us that the desired equality holds as promised. \end{proof} On the other hand, instead of scalar curvature, it is natural to consider the following Ricci curvature version of (\ref{scalar-yama-def}): \begin{eqnarray}\label{ricci-inv} {\mathcal I}_{r}(X):=\inf_{g \in {\mathcal R}_{X}} {\int}_{X}|r_{g}|^{n/2}d{\mu}_{g}. \end{eqnarray} Here $r_{g}$ is again the Ricci curvature of $g$. It is known \cite{leb-11} that there is the following relation between (\ref{scalar-yama-def}) and (\ref{ricci-inv}): \begin{eqnarray}\label{ricci-scalar} {\mathcal I}_{r}(X) \geq {n}^{-n/4}{\mathcal I}_{s}(X), \end{eqnarray} and that equality holds if the Yamabe invariant is both non-positive and realized by an Einstein metric. The failure of the equality gives a quantitative obstruction to Yamabe's program for finding Einstein metrics. Therefore, it is quite interesting to investigate when the above inequality (\ref{ricci-scalar}) becomes strict. \par By using Theorem \ref{bf-ricci} and the same method as in the proof of Theorem C in \cite{ishi-leb-2}, we are able to obtain the following interesting result: \begin{main}\label{compu-Ricci-invariant} Let $N$ be a closed oriented smooth 4-manifold with an anti-self-dual metric of positive scalar curvature. For $m=1,2,3$, let $X_m$ be a minimal K{\"{a}}hler surface as in Theorem \ref{compu-scalar-invariant}. Then, for any $n=2,3$, a connected sum $M:=(\#^{n}_{m=1}{X}_{m} ) \# N$ satisfies \begin{eqnarray}\label{ricci-com} {\mathcal I}_{r}(M) = 8{\pi}^{2} \Big[4n-\Big( 2\chi(N)+3\tau(N) \Big)+\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big]. \end{eqnarray} \end{main} We leave the details of the proof of Theorem \ref{compu-Ricci-invariant} as an exercise for the interested reader: use Theorem \ref{bf-ricci} and the strategy of the proof of Theorem C in \cite{ishi-leb-2}. We also notice that Theorem \ref{compu-Ricci-invariant} in the case where $b_{1}({X}_{m}) \not=0$ never follows from Theorem C in \cite{ishi-leb-2}. \par The above hypotheses regarding $N$ and Proposition 1 in \cite{leb-topology} force $b^{+}(N)=0$. Hence we have \[ 2\chi(N) + 3 \tau(N) = 4 - 4 b_1(N) + 5b^+(N) - b^- (N)=4 - 4b_1(N) - b^-(N) \leq 4. \] By this fact, (\ref{scalar-com}) and (\ref{ricci-com}), we are able to conclude that the strict inequality holds whenever $n=2,3$: \begin{eqnarray*} {\mathcal I}_{r}(M)> \frac{1}{4}{\mathcal I}_{s}(M). \end{eqnarray*} Hence the Yamabe sup-inf on these connected sums is never realized by an Einstein metric.
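For the reader's convenience, the strictness can be spelled out directly from the displayed formulas: combining (\ref{ricci-com}), the bound $2\chi(N)+3\tau(N) \leq 4$ just obtained, and (\ref{scalar-com}), we have
\begin{eqnarray*}
{\mathcal I}_{r}(M) &=& 8{\pi}^{2} \Big[4n-\Big( 2\chi(N)+3\tau(N) \Big)+\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big] \\
&\geq& 8{\pi}^{2} \Big[4(n-1)+\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big] \ > \ 8{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \ = \ \frac{1}{4}{\mathcal I}_{s}(M),
\end{eqnarray*}
since $n-1 \geq 1$ for $n=2,3$.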
\par On the other hand, since the connected sum $k \overline{{\mathbb C}{P}}^2 \# {\ell} (S^{1} \times S^{3})$ admits anti-self-dual metrics of positive scalar curvature \cite{kim, leb-self}, Theorem \ref{compu-Ricci-invariant} particularly implies \begin{cor} Let $X_m$ be a minimal K{\"{a}}hler surface as in Theorem \ref{compu-scalar-invariant}. Then, for any $n=2,3$, and any integers $k, \ell \geq 0$, \begin{eqnarray*} {\mathcal I}_{r}\Big((\#^{n}_{m=1}{X}_{m}) \# k \overline{{\mathbb C}{P}}^2 \# {\ell} (S^{1} \times S^{3}) \Big) = 8{\pi}^{2} \Big[k + 4(n + \ell -1)+\sum^{n}_{m=1}{c}^2_{1}(X_{m}) \Big]. \end{eqnarray*} \end{cor} \subsection{Invariant arising from a variant of Perelman's ${\mathcal F}$-functional}\label{sub-43} The main results of this subsection are Theorem \ref{cor-perel-yama} and Theorem \ref{compu-scalar-pre-invariant} below. Theorem \ref{compu-scalar-pre-invariant} is nothing but Theorem \ref{main-CCC} stated in the Introduction. \par Let us start with recalling the definition of Perelman's ${\mathcal F}$-functional \cite{p-1, p-2, lott}. Let $X$ be a closed oriented Riemannian manifold of dimension $n \geq 3$ and $g$ be any Riemannian metric on $X$. We shall denote the space of all Riemannian metrics on $X$ by ${\cal R}_{X}$ and the space of all $C^{\infty}$ functions on $X$ by $C^{\infty}(X)$. Then, the ${\mathcal F}$-functional which was introduced by Perelman \cite{p-1} is the following functional ${\mathcal F} : {\cal R}_{X} \times C^{\infty}(X) \rightarrow {\mathbb R}$ defined by \begin{eqnarray}\label{f-functional} {\cal F}(g, f):={\int}_{X}({s}_{g} + |{\nabla}f|^{2}){e}^{-f} d\mu_{g}, \end{eqnarray} where $f \in C^{\infty}(X)$, ${s}_{g}$ is the scalar curvature and $d\mu_{g}$ is the volume measure with respect to $g$. One of the fundamental discoveries of Perelman is that the Ricci flow can be viewed as the gradient flow of the ${\cal F}$-functional. Moreover the ${\cal F}$-functional is nondecreasing under the following coupled version of the Ricci flow: \begin{eqnarray}\label{c-Ricci} \frac{\partial}{\partial t}{g}=-2Ric_{g}, \ \frac{\partial}{\partial t}f=-\Delta f - s + |\nabla f|^2, \end{eqnarray} where $Ric_{g}$ is the Ricci curvature and $s$ is the scalar curvature of the evolving metric. It is then known that, for a given metric $g$, there exists a unique minimizer of the ${\cal F}$-functional under the constraint ${\int}_{X}{e}^{-f} d\mu_{g} =1$. Hence it is natural to consider the following functional ${{\lambda}} : {\cal R}_{X} \rightarrow {\mathbb R}$, which is the so-called Perelman $\lambda$-functional: \begin{eqnarray*} {\lambda}(g):=\inf_{f} \ \{ {\cal F}(g, f) \ | \ {\int}_{X}{e}^{-f} d\mu_{g} =1 \}. \end{eqnarray*} It turns out that ${\lambda}(g)$ is nothing but the least eigenvalue of the elliptic operator $4 \Delta_g+s_g$, where $\Delta = d^*d= - \nabla\cdot\nabla $ is the positive-spectrum Laplace-Beltrami operator associated with $g$. The monotonicity of the ${\cal F}$-functional implies that of the $\lambda$-functional. This also has a fundamental importance. In fact, Perelman used this fact to prove the non-existence of non-trivial steady and expanding Ricci breathers. Now, following Perelman, let us consider the scale-invariant quantity $\lambda(g) (vol_g)^{2/n}$, where $vol_g:={\int}_{X}d\mu_{g}$. Then let us recall \begin{defn}[\cite{p-1, p-2, lott}]\label{pere-inv} Perelman's $\bar{\lambda}$ invariant of $X$ is defined to be \begin{eqnarray*}\label{p-inv} \bar{\lambda}(X)= \sup_{g \in {\cal R}_{X}} \lambda(g) (vol_g)^{2/n}.
\end{eqnarray*} \end{defn} It turns out that, for any $X$ which does not admit a positive scalar curvature metric, $\bar{\lambda}(X)={\mathcal Y}(X)$ always holds \cite{A-ishi-leb-3}, where ${\mathcal Y}(X)$ is the Yamabe invariant of $X$. \par In this subsection, inspired by recent interesting works of Cao \cite{cao-X} and Li \cite{li}, we would like to introduce a one-parameter family $\bar{\lambda}_{k}$ of smooth invariants, where $k \in {\mathbb R}$. We shall call this the $\bar{\lambda}_{k}$ invariant. In particular, the $\bar{\lambda}_{k}$ invariant includes Perelman's $\bar{\lambda}$ invariant as a special case. Indeed, $\bar{\lambda}_{1}=\bar{\lambda}$ holds as we shall see below. \par We shall start with introducing the following definition which is essentially due to Li \cite{li}. The definition in the case where $k \geq 1$ is nothing but Definition 41 in \cite{li}. We notice that the following definition also appeared as equality (8) in \cite{o-s-w}: \begin{defn}[\cite{li, o-s-w}] Let $X$ be a closed oriented Riemannian manifold of dimension $\geq 3$. Then, we define the following variant ${\mathcal F}_{k} : {\cal R}_{X} \times C^{\infty}(X) \rightarrow {\mathbb R}$ of Perelman's $\mathcal F$-functional: \begin{eqnarray}\label{li-pere} {\mathcal F}_{k}(g, f):={\int}_{X}\Big(k{s}_{g}+|\nabla f|^2 \Big){e}^{-f} d{\mu}_{g}, \end{eqnarray} where $k$ is a real number $k \in {\mathbb R}$. We shall call this the ${\mathcal F}_{k}$-functional. \end{defn} Notice that the ${\mathcal F}_{1}$-functional is nothing but Perelman's ${\mathcal F}$-functional (\ref{f-functional}). Li \cite{li} showed that all functionals ${\mathcal F}_{k}$ with $k \geq 1$ have the monotonicity property under the coupled system (\ref{c-Ricci}). \begin{rem} It is not clear, at least to the present authors, whether these ${\mathcal F}_{k}$-functionals have the monotonicity property under the coupled system (\ref{c-Ricci}) in the case where $k<1$. In fact, the proof of Li \cite{li} breaks down in the case where $k<1$. See the proof of Theorem 42 in \cite{li}. \end{rem} As was essentially already mentioned in \cite{li, lott}, for a given metric $g$ and $k \in {\mathbb R}$, there exists a unique minimizer of the ${\cal F}_{k}$-functional under the constraint ${\int}_{X}{e}^{-f} d\mu_{g} =1$. In fact, by using a direct method of the elliptic regularity theory \cite{g-t}, one can see that the following infimum is always attained: \begin{eqnarray*} {{\lambda}}(g)_{k}:=\inf_{f} \ \{ {\cal F}_{k}(g, f) \ | \ {\int}_{X}{e}^{-f} d\mu_{g} =1 \}. \end{eqnarray*} Notice that $\lambda(g)_k$ is nothing but the least eigenvalue of the elliptic operator $4 \Delta_g+ks_g$. It is then natural to introduce the following quantity: \begin{defn} For any real number $k \in {\mathbb R}$, the $\bar{\lambda}_{k}$ invariant of $X$ is defined to be \begin{eqnarray*} \bar{\lambda}_{k}(X)= \sup_{g \in {\cal R}_{X}}\lambda(g)_{k} (vol_g)^{2/n}. \end{eqnarray*} \end{defn} It is clear that $\bar{\lambda}_{1}=\bar{\lambda}$ holds. The $\bar{\lambda}_{k}$ invariant is also closely related to the Yamabe invariant. Indeed, we shall prove the following result which can be seen as a generalization of Theorem A proved in \cite{A-ishi-leb-3}: \begin{prop}\label{lambda-k-inv} Suppose that $X$ is a smooth closed $n$-manifold, $n \geq 3$.
Then the following holds: $$\bar{\lambda}_{k}(X) = \begin{cases} k{\mathcal Y}(X) & \text{ if } {\mathcal Y}(X) \leq 0 \text{ and } k \geq \frac{n-2}{n-1}, \\ +\infty & \text{ if } {\mathcal Y}(X) > 0 \text{ and } k > 0. \end{cases} $$ \end{prop} Let us include the proof of Proposition \ref{lambda-k-inv} for completeness and for the reader's convenience. \par Suppose now that $X$ is a closed oriented Riemannian manifold of dimension $n \geq 3$, and moreover that $\gamma:=[g]=\{ ug ~|~u: X \to {\Bbb R}^+\}$ is the conformal class of an arbitrary metric $g$. As was already mentioned, Trudinger, Aubin, and Schoen \cite{aubyam,lp,rick,trud} proved that every conformal class on $X$ contains a Riemannian metric of constant scalar curvature. Such a metric $\hat{g}$ can be constructed by minimizing the Einstein-Hilbert functional: $$ \hat{g}\mapsto \frac{\int_X s_{\hat{g}}~d\mu_{\hat{g}}}{\left(\int_X d\mu_{\hat{g}}\right)^{\frac{n-2}{n}}}, $$ among all metrics conformal to $g$. Notice that, by setting $\hat{g} = u^{4/(n-2)}g$, the following identity holds: \begin{eqnarray*} \frac{\int_X s_{\hat{g}}~d\mu_{\hat{g}}}{\left(\int_X d\mu_{\hat{g}}\right)^{\frac{n-2}{n}}}= \frac{\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g}{\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}. \end{eqnarray*} Associated to each conformal class $\gamma:=[g]$, we are also able to define the Yamabe constant of the conformal class $\gamma$ in the following way: \begin{eqnarray}\label{yama-def-0} Y_{\gamma} = \inf_{u \in {C}^{\infty}_{+}(X)}\frac{\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g}{\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}, \end{eqnarray} where ${C}^{\infty}_{+}(X)$ is the set of all positive functions $u: X \to {\Bbb R}^+$. The Trudinger-Aubin-Schoen theorem tells us that this number is actually realized as the constant scalar curvature of some unit-volume metric in the conformal class $\gamma$. A constant-scalar-curvature metric of this type is called a Yamabe minimizer. Again, the Yamabe invariant \cite{kob, sch} of $X$ is then given by \begin{eqnarray}\label{yama-def-1} {\mathcal Y}(X) = \sup_{\gamma \in \mathcal{C}} Y_{\gamma}, \end{eqnarray} where $\mathcal{C}$ is the set of all conformal classes on $X$. \par We are now in a position to prove the following lemma, which we shall use to prove Proposition \ref{lambda-k-inv}: \begin{lem} \label{yupyup} Suppose that $\gamma$ is a conformal class on a closed oriented Riemannian manifold $X$ of dimension $n \geq 3$, which does not contain a metric of positive scalar curvature, i.e., ${Y}_\gamma \leq 0$. Then \begin{eqnarray} Y_\gamma = \frac{1}{k} \Big(\sup_{g\in \gamma} \lambda(g)_k (vol_{g})^{2/n} \Big), \end{eqnarray} where $k$ is a real number satisfying $k \geq \frac{n-2}{n-1}$. \end{lem} \begin{proof} Let $g\in \gamma$, and let $\hat{g}= u^{4/(n-2)}g$ be the Yamabe minimizer in $\gamma$. By (\ref{yama-def-0}) and the hypothesis that $Y_\gamma \leq 0$, we have \begin{eqnarray*} 0 \geq Y_\gamma = \frac{\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g}{\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}. \end{eqnarray*} Namely, \begin{eqnarray}\label{43-1} 0 \geq {\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g} = Y_\gamma {\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}.
\end{eqnarray} On the other hand, the eigenvalue $\lambda(g)_k$ can be expressed in terms of the Rayleigh quotient as \begin{eqnarray*} \lambda(g)_k = \inf_{{u \in {C}^{\infty}_{+}(X)}} \frac{\int_{X} \left[k s_{g}u^2 + 4|\nabla u|^2 \right]d\mu_{g}}{\int_{X} u^{2}d\mu_{g}}. \end{eqnarray*} Thus \begin{eqnarray*} \lambda(g)_{k} \int_{X} u^2 d\mu_g &\leq& \int_{X} \left[k s_{g}u^2 + 4|\nabla u|^2 \right]d\mu_g = k \Big(\int_{X} \left[s_{g}u^2 + 4\frac{1}{k}|\nabla u|^2 \right]d\mu_g \Big) \\ &\leq& k \Big( \int_{X} \left[ s_{g}u^2 + 4\frac{n-1}{n-2} |\nabla u|^2 \right]d\mu_{g} \Big), \end{eqnarray*} where we used the hypothesis that $k \geq \frac{n-2}{n-1}$, i.e., $\frac{1}{k} \leq \frac{n-1}{n-2}$. This bound and (\ref{43-1}) tell us that \begin{eqnarray*} \lambda(g)_{k} \int_{X} u^2 d\mu_g &\leq& k Y_\gamma \left(\int_{X} u^{2n/(n-2)}d\mu_{g} \right)^{(n-2)/n} \\ &\leq& k Y_\gamma (vol_{g})^{-2/n} \int_{X} u^2 d\mu_{g}, \end{eqnarray*} where we notice that, since $Y_\gamma\leq 0$, the last step is an application of the H\"older inequality $$\int f_1f_2 ~d\mu \leq \left(\int |f_1|^pd\mu \right)^{1/p} \left(\int |f_2|^qd\mu \right)^{1/q}, ~~~\frac{1}{p}+ \frac{1}{q}=1,$$ with $f_1=1$, $f_2=u^2$, $p= n/2$, and $q=n/(n-2)$. Moreover, equality holds precisely when $u$ is constant, namely, precisely when $g$ has constant scalar curvature. Since we have shown that \begin{eqnarray*} \frac{1}{k} \lambda(g)_{k} (vol_{g})^{2/n} \leq Y_\gamma \end{eqnarray*} for every $g\in \gamma$, and since equality occurs if $g$ is the Yamabe minimizer, it follows that \begin{eqnarray*} Y_\gamma = \frac{1}{k} \Big(\sup_{g\in \gamma} \lambda(g)_k (vol_{g})^{2/n} \Big). \end{eqnarray*} \end{proof} The proof of Lemma \ref{yupyup} tells us that, under ${\mathcal Y}(X) \leq 0$ and for any real number $k \geq \frac{n-2}{n-1}$, each constant scalar curvature metric maximizes $\frac{1}{k}\lambda_k (vol)^{2/n}$ in its conformal class. Given any maximizing sequence $\hat{g_{i}}$ for $\frac{1}{k}\lambda_k (vol)^{2/n}$, we may construct a new maximizing sequence ${g_{i}}$ consisting of unit volume constant scalar curvature metrics by conformal rescaling. For any such sequence, the constant scalar curvature $s_{g_{i}}$ can be viewed either as ${Y}_{[g_{i}]}$ or as $\frac{1}{k}\lambda({g_{i}})_{k} (vol_{g_{i}})^{2/n}$. Therefore, we are able to conclude that the suprema of ${Y}_{[g]}$ and of $\frac{1}{k}\lambda(g)_{k} (vol_{g})^{2/n}$ over the space of all Riemannian metrics must coincide, namely, ${\mathcal Y}(X) = \frac{1}{k}\bar{\lambda}_{k}(X)$ must hold in this case, i.e., $k{\mathcal Y}(X) = \bar{\lambda}_{k}(X)$. Therefore, it is enough to prove the following lemma in order to prove Proposition \ref{lambda-k-inv}: \begin{lem} \label{aha} If ${\mathcal Y}(X)>0$, then $\bar{\lambda}_{k}(X) = +\infty$ for any positive real number $k > 0$. \end{lem} \begin{proof} Given such a manifold $X$ with ${\mathcal Y}(X)>0$ and any smooth non-constant function $f: X\to \mathbb R$, Kobayashi \cite{kob} has shown that there exists a unit-volume metric $g$ on $X$ with $s_{g}=f$. The claim of this lemma follows from this result of Kobayashi. First of all, for any sufficiently large positive constant $L$, take a smooth non-constant function $f: X\to \mathbb R$ such that $\min_{x}f \geq L$. Then the above result of Kobayashi tells us that there is a metric $g$ on $X$ with $s_{g}=f$ and $vol_{g}=1$. Notice that $\min_{x}s_{g}=\min_{x}f \geq L$ holds.
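(Such a function $f$ is easy to produce; for instance, one may take $f = L + h$, where $h$ is any smooth non-constant function on $X$ with $h \geq 0$ and $\min_{x}h=0$, so that $f$ is non-constant and $\min_{x}f = L$.)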
For this metric $g$, the eigenvalue $\lambda(g)_k$ can be expressed in terms of the Rayleigh quotient as \begin{eqnarray}\label{Raleigh} \lambda(g)_k = \inf_{{u \in {C}^{\infty}_{+}(X)}} \frac{\int_{X} \left[k s_{g}u^2 + 4|\nabla u|^2 \right]d\mu_{g}}{\int_{X} u^{2}d\mu_{g}}. \end{eqnarray} On the other hand, we have \begin{eqnarray*} \frac{{\int}_{X}\left[k {s}_{g}u^2+4|\nabla u|^2\right] d{\mu}_{g}}{{\int}_{X}u^2 d{\mu}_{g}} &\geq& \frac{{\int}_{X}k {s}_{g}u^2 d{\mu}_{g}}{{\int}_{X}u^2 d{\mu}_{g}} \geq \frac{{\int}_{X}k ({\min_{x}{s}_{g}})u^2 d{\mu}_{g}}{{\int}_{X}u^2 d{\mu}_{g}} \\ &=& \frac{k({\min_{x}{s}_{g})}{\int}_{X}u^2 d{\mu}_{g}}{{\int}_{X}u^2 d{\mu}_{g}}= k({\min_{x}{s}_{g}})=k({\min_{x}f}) \\ &\geq& kL. \end{eqnarray*} This bound and (\ref{Raleigh}) imply that \begin{eqnarray*} \lambda(g)_k \geq kL. \end{eqnarray*} Since $vol_{g}=1$, this bound tells us that the following holds: \begin{eqnarray*} \bar{\lambda}_{k}(X):= \sup_{g}{\lambda}(g)_{k}(vol_{g})^{2/n} \geq \sup_{g, vol_{g}=1}{\lambda}(g)_{k}(vol_{g})^{2/n} \geq kL. \end{eqnarray*} Therefore, we obtain \begin{eqnarray*} \bar{\lambda}_{k}(X) \geq kL. \end{eqnarray*} Thus, letting $L \rightarrow +\infty$, we see that $\bar{\lambda}_{k}(X) = +\infty$ holds for any $k > 0$. \end{proof} We have therefore proved Proposition \ref{lambda-k-inv}. \begin{rem} Notice that the Yamabe invariant of any closed smooth manifold is always finite. For example, it is known that ${\mathcal Y}({\mathbb C}{P}^2)=12{\pi}\sqrt{2}$ holds. On the other hand, Lemma \ref{aha} tells us that $\bar{\lambda}_{k}({\mathbb C}{P}^2) = +\infty$ holds for any $k > 0$. \end{rem} In particular, (\ref{s-y-1}), (\ref{scalar-yama-3}) and Proposition \ref{lambda-k-inv} tell us that the following result holds, where we notice that any manifold $M$ which does not admit any Riemannian metric of positive scalar curvature must satisfy ${\mathcal Y}(M) \leq 0$: \begin{thm}\label{cor-perel-yama} Let $M$ be a smooth compact $n$-manifold with $n \geq 3$ and assume that $M$ does not admit any Riemannian metric of positive scalar curvature. Then, for any real number $k$ with $k \geq \frac{n-2}{n-1}$, the following holds: \begin{eqnarray}\label{equi-scalar} {\mathcal I}_{s}(M) = |{\mathcal Y}(M)|^{\frac{n}{2}} = |{\mathcal K}(M)|^{\frac{n}{2}} = \Big|\frac{\bar{\lambda}_{k}(M)}{k} \Big{|}^{\frac{n}{2}}. \end{eqnarray} \end{thm} Theorem \ref{compu-scalar-invariant} and Theorem \ref{cor-perel-yama} immediately imply Theorem \ref{main-CCC}, which was already mentioned in the Introduction. More precisely, we have \begin{thm}\label{compu-scalar-pre-invariant} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$ admitting a Riemannian metric of non-negative scalar curvature. For $m=1,2,3$, let $X_m$ be a minimal K{\"{a}}hler surface with ${b}^{+}(X_m)>1$ and satisfying \begin{eqnarray*} {b}^{+}(X_{m})-{b}_{1}(X_{m}) \equiv 3 \ (\bmod \ 4). \end{eqnarray*} Let $\Gamma_{X_{m}}$ be a spin${}^{c}$ structure on $X_m$ which is induced by the K{\"{a}}hler structure. Under Definition \ref{def-1}, moreover assume that the following condition holds for each $m$: \begin{eqnarray*} \frak{S}^{ij}(\Gamma_{X_{m}}) \equiv 0 \bmod 2 & \text{for all $i, j$}. \end{eqnarray*} Then, for any $n=2,3$ and any real number $k \geq \frac{2}{3}$, a connected sum $M:=(\#^{n}_{m=1}{X}_{m}) \# N$ satisfies \begin{eqnarray*} {\mathcal I}_{s}(M) = |{\mathcal Y}(M)|^2 = |{\mathcal K}(M)|^2 =\Big|\frac{\bar{\lambda}_{k}(M)}{k} \Big{|}^2 ={32}{\pi}^{2}\sum^{n}_{m=1}{c}^2_{1}(X_{m}).
\end{eqnarray*} In particular, the $\overline{{\lambda}}_{k}$-invariant of $M$ is given by \begin{eqnarray*} \bar{\lambda}_{k}(M)={-4k{\pi}}\sqrt{2\sum^n_{m=1}c^2_{1}(X_{m})}. \end{eqnarray*} \end{thm} On the other hand, it is now well known that the value of the Yamabe invariant is sensitive to the choice of smooth structure on a four-manifold. By using Theorem \ref{cor-perel-yama} and a result in \cite{leb-44}, we are able to prove the following result, which can be seen as a generalization of Theorem 5 in \cite{kot-2}: \begin{cor}\label{main-pere} The number of distinct values that each of the following four invariants can take on smooth structures within a fixed homeomorphism type of simply connected 4-manifolds $X$ is unbounded: \begin{itemize} \item The Yamabe invariant ${\mathcal Y}(X)$, \item the invariant ${\mathcal I}_{s}(X)$ arising from the scalar curvature, \item the invariant ${\mathcal K}(X)$, \item the $\bar{{\lambda}}_{k}$-invariant $\bar{{\lambda}}_{k}(X)$ for a real number $k$ satisfying $k \geq \frac{2}{3}$. \end{itemize} \end{cor} \begin{proof} By Theorem \ref{cor-perel-yama}, we have \begin{eqnarray}\label{equi-pere-yama} {\mathcal I}_{s}(M) = |{\mathcal Y}(M)|^{2} = |{\mathcal K}(M)|^{2} = \Big|\frac{\bar{\lambda}_{k}(M)}{k} \Big{|}^{2}. \end{eqnarray} Hence, in order to prove the statement of this corollary, it is enough to prove the claim for the Yamabe invariant. First of all, let us recall that LeBrun \cite{leb-44} proved that the Yamabe invariant of any minimal complex surface $M$ of general type satisfies \begin{eqnarray}\label{comp-yama} {\mathcal Y}(M)={\mathcal Y}(M \# \ell \overline{ {\mathbb C}{P}^{2}})=-4{\pi}\sqrt{2 c^2_{1}(M)} < 0, \end{eqnarray} where $\ell \geq 0$. In what follows, we shall use the method of the proof of Theorem 5 in \cite{kot-2}. For the reader's convenience, we shall reproduce the argument. Using the standard result \cite{persson} on the geography of minimal surfaces of general type, for every integer $n$, one can always find positive integers $\alpha$ and $\beta$ with the property that all pairs of integers $(\alpha-j, \beta+j)$, where $1 \leq j \leq n$, are realized as pairs $({c}_{2}(X_{j}), c^2_{1}(X_{j}))$ of Chern numbers of some simply connected minimal complex surface $X_{j}$ of general type. Consider the $j$-fold blow-up $M_{j}$ of $X_{j}$, i.e., $M_{j} = X_{j} \# j \overline{ {\mathbb C}{P}^{2}}$. Then all these $M_{j}$, where $1 \leq j \leq n$, are simply connected, non-spin and have the same Chern numbers $(\alpha, \beta)$. Hence, Freedman \cite{freedman} tells us that they must be homeomorphic to each other. On the other hand, equality (\ref{comp-yama}) implies that the $M_{j}$ have pairwise different Yamabe invariants, i.e., \begin{eqnarray*} {\mathcal Y}(M_{j}) = {\mathcal Y}(X_{j} \# j \overline{ {\mathbb C}{P}^{2}}) = {\mathcal Y}(X_{j})=-4{\pi}\sqrt{2 c^2_{1}(X_{j})} = -4{\pi}\sqrt{2(\beta + j)}. \end{eqnarray*} By using this and (\ref{equi-pere-yama}), the desired result now follows. \end{proof} \begin{rem} As with Corollary \ref{main-pere} above, let us remark that all the results in \cite{kot-2} for Perelman's $\bar{\lambda}$ invariant also hold for the above four invariants, i.e., ${\mathcal I}_{s}(X)$, ${\mathcal Y}(X)$, ${\mathcal K}(X)$ and ${\overline{\lambda}_{k}(X)}$, where $k \geq \frac{2}{3}$, without serious changes to the proofs. We leave the details to the interested reader.
\end{rem} \subsection{Einstein metrics, simplicial volumes, and smooth structures}\label{sub-44} In this subsection, we shall prove Theorem \ref{main-CC}, which was already mentioned in the Introduction. \par First of all, we shall show that Theorem \ref{mono-key-bounds} and the method of the proof of Theorem D in \cite{ishi-leb-2} give rise to a new obstruction to the existence of Einstein metrics on 4-manifolds. The following theorem includes interesting cases which cannot be derived from Theorem D in \cite{ishi-leb-2}: \begin{main}\label{einstein} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$. For $m =1,2,3$, let $X_m$ be a closed oriented almost complex 4-manifold with ${b}^{+}(X_m)>1$ and satisfying \begin{eqnarray*} {b}^{+}(X_{m})-{b}_{1}(X_{m}) \equiv 3 \ (\bmod \ 4). \end{eqnarray*} Let $\Gamma_{X_{m}}$ be a spin${}^{c}$ structure on $X_m$ which is induced by the almost complex structure and assume that $SW_{X_{m}}(\Gamma_{X_{m}}) \equiv 1 \ (\bmod \ 2)$. Under Definition \ref{def-1}, moreover assume that the following condition holds for each $m$: \begin{eqnarray*} \frak{S}^{ij}(\Gamma_{X_{m}}) \equiv 0 \bmod 2 & \text{for all $i, j$}. \end{eqnarray*} Then a connected sum $M:=(\#_{m=1}^{n}{X}_{m}) \# N$, where $n=2,3$, cannot admit any Einstein metric if the following holds: \begin{eqnarray}\label{asm} 4{n}-\Big(2\chi(N)+3\tau(N) \Big) \geq \frac{1}{3}\sum_{m=1}^{n}\Big( 2\chi(X_{m})+3\tau(X_{m}) \Big). \end{eqnarray} \end{main} \begin{proof} First of all, a direct computation tells us that \begin{eqnarray}\label{u-222} 2\chi(M)+3\tau(M)=-4{n}+\Big( 2\chi(N)+3\tau(N) \Big)+\sum_{m=1}^{n} \Big( 2\chi(X_{m})+3\tau(X_{m}) \Big). \end{eqnarray} On the other hand, notice that the condition $b^{+}(N)=0$ forces $2\chi(N)+3\tau(N)= 4 - 4 b_1(N) + 5b^+(N) - b^- (N)=4 - 4b_1(N) - b^-(N) \leq 4$. Hence we always have \begin{eqnarray}\label{nega-N} -4{n}+(2\chi(N)+3\tau(N)) < 0 \end{eqnarray} when $n=2,3$. \par Now assume that $\sum_{m=1}^{n}(2\chi(X_{m})+3\tau(X_{m})) \leq 0$. Then, by (\ref{u-222}) and (\ref{nega-N}), we have \begin{eqnarray}\label{hit-vio} 2\chi(M)+3\tau(M) < 0. \end{eqnarray} Notice that any Einstein 4-manifold must satisfy the Hitchin-Thorpe inequality (\ref{ht-int}). Hence, in the case where $\sum_{m=1}^{n}(2\chi(X_{m})+3\tau(X_{m})) \leq 0$, we are able to conclude that $M$ cannot admit any Einstein metric by (\ref{hit-vio}). Let us remark that (\ref{asm}) holds trivially in the case where $\sum_{m=1}^{n}(2\chi(X_{m})+3\tau(X_{m})) \leq 0$ because we have $4{n}-(2\chi(N)+3\tau(N)) > 0$. \par By the above observation, we may assume that $\sum_{m=1}^{n}(2\chi(X_{m})+3\tau(X_{m})) > 0$ holds. In particular, Theorem \ref{main-A} or Theorem \ref{con-mono} tells us that the connected sum $M$ has non-zero monopole classes. \par As was already noticed in \cite{leb-11, ishi-leb-2}, we have the following inequality for any Riemannian metric $g$ on $M$ (cf. Proposition 3.1 in \cite{leb-11}): \begin{eqnarray*} \int_{M}\Big(2|W^{+}_{g}|^{2} + \frac{s^{2}_{g}}{24} \Big)d{\mu}_{g} \geq \frac{1}{27}{\int}_{M}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}| \Big)^2 d{\mu}_{g}.
\end{eqnarray*} This fact, the existence of non-zero monopole classes on $M$ and the curvature bound (\ref{weyl-leb-sca-2}) in Theorem \ref{mono-key-bounds} imply the following bound for any Riemannian metric $g$ on $M$: \begin{eqnarray}\label{u-1} \frac{1}{4\pi^{2}}\int_{M}\Big(2|W^{+}_{g}|^{2} + \frac{s^{2}_{g}}{24} \Big)d{\mu}_{g} > \frac{2}{3}\sum_{m=1}^{n}\Big( 2\chi(X_{m})+3\tau(X_{m}) \Big), \end{eqnarray} where we note that $M$ has a non-zero monopole class and $M$ cannot admit any symplectic structure. This and Theorem \ref{beta-ine-key} force the above inequality to be strict. \par On the other hand, by definition, any Einstein 4-manifold $(X, g)$ must satisfy $\stackrel {\circ}{r}_{g} \equiv 0$, where $\stackrel {\circ}{r}_{g}$ is again the trace-free part of the Ricci curvature $r_{g}$ of $g$. Therefore, the equality (\ref{ein-gauss}) implies \begin{eqnarray*} 2\chi(X)+3\tau(X)=\frac{1}{4\pi^{2}}\int_{X}\Big(2|W^{+}_{g}|^{2} + \frac{s^{2}_{g}}{24} \Big)d{\mu}_{g}. \end{eqnarray*} Suppose now that the connected sum $M$ admits an Einstein metric $g$. Then the left-hand side of the above inequality (\ref{u-1}) is nothing but $2\chi(M)+3\tau(M)$. By combining (\ref{u-1}) with (\ref{u-222}), we are able to obtain \begin{eqnarray*} 4{n}-\Big( 2\chi(N)+3\tau(N) \Big) < \frac{1}{3}\sum_{m=1}^{n}\Big( 2\chi(X_{m})+3\tau(X_{m}) \Big). \end{eqnarray*} By contraposition, we get the desired result. \end{proof} On the other hand, let us recall the definition of the simplicial volume due to Gromov \cite{gromov}. Let $M$ be a closed manifold. We denote by ${C}_{*}(M):=\sum^{\infty}_{k=0}{C}_{k}(M)$ the real coefficient singular chain complex of $M$. A chain $c \in {C}_{k}(M)$ is a finite combination $\sum{r}_{i}{\sigma}_{i}$ of singular simplexes ${\sigma}_{i} : {\Delta}^k \rightarrow M$ with real coefficients ${r}_{i}$. We define the norm $|c|$ of $c$ by $|c| : = \sum|r_{i}| \geq 0$. If $[\alpha] \in H_{*}(M, {\mathbb R})$ is any homology class, then the norm $||\alpha||$ of $[\alpha]$ is defined as \begin{eqnarray*} ||\alpha||:=\inf \{|{\frak a}| \ : \ {\frak a} \in {C}_{*}(M) \ \text{is a cycle with} \ [{\frak a}]=[\alpha]\}, \end{eqnarray*} where the infimum is taken over all cycles representing $\alpha$. Suppose that $M$ is moreover oriented. Then we have the fundamental class $[M] \in H_{n}(M, {\mathbb R})$ of $M$. We then define the {simplicial volume} $||M||$ of $M$ to be the norm $||[M]||$ of the fundamental class. It is known that any simply connected manifold $M$ satisfies $||M||=0$. For the product of compact oriented manifolds, Gromov pointed out (see p.10 of \cite{gromov}) that the simplicial volume is essentially multiplicative. Indeed, there are universal constants $c_{n}$ depending only on the dimension $n$ of the product $M_{1} \times M_{2}$ such that \begin{eqnarray}\label{sim-upper} c^{-1}_{n}||M_1|| \cdot ||M_2|| \leq ||M_{1} \times M_{2}|| \leq c_{n}||M_1|| \cdot ||M_2||. \end{eqnarray} On the other hand, for the connected sum, we have the following formula (cf.\ \cite{be}): \begin{eqnarray}\label{sim-connec} ||M_{1} \# M_{2}|| = ||M_{1}|| + ||M_{2}||. \end{eqnarray} We shall use the following lemma to prove Theorem \ref{main-CC}: \begin{lem}\label{simplicial-lem} Let the $X_{m}$ be closed oriented simply connected 4-manifolds and consider a connected sum \begin{eqnarray*} M:=(\#_{m} X_{m}) \# k (\Sigma_{h} \times \Sigma_{g}) \# \ell_{1}({S}^{1} \times {S}^{3}) \# \ell_{2} \overline{{\mathbb C}{P}^{2}}, \end{eqnarray*} where $g, h \geq 1$, $m, k \geq 1$ and $\ell_{1}, \ell_{2} \geq 0$.
Then the simplicial volume of $M$ satisfies the following bound: \begin{eqnarray}\label{simplicial-M} 16c^{-1}_{4}k(g-1)(h-1) \leq ||M|| \leq 16c_{4}k(g-1)(h-1), \end{eqnarray} where $c_{4}$ is the positive universal constant depending only on the dimension of the product $\Sigma_{h} \times \Sigma_{g}$. On the other hand, we have \begin{eqnarray*} 2\chi(M)+3\tau(M) &=& \Big(\sum_{m} 2\chi(X_{m})+3\tau(X_{m}) \Big)+4k(g-1)(h-1)\\ &-&4(m+k+\ell_{1})-{\ell}_{2}, \\ 2\chi(M)-3\tau(M) &=& \Big( \sum_{m} 2\chi(X_{m})-3\tau(X_{m}) \Big)+4k(g-1)(h-1) \\ &-& 4(m+k+\ell_{1})+5{\ell}_{2}. \end{eqnarray*} \end{lem} \begin{proof} First of all, as was already noticed in \cite{gromov}, any closed surface $\Sigma_{g}$ of genus $g \geq 2$ satisfies \begin{eqnarray*} ||\Sigma_{g}|| = -2\chi(\Sigma_{g})=4(g-1). \end{eqnarray*} The bounds (\ref{sim-upper}) and (\ref{sim-connec}) together imply the following bound on the simplicial volume of a connected sum $k (\Sigma_{g} \times \Sigma_{h} )$ of $k$ copies of the product $\Sigma_{g} \times \Sigma_{h}$: \begin{eqnarray*} 16c^{-1}_{4}k(g-1)(h-1) &=& kc^{-1}_{4} ||\Sigma_{g}|| \cdot ||\Sigma_{h}|| \leq ||k (\Sigma_{g} \times \Sigma_{h} ) || = k ||\Sigma_{g} \times \Sigma_{h} || \\ & \leq & k c_{4}||\Sigma_{g} || \cdot ||\Sigma_{h} || = 16c_{4}k(g-1)(h-1). \end{eqnarray*} Now, consider the connected sum $M:=(\#_{m} X_{m}) \# k (\Sigma_{h} \times \Sigma_{g})\# \ell_{1}({S}^{1} \times {S}^{3}) \# \ell_{2} \overline{{\mathbb C}{P}^{2}}$. By the formula (\ref{sim-connec}), we are able to conclude that $||M||=k||\Sigma_{g} \times \Sigma_{h}||$ holds, where we notice that $||\#_{m} X_{m}||=0$, $||{S}^{1} \times {S}^{3} ||=0$ and $||\overline{{\mathbb C}{P}^{2}}||=0$. Therefore we are able to obtain the following bound on the simplicial volume of $M$: \begin{eqnarray*} 16c^{-1}_{4}k(g-1)(h-1) \leq ||M|| \leq 16c_{4}k(g-1)(h-1). \end{eqnarray*}
On the other hand, one can easily derive the formulas for $2\chi(M)+3\tau(M)$ and $2\chi(M)-3\tau(M)$ by simple direct computations, noting that $\tau(\Sigma_{g} \times \Sigma_{h})=0$. \end{proof} On the other hand, as an interesting special case of Theorem \ref{einstein}, we obtain \begin{cor}\label{speical-ein} For $m=1,2,3$, let $X_{m}$ be a simply connected symplectic 4-manifold with ${b}^{+}(X_{m}) \equiv 3 \ (\bmod \ 4)$. Consider a connected sum $M:=(\#_{m=1}^{n} X_{m}) \# k (\Sigma_{h} \times \Sigma_{g}) \# \ell_{1}({S}^{1} \times {S}^{3}) \# \ell_{2} \overline{{\mathbb C}{P}^{2}}$, where $n, k \geq 1$ satisfy $n+k \leq 3$, $\ell_{1}, \ell_{2} \geq 0$, and $g, h$ are odd integers $\geq 1$. Then $M$ cannot admit any Einstein metric if \begin{eqnarray*} 4(n+\ell_{1} + k) + \ell_{2} \geq \frac{1}{3}\Big( \sum_{m=1}^{n} 2\chi(X_{m})+3\tau(X_{m})+4k(1-h)(1-g) \Big). \end{eqnarray*} \end{cor} \begin{proof} Use Corollary \ref{key-cor-1} and Theorem \ref{einstein}. \end{proof} On the other hand, we need to recall a construction of a certain sequence of homotopy $K3$ surfaces. Let $Y_{0}$ be a Kummer surface with an elliptic fibration $Y_{0} \rightarrow {\mathbb C}{P}^{1}$. Let $Y_{\ell}$ be obtained from $Y_{0}$ by performing a logarithmic transformation of order $2 \ell + 1$ on a non-singular fiber of $Y_{0}$. It turns out that the $Y_{\ell}$ are simply connected spin manifolds with $b^{+}(Y_{\ell}) = 3$ and $b^{-}(Y_{\ell}) = 19$. By the Freedman classification \cite{freedman}, $Y_{\ell}$ is homeomorphic to the $K3$ surface. Moreover, $Y_{\ell}$ is a K{\"{a}}hler surface with $b^{+}(Y_{\ell}) > 1$, and hence a result of Witten \cite{w} tells us that $\pm {c}_{1}(Y_{\ell})$ are monopole classes for each $\ell$. We have $ {c}_{1}(Y_{\ell}) = 2{\ell}\frak{f}$, where $\frak{f}$ is the Poincar{\'{e}} dual of the multiple fiber introduced by the logarithmic transformation. See also \cite{BPV}. \par We are now in a position to prove \begin{thm}\label{simplicial-Ein-inf} There exist infinitely many closed topological spin 4-manifolds satisfying the following three properties: \begin{itemize} \item Each 4-manifold $M$ has non-trivial simplicial volume, i.e., $||M|| \not=0$. \item Each 4-manifold $M$ satisfies the strict Gromov-Hitchin-Thorpe inequality, i.e., \begin{eqnarray*} 2\chi(M) - 3|\tau(M)| > \frac{1}{81{\pi}^2}||M||. \end{eqnarray*} \item Each 4-manifold $M$ admits infinitely many distinct smooth structures for which no compatible Einstein metric exists. \end{itemize} \end{thm} \begin{proof} First of all, take any pair $(m, n)$ of positive integers satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$, $m \geq 2$ and $n \geq 1$; note that the congruence simply means that $n$ is even. For any pair $(g, h)$ of odd integers greater than or equal to $3$, after replacing $(m, n)$ if necessary by another pair of large positive integers satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$, we are always able to find at least one positive integer $\ell_{1}$ satisfying the following three inequalities: \begin{eqnarray}\label{ein-in-1} 2n+\Big(1-\frac{4c_{4}}{81{\pi}^2}\Big) (g-1)(h-1)-3 > \ell_{1}. \end{eqnarray} \begin{eqnarray}\label{ein-in-2} 2(n+12m)+\Big(1-\frac{4c_{4}}{81{\pi}^2}\Big) (g-1)(h-1)+21 > \ell_{1}. \end{eqnarray} \begin{eqnarray}\label{ein-in-3} \ell_{1} \geq \frac{1}{3}\Big( 2n+(g-1)(h-1) \Big)-3, \end{eqnarray} where $c_{4}$ is the universal constant appearing in Lemma \ref{simplicial-lem}.
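Let us note, as a simple supplementary arithmetic check, that such an $\ell_{1}$ indeed exists once $n$ is taken sufficiently large (for fixed $(g,h)$), whatever the value of the universal constant $c_{4}$ may be: the difference between the upper bound for $\ell_{1}$ in (\ref{ein-in-1}) and the lower bound in (\ref{ein-in-3}) equals \begin{eqnarray*} \Big[2n+\Big(1-\frac{4c_{4}}{81{\pi}^2}\Big)(g-1)(h-1)-3\Big] - \Big[\frac{1}{3}\Big(2n+(g-1)(h-1)\Big)-3\Big] = \frac{4}{3}n+\Big(\frac{2}{3}-\frac{4c_{4}}{81{\pi}^2}\Big)(g-1)(h-1), \end{eqnarray*} which tends to $+\infty$ as $n \rightarrow +\infty$.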
Notice that the inequality (\ref{ein-in-1}) implies the inequality (\ref{ein-in-2}), and note also that we have infinitely many choices of such pairs $(m, n)$ and, hence, of $\ell_{1}$. \par On the other hand, let us recall that Gompf \cite{gom} showed that, for arbitrary integers $\alpha \geq 2$ and $\beta \geq 0$, one can construct a simply connected symplectic spin 4-manifold $X_{\alpha, \beta}$ satisfying \begin{eqnarray}\label{go-1} \Big( \chi(X_{\alpha, \beta}), \tau(X_{\alpha, \beta}) \Big)= \Big( 24\alpha+4\beta, -16\alpha \Big). \end{eqnarray} Notice also that, since $X_{\alpha, \beta}$ is simply connected and hence $b^{+}(X_{\alpha, \beta})=\big(\chi(X_{\alpha, \beta})+\tau(X_{\alpha, \beta})-2\big)/2$, this implies \begin{eqnarray}\label{go-2} b^+(X_{\alpha, \beta}) &=& 4\alpha+2\beta-1, \end{eqnarray} \begin{eqnarray}\label{go-3} 2\chi(X_{\alpha, \beta}) + 3\tau(X_{\alpha, \beta}) &=& 8\beta, \end{eqnarray} \begin{eqnarray}\label{go-4} 2\chi(X_{\alpha, \beta}) - 3\tau(X_{\alpha, \beta}) &=& 8(12\alpha+\beta). \end{eqnarray} Now, as was already observed above, for any pair $(g, h)$ of odd integers greater than or equal to $3$, we can find infinitely many pairs $(m, n)$ satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$ and can also find at least one positive integer $\ell_{1}$ satisfying inequalities $(\ref{ein-in-1})$, $(\ref{ein-in-2})$ and (\ref{ein-in-3}). For each such quintuple of integers $(m, n, g, h, \ell_{1})$ and for each integer $\ell \geq 0$, consider the following connected sum: \begin{eqnarray*} M(m, n, \ell, g, h, \ell_{1}):=X_{m, n} \# {Y}_{\ell} \# (\Sigma_{g} \times \Sigma_{h}) \# \ell_{1}({S}^{1} \times {S}^{3}), \end{eqnarray*} where ${Y}_{\ell}$ is obtained from $Y_{0}$ by performing a logarithmic transformation of order $2 \ell + 1$ on a non-singular fiber of $Y_{0}$. Note that we have $b_{1}(X_{m, n})=0$, $b^+(X_{m, n})=4m+2n-1 \equiv 3 \ (\bmod \ 4)$, $b_{1}(Y_{\ell})=0$ and $b^+(Y_{\ell})=3$. Each of these closed oriented smooth 4-manifolds is homeomorphic to the following spin 4-manifold: \begin{eqnarray}\label{homeo-spin} X_{m, n} \# K3 \# (\Sigma_{g} \times \Sigma_{h}) \# \ell_{1}({S}^{1} \times {S}^{3}). \end{eqnarray} For any fixed $(m, n, g, h, \ell_{1})$, the sequence $\{M(m, n, \ell, g, h, \ell_{1}) \ | \ \ell \in {\mathbb N} \}$ contains infinitely many distinct diffeotypes. There are two essentially equivalent ways to see this. One can use the bandwidth argument developed in \cite{ishi-leb-1, ishi-leb-2}. Alternatively, one can see this more directly by using only the finiteness property (see Proposition \ref{mono}) of the set of monopole classes (cf. \cite{kot-g, kot-2}). Moreover, none of these closed oriented smooth 4-manifolds can admit an Einstein metric, as we now explain. First of all, notice that Corollary \ref{speical-ein} tells us that, for any fixed $(m, n, \ell, g, h, \ell_{1})$, the 4-manifold $M(m, n, \ell, g, h, \ell_{1})$ cannot admit any Einstein metric if \begin{eqnarray*} 4(2+\ell_{1} + 1) \geq \frac{1}{3}\Big( 2\chi(X_{m,n})+3\tau(X_{m,n})+ 2\chi(Y_{\ell})+3\tau(Y_{\ell})+4(1-h)(1-g) \Big), \end{eqnarray*} equivalently, \begin{eqnarray*} \ell_{1}+ 3 \geq \frac{1}{12}\Big( 8n+4(1-h)(1-g) \Big), \end{eqnarray*} where we used $ 2\chi(X_{m,n})+3\tau(X_{m,n})=8n$ (see (\ref{go-3})) and $2\chi(Y_{\ell})+3\tau(Y_{\ell})=0$. The last inequality is nothing but the inequality (\ref{ein-in-3}) above. Hence, for any fixed $(m, n, \ell, g, h, \ell_{1})$, the 4-manifold $M(m, n, \ell, g, h, \ell_{1})$ cannot admit any Einstein metric as desired.
Hence each of the topological spin manifolds (\ref{homeo-spin}) admits infinitely many distinct smooth structures for which no compatible Einstein metric exists. \par In what follows, we shall prove that each manifold $M$ of the form (\ref{homeo-spin}) has non-zero simplicial volume and satisfies the strict Gromov-Hitchin-Thorpe inequality. In fact, by the bound (\ref{simplicial-M}), we have the following bound on the simplicial volume of $M$: \begin{eqnarray}\label{tel-1} 0 < \frac{16c^{-1}_{4}}{81{\pi}^2}(g-1)(h-1) \leq \frac{1}{81{\pi}^2}||M|| \leq \frac{16c_{4}}{81{\pi}^2}(g-1)(h-1). \end{eqnarray} In particular, $||M|| \not=0$ holds. On the other hand, we have \begin{eqnarray}\label{euler-M-1} 2\chi(M)+3\tau(M) = 8n+4(g-1)(h-1)-4(3+\ell_{1}), \end{eqnarray} \begin{eqnarray}\label{euler-M-2} 2\chi(M)-3\tau(M) = 8(12m+n)+96+4(g-1)(h-1)-4(3+\ell_{1}), \end{eqnarray} where we used the final formulas in Lemma \ref{simplicial-lem} and the fact that, for a K3 surface, $2\chi+3\tau=0$ and $2\chi-3\tau=96$. \par Now, by multiplying both sides of (\ref{ein-in-1}) by $4$, we have \begin{eqnarray*} 8n+\Big(4-\frac{16c_{4}}{81{\pi}^2}\Big) (g-1)(h-1) > 4(\ell_{1}+3). \end{eqnarray*} Equivalently, \begin{eqnarray*} 8n+4(g-1)(h-1) - 4(\ell_{1}+3) >\frac{16c_{4}}{81{\pi}^2}(g-1)(h-1). \end{eqnarray*} This inequality, (\ref{tel-1}) and (\ref{euler-M-1}) imply \begin{eqnarray*} 2\chi(M) + 3\tau(M) > \frac{1}{81{\pi}^2}||M||. \end{eqnarray*} Similarly, by multiplying both sides of (\ref{ein-in-2}) by $4$, we get \begin{eqnarray*} 8(12m+n)+\Big(4-\frac{16c_{4}}{81{\pi}^2} \Big) (g-1)(h-1)+84 > 4\ell_{1}. \end{eqnarray*} Namely, we have \begin{eqnarray*} 8(12m+n)+96+4(g-1)(h-1)-4(3+\ell_{1}) > \frac{16c_{4}}{81{\pi}^2}(g-1)(h-1). \end{eqnarray*} This inequality, (\ref{tel-1}) and (\ref{euler-M-2}) tell us that the following holds: \begin{eqnarray*} 2\chi(M) - 3\tau(M) > \frac{1}{81{\pi}^2}||M||. \end{eqnarray*} Therefore, the spin 4-manifold $M$ satisfies the strict Gromov-Hitchin-Thorpe inequality as desired: \begin{eqnarray*} 2\chi(M) - 3|\tau(M)| > \frac{1}{81{\pi}^2}||M||. \end{eqnarray*} Hence, the spin 4-manifold $M$ has the desired properties. Because we have infinitely many choices of the integers $(g, h, m, n, \ell_{1})$ above, we are able to conclude that there exist infinitely many closed topological spin 4-manifolds with the desired properties, as promised. \end{proof} Similarly, we have \begin{thm}\label{simplicial-Ein-inf-2} There exist infinitely many closed topological non-spin 4-manifolds, each of which satisfies the three properties in Theorem \ref{simplicial-Ein-inf}. \end{thm} \begin{proof} The proof is similar to the spin case. In fact, instead of the above connected sum, consider the following connected sum: \begin{eqnarray*} X_{m, n} \# {Y}_{\ell} \# (\Sigma_{h} \times \Sigma_{g}) \# \ell_{2} \overline{{\mathbb C}{P}^{2}}. \end{eqnarray*} Notice that these 4-manifolds are non-spin whenever $\ell_{2} \geq 1$. For completeness, let us include the proof of this theorem. As in the spin case, take again any pair $(m, n)$ of positive integers satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$, $m \geq 2$ and $n \geq 1$.
For any pair $(g, h)$ of odd integers greater than or equal to $3$, after replacing $(m, n)$ if necessary by another pair of large positive integers satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$, we are always able to find at least one positive integer $\ell_{2}$ satisfying the following three inequalities: \begin{eqnarray}\label{ein-in-12} 8n+4\Big( 1-\frac{4c_{4}}{81{\pi}^2} \Big) (g-1)(h-1)-12 > \ell_{2}. \end{eqnarray} \begin{eqnarray}\label{ein-in-22} 8(n+12m)+4 \Big( 1-\frac{4c_{4}}{81{\pi}^2} \Big) (g-1)(h-1)+84 > -5\ell_{2}. \end{eqnarray} \begin{eqnarray}\label{ein-in-32} \ell_{2} \geq \frac{1}{3}\Big(8n+4(g-1)(h-1)\Big)-12, \end{eqnarray} where $c_{4}$ is again the universal constant appearing in Lemma \ref{simplicial-lem}. Notice that we have infinitely many choices of such pairs $(m, n)$ and of $\ell_{2}$. \par On the other hand, as was already used in the proof of Theorem \ref{simplicial-Ein-inf}, the construction of Gompf \cite{gom} enables us to construct, for arbitrary integers $\alpha \geq 2$ and $\beta \geq 0$, a simply connected symplectic spin 4-manifold $X_{\alpha, \beta}$ satisfying (\ref{go-1}), (\ref{go-2}), (\ref{go-3}) and (\ref{go-4}). \par As was already observed above, for any pair $(g, h)$ of odd integers greater than or equal to $3$, we are able to find infinitely many pairs $(m, n)$ satisfying $4m+2n-1 \equiv 3 \ (\bmod \ 4)$ and can also find at least one positive integer $\ell_{2}$ satisfying inequalities $(\ref{ein-in-12})$, $(\ref{ein-in-22})$ and (\ref{ein-in-32}). For each such quintuple of integers $(m, n, g, h, \ell_{2})$ and for each integer $\ell \geq 0$, consider the following: \begin{eqnarray*} N(m, n, \ell, g, h, \ell_{2}):=X_{m, n} \# {Y}_{\ell} \# (\Sigma_{g} \times \Sigma_{h}) \# \ell_{2} \overline{{\mathbb C}{P}^{2}}, \end{eqnarray*} where ${Y}_{\ell}$ is again obtained from $Y_{0}$ by performing a logarithmic transformation of order $2 \ell + 1$ on a non-singular fiber of $Y_{0}$. We have $b_{1}(X_{m, n})=0$, $b^+(X_{m, n})=4m+2n-1 \equiv 3 \ (\bmod \ 4)$, $b_{1}(Y_{\ell})=0$ and $b^+(Y_{\ell})=3$. Each of these closed oriented smooth 4-manifolds is homeomorphic to the following non-spin 4-manifold: \begin{eqnarray}\label{homeo-non-spin} X_{m, n} \# K3 \# (\Sigma_{g} \times \Sigma_{h}) \# \ell_{2}\overline{{\mathbb C}{P}^{2}}. \end{eqnarray} For any fixed $(m, n, g, h, \ell_{2})$, again, the sequence $\{N(m, n, \ell, g, h, \ell_{2}) \ | \ \ell \in {\mathbb N} \}$ contains infinitely many distinct diffeotypes. Moreover, we can also see that none of these closed oriented smooth 4-manifolds can admit an Einstein metric. In fact, Corollary \ref{speical-ein} tells us that, for any fixed $(m, n, \ell, g, h, \ell_{2})$, the 4-manifold $N(m, n, \ell, g, h, \ell_{2})$ cannot admit any Einstein metric if \begin{eqnarray*} 4(2+ 0+ 1) + {\ell}_{2} \geq \frac{1}{3}\Big( 2\chi(X_{m,n})+3\tau(X_{m,n})+ 2\chi(Y_{\ell})+3\tau(Y_{\ell})+4(1-h)(1-g) \Big), \end{eqnarray*} equivalently, \begin{eqnarray*} \ell_{2}+ 12 \geq \frac{1}{3}\Big( 8n+4(1-h)(1-g) \Big), \end{eqnarray*} where we note again that $ 2\chi(X_{m,n})+3\tau(X_{m,n})=8n$ and $2\chi(Y_{\ell})+3\tau(Y_{\ell})=0$. This inequality is nothing but the inequality (\ref{ein-in-32}) above. Hence, for any fixed $(m, n, \ell, g, h, \ell_{2})$, the 4-manifold $N(m, n, \ell, g, h, \ell_{2})$ cannot admit any Einstein metric as desired.
Hence each of the topological non-spin manifolds (\ref{homeo-non-spin}) admits infinitely many distinct smooth structures for which no compatible Einstein metric exists. \par Finally, we shall prove that each manifold $N$ of the form (\ref{homeo-non-spin}) has non-zero simplicial volume and satisfies the strict Gromov-Hitchin-Thorpe inequality. As in the spin case, we have \begin{eqnarray}\label{tel-12} 0 < \frac{16c^{-1}_{4}}{81{\pi}^2}(g-1)(h-1) \leq \frac{1}{81{\pi}^2}||N|| \leq \frac{16c_{4}}{81{\pi}^2}(g-1)(h-1). \end{eqnarray} In particular, $||N|| \not=0$ holds. On the other hand, we get \begin{eqnarray}\label{euler-M-12} 2\chi(N)+3\tau(N) = 8n+4(g-1)(h-1)-12 -{\ell}_{2}, \end{eqnarray} \begin{eqnarray}\label{euler-M-22} 2\chi(N)-3\tau(N) = 8(12m+n)+84+4(g-1)(h-1)+5\ell_{2}, \end{eqnarray} where we used the final formulas in Lemma \ref{simplicial-lem}. \par Now, inequality (\ref{ein-in-12}) is equivalent to \begin{eqnarray*} 8n+4(g-1)(h-1)-12-{\ell}_{2} > \frac{16c_{4}}{81{\pi}^2}(g-1)(h-1). \end{eqnarray*} This inequality, (\ref{tel-12}) and (\ref{euler-M-12}) imply \begin{eqnarray*} 2\chi(N) + 3\tau(N) > \frac{1}{81{\pi}^2}||N||. \end{eqnarray*} Similarly, inequality (\ref{ein-in-22}) can be rewritten as \begin{eqnarray*} 8(12m+n)+4(g-1)(h-1)+84 + 5{\ell}_{2} > \frac{16c_{4}}{81{\pi}^2} (g-1)(h-1). \end{eqnarray*} This inequality, (\ref{tel-12}) and (\ref{euler-M-22}) imply \begin{eqnarray*} 2\chi(N) - 3\tau(N) > \frac{1}{81{\pi}^2}||N||. \end{eqnarray*} Thus, the non-spin 4-manifold $N$ satisfies the strict Gromov-Hitchin-Thorpe inequality as desired: \begin{eqnarray*} 2\chi(N) - 3|\tau(N)| > \frac{1}{81{\pi}^2}||N||. \end{eqnarray*} Therefore, the non-spin 4-manifold $N$ has the desired properties. Since we have infinitely many choices of the integers $(g, h, m, n, \ell_{2})$ above, we are able to conclude that there exist infinitely many closed topological non-spin 4-manifolds with the desired properties. \end{proof} It is now clear that Theorem \ref{main-CC} mentioned in the Introduction follows from Theorems \ref{simplicial-Ein-inf} and \ref{simplicial-Ein-inf-2}. \begin{rem}\label{final-rem} It is known that the right-hand side of the Gromov-Hitchin-Thorpe inequality (\ref{k-GHT}) can be replaced by the volume entropy (or asymptotic volume) \cite{kot-g}. For the reader's convenience, let us here recall briefly the definition of the volume entropy (or asymptotic volume) of a Riemannian manifold. Let $X$ be a closed oriented manifold with a smooth Riemannian metric $g$, and let $\Tilde{X}$ be its universal cover with the induced metric $\Tilde{g}$. For each $\Tilde{x} \in \Tilde{X}$, let $V(\Tilde{x}, R)$ be the volume of the ball with center $\Tilde{x}$ and radius $R$ in $(\Tilde{X}, \Tilde{g})$. We set \begin{eqnarray*} {\lambda}(X, g):=\lim_{R \rightarrow +\infty}\frac{1}{R}\log V(\Tilde{x}, R). \end{eqnarray*} Thanks to the work of Manning \cite{Manning}, it turns out that this limit exists and is independent of the choice of $\Tilde{x}$. We call ${\lambda}(X, g)$ the volume entropy of the metric $g$ and define the volume entropy of $X$ to be \begin{eqnarray*} {\lambda}(X):=\inf_{g \in {\cal R}^{1}_{X}}{\lambda}(X, g), \end{eqnarray*} where ${\cal R}^{1}_{X}$ denotes the set of all Riemannian metrics $g$ on $X$ of unit volume. It is known that the volume entropy can only be positive for manifolds with fundamental groups of exponential growth \cite{mil}.
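For instance (a standard example, recalled here only for illustration), if $g$ is a hyperbolic metric on a closed $n$-manifold $X$, i.e., a metric of constant sectional curvature $-1$, then $(\Tilde{X}, \Tilde{g})$ is the hyperbolic space ${\mathbb H}^{n}$, the volume $V(\Tilde{x}, R)$ grows like $e^{(n-1)R}$, and hence \begin{eqnarray*} {\lambda}(X, g)=\lim_{R \rightarrow +\infty}\frac{1}{R}\log V(\Tilde{x}, R)=n-1. \end{eqnarray*}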
Now, it is known that any closed Einstein 4-manifold $X$ must satisfy the following bound \cite{kot-g}: \begin{eqnarray}\label{gro-vol} 2\chi(X) - 3|\tau(X)| \geq \frac{1}{54{\pi}^2}{\lambda}(X)^4, \end{eqnarray} where equality can occur if and only if every Einstein metric on $X$ is flat, is a non-flat Calabi-Yau metric, or is a metric of constant negative sectional curvature. The inequality (\ref{gro-vol}) is stronger than the inequality (\ref{k-GHT}). Notice that the connected sums appearing in Theorems \ref{simplicial-Ein-inf} and \ref{simplicial-Ein-inf-2} have non-zero volume entropy, because we have the following inequality \cite{gromov, p-p}, which holds for any closed manifold $X$ of dimension $n$: \begin{eqnarray*} \frac{1}{c_{n} n!} || X || \leq [{\lambda}(X)]^n, \end{eqnarray*} where $c_{n}$ is a universal constant depending only on $n$. It is natural to ask whether Theorem \ref{main-CC} still holds for the inequality (\ref{gro-vol}). To prove such a result, one would need to compute the volume entropy of connected sums, or at least to give an upper bound analogous to Lemma \ref{simplicial-lem} above. To the best of our knowledge, there is no literature discussing this subject. If such a bound could be proved, Theorem \ref{main-CC} could easily be generalized to the case of the volume entropy. We do not pursue this point in the present article, and we hope to return to this issue in the near future. \end{rem} \section{Concluding remarks}\label{final} In the present article, we have proved a new non-vanishing theorem for stable cohomotopy Seiberg-Witten invariants and have given several applications of the non-vanishing theorem to the geometry of 4-manifolds. As was already mentioned in the Introduction, our non-vanishing theorem, i.e., Theorem \ref{main-A}, includes Bauer's non-vanishing theorem as a special case, except for the case of connected sums of four manifolds. Moreover, we showed that Conjecture \ref{conj-1} is true in the case where $\ell = 2$. Based on Theorem \ref{main-A}, Theorem \ref{main-B} and Corollary \ref{conje-cor}, it is natural to propose the following conjecture, which includes Conjecture \ref{conj-1} in the case where $\ell =3$ as a special case: \begin{conj}\label{conj-2} For $i=1,2,3,4$, let $X_i$ be almost complex 4-manifolds with $b^+(X_i) > 1$, ${b}_{1}(X_{i}) \not=0$, $SW_{X_{i}}(\Gamma_{X_{i}}) \equiv 1 \ (\bmod \ 2)$, and satisfying both conditions (\ref{spin-0}) and (\ref{spin-014}). Then a connected sum $X:=\#^{4}_{i=1}{X}_{i}$ has a non-trivial stable cohomotopy Seiberg-Witten invariant. \end{conj} Notice that, if one drops the condition that ${b}_{1}(X_{i}) \not=0$ in the above, the claim is not true because the conclusion contradicts Bauer's non-vanishing theorem. Unfortunately, the method developed in the present article cannot be used to explore the above conjecture because the spin cobordism Seiberg-Witten invariant must vanish for the connected sum $X:=\#^{4}_{i=1}{X}_{i}$. See Remark \ref{remark almost} above. Hence, in order to attack the above conjecture, we need to develop a completely new method. We hope to return to this issue some day. It is also interesting to ask, in the case where ${b}_{1}(X_{i}) \not=0$, whether there is an integer $n \geq 5$ such that the connected sum $\#^{n}_{i=1}{X}_{i}$ has a non-trivial stable cohomotopy Seiberg-Witten invariant. This is also completely open.
We remark that, for any closed 4-manifold $X$ with $b^+(X) \geq 1$ and ${b}_{1}(X)=0$, there is some large integer $N$ such that, for any $n \geq N$, the $n$-fold connected sum of $X$ with itself has a trivial stable cohomotopy Seiberg-Witten invariant. See \cite{furuta-k-m-0} for more details. \par On the other hand, it is not so easy, at least for the present authors, to find examples of almost complex 4-manifolds with $b^+ \geq 2$, $b_{1} \not=0$, $SW_{X_{i}}(\Gamma_{X_{i}}) \equiv 1 \ (\bmod \ 2)$ and satisfying both (\ref{spin-0}) and (\ref{spin-014}), where $\Gamma_{X_{i}}$ is a spin${}^c$ structure compatible with the almost complex structure. In Theorem \ref{cor-1} above, we saw that the product $\Sigma_{g} \times \Sigma_{h}$ with $g$, $h$ odd and the primary Kodaira surface are such examples. We have the following problem: \begin{problem} Find another example of an almost complex 4-manifold $X$ with $b^+(X) \geq 2$, $b_{1}(X) \not=0$, $SW_{X}(\Gamma_{X}) \equiv 1 \ (\bmod \ 2)$ and satisfying both (\ref{spin-0}) and (\ref{spin-014}). \end{problem} On the other hand, Okonek and Teleman \cite{o-t} introduce a new class of stable cohomotopy Seiberg-Witten invariants, which has clear functorial properties with respect to diffeomorphisms of 4-manifolds. In particular, for any closed 4-manifold $X$ with $b^{+}(X) \geq 2$ and $b_{1}(X)=0$, the invariant of Okonek and Teleman is equivalent to the stable cohomotopy Seiberg-Witten invariant $BF_{X}$ due to Bauer and Furuta \cite{b-f}. However, it seems that the invariant of Okonek and Teleman is finer than the stable cohomotopy Seiberg-Witten invariant in general. Among other things, in \cite{o-t}, Okonek and Teleman clarify a relationship between the new invariant and a variant of the original Seiberg-Witten invariant $SW_{X}$, which is called the full Seiberg-Witten invariant \cite{o-t-1}. On the other hand, in Theorem \ref{prop diagram} proved in subsection \ref{diag} above, we established a natural commutative diagram among $BF_{X}$, $\widehat{SW}^{spin}_X$ and $SW_{X}$. In light of these results, there should exist an analogue of Theorem \ref{prop diagram} in the context of Okonek and Teleman \cite{o-t}. We also hope to return to this issue in future research. \par \noindent {\bf Acknowledgement:} We would like to express our deep gratitude to Mikio Furuta for his warm encouragement. Furthermore, the first author would like to express his deep gratitude to Claude LeBrun for his warm encouragement. Some of the main parts of this article were written during the first author's stay at the State University of New York at Stony Brook. He would like to express many thanks to Claude LeBrun and the Department of Mathematics at SUNY for their hospitality during his stay. The second author is partially supported by the 21st century COE program at the Graduate School of Mathematical Sciences, the University of Tokyo.
\section{Introduction} Charge-parity ($\ensuremath{C\!P}\xspace$) violation is one of the conditions necessary to explain the matter-antimatter asymmetry of the universe~\cite{Sakharov:1967dj}. The single complex phase in the Cabibbo-Kobayashi-Maskawa matrix provides the only source of $\ensuremath{C\!P}\xspace$ violation (CPV) in the standard model (SM), but it is not large enough to explain the observed matter-antimatter asymmetry. Baryogenesis, the process by which the baryon-antibaryon asymmetry of the universe developed, is directly related to baryon CPV~\cite{Sakharov:1988vdp,Shaposhnikov:1987tw}. Two-body decays of baryons containing a charm quark are sensitive to $\ensuremath{C\!P}\xspace$ asymmetries, yet are largely unexplored to date.
Precise measurements of charm baryon branching fractions (BFs) also provide useful probes of heavy-baryon dynamics, and decay asymmetry parameter measurements can be used to study parity-violating and parity-conserving amplitudes in weak hyperon decays. To date, CPV has been observed in the open-flavored meson sector (i.e., $K$, $D$ and $B$ mesons), but not yet established in the baryon sector. Since CPV in charm decays is predicted in the SM to be very small~\cite{Brod:2011re,Cheng:2012wr,Li:2012cfa}, an observation of CPV in charm decays at a level larger than $10^{-3}$ could indicate new physics beyond the SM~\cite{Grossman:2006jg,Grossman:2012eb}. Singly Cabibbo-suppressed (SCS) decays of charm hadrons provide an ideal laboratory for studying CPV, as they are a unique window on the physics of decay-rate dynamics in the charm sector~\cite{Brod:2011re,Cheng:2012wr}. The only observation of CPV in the charm sector was made by the LHCb collaboration in SCS charm meson decays, $D^0\ensuremath{\rightarrow}\xspace h^+h^-$ ($h=K,\,\pi$ throughout this paper)~\cite{LHCb:2019hro}. $\ensuremath{C\!P}\xspace$ asymmetry measurements in SCS charm baryon decays are experimentally more challenging than in charm meson decays and remain relatively unexplored. The direct $\ensuremath{C\!P}\xspace$ asymmetry, taking $\Lcp$ decays as an example, is defined as \begin{eqnarray} \ensuremath{A_{\CP}}\xspace^{\rm dir} = \frac{\Gamma(\Lcp\ensuremath{\rightarrow}\xspace f)-\Gamma(\Lcm\ensuremath{\rightarrow}\xspace \overline{f})}{\Gamma(\Lcp\ensuremath{\rightarrow}\xspace f)+\Gamma(\Lcm\ensuremath{\rightarrow}\xspace\overline{f})}\,, \label{eqn:Acpdir} \end{eqnarray} where $\Gamma(\Lcp\ensuremath{\rightarrow}\xspace f)$ and $\Gamma(\Lcm\ensuremath{\rightarrow}\xspace\overline{f})$ are the partial decay widths for the final state $f$ and its $\ensuremath{C\!P}\xspace$-conjugate state $\overline{f}$. Searches for direct CPV in SCS charm baryon decays were made by LHCb in $\Lcp\ensuremath{\rightarrow}\xspace ph^+h^-$~\cite{LHCb:2017hwf} and $\Xi_c^+\ensuremath{\rightarrow}\xspace p\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace$~\cite{LHCb:2020zkk}. No direct CPV searches in two-body SCS decays of charm baryons have been made to date. Theoretical CPV predictions in two-body decays are more straightforward than in multi-body decays, which are complicated by plentiful intermediate processes. Direct $\ensuremath{C\!P}\xspace$ asymmetry measurements for two-body SCS decays of charm baryons therefore provide useful constraints on theoretical predictions for CPV in the charm baryon sector. Since the $\Lcp$ was discovered, many efforts have been made to predict the BFs of its hadronic decays using phenomenological models such as current algebra~\cite{Uppal:1994pt}, the pole model~\cite{Cheng:2018hwl,Zou:2019kzq} and SU(3)${}_{\rm F}$ symmetry~\cite{Geng:2017esc,Geng:2019xbo}. The non-factorizable contributions in charm baryon decays, arising from $W$-exchange or internal $W$-emission diagrams, play an essential role~\cite{Zou:2019kzq}. For example, only the non-factorizable contribution is present in $\Lambda_c^+\to\Sigma^{0}\Kp$, while the factorizable contribution is dominant in $\Lambda_c^+\to\Lambda\Kp$. Unlike semileptonic decays, which can be calculated precisely, theoretical predictions for the hadronic weak decay rates of charm baryons are nontrivial due to non-perturbative strong dynamics, which complicates the calculation of non-factorizable contributions, and due to the limited knowledge of baryon structure~\cite{Koniuk:1979vy,Cheng:2021qpd}.
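We recall, as a generic illustration of Eq.~(\ref{eqn:Acpdir}) rather than a statement about any particular mode, that a nonzero $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ requires at least two interfering amplitudes with different weak (CKM) phases and different strong phases. Writing the decay amplitude as $A = A_1 + A_2\,e^{i(\delta+\phi)}$ and the $\ensuremath{C\!P}\xspace$-conjugate amplitude as $\overline{A} = A_1 + A_2\,e^{i(\delta-\phi)}$, with real magnitudes $A_{1,2}$, strong-phase difference $\delta$ and weak-phase difference $\phi$, one finds \begin{eqnarray*} \ensuremath{A_{\CP}}\xspace^{\rm dir} = \frac{|A|^2-|\overline{A}|^2}{|A|^2+|\overline{A}|^2} = \frac{-2A_1A_2\sin\delta\sin\phi}{A_1^2+A_2^2+2A_1A_2\cos\delta\cos\phi}\,, \end{eqnarray*} which vanishes if either phase difference is zero.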
Experimentally, studies of charm baryon decays are more challenging than those of charm mesons due to lower production rates. The current world average BFs, $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\Kp)=(6.1\pm 1.2)\times10^{-4}$ and $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\Kp)=(5.2\pm 0.8)\times10^{-4}$~\cite{bib:PDG2022}, rely on measurements with partial datasets from Belle and BaBar~\cite{Belle:2001hyr,BaBar:2006eah}. We perform a measurement with the full Belle dataset. The decay asymmetry parameter $\alpha$ was introduced by Lee and Yang to study the parity-violating and parity-conserving amplitudes in weak hyperon decays~\cite{Lee:1957qs}. In a weak decay of a spin $1/2$ baryon into a spin $1/2$ baryon and a pseudoscalar meson, $\alpha = 2\cdot {\rm Re}(S^{*}P)/(|S|^2 + |P|^2)$, where $S$ and $P$ denote the parity-violating $S$-wave and parity-conserving $P$-wave amplitudes, respectively. A measurement of $\alpha$ in $\Lcp$ decays is a necessary input for various dynamical models. It would also improve our knowledge of the parity-violating and parity-conserving amplitudes in two-body $\Lcp$ decays and of their dynamics: for example, the relative size of the $S$-wave and $P$-wave amplitudes helps to constrain phenomenological models. Most theoretical predictions for $\alpha$ of $\Lambda_c^+\to\Lambda\pip$ are in good agreement with experimental results, while those for $\alpha$ in $\Lcp\ensuremath{\rightarrow}\xspace \Sigma^0\ensuremath{\pi^+}\xspace$ are not~\cite{Uppal:1994pt,Cheng:2018hwl,Zou:2019kzq,Geng:2017esc,Geng:2019xbo,bib:PDG2022}. Experimentally, the $\alpha$-parameters for SCS decays of charm baryons are unexplored. Since $\alpha$ is $\ensuremath{C\!P}\xspace$-odd, the $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry is defined as \begin{eqnarray} \ensuremath{A_{\CP}}\xspace^{\alpha} \equiv \frac{\alpha_{\Lcp} - \widehat{\ensuremath{C\!P}\xspace}\alpha_{\Lcp}\widehat{\ensuremath{C\!P}\xspace}^{\dag}}{\alpha_{\Lcp}+\widehat{\ensuremath{C\!P}\xspace}\alpha_{\Lcp}\widehat{\ensuremath{C\!P}\xspace}^{\dag} } = \frac{\alpha_{\Lcp}+\alpha_{\Lcm}}{\alpha_{\Lcp}-\alpha_{\Lcm}} \,, \label{eqn:AcpAlpha} \end{eqnarray} where $\widehat{\ensuremath{C\!P}\xspace}$ denotes the $\ensuremath{C\!P}\xspace$ conjugation operator. If $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ is zero, $\ensuremath{A_{\CP}}\xspace^{\alpha}$ is determined by the CPV in ${\rm Re}(S^{*}P)$. Therefore, $\ensuremath{A_{\CP}}\xspace^{\alpha}$ provides an observable complementary to the $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ induced by decay widths. To date, there is only one $\ensuremath{A_{\CP}}\xspace^{\alpha}$ measurement for hadronic $\Lcp$ decays, $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lcp\ensuremath{\rightarrow}\xspace\Lambda \ensuremath{\pi^+}\xspace)=-0.07 \pm 0.22$, from the FOCUS experiment~\cite{FOCUS:2005vxq}. Using the high-statistics $\Lcp$ sample at Belle, we make the first measurements of $\ensuremath{A_{\CP}}\xspace^{\alpha}$ in $\Lambda_c^+\to\Lambda\Kp$ and $\Lambda_c^+\to\Sigma^{0} h^+$ decays and measure $\ensuremath{A_{\CP}}\xspace^{\alpha}$ with improved precision in $\Lambda_c^+\to\Lambda\pip$. CPV in hyperon decays is predicted to be at the level of $\mathcal{O}(10^{-4})$ or smaller in the SM~\cite{Donoghue:1985ww,Donoghue:1986hh} and can be enhanced to reach the level of $10^{-3}$ in some new physics models~\cite{Chang:1994wk,He:1999bv,Chen:2001cv,Tandean:2003fr}.
The world average value, $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace)=-0.0024\pm 0.0044$~\cite{bib:PDG2022,BESIII:2022qax}, is dominated by a BESIII measurement in $J/\psi\ensuremath{\rightarrow}\xspace\Lambda\overline{\Lambda}$ based on 10 billion $J/\psi$ events~\cite{BESIII:2022qax}. In this analysis, we search for $\Lambda$-hyperon CPV with the novel and complementary method proposed in Ref.~\cite{Prof.Yu}. The total $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry for the $\Lambda_c^+\to\Lambda\pip,\,\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace$ and $\Lambda_c^+\to\Sigma^{0}\pip,\,\Sigma^0\ensuremath{\rightarrow}\xspace\Lambda\gamma,\,\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace$ decay chains is determined according to \begin{eqnarray} \ensuremath{A_{\CP}}\xspace^{\alpha}({\rm total}) \equiv \frac{\alpha_{\Lcp}\alpha_{-} - \alpha_{\Lcm}\alpha_{+}}{\alpha_{\Lcp}\alpha_{-} +\alpha_{\Lcm} \alpha_{+}}\,, \label{eqn:Acp4Lambda} \end{eqnarray} where $\alpha_-$ and $\alpha_+$ are the decay asymmetry parameters of $\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace$ and $\overline{\Lambda}\ensuremath{\rightarrow}\xspace \overline{p}\ensuremath{\pi^+}\xspace$, respectively~\cite{bib:PDG2022}. Under the assumption that $\alpha_{\Lcp}=-\alpha_{\Lcm}$ for these Cabibbo-favored (CF) $\Lcp$ decays, which is the expected result in the SM, the $\alpha_{\Lcp}$ factors cancel in Eq.~(\ref{eqn:Acp4Lambda}) and ${\ensuremath{A_{\CP}}\xspace^{\alpha}({\rm total})=\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace)}$. In this paper, we report the direct $\ensuremath{C\!P}\xspace$ asymmetries and the BFs for the SCS decays $\Lambda_c^+\to\Lambda\Kp$ and $\Lambda_c^+\to\Sigma^{0}\Kp$, using the CF decays $\Lambda_c^+\to\Lambda\pip$ and $\Lambda_c^+\to\Sigma^{0}\pip$ as reference modes. We also measure $\alpha$ and $\ensuremath{A_{\CP}}\xspace^{\alpha}$ in these four decays and search for $\Lambda$-hyperon CPV in the CF $\Lcp$ decays. Inclusion of charge conjugate states is implicit, unless otherwise stated. \section{Detector and data set} This analysis is based on the full data set recorded by the Belle detector~\cite{belle_detector} operating at the KEKB~\cite{KEKB} asymmetric-energy $e^+e^-$ collider. This data sample corresponds to a total integrated luminosity of 980 $\ensuremath{\mbox{\,fb}^{-1}}\xspace$ collected at or near the $\Upsilon(nS)$ ($n=1,\,2,\,3,\,4,\,5$) resonances. The Belle detector is a large-solid-angle magnetic spectrometer consisting of a silicon vertex detector (SVD), a central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter (ECL) consisting of CsI(Tl) crystals. These components are all located inside a superconducting solenoid coil that provides a 1.5~T magnetic field. The iron flux-return of the magnet is instrumented to detect $K^0_L$ mesons and to identify muons (KLM). The detector is described in detail elsewhere~\cite{belle_detector}. Monte Carlo (MC) simulated events are generated with {\sc{evtgen}}~\cite{Lange:2001uf} and {\sc{pythia}}~\cite{Sjostrand:2000wi}, and are subsequently processed through a full detector simulation based on {\sc{geant3}}~\cite{Brun:1987ma}. Final-state radiation from charged particles is included at event generation using {\sc{photos}}~\cite{Barberio:1993qi}.
Signal $\Lcp$ baryons are produced via the inclusive process $e^+e^-\ensuremath{\rightarrow}\xspace{}c\bar{c}\ensuremath{\rightarrow}\xspace\Lcp+{\rm anything}$ and are reconstructed in the $\Lambda_c^+\to\Lambda h^+$ and $\Lambda_c^+\to\Sigma^0h^+$ decay modes, where $\Sigma^0\ensuremath{\rightarrow}\xspace\Lambda\gamma$ and $\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace$. \section{Measurement methods} The raw asymmetry in the decays of $\Lcp\ensuremath{\rightarrow}\xspace f$ and $\Lcm\ensuremath{\rightarrow}\xspace\overline{f}$ is defined with signal yields $N$ as follows: \begin{eqnarray} A_{\rm raw} = \dfrac{N(\Lcp\ensuremath{\rightarrow}\xspace f)-N(\Lcm\ensuremath{\rightarrow}\xspace \overline{f})}{N(\Lcp\ensuremath{\rightarrow}\xspace f) + N(\Lcm\ensuremath{\rightarrow}\xspace \overline{f})}\,. \label{eqn:acp} \end{eqnarray} Several sources contribute to the raw asymmetry, which for $\Lambda_c^+\to\Lambda\Kp$ is given by \begin{eqnarray} \hskip-15pt A_{\rm raw} & = & A_{\rm FB}^{\Lcp} + \ensuremath{A_{\CP}}\xspace^{\Lambda_c^+\to\Lambda\Kp} + \ensuremath{A_{\CP}}\xspace^{\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace} + A_{\varepsilon}^{\Lambda} + A_{\varepsilon}^{\ensuremath{K^+}\xspace}, \end{eqnarray} where all terms are small (${<1\%}$): $A_{\rm FB}^{\Lcp}$ is the forward-backward asymmetry of $\Lcp$ production due to $\gamma$-$Z^0$ interference and higher-order QED effects in ${\ensuremath{e^+e^-}\xspace\ensuremath{\rightarrow}\xspace c\ensuremath{\overline c}\xspace}$ collisions~\cite{Brown:1973ji}; $\ensuremath{A_{\CP}}\xspace^{\Lambda_c^+\to\Lambda\Kp}$ is the direct $\ensuremath{C\!P}\xspace$ asymmetry associated with the $\Lcp$ decay; $\ensuremath{A_{\CP}}\xspace^{\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace}$ is the direct $\ensuremath{C\!P}\xspace$ asymmetry associated with the $\Lambda$ decay; $A_{\varepsilon}^{\Lambda}$ is the detection asymmetry resulting from differences in the reconstruction efficiency between $\Lambda$ and $\overline{\Lambda}$; and $A_{\varepsilon}^{\ensuremath{K^+}\xspace}$ is the detection asymmetry resulting from differences in reconstruction efficiencies between $K^+$ and its anti-particle $K^-$. The reference mode $\Lambda_c^+\to\Lambda\pip$ and signal mode have nearly the same $\Lambda$ kinematic distributions, including the $\Lambda$ decay length, the polar angle with respect to the direction opposite the positron beam, and the momenta of the proton and pion in the laboratory reference frame. The asymmetries common to the reference and signal modes therefore cancel in the difference of their raw asymmetries, leaving only the difference of the direct $\ensuremath{C\!P}\xspace$ asymmetries of the two modes and the difference of the $\ensuremath{K^+}\xspace$ and $\ensuremath{\pi^+}\xspace$ detection asymmetries. We weight $\Lambda_c^{\pm}$ candidates with factors $1\mp A_{\varepsilon}^{h^+}$ to remove the $\ensuremath{K^+}\xspace$ or $\ensuremath{\pi^+}\xspace$ detection asymmetry from the raw asymmetry in $\Lambda_c^+\to\Lambda\Kp$ and $\Lambda_c^+\to\Lambda\pip$.
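To make the effect of the per-candidate weighting concrete, the following minimal Python sketch uses invented numbers (a flat detection asymmetry instead of the kinematics-dependent $A_{\varepsilon}^{h^+}$ map used in the analysis) and shows that weighting by $1\mp A_{\varepsilon}$ removes a detection asymmetry from the raw asymmetry at first order.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs (illustrative only, not the measured values)
A_phys = 0.02    # physics asymmetry to be recovered
A_eps  = 0.01    # flat h+ detection asymmetry
N0     = 1.0e6   # average yield per charge

# Yields for Lc+ -> f and Lc- -> fbar, biased by the detection asymmetry
N_p = rng.poisson(N0 * (1 + A_phys) * (1 + A_eps))
N_m = rng.poisson(N0 * (1 - A_phys) * (1 - A_eps))

def asym(a, b):
    return (a - b) / (a + b)

print("uncorrected:", asym(N_p, N_m))            # ~ A_phys + A_eps
print("corrected:  ", asym(N_p * (1 - A_eps),    # weight Lc+ by 1 - A_eps
                           N_m * (1 + A_eps)))   # weight Lc- by 1 + A_eps
\end{verbatim}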
The detection asymmetry $A_{\varepsilon}^{h^+}$ depends on the cosine of the polar angle and on the transverse momentum of the $h^+$ track in the laboratory frame; it was determined at Belle using $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace$ and $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\phi\ensuremath{\pi^+}\xspace$ events for $A_{\varepsilon}^{\ensuremath{K^+}\xspace}$~\cite{Belle:2012ygx} and $\ensuremath{D^+}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\pip$ and $\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^0}\xspace$ events for $A_{\varepsilon}^{\ensuremath{\pi^+}\xspace}$~\cite{Belle:2012ygt}. The difference of the corrected raw asymmetries is \begin{eqnarray} A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Lambda\Kp)-A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Lambda\pip) \nonumber \\ = \ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp) - \ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\pip)\,. \label{eqn:DeltaAcp} \end{eqnarray} Since, in the SM, $\ensuremath{C\!P}\xspace$ is well conserved in CF charm decays that do not involve a $K_S^0$ or $K_L^0$ in the final state, $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\pip)$ can be set to zero. Thus, the measured asymmetry difference in Eq.(\ref{eqn:DeltaAcp}) is equal to $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ for $\Lambda_c^+\to\Lambda\Kp$. The branching fractions of signal modes are measured relative to those of the reference modes using \begin{eqnarray} \frac{\mathcal{B}_{\rm sig}}{\mathcal{B}_{\rm ref}} = \frac{N_{\rm sig}/\varepsilon_{\rm sig}}{ N_{\rm ref}/\varepsilon_{\rm ref} }\,, \label{eqn:BRratio} \end{eqnarray} where $N_{\rm sig}$ is the extracted signal yield and $\ensuremath{\varepsilon}\xspace$ is the reconstruction efficiency. The world average values $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\pip)=(1.30\pm0.07)\%$ and $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\pip)=(1.29\pm0.07)\%$~\cite{bib:PDG2022} are used for the reference modes. Systematic uncertainties common to the signal and reference modes, such as those associated with the inclusive $\Lcp$ yield produced in $e^+e^-\ensuremath{\rightarrow}\xspace c\ensuremath{\overline c}\xspace$ events and with the $\Lambda$ and $\Sigma^0$ mass resolutions, cancel in the ratio. For $\Lambda_c^+\to\Lambda h^+$ decays, the differential decay rate depends on the $\alpha$ parameters and on a single helicity angle as \begin{eqnarray} \frac{dN}{d\cos\theta_{\Lambda}} \propto 1+\alpha_{\Lcp}\alpha_{-}\cos\theta_{\Lambda}\,, \label{eqn:alpha_LcToLamHp} \end{eqnarray} where $\alpha_{\Lcp}$ is the decay asymmetry parameter of $\Lambda_c^+\to\Lambda h^+$, and $\theta_{\Lambda}$ is the angle between the proton momentum and the direction opposite the $\Lcp$ momentum in the $\Lambda$ rest frame, as illustrated in Fig.~\ref{fig:Helicity} (top).
For $\Lambda_c^+\to\Sigma^{0} h^+$ decays, since $\alpha(\Sigma^0\ensuremath{\rightarrow}\xspace\Lambda\gamma)$ is zero because parity is conserved in this electromagnetic decay, the differential decay rate as a function of the $\alpha$ parameters and the helicity angles is given by \begin{eqnarray} \frac{dN}{d\cos\theta_{\Sigma^0}d\cos\theta_{\Lambda}} \propto 1 - \alpha_{\Lcp}\alpha_{-}\cos\theta_{\Sigma^{0}}\cos\theta_{\Lambda}\,, \label{eqn:alpha_LcToSigHp} \end{eqnarray} where $\theta_{\Lambda}$ ($\theta_{\Sigma^0}$) is the angle between the proton ($\Lambda$) momentum and the direction opposite the $\Sigma^0$ ($\Lcp$) momentum in the $\Lambda$ ($\Sigma^0$) rest frame, as illustrated in Fig.~\ref{fig:Helicity} (bottom). \begin{figure}[!htbp] \begin{centering}% \includegraphics[width=0.5\textwidth]{Helicity.pdf}% \vskip-10pt \caption{\label{fig:Helicity}Schematic plot showing the helicity angles: (top) $\theta_{\Lcp}$ and $\theta_{\Lambda}$ in $\Lambda_c^+\ensuremath{\rightarrow}\xspace\Lambda\ensuremath{\pi^+}\xspace,\,\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace$; and (bottom) $\theta_{\Sigma^0}$ and $\theta_{\Lambda}$ in $\Lambda_c^+\ensuremath{\rightarrow}\xspace\Sigma^0\ensuremath{\pi^+}\xspace,\,\Sigma^0\ensuremath{\rightarrow}\xspace\gamma\Lambda,\,\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace$.} \end{centering} \end{figure} \section{Event selection and optimization} We improve the invariant-mass resolution by applying a mass-difference correction wherever the final state includes a hyperon. Taking $\Lambda_c^+\to\Lambda h^+$ as an example, the corrected mass is $M(\Lcp)=M_{\Lcp}-M_{\Lambda}+m_{\Lambda}$, where $M_{X}$ is the invariant mass of the reconstructed particle $X$ and $m_X$ is its nominal mass~\cite{bib:PDG2022}. The event selection criteria are optimized with a figure-of-merit (FOM), defined as $S/\sqrt{S+B}$, where $S$ and $B$ are the expected signal and background yields in the signal region. The signal region is defined as $|M(\Lcp)-m_{\Lcp}|<15$~MeV/$c^2$, corresponding to 2.5 standard deviations of the $M(\Lcp)$ resolution. The particle identification (PID) likelihood for a given particle hypothesis, $\mathcal{L}_i$ ($i=\pi,\,K,\,p$), is calculated from the photon yield in the ACC, energy-loss measurements in the CDC, and time-of-flight information from the TOF~\cite{Nakano:2002jw}. Charged tracks satisfying $\mathcal{R}(K|\pi)=\mathcal{L}_K/(\mathcal{L}_K+\mathcal{L}_{\pi})>0.7$ are identified as kaons. All other tracks are identified as pions. Highly proton-like tracks, with $\mathcal{R}(p|K)>0.8$ for the signal modes and $\mathcal{R}(p|\pi)>0.8$ for the reference modes, are rejected as $h^+$ candidates. To suppress the background from $\Lambda_c^+$ semileptonic decays, tracks that are highly electron-like ($\mathcal{L}_e/(\mathcal{L}_{e}+\mathcal{L}_{{\rm non-}e})>0.95$) or muon-like ($\mathcal{L}_{\mu}/(\mathcal{L}_{\mu}+\mathcal{L}_{\pi}+\mathcal{L}_{K})>0.95$) are rejected. The electron and muon likelihoods depend primarily on the information from the ECL and KLM, respectively~\cite{Hanagaki:2001fz,Abashian:2002bd}. These PID requirements have signal efficiencies of about $83\%$ and $96\%$ for the signal and reference modes, with background-rejection rates of 44\% and 9\%, respectively. We require the $h^+$ candidate to have at least two hits in the SVD to improve its impact parameter resolution with respect to the interaction point.
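For concreteness, the figure-of-merit optimization described above can be sketched as a one-dimensional scan over a selection requirement; the yields and efficiency curves in the following Python snippet are invented placeholders, not the values used in the analysis.
\begin{verbatim}
import numpy as np

# Scan a generic selection requirement and keep the value that maximizes
# the figure of merit S/sqrt(S+B).  All numbers here are toy placeholders.
cuts = np.linspace(0.1, 0.9, 17)
S0, B0 = 1.0e4, 5.0e4               # expected yields before the requirement
eff_sig = 1.0 - 0.25 * cuts         # toy signal efficiency vs. cut value
eff_bkg = np.exp(-3.0 * cuts)       # toy background efficiency vs. cut value

fom = S0 * eff_sig / np.sqrt(S0 * eff_sig + B0 * eff_bkg)
best = cuts[np.argmax(fom)]
print(f"optimal requirement: > {best:.2f}  (FOM = {fom.max():.1f})")
\end{verbatim}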
The $\Lambda$ candidates are reconstructed from one $p$ and one $\pi$ candidate that are required by a vertex fit to originate from a common vertex. We require $|M_{\Lambda}-m_{\Lambda}|<3$ MeV/$c^2$, corresponding to approximately 2.5 standard deviations of the $M_{\Lambda}$ resolution. Proton candidates are required to have $\mathcal{R}(p|K)>0.2$. To suppress the non-$\Lambda$ background, we calculate the significance of the $\Lambda$ decay length ($L/\sigma_L$), where $L$ is the projection of the $\Lambda$ displacement vector, relative to the production vertex, onto its momentum direction. The corresponding uncertainty $\sigma_L$ is calculated by propagating uncertainties in the vertices and the $\Lambda$ momentum, including their correlations. We require $L/\sigma_L>4$. The signal efficiency loss due to this requirement is 5\% for all decay modes, and the background-rejection rate is 22\% for $\Lambda_c^+\to\Lambda\Kp$, 35\% for $\Lambda_c^+\to\Lambda\pip$, 19\% for $\Lambda_c^+\to\Sigma^{0}\Kp$, and 23\% for $\Lambda_c^+\to\Sigma^{0}\pip$. Photon candidates are identified as energy clusters in the ECL that are not associated with any charged track. The ratio of the energy deposited in the 3$\times$3 array of crystals centered on the crystal with the highest energy to the energy deposited in the corresponding 5$\times$5 array is required to be greater than 0.85. Candidate $\Sigma^0\ensuremath{\rightarrow}\xspace\Lambda\gamma$ decays are formed by combining the $\Lambda$ candidate with a photon candidate that has an ECL cluster energy above 0.1 GeV. The $\Sigma^0$ candidate is required to have $|M(\Sigma^0)-m_{\Sigma^0}|<6$ MeV/$c^2$, corresponding to 1.5 standard deviations of the $M(\Sigma^0)$ resolution. Candidate $\Lambda_c^+\to\Lambda h^+$ and $\Lambda_c^+\to\Sigma^{0} h^+$ decays are reconstructed by combining a $\Lambda$ or $\Sigma^0$ candidate with an $h^+$ candidate. A fit constrains the $\Lambda$ and $h^+$ candidates to originate from a common vertex, and the $\chi^2$ of the fit is required to be less than 9. To suppress combinatorial backgrounds, the normalized momentum $x_p=p^{*}c/\sqrt{s/4-M^2(\Lcp)\cdot{c}^{4}}$ is required to be greater than 0.5, where $p^{*}$ is the $\Lcp$ momentum in the $\ensuremath{e^+e^-}\xspace$ center-of-mass frame and $\sqrt{s}$ is the center-of-mass energy. After applying the optimized requirements, the $\Lcp$ candidate multiplicity is greater than one for $1\%$, $7\%$, $7\%$, and $11\%$ of events for $\Lambda_c^+\to\Lambda\Kp,\,\Lambda\ensuremath{\pi^+}\xspace,\,\Sigma^0\ensuremath{K^+}\xspace$, and $\Sigma^0\ensuremath{\pi^+}\xspace$, respectively. For modes including a $\Sigma^0$, the multiplicity arises predominantly from multiple photon candidates. We perform a best candidate selection (BCS) for events with multiple candidates by retaining, for $\Lambda_c^+\to\Lambda h^+$ modes, the candidate with the smallest sum of the $\chi^2$ values from the vertex fits of the $\Lambda$ and $\Lcp$ candidates. For $\Lambda_c^+\to\Sigma^{0} h^+$ modes, an additional term, $(M(\Sigma^0)-m_{\Sigma^0})^2/\sigma_{M}^{2}$, where $\sigma_M=4$ MeV/$c^2$ is the $\Sigma^0$ mass resolution, is added to this sum. The BCS has a signal efficiency of 60\% for events with multiple candidates and does not introduce any peaking backgrounds.
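The best-candidate selection just described can be sketched as follows; the candidate container and field names are hypothetical placeholders, and the $\Sigma^0$ mass-pull term is only included for the $\Lambda_c^+\to\Sigma^0 h^+$ modes.
\begin{verbatim}
# Schematic best-candidate selection: keep the candidate with the smallest
# chi2(Lambda vertex) + chi2(Lc vertex), plus a Sigma0 mass pull for the
# Sigma0 modes.  Field names are hypothetical placeholders.
M_SIGMA0 = 1.192642   # GeV/c^2, nominal Sigma0 mass
SIGMA_M  = 0.004      # GeV/c^2, Sigma0 mass resolution quoted in the text

def bcs_score(cand, has_sigma0=False):
    score = cand["chi2_lambda_vtx"] + cand["chi2_lc_vtx"]
    if has_sigma0:
        score += ((cand["m_sigma0"] - M_SIGMA0) / SIGMA_M) ** 2
    return score

def select_best(candidates, has_sigma0=False):
    return min(candidates, key=lambda c: bcs_score(c, has_sigma0))

# Example with two hypothetical candidates sharing one event:
event = [
    {"chi2_lambda_vtx": 1.2, "chi2_lc_vtx": 3.0, "m_sigma0": 1.195},
    {"chi2_lambda_vtx": 0.8, "chi2_lc_vtx": 2.1, "m_sigma0": 1.190},
]
print(select_best(event, has_sigma0=True))
\end{verbatim}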
\section{Direct $\ensuremath{C\!P}\xspace$ asymmetry}\label{sec:BR} The signal probability density function (PDF) is a sum of three or four asymmetric Gaussian functions for the SCS or CF modes, respectively. These Gaussian functions share a common mean parameter but have different width parameters. For modes that include a $\Sigma^0$, an additional component, denoted the broken-$\Sigma^0$ signal, is added to the signal model; it corresponds to the signal decay with the $\gamma$ from $\Sigma^0\ensuremath{\rightarrow}\xspace\Lambda\gamma$ replaced by a random photon in the event, and its shape and its ratio to the total signal are fixed to the results of a fit to the MC sample. The signal parameters are fixed to the fitted results of the truth-matched signal, but with a common shift ($\delta_\mu$) for the mean parameter and a common scaling factor ($k_{\sigma}$) for all width parameters to account for discrepancies between the experimental data and simulated samples. The background PDF is constructed from a sum of empirical shapes based on truth-matched background events in simulation and a second-order polynomial function for $\Lambda_c^+\to\Lambda\Kp$ or a third-order polynomial for the other modes. For $\Lambda_c^+\to\Lambda\Kp$, the empirical backgrounds include $\Lambda_c^+\to\Lambda\pip$ decays with the $\ensuremath{\pi^+}\xspace$ misidentified as a $\ensuremath{K^+}\xspace$, a feed-down background from $\Lambda_c^+\to\Sigma^{0}\Kp$ with a missing $\gamma$, and a broad enhancement from $\Lambda_c^+\to\Sigma^{0}\pip$ with a misidentified $\ensuremath{\pi^+}\xspace$ and a missing $\gamma$. For $\Lambda_c^+\to\Lambda\pip$, the empirical backgrounds include a feed-down background from $\Lambda_c^+\to\Sigma^{0}\pip$, and a feed-down $\Xi_c$ background from $\Xi_c^{0,+}\ensuremath{\rightarrow}\xspace\Xi^{-,0}\ensuremath{\pi^+}\xspace$ where $\Xi^{-,0}\ensuremath{\rightarrow}\xspace\Lambda\pi^{-,0}$ with one missing pion. For $\Lambda_c^+\to\Sigma^{0}\Kp$, the empirical backgrounds include a background from $\Lambda_c^+\to\Sigma^{0}\pip$ with a misidentified $\ensuremath{\pi^+}\xspace$ and a feed-down background from $\Lcp\ensuremath{\rightarrow}\xspace\Xi^0\ensuremath{K^+}\xspace$ where $\Xi^0\ensuremath{\rightarrow}\xspace\Lambda\ensuremath{\pi^0}\xspace,\,\ensuremath{\pi^0}\xspace\ensuremath{\rightarrow}\xspace\gamma\gamma$ with one missing photon. For $\Lambda_c^+\to\Sigma^{0}\pip$, the empirical backgrounds include a reflection background from $\Lambda_c^+\to\Lambda\pip$ where the $\Lambda$ is combined with a random $\gamma$ to form a fake $\Sigma^0$ candidate. The yield of each component and the parameters of the polynomial functions are floated to account for differences between the experimental data and the simulated samples. We perform an unbinned extended maximum likelihood fit to the $M(\Lambda_c^{\pm})$ distributions of the weighted $\Lcp$ and $\Lcm$ samples simultaneously to measure the corrected raw asymmetry differences. In the fit, the mass resolutions of the $\Lcp$ and $\Lcm$ samples are allowed to differ. The fit projections are shown in Fig.~\ref{fig:CPasym_Final1} for $\Lambda_c^+\to\Lambda h^+$ and in Fig.~\ref{fig:CPasym_Final2} for $\Lambda_c^+\to\Sigma^{0} h^+$, along with the distribution of pull values, defined as $(N_{\rm data}-N_{\rm fit})/\sqrt{N_{\rm data}}$. The fitted $A_{\rm raw}^{\rm corr}$ values are \begin{eqnarray} A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Lambda\Kp) & = & (+3.66 \pm 2.59)\%\,, \\ A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Lambda\pip) & = & (+1.55 \pm 0.30)\%\,, \\ A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Sigma^{0}\Kp) & = & (+8.60 \pm 5.34)\%\,, \\ A_{\rm raw}^{\rm corr}(\Lambda_c^+\to\Sigma^{0}\pip) & = & (+6.11 \pm 0.40)\%\,.
\end{eqnarray} Using Eq.(\ref{eqn:DeltaAcp}), we measure the $\ensuremath{C\!P}\xspace$ asymmetries: \begin{eqnarray} \ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp) & = & (+2.1 \pm 2.6 \pm 0.1)\% \,, \label{eqn:Adir1}\\ \ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Sigma^{0}\Kp) & = & (+2.5 \pm 5.4 \pm 0.4)\% \,, \label{eqn:Adir2} \end{eqnarray} where the first uncertainties are statistical and the second are systematic, which are discussed in detail below. No evidence of charm $\ensuremath{C\!P}\xspace$ violation is found. This is the first direct $\ensuremath{C\!P}\xspace$ asymmetry measurement for SCS two-body decays of charm baryons. \begin{figure*}[!hbtp] \begin{centering} \begin{overpic}[width=0.48\textwidth]{MassLc_LcToLamKp_exp_Final_Lcp_diffResol.eps}% \put(20,65){$\Lambda_c^+\to\Lambda\Kp$} \end{overpic}% \begin{overpic}[width=0.48\textwidth]{MassLc_LcToLamKp_exp_Final_Lcm_diffResol.eps}% \put(20,65){$\Lcm\to\overline{\Lambda}\Km$} \end{overpic}\\ \begin{overpic}[width=0.48\textwidth]{MassLc_LcToLamPip_exp_Final_Lcp_diffResol.eps}% \put(20,65){$\Lambda_c^+\to\Lambda\pip$} \end{overpic}% \begin{overpic}[width=0.48\textwidth]{MassLc_LcToLamPip_exp_Final_Lcm_diffResol.eps}% \put(20,65){$\Lcm\to\overline{\Lambda}\pim$} \end{overpic} \vskip-5pt \caption{\label{fig:CPasym_Final1}The simultaneous fit to $\Lcp$ (left) and $\Lcm$ (right) samples from real data for $\Lambda_c^+\to\Lambda\Kp$ (top) and $\Lambda_c^+\to\Lambda\pip$ (bottom). The red curve is the total fit result. The dashed lines show the components of signal and backgrounds (see text).} \end{centering} \end{figure*} \begin{figure*}[!hbtp] \begin{centering} \begin{overpic}[width=0.48\textwidth]{MassLc_LcToSigKp_exp_Final_Lcp_diffResol.eps}% \put(20,65){$\Lambda_c^+\to\Sigma^{0}\Kp$} \end{overpic}% \begin{overpic}[width=0.48\textwidth]{MassLc_LcToSigKp_exp_Final_Lcm_diffResol.eps}% \put(20,65){$\Lcm\to\overline{\Sigma}{}^{0}\Km$} \end{overpic}\\ \begin{overpic}[width=0.48\textwidth]{MassLc_LcToSigPip_exp_4bkg_Final_Lcp_diffResol_noXic.eps}% \put(20,65){$\Lambda_c^+\to\Sigma^{0}\pip$} \end{overpic}% \begin{overpic}[width=0.48\textwidth]{MassLc_LcToSigPip_exp_4bkg_Final_Lcm_diffResol_noXic.eps}% \put(20,65){$\Lcm\to\overline{\Sigma}{}^{0}\pim$} \end{overpic} \vskip-5pt \caption{\label{fig:CPasym_Final2}The simultaneous fit to $\Lcp$ (left) and $\Lcm$ (right) samples from real data for $\Lambda_c^+\to\Sigma^{0}\Kp$ (top) and $\Lambda_c^+\to\Sigma^{0}\pip$ (bottom). The red curve is the total fit result. The dashed lines show the components of signal and backgrounds (see text).} \end{centering} \end{figure*} \section{Branching fraction} To measure the branching fraction, we perform a fit to the $M(\Lcp)$ distribution for the combined $\Lcp$ and $\Lcm$ sample. The fitted signal yields are listed in Table~\ref{tab:BR}, along with the reconstruction efficiency ratio for the SCS modes relative to the CF modes. The efficiency is determined based on signal MC events, which are generated with angular distributions corresponding to our measured $\alpha$ values. An event-by-event correction (typically 0.3\% and 2.8\%) is applied to account for discrepancies in the $K^+$ and $\pi^+$ PID efficiencies between data and simulation. These correction factors depend on the momentum and polar angle of the tracks and are determined using a sample of $D^{*+}\ensuremath{\rightarrow}\xspace[D^0\ensuremath{\rightarrow}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace]\ensuremath{\pi^+}\xspace$ decays.
Additional details are given in the supplementary materials. \begin{table}[!htbp] \begin{centering} \caption{\label{tab:BR}The fitted yields ($N_{\rm sig}$), efficiency ratios ($\ensuremath{\varepsilon}\xspace_{\rm sig}/\ensuremath{\varepsilon}\xspace_{\rm ref}$), and branching-fraction ratios ($\ensuremath{\mathcal{B}}\xspace_{\rm sig}/\ensuremath{\mathcal{B}}\xspace_{\rm ref}$) for the signal modes ($\Lambda_c^+\to\Lambda\Kp,\,\Sigma^0\ensuremath{K^+}\xspace$) relative to the reference modes ($\Lambda_c^+\to\Lambda\pip,\,\Sigma^0\ensuremath{\pi^+}\xspace$), compared with the world average values (W.A.)~\cite{bib:PDG2022}.} \begin{lrbox}{\tablebox} \begin{tabular}{ccccc} \hline Channel & $N_{\rm sig}$ & $\ensuremath{\varepsilon}\xspace_{\rm sig}/\ensuremath{\varepsilon}\xspace_{\rm ref}$ & $\ensuremath{\mathcal{B}}\xspace_{\rm sig}/\ensuremath{\mathcal{B}}\xspace_{\rm ref}$ (\%) & W.A. (\%) \\ \hline $\Lambda_c^+\to\Lambda\Kp$ & $\,\,\,11175 \pm 296$ & \multirow{2}{*}{$0.836$} & \multirow{2}{*}{$5.05\pm 0.13\pm 0.09$} & \multirow{2}{*}{$4.7\pm 0.9$} \\ $\Lambda_c^+\to\Lambda\pip$ & $264470 \pm 787$ & & & \\ \hline $\Lambda_c^+\to\Sigma^{0}\Kp$ & $\,\,\,\,\,\,2436 \pm 132$ & \multirow{2}{*}{$0.835$} & \multirow{2}{*}{$2.78\pm 0.15\pm 0.05$} & \multirow{2}{*}{$4.0\pm 0.6$} \\ $\Lambda_c^+\to\Sigma^{0}\pip$ & $105018 \pm 475$ & & & \\ \hline \end{tabular} \end{lrbox} \scalebox{0.95}{\usebox{\tablebox}} \end{centering} \end{table} Using the fitted yields and efficiency ratios, we calculate the branching fraction ratios according to Eq.(\ref{eqn:BRratio}) as \begin{eqnarray} \frac{\mathcal{B}(\Lambda_c^+\to\Lambda\Kp)}{\mathcal{B}(\Lambda_c^+\to\Lambda\pip)} & = & (5.05\pm 0.13\pm 0.09)\%\,, \\ \frac{\mathcal{B}(\Lambda_c^+\to\Sigma^{0}\Kp)}{\mathcal{B}(\Lambda_c^+\to\Sigma^{0}\pip)} & = & (2.78\pm 0.15\pm 0.05)\%\,, \end{eqnarray} where the first uncertainties are statistical and the second are systematic. Systematic uncertainties are described in detail in Sec.~\ref{sec:sys}. Multiplying these ratios by the world average branching fraction of the appropriate reference mode, $\mathcal{B}(\Lambda_c^+\to\Lambda\pip)=(1.30\pm0.07)\%$ and $\mathcal{B}(\Lambda_c^+\to\Sigma^{0}\pip)=(1.29\pm0.07)\%$~\cite{bib:PDG2022}, we measure the absolute branching fractions for the SCS decays, \begin{eqnarray} \ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\Kp) & = & (6.57\pm 0.17\pm 0.11\pm \nonumber \\ & & \quad \quad \quad \quad 0.35)\times 10^{-4}\,, \\ \ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\Kp) & = & (3.58\pm 0.19\pm 0.06\pm \nonumber \\ & & \quad \quad \quad \quad 0.19)\times 10^{-4}\,, \end{eqnarray} where the first uncertainties are statistical, the second are systematic, and the third are from the uncertainties on the branching fractions for the reference modes~\cite{bib:PDG2022}. These results are consistent with current world average values~\cite{bib:PDG2022}, but with significantly improved precision.
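As a numerical cross-check of Eq.~(\ref{eqn:BRratio}), the ratios quoted above can be reproduced from the central values in Table~\ref{tab:BR}; the short Python script below is purely illustrative and does not propagate uncertainties.
\begin{verbatim}
# Branching-fraction ratios from the yields and efficiency ratios above
N = {"LamK": 11175, "LamPi": 264470, "SigK": 2436, "SigPi": 105018}
eff_ratio = {"Lam": 0.836, "Sig": 0.835}        # eps_sig / eps_ref

r_Lam = (N["LamK"] / N["LamPi"]) / eff_ratio["Lam"]
r_Sig = (N["SigK"] / N["SigPi"]) / eff_ratio["Sig"]
print(f"B(LamK+)/B(LamPi+) = {100 * r_Lam:.2f}%")    # ~5.05%
print(f"B(SigK+)/B(SigPi+) = {100 * r_Sig:.2f}%")    # ~2.78%

# Absolute branching fractions using the reference-mode world averages
print(f"B(Lc->Lam K+)  = {r_Lam * 1.30e-2:.2e}")     # ~6.6e-4
print(f"B(Lc->Sig0 K+) = {r_Sig * 1.29e-2:.2e}")     # ~3.6e-4
\end{verbatim}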
\section{Decay asymmetry parameter $\alpha$} To extract the $\alpha$ parameter, the $\cos\theta_{\Lambda}$ distributions of $\Lambda_c^+\to\Lambda h^+$ modes are divided into 10 bins of uniform width. The $\cos\theta_{\Sigma^0}$ versus $\cos\theta_{\Lambda}$ distributions for $\Lambda_c^+\to\Sigma^{0} h^+$ modes are similarly divided into 5$\times$5 bins for $\Lambda_c^+\to\Sigma^{0}\Kp$ and 6$\times$6 bins for $\Lambda_c^+\to\Sigma^{0}\pip$, since the latter mode has a much larger yield. To extract the per-bin yield, we fit the $M(\Lcp)$ distribution with the signal parameters and background polynomial parameters fixed according to the fit to the full sample integrated over helicity angles. In the $\Lambda_c^+\to\Sigma^{0} h^+$ modes, the ratio of the broken-$\Sigma^0$ signal to the total signal, which depends on the $\cos\theta_{\Sigma^0}$ bin, is fixed to the truth-matched result from simulation. In the $\Lambda_c^+\to\Sigma^{0}\pip$ mode, the shape of the reflection background from $\Lambda_c^+\to\Lambda\pip$ is found to depend on $\cos\theta_{\Sigma^0}$, and its shape in each bin is fixed to the result of a fit to simulation. The fitted signal yields are corrected bin-by-bin with the signal efficiencies, which are determined based on signal MC events produced with our measured angular distribution. These distributions are fitted according to Eqs.~(\ref{eqn:alpha_LcToLamHp}) and (\ref{eqn:alpha_LcToSigHp}), and the fit results are shown in Fig.~\ref{fig:AlphaFinal_LcToLamHp} for $\Lambda_c^+\to\Lambda h^+$ and Fig.~\ref{fig:AlphaFinal_LcToSigHp} for $\Lambda_c^+\to\Sigma^{0} h^+$. The fitted slope factors ($\alpha_{\Lcp}\alpha_{-}$) are \begin{eqnarray} \alpha_{\rm avg}(\Lambda_c^+\to\Lambda\Kp)\cdot\alpha_{-}^{\rm avg} & = & -0.441\pm 0.037\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Lambda\pip)\cdot\alpha_{-}^{\rm avg} & = & -0.570\pm 0.004\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Sigma^{0}\Kp)\cdot\alpha_{-}^{\rm avg} & = & -0.41\phantom{0}\pm 0.14\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Sigma^{0}\pip)\cdot\alpha_{-}^{\rm avg} & = & -0.354\pm 0.012\,, \end{eqnarray} where only statistical uncertainties are given. The subscript (superscript) `avg' denotes the averaged $\alpha$ value for the combined $\Lcp$ ($\Lambda$) and $\Lcm$ ($\overline{\Lambda}$) decays. Dividing these results by the most precise measurement, $\alpha_{-}^{\rm avg}=0.7542\pm 0.0022$ from BESIII~\cite{BESIII:2022qax}, gives the final decay asymmetry parameters $\alpha_{\rm avg}$ for the combined $\Lcp$ and $\Lcm$ sample, \begin{eqnarray} \alpha_{\rm avg}(\Lambda_c^+\to\Lambda\Kp) & = & -0.585\pm 0.049 \pm 0.018\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Lambda\pip) & = & -0.755\pm 0.005 \pm 0.003\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Sigma^{0}\Kp) & = & -0.55\phantom{0} \pm 0.18\phantom{0}\pm 0.09\,, \\ \alpha_{\rm avg}(\Lambda_c^+\to\Sigma^{0}\pip) & = & -0.463\pm 0.016 \pm 0.008\,, \end{eqnarray} where the first uncertainties are statistical and the second are systematic, which are described in detail in Sec.~\ref{sec:sys}. The measured values of $\alpha$ for the $\Lambda_c^+\to\Lambda\Kp$ and $\Lambda_c^+\to\Sigma^{0}\Kp$ modes are the first $\alpha$ results for SCS decays of charm baryons. The measured values of $\alpha$ for the $\Lambda_c^+\to\Lambda\pip$ and $\Lambda_c^+\to\Sigma^{0}\pip$ modes are consistent with the current world average values~\cite{bib:PDG2022}, but with significantly improved precision.
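A minimal sketch of this binned extraction for a $\Lambda_c^+\to\Lambda h^+$ mode is given below; the bin contents are generated toy values rather than the measured yields, and in the analysis each bin entry comes from an $M(\Lcp)$ fit rather than from a simple count.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Fit efficiency-corrected yields in 10 uniform cos(theta_Lambda) bins with
# N(cos) = N0 * (1 + slope * cos), where slope = alpha_Lc * alpha_minus.
rng = np.random.default_rng(7)
centers = np.linspace(-0.9, 0.9, 10)        # bin centers for 10 bins
true_N0, true_slope = 1.0e4, -0.57          # toy inputs
y = rng.poisson(true_N0 * (1 + true_slope * centers)).astype(float)
yerr = np.sqrt(y)

def model(c, N0, slope):
    return N0 * (1 + slope * c)

popt, pcov = curve_fit(model, centers, y, p0=[1e4, -0.5],
                       sigma=yerr, absolute_sigma=True)
slope, slope_err = popt[1], np.sqrt(pcov[1][1])

alpha_minus = 0.7542                        # BESIII average used above
print(f"alpha_Lc = {slope / alpha_minus:.3f} "
      f"+/- {slope_err / alpha_minus:.3f}")
\end{verbatim}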
\begin{figure}[!hbtp] \begin{centering} \hskip-10pt \begin{overpic}[width=0.25\textwidth]{alphaFit_Data_Final_LcToLamKp.eps}% \put(52,65){\small{$\Lambda_c^+\to\Lambda\Kp$}}% \end{overpic}% \begin{overpic}[width=0.25\textwidth]{alphaFit_Data_Final_LcToLamPip_wXicBkg.eps}% \put(52,65){\small{$\Lambda_c^+\to\Lambda\pip$}}% \end{overpic}% \vskip-10pt \caption{\label{fig:AlphaFinal_LcToLamHp}The $\cos\theta_{\Lambda}$ distributions of $\Lambda_c^+\to\Lambda\Kp$ and $\Lambda_c^+\to\Lambda\pip$ and their charge-conjugate decays after efficiency correction. The red curves show the fit results with the $\chi^2$ divided by the number of degrees of freedom, $\chi^2/9=0.42$ and 1.05, respectively.} \end{centering} \end{figure} \begin{figure}[!hbtp] \begin{centering} \hskip-10pt \begin{overpic}[width=0.25\textwidth]{alphaFit0_Data_Final_LcToSigKp.eps}% \put(25,73){\small{$\Lambda_c^+\to\Sigma^{0}\Kp$}}% \end{overpic}% \begin{overpic}[width=0.25\textwidth]{alphaFit_Data_Final_LcToSigKp.eps}% \put(25,73){\small{fit result}}% \end{overpic}\\ \vskip8pt \hskip-10pt \begin{overpic}[width=0.25\textwidth]{alphaFit0_Data_Final_LcToSigPip.eps}% \put(25,73){\small{$\Lambda_c^+\to\Sigma^{0}\pip$}}% \end{overpic}% \begin{overpic}[width=0.25\textwidth]{alphaFit_Data_Final_LcToSigPip.eps}% \put(25,73){\small{fit result}}% \end{overpic}% \vskip-10pt \caption{\label{fig:AlphaFinal_LcToSigHp}The left figures show the $[\cos\theta_{\Sigma^0},\,\cos\theta_{\Lambda}]$ distributions of $\Lambda_c^+\to\Sigma^{0}\Kp$ and $\Lambda_c^+\to\Sigma^{0}\pip$ and their charge-conjugate decays after efficiency correction; the right figures show the fit results for the left figures with the $\chi^2$ divided by the number of degrees of freedom, $\chi^2/24=0.87$ and $\chi^2/35=1.45$, respectively.} \end{centering} \end{figure} \section{$\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry} We separate the $\Lcp$ and $\Lcm$ samples and measure $\alpha_{\Lcp}$ and $\alpha_{\Lcm}$ with the same method described above. The signal shape parameters for individual bins of helicity angles are fixed to the fitted results in the full sample integrated over helicity angles for $\Lcp$ and $\Lcm$ separately. The helicity angle distributions for the $\Lcp$ and $\Lcm$ samples are fitted separately, and the fitted slope factors, $\alpha_{\Lcp}\alpha_{-}$ and $\alpha_{\Lcm}\alpha_{+}$, are listed in Table~\ref{tab:alpha_AcpAlpha}. Additional details are given in the supplementary materials. Using the precise results $\alpha_{-}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace)=0.7519\pm 0.0041$ and $\alpha_{+}(\overline{\Lambda}\ensuremath{\rightarrow}\xspace \overline{p}\ensuremath{\pi^+}\xspace)=-0.7559\pm 0.0046$ measured by BESIII~\cite{BESIII:2022qax}, we measure four $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetries as listed in Table~\ref{tab:alpha_AcpAlpha}, where the $\ensuremath{A_{\CP}}\xspace^{\alpha}$ values for $\Lambda_c^+\to\Lambda\Kp$, $\Lambda_c^+\to\Sigma^{0}\Kp$, and $\Lambda_c^+\to\Sigma^{0}\pip$ are measured for the first time. The measured $\ensuremath{A_{\CP}}\xspace^{\alpha}$ for $\Lambda_c^+\to\Lambda\pip$ is consistent with previous results, but with much better precision.
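As an illustration of Eq.~(\ref{eqn:AcpAlpha}), the $\ensuremath{A_{\CP}}\xspace^{\alpha}$ values in Table~\ref{tab:alpha_AcpAlpha} can be reproduced, up to rounding, from the quoted $\alpha_{\Lcp}$ and $\alpha_{\Lcm}$ central values; the short script below is illustrative only and does not propagate uncertainties or correlations.
\begin{verbatim}
# A_CP^alpha = (alpha_{Lc+} + alpha_{Lc-}) / (alpha_{Lc+} - alpha_{Lc-})
alphas = {
    "Lc -> Lam K+":   (-0.566, 0.592),
    "Lc -> Lam pi+":  (-0.784, 0.754),
    "Lc -> Sig0 K+":  (-0.58,  0.49),
    "Lc -> Sig0 pi+": (-0.452, 0.473),
}
for mode, (a_lcp, a_lcm) in alphas.items():
    acp = (a_lcp + a_lcm) / (a_lcp - a_lcm)
    print(f"{mode:15s}  A_CP^alpha = {acp:+.3f}")
\end{verbatim}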
\begin{table*} \begin{centering} \caption{\label{tab:alpha_AcpAlpha}The fitted slopes $\alpha_{\Lambda_c^{\pm}}\alpha_{\mp}$ for the $\Lcp$ and $\Lcm$ samples, the decay asymmetry parameters $\alpha_{\Lcp}$ and $\alpha_{\Lcm}$ obtained for the individual $\Lcp$ and $\Lcm$ samples using the most precise recent $\alpha_{\mp}$ measurements from BESIII~\cite{BESIII:2022qax}, and the corresponding $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetries $\ensuremath{A_{\CP}}\xspace^{\alpha}$, compared with the current world averages (W.A.)~\cite{bib:PDG2022}.} \begin{lrbox}{\tablebox} \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccccccc} \hline Channel & $\alpha_{\Lambda_c^{+}}\alpha_{-}$ & $\alpha_{\Lambda_c^{-}}\alpha_{+}$ & $\alpha_{\Lcp}$ & $\alpha_{\Lcm}$ & $\ensuremath{A_{\CP}}\xspace^{\alpha}$ & W.A. $\ensuremath{A_{\CP}}\xspace^{\alpha}$ \\ \hline $\Lambda_c^+\to\Lambda\Kp$ & $-0.418\pm 0.053$ & $-0.442\pm 0.053$ & $-0.566\pm 0.071\pm 0.028$ & $0.592\pm 0.070\pm 0.079$ & $-0.023\pm 0.086\pm 0.071$ & -- \\ $\Lambda_c^+\to\Lambda\pip$ & $-0.582\pm 0.006$ & $-0.565\pm 0.006$ & $-0.784\pm 0.008\pm 0.006$ & $0.754\pm 0.008\pm 0.018$ & $+0.020\pm 0.007\pm 0.013$ & $-0.07\pm0.22$ \\ $\Lambda_c^+\to\Sigma^{0}\Kp$ & $-0.43\phantom{0}\pm0.18\phantom{0}$ & $-0.37\phantom{0}\pm0.21\phantom{0}$ & $-0.58\phantom{0}\pm0.24\phantom{0}\pm 0.09\phantom{0}$ & $0.49\phantom{0}\pm0.28\phantom{0}\pm 0.14\phantom{0}$ & $+0.08\phantom{0}\pm0.35\phantom{0}\pm 0.14\phantom{0}$ & -- \\ $\Lambda_c^+\to\Sigma^{0}\pip$ & $-0.340\pm 0.016$ & $-0.358\pm 0.017$ & $-0.452\pm 0.022\pm 0.023$ & $0.473\pm 0.023\pm 0.035$ & $-0.023\pm 0.034\pm 0.030$ & -- \\ \hline \end{tabular} } \end{lrbox} \scalebox{0.94}{\usebox{\tablebox}} \end{centering} \end{table*} We search for hyperon CPV in $\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace$ according to Eq.(\ref{eqn:Acp4Lambda}). Using the fitted slopes $\alpha_{\Lcp}\alpha_{-}$ and $\alpha_{\Lcm}\alpha_{+}$ from Fig.~\ref{fig:AlphaFinal_LcToLamHp} for $\Lambda_c^+\to\Lambda\pip$ and from Fig.~\ref{fig:AlphaFinal_LcToSigHp} for $\Lambda_c^+\to\Sigma^{0}\pip$, the $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry of $\Lambda\ensuremath{\rightarrow}\xspace{}p\ensuremath{\pi^-}\xspace$ is measured to be ${+0.0169\pm 0.0073 \pm 0.0120 }$ in $\Lambda_c^+\to\Lambda\pip$ and ${-0.026\pm 0.034\pm 0.030 }$ in $\Lambda_c^+\to\Sigma^{0}\pip$. Finally, their average value is calculated to be \begin{eqnarray} \ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace) = +0.013\pm 0.007 \pm 0.011\,. \end{eqnarray} This is the first search for hyperon CPV in CF charm decays. No evidence of $\Lambda$-hyperon CPV is found. \section{Systematic uncertainties}\label{sec:sys} Most of the systematic uncertainties for the direct $\ensuremath{C\!P}\xspace$ asymmetry cancel since they affect both $\Lcp$ and $\Lcm$ decays. The remaining sources of systematic uncertainty are listed in Table~\ref{tab:sysAcp}. The uncertainty due to each charged-track asymmetry map is evaluated by varying the asymmetry value bin-by-bin by its uncertainty ($\pm1\sigma$) and repeating the $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ measurement. The resulting deviations from the nominal $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ value are added in quadrature for positive and negative shifts, separately, and assigned as a systematic uncertainty.
We sample the parameters of the signal PDF, which are fixed in the nominal fit, from a multivariate Gaussian distribution that accounts for their uncertainties and correlations, and we re-fit for the signal yield. The procedure is repeated 1000 times, and the root-mean-square of the distribution of fitted yields is taken as the systematic uncertainty due to the fixed parameters. To allow for the different background shapes for the $\Lcp$ and $\Lcm$ candidates, the background parameters are allowed to differ. The difference of the fitted results relative to the nominal results is assigned as a systematic uncertainty. We check for a possible fit bias with a linearity test for $\ensuremath{A_{\CP}}\xspace^{\rm dir}$, using toy MC samples generated with five different $A_{\rm raw}^{\rm corr}$ values per channel. A linear fit is applied to the measured $A_{\rm raw}^{\rm corr}$ distribution versus the generated values. The fitted slopes are consistent with unity, indicating no fit bias. The relative shift between the fitted linear function and the nominal value is taken as a systematic uncertainty. The total systematic uncertainty is determined from the sum of all contributions in quadrature to be ${}^{+1.2}_{-0.7}\times10^{-3}$ for $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp)$ and ${}^{+3.0}_{-4.2}\times10^{-3}$ for $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Sigma^{0}\Kp)$. Since the statistical uncertainties on the $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ results are larger than $1\%$, we assign $0.1\%$ and $0.4\%$ as the final systematic uncertainties of $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp)$ and $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Sigma^{0}\Kp)$, respectively. \begin{table}[!htpb] \begin{centering} \caption{\label{tab:sysAcp}Absolute systematic uncertainties (in units of $10^{-3}$) on the $\ensuremath{C\!P}\xspace$ asymmetries $\ensuremath{A_{\CP}}\xspace^{\rm dir}$.} \renewcommand\arraystretch{1.2} \begin{tabular}{ccc} \hline Sources & $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp)$ & $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Sigma^{0}\Kp)$ \\ \hline $A_{\varepsilon}^{\ensuremath{K^+}\xspace}$ map & ${}^{+0.8}_{-0.2}$ & $\pm0.4$ \\ $A_{\varepsilon}^{\ensuremath{\pi^+}\xspace}$ map & $\pm0.4$ & ${}^{+0.5}_{-2.5}$ \\ Signal shape & $\pm0.5$ & $\pm1.4$ \\ Background shape & $-0.2$ & $-3.1$ \\ Fit bias & $+0.6$ & $+2.6$ \\ \hline Total & $^{+1.2}_{-0.7}$ & $^{+3.0}_{-4.2}$ \\ \hline \end{tabular} \end{centering} \end{table} For the branching fraction ratio measurement, most systematic uncertainties cancel since they affect both the signal and reference modes. The remaining systematic uncertainties are listed in Table~\ref{tab:sysBR}. Using the $D^{*+}\ensuremath{\rightarrow}\xspace[\ensuremath{D^0}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace]\ensuremath{\pi^+}\xspace$ control sample, the PID uncertainties are estimated to be 0.9\% for $\Lambda_c^+\to\Lambda\Kp$, 0.8\% for $\Lambda_c^+\to\Lambda\pip$, 0.9\% for $\Lambda_c^+\to\Sigma^{0}\Kp$, and 0.8\% for $\Lambda_c^+\to\Sigma^{0}\pip$. Since the kaon and pion PID efficiency corrections use the same control sample, the two contributions are added linearly, and we assign 1.7\% as the systematic uncertainty for both branching fraction ratios.
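To make the fixed-parameter (signal-shape) systematic described at the beginning of this section concrete, a schematic version of the resampling is sketched below; the covariance matrix, the central values, and the yield response are invented placeholders standing in for the actual unbinned $M(\Lcp)$ fit.
\begin{verbatim}
import numpy as np

# Schematic fixed-parameter systematic: draw the fixed shape parameters from
# a multivariate Gaussian built from their fit covariance, re-fit the yield
# each time, and take the RMS of the resulting yields.
rng = np.random.default_rng(3)
central = np.array([0.0, 1.0])                 # e.g. (delta_mu, k_sigma)
covariance = np.array([[0.2**2, 0.0],
                       [0.0,    0.02**2]])

def refit_yield(params):
    # Placeholder for the M(Lc) fit with these shape parameters: here the
    # yield is assumed to respond linearly to the shape variations.
    delta_mu, k_sigma = params
    return 1.0e4 * (1 + 0.01 * delta_mu + 0.05 * (k_sigma - 1))

yields = [refit_yield(rng.multivariate_normal(central, covariance))
          for _ in range(1000)]
print("systematic (RMS of refitted yields):", np.std(yields))
\end{verbatim}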
The systematic uncertainties associated with the fixed parameters in the signal-yield fit are determined according to the same method as for $\ensuremath{A_{\CP}}\xspace^{\rm dir}$ to be 0.2\% and 0.4\% for the $\Lambda h^+$ and $\Sigma^0 h^+$ modes, respectively. In modes that include a $\Sigma^0$, the broken-$\Sigma^0$ signal has a ratio to the total signal that is fixed based on MC simulation. The $M(\Lcp)$ distributions of the MC sample and of the experimental data in the $M(\Sigma^0)$ sideband region have nearly the same shape, which suggests that the MC simulation describes the broken-$\Sigma^0$ signal reliably. We vary this ratio in the $M(\Lcp)$ fit by $\pm10\%$ and assign the larger deviation, $0.1\%$, as a conservative estimate of the uncertainty. We consider the effects of the $\Xi_c$ background shape in the two CF modes by parameterizing it separately from the other backgrounds. The difference in the fitted signal yield is 0.3\% for $\Lambda_c^+\to\Lambda\pip$ and 0.1\% for $\Lambda_c^+\to\Sigma^{0}\pip$. Since the fraction of events with multiple candidates is small for the $\Lambda_c^+\to\Lambda h^+$ modes, we remove events with multiple candidates and repeat the measurement. For modes that include a $\Sigma^0$, an alternative BCS method is applied to select the candidate with the highest-momentum $\gamma$ from the $\Sigma^0$ decay. The resulting changes in the branching fraction measurement are assigned as systematic uncertainties. The $\alpha$ value used in signal MC production is varied by its uncertainty and the resulting change in the efficiency is assigned as a systematic uncertainty. A systematic uncertainty due to limited MC statistics is also considered. The total systematic uncertainty is determined by adding the uncertainties from all sources in quadrature, as given in Table~\ref{tab:sysBR}. \begin{table}[!htpb] \begin{centering} \caption{\label{tab:sysBR}Relative systematic uncertainties (in units of \%) for the branching fraction ratios.} \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{ccc} \hline Sources & $\frac{\mathcal{B}(\Lambda_c^+\to\Lambda\Kp)}{\mathcal{B}(\Lambda_c^+\to\Lambda\pip)}$ & $\frac{\mathcal{B}(\Lambda_c^+\to\Sigma^{0}\Kp)}{\mathcal{B}(\Lambda_c^+\to\Sigma^{0}\pip)}$ \\ \hline PID efficiency correction & 1.7 & 1.7 \\ Signal shape & 0.2 & 0.4 \\ Background shape & 0.3 & 0.1 \\ BCS effect & 0.1 & 0.4 \\ Efficiency ratio & 0.2 & 0.4 \\ \hline Total & 1.7 & 1.8 \\ \hline \end{tabular}} \end{centering} \end{table} For the $\alpha$ and $\ensuremath{A_{\CP}}\xspace^{\alpha}$ measurements, we consider the systematic uncertainties due to the number of helicity angle bins, the efficiency curve, the fit bias, and the quoted uncertainty on $\alpha_{\mp}$. We change the number of helicity angle bins from 10 to 8 or 12 for $\Lambda_c^+\to\Lambda h^+$, from 5$\times$5 to 4$\times$4 or 6$\times$6 for $\Lambda_c^+\to\Sigma^{0}\Kp$, and from 6$\times$6 to 5$\times$5 or 7$\times$7 for $\Lambda_c^+\to\Sigma^{0}\pip$. The $\alpha$ value used in signal MC production is varied by its uncertainty. The resulting changes in $\alpha$ or $\ensuremath{A_{\CP}}\xspace^{\alpha}$ are assigned as systematic uncertainties. We consider the possible fit bias for $\alpha$ and $\ensuremath{A_{\CP}}\xspace^{\alpha}$ with a linearity test, in which we replace the signal events in the MC sample with events generated with angular distributions corresponding to five different $\alpha$ values. A linear fit is applied to the measured $\alpha$ distribution versus the generated values. The fitted slopes are consistent with unity, indicating no fit bias.
The relative shift between the fitted linear function and the nominal value is taken as a systematic uncertainty. The quoted uncertainties on $\alpha_{-}^{\rm avg}$ and $\alpha_{\mp}$ of $\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace$ are propagated and assigned as systematic uncertainties. The total systematic uncertainties for $\alpha_{\rm avg}$/$\alpha_{\Lcp}$/$\alpha_{\Lcm}$/$A_{\ensuremath{C\!P}\xspace}^{\alpha}$/$\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda)$ are taken as the sum in quadrature of all contributions, as listed in Table~\ref{tab:sysAlphaAcp}. \begin{table*}[!htpb] \begin{centering} \caption{\label{tab:sysAlphaAcp}Absolute systematic uncertainties (in units of $10^{-2}$) for decay asymmetry parameters and the $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetries: $\alpha_{\rm avg}$/$\alpha_{\Lcp}$/$\alpha_{\Lcm}$/$A_{\ensuremath{C\!P}\xspace}^{\alpha}$ in each decay mode (the fifth entries in the $\Lambda_c^+\to\Lambda\pip$ and $\Sigma^0\ensuremath{\pi^+}\xspace$ columns are for $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda)$).} \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{4mm}{ \begin{tabular}{lcccc} \hline Sources & $\Lambda_c^+\to\Lambda\Kp$ & $\Lambda_c^+\to\Lambda\pip$ & $\Lambda_c^+\to\Sigma^{0}\Kp$ & $\Lambda_c^+\to\Sigma^{0}\pip$ \\ \hline $\cos\theta$ bins & 0.8/1.5/1.5/1.4 & 0.0/0.1/0.2/0.2/0.16 & 7.4/6.9/\phantom{0}9.9/\phantom{0}8.4 & 0.7/1.5/3.3/1.9/1.9 \\ Efficiency curve & 0.2/0.5/0.1/0.4 & 0.2/0.3/0.1/0.3/0.29 & 1.4/1.1/\phantom{0}1.8/\phantom{0}0.9 & 0.1/0.5/0.4/0.9/0.9 \\ Fit bias & 1.6/2.3/7.7/6.9 & 0.2/0.3/1.7/1.2/1.15 & 4.1/5.7/10.3/11.7 & 0.4/1.7/0.9/2.1/2.1 \\ $\alpha_{\mp}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace)$ & 0.2/0.3/0.4/0.4 & 0.2/0.4/0.5/0.4/--\phantom{.}\phantom{0}\phantom{0} & 0.2/0.3/\phantom{0}0.3/\phantom{0}0.4 & 0.1/0.2/0.3/0.4/--\phantom{.}\phantom{0} \\ \hline Total & 1.8/2.8/7.9/7.1 & 0.3/0.6/1.8/1.3/1.20 & 8.6/9.0/14.4/14.4 & 0.8/2.3/3.5/3.0/3.0 \\ \hline \end{tabular}} \end{centering} \end{table*} \section{Summary} In conclusion, based on the 980 $\ensuremath{\mbox{\,fb}^{-1}}\xspace$ data set collected with the Belle detector, we make the first measurements of direct $\ensuremath{C\!P}\xspace$ asymmetries in SCS two-body decays of charm baryons, $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Lambda\Kp) = +0.021 \pm 0.026 \pm 0.001$ and $\ensuremath{A_{\CP}}\xspace^{\rm dir}(\Lambda_c^+\to\Sigma^{0}\Kp) = +0.025 \pm 0.054 \pm 0.004$. The relative branching fractions are measured to be $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\Kp)/\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\pip) = (5.05\pm 0.13 \pm 0.09)\%$ and $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\Kp)/\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\pip) = (2.78\pm 0.15 \pm 0.05)\%$, which supersede previous Belle measurements~\cite{Belle:2001hyr}. Using the world average values for the branching fractions for $\Lambda_c^+\to\Lambda\pip$ and $\Lambda_c^+\to\Sigma^{0}\pip$, we measure $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Lambda\Kp) = [6.57\pm 0.17 \pm 0.11 \pm 0.35 ]\times 10^{-4}$ and $\ensuremath{\mathcal{B}}\xspace(\Lambda_c^+\to\Sigma^{0}\Kp) = [3.58\pm 0.19 \pm 0.06 \pm 0.19 ]\times 10^{-4}$. These results are the most precise to date and significantly improve the precision of the world average values~\cite{bib:PDG2022}.
We measure the averaged decay asymmetry parameters $\alpha(\Lcp\ensuremath{\rightarrow}\xspace\Lambda\ensuremath{K^+}\xspace) = -0.585\pm 0.049 \pm 0.018$ and $\alpha(\Lcp\ensuremath{\rightarrow}\xspace\Sigma^0\ensuremath{K^+}\xspace) = -0.55 \pm 0.18 \pm 0.09$ for the first time. We measure $\alpha(\Lcp\ensuremath{\rightarrow}\xspace\Lambda\ensuremath{\pi^+}\xspace) = -0.755\pm 0.005 \pm 0.003$ and $\alpha(\Lcp\ensuremath{\rightarrow}\xspace\Sigma^0\ensuremath{\pi^+}\xspace) = -0.463\pm 0.016 \pm 0.008 $, which are consistent with previous measurements~\cite{bib:PDG2022} but with improved precision. We also determine the $\alpha$-parameter for $\Lcp$ and $\Lcm$ individually and search for CPV via the $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry, as listed in Table~\ref{tab:alpha_AcpAlpha}. These results include the first measurements of $\ensuremath{A_{\CP}}\xspace^{\alpha}$ for SCS decays of charm baryons, $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda_c^+\to\Lambda\Kp)=-0.023\pm0.086\pm0.071$ and $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda_c^+\to\Sigma^{0}\Kp)= +0.08\pm 0.35 \pm 0.14$. We search for $\Lambda$-hyperon CPV via the $\alpha$-induced $\ensuremath{C\!P}\xspace$ asymmetry in $\Lambda_c^+\to\Lambda\pip$ and $\Lambda_c^+\to\Sigma^{0}\pip$ decays, and determine $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace) = +0.013\pm 0.007\pm 0.011$ by combining the two modes. No evidence of baryon $\ensuremath{C\!P}\xspace$ violation is found. The method used in our $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Lambda\ensuremath{\rightarrow}\xspace p\ensuremath{\pi^-}\xspace)$ measurement can be applied to other hyperons, such as $\ensuremath{A_{\CP}}\xspace^{\alpha}(\Xi^{0,-}\ensuremath{\rightarrow}\xspace\Lambda\pi^{0,-})$ in $\Lcp\ensuremath{\rightarrow}\xspace\Xi^0\ensuremath{K^+}\xspace$ and $\Xi_c^{+,0}\ensuremath{\rightarrow}\xspace\Xi^{0,-}\pi^+$. Our measurement is a milestone for hyperon CPV searches in CF charm decays, and this method is promising for precise measurements of hyperon CPV at Belle II and LHCb. \section*{Conflict of interest} The authors declare that they have no conflict of interest. \section*{Acknowledgments} \input{ack.tex} \section*{Appendix A. Supplementary materials} Supplementary materials to this article can be found online at xxxxx (to be added by the publisher).
\section{Introduction} The discipline of star formation, in all of its aspects, has always been of paramount importance in astrophysics: lying at the intersection of several fields of study, it stretches from the grandest scale of galactic evolution, passing through the intricate web of the interstellar medium, to the dusty scenery of primordial discs, where stellar furnaces mildly begin to shimmer and glow. The presence of active star formation in our Galaxy \citep{1999ARAA..37..311E} provides the opportunity to gaze at the process while it unfolds before our eyes: in recent years, several surveys have been performed to study the environment of galactic star-forming regions \citep[e.g.,][]{2009PASP..121..213C}, the formation of multiple systems \citep[e.g.,][]{2011ApJ...731....8K}, or even to directly test the predictions from models of planetary formation \citep[e.g.,][]{2021aa...646A.164J}. It has long been known \citep{1954LIACo...5..293A} that, following the collapse and fragmentation of gigantic structures called {\it{molecular clouds}}, a plethora of stars ($N=10$ to $10^5$) begins to form; initially concealed by the dusty envelope from which they are born, they rapidly \citep[2-7 Myr;][]{2021MNRAS.504..487K} divest themselves of it by means of harsh stellar winds and ionising radiation, mostly originating from massive stars; HII regions, the impressive product of the irremediable alteration of the original cloud, are ruthlessly sculpted by the injection of energy and momentum from exploding supernovae \citep{2020MNRAS.498.4906B}; after just a few Myr, the region is virtually devoid of its original gas reservoir \citep{2001MNRAS.321..699K}. The abrupt change within the stellar natal environment is proven by observations showing that, while at $t<5$ Myr stars are often still embedded in their parent cloud, after 10 Myr only $\sim 10\%$ of stars are found in bound clusters \citep{2003ARAA..41...57L}. What we witness as an {\it{association}} for just --in cosmic terms-- the blink of an eye \citep[10-100 Myr,][]{2016EAS....80...73M}, is therefore a young system, still inhabited by bright, ephemeral OB stars and regions of active star formation \citep{2001MNRAS.321..699K}. The kinematic signature of members of clusters and associations, first recognized in the Hyades cluster and in Ursa Major as early as the 19th century \citep{1869RSPS...18..169P}, is slowly eroded as the galactic differential rotation and tides spread the stars, turning them into moving groups or streams \citep{2002ASPC..285..442L}. Given that the observed densities of associations are too low to give rise to significant close encounters and scatterings, their initial velocity structure can largely be conserved over the timescale of several Myr \citep{2018MNRAS.476..381W}, as the above-mentioned perturbation induced by the galactic tidal field is expected to begin dominating on timescales of $\sim 10^7$ yr \citep{2018MNRAS.476..381W}. If this is the case, we may use our present knowledge of an association to delve into its past. The first attempt in this direction was made by \cite{1946PGro...52....1B,1964ARaa...2..213B}, who devised a simple linear expansion model, where all the sibling stars move away at a constant pace from their natal position. By tracing back their motion, it is in principle possible to obtain an estimate of the association age in a way that is independent of stellar evolution models.
However, a quantitative assessment of the expansion model has long been considered elusive due to difficulties, on the one hand, in distinguishing real members from interlopers and, on the other hand, in obtaining precise measurements of stellar distances and motions. This is why the idea has largely been shelved for decades \citep{1997MNRAS.285..479B}, even though the notion of OB associations as the inflated outcome of compact clusters remained popular \citep{2003ARAA..41...57L}. Ultra-precise astrometry from Gaia \citep{2016aa...595A...1G}, covering, in its latest release, almost 2 billion sources all over the sky, providing reliable optical photometry up to $G\approx 21$ mag and, most importantly, supplying the astronomical community with astrometry and proper motions of unprecedented accuracy, has been revolutionising our knowledge of the Galaxy. An extensive search for new members of the known associations has been undertaken, reaching, for the nearest ones, the hydrogen-burning limit \citep{2018ApJ...862..138G}, and opening up a kaleidoscope of opportunities for kinematic studies. Especially when combined with radial velocities from external catalogues, Gaia is able to delve deeper than ever into the core of association architectures, unearthing exquisite fragments of their history. Some examples include the Gamma Velorum cluster, showing two distinct kinematic components \citep{2014aa...563A..94J}, Taurus \citep{2017ApJ...838..150K}, Cygnus OB2 and its complex substructures \citep{2016MNRAS.460.2593W}, and even smaller structures like the TWA moving group \citep{2014aa...563A.121D}. OB associations, in particular, are spectacularly confirming the expectation that their kinematic substructure is reminiscent of their initial structure \citep[e.g.,][]{1981MNRAS.194..809L,2016MNRAS.460.2593W}. The same complexity emerges when studying the geometry and the internal motion of molecular clouds \citep{1991ApJ...378..186F,2013aa...554A..55H}: starting from structure analysis of prestellar cores \citep{2020aa...638A..74L} and from the variegated shapes taken by filaments and filamentary networks in the early phase of stellar formation \citep{2018aa...610A..77H,2021arXiv210404541H} that give rise to distinct stellar populations \citep[e.g.,][]{2019aa...621A..42A}, it is natural to think that the complex, fractal structure is inherited by young stars \citep{2001AJ....121.1507E,2008ApJ...674..336G}. In this regard, the nearest OB association to the Sun, Scorpius-Centaurus (Sco-Cen), naturally stands as an ideal benchmark. Spanning an enormous area of approximately $80^\circ \times 40^\circ$ in the sky, this very young \citep[$t < 20$ Myr,][]{2012ApJ...746..154P} region comprises a few sites still actively forming stars \citep{2008hsf2.book..351W}. The exquisite combination of proximity, youth and low extinction, allowing detection of members down to the brown dwarf regime, has made it the target of many studies in the last decades \citep[e.g.,][]{1999AJ....117..354D,2002AAS...200.7114M} involving binaries \citep[e.g.,][]{2013ApJ...773..170J}, primordial discs \citep[e.g.,][]{2006ApJ...651L..49C}, high mass \citep{2012ApJ...756..133C}, Sun-like \citep[e.g.,][]{2012ApJ...746..154P} and low-mass stars \citep[e.g.,][]{2013MNRAS.431.3222L}, and even young planets \citep[e.g.,][]{2021aa...646A.164J}. The star formation history of Sco-Cen is closely related to its spatial structure. 
Sco-Cen is classically divided into three main subgroups \citep{1946PGro...52....1B,1999AJ....117..354D}: going toward lower galactic longitude, Upper Scorpius (USCO), Upper Centaurus-Lupus (UCL) and Lower Centaurus-Crux (LCC). Both density and age have a spatial gradient, with USCO being more compact and younger than UCL and LCC \citep{2016MNRAS.461..794P}. This intriguing observation led \cite{1999AJ....117.2381P} to put forward the idea of triggered star formation, where the process, started in LCC, gradually expanded eastward by means of supernova shocks, causing star formation in USCO after some Myr. USCO itself is thought to have triggered a minor burst of star formation in the $\rho$ Ophiuchi complex, which is still ongoing \citep{2008hsf2.book..351W}. However, the picture is complicated, and the three subgroups appear to be composed of many smaller entities, each bearing a peculiar mark while being conditioned by feedback from the surrounding environment \citep{2018MNRAS.476..381W}. This tension between nature and nurture has given renewed impetus to the idea of investigating the substructure of the three subgroups to gain insight into their star formation histories. The presence of at least a certain degree of substructure is evident even upon visual inspection in the youngest part of Sco-Cen, USCO. This rather compact \citep[$98\times24\times18$ pc$^3$,][]{2018MNRAS.477L..50G} region of Sco-Cen, home to the bright Antares \citep{2013aa...555A..24O}, has received considerable attention in its own right due to the interplay of kinematic and age peculiarities. Notably, a consistent age determination for USCO has long been elusive. While the first photometric studies argued for an age of $\sim 5$ Myr with no significant spatial and temporal spread \citep{1999AJ....117.2381P,2002AJ....124..404P}, recent work has increasingly favoured an older age ($t \sim 11$ Myr) with a significant spread \citep[$\Delta t \sim 7$ Myr,][]{2012ApJ...746..154P}; the debate on the dependence of the latter on position \citep{2016MNRAS.461..794P}, spectral class \citep{2016ApJ...817..164R}, systematic artefacts due to stellar models \citep{2016aa...593A..99F} or an extended star formation history \citep{2017ApJ...842..123F} has been lively in recent years. In this context, insights from kinematic studies are pivotal to shed light on the problem. While the first studies, limited to a few bright members, could only aim at assessing a single common expansion age \citep{1978ppeu.book..101B} --a solid lower limit of $\sim 10$ Myr, in this regard, was set by \cite{2012ApJ...746..154P}--, nowadays we do have the means to investigate the whole kinematic substructure of USCO. The paper is organized as follows: after defining the selection criteria for our sample of USCO stars, together with the astrometric, kinematic and photometric data employed throughout this work (Sect.~\ref{section:data}), we introduce in Sect.~\ref{section:analysis} our tool, \textsc{madys}, and apply it to the region to recover and characterize the dual kinematic substructure found within the association. Sect.~\ref{section:age_determination} is dedicated to the age determination of the clustered and diffuse populations, conducted in a threefold way. In Sect.~\ref{section:discussion}, we discuss our results within the framework of previous studies of the region, with particular emphasis on its star formation history. Finally, in Sect.~\ref{section:conclusions} we provide a brief summary of the results of this work. 
Appendices~\ref{section:excess_factor} and \ref{section:proj_effects} explore in greater detail two quantities introduced to correctly handle data coming from Gaia: namely, a quality cut defined to exclude unreliable $G_{BP}$ and $G_{RP}$ photometric measurements and a set of corrections to remove the fraction of individual proper motions due to the reflection of the relative motion of USCO with respect to the Sun. \section{Data} \label{section:data} \subsection{Sample selection} Motivated by the idea of exploiting the full potential of the latest Gaia release \citep[EDR3,][]{2020arXiv201201533G}, we decided to construct a novel sample of USCO sources, independent of the DR2-based samples already present in the literature \citep[e.g.,][]{2020AJ....160...44L}. A preliminary deep query was performed in a region virtually encompassing the whole Upper Scorpius, employing just minimal cuts on astrometry ($\alpha$, $\delta$, $\pi$) and kinematics ($\mu_\alpha^*=\mu_\alpha \cos\delta$, $\mu_\delta$) to exclude stars either from the field or belonging to the nearby Upper Centaurus-Lupus (UCL) subgroup (Tab.~\ref{tab:criteria}). No attempt was made to remove, as in many previous studies, sources belonging to the nearby Rho Ophiuchi region (from this moment
on, $\rho$ Ophiuchi) since we intend to explore in detail its relation with the bulk of USCO. We will simply refer to our sample as {\it{USCO}}. Membership in USCO has been defined operationally, by inspecting the 5D phase space $(\alpha, \delta, \pi, v_\alpha, v_\delta)$, with \begin{align} v_\alpha \text{ [km s$^{-1}$]} &= A \cdot \mu_\alpha^*/\pi \\ v_\delta \text{ [km s$^{-1}$]} &= A \cdot \mu_\delta/\pi \end{align} \noindent with $A=4.74$ km yr s$^{-1}$ being the conversion factor between AU yr$^{-1}$ and km s$^{-1}$. The plane-of-sky (tangential) velocities $v_\alpha$ and $v_\delta$ are more suitable than proper-motion components in a region of non-negligible radial depth ($ \Delta r \sim 50$ pc), now that parallax uncertainties are no longer as limiting as they were in the pre-Gaia era. A clear concentration of sources emerges, distinguishing USCO from the field (Fig.~\ref{fig:phase_space}). A comparison with an independent sample, the DR2-based catalogue of Sco-Cen members by \cite{2019aa...623A.112D}\footnote{Actually, with the subsample defined by the same cuts on $\alpha$ and $\delta$ as our sample in order to exclude UCL members. Only their {\it{bona fide}} members were considered.}, yielded excellent agreement: out of their 2330 stars, 2129 ($\sim 91 \%$) were recovered; the fraction would have risen to 2298/2330 ($\sim 98\%$) had we employed their cut on the minimum distance ($\pi<10$ mas). However, we opted for a more conservative $\pi<8$ mas so as to keep field contamination low. \begin{figure*} \centering \includegraphics[width=17cm]{sample_selection.pdf} \caption{Detection of USCO (red) within the 5D phase space. Only field stars (black) with $236^\circ<\alpha<251^\circ$, $-29^\circ<\delta<-16^\circ$, $G<20$ mag and $\sigma_\pi/\pi<0.1$ are shown for the sake of clarity.} \label{fig:phase_space} \end{figure*} We consider those stars --the only additional caveats being $G<20$ and $\sigma_\pi / \pi < 0.1 $-- as our final sample (which we will call {\it{2D sample}}). The complete set of defining criteria is summarized in Table~\ref{tab:criteria}, while the sky distribution of the sample, comprising 2745 stars, is shown in Fig.~\ref{fig:coord}. \begin{figure*} \centering \includegraphics[width=17cm]{damiani_scocen.pdf} \caption{Sco-Cen bona fide members from \protect\cite{2019aa...623A.112D}, shown in black. Upper Scorpius can easily be distinguished in the upper left. The sample used throughout this work and defined by the cuts of Table \ref{tab:criteria} is displayed in red. The criteria on right ascension and declination define the region bordered by the dashed lines.} \label{fig:coord} \end{figure*} \begin{table} \centering \caption{Criteria for the selection of the 2D sample. Coordinates and proper motions are referred --as usual for Gaia EDR3-- to the ICRS at epoch J2016.0.} \begin{tabular}{lc} \hline \hline \multicolumn{2}{c}{Initial query} \\ \hline Query position $(\alpha_0,\delta_0)$ & $(245^\circ,-25^\circ)$ \\ Query radius $(^\circ)$& 20 \\ Parallax (mas) & $5<\pi<11$ \\ Proper motion along $\alpha$ (mas yr$^{-1}$) & $-50<\mu_\alpha^*<0$ \\ Proper motion along $\delta$ (mas yr$^{-1}$) & $-53<\mu_\delta<0$ \\ No. 
of sources & 408465 \\ \hline \hline \multicolumn{2}{c}{Final criteria} \\ \hline Right ascension $(^\circ)$ & $236<\alpha<251$ \\ Declination $(^\circ)$ & $-29<\delta<-16$ \\ Parallax (mas) & $5.7<\pi<8$ \\ Parallax error & $\sigma_\pi/\pi <0.1$ \\ Velocity along $\alpha$ (km s$^{-1}$) & $v_\alpha>-12.8$ \\ Velocity along $\delta$ (km s$^{-1}$) & $-20.4<v_\delta<-12.8$ \\ Apparent {\it{G}} magnitude (mag) & $G<20$ \\ No. of sources & 2745 \\ \hline\hline \end{tabular} \label{tab:criteria} \end{table} \subsection{Radial velocities} \label{section:RV} An unbiased analysis of an extended region on the sky cannot be achieved, due to projection effects, without a full knowledge of the 6D phase space of its members: radial velocities (RV) are crucial not only to identify interlopers but also, more importantly, to correctly analyse stellar motions (see Appendix ~\ref{section:proj_effects} for details). A complete analysis in the 6D phase space has been performed on the subsample possessing reliable RV measurements. In addition to Gaia EDR3, we collected data from APOGEE DR16 \citep{2020ApJS..249....3A} and GALAH DR3 \citep{2020arXiv201102505B}; whenever multiple measurements were present, the datum with the smallest error bar was chosen. After selecting sources with a relative error on RV < 0.1 or an absolute error $<1$~km~s$^{-1}$, we defined the Cartesian frame $(x,y,z)$: \begin{equation} \label{eq:coord_transf1} \begin{cases} x=r \text{ cos} (\delta-\delta_P) \text{ sin} (\alpha-\alpha_P) \\ y=r \text{ cos} (\delta-\delta_P) \text{ cos} (\alpha-\alpha_P) \\ z=r \text{ sin} (\delta-\delta_P) \end{cases} \end{equation} where the pole $(\alpha_P,\delta_P)=(243.09^\circ, -23.03^\circ)$ points, for convenience, toward the mean equatorial coordinates of the sample. Finally, we restricted only to sources with propagated errors on $v_x$, $v_y$ and $v_z$ simultaneously satisfying the three conditions\footnote{$1 \text{ km s}^{-1}=1.02 \text{ pc Myr}^{-1}$.}: \begin{itemize} \item $|\sigma_{v_x}/v_x|<0.1\quad \text{OR}\quad \sigma_{v_x}<0.1 \text{ pc Myr}^{-1}$, \item $|\sigma_{v_y}/v_y|<0.1\quad \text{OR}\quad \sigma_{v_y}<0.1 \text{ pc Myr}^{-1}$, \item $|\sigma_{v_z}/v_z|<0.1\quad \text{OR}\quad \sigma_{v_z}<0.1 \text{ pc Myr}^{-1}$. \end{itemize} The final RV sample ({\it{3D sample}}) comprises 771 stars, $\sim 28\%$ of the 2D sample (Table~\ref{tab:data_errors}). Although we decided not to appoint the 3D sample as our main focus, because it only imperfectly reproduces the real distribution of sources\footnote{The distribution of sources possessing RV does not appear as a random pick of the Gaia sample, but rather --as expected-- as the union of distinct surveyed regions.}, we will employ it in a twofold way: on the one hand, it will provide us with a way to quantify the effect of the association's mean motion with respect to the Sun (which we will call, from this moment on, {\it{bulk motion}}); on the other hand, it will offer a constant comparison with the 2D sample to check the validity of our results: by comparing, whenever possible, features observed in 2D with their 3D counterparts, we are able to rule out the possibility of a random alignment of sources lying at different distances, i.e. a perspective effect. 
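For concreteness, the final cuts of Table~\ref{tab:criteria}, together with the tangential-velocity definitions given above, can be summarized by the following short sketch (ours, not the pipeline actually used in this work; it assumes the relevant Gaia EDR3 columns have already been retrieved into \texttt{numpy} arrays, with \texttt{pmra} denoting $\mu_\alpha^*$, i.e.\ already including the $\cos\delta$ factor):
\begin{verbatim}
# Illustrative sketch: boolean mask reproducing the "Final criteria"
# of Table 1 on Gaia EDR3 columns loaded as numpy arrays.
import numpy as np

A = 4.74  # km s^-1 per (mas yr^-1)/mas: proper motion over parallax
          # converted to tangential velocity

def select_2d_sample(ra, dec, parallax, parallax_error,
                     pmra, pmdec, g_mag):
    """Inputs in deg, mas, mas, mas/yr, mas/yr, mag (EDR3 units)."""
    v_alpha = A * pmra / parallax    # km/s
    v_delta = A * pmdec / parallax   # km/s
    return ((ra > 236.0) & (ra < 251.0) &
            (dec > -29.0) & (dec < -16.0) &
            (parallax > 5.7) & (parallax < 8.0) &
            (parallax_error / parallax < 0.1) &
            (v_alpha > -12.8) &
            (v_delta > -20.4) & (v_delta < -12.8) &
            (g_mag < 20.0))
\end{verbatim}
The sketch covers only the 2D sample; the 3D sample additionally requires radial-velocity availability and the error cuts on $(v_x,v_y,v_z)$ listed above.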
As regards the former aspect, a simple geometrical argument proves that, given a group of stars, knowledge of their proper motions alone is not sufficient to distinguish between a real expansion and a non-zero mean radial motion with respect to the line of sight through its centre, i.e. a {\it{virtual expansion}} \citep[see discussion in][]{1999AJ....117..354D}. Transverse motions, too, are split up into velocity components depending on $(\alpha,\delta,\pi)$. Thus, any non-zero bulk motion will manifest itself as a bias in the kinematic reconstruction. Having estimated that the centre of the 3D sample approximately lies at $(\alpha_c,\delta_c,r_c)=(244.55^\circ,-23.79^\circ,143.3$ pc$)$, we computed the mean velocity components in a Cartesian frame centred on it. Expressed in the standard right-handed Cartesian Galactic frame, our 3D sample has median velocity components $(U, V, W) = (-4.788 \pm 0.019, -16.378 \pm 0.015, -6.849 \pm 0.016)$ km s$^{-1}$, with a precision gain of almost one order of magnitude relative to previous estimates \citep{2012ApJ...758...31L,2018MNRAS.477L..50G}. Finally, we determined the projections of this bulk motion on the proper motions of each star of the 2D sample, and subtracted them (see Appendix~\ref{section:proj_effects} for details). We verified that, due to the angular extent of USCO, this bias does not significantly affect either the shape of the substructures or the timing of their maximum spatial concentration. Even after correcting for the bulk motion, a similar (although smaller) projection effect continues to affect individual stellar velocities, due to the rotation of the $(v_\alpha, v_\delta, v_r)$ plane with $(\alpha,\delta)$. Again, the angular extent of the association is not large enough to invalidate the approach altogether\footnote{\label{foot:cartesian} Choosing a fixed Cartesian $(\hat{x},\hat{y},\hat{z})$ frame around a star defined by $(\alpha_0, \delta_0, \pi_0, v_{\alpha,0}, v_{\delta,0}, v_{r,0})$ such that $\hat{x} \parallel v_{\alpha,0} $, $\hat{y} \parallel v_{r,0}$ and $\hat{z} \parallel v_{\delta,0}$, the mixing between velocity components for a second star having $(\alpha_1=\alpha_0+10^\circ,\delta_1=\delta_0+10^\circ, \pi_1, v_{\alpha,1}, v_{\delta,1}, v_{r,1})$ is such that $v_x = 0.985 v_{\alpha,1} - 0.030 v_{\delta,1} + 0.171 v_{r,1}$, $v_y = -0.174 v_{\alpha,1} -0.171 v_{\delta,1} +0.970 v_{r,1}$, $v_z=0.985 v_{\delta,1} +0.174 v_{r,1}$.}. \begin{table} \centering \caption{Median uncertainties on astrometry and kinematics for the 2D and 3D samples.} \begin{tabular}{lclc} \hline \hline \multicolumn{2}{c}{2D sample} & \multicolumn{2}{c}{3D sample} \\ \hline No. of sources & 2745 & No. of sources & 771 \\ Median $\sigma_\pi$ (mas) & $0.06$ & Median $\sigma_{x}$ (pc) & $0.03$ \\ Median $\sigma_{\mu_\alpha^*}$ (mas yr$^{-1}$) & $0.07$ & Median $\sigma_{y}$ (pc) & $0.82$\\ Median $\sigma_{\mu_\delta}$ (mas yr$^{-1}$) & $0.05$ & Median $\sigma_{z}$ (pc) & $0.03$ \\ Median $\sigma_{v_\alpha}$ (km s$^{-1}$) & $0.08$ & Median $\sigma_{v_x}$ (pc Myr$^{-1}$) & $0.
}), we get ${\rm rank\,}L_gH_{k-1}(x)=const.$ for all $x\in M^c_{k-1}$ around $x_p$ (condition (ii) of Proposition 6.1.3 in \cite{Isid95}). Since $\rk L_gH_{k-1}(x)=const.$, there exists a basis matrix $R_{k-1}(x)$ of the annihilator of the image of $L_gH_{k-1}(x)$, that is $R_{k-1}(x)L_gH_{k-1}(x)=0$. Thus $N^c_k$ can be defined by \[N^c_k=\{x\in U_k:H_{k-1}(x)=0, \ R_{k-1}(x)L_fH_{k-1}(x)=0\}.\] Notice that by the geometric reduction algorithm, we have \[M^c_{k}=\{x\in U_k:H_{k-1}(x)=0, \ \ \ \tilde F^2_k(x)=0\}.\] By $N^c_{k}=M^c_{k}$ and the fact that ranks of the differential of $(H_{k-1}(x),\tilde F^2_k(x))$ are constant for all $x$ around $x_p$ (assumption (A1) of Theorem \ref{Thm:NWF}), it follows that the rank of the differential of $\left[ {\begin{smallmatrix} {{H_{k - 1}}(x)}\\ {{R_{k - 1}}(x){L_f}{H_{k - 1}}(x)} \end{smallmatrix}} \right]$ is constant around $x_p$ (condition (i) of Proposition 6.1.3 in \cite{Isid95}). Assumption (A3) of Theorem \ref{Thm:NWF} that $\dim\, E(x)T_{x}M^*=\dim\, M^*$ locally around $x_p$ implies $${\rm span}\left\lbrace g_1(x_p),\ldots,g_m(x_p)\right\rbrace \cap T_{x_p}N^*=0.$$ Finally, by $N^*=\{x: H_{k^*}(x)=0\}$, it follows that the matrix ${L_g}H_{k^*} (x_p) $ has rank $m$ (condition (iii) of Proposition 6.1.3 in \cite{Isid95}). \end{proof} \begin{proof} [Proof of Theorem \ref{Thm:NWF}] Observe that by assumption (A3) and Theorem \ref{Thm:1}(iii), we have that $\Xi$ is internally regular. Then by Claim \ref{Cl:claim1}, we have $x_p$ is a regular point of the zero dynamics algorithm for any control system $\Sigma\in \mathbf{Expl}(\Xi)$. Thus there exist local coordinates $(z,z^*)$ such that $\Sigma$ is in the form (\ref{Eq:ZD form}) around $x_p$. Notice that the matrix $\beta=(\beta_1,\ldots,\beta_m)$ is invertible at $x_p$ and the functions $\sigma_i^{k}|_{N^c_k}=0$ for $1\le i\le m$, $1\le k\le \rho_i-1$, which implies $\sigma_i^{k}\in \mathbf{I}^k$, where $\mathbf{I}^k$ is the ideal generated by $z_i^j$, $1\le i\le m$, $1\le j\le k$ in the ring of smooth functions of $z^a_b$ and $z^*_c$. Then for system (\ref{Eq:ZD form}), using the feedback transformation $\tilde v=\alpha+\beta v$, where $\alpha= (\alpha_1,\ldots,\alpha_m)$, we get \begin{align}\label{Eq:prf1} \left\lbrace \begin{array}{c@{ }l} y_i&= z _i^1, \ \ \ \ \ i=1,\ldots,m,\\ \dot z _i^1 &=z _i^2 + \sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{1}\tilde v_s } + a^1_i+b^1_i\tilde v,\\ &\cdots \\ \dot z _i^{{\rho_i} - 1}& =z _i^{{\rho_i}} + \sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{{\rho_i} - 1}\tilde v_s} + a^{\rho_i-1}_i+b^{\rho_i-1}_i\tilde v,\\ \dot z _i^{{\rho_i}} &= \tilde v_i,\\ \dot z^*&= \tilde f^*(z,z^*)+ \tilde G^*(z,z^*)\tilde v, \end{array}\right. \end{align} where $\tilde f^*=f^*-\bar g\beta^{-1}\alpha$, $\tilde G^*=g^*\beta^{-1}$, and where $a^{k}_{i}=-\sigma _i^k\beta^{-1}\alpha$, $b^{k}_{i}=\sigma _i^k\beta^{-1}$, for $1\le i\le m$, $1\le k\le\rho_i-1$ and by $\sigma_i^{k}\in \mathbf{I}^k$, we have $a^k_i,b^k_{i,s}\in \mathbf{I}^k$. Recall from (\ref{Eq:ZD form}) that the functions $\delta^j_{i,s}\equiv0$ for $1\le j<\rho_{s}$, $1\le s\le i-1$. Then if the function $\delta^j_{i,\bar s}\neq 0$, $j=\rho_{\bar s}+k$, for a certain $1\le \bar s\le i-1$ and a certain $0\le k\le \rho_i-1-\rho_{\bar s}$, we show that, via suitable changes of coordinates and output multiplications, the nonzero function $\delta^{k+\rho_{\bar s}}_{i,\bar s}$ can be eliminated. 
Namely, define the new coordinates (and keep the remaining coordinates unchanged): $$ \tilde z^{k+1}_i=z^{k+1}_i- \delta^{\rho_{\bar s}+k}_{i,\bar s}z^1_{\bar s}, ~ ~ ~ \tilde z^{k+2}_i=z^{k+2}_i-\delta^{\rho_{\bar s}+k}_{i,\bar s}z^2_{\bar s}, ~ ~ ~ \ldots, ~ ~ ~ \tilde z^{k+\rho_{\bar s}}_i=z^{k+\rho_{\bar s}}_i-\delta^{\rho_{\bar s}+k}_{i,\bar s}z^{\rho_{\bar s}}, $$ we have (notice that below $\delta^1_{\bar s,s}\equiv0$ for $1\le s \le \bar s-1$) $$ \begin{aligned} \dot {\tilde z}^{k+1}_i&=z^{k+2}_i+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+1}\tilde v_s}+ a^{k+1}_i+b^{k+1}_i\tilde v-(\delta^{\rho_{\bar s}+k}_{i,\bar s})'z^1_{\bar s}-\delta^{\rho_s+k}_{i,s}(z^2_{\bar s}+a^1_{\bar s}+b^1_{\bar s}\tilde v+\sum\limits_{s = 1}^{\bar s - 1} {\delta_{\bar s,s}^{1}\tilde v_s})\\ &=(z^{k+2}_i-\delta^{\rho_{\bar s}+k}_{i,\bar s}z^2_{\bar s})+(a^{k+1}_i-(\delta^{\rho_{\bar s}+k}_{i,\bar s})'z^1_{\bar s}-\delta^{\rho_s+k}_{i,s}a^1_{\bar s})+(b^{k+1}_i-\delta^{\rho_s+k}_{i,s}b^1_{\bar s})\tilde v+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+1}\tilde v_s}\\ & =\tilde z^{k+2}_i+\tilde a^{k+1}_i+\tilde b^{k+1}_i \tilde v+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+1}\tilde v_s}, \end{aligned} $$ where $(\delta^{\rho_{\bar s}+k}_{i,\bar s})'$ denotes the derivative of $\delta^{\rho_{\bar s}+k}_{i,\bar s}(x(t))$ with respect to $t$, and $\tilde a^{k+1}_i=a^{k+1}_i-(\delta^{\rho_{\bar s}+k}_{i,\bar s})'z^1_{\bar s}-\delta^{\rho_s+k}_{i,s}a^1_{\bar s}$, $\tilde b^{k+1}_i=b^{k+1}_i-\delta^{\rho_s+k}_{i,s}b^1_{\bar s}$, and it is clear that
$\tilde a^{k+1}_i,\tilde b^{k+1}_{i,l}\in \mathbf{I}^{k+1}$. Then via similar calculations, we have $$ \dot {\tilde z}^{k+j}_i=\tilde z^{k+j+1}_i+\tilde a^{k+j}_i+\tilde b^{k+j}_i \tilde v+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+j}\tilde v_s}, \ \ \ 2\le j\le \rho_{\bar s}-1, $$ for some $\tilde a^{k+j},\tilde b^{k+j}_{i,l}\in \mathbf{I}^{k+j}$. Moreover, we have $$ \begin{aligned} \dot {\tilde z}^{k+\rho_{\bar s}}_i&=z^{k+\rho_{\bar s}+1}_i+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+\rho_{\bar s}}\tilde v_s}+ a^{k+\rho_{\bar s}}_i+b^{k+\rho_{\bar s}}_i\tilde v-(\delta^{\rho_{\bar s}+k}_{i,\bar s})'z^{\rho_{\bar s}}_{\bar s} -\delta^{\rho_{\bar s}+k}_{i,{\bar s}}\tilde v_{\bar s}\\ &=z^{k+\rho_{\bar s}+1}_i+(a^{k+\rho_{\bar s}}_i-(\delta^{\rho_{\bar s}+k}_{i,\bar s})'z^{\rho_{\bar s}}_{\bar s})+ b^{k+1}_i \tilde v+\sum\limits_{s = 1}^{i - 1} {\delta_{i,s}^{k+\rho_{\bar s}}\tilde v_s}-\delta^{k+\rho_{\bar s}}_{i,{\bar s}}\tilde v_{\bar s}\\ & = z^{k+\rho_{\bar s}+1}_i+\tilde a^{k+\rho_{\bar s}}_i+\tilde b^{k+\rho_{\bar s}}_i \tilde v+\sum\limits_{s = 1}^{\bar s - 1} {\delta_{i,s}^{k+\rho_{\bar s}}\tilde v_s}+\sum\limits_{s = \bar s + 1}^{i-1} {\delta_{i,{\bar s}}^{k+\rho_{\bar s}}\tilde v_s}, \end{aligned} $$ where the functions $\tilde a^{k+\rho_{\bar s}},\tilde b^{k+\rho_{\bar s}}_{i,l}\in \mathbf{I}^{k+\rho_{\bar s}}$. Thus in the above formula, the nonzero function $\delta^{k+\rho_{\bar s}}_{i,{\bar s}}$ is eliminated. Note that if $k=0$, then the change of coordinate $\tilde z^{1}_i=z^1_i- \delta^{\rho_{\bar s}}_{i,\bar s}z^1_{\bar s}$ transforms the first equation $y_i=z^1_i$ of (\ref{Eq:prf1}) into $y_i=\tilde z^{1}_i+ \delta^{\rho_{\bar s}}_{i,\bar s}z^1_{\bar s}$. We define a new output $\tilde y_i=y_i-\delta^{\rho_{\bar s}}_{i,\bar s}z^1_{\bar s}=y_i-\delta^{\rho_{\bar s}}_{i,\bar s}y_{\bar s}$ (which is actually an output multiplication \cyan{of the form $\tilde y_i=\eta_iy$}) such that the first equation of (\ref{Eq:prf1}) becomes $\tilde y_i=\tilde z^{1}_i$. Repeat the above construction to eliminate all nonzero functions $\delta^j_{i,s}$ for $ j\ge \rho_{s}$, $1\le s\le i-1$. Then system (\ref{Eq:prf1}) becomes the following control system $$ \tilde \Sigma:\left\lbrace \begin{array}{c@{ }l} \tilde y_i&= \tilde z _i^1, \ \ \ \ \ i=1,\ldots,m,\\ \dot {\tilde z} _i^1 &=\tilde z _i^2 + \tilde a^1_i+ \tilde b^1_i\tilde v,\\ &\cdots \\ \dot {\tilde z} _i^{{\rho_i} - 1}& = \tilde z _i^{{\rho_i}} + \tilde a^{\rho_i-1}_i+ \tilde b^{\rho_i-1}_i\tilde v,\\ \dot {\tilde z} _i^{{\rho_i}} &= \tilde v_i,\\ \dot z^*&= \tilde f^*(z,z^*)+ \tilde G^*(z,z^*)\tilde v. \end{array}\right. $$ where $a^k_i,b^k_{i,s}\in \mathbf{I}^k$ for $1\le k\le\rho_i-1$. It is clear that $\Sigma\mathop \sim \limits^{sys}\tilde \Sigma$ (we used coordinates changes, feedback transformations and output multiplications to transform $\Sigma$ into $\tilde \Sigma$). Then consider the last row of every subsystem of $\tilde \Sigma$, which is $\dot z _i^{{\rho_i}} = \tilde v_i$. By deleting this equation in every subsystem and setting $y_i=0$ for $i=1,\ldots,m$, and replacing the vector $\tilde v$ by $\dot z ^{{\rho}}$, we transform $\tilde \Sigma$ into a DAE $\tilde \Xi$ below. It is \cyan{straightforward} to see that $\tilde \Sigma\in \mathbf{Expl}({\tilde \Xi})$. 
$$ \tilde \Xi: \left\{ \begin{array}{ccl} \left[ {\begin{matrix} 0&{}&{}&{}\\ 1& \ddots &{}&{}\\ {}& \ddots & \ddots &{}\\ {}&{}&1&0 \end{matrix}} \right]\left[ {\begin{matrix} {\dot {\tilde z} _i^1}\\ {\dot {\tilde z} _i^2}\\ \vdots \\ {\dot {\tilde z} _i^{{\rho_i}}} \end{matrix}} \right] &=& \left[ {\begin{matrix} {{\tilde z} _i^1}\\ {{\tilde z} _i^2}\\ \vdots \\ {{\tilde z} _i^{{\rho_i}}} \end{matrix}} \right] + \left[ {\begin{matrix} 0\\ \tilde a^1_i+\tilde b^1_i\dot {\tilde z}^{\rho}\\ \vdots \\ \tilde a^{\rho_i-1}_i+\tilde b^{\rho_i-1}_i\dot {\tilde z}^{\rho} \end{matrix}} \right], \ \ i=1,\dots,m,\\ - \tilde G^*\left( {{\tilde z} ,z^*} \right){{\dot {\tilde z} }^{{\rho}}} + \dot z^* &=& \tilde f^*\left( {{\tilde z} ,z^*} \right). \end{array} \right. $$ Finally, by Theorem \ref{Thm:ex and sys} and $\Sigma\mathop \sim \limits^{sys}\tilde \Sigma$, we have that $\Xi\mathop \sim \limits^{ex} \tilde\Xi$ and that $\tilde \Xi$ is in the \textbf{NWF} of (\ref{Eq:NF1}). \end{proof} \section{Conclusions}\label{section:4} In this paper, we first revise the geometric reduction method for the existence of nonlinear DAE solutions, and then we define the notions of internal and external equivalence, their differences are {discussed} by analyzing their relations with solutions. We show that the internal regularity (existence and uniqueness of solutions) of a DAE is equivalent to {the fact} that the DAE is internally equivalent to an ODE (without free variables) on its maximal invariant submanifold. A procedure named explicitation with driving variables is proposed to connect nonlinear DAEs with nonlinear control systems. We show that the external equivalence for two DAEs {is} the same as the system equivalence for {their explicitation systems}. Moreover, we show that $\Xi$ is {externally} equivalent to a semi-explicit DAE if and only if the distribution defined by $\ker E(x)$ is {of} constant rank and involutive. If so, the driving variables of a control system $\Sigma\in\mathbf{Expl}(\Xi)$ can be fully reduced. Finally, two nonlinear generalizations of the Weierstrass form \textbf{WF} are proposed based on the explicitation method and the notions as zero dynamics, relative degree and invariant distributions of nonlinear control theory. \bigskip \noindent\textbf{Acknowledgment:} The first author is currently supported by Vidi-grant 639.032.733. \bibliographystyle{model1-num-names}
\section{Introduction} As the data on single-particle distributions of identified hadrons produced in heavy-ion collisions become more abundant and precise \cite{sa, ja, ba, sa1, igb, iga, ba1}, more demands are put on theoretical models to reproduce them. It is generally recognized that in Au-Au collisions at $\sqrt{s_{NN}}= 200$ GeV at the Relativistic Heavy-Ion Collider (RHIC) the low transverse-momentum $(p_T)$ region $(p_T < 2$ GeV/c) is well described by hydrodynamics \cite{dat} and the high-$p_T$ region ($p_T > 6$ GeV/c) by perturbative QCD \cite{mv}, both subjects being reviewed recently in Ref.\ \cite{fn}. In the intermediate region ($2 < p_T < 6$ GeV/c) neither approach works very well. What stands out in that region are the large baryon/meson ratio and quark-number scaling (QNS), which give empirical support to the recombination/coalescence models \cite{hy, gkl, fmnb, hwa, hfr, mv1}. The connection between the intermediate- and high-$p_T$ regions is smooth, since the dominance of shower-shower recombination is equivalent to parton fragmentation. The transition across the lower $p_T$ boundary at $p_T \sim 2$ GeV/c is not so smooth because of the difference between the continuum description in hydrodynamics and the parton description in hadronization. Our aim in this article is to extend our previous considerations \cite{chy, hz} to the lower-$p_T$ region and to describe in a self-consistent way both the $p_T$ and azimuthal $\phi$ behaviors of pions and protons without explicit reliance on hydrodynamics. One specific point that motivates our study is related to the question of what happens to the initial system within 1 fm/c after the collision. Semihard partons created within 1 fm from the surface will have already left the initial overlap region before thermalization is complete. There are many of them with parton transverse-momentum $k_T \sim 2$-3 GeV/c even at RHIC, let alone at the Large Hadron Collider (LHC). They are minijets that can cause azimuthal anisotropy, not accounted for by conventional hydrodynamics. It is known that in events triggered by jets there is a ridge phenomenon in the structure of associated particles with narrow $\Delta\phi$ (azimuthal angle relative to that of the trigger) and extended $\Delta\eta$ (pseudorapidity relative to the trigger). Such a structure should be present in the inclusive distribution even if triggers are not used to select the jet events. When $k_T$ is low enough so that minijets are copiously produced, the corresponding effect on the $\phi$ anisotropy can become dominant, rendering the consideration of pressure gradients along different $\phi$ directions unreliable if semihard scatterings are ignored. In this paper we give specific attention to the ridge contribution to the single-particle distributions in the low-$p_T$ region. It is in this sense that we use the terminology: inclusive ridge distribution. Another area of concern is the variation of the $p_T$ dependence as the focus is moved to the low-$p_T$ region, where pion and proton appear empirically to have different behaviors. In the parton recombination model the hadrons should have the same inverse slope as that of the coalescing quarks if the hadrons are formed by recombination of the thermal partons, but because of the difference in the meson and baryon wave functions, the net $p_T$ distributions turn out to be different. 
This line of analysis takes into account the quark degree of freedom just before hadronization, which is overlooked by the fluid description of the flow effect. The burden is to show that the data on $v_2(p_T)$ can be reproduced for both pion and proton at low $p_T$ without the hydro description of elliptic flow. That is indeed what we shall show for various centralities. We confine our consideration in this paper to the physics at midrapidity. At larger $\eta$ there are other issues, such as large $p/\pi$ ratio \cite{iga} and large $\Delta\eta$ distribution of triggered ridge \cite{ba1}, which have been examined in Refs. \cite{hz1, ch}, and will not be further considered here. \section{Single-particle distribution with ridge } We begin with a recapitulation of our description of single-particle distribution \cite{hy, chy, hz}. At low $p_T$ we consider only the recombination of thermal partons, so the pion and proton spectra at $y=0$ are given by \begin{eqnarray} p^0 {dN^{\pi}\over dp_T} &=& \int \prod_{i=1}^2 \left[{dq_i\over q_i} {\cal T} (q_i)\right] {\cal R}^{\pi}(q_1,q_2,p_T), \label{1} \\ p^0 {dN^p\over dp_T} &=& \int \prod_{i=1}^3 \left[{dq_i\over q_i} {\cal T} (q_i)\right] {\cal R}^p(q_1,q_2,q_3,p_T), \label{2} \end{eqnarray} where ${\cal T} (q_i)$ is the thermal distribution of the quark (or antiquark) with momentum $q_i$, and ${\cal R}^h$ is the recombination function (RF) for $h=\pi$ or $p$. On the assumption that collinear quarks make the dominant contribution to the coalescence process (so that the integrals are one-dimensional for each quark along the direction of the hadron), the RFs are \begin{eqnarray} {\cal R}^{\pi}(q_1,q_2,p) &=& {q_1q_2\over p^2} \delta\left(\sum_{i=1}^2 {q_i\over p} - 1 \right), \label{3} \\ {\cal R}^p(q_1,q_2,q_3,p) &=& f\left({q_1\over p},{q_2\over p},{q_3\over p} \right) \delta\left(\sum_{i=1}^3 {q_i\over p} - 1 \right) \label{4} \end{eqnarray} where the details of $f(q_i/p)$ that depends on the proton wave function are given in \cite{hy}, and need not be repeated here. The main point to be made here is that if the thermal distribution ${\cal T}(q_i)$ has the canonical invariant form \begin{eqnarray} {\cal T}(q) = q{dN^q\over dq} = Cqe^{-q/T}, \label{5} \end{eqnarray} then the $\delta$-functions in the RFs require that $dN^h/p_Tdp_T$ has the common exponential factor, $\exp(-p_T/T)$, for both $h=\pi$ and $p$. The prefactors are different; we simply write down the results obtained previously \begin{eqnarray} {dN^{\pi}\over p_Tdp_T} &=& {\cal N}_{\pi}e^{-p_T/T}, \label{6} \\ {dN^p\over p_Tdp_T} &=& {\cal N}_p{p_T^2\over m_T}e^{-p_T/T}, \quad m_T = (p_T^2 + m_p^2)^{1/2}, \label{7} \end{eqnarray} where ${\cal N}_{\pi} \propto C^2$ and ${\cal N}_p \propto C^3$, and $C$ has the dimension (GeV)$^{-1}$. Note that the factor $p_T^2/m_T$ in the proton spectrum (that must be present for dimensional reason) causes the $p/\pi$ ratio to vanish as $p_T \rightarrow 0$ on the one hand, but to become large, as $p_T$ increases, on the other. When $p_T$ exceeds 2 GeV/c, shower partons become important and the above description must be supplemented by thermal-shower ({\bf TS}) recombination that limits the increase of the $p/\pi$ ratio to a maximum of about 1 \cite{hy}. We restrict our consideration to $p_T < 2$ GeV/c, but now broaden it to include $\phi$ dependence. For non-central collisions the almond-shaped initial configuration leads to $\phi$ anisotropy. 
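The structure of Eqs.\ (\ref{6}) and (\ref{7}) can be made explicit for the pion; the following is a sketch of the integral, not written out above, under the simplifying assumption that the pion mass is neglected so that $p^0\simeq p_T$ at $y=0$. Inserting Eqs.\ (\ref{3}) and (\ref{5}) into Eq.\ (\ref{1}) and using $\delta(q_1/p_T+q_2/p_T-1)=p_T\,\delta(q_1+q_2-p_T)$ gives
\begin{eqnarray}
p^0 {dN^{\pi}\over dp_T} = {C^2\over p_T}\, e^{-p_T/T} \int_0^{p_T} dq_1\, q_1 (p_T-q_1) = {C^2\over 6}\, p_T^2\, e^{-p_T/T}, \nonumber
\end{eqnarray}
so that $dN^{\pi}/p_Tdp_T$ is indeed a pure exponential, with ${\cal N}_{\pi}=C^2/6$ in this massless approximation, consistent with ${\cal N}_{\pi}\propto C^2$. With $p^0=m_T$, the same steps applied to Eq.\ (\ref{2}) give $dN^p/p_Tdp_T\propto C^3(p_T^2/m_T)e^{-p_T/T}$, reproducing the structure of Eq.\ (\ref{7}).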
The conventional description in terms of hydrodynamics relates the momentum anisotropy to the variation of pressure gradient at early times upon equilibration \cite{kh}. The success in obtaining the large $v_2$ as observed gives credibility to the approach. We adopt an alternative approach and justify our point of view on the basis that we can also reproduce the empirical $v_2$, as we shall show. Furthermore, aside from offering a smooth connection with the intermediate $p_T$ region by the inclusion of {\bf TS} recombination, our approach describes also the effect of semihard scattering on the soft sector. The ridge phenomenon that we attribute to that effect can be with trigger \cite{ch, ch1, ch2} or without trigger \cite{rch, hwa, chy, hz}. Although data on the ridge structure must necessarily make use of triggers in order to distinguish it from background \cite{ja1, bia, ha, ba2}, the inclusive distribution must include ridges along with the background. Thus theoretically a single-particle distribution should have a ridge component in the soft sector due to undetected semihard or hard partons. That component has $\phi$ dependence that can be calculated from geometrical considerations \cite{hz}, and has been shown to be consistent with the dependence of the ridge yield in two-particle correlation on the trigger angle $\phi_s$ relative to the reaction plane \cite{ha}. Let us use $\rho_1^h(p_T,\phi,b)$ to denote the single-particle distribution of hadron $h$ produced at mid-rapidity in heavy-ion collision at impact parameter $b$, i.e. \begin{eqnarray} \rho_1^h(p_T,\phi,b) = {dN^h\over p_Tdp_Td\phi}(N_{\rm part}), \label{8} \end{eqnarray} where $N_{\rm part}$ is the number of participants related to $b$ in a known way through the Glauber description of nuclear collisions \cite{mrss}. At low $p_T$ let $\rho_1^h$ be separated into two components \begin{eqnarray} \rho_1^h(p_T,\phi,b) = B^h(p_T,b) + R^h(p_T,\phi,b), \label{9} \end{eqnarray} where $B^h(p_T,b)$ is referred to as Base, not to be confused with the bulk that is usually determined in hydrodynamics; this is a change from earlier nomenclature \cite{hz}, where the use of ``bulk'' did lead to some misunderstanding. Our emphasis here is that $B(p_T,b)$ is independent of $\phi$. In our approach we regard the semihard partons created near the surface, and directed outward, as giving rise to all the $\phi$ dependence of the medium before equilibrium is established; the recoil partons, being directed inward, are absorbed and randomized. The component expressed by $R^h(p_T,\phi,b)$ is referred to as ridge on the basis of its $\phi$ dependence discussed below. The $B^h(p_T,b)$ component consists of all the soft and semihard partons that are farther away from the surface and are unable to lead to hadrons with distinctive $\phi$ dependence. Thus the separation between $B^h(p_T,b)$ and $R^h(p_T,\phi,b)$ relies primarily on the $\phi$ dependence that the ridge component possesses. In Ref.\ \cite{hz} we have given an extended derivation of what that $\phi$ dependence is. It is embodied in $S(\phi,b)$ that is the segment of the surface through which a semihard parton can be emitted to contribute to a ridge particle at $\phi$. 
From the geometry of the initial ellipse (with width $w$ and height $h$ that depend on $b$) and from the angular constraint between the semihard parton and ridge particle prescribed by a Gaussian width $\sigma$ determined earlier in treating the ridge formation for nuclear density not too low \cite{ch2}, it is found that \begin{eqnarray} S(\phi,b) = h[E(\theta_2,\alpha) - E(\theta_1,\alpha)], \label{10} \end{eqnarray} where $E(\theta_i,\alpha)$ is the elliptic integral of the second kind with $\alpha=1-w^2/h^2$ and \begin{eqnarray} \theta_i = \tan^{-1} \left({h\over w}\tan\phi_i \right), \quad \phi_1 = \phi - \sigma, \quad \phi_2 = \phi + \sigma, \label{11} \end{eqnarray} for $\phi_i \leq \pi/2$, and an analytic continuation of it for $\phi_2 > \pi/2$. Thus $S(\phi,b)$ is completely calculable for any given $b$, and $R^h(p_T,\phi,b)$ is proportional to it. We can now rewrite Eq.\ (\ref{9}) unambiguously as \begin{eqnarray} \rho_1^h(p_T,\phi,b) = B^h(p_T,b) + {S(\phi,b)\over \bar S (b)} \bar R^h(p_T,b) \label{12} \end{eqnarray} where \begin{eqnarray} \bar S (b) = (2/\pi) \int_0^{\pi/2} d\phi S(\phi,b) \label{13} \end{eqnarray} and $\bar R^h(p_T,b)$ is a similar average of $R^h(p_T,\phi,b)$. According to Eqs.\ (\ref{6}) and (\ref{7}) the inclusive distributions $\bar \rho_1^h(p_T,b)$ should share the common exponential factor $\exp(-p_T/T)$, for $h = \pi$ and $p$, as for quarks. That does not take into consideration the enhancement of pions at very small $p_T$ due to resonance decay. We account for it by a phenomenological term $u(p_T,b)$, and write {\begin{eqnarray} \bar \rho_1^{\pi}(p_T,b) &=& {\cal N}_{\pi} (b)[1 + u(p_T,b)]e^{-p_T/T}, \label{14} \\ \bar \rho_1^p(p_T,b) &=& {\cal N}_p(b){p_T^2\over m_T}e^{-p_T/T}, \label{15} \end{eqnarray} where the resonance effect on the proton is neglected because of baryon-number conservation. These expressions are for the left-hand side of Eq.\ (\ref{12}) after $\phi$ averaging. The base term $B^h(p_T,b)$ on the right side is the soft component without the contribution from semihard scattering near the surface and should have the same common structure as in Eqs.\ (\ref{6}) and (\ref{7}) due to thermal parton recombination, except that the inverse slope is lower without the enhancement by the energy loss from the semihard partons. We can therefore write \begin{eqnarray} B^{\pi}(p_T,b) &=& {\cal N}_{\pi}(b)[1 + u(p_T,b)]e^{-p_T/T_B}, \label{16} \\ B^p(p_T,b) &=& {\cal N}_p(b){p_T^2\over m_T}e^{-p_T/T_B}, \label{17} \end{eqnarray} where $T_B < T$. It then follows that \begin{eqnarray} \bar R^{\pi}(p_T,b) &=& {\cal N}_{\pi}(b)[1+u(p_T,b)] \bar R_0(p_T) \label{18} \\ \bar R^p(p_T,b) &=& {\cal N}_p(b){p_T^2\over m_T} \bar R_0(p_T), \label{19} \end{eqnarray} where \begin{eqnarray} \bar R_0(p_T)=e^{-p_T/T}-e^{-p_T/T_B}=e^{-p_T/T_B}(e^{p_T/\tilde{T}}-1) \quad \label{19a} \\ {1\over \tilde{T}} = {1\over T_B} - {1\over T} = {\Delta T\over T_BT}, \qquad \Delta T = T - T_B. \qquad \label{20} \end{eqnarray} There are two undetermined inverse-slopes: $T_B$ and $T$, common for both $\pi$ and $p$. They are for single-particle inclusive distributions, so only $T$ is directly observable. We postpone phenomenology to a later section. In ridge analysis using triggered events for two-particle correlation the two corresponding inverse slopes are separately measured \cite{bia}. Here, however, we are dealing with single-particle distributions. The difference between $T_B$ and $T$ has to do with ridges and their effect on the $\phi$ distribution. 
Thus we expect $\Delta T$ to be related to azimuthal asymmetry, a topic we next turn to. \section{Quadrupole Moments of $\phi$ Asymmetry} This topic is usually referred to as elliptic flow, a terminology that is rooted in hydrodynamics. Since we have not used hydro in the previous section, it is more appropriate to use the unbiased language initiated in Ref.\ \cite{tk}, and call it azimuthal quadrupole. It is the familiar $v_2$ that is defined by \begin{eqnarray} v_2^h(p_T,b) = \langle \cos2\phi \rangle_{\rho_1}^h = {\int_0^{2\pi} d\phi \cos2\phi\rho_1^h(p_T,\phi,b)\over \int_0^{2\pi} d\phi\rho_1^h(p_T,\phi,b)}. \label{21} \end{eqnarray} Using Eqs.\ (\ref{12}) - (\ref{20}) yields \begin{eqnarray} v_2^h(p_T,b) &=& {[2\bar R(p_T,b)/\pi\bar S(b)]\int_0^{\pi/2} d\phi \cos2\phi S(\phi,b) \over B(p_T,b)+\bar R(p_T,b)} \nonumber \\ &=& {\langle \cos2\phi \rangle_S\over Z^{-1}(p_T) + 1}, \label{22} \end{eqnarray} where \begin{eqnarray} \langle \cos2\phi \rangle_S &=& {2/\pi\over \bar S (b)} \int_0^{\pi/2} d\phi \cos2\phi S(\phi,b), \label{23} \\ Z(p_T) &=& e^{p_T/\tilde T} - 1. \label{24} \end{eqnarray} These equations are remarkable in that the $b$ dependence resides entirely in Eq.\ (\ref{23}) and the $p_T$ dependence entirely in Eq.\ (\ref{24}); furthermore, there is no explicit dependence on the hadron type or on the resonance term represented by $u(p_T,b)$. As we have noted at the end of the preceding section, $T$ can be determined by the $p_T$ spectra, but $T_B$ is not directly observable. However, the quadrupole is measurable, so it can constrain $\tilde T$ and therefore $T_B$. In short, the two parameters $T$ and $T_B$ can be fixed by fitting the data on $\bar \rho_1^h(p_T,b)$ and $v_2^h(p_T,b)$. Without using a model to describe the evolution of the dense medium, it is clear that we cannot predict the values of $T$ and $T_B$. However, our aim is to discover how far one can go without using such a model. Neither $T$ nor $T_B$ depends on $\phi$. Yet non-trivial $v_2^h(p_T,b)$ can be obtained because of the presence of the ridge term in Eq.\ (\ref{12}). If phenomenology turns out to support this interpretation of azimuthal asymmetry, as we shall show in the next section, then the ridges induced by undetected semihard partons play a more important role in giving rise to the $\phi$ dependence in the inclusive single-particle distribution than hydro expansion that is based on assuming equilibration to be complete at a later time without semihard scattering. From Eqs.\ (\ref{10}) and (\ref{23}) we can calculate $\left< \cos 2\phi\right>_S$ and obtain its dependence on $b$. For the initial elliptical configuration the width and height are \begin{eqnarray} w=1-b/2, \qquad h=(1-b^2/4)^{1/2} , \label{25} \end{eqnarray} where all lengths are in units of the nuclear radius $R_A$. Setting the Gaussian width $\sigma$ between the azimuthal angle $\phi_1$ of the semihard parton and $\phi_2$ of the ridge particle to be $\sigma=0.33$ \cite{ch2}, we determine $\left< \cos 2\phi\right>_S$ as shown in Fig.\ 1(a). \begin{figure}[tbph] \includegraphics[width=.4\textwidth]{fig1a.eps} \includegraphics[width=.5\textwidth]{fig1b.eps} \caption{(Color online) (a) Average of $\cos2\phi$ weighted by $S(\phi,b)$ vs impact parameter in units of $R_A$. (b) Common dependence of $v_2^h(p_T,b)$ on $N_{part}$ for various $p_T$, shifted vertically for comparison. The diamond and square points are horizontally shifted slightly from the points in circles to aid visualization. 
The solid line is from $\left<\cos2\phi\right>_S$ shown in (a), but rescaled and plotted in terms of $N_{part}$. The data are from Ref.\ \cite{ja}.} \end{figure} According to Eq.\ (\ref{22}) $\left< \cos2\phi\right>_S$ contains all the $b$ dependence of $v_2^h(p_T,b)$ for any $p_T$ in the soft region. To check how realistic that is phenomenologically, we show first in Fig.\ 1(b) the data on $v_2^h(p_T,N_{part})$ for three $p_T$ values from Ref.\ \cite{ja}, but shifted vertically so that they agree with the data for $p_T=0.975$ GeV/c for most of large $N_{part}$. The diamond and square points are slightly shifted horizontally to spread out the overlapping points for the sake of visual distinguishability. The fact that their dependencies on $N_{part}$ are so nearly identical is remarkable in itself. The solid line is a reproduction of the curve in Fig.\ 1(a) but plotted in terms of $N_{part}$, and reduced in normalization by a factor 0.25 to facilitate the comparison with the data points. For $N_{part}>100$ the line agrees with the data on $v_2$ very well, thus proving the factorizability of $p_T$ and $b$ dependencies of Eq.\ (\ref{22}). For $N_{part}<100$, corresponding to $b/R_A>1.3$ or centrality $>40$\%, there is disagreement which is expected because the density is too low in peripheral collisions to justify the simple formula in Eq.\ (\ref{22}). A density-dependent correction is considered in Ref.\ \cite{hz}, but will not be repeated here. Our focus in this paper is on the inclusive ridge, so we proceed
to phenomenology on the basis that the formalism given above is valid for central and mid-central collisions at $N_{part}>100$. Having a compact analytic expression for $S(\phi,b)$, as given in Eqs.\ (\ref{10}) and (\ref{11}), to summarize the $\phi$ dependence is not only economical, but also provides a succinct way to distinguish the ridge from the base component in Eq.\ (\ref{12}). \section{Phenomenology} We now determine the parameters in our model through phenomenology. A success in fitting all the relevant data can give support to our approach that emphasizes issues not considered in the standard model \cite{rv}. Our first task is to determine the inverse slope $T$ that is shared by ${\cal T}(q), \bar\rho_1^\pi(p_T,b)$ and $\bar\rho_1^p(p_T,b)$. Since the normalization factors in Eqs.\ (\ref{5}), (\ref{14}) and (\ref{15}) have not yet been specified, we consider first a particular centrality, 20-30\%, and fit the $p_T$ dependence of the proton spectrum for $p_T<2$ GeV/c, as shown in Fig.\ 2, and obtain \begin{eqnarray} T=0.283\ {\rm GeV}. \label{26} \end{eqnarray} Note that the one-parameter fit (apart from normalization) reproduces the data from Ref.\ \cite{sa} very well. It demonstrates that the proton is produced in that $p_T$ range by thermal partons and that the flattening of the spectrum at low $p_T$ is due to the prefactor $p_T^2/m_T$ arising from the proton wave function. \begin{figure}[tbph] \includegraphics[width=.45\textwidth]{fig2.eps} \caption{Proton spectrum at $y\approx 0$ averaged over $\phi$ (hence, no $1/2\pi$ factor) at 20-30\% centrality. The solid line is a fit of the data by Eqs.\ (\ref{15}) and (\ref{26}) with free adjustment of normalization. The data are from Ref.\ \cite{sa}.} \end{figure} Having determined $T$, we next consider the pion spectrum $\bar\rho_1^\pi(p_T,b)$. According to Eq.\ (\ref{14}) it has the same exponential factor as does $\bar\rho_1^p(p_T,b)$, but has also an additional factor $[1+u(p_T,b)]$ due to resonance decay. We show in Fig.\ 3 the data from PHENIX \cite{sa} on the pion distribution\ for 20-30\% centrality; the $\exp(-p_T/T)$ factor is shown by the dashed line, the normalization being adjusted to fit (and to be discussed later). For $p_T>1$ GeV/c they agree very well, demonstrating the validity of the common $T$. For $p_T<1$ GeV/c there is resonance contribution to the pion spectrum which we cannot predict. Thus we fit the low-$p_T$ region by the addition of a term $\exp(-p_T/T_r)$, shown by the dash-dotted line, corresponding to $T_r=0.174$ GeV. The sum depicted by the solid line agrees with the data perfectly. The point of this exercise is mainly to show that the common $\exp(-p_T/T)$ behavior is valid for pion as for proton, but the reality of resonance contribution for $p_T<1$ GeV/c obscures that commonality. Converting the resonance term to the form given in Eq.\ (\ref{14}) we write \begin{eqnarray} u(p_T,b)=u_0(b) e^{-p_T/T_0} , \label{27} \end{eqnarray} where $T_0=0.45$ GeV and $u_0=3.416$ for 20-30\% centrality. We do not regard this $u$ term as a fundamental part of our model; we attach the factor $[1+u(p_T,b)]$ to all expressions of the pion distributions, as in Eqs.\ (\ref{16}) and (\ref{18}). Of more significance is the role that $T$ has played in the phenomenology, and so far $T_B$ has played no role. \begin{figure}[tbph] \includegraphics[width=.45\textwidth]{fig3.eps} \caption{Pion spectrum showing $e^{-p_T/T}$ by the dashed line, and the resonance contribution by the dash-dotted line. 
The sum is in solid line. The data are from Ref.\ \cite{sa}.} \end{figure} $T_B$ is not directly related to any observable spectrum, since it describes the $p_T$ dependence of the base $B^h(p_T,b)$ that lies under the ridge. The important concept we advance here is that it is $\phi$ independent, and that $R^h(p_T,\phi,b)$ carries all the $\phi$ dependence. Thus we turn to $v_2^h(p_T,b)$ in Eq.\ (\ref{22}) and examine its $p_T$ dependence for both $h=\pi$ and $p$. In order to emphasize the universality between $\pi$ and $p$, we consider $v_2^h$ versus the transverse kinetic energy $E_T$, for $E_T<0.8$ GeV, where \begin{eqnarray} E_T(p_T)=m_T(p_T)-m_h . \label {28} \end{eqnarray} We adopt the ansatz that $p_T$ is to be replaced by $E_T$ in Eq.\ (\ref{24}) so as to account for the mass effect, i.e., \begin{eqnarray} Z(p_T)=e^{E_T(p_T)/\tilde T}-1 , \label{29} \end{eqnarray} where $\tilde T$ is as given in Eq.\ (\ref{20}). In Fig.\ 4 is shown the data from Ref.\ \cite{ja} when $v_2^h$ is plotted against $E_T$ for 20-30\% centrality. We fit the data points for both $h=\pi$ and $p$ by Eqs.\ (\ref{22}) and (\ref{29}) with the choice \begin{eqnarray} T_B=0.253\pm 0.003\ {\rm GeV} , \label{30} \end{eqnarray} \begin{figure}[tbph] \vspace*{-.5cm} \includegraphics[width=.45\textwidth]{fig4.eps} \caption{(Color online) $v_2^h$ for $h=\pi$ and $p$. The shaded region corresponds to $T_B=0.253\pm0.003$ GeV. The data are from Ref.\ \cite{ja}.} \end{figure} \noindent which is represented by the shaded region in Fig.\ 4. The upper boundary of that region is for $T_B=0.25$ GeV that fits the pion $v_2$ almost perfectly, and the lower boundary is for $T_B=0.256$ GeV that fits well the proton $v_2$. It is evident that $v_2$ is very sensitive to $T_B$ due to the exponential factor in Eq.\ (\ref{29}), yet the data support a common value for $T_B$ to within 1-2\% deviation for pion and proton production. One cannot expect an accuracy better than that in the universality of $v_2^h$ for $h=\pi$ and $p$. We regard this result to be remarkable, since the normalization of $v_2^h$ is fixed by Eq.\ (\ref{22}) without freedom of adjustment. Note that we have not used any more parameters besides $T$ and $T_B$ to accomplish this, which is a fitting procedure not more elaborate than the hydro approach where the initial condition and viscosity are adjusted. So far we have concentrated on 20-30\% centrality partly because we want to separate the $p_T$ and $\phi$ dependencies from the issue of centrality dependence, and partly because $v_2^h(p_T,b)$ is large at 20-30\% centrality for low $p_T$. To extend our consideration to other centralities, we fix $T$ and $T_B$ at the values obtained in Eqs.\ (\ref{26}) and (\ref{30}) so that $Z(p_T)$ is no longer adjustable. The centrality dependence of $v_2^h(\pt,b)$\ is then examined using Eq.\ (\ref{22}). Figure 5 shows the results for different centrality bins for both $h=\pi$ and $p$. The shaded regions due to the uncertainty in Eq.\ (\ref{30}) become narrower in more central collisions. The agreement with data from STAR \cite{ja} is evidently very good. Since there has been no more adjustment of free parameters to achieve that, we find substantial support from Fig.\ 5 for our view that the $\phi$ dependence arises entirely from the ridge component in the inclusive distribution. 
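To illustrate how Eqs.\ (\ref{10}), (\ref{11}) and (\ref{22})-(\ref{24}) can be evaluated in practice, the following minimal numerical sketch (ours, not the code used for the figures; it adopts the parameter values quoted above, uses a simple branch continuation of Eq.\ (\ref{11}) for $\phi_2>\pi/2$, and omits the low-density correction needed for peripheral collisions) computes $\left<\cos2\phi\right>_S$ and $v_2(p_T)$ for a given impact parameter:
\begin{verbatim}
# Illustrative sketch: evaluate S(phi,b) of Eq. (10), <cos 2phi>_S of
# Eq. (23), and v2 of Eqs. (22), (24), (29) numerically.
import numpy as np
from scipy.special import ellipeinc  # incomplete elliptic integral E(theta|m)

SIGMA = 0.33          # Gaussian width between parton and ridge-particle angles
T, TB = 0.283, 0.253  # GeV, the inverse slopes quoted in the text
TTILDE = T * TB / (T - TB)  # ~2.39 GeV, Eq. (20)

def S_of_phi(phi, b):
    """Surface segment S(phi,b); b in units of R_A."""
    w = 1.0 - b / 2.0
    h = np.sqrt(1.0 - b**2 / 4.0)
    alpha = 1.0 - w**2 / h**2
    # theta(phi) of Eq. (11), with a naive branch shift for phi > pi/2
    theta = lambda p: np.arctan(h / w * np.tan(p)) + np.pi * np.round(p / np.pi)
    return h * (ellipeinc(theta(phi + SIGMA), alpha)
                - ellipeinc(theta(phi - SIGMA), alpha))

def cos2phi_S(b, n=400):
    phi = np.linspace(0.0, np.pi / 2, n)
    s = S_of_phi(phi, b)
    return np.trapz(np.cos(2 * phi) * s, phi) / np.trapz(s, phi)

def v2(pT, b, m_h=0.0):
    """v2 for a hadron of mass m_h (GeV), with E_T replacing pT per Eq. (29)."""
    ET = np.sqrt(pT**2 + m_h**2) - m_h
    Z = np.exp(ET / TTILDE) - 1.0
    return cos2phi_S(b) / (1.0 / Z + 1.0)

# example: v2 at pT = 1 GeV/c for b = 0.7 R_A (roughly mid-central)
print(v2(1.0, 0.7))
\end{verbatim}
The factorized structure is manifest in the sketch: the impact parameter enters only through $\left<\cos2\phi\right>_S$ and the $p_T$ dependence only through $Z$, mirroring the remark made after Eq.\ (\ref{24}).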
These results raise a serious question about whether viscous hydrodynamics is the only acceptable description of heavy-ion collisions, if the reproduction of $v_2^h(\pt,b)$\ is the primary criterion for the success of a model. \begin{figure}[tbph] \hspace*{-.5cm} \includegraphics[width=.5\textwidth]{fig5.eps} \caption{(Color online) Same as in Fig.\ 4 for four centrality bins.} \end{figure} It is possible to further improve the agreement between the values of $v_2^h$ for pion and proton in Fig.\ 5 if those figures are replotted in accordance with the idea of quark number scaling (QNS), i.e., $v_2^h/n_h$ vs $E_T/n_h$, where $n_h$ is the number of constituent quarks in hadron $h$ \cite{mv1,kcg}. As we have considered QNS and its breaking in the recombination model before \cite{chy}, we do not revisit that problem here, especially since our main goal of using $v_2$ to constrain $T_B$ has already been accomplished. Having obtained the correct centrality dependence of $v_2^h(\pt,b)$, which is calculable, we now consider the centrality dependence of the inclusive spectra $\bar\rho_1^h(p_T,b)$. We note that the unknown normalization factors ${\cal N}_\pi(b)$ and ${\cal N}_p(b)$ in Eqs.\ (\ref{14}) and (\ref{15}) never enter into the calculation of $v_2^h(\pt,b)$\ because of cancellation, but for $\bar\rho_1^h(p_T,b)$ they must be reckoned with. As remarked after Eq.\ (\ref{7}), ${\cal N}_\pi(b)$ and ${\cal N}_p(b)$ are proportional to $C^2$ and $C^3$, respectively, due to $q\bar q$ and $qqq$ recombination. The magnitude $C$ of the thermal partons depends on $b$ in a way that cannot be reliably calculated. By phenomenology on the pion spectrum it was previously estimated for $p_T>1.2$ GeV/c \cite{hz}, but that is inadequate for our purpose here; moreover, ${\cal N}_\pi(b)$ and ${\cal N}_p(b)$ have different statistical factors that can depend on $b$ because of resonances. We give here direct parametrizations of the normalization factors in terms of $N_{part}$ \begin{eqnarray} {\cal N}_\pi(N_{part})&=&0.516 N_{part}^{1.05} , \label{31} \\ {\cal N}_p(N_{part})&=&0.149 N_{part}^{1.18} , \label{32} \\ u_0(N_{part}) &=& 2.8+0.003 N_{part} . \label{33} \end{eqnarray} The parameters are determined by fitting the centrality dependence to be shown, but the essence of our prediction is in the $p_T$ and $\phi$ dependencies, which are not adjustable. Using the above in Eqs.\ (\ref{14}) and (\ref{15}) we obtain the curves in Fig.\ 6 (a) pion and (b) proton for three centrality bins. They agree with the data from PHENIX \cite{sa} very well over a wide range of low $p_T$. In all those curves $T$ is kept fixed at 0.283 GeV, thus reaffirming our point that both pions and protons are produced by the same set of thermal partons despite the apparent differences in the shapes of their $p_T$ dependencies. \begin{figure}[tbph] \includegraphics[width=.45\textwidth]{fig6a.eps} \hspace{.5cm} \includegraphics[width=.45\textwidth]{fig6b.eps} \caption{Inclusive spectra at three centralities for (a) pion and (b) proton. The data are from Ref.\ \cite{sa}.} \end{figure} \section{Inclusive Ridge Distribution} It is now opportune for us to revisit the two-component description of the single-particle distribution\ and focus on the ridge component, in particular. As stated explicitly in Eq.\ (\ref{12}), the $\phi$ dependence separates the $B^h(p_T,b)$ and $\bar R^h(p_T,b)$ components, the former being described by Eqs.\ (\ref{16}) and (\ref{17}), the latter by Eqs.\ (\ref{18}) and (\ref{19}). 
Upon averaging over $\phi$, we have \begin{eqnarray} \bar\rho_1^h(p_T,b)=B^h(p_T,b)+\bar R^h(p_T,b) . \label{34} \end{eqnarray} \begin{figure}[tbph] \includegraphics[width=.45\textwidth]{fig7a.eps} \hspace{.5cm} \includegraphics[width=.45\textwidth]{fig7b.eps} \caption{Inclusive distributions for pion showing the base (B) component by dashed line and ridge (R) component by dash-dotted line for (a) 0-5\% and (b) 20-30\%. The solid line is their sum. The data are from Ref.\ \cite{sa}.} \end{figure} \noindent Since the exponential factors are the same for $h=\pi$ and $p$, let us consider only the pion distribution\ specifically. In Fig.\ 7 we show $B$ and $R$ components by dashed and dash-dotted lines, respectively, for (a) 0-5\% and (b) 20-30\%. It is in those figures that we exhibit the basic difference between our description of inclusive spectra and those of others. Inclusive ridge represented by $R$ is always present in the single-particle distribution\ whether or not an experiment chooses to do correlation measurement to examine the ridge. Semihard scattering is unavoidable in any nuclear collisions at high energy. Its effect on soft partons is therefore also unavoidable. We quantify the effect by the $R$ component which is determined by the azimuthal anisotropy that is well reproduced in Fig.\ 5. Here in Fig.\ 7 we see it rising above the $\phi$-independent base $B$ component when $p_T$ is higher than 1.4 GeV/c. It is a consequence of the recombination of enhanced thermal partons. For $p_T>3$ GeV/c in addition to the inclusive ridge the jet component of the semihard partons themselves manifests in the spectra in the form of thermal-shower recombination that characterizes the intermediate-$p_T$ region. Thus we have a smooth transition from low- to intermediate-$p_T$ regions by recognizing the importance of the inclusive ridge component. It is observed that the dash-dotted lines in Fig.\ 7 are not exactly straight because the ridge component is not exponential in $p_T$. However, for $p_T>1$ GeV/c, $\bar R^\pi(p_T,b)$ can be well approximated by pure exponential. In Fig.\ 8 we show by the solid line the $p_T$ dependence of $\bar R_0(p_T)$, defined in Eq.\ (\ref{19a}); it is the part of the ridge distribution s $\bar R^h(p_T,b)$ in Eqs.\ (\ref{18}) and (\ref{19}) that is common for $h=\pi$ and $p$ and is independent of $b$. From the values of $T$ and $T_B$ that we now know, we have $\tilde T=2.39$ GeV. The (red) dashed line is a straight-line approximation of the solid curve for $p_T>1$ GeV/c by \begin{eqnarray} \bar R_0(p_T)\approx R_0 e^{-p_T/T'}, \qquad T'=0.326\ {\rm GeV} . \label{36} \end{eqnarray} Thus the ridge distribution\ is harder than the inclusive distribution\ characterized by $T=0.283$ GeV. This is a property that is known from triggered ridges \cite{bia}, but now it is for untriggered inclusive ridge. \begin{figure}[tbph] \includegraphics[width=.45\textwidth]{fig8.eps} \caption{(Color online) The $p_T$ dependence of $\bar R_0(p_T)$ defined in Eq.\ (\ref{19a}), represented by the solid line. The (red) dashed line is a straight-line approximation for $p_T>1$ GeV/c, expressed by Eq.\ (\ref{36}).} \end{figure} The enhancement of $T'$ over $T$ is an important point to note. Physically, it means that the ridge is a consequence of the passage of semihard partons through the medium, whose energy losses enhance the thermal partons in the vicinities of the trajectories. 
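The straight-line approximation in Eq.\ (\ref{36}) can be checked with a few lines of code. The sketch below is our own numerical illustration: it assumes, following the description of Eq.\ (\ref{19a}) given in the next paragraph, that $\bar R_0(p_T)$ behaves like the difference of the two exponentials with inverse slopes $T$ and $T_B$; the fit window $1<p_T<3$ GeV/c is an arbitrary choice, and the exact form of Eq.\ (\ref{19a}) is not reproduced here.
\begin{verbatim}
import numpy as np

# Hedged check of Eq. (36): if R0(p_T) behaves like the difference of the
# exponentials with inverse slopes T and T_B (see the discussion of
# Eq. (19a) below), its effective inverse slope above 1 GeV/c should lie
# close to the quoted T' = 0.326 GeV.
T, T_B = 0.283, 0.253                       # GeV
p_T = np.linspace(1.0, 3.0, 50)             # GeV/c, illustrative window
R0 = np.exp(-p_T / T) - np.exp(-p_T / T_B)

slope, _ = np.polyfit(p_T, np.log(R0), 1)
print("effective T' =", round(-1.0 / slope, 3), "GeV")  # ~0.31-0.32 GeV
\end{verbatim}
The effective inverse slope indeed comes out larger than both $T$ and $T_B$ and close to the quoted $T'$, illustrating why the untriggered ridge is harder than the inclusive distribution.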
We know that the enhancement factor is $Z(p_T)$, which has the necessary $p_T$ dependence to render $v_2^h(p_T,b)$ to be in good agreement with the quadrupole data. In particular, the property that $Z(p_T)\to 0$ as $p_T\to 0$ is essential to guarantee that $v_2^h(p_T,b)\to 0$ in the same limit. The effect of $Z(p_T)$ at larger $p_T$ is to increase $T_B$ to $T'$. Although $Z(p_T)$ increases exponentially, its net effect on $\bar R_0(p_T)$ is suppressed by $e^{-p_T/T_B}$. The effective inverse slope $T'$ for $p_T>1$ GeV/c is larger than $T$ of the inclusive by 43 MeV, roughly the same as what Putschke reported on the first discovery of ridge, where the triggered ridge has $T'$ larger than that of the inclusive by 45 MeV \cite{jp}. There are, however, subtle differences between triggered and untriggered ridges. Experimentally, it is necessary to do correlation measurements to learn about the properties of the ridge, which is extracted by a subtraction scheme. The inverse slope $T'$ can be compared to the inclusive $T$ of the background. For single-particle distribution\ the only measurable inverse slope is $T$ for the inclusive spectrum. Theoretically, we assert that ridges do not disappear just because triggers are not used. The inverse slope $T'$ for $\bar R_0(p_T)$ cannot be measured directly. It is larger than both $T$ and $T_B$ because $\bar R_0(p_T)$ is the difference between the two exponentials for $\bar\rho_1$ and $B$, represented by the middle term in Eq.\ (\ref{19a}), which vanishes as $p_T\to 0$. A physically more sensible way to compare the various inverse slopes is to recognize that $T'$ is significantly larger than $T_B$ because of the enhancement effect due to semihard scattering, and that $T$ is the effective slope of the inclusive distribution, $B+\bar R$, that is measurable and is between $T_B$ and $T'$. Although our concern in this paper has been restricted to the midrapidity region, the physics of inclusive ridge can be extended to non-vanishing pseudo-rapidity $\eta$. In Ref.\ \cite{ch} a phenomenological relationship is found between the triggered ridge distribution\ in $\Delta\eta$ and the inclusive distribution\ in $\eta$ with the implication that there is no long-range longitudinal correlation. However, there can be transverse correlation due to transverse broadening of forward (or backward) soft partons as they move through the conical vicinity of the semihard partons. The enhancement of the thermal partons due to energy loss is just as we have described in this paper. Indeed, the term representing the enhanced $p_T$ distribution\ in Ref.\ \cite{ch} is essentially identical to that expressed in Eq.\ (\ref{19a}). Similar consideration has also been used in the explanation of the ridge structure found at LHC \cite{cms,hy2}. In this paper we have presented the most detailed quantitative analysis of the RHIC data in the formalism of inclusive ridge that sets the foundation for the ridges at $|\Delta\eta|>0$. \section{Conclusion} Our study of inclusive ridge distribution s has consolidated earlier exploratory work with firm phenomenological support, and therefore succeeded in extending the hadronization formalism from intermediate-$p_T$ region to below 2 GeV/c, exposing thereby an aspect of physics that has not been included in other approaches. 
The effect of semihard scattering on soft partons is accounted for by the ridge component, whose azimuthal behavior is totally characterized by $S(\phi,b)$; it is a calculable quantity bounding the surface segment through which semihard partons can contribute to the formation of a ridge particle at $\phi$. In an earlier paper \cite{hz} we showed the connection between $S(\phi,b)$ and the dependence of the triggered ridge yield on the trigger angle $\phi_s$ relative to the reaction plane. Now, we have exhibited the central role that $S(\phi,b)$ plays in determining the azimuthal quadrupole $v_2^h(p_T,b)$ of the inclusive distributions. Thus the inclusive ridge distribution\ that we have advanced in this series of works serves as a bridge between the single-particle distribution\ and the two-particle correlation. Since semihard partons are copiously produced before thermalization is complete, their effect is an aspect of physics that should not be ignored. The success in fitting $v_2^h(p_T,b)$ for all central and mid-central collisions, and for both $\pi$ and $p$, with the single parameter $T_B$ therefore supports a claim of relevance comparable to that of viscous hydrodynamics. Another attribute of our approach is that it unifies the production of pions and protons in one hadronization scheme based on the recombination of enhanced thermal partons, so that their spectra have the same inverse slope $T$ despite apparent differences in the low-$p_T$ data. That same scheme, when extended to $p_T\sim 3$ GeV/c, readily explains the observed large $p/\pi$ ratio. Since the ridge components in our formalism are the same for $\pi$ and $p$, we predict that the $p/\pi$ ratio is also large in the triggered ridge, as it is in the inclusive one. The property of the ridge that most investigators are concerned with is the large $\Delta\eta$ range found in correlation experiments. That aspect of the problem has been addressed in Ref.\ \cite{ch}. Our focus in this paper is on the hidden aspect of the ridge that is not easily detected, but is pervasive because it is in the inclusive distribution. The phenomenological success found in this paper puts the idea on solid footing. If the concept of inclusive ridge is important at RHIC, then its relevance at LHC should be even more pronounced. Since the structure of $v_2^h(p_T,b)$ expressed in terms of $S(\phi,b)$ in Eq.\ (\ref{22}) is independent of the initial density, viscosity, and even the collision energy, apart from $\tilde T$, we would expect $v_2$ measured at LHC to be essentially similar to what is shown in Fig.\ 5 for both $\pi$ and $p$. A preliminary look at the data from ALICE \cite{aam} leads us to believe that such an expectation may not be unrealistic. \section*{Acknowledgment} This work was supported, in part, by the U.\ S.\ Department of Energy under Grant No. DE-FG02-96ER40972, by the Scientific Research Foundation for Young Teachers, Sichuan University under No. 2010SCU11090, and by the Key Laboratory of Quark and Lepton Physics under Grant No. QLPL2009P01.
\section{Introduction} Instead of using logical formulas to represent meaning of natural language, recent trends use Abstract Meaning Representation (AMR) instead: a graph based solution where nodes correspond to atomic meaning units and edges specify argument position. Several attempts have been made at constructing them compositionally, and recently the idea of using s-graphs with the HR-algebra \citep{koller15s-graphs} has been simplified to reduce the number of options when parsing \citep{groschwitz17am-algebra}% . This apply-modify algebra (AM-algebra) is a linguistically plausible graph algebra with two classes of operations, both of rank two: the apply operation is used to combine a predicate with its argument; the modify operation is used to modify a predicate. As terms it generates annotated s-graphs (as-graphs), which are s-graphs annotated with a more detailed type description. While the AM-algebra correctly handles relative clauses and complex cases of coordination, it cannot parse reflexive sentences like: ``The raven washes herself.'' that lead to AMRs resembling the ones in \autoref{fig:amrself}. This paper proposes a change to the type system of the AM-algebra and a change to the definition of s-graphs underlying the algebra to facilitate this. \begin{figure}[tb] \begin{subfigure}[b]{0.5\textwidth} \centering \digraph[height=4cm]{otherwashingraven}{ margin=0 wash -> raven [xlabel="ARG 0 "] wash -> raven2 [label=" ARG 1"] raven2 [label=raven] } \caption{}\label{fig:amrother} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \digraph[height=4cm]{selfwashingraven}{ margin=0 wash:sw -> raven:nw [xlabel="ARG 0 "] wash:se -> raven:ne [label=" ARG 1"] } \caption{}\label{fig:amrself} \end{subfigure} \caption{AMRs for ``The raven washes the raven'' (\subref{fig:amrother}) and ``The raven washes herself'' (\subref{fig:amrself})} \end{figure} \begin{figure} \centering \begin{minipage}{0.3\textwidth} \centering \begin{forest} sn [ \app{s} [ \app{o} [ $G_{wash}$ ] [ $G_{raven}$ ] ] [ $G_{raven}$ ] ] \end{forest} \caption{Correct way of parsing ``The raven washes the raven''.} \label{fig:treeraven} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \begin{forest} sn [ \app{s} [ \app{o} [ $G_{wash}$ ] [ $G_{self}$ ] ] [ $G_{raven}$ ] ] \end{forest} \caption{Preferred way of parsing ``The raven washes herself''.} \label{fig:treeself} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \digraph[height=2.5cm,trim=0cm 0.2cm 0cm 0.7cm]{washself}{ margin=0; s [label=<<FONT COLOR="red">S</FONT>>] wash [label=<wash<BR/><FONT COLOR="red">rt</FONT>>] wash:sw -> s:nw [xlabel="ARG 0 "] wash:se -> s:ne [label=" ARG 1"] } \caption{AS-graph for ``to wash oneself''.} \label{fig:amroneself} \end{minipage} \end{figure} \section{The Proposal in Short} \label{sec:short} Amongst many constructs, the AM-algebra can parse an English SVO sentence like ``The raven washes the raven'' according to the term given in \autoref{fig:treeraven} and it should also be able to parse reflexive sentences like ``The raven washes herself'' in a similar manner, as shown in \autoref{fig:treeself}. In this solution, the only vertex of $G_{self}$ has no label, as the necessary label is determined in the \app{s} step, but instead has an additional \type{s}-source label on its root node (\autoref{fig:selflex}). The \app{o} operation renames \text{rt}{} to \type{o}, composes the result with $G_{wash}$ and finally forgets the \type{o}-source label. 
Because of the additional \type{s}-source label that is on the same node as the \type{o}-source label, the \type{o} and \type{s}-sources of $G_{wash}$ are merged, resulting in \autoref{fig:amroneself}. Intuitively, the additional \type{s}-label marks that whatever position $G_{self}$ is applied to, should merge with the subject position. The original s-graphs used in the AM-algebra, however, disallow one vertex having multiple source labels, making the lexical item $G_{self}$ needed for this derivation illegal. Secondly, the AM-algebra demands that the type of the second argument of \app{$\alpha$} is strictly equal to the type expected by the first argument at its $\alpha$-source. Having an extra \type{s}-label, $G_{self}$ violates this constraint. The first part of the remainder of this paper addresses the issue regarding s-graphs, while the second part covers the type system of the AM-algebra. \begin{figure}[tb] \begin{subfigure}[b]{0.333\textwidth} \centering \digraph[height=2.5cm,trim=0cm -1.5cm 0cm -1.5cm]{raven}{ margin=0 raven [label=<raven<BR/><FONT COLOR="red">rt</FONT>>] } \caption{$G_{raven}$} \end{subfigure}% \begin{subfigure}[b]{0.333\textwidth} \centering \digraph[height=2.5cm]{wash}{ margin=0 s [label=<<FONT COLOR="red">S</FONT>>] o [label=<<FONT COLOR="red">O</FONT>>] wash [label=<wash<BR/><FONT COLOR="red">rt</FONT>>] wash -> s [xlabel="ARG 0 "] wash -> o [label=" ARG 1"] } \caption{$G_{wash}$} \end{subfigure}% \begin{subfigure}[b]{0.333\textwidth} \centering \digraph[height=2.5cm,trim=0cm -1.5cm 0cm -1.5cm]{herself}{ margin=0 rt [label=<<FONT COLOR="red">rt</FONT>, <FONT COLOR="red">S</FONT>>] } \caption{$G_{self}$} \label{fig:selflex} \end{subfigure}% \caption{Example lexicon} \label{fig:lexicon} \end{figure} \section{S-graphs} \subsection{Original Definition of S-graphs} \label{sec:original_definition} This section reiterates the definitions by \citet{courcelle12graphs} of s-graphs and parallel-composition and concludes with a small remark. As opposed to the more precise definition of s-graphs and parallel-composition that \citet{courcelle12graphs} give in Section~2.3, this paper works with the simpler one from Section~1.4.2: \begin{quote} We consider (abstract) directed or undirected graphs, possibly with multiple edges. They form the set $\ensuremath{\mathcal{J}}$. For a graph in $\ensuremath{\mathcal{J}}$, $E_G$ denotes its set of edges (and $V_G$ its set of vertices). We let $\ensuremath{\mathcal{A}}$ be a countable set of labels (\dots) that will be used to distinguish particular vertices. These distinguished vertices will be called \emph{sources}, and $\ensuremath{\mathcal{A}}$ is the set of \emph{source labels}. (This notion of source is unrelated with edge directions.) A \emph{graph with sources}, or \emph{s-graph} in short, is a pair $G = \langle G^\circ, src_G \rangle$ where $G^\circ \in \ensuremath{\mathcal{J}}$ and $src_G$ is a bijection from a finite subset $\tau(G)$ of $\ensuremath{\mathcal{A}}$ to a subset of $V_{G^\circ}$. We call $\tau(G)$ the \emph{type} of $G$ and $src_G(\tau(G))$ the set of its sources. The vertex $src_G(a)$ is called the \emph{a-source} of $G$; its \emph{source label}, also called its \emph{source name}, is $a$. We let $\ensuremath{\mathcal{JS}}$ denote the set of s-graphs; (\dots) We define operations on $\ensuremath{\mathcal{JS}}$: first a binary operation called the \emph{parallel-composition}, (\dots). 
For $G, H \in \ensuremath{\mathcal{JS}}$ we let \[ G \mathbin{\Vert} H := \langle G^\circ \cup H'^\circ, src_G \cup src_{H'}\rangle \] where $H'$ is isomorphic to $H$ and is such that \begin{align*} &E_{H'} \cap E_G = \emptyset \\ &src_{H'}(a) = src_G(a) \text{ if } a \in \tau(G) \cap \tau(H'), \\ &V_{H'} \cup V_G = \{src_G(a) \mathbin{\vert} a \in \tau(G) \cap \tau(H') \}. \end{align*} This operation ``glues'' $G$ and a disjoint copy of $H$ by fusing their sources having the same names. \end{quote} Note that \citet{courcelle12graphs} define s-graphs with $src_G$ a bijection. This is somewhat misleading, as ``a bijection (\dots) to a subset of $V_{G^\circ}$'' is exactly the same as saying that $src_G$ is injective to $V_{G^\circ}$. The subset to which $src_G$ is then bijective is trivially $\img(src_G)$. \subsection{New Graphs with Sources} \label{sec:new_definition} To facilitate the preferred parsing method described in \autoref{sec:short}, this section gives a definition that is very similar to the one of s-graphs by \citet{courcelle12graphs}, but allows for one vertex to have multiple source labels. Secondly, a new definition of parallel-composition is given that is equivalent to the old one when composing s-graphs, but also correctly handles the composition of graphs with sources that have more than one label. Finally, a proof is given for this equivalence of definitions. \subsubsection{Defining MS-graphs} \begin{definition}[Graphs with possibly multiple source labels per vertex] Let $\ensuremath{\mathcal{A}}$ be a fixed countable set of \emph{names} or \emph{labels}. Let $\tau(G) \subseteq \ensuremath{\mathcal{A}}$, the \emph{type} of $G$, be a finite subset of $\ensuremath{\mathcal{A}}$, denoting the labels used in $G$. Instead of $src_G$ being an \emph{injective} function (as in the original definition), let $src_G: \tau(G) \to V_{G^\circ}$ be any function. A \emph{graph with possibly multiple source labels per vertex} (\emph{ms-graph}) is a pair $G = \langle G^\circ, src_G \rangle$. \end{definition} This definition allows for one vertex to have multiple labels, but crucially a label still uniquely specifies a vertex. Moreover, if $src_G$ happens to be injective, this definition of ms-graphs reduces to the definition of s-graphs, thus all s-graphs are ms-graphs. Finally, let $slab_G\colon \img(src_G) \to \mathcal{P}\left(\tau(G)\right)$ be the inverse of $src_G$. If $S$ is a set, we write $Slab_G(S) := \bigcup \{ slab_G(s) \mathbin{\vert} s \in S \cap \dom(slab_G) \}$. \subsubsection{Redefining parallel-composition} This section shows the definition by \citet{courcelle12graphs} of parallel-composition does not work for ms-graphs and gives a new definition. The example in \autoref{fig:compprob} together with the following corollary shows the original definition of parallel-composition is contradictory when used on ms-graphs. \begin{corollary}\label{cor:disjoint} Let $G$ and $H$ be s-graphs, $\abs{\tau(G)} \geq 1$ and, without loss of generality, let $a, b \in \tau(G)$. If $src_G(a) \neq src_G(b)$% , then $src_{G \mathbin{\Vert} H}(a) \neq src_{G \mathbin{\Vert} H}(b)$, because $\restr{src_{G \mathbin{\Vert} H}}{\tau(G)} = src_G$. \end{corollary} Crucially, \autoref{cor:disjoint} states that all sources that were distinct within $G$ are still distinct in $G \mathbin{\Vert} H$. This poses a problem for composing the graphs in \autoref{fig:compprob}, as $G_{separate} \mathbin{\Vert} G_{single}$ should be isomorphic to $G_{single}$. 
By \autoref{cor:disjoint}, however, it must have at least as many nodes as $G_{separate}$, which is a contradiction. \begin{figure}[tb] \begin{subfigure}[b]{0.5\textwidth} \centering \digraph[width=\textwidth]{separate}{ margin=0.2 a [label=<<FONT COLOR="red">A</FONT>>] b [label=<<FONT COLOR="red">B</FONT>>] } \caption{$G_{separate}$} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \digraph[width=\textwidth]{single}{ margin=0.6 rt [label=<<FONT COLOR="red">A</FONT>, <FONT COLOR="red">B</FONT>>] } \caption{$G_{single}$} \end{subfigure}% \caption{Two graphs that would pose a problem when composed using the original definition of parallel-composition.} \label{fig:compprob} \end{figure} For the more robust definition of parallel-composition of $G$ and $H$, an equivalence relation on the disjoint union of the vertices of $G$ and $H$ is needed that makes sources with overlapping labels equivalent. \begin{definition} \label{def:eq} Let $G$ and $H$ be any two ms-graphs and let $H'$ be isomorphic to $H$ such that $H'$ shares no vertices or edges with $G$. Let $V_\sim := V_G \cup V_{H'}$ and let $R$ be a binary relation on $V_\sim$ such that $\forall g \in V_G, h \in V_{H'}$: \begin{equation*} (g, h) \in R \iff \exists \alpha \in \ensuremath{\mathcal{A}} \colon g = src_G(\alpha) \land h = src_{H'}(\alpha) \end{equation*} Let $\sim_{G \mathbin{\Vert} H}$ (or $\sim$ if $G$ and $H$ are understood from context) be the symmetric and transitive closure of $R$ and additionally, for all $v \in V_\sim$: let $(v, v) \in \ \sim$. \end{definition} Note that $\sim$ is an equivalence relation on $V_\sim$. \begin{definition}[Parallel-composition of ms-graphs] \label{def:comp} Let $G$, $H$, $H'$, $V_\sim$ and $\sim$ be as in \autoref{def:eq}. Let $X \subseteq V_\sim$ be a cross-section of $\sim$ as in the definition of quotient graphs by \citet[Definition~2.15]{courcelle12graphs}, which will also be used here. Furthermore, let: \begin{align*} \tau(G \mathbin{\Vert} H) &:= \tau(G) \cup \tau(H)\footnotemark \\ [x] &:= \{v \in V_\sim \mathbin{\vert} v \sim x \} & \forall x \in X \end{align*} \footnotetext{Note that isomorphic ms-graphs have the same type.} $slab_{G \mathbin{\Vert} H}$: \begin{align*} X &\to \mathcal{P}\left(\tau(G \mathbin{\Vert} H)\right) \\ x &\mapsto Slab_G([x]) \cup Slab_{H'}([x]) \end{align*} $src_{G \mathbin{\Vert} H}$: \begin{align*} \tau(G \mathbin{\Vert} H) &\to X \\ \alpha &\mapsto \begin{cases} x &\text{if } \alpha \in slab_{G \mathbin{\Vert} H}(x) \\ \text{undefined} \end{cases} \end{align*} Then the parallel-composition of $G$ and $H$ is: \begin{equation*} G \mathbin{\Vert} H := \langle \left(\left( G^\circ \cup H'^\circ \right) / \sim\right)_X, src_{G \mathbin{\Vert} H} \rangle \end{equation*} \end{definition} The following theorem suffices to show this definition of parallel-composition is well-defined. \begin{theorem} $src_{G \mathbin{\Vert} H}$ is well-defined. \end{theorem} \begin{proof} Let $x, y \in X$ and $\alpha \in \tau(G \mathbin{\Vert} H)$. 
If $\alpha \in slab_{G \mathbin{\Vert} H}(x)$ and $\alpha \in slab_{G \mathbin{\Vert} H}(y)$, then there exist $u, v \in V_\sim$ such that $u \sim x$, $v \sim y$ and one of the following two cases holds: \begin{enumerate} \item $slab_G(u) \ni \alpha \in slab_G(v)$ or $slab_{H'}(u) \ni \alpha \in slab_{H'}(v)$ \item $slab_G(u) \ni \alpha \in slab_{H'}(v)$ or $slab_{H'}(u) \ni \alpha \in slab_{G}(v)$ \end{enumerate} The first case implies that either $u = src_G(\alpha) = v$ or $u = src_{H'}(\alpha) = v$, thus by transitivity of $\sim$ we have $x \sim y$. Without loss of generality, let us assume $u \in G$ and $v \in H'$ in the second case. Then $u = src_G(\alpha)$ and $v = src_{H'}(\alpha)$, thus by \autoref{def:eq} $u \sim v$ and by transitivity of $\sim$ we have $x \sim y$. $X$, however, is a cross-section of $\sim$, thus $x \sim y$ implies $x = y$. \
\end{proof} Note that, like the definition of parallel-composition by \citet{courcelle12graphs}, \autoref{def:comp} only determines the parallel-composition of two ms-graphs up to isomorphism. \subsection{Proof of Equivalence of Definitions} The following shows that \autoref{def:comp} reduces to the original definition of parallel-composition when composing s-graphs. \begin{proof} Let $G$, $H$, $H'$, $V_\sim$ and $\sim$ be as in \autoref{def:eq}, and specifically let $G$ and $H$ be s-graphs. In this proof $A \cong B$ denotes that $A$ is isomorphic to $B$, and $G \mathbin{\Vert} H$ refers to parallel-composition as in \autoref{def:comp}. Finally, let $H''$ be what \citeauthor{courcelle12graphs} denote by $H'$. What must be shown is that \begin{equation*} \left(\left( G^\circ \cup H'^\circ \right) / \sim\right)_X \cong G^\circ \cup H''^\circ \end{equation*} with some isomorphism $c$, such that \begin{equation*} c \circ src_{G \mathbin{\Vert} H} = src_G \cup src_{H''}. \end{equation*} By definition of s-graphs, every vertex of $G$ and $H'$ has at most one source and thus for every source in $G$ there is at most one equivalent vertex, which is the vertex of $H'$ with the same source label, if it exists. Thus $\forall g, g' \in V_{G}$: \begin{equation} \label{eq:G} g \sim g' \iff g = g', \end{equation} similarly $\forall h, h' \in V_{H'}$: \begin{equation} \label{eq:H} h \sim h' \iff h = h' \end{equation} and finally $\forall g \in V_G, h \in V_{H'}$: \begin{equation} \label{eq:GH} g \sim h \iff \exists \alpha \in \ensuremath{\mathcal{A}} \colon g = src_G(\alpha) \land h = src_{H'}(\alpha). \end{equation} Let $K^\circ$ be a subgraph of $\left(\left( G^\circ \cup H'^\circ \right) / \sim\right)_X$ (or $(G \mathbin{\Vert} H)^\circ$) such that \begin{align*} V_K &:= \{x \in X \mathbin{\vert} \exists h \in V_{H'} \colon h \sim x\} \\ E_K &:= E_{H'} \\ vert_K(e) &:= (x, y) &\forall e \in E_K \end{align*} with $x, y \in X$, $x \sim h$, $y \sim h'$ and $vert_{H'}(e) = (h, h')$\footnote{Such $x$ and $y$ exist and are unique, because $X$ is a cross-section of $V_\sim$.}. In other words, $K^\circ$ is the part of $(G \mathbin{\Vert} H)^\circ$ that came from $H'$. Let $src_K := \restr{src_{G \mathbin{\Vert} H}}{\tau(H')}$ and let $K := \langle K^\circ, src_K \rangle$. \autoref{eq:G}, \ref{eq:H} and \ref{eq:GH} together imply $\abs{V_K} = \abs{V_{H'}}$ and thus by construction $K \cong H'$. The correct choice of $X$ makes $K$ satisfy all requirements of $H''$, and thus $H''$ can be chosen equal to it, making $c$ equal to the identity and proving the equivalence of definitions. The remainder of this proof describes this choice of $X$, verifies that $K$ then satisfies the requirements for $H''$ and finally formally checks equality. By \autoref{eq:G}, we can choose $X$ such that $V_G \subseteq X$ and therefore $slab_{G \mathbin{\Vert} H}(g)$ is defined for all $g \in V_G$.
By \autoref{eq:GH}% , we have $\forall g \in V_G$: \begin{equation*} Slab_G([g]) \supseteq Slab_{H'}([g]) \end{equation*} and thus \begin{align*} slab_{G \mathbin{\Vert} H}(g) &= Slab_G([g]) \cup Slab_{H'}([g]) & \forall g \in V_G \\ &= Slab_G([g]) \\ &= slab_G(g) & \text{by \autoref{eq:G}.} \end{align*} Therefore, $\forall \alpha \in \tau(G), g \in V_G$: \begin{align*} \alpha \in slab_{G \mathbin{\Vert} H}(g) &\implies src_{G \mathbin{\Vert} H}(\alpha) = g &\text{by \autoref{def:comp}}\\ \alpha \in slab_{G}(g) &\implies src_{G \mathbin{\Vert} H}(\alpha) = g \\ src_G(\alpha) = g &\implies src_{G \mathbin{\Vert} H}(\alpha) = g \\ \restr{src_{G \mathbin{\Vert} H}}{\tau(G)} &= src_G. \autotag{eq:restrG} \end{align*} This paragraph checks $K$ is suitable as $H''$. By choice of $H'$, we have \begin{equation*} E_{K} \cap E_G = \emptyset. \end{equation*} By definition of $K$ and \autoref{eq:restrG}, we have $\forall \alpha \in \tau(G) \cap \tau(K)$: \begin{equation*} src_K(\alpha) = src_{G \mathbin{\Vert} H}(\alpha) = src_G(\alpha). \end{equation*} For the last condition, \begin{equation*} V_G \cap V_K = \{ g \in V_G \mathbin{\vert} \exists h \in V_{H'} \colon h \sim g \} \end{equation*} by definition of $V_K$. \autoref{eq:GH} implies $\forall g \in V_G$: \begin{align*} g \in V_G \cap V_K &\iff \exists h \in V_{H'}, \alpha \in \ensuremath{\mathcal{A}} \colon g = src_G(\alpha) \land src_{H'}(\alpha) = h \end{align*} thus \begin{align*} V_G \cap V_K &= \{ src_G(\alpha) \mathbin{\vert} \alpha \in \tau(G) \land \left[\exists h \in V_{H'} \colon src_{H'}(\alpha) = h\right] \} \\ V_G \cap V_K &= \{ src_G(\alpha) \mathbin{\vert} \alpha \in \tau(G) \cap \tau(H') \} \\ V_G \cap V_K &= \{ src_G(\alpha) \mathbin{\vert} \alpha \in \tau(G) \cap \tau(K) \} \end{align*} for $\tau(K) = \tau(H')$. Finally, let us check $(G \mathbin{\Vert} H)^\circ = G^\circ \cup K^\circ$ and $src_{G\mathbin{\Vert} H} = src_G \cup src_K$. $K^\circ \subseteq (G \mathbin{\Vert} H)^\circ$ by choice of $K^\circ$ and similarly $G^\circ \subseteq (G \mathbin{\Vert} H)^\circ$ by choice of $X$, thus $(G \mathbin{\Vert} H)^\circ \supseteq G^\circ \cup K^\circ$. For the inverse, we must check that $\forall x \in X$: \begin{equation} \label{eq:supset} x \not \in V_G \to x \in V_k. \end{equation} $X$ is a cross-section of $\sim$, thus it is a subset of $V_{\sim} = V_G \cup V_{H'}$, thus $\forall x \in X \colon x \in V_G \lor x \in V_{H'}$. If $x \in V_G$, then \autoref{eq:supset} holds. If $x \not \in V_G$, then $x \in V_{H'}$ and therefore $\exists h \in V_{H'} \colon h \sim x$, namely $h = x$ and thus $x \in V_{K}$, proving that $X = V_G \cup V_K$. For the edges: \begin{align*} E_{(G \mathbin{\Vert} H)^\circ} &= E_{G^\circ \cup H'^\circ} & \text{by definition of quotient graphs} \\ &= E_{G} \cup E_{H'} \\ &= E_{G} \cup E_{K} & \text{by choice of } E_K. \end{align*} Verifying $vert_{(G \mathbin{\Vert} H)^\circ} = vert_G \cup vert_{K}$ it too tedious for the current proof, but boils down to showing that for all $e \in E_{H'}$ either $vert_{H'}(e) = vert_K(e)$ or one or both of the vertex instances of $vert_{H'}(e)$ are not in $X$, but then they are replaced with their unique equivalent $x \in X$, exactly as in the definition of $vert_K$. This shows that also $(G \mathbin{\Vert} H)^\circ \subseteq G^\circ \cup K^\circ$ and thus $(G \mathbin{\Vert} H)^\circ = G^\circ \cup K^\circ$. 
Because of this equality and because $\restr{src_{G \mathbin{\Vert} H}}{\tau(K)} = src_K$ by definition and $\restr{src_{G \mathbin{\Vert} H}}{\tau(G)} = src_G$ (\autoref{eq:restrG}), we also have $src_{G \mathbin{\Vert} H} = src_G \cup src_K$. \end{proof} \section{Type-System of the AM-algebra} A graph type of an as-graph $g$ annotates each source label $\alpha$ with the type of the graph that can be used as argument to $\app{$\alpha$}\left(g, - \right)$ \citep[Definition~3.1]{groschwitz17am-algebra}. Note that using ms-graphs allows multiple and thus differing type restrictions when apply is used on a source node, but because the desired source name must be specified during application, this does not lead to contradictions. What \emph{is} problematic in combining $G_{wash}$ with $G_{self}$ as shown in \autoref{fig:treeself}, is that the graph type of $G_{wash}$ expects $G_{self}$ to have empty type, but $G_{self}$ is of type \type{s}, rendering $\app{s}\left(G_{wash}, G_{self}\right)$ undefined, according to Definition~3.3 of \citet{groschwitz17am-algebra}, cited below. This section first recounts said Definition~3.3, lists the problems that arise when relaxing it and finally proposes changes that take these problems into account. \subsection{Original Definition of Apply Operation} \begin{quote} \begin{quotedef}{3.3}[Apply operation (\textsc{App})] \ \\ Let $\mathcal{G}_1 = \left(\left(g_1, S_1\right), \left(T_1, R_1\right)\right)$, $\mathcal{G}_2 = \left(\left(g_2, S_2\right), \left(T_2, R_2\right)\right)$ be as-graphs. Then we let $\app{$\alpha$}\left(\mathcal{G}_1, \mathcal{G}_2\right) = \left(\left(g', S'\right), \left(T', R'\right)\right)$ such that \begin{align*} (g', S') &= f_\alpha( (g_1, S_1) \mathbin{\Vert} \text{ren}_{\{\text{rt} \mapsto \alpha \}}(\text{ren}_{R(\alpha)}( (g_2, S_2)))) \\ T' &= (T_1 \setminus \{\alpha\}) \cup (T_2 \circ \overline{R_1 (\alpha)^{-1}})\\ R' &= (R_1 \setminus \{\alpha\}) \cup (R_2 \circ \overline{R_1 (\alpha)^{-1}}) \end{align*} if and only if \begin{enumerate} \item $\mathcal{G}_1$ actually has an $\alpha$-source to fill, i.e. $\alpha \in \dom(T_1)$ \item \label{cond:original} $\mathcal{G}_2$ has the type $\alpha$ is looking for, i.e. $T_1(\alpha) = (T_2, R_2)$, and \item $T', R'$ are well-defined (partial) functions; \end{enumerate} otherwise $\app{$\alpha$}(\mathcal{G}_1, \mathcal{G}_2)$ is undefined. \end{quotedef} \end{quote} \subsection{Problems} The only place where this definition has to be relaxed, is in Condition~\ref{cond:original}. To keep the type system functional, this relaxation should not be too big: the type of the output graph using the original definition must not be violated. Simply allowing $\mathcal{G}_2$ to have any extra annotation that $\mathcal{G}_1$ already contains (conform the modify operation \citep[Definition~3.4]{groschwitz17am-algebra}) allows the term in \autoref{fig:treeself} and does not change the output type. At first this seems fine, but there are three problems: \begin{enumerate} \item \label{prob:wash} This would also allow \type{s}-application of $G_{wash}$ to itself, which is definitely not desirable, as intuitively transitive verbs should only be allowed to combine with entities and not with other verbs. \item \label{prob:self} Moreover, this would allow \type{s}-application of $G_{self}$ to something, which is undesirable because the \type{s}-label in $G_{self}$ is there only to signify a merge with the subject of the sentence-to-be, not to label a to-be-filled argument slot. 
\item \label{prob:app} Finally, this would allow something to be \type{s}-applied to $G_{self}$, making the extra \type{s}-label redundant. This \emph{could} be undesirable, as the spirit of the AM-algebra is to reduce the number of terms leading to the same AMR\footnote{ This all hinges around whether $ren_{\{\text{rt} \mapsto \alpha \}}(g)$ is defined if $\alpha \in \tau(g)$. For if it is not defined, then the AM-algebra implicitly disallows $\alpha$-application of something to a graph with an $\alpha$-source that is not renamed, which indicates Problem~\ref{prob:app} is indeed undesirable, but also implicitly disallowed. \citeauthor{courcelle12graphs}, however, only define a simultaneous rename operation where labels \emph{swap} position, which would lead to weird but not explicitly prevented results. \citeauthor{groschwitz17am-algebra} do not bother specifying whether their rename works the same. }. \end{enumerate} All three problems signify that any additional labels of the root node should be treated differently than the source labels of other nodes. This does not only involve relaxing Condition~\ref{cond:original}, but also requires adding conditions. \subsection{Proposed Changes} For brevity, let $rlab(G) := slab_{G}\left(src_{G}(\text{rt})\right) \setminus \{\text{rt}\}$ for any ms-graph, the set of \emph{additional root labels} of $G$. Problem~\ref{prob:self} is simply solved by adding the following condition: \begin{enumerate} \setcounter{enumi}{3} \item \label{cond:notaddroot} $\alpha \notin rlab(\mathcal{G}_1)$, i.e. $\alpha$ is not an additional root label of $\mathcal{G}_1$, \end{enumerate} or instead, if Problem~\ref{prob:app} must be accounted for explicitly: \begin{enumerate} \setcounter{enumi}{3} \item $\alpha \notin rlab(\mathcal{G}_1) \cup rlab(\mathcal{G}_2)$, i.e. $\alpha$ is an additional root label of neither $\mathcal{G}_1$ nor $\mathcal{G}_2$. \end{enumerate} Problem~\ref{prob:wash} can be solved by making sure not to relax Condition~\ref{cond:original} too much. A safe choice would therefore be to relax Condition~\ref{cond:original} as little as possible, while still reaching our goal. The minimum relaxation needed for the term in \autoref{fig:treeself} to be legal, would be to allow at most one additional \type{s}-label at the root. This, however, would be tailored too much to the specific choice of label and number of additional labels. Instead, the minimum relaxation that does not specify the (number of) additional root labels, is the replacement of Condition~\ref{cond:original} by the following conditions: \begin{enumerate}[2a] \item \label{cond:ignoretype} $T_1(\alpha) = \left(T_2 \setminus rlab(g_2), R_2 \setminus rlab(g_2)\right)$, i.e. apart from its additional root labels, $\mathcal{G}_2$ has the type $\alpha$ is looking for, \item \label{cond:subtype} $\restr{T_2}{rlab(g_2)} \subseteq T_1$ and $\restr{R_2}{rlab(g_2)} \subseteq R_1$, i.e. the additional root labels of $\mathcal{G}_2$ do not change the type of $\mathcal{G}_1$. \end{enumerate} This changed \textsc{App}{} operation is trivially equivalent to the original one when only using s-graphs: for any s-graph $g$, $rlab(g) = \emptyset$, by definition of s-graphs. Thus Condition~\ref{cond:notaddroot} and Condition~\ref{cond:subtype} become tautologies% , and Condition~\ref{cond:ignoretype} reduces to Condition~\ref{cond:original} of the original definition. 
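To make the graph-level side of the proposal concrete, the following Python sketch (our own illustration, not part of the formal development; the union-find helper, the vertex-renaming scheme and the unlabelled edge representation are simplifications) computes the parallel-composition of two ms-graphs by building the equivalence classes of \autoref{def:eq} and picking one representative per class, as in \autoref{def:comp}:
\begin{verbatim}
from dataclasses import dataclass

# Illustrative ms-graphs: src maps source labels to vertices and need not
# be injective.  Edges are kept as plain (head, tail) pairs.
@dataclass
class MSGraph:
    vertices: set
    edges: list      # list of (u, v) pairs
    src: dict        # source label -> vertex

def parallel_composition(G, H):
    # Disjoint copy H' of H, obtained by tagging its vertices.
    ren = {v: ("H", v) for v in H.vertices}
    Hv, Hsrc = set(ren.values()), {a: ren[v] for a, v in H.src.items()}
    Hedges = [(ren[u], ren[v]) for u, v in H.edges]

    # Union-find over V_G + V_H', seeded by the relation R whose
    # symmetric-transitive closure is ~.
    parent = {v: v for v in G.vertices | Hv}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a in set(G.src) & set(Hsrc):
        parent[find(Hsrc[a])] = find(G.src[a])   # fuse same-label sources

    # Cross-section X: one representative (the find-root) per ~-class;
    # the result is only fixed up to isomorphism, so any choice will do.
    rep = {v: find(v) for v in parent}
    vertices = set(rep.values())
    edges = [(rep[u], rep[v]) for u, v in G.edges + Hedges]
    src = {a: rep[v] for a, v in list(G.src.items()) + list(Hsrc.items())}
    return MSGraph(vertices, edges, src)

# The problematic example: G_separate has two labelled vertices,
# G_single carries both labels on one vertex; their composition
# collapses onto a single vertex, as required.
G_separate = MSGraph({"a", "b"}, [], {"A": "a", "B": "b"})
G_single   = MSGraph({"r"}, [], {"A": "r", "B": "r"})
print(parallel_composition(G_separate, G_single))
\end{verbatim}
When both arguments are ordinary s-graphs the equivalence classes are trivial except for the label-induced fusions, so the sketch reproduces the behaviour of the original parallel-composition, mirroring the equivalence proof above.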
\section{Conclusion} The AM-algebra by \citet{groschwitz17am-algebra} is powerful and linguistically plausible, but not powerful enough to parse reflexive sentences in a linguistically preferred way. In summary, this paper proposed a change to the AM-algebra and the s-graphs underlying it, and showed the proposed definitions reduce to the original definitions when used under the original constraints. Most importantly, these changes enable the AM-algebra to parse reflexive sentences in a linguistically preferred way. \bibliographystyle{plainnat}
\section{Introduction} An AGN (active galactic nucleus)-starburst (SB) connection has been suggested based on correlation analyses between the luminosity of galactic nuclei and that of circum-nuclear warm dust heated by active star formation (SF) \cite{Smith+1998,Wild+2010,Fabian2012}. The large amount of energy released at the AGN produces various types of energetic outflows, such as jets, bubbles and winds expanding into the galactic disc and halo \cite{King+2015}, which interact with the circum-nuclear disc (CND) and torus as well as with gas clouds, and enhance star formation and starbursts. This is categorized as AGN-to-SB feedback, which is the first subject of this paper. On the other hand, the surrounding gas disc plays a role in fueling gas to the nucleus by overcoming the barrier imposed by the conservation of angular momentum through (i) bar-dynamical \cite{Shlosman+1989}, (ii) magnetic-braking \cite{Krolik+1990}, and (iii) radiation-drag \cite{Umemura+1997,Thompson+2005} accretion mechanisms. These may be categorized as disk-to-AGN fueling. However, the feedback of (iv) star formation and/or starburst itself on the AGN has not been thoroughly investigated; this is the second subject of this paper. In our Galactic Centre, various expanding and out-flowing phenomena have been observed, indicating that the Milky Way has experienced AGN phases in the past with various energies and time scales. Expanding phenomena are evidenced, for example, by multiple thermal shells of radii $\sim 10$ pc around Sgr A with a required energy of $\sim 10^{51}$ ergs in the last $\sim 10^5$ y \cite{Sofue2003}, the 200-pc expanding molecular cylinder of $\sim 10^{54}$ ergs \cite{Kaifu+1972,Scoville1972,Sofue2017}, the GC radio lobe of $\sim 200$ pc at $\sim 10^{54}$ ergs \cite{Sofue+1984,Heywood+2019}, and giant shells/bubbles in the halo seen in radio, X-rays and $\gamma$-rays with radii from several to $\sim 10$ kpc and $\sim 10^{55}$ ergs in the last $10^6$ to $10^7$ y \cite{Sofue1980,Sofue2000,Sofue+2016,Su+2010,Crocker2012,Kataoka+2018}. Numerous non-thermal filaments in radio continuum emission \cite{YZ+2004,LaRosa+2005,Lang+1999} may also indicate continuous magneto-hydrodynamic (MHD) waves excited by the activity in Sgr A \cite{Sofue2020a}. The Galactic nucleus is surrounded by the central molecular zone (CMZ) embedding active star-forming regions such as Sgr B and C \cite{Morris+1996,Oka+1998,Oka+2012,Tsuboi+2015}. A high excess in the number of supernova remnants (SNRs) in the GC direction indicates a high SF rate in the CMZ \cite{Gray1994}. Star formation in the CMZ has often been discussed in relation to the non-linear response of the rotating disc gas to the barred potential \cite{Krumholz+2015} and to cloud-cloud collisions \cite{Hasegawa+1994,Tsuboi+2015}. Feedback of the nuclear activity \cite{Zu2015,Zu+2013,Zu+2017,Hsieh+2016} would also trigger SF in the CMZ. The SF activity would in turn disturb the surrounding medium through supernova explosions and stellar winds \cite{Martinpintado+1999}. Radio continuum blobs and filaments, a mixture of thermal and non-thermal emission composing a radio-bright zone (RBZ), suggest high-energy feedback from the SF regions to the CMZ and the surrounding medium \cite{Zhao+2016,Yusef+2019}. The disturbances thus produced will propagate through the RBZ, and further reach and affect the nuclear region around Sgr A. However, such feedback from the SF regions onto Sgr A has not been thoroughly investigated.
In this paper, we investigate the propagation of MHD waves in the Galactic Center by solving the Eikonal equations for low-amplitude fast-mode MHD waves in a magnetized medium, using the method described in section \ref{secmethod}. In section \ref{secsgrA}, MHD disturbances induced by the activity in Sgr A are shown to converge on the CMZ and the molecular clouds therein, compressing the gas and triggering SF; this may offer insight into an efficient, and hence energetically economical, feedback in the AGN-SB connection. In section \ref{secsgrB} we trace the waves emitted from the SF region, and show that they converge onto the nucleus at high efficiency, which will trigger AGN activity in Sgr A. In section \ref{secdiscussion} we discuss the implications of the results. Throughout the paper, the term ``Sgr A'' will be used to denote the complex around the Galactic nucleus, including Sgr A$^*$ (the AGN of the Milky Way) and the associated molecular and radio sources. Similarly, ``Sgr B'' denotes the SF region and molecular complex associated with the radio sources Sgr B1 and B2, which is embedded in the CMZ and is supposed to be a starburst site. \section{Method} \label{secmethod} \subsection{Basic equations} Disturbances excited by an explosive event in the interstellar medium propagate as a spherical shock wave in the initial phase. In the fully expanded phase, they propagate as sound, \Alf, and fast-mode MHD waves. Among these modes, the sound wave is much slower than the other two under usual ISM conditions. The \Alf wave transports energy along the field lines but does not compress the field, so that it is not effective in compressing the gas to trigger star formation. The fast-mode MHD wave (hereafter, MHD wave) propagates across the magnetic field lines at the \Alf velocity and compresses the local field as well as the gas, which would act to trigger star formation \cite{Sofue2020a}. The basic equations of motion, or the Eikonal equations, used to trace the fast-mode MHD waves were obtained in order to study the Moreton waves in the solar corona \red{under the condition that the \Alf velocity is sufficiently higher than the sound velocity, i.e., in the so-called low-$\beta$ condition} \cite{Uchida1970,Uchida1974}. The method has been applied to MHD wave propagation in the Galactic center, supernova remnants, and star-forming regions \cite{Sofue1978,Sofue1980,Sofue2020a,Sofue2020b}. Given the distribution of the \Alf velocity and the initial directions of the wave vectors, the equations can be numerically integrated to trace the ray path as a function of time. The equations are shown in the Appendix as reproduced from the above papers. Besides the case of a galactic disc at rest, we also examined a case in which the disc rotates at a constant velocity. In that case, the azimuthal momentum of each wave packet, $p_\phi$ in the Eikonal equations, was modified by adding the angular velocity of the rotation around the $z$ axis. Numerical integration of the differential equations was performed with the first-order Runge-Kutta-Gill (RKG) method with sufficiently small time steps. The validity was confirmed by recomputing some results with the second-order RKG method as well as by changing the time steps. Because the adopted \Alf velocity distributions have simple functional forms without singularities, the first-order method was sufficiently accurate and faster than the second-order method.
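The Eikonal equations themselves are reproduced in the Appendix; here we only illustrate the integration scheme. The Python sketch below is a simplified stand-in, not the code used in the paper: it traces a ray with first-order (Euler) steps in the isotropic limit, where the fast-mode speed is approximated by the local \Alf velocity $V(\vec r)$ and the ray equations follow from the Hamiltonian $\omega=V(\vec r)\,|\vec k|$; the step size and the example $V$ field are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Minimal first-order ray tracer in the isotropic limit (fast-mode speed
# approximated by the local Alfven velocity V).  From w = V(r)|k|:
#   dr/dt = V k/|k|,   dk/dt = -|k| grad V.
# This is a simplified stand-in for the RKG integration of the full
# Eikonal equations given in the Appendix.
def grad(V, r, eps=1e-4):
    g = np.zeros(3)
    for i in range(3):
        dr = np.zeros(3); dr[i] = eps
        g[i] = (V(r + dr) - V(r - dr)) / (2 * eps)
    return g

def trace_ray(V, r0, n0, dt=1e-3, n_steps=20000):
    """Trace one wave packet launched at r0 along the direction n0."""
    r = np.asarray(r0, float)
    k = np.asarray(n0, float) / np.linalg.norm(n0)
    path = [r.copy()]
    for _ in range(n_steps):
        kk = np.linalg.norm(k)
        r = r + dt * V(r) * k / kk
        k = k - dt * kk * grad(V, r)
        path.append(r.copy())
    return np.array(path)

# Example: Alfven speed rising away from a sech-type gas disc with
# constant B, i.e. V ~ 1/sqrt(rho) with rho = sech(z/h); scale units as
# in Table (tab_unit).
V_disc = lambda r: 1.0 / np.sqrt(1.0 / np.cosh(r[2] / 1.0))
path = trace_ray(V_disc, r0=[0.1, 0.0, 0.0], n0=[1.0, 0.0, 0.5])
\end{verbatim}
In this toy field, rays launched out of the plane are refracted back toward the disc, which is the behaviour exploited in section \ref{secsgrA}.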
While the computations were performed in spherical coordinates, the results will be presented in Cartesian coordinates $(x,y,z)$, where $x$ and $y$ are coordinates in the galactic plane, $z$ is the axis perpendicular to the disc with $(0,0,0)$ denoting the nucleus, and $\varpi=(x^2+y^2)^{1/2}$ is the distance from the $z$ axis. \subsection{Gas distribution} Extensive molecular-line, radio and X-ray observations have revealed that the gas density distribution can be described by a disk-like CMZ surrounded by a molecular ring \cite{Morris+1996,Oka+1998,Sofue1995,Sofue2017}, a warm gas disk \cite{OkaTake+2019}, and extended hot gas \cite{Koyama2018}. We represent the gas distribution by a superposition of several components, \be \rho=\Sigma_i \rho_i, \ee where the individual components are given as follows. The main disc is represented by \begin{equation} \rho_{\rm disk}= \rho_{\rm disk,0} \sech \left(\frac{z}{h}\right) e^{-(\varpi/\varpi_{\rm disk})^2}. \end{equation} We adopt $\rho_{\rm disk,0}=1.0$ and $\varpi_{\rm disk}\sim 10$ in the scale units listed in table \ref{tab_unit} (described later). A molecular ring representing the main body of the CMZ is given by \be \rho_{\rm ring}= \rho_{\rm 0,ring} e^{-((\varpi-\varpi_{\rm ring})^2+z^2)/w_{\rm ring}^2}, \ee where $\varpi_{\rm ring}=5$ and the half ring width is $w_{\rm ring}\sim0.5-1$. Gas clouds are assumed to have Gaussian density distributions, \be \rho_{\rm cloud,i}=\rho_{0, i} e^{-(s_i/a_i)^2}, \ee where $s_i^2=(x-x_i)^2+(y-y_i)^2+(z-z_i)^2$, and $\rho_{0,i}$ and $a_i$ are the centre density and scale radius of the $i$-th cloud or gaseous core centered on $(x_i,y_i,z_i)$. The whole system is assumed to be embedded in a halo of \be \rho_{\rm halo} =0.01. \ee As a nominal set, we take $\rho_{0,i}=100$, $a_i=1$, $\rho_{\rm disk, 0}=1$, $h=1$. \subsection{Magnetic fields} Non-thermal radio emission in the GC is more extended than the molecular gas disc and clouds, indicating that the magnetic pressure distribution is smoother than the gas distribution and that the field strength is on the order of $0.1-1$ mG. There may be two major components. One is the large-scale vertical/poloidal field penetrating the galactic disc with roughly constant strength of $\sim 0.1-1$ mG \cite{YZ+1987,Tsuboi+1986,Sofue+1987}, and the other is a ring field of radius $\sim 100-200$ pc, whose strength is $\sim 0.01-0.1$ mG \cite{Nishiyama+2010}. \red{The strong magnetic field in the GC of 0.1 to 1 mG may be explained by a primordial-origin model, in which the primordial magnetic field was gathered in the GC during the proto-Galactic accretion to form a strong vertical field \cite{Sofue+2010}. The field strength is amplified to a value at which the magnetic energy density balances the kinetic energy density of the disc gas in galactic rotation at $\sim 200$ \kms in the deep gravitational potential of the GC. } In the galactic disc in the solar vicinity, Zeeman-effect observations of local molecular clouds indicate that the magnetic field strength is roughly constant at several $\mu$G through molecular clouds with densities less than $\sim 10^4$ H cm$^{-3}$, except for high-density cores \cite{Crutcher+2010}, and the \Alf velocity decreases with the gas density \cite{Sofue2020b}. We here assume such a general property also for the magnetized clouds in the GC. In our simulation, we first examine simple cases assuming a constant magnetic field $B=B_{\rm halo}=1$ in order to show typical behaviors of wave propagation.
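For reference, the density model above can be written compactly. The following Python sketch is our own transcription of the components just defined, using the nominal parameter set in the scale units of table \ref{tab_unit}; the ring amplitude of 10 anticipates the value adopted in section \ref{secsgrA}, and the single cloud in the list is a placeholder to be set per experiment.
\begin{verbatim}
import numpy as np

# rho(x, y, z) assembled from the disc, ring, cloud and halo components
# defined above (nominal parameters; scale units of Table tab_unit).
RHO_DISK0, H, W_DISK = 1.0, 1.0, 10.0
RHO_RING0, R_RING, W_RING = 10.0, 5.0, 1.0
RHO_HALO = 0.01
CLOUDS = [((5.0, 0.0, 0.0), 100.0, 1.0)]   # (centre, rho_0i, a_i), placeholder

def rho(x, y, z):
    w = np.hypot(x, y)                     # distance from the z axis
    disk = RHO_DISK0 / np.cosh(z / H) * np.exp(-(w / W_DISK)**2)
    ring = RHO_RING0 * np.exp(-((w - R_RING)**2 + z**2) / W_RING**2)
    clouds = sum(r0 * np.exp(-((x - cx)**2 + (y - cy)**2 + (z - cz)**2) / a**2)
                 for (cx, cy, cz), r0, a in CLOUDS)
    return disk + ring + clouds + RHO_HALO
\end{verbatim}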
Then, we adopt more realistic magnetic fields, in which the fields are loosely coupled with the gas distribution such that the magnetic pressure varies with scale radii twice those of the gas distribution, namely \be B_{\rm disk}^2= B_{\rm disk,0}^2 \sech \left(\frac{z}{2h}\right) e^{-(\varpi/(2\varpi_{\rm disk}))^2} \ee in the disc, and \begin{equation} B_i^2=B_{i,0}^2 e^{-(s_i/(2a_i))^2} \end{equation} in the ring and clouds, where $s_i$ is the distance from the centre of the $i$-th cloud or from the ring's azimuthal axis. We further examine a case in which the disc is penetrated by a vertical cylinder of magnetic flux. In our recent paper \cite{Sofue2020b} we modeled the vertical radio continuum threads \cite{Heywood+2019} as remnants of nuclear activity. Thereby, we assumed a large-scale vertical magnetic cylinder penetrating the disc, following the primordial-origin model of the Galactic magnetic field \cite{Sofue+2010}. It was shown that the MHD waves are confined inside the magnetic cylinder, forming vertically stretched filaments, which well reproduced the observed radio threads. The magnetic cylinder is represented by \be B_{\rm cyl}=5 \ e^{-(\varpi-\varpi_{\rm cyl})^2/w_{\rm cyl}^2}, \ee where $\varpi_{\rm cyl}=3$ and $w_{\rm cyl}=1$. \subsection{\Alf velocity} The \Alf velocity is given by \be \Va=\sqrt{\Sigma_i B_i^2/{4\pi \rho}}, \ee where $i$ runs over the components of the interstellar medium in the GC. \red{It is estimated to be $\Va \sim 70-700$ \kms in the GC disc with $B\sim 0.1-1$ mG and $\rho\sim 10$ H cm$^{-3}$. On the other hand, the sound velocity in the disc is $\cs \sim 1$ \kms for HI gas at a temperature of $T\sim 100$ K and $\sim 0.3$ \kms for molecular gas at $\sim 20$ K. In the molecular ring with $B\sim 0.1$ mG, $\rho\sim 100$ H cm$^{-3}$ and $T\sim 20$ K, we have $\Va \sim 20$ \kms and $\cs \sim 0.3$ \kms. In dense molecular clouds with $B\sim 0.1$ mG, $\rho\sim 10^4$ H cm$^{-3}$ and 10 K, we have $\Va \sim 2$ \kms and $\cs \sim 0.3$ \kms. The \Alf velocity in the halo increases rapidly above/below the gaseous disc, whose scale height is much smaller than that of the magnetic field, so that $\Va$ greatly exceeds $\sim 100$ \kms, the sound velocity of the X-ray halo at $\sim 10^6$ K. Thus, we may safely assume that the \Alf velocity is sufficiently faster than the sound velocity in the circumstances under consideration, which is the condition for the Eikonal method used in this paper. } Besides such strong global fields, we may also consider other possible models as follows. If the magnetic field and the interstellar gas are in local energy-density (pressure) equipartition, $B^2 \propto \rho \sigma_v^2$, the \Alf velocity is nearly equal to the turbulent velocity $\sigma_v$ of the gas, which is usually almost constant at $\sim 5- 10$ \kms. In such a case of constant \Alf velocity, the MHD waves propagate nearly straight without suffering deflection. On the other hand, if the magnetic field is frozen into the gas, the field strength is proportional to $\rho^{2/3}$, or $\Va \propto \rho^{1/6}$. This means that the \Alf velocity increases toward the cloud centre, and the cloud works to diverge the waves rather than to converge them. Such a case is indeed observed in high-density molecular cores with density $\sim 10^5$ H cm$^{-3}$ \cite{Crutcher+2010}.
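As a cross-check of the representative \Alf velocities quoted above, the scaling $\Va=B/\sqrt{4\pi\rho}$ can be evaluated in physical units; in the short sketch below (our own, with the mean particle mass taken as one hydrogen mass for simplicity) the three parameter sets correspond to the disc, the molecular ring and a dense cloud.
\begin{verbatim}
import numpy as np

# V_A = B / sqrt(4 pi rho) in physical units; the mean particle mass is
# taken as one hydrogen mass for simplicity.
M_H = 1.6726e-24                               # g
def v_alfven_kms(B_mG, n_H):
    """B in mG, n_H in H cm^-3; returns V_A in km/s."""
    return (B_mG * 1e-3) / np.sqrt(4.0 * np.pi * n_H * M_H) / 1.0e5

for label, B, n in [("disc", 0.1, 10.0),
                    ("molecular ring", 0.1, 100.0),
                    ("dense cloud", 0.1, 1.0e4)]:
    print(label, round(v_alfven_kms(B, n), 1), "km/s")
# -> roughly 70, 20 and 2 km/s for B = 0.1 mG, consistent with the
#    estimates quoted in the text.
\end{verbatim}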
\red{These alternative scalings may apply to individual clouds; they would not largely change the global wave propagation in the GC, and will be considered as a cause of local fluctuations as discussed in section \ref{clumpiness}.} \begin{table} \begin{center} \caption{Units used in the calculation} \label{tab_unit} \begin{tabular}{ll} \hline Density, $\rhou=\rho_{\rm disk, 0}$ & 100 H cm$^{-3}$ \\ Magnetic field, $\Bu$ & 1 mG \\ Velocity, $\Vu =\Bu/\sqrt{4\pi \rhou}$ & 220 \kms \\ Length, $A$& 40 pc\\ Time, $\tu=A/\Vu$ & 0.18 My \\ \hline \end{tabular} \end{center} \end{table} \subsection{Units and normalization} The physical quantities are obtained by multiplying the non-dimensional quantities in the equations by the units listed in table \ref{tab_unit}: length by $A$, time by $\tu$, and velocity by $\Vu =\Bu/\sqrt{4\pi \rhou}=219$ \kms, following our recent analysis of the GC threads \cite{Sofue2020b}. Typical \Alf velocities are $\Va \sim 20$ \kms in molecular clouds and $\sim 200$ \kms in the galactic disc. \subsection{Dissipation rate} The dissipation rate $\gamma$ of a small-amplitude MHD wave, defined through the amplitude $\propto \exp(-\gamma L)$ \cite{Landau+1960}, is expressed as \be \gamma=\frac{\omega^2}{2V^3} \(\frac{\nu}{\rho} + \frac{c^2}{4\pi\sigma_e}\), \ee where $\omega$ is the frequency, $\nu\sim 10^{-4}$ g cm$^{-1}$ s$^{-1}$ is the viscosity of hydrogen gas, $\sigma_e$ the electric conductivity, $c$ the light velocity, $\lambda$ the wavelength, and $L$ the distance along the ray path. The first term is due to dissipation by viscous energy loss, and the second term is due to Ohmic loss, which is negligible compared with the first. The dissipation length $L$ is then estimated by \begin{eqnarray} \(\frac{L}{{\rm kpc}}\)=\gamma^{-1} \sim 0.6 \(\frac{B}{{\rm \mu G}}\) \(\frac{\rho}{ {\rm H \ cm^{-3}} } \)^{1/2}\nonumber \\ \times \(\frac{\nu}{{\rm 10^{-4}\ g \ cm^{-1} \ s^{-1}}}\)^{-1} \(\frac{\lambda}{{\rm pc}}\), \label{dissipation} \end{eqnarray} which amounts to several kpc in the region under consideration, so that dissipation is negligible in the GC region. \subsection{Assumptions and limitations of the method} The method assumes that the amplitude of the MHD waves is small, the Eikonal equations being derived in the linear-wave approximation. Hence, the non-linear compression of the interstellar medium and the actual density and magnetic field in the waves cannot be calculated in this paper. Therefore, we discuss only the possibility of compressing the gas, and hence of enhancing or triggering star formation and nuclear activity, on the basis of the relative amplification of the wave amplitude due to the geometrical effect of spherical implosion of the wave front onto the focal point. Although the direction of propagation does not depend on the magnetic field direction, the compression by the fast-mode MHD wave occurs in the direction perpendicular to the magnetic lines of force, with an amplitude proportional to $\sin i$, where $i$ is the angle between the propagation direction and the magnetic field. This implies that the gas compression is not effective when the wave direction is parallel to the field ($i \sim 0$), whereas it attains its maximum at perpendicular propagation ($i \sim 90\deg$). In the GC, the global field direction is observed to be perpendicular to the galactic plane in the inter-cloud and out-of-plane regions \cite{Heywood+2019}, while it is nearly parallel to the molecular ring occupying most of the CMZ \cite{Nishiyama+2010}.
Therefore, the waves expanding from Sgr A toward the molecular ring and those from the CMZ toward the nucleus propagate almost perpendicularly to the global magnetic fields, i.e., at maximum or near-maximum compression efficiency. Magnetic structures in the molecular clouds and in the nuclear region are not well observed, so we here assume that they are random. In such a case, the field directions can be assumed to be statistically oblique at the most probable angle of $i\sim 60\deg$ (the angle dividing a sphere into two equal areas about the polar axis), so that the mean compression efficiency is $\sim \sin 60\deg \simeq 0.87$. Thus, we may consider that, to first approximation, the fast-mode MHD waves propagate at high compression efficiency almost everywhere in the GC. \section{Sgr A to B: AGN to CMZ} \label{secsgrA} \begin{figure} \begin{center} \includegraphics[width=7cm]{fig1.eps} \end{center} \caption{Ray paths of MHD waves from the galactic centre through a sech disc in the $(x,z)$ plane.} \label{rays_disk} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=16cm]{fig2.eps} \end{center} \caption{(Left) \Alf velocity distribution in the $(x,z)$ plane. Dark: $\log V/V_{\rm unit}=-1$; white: $+1$. (Middle) MHD wave ray paths. (Right) Enlargement near the focus (ring).} \label{Valf} \label{rays_ring} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=7cm]{fig3.eps} \end{center} \caption{MHD wave amplitude.} \label{flux} \end{figure} \begin{figure*} \begin{center} {\bf [Sgr A $\Rightarrow$ Disc and ring]} \\ \includegraphics[width=13cm]{fig4.eps} \end{center} \caption{(a) MHD waves from the nucleus propagating through the sech gas disc of scale height $h$ with a constant magnetic field. (b) Same, but converging on a gaseous ring of radius $r=5$ and width $r_{\rm ring}=0.5$, with parameters $\rho_{\rm disk}=1/\cosh(z/1.)$, $\rho_{\rm ring}=100 \exp(-((r-r_{\rm ring})^2+z^2)/r_{\rm ring}^2)$ and $\rho=\rho_{\rm disk}+\rho_{\rm ring}$. Each dot represents an MHD wave packet whose propagation is traced by solving the Eikonal equations. Initially, about a thousand packets are put at random on a small sphere at the center with outward radial vectors. The ensemble of packets is plotted at a constant time interval, here every 2 units of time at $t=2$, 4, 6, .... The front expands spherically in the initial phase, is elongated in the direction perpendicular to the disc, and is reflected/refracted by the disc to focus on a focal ring (a). If there is a molecular ring of radius $\varpi=5$, the waves focus more efficiently on the ring (b). } \label{p-gc-ring} \end{figure*} \begin{figure*} \begin{center} {\bf [Sgr A $\Rightarrow$ Sgr B, etc.]}\\ \includegraphics[width=12cm]{fig5.eps} \end{center} \caption{(a) MHD waves from the nucleus converging onto three clouds, projected on the $xy$ plane. (b) Same, but projected on a plane perpendicular to the line of sight at 30$\deg$ from the plane. } \label{p-AtoB} \begin{center} {\bf [Sgr A $\Rightarrow$ Sgr B, etc. in rotation]}\\ \includegraphics[width=12cm]{fig6.eps} \end{center} \caption{Same as figure \ref{p-AtoB}, but with the disc rotating as in figure \ref{Vrot}. } \label{p-AtoB-rot} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8cm]{fig7.eps} \end{center} \caption{Rotation velocity of the disc and clouds.} \label{Vrot} \end{figure} \begin{figure*} \begin{center} {\bf [Sgr A $\Rightarrow$ Sgr B, etc.
in rotation through vertical magnetic cylinder]}\\ \includegraphics[width=16cm]{fig8.eps} \end{center} \caption{Same as figure \ref{p-AtoB-rot}, but a vertical magnetic cylinder of radius $\varpi_{\rm cyl}=3$ is present. MHD waves emitted at Sgr A are reflected and trapped inside the magnetic cylinder, while a significant fraction penetrates through it and converges onto the molecular clouds such as Sgr B. The right panel enlarges the central region to show that the reflected waves converge back onto Sgr A, indicating self-feedback. Note the vacant area in the magnetic cylinder, from which the waves are excluded.} \label{cylMag-AB} \end{figure*} \subsection{Propagation through the disc} We examine the effect of disturbances produced by explosive activity in the nucleus (Sgr A), which propagate and converge onto the CMZ and molecular clouds. We first present a simple case of propagation through a plane-parallel layer with a sech-type density profile, where the gas density has the form \be \rho_{\rm disk}=\sech (z/h), \ee with $h=1$. Figure \ref{rays_disk} shows the calculated ray paths in the $(x,z)$ plane. The MHD wave front expands spherically in the initial phase, and is reflected and refracted by the disc due to the rapid increase of the \Alf velocity with height. The waves then converge onto a circle in the plane (a focal ring) with radius $\varpi \sim 4.4 h$. After focusing, the waves propagate further outward and focus again at $r\sim 9h$, making a second focal ring. By such repetitive reflection and focusing, the waves are confined within the disc at high efficiency, so that the released energy is transported outward, repeating circular and periodic convergence on the focal rings at radial intervals of $\sim 4.4h$. \subsection{Convergence onto gaseous ring} We next examine a case with a molecular gas ring of radius $\varpi_{\rm ring}=5$ around the nucleus, embedded in the sech disc. We assume that the magnetic field is constant at $B=1$, and the gas distribution is given by \be \rho=\rho_{\rm disk}+\rho_{\rm ring} + \rho_{\rm halo}, \ee where \be \rho_{\rm ring}=10 \ e^{-((\varpi-\varpi_{\rm ring})^2 +z^2)/w_{\rm ring}^2}, \ee with $\varpi_{\rm ring}=5$ and $w_{\rm ring}=1$. Figure \ref{Valf} shows the distribution of the \Alf velocity in the $(x,z)$ plane, where the ring has the minimum \Alf velocity. The ray paths of waves propagating in such an \Alf velocity distribution are shown in the middle panel of figure \ref{rays_ring
}, which indicates that the waves are strongly focused on the ring. The right panel enlarges the focal region, where the rays sharply focus on a circle of radius $\varpi=5.2$, slightly outside the gas ring's center at $x=5$. Figure \ref{flux} shows the relative density of the wave packets, representing the wave flux, as calculated by $f=(z_0/z)^2$ for the rays near the galactic plane, where $z_0$ is the initial radius of the MHD wave front. This figure qualitatively represents the variation of the wave flux as a function of the distance from the nucleus. In the figure, the wave flux is amplified at the focal ring by a factor of $\sim 10^3$, reaching almost the same flux as that in the initial sphere at the nucleus. In principle, however, because the rays of the wave packets cross each other at the focus, where the area of the wave front becomes infinitesimally small ($z\simeq 0$), the amplification factor would reach infinity by the geometrical effect. In order to visualize the behavior of the wave front during the propagation, we adopt a dot-plotting method. At the initial epoch, $t=0$, a number of wave packets, here about a thousand, are distributed at the origin with random radial vectors. The propagation of each wave packet is traced by the Eikonal equations, and the packets are displayed by dots projected on the sky at a given time interval. Figure \ref{p-gc-ring} shows the thus-calculated wave packets projected on a plane tilted by 30$\deg$ from the $(x,z)$ plane. Each group of dots represents the wave front at a certain epoch. Here, the front is displayed every 2 time units, or at $t=2$, 4, 6, ..., 20. The left panel shows a case for the sech disc (same as figure \ref{rays_disk}), where the wave front expands, is reflected to converge on a focal ring at $r\sim 4$, expands again, and focuses on the outer focal ring at $r\sim 9$. The right panel shows a front expanding into the disc superposed with a dense gaseous ring of radius $r=5$ and width $w=1$. The waves behave in the same way as in the left panel inside the ring, but converge more strongly onto the gas ring, and are trapped there for a while. \subsection{Convergence onto clouds} It is more likely that the CMZ is clumpy, being composed of molecular clouds. We examine a case in which three molecular clouds with different sizes and densities are present at the same distance as the above ring, embedded in the central disc. We assume that the magnetic field is constant at $B=1$, and the gas distribution is given by \be \rho=\rho_{\rm disk}+\Sigma_i \rho_{\rm cloud, i} + \rho_{\rm halo}, \ee where \be \rho_{\rm cloud,i}=100\ e^{-((x-x_i)^2+(y-y_i)^2+z^2)/w_{\rm cloud,i}^2}, \ee with $(x_i,y_i,w_{\rm cloud,i})=(5,0,1)$, $(-\sqrt{5},\sqrt{5},0.5)$, and $(-\sqrt{5},-\sqrt{5},0.5)$. Figure \ref{p-AtoB} shows the result of MHD wave convergence onto such clouds. Because of the spherical convergence onto each cloud, the amplification of the wave flux is much stronger than for convergence onto a ring. A fraction as high as $\sim 20$\% of the total wave front released from the nucleus is trapped by the largest cloud and focuses onto its center. Figure \ref{p-AtoB-rot} shows a case in which the disc and clouds are rotating with nearly constant velocity, as indicated in figure \ref{Vrot}. The convergence of waves onto the clouds is essentially the same, but the focusing waves are deformed according to the differential rotation.
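For illustration, the wave-packet (ray) integration used above can be sketched in a few lines of Python, as in the following listing. The listing is illustrative only and is not the code used for the calculations in this paper: it assumes the standard isotropic Eikonal (geometrical-optics) ray equations with the fast-mode phase speed approximated by the local \Alf velocity, the non-dimensional units of table \ref{tab_unit} with $B=1$, a simple first-order integrator, and the three-cloud density model given above; the function names and step sizes are arbitrary choices.
\begin{verbatim}
import numpy as np

# Gas density (non-dimensional units of table 1): sech disc + three clouds.
def rho(x):
    disc = 1.0 / np.cosh(x[..., 2] / 1.0)            # sech(z/h), h = 1
    clouds = 0.0
    for xc, yc, w in [(5.0, 0.0, 1.0),
                      (-np.sqrt(5.0),  np.sqrt(5.0), 0.5),
                      (-np.sqrt(5.0), -np.sqrt(5.0), 0.5)]:
        r2 = (x[..., 0] - xc)**2 + (x[..., 1] - yc)**2 + x[..., 2]**2
        clouds = clouds + 100.0 * np.exp(-r2 / w**2)
    return disc + clouds + 0.01                       # + tenuous halo

# Fast-mode (~Alfven) speed for constant B = 1 in non-dimensional units.
def v_alf(x):
    return 1.0 / np.sqrt(rho(x))

# Central-difference gradient of the Alfven speed.
def grad_v(x, eps=1.0e-4):
    g = np.zeros_like(x)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = eps
        g[..., i] = (v_alf(x + dx) - v_alf(x - dx)) / (2.0 * eps)
    return g

# One Eikonal (ray) step:  dx/dt = V k/|k|,  dk/dt = -|k| grad V.
def step(x, k, dt):
    kn = np.linalg.norm(k, axis=-1, keepdims=True)
    return (x + dt * v_alf(x)[..., None] * k / kn,
            k - dt * kn * grad_v(x))

# About a thousand packets launched isotropically from a small sphere
# at the origin (the nucleus), then advanced in time.
rng = np.random.default_rng(1)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=-1, keepdims=True)
x, k = 0.1 * n, n.copy()
for _ in range(2000):        # dt = 0.01; plot snapshots of x every 200 steps
    x, k = step(x, k, 0.01)
\end{verbatim}
In such a sketch, reflection by the disc and convergence onto the dense clouds arise solely from the refraction term $-|k|\nabla V$, i.e. the rays bend toward regions of low \Alf velocity.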
\subsection{Effect of vertical magnetic cylinder} Figure \ref{cylMag-AB} shows the result for waves emitted from the nucleus (Sgr A) that propagate through a disc and a vertical magnetic cylinder. The waves are reflected by the inner wall of the cylinder, and a fraction is trapped inside the wall and disc. However, the remaining fraction penetrates the wall, propagates through the disc, and further converges onto the ring and molecular clouds. The magnetic cylinder, therefore, reflects the waves and confines some fraction inside the cylinder, while some fraction penetrates through the wall and converges onto the target clouds. Although the magnetic cylinder somewhat suppresses the efficiency of feedback from the wave source to the targets, the characteristic behavior of the waves is essentially the same as in the case without a magnetic cylinder. It may be noted that an almost vacant region of waves appears around the cylinder's radius, where the waves propagate faster than in the surrounding region because of the higher \Alf velocity. This suggests that the region between Sgr A and the molecular ring in the CMZ is relatively quiet in terms of interstellar disturbances. \section{Sgr B to A: Starburst to AGN} \label{secsgrB} \subsection{Ring to center} In dense molecular clouds in the CMZ, star formation is triggered by the effective compression by the focusing MHD waves, which would lead to active SF and may cause a SB if the compression is strong enough. Once active SF occurs, the subsequent explosive phenomena such as supernova (SN) explosions, stellar winds, and expanding HII shells will produce various types of disturbances, which propagate through the CMZ and the galactic disc. We assume a constant magnetic field with $B=1$ and a gas density distribution \be \rho=\rho_{\rm disk}+\rho_{\rm nuc} +\rho_{\rm halo}, \ee with $\rho_{\rm nuc}=100\ e^{-(r/r_{\rm nuc})^2},$ and $r_{\rm nuc}=1$. Figure \ref{ring-center} shows the propagation of MHD waves produced in a ring surrounding the GC, where the waves start from the ring at $r=5$ and propagate through the disc with a plane-parallel density distribution $\propto \sech(z/1.0)$. Black and red lines indicate ray paths starting from two points on the ring at $X=-5$ and 5, respectively. The waves expand from the ring and are reflected by the sech disc. About half of the rays focus on the Galactic Center at $X=0$. The rays from every azimuthal position on the whole ring focus on the central point at high efficiency. After passing the center, the rays propagate further through the disc, focus again on the opposite side of the ring, and then propagate further outward. \begin{figure} \begin{center} \includegraphics[width=7cm]{fig9.eps} \end{center} \caption{Paths of MHD waves excited in a ring of radius 5 focusing onto the nucleus.} \label{ring-center} \end{figure} \subsection{Sgr B to A} Although circum-nuclear SB is known to occur in a ring or donut region surrounding the nucleus, the SF regions and molecular gas distribution are more or less clumpy, as is in fact clearly observed in the CMZ and associated SF sites \cite{Oka+1998,Oka+2012}. In the CMZ, the most active SF is observed in the Sgr B complex, composed of HII regions and giant molecular clouds \cite{Hasegawa+1994}. Expanding shells of hot gas around Sgr B2 suggest strong winds or explosive events in the SF region \cite{Martinpintado+1999}.
Radio continuum observations of the Sgr B region have shown that the radio emission is a mixture of thermal and non-thermal emission, indicating that the region contains high-energy objects \cite{Jones+2011} (and references therein). X-ray observations also suggest active events, heating the surrounding gas to high temperature \cite{Koyama2018}. Thus, the SF activity in Sgr B will result in a variety of explosive and/or wind phenomena, including multiple supernova explosions. We trace the propagation of disturbances, simulated by a spherical MHD wave originating at a site remote from the nucleus at $(x,y,z)=(5,0,0)$, mimicking an explosive event in the Sgr B SF complex. The gas density distribution is assumed to have the form \be \rho=\sech \left(\frac{z}{h}\right) e^{ -\left( \frac{\varpi}{\varpi_{\rm disk}} \right)^2 } +100 e^{- \left( \frac{r}{r_{\rm nuc}} \right)^2 } + 0.01, \label{diskAB} \ee with $h=1$, $\varpi_{\rm disk}=10$ and $r_{\rm nuc}=1$, representing a sech disc of radius 10 and a nuclear high-density gas concentration of radius 1.0. Figures \ref{sb_gc}(a) and (b) show the wave front at $t=1$, 2, 4, 6, ..., and 20 for a non-rotating disc. The wave front expands spherically in the initial stage around the explosion site, mimicking a disturbance from a SB site such as Sgr B. As the shell expands, it is deformed by the sech disc into a cylindrical shape, and further converges onto the disc. A significant portion, $\sim 10$ percent, of the front facing the nucleus focuses onto the nucleus. Figures \ref{sb_gc_rot}(c) and (d) are the same, but the disc is rotating at a constant velocity as shown in figure \ref{Vrot}. The wave source (Sgr B) moves clockwise along a circle at $r=5$ and the front expands in the rotating disc. Approximately the same portion of the front facing the nucleus as in case (a) focuses onto the nuclear gas cloud, while the front shape is deformed by the differential rotation. When the wave approaches the nucleus, the front attains an almost spherical shape, focusing onto the center. Figure \ref{sb_multi_gc_rot} shows the same in a rotating disc, but with three wave sources at slightly different radii, $r=4$, 4.5 and 5, and different azimuthal angles. This figure demonstrates the effective convergence of the waves from multiple SF (SB) sites onto the nucleus at Sgr A. Figure \ref{sb_multi_ring_gc_rot} shows the same, but the emitting sources (SF regions) are located on the edge of a molecular ring, such as the observed 200-pc molecular ring \cite{Sofue1995}, embedded in the disc given by equation (\ref{diskAB}). The ring's density is represented by \be \rho_{\rm ring}=\rho_{\rm ring, 0} e^{-((\varpi-5)^2+z^2)/w_{\rm ring}^2}, \ee where the ring radius is 5, the half width is $w_{\rm ring}=1$, and $\rho_{\rm ring,0}=5$. A large fraction of the MHD waves is trapped in the ring, while a portion, about 10 percent, escapes from the ring and converges onto the nucleus. The waves inside the ring efficiently converge to the center, propagating through the disc in differential rotation, and focus onto the nucleus, resulting in spherical implosion. It may be stressed that the major fraction of the MHD waves is trapped inside the ring, staying there almost without dissipation (equation (\ref{dissipation})). The waves propagate along the ring, repeating oscillatory focusing at a wavelength of $\sim 2w_{\rm ring}$, in the same way as in a magnetized filament \cite{Sofue2020a}.
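The convergence fractions quoted above (e.g. $\sim 10$ percent of the front focusing onto the nucleus) can be book-kept from such a packet ensemble by counting the packets that ever enter a small sphere around Sgr A. The following listing is again illustrative only, not the analysis code of this paper; it assumes a packet-advancing routine like the \texttt{step} function of the previous listing, used together with a density model that includes the nuclear gas concentration $\rho_{\rm nuc}$.
\begin{verbatim}
import numpy as np

def focusing_fraction(trace_step, x0, k0, r_focus=0.5,
                      center=np.zeros(3), dt=0.01, n_steps=4000):
    """Fraction of wave packets that ever come within r_focus of `center`.

    `trace_step` advances (x, k) by one time step, e.g. the `step` routine
    of the previous sketch.  A packet is flagged once it enters the focal
    sphere; subsequent trapping or escape is ignored.
    """
    x, k = x0.copy(), k0.copy()
    hit = np.zeros(len(x0), dtype=bool)
    for _ in range(n_steps):
        x, k = trace_step(x, k, dt)
        hit |= np.linalg.norm(x - center, axis=-1) < r_focus
    return hit.mean()

# Packets launched isotropically from (5, 0, 0), mimicking an explosive
# event in Sgr B; the focal sphere is centred on the nucleus (Sgr A).
rng = np.random.default_rng(2)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=-1, keepdims=True)
x0 = np.array([5.0, 0.0, 0.0]) + 0.1 * n
# frac = focusing_fraction(step, x0, n)  # `step` from the previous sketch
\end{verbatim}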
\begin{figure*} \begin{center} {\bf [Sgr B $\Rightarrow$ Sgr A]}\\ \includegraphics[width=16cm]{fig10.eps} \end{center} \caption{(Left) Projection on the $(x,y)$ plane of the SB-induced MHD wave from Sgr B (circle) focusing on the nucleus at Sgr A (cross). (Middle) Same, but seen from an altitude of 30$\deg$. (Right) Same as the left panel, but enlarged near Sgr A.} \label{sb_gc} \begin{center} {\bf [Sgr B in rotation $\Rightarrow$ Sgr A]}\\ \includegraphics[width=16cm]{fig11.eps} \end{center} \caption{Same as figure \ref{sb_gc}, but in a rotating disc with circular velocity as shown in figure \ref{Vrot}. } \label{sb_gc_rot} \begin{center} {\bf [Sgr B, etc. in rotation $\Rightarrow$ Sgr A]}\\ \includegraphics[width=16cm]{fig12.eps} \end{center} \caption{Same as figure \ref{sb_gc_rot}, but the MHD waves are emitted from three sources at $r=4, 4.5$ and 5.5, mimicking Sgr B, C, etc. } \label{sb_multi_gc_rot} \end{figure*} \begin{figure*} \begin{center} {\bf [Sgr B etc. on a ring in rotation $\Rightarrow$ Sgr A]}\\ \includegraphics[width=16cm]{fig13.eps} \end{center} \caption{Same as figure \ref{sb_multi_gc_rot}, but a gas ring of radius 5 and half width 0.5 is added. Waves are either trapped in the ring, or escape and focus on the nucleus (Sgr A). } \label{sb_multi_ring_gc_rot} \end{figure*} \subsection{Feedback through a magnetic cylinder} Figure \ref{cylMag} shows the result for MHD waves emitted from three SF regions near the molecular ring at $\varpi=5$ in the presence of a magnetic cylinder of radius 3. A fraction of the waves is reflected by the magnetic wall, propagates backward, and is trapped (absorbed) in the molecular ring. Another fraction penetrates through the magnetic cylinder and converges onto the nucleus. Thus, the magnetic cylinder somewhat suppresses the efficiency of feedback from Sgr B to Sgr A, although the essential behavior is about the same as without the magnetic cylinder. Again, a vacant region appears coincident with the cylinder's radius due to the faster \Alf velocity inside. \begin{figure*} \begin{center} {\bf [Sgr B, etc. on a ring in rotation $\Rightarrow$ Sgr A through a magnetic cylinder]}\\ \includegraphics[width=16cm]{fig14.eps} \end{center} \caption{Same as figure \ref{sb_multi_ring_gc_rot}, but in the presence of a magnetic cylinder at $\varpi=3$. MHD waves from three SF regions (Sgr B etc.) near the molecular ring are reflected by the magnetic cylinder, while a fraction penetrates it and converges onto the nucleus (Sgr A). A vacant area appears in the cylinder, from which the waves are excluded. Some self-feedback occurs back to the ring and clouds.} \label{cylMag} \end{figure*} \subsection{Effect of clumpiness} \label{clumpiness} Although the CMZ is composed of such major structures as the central disc, the molecular ring, giant clouds like Sgr B, and the nuclear core around Sgr A, it is also full of smaller-scale clumps and turbulence \cite{Morris+1996,Oka+1998}. Such small structures cause fluctuations in the distribution of the \Alf velocity and disturb the smooth propagation of the MHD waves. While a detailed discussion of magneto-hydrodynamic turbulence is beyond the scope of this paper, we here perform a simple exercise to examine how small-scale fluctuations affect the MHD wave propagation by adding a sinusoidal perturbation to the background \Alf velocity distribution. Namely, the \Alf velocity is multiplied by a factor of \be f_{\rm clump}=1+q\ \sin (5x)\ \sin (5y)\ \sin (5z), \ee with $q\sim 0.1$.
This equation represents a $\Va$ variation with wavelength $\lambda \sim 0.6$ and a peak-to-peak amplitude of 0.2 times the background, corresponding to a peak-to-peak gas density variation of 0.4 times. In order to isolate the effect, we examine a simple case in which three major clouds (Sgr B etc.) without rotation and a nuclear core (Sgr A) are placed, and the waves are emitted at the surfaces of the three clouds. In figure \ref{clump} we compare the results without clumps (upper panels) and with clumps (lower panels). Although the wave fronts are more diverted due to the scattering and diffraction of the ray paths by the clumps, the global behavior of the wave propagation does not differ much between the two cases. Focusing onto Sgr A also occurs at almost the same efficiency, while the directions of the focusing waves are more widely spread. Therefore, we can conclude that the clumps disturb the shape of the wave front, but have no significant effect on the global focusing onto the nucleus and its efficiency. The behavior is similar to a fluid flow collected by a deformed funnel into the central hole, regardless of the degree of deformation. \begin{figure*} \begin{center} {\bf [ Sgr B, etc.$\Rightarrow$ Sgr A; without/with background fluctuations ]}\\ \includegraphics[width=16cm]{fig15.eps} \end{center} \caption{ Effect of clumpiness on the MHD waves from three sources (Sgr B, etc.) at $\varpi \sim 5$ without rotation, converging on Sgr A at the center. (a) The \Alf velocity distribution is the same as for figure~\ref{sb_multi_gc_rot}, and the waves from the three sources are shown at $t=1$, 4, 8, ..., 20 projected on the ($x,y$) plane. (b) Same, but projected as seen from $30\deg$ above the galactic plane. (c) Same as (a), but a close-up around Sgr A. (d) to (f) Same as (a) to (c), respectively, but the background \Alf velocity is modulated by the fluctuation factor $f_{\rm clump}=1+0.1\times \ \sin(5x)\ \sin(5y)\ \sin(5z)$. } \label{clump} \end{figure*} \section{Discussion} \label{secdiscussion} \subsection{Echoing Feedback between Sgr A and B} We have shown that MHD waves excited by the AGN in Sgr A converge onto the molecular clouds such as Sgr B in the CMZ. The waves focus onto the clouds' centers and compress the gas to trigger star formation. If the activity in Sgr A is strong and the wave amplitude is sufficiently large, the focusing wave will compress the cloud strongly, leading to a starburst. The SF and SB thus triggered produce expanding HII regions, SN explosions and stellar winds, which further excite MHD waves in the surrounding ISM. The major part of these MHD waves is trapped in the disc and ring, and some portion, $\sim 10-20$ percent, focuses on the nucleus, or Sgr A, as a spherically imploding compression wave. Thus, the activity of Sgr A (AGN) triggers the SF/SB in the CMZ (Sgr B), which excites further MHD waves that converge back onto Sgr A by inward focusing. This boomerang-focusing cycle will continue until the circum-nuclear gas and the CMZ are exhausted by star formation and out-flowing events such as winds and expanding shells. \subsection{Solving the angular-momentum problem of AGN fueling} The key problem in fueling an AGN by accretion of cold gas is how to overcome the resisting forces due to conservation of angular momentum and increasing magnetic pressure \cite{Umemura+1997,Shlosman+1989,Krolik+1990,Thompson+2005,Jogee2006}.
This problem can be solved by the present model, because the MHD wave propagates as a local and temporary enhancement of the magnetic pressure associated with gas compression, surfing the rotating disc without transporting angular momentum. Namely, the convergence onto the nucleus occurs without a global change in the angular momentum and magnetic field. Thus, the MHD wave focusing can produce a spherical implosion onto the nucleus without suffering from the resisting forces due to conservation of angular momentum or increasing magnetic pressure. \subsection{Energetics} The kinetic energy released by the starburst in the CMZ can be approximately estimated from the supernova (SN) rate and SF rate in the GC. The high excess of small-diameter SNRs in the GC \cite{Gray1994} suggests that the SN rate per volume is higher than that in the galactic disc. The observed SF rate of $\sim 0.1 \Msun$ y$^{-1}$ in the CMZ \cite{Barnes+2017,YZ+2009} indicates a massive-star (OB star) birth rate of $\sim 10^{-3} \times 10 \Msun$ y$^{-1}$ $\sim 10^{-2}\Msun$ y$^{-1}$, \red{corresponding to $\sim 10^{-3}$ SNe y$^{-1}$.} Then, the injection rate of kinetic energy into the CMZ by SNe is of the order of $L_{\rm SN}\sim \eta E_{\rm SN} dN_{\rm SN}/dt \sim 10^{39}$ ergs s$^{-1}$, where $E_{\rm SN}\sim 10^{51}$ ergs and $\eta\sim 0.03$ is the fraction converted to kinetic energy. The kinetic energy injected by SNRs finally fades away and merges into the ISM of the CMZ, exciting small-amplitude MHD waves. As the simulation showed, a significant fraction, $\sim 0.1$, of the thus-created waves converges onto the nucleus by the focusing effect. \red{This results in an implosive injection of kinetic energy in the form of compression MHD waves at a rate of $L_{\rm kin} \sim 10^{38}$ ergs s$^{-1}$ into a small focal area around Sgr A$^*$ (figure \ref{sb_gc}). } Since the problem of angular momentum has already been solved as discussed in the previous subsection, this kinetic energy is directly spent to promote the accretion of the circum-nuclear gas towards the center, overcoming the gravitational barrier. The accretion rate is related to the injected kinetic-energy luminosity as \be L_{\rm kin}\sim \dot{M} \( \frac{GM_\bullet}{r} \), \ee or \be \dot{M} \sim 3.6 \(\frac{L_{\rm kin}}{10^{40}\ {\rm erg \ s^{-1}}} \) \( \frac{r}{1 {\rm pc}}\) \(\frac{M_\bullet}{10^6 \Msun}\)^{-1} \ \Msun \ {\rm y}^{-1}, \ee where $M_\bullet$ is the mass of the central massive object and $r$ is the radius at which the accretion is proceeding. \red{For the above luminosity, this reduces to $\dot{M}\sim 0.01(r/1{\rm pc})\ \Msun \ {\rm y}^{-1}$, if we assume that the convergence is so efficient that the focusing occurs into a small volume around the central massive black hole of mass $M_\bullet \sim 4\times 10^6 \Msun$ at Sgr A$^*$ \cite{Genzel+2010}. This yields $\dot{M}\sim 10^{-6}\Msun$ y$^{-1}$ for $r\sim 10^5 R_{\rm S}$ with $R_{\rm S}$ being the Schwarzschild radius, which may be compared with the rate of a few $10^{-6}\Msun$ y$^{-1}$ estimated for Sgr A$^*$ \cite{Cuadra+2005,Yuan+2014}, although it is beyond the scope of the model how the accretion can proceed further inward to this radius from the presently simulated scale of several pc. } \subsection{High efficiency compression by spherical implosion} The present MHD calculation, which solves the Eikonal equations, cannot treat the non-linear growth of the waves or their absolute amplitude. However, we may speculate that the wave will grow rapidly as it focuses on the focal point, where the amplitude increases inversely with the spherical surface area of the wave front.
Such implosive focusing will further cause strong and efficient feeding of compressed gas onto the focal point, such as the nucleus or a dense cloud's center. An advantage of the present model is its minimal energy requirement. The energy released from the AGN in the form of MHD waves propagates through the GC disc without dissipation, as estimated by equation (\ref{dissipation}). Almost all of the waves are trapped inside the GC disc and CMZ, and converge onto the clouds, where the waves focus on focal points. Even weak disturbances in the form of MHD waves are collected by the clouds and largely amplified, causing spherical implosions toward the focal points. \subsection{Shock waves in explosion phase vs MHD waves in quiet phase} There have been various models to explain the radio, X-ray and $\gamma$-ray bubbles and shells from the Galactic Center by energetic explosions associated with strong shock waves \cite{Crocker2012,Kataoka+2018}. Such explosive phenomena will significantly influence and change the structure of the galactic disc, and may work to suppress star formation, rather than to trigger it, by blowing the gas off the disc. Such violent phenomena stand in contrast to the model presented here, in which the activities and star formation are triggered by the focusing of MHD waves. These two opposite situations may both occur if there are two different activity phases, strong and weak, in the nucleus, as follows. One phase is composed of energetic explosions associated with giant supersonic shells and jets expanding into the halo, which may blow off the surrounding interstellar medium into the halo. The other is a weak and quiet phase between the strong ones, during which weak disturbances such as MHD waves are emitted gently and constantly, and trigger the SF by focusing implosions. Although weak, the latter phase will last much longer than the strong phase, so that the nucleus (Sgr A) may be regarded as a constant supplier of the triggering waves. \subsection{Larger-scale feedback in the entire Galaxy} The present feedback mechanism can be extended to a larger-scale feedback of explosive energy to the disc and nucleus of the entire Milky Way. MHD waves produced at the nucleus converge not only onto the CMZ, but also penetrate it and propagate through the entire disc of the Milky Way because of the small dissipation rate. This may cause further convergence onto the spiral arms and the molecular and HI clouds, and would act to trigger implosive compression of the clouds, leading to star formation. Stronger shock waves from the nucleus expanding into the halo make giant shells and bubbles. When the shells fade out in the halo, sound and MHD waves are excited there. Such waves are then reflected by the upper halo, and converge onto the galactic disc and further onto interstellar clouds. Thus, most of the kinetic energy released at the nucleus (Sgr A) is trapped inside the Milky Way and fed back to interstellar clouds, triggering subsequent star formation there. Similarly, MHD waves excited in the galactic disc by SF and SNe propagate through the disc, and a significant portion globally converges onto the galactic center, triggering the AGN. Again, the efficiency of wave trapping and focusing is so high that a significant fraction of the kinetic energy released by SF is fed back to the GC. The convergence of these waves onto the nucleus will continue as long as SF activity continues in the Galaxy.
\vskip 5mm \section{Summary} We have traced the propagation of fast-mode magneto-hydrodynamic (MHD) compression waves in the Galactic Center (GC). It was shown that the waves produced by the activity in the nucleus (Sgr A) focus on the molecular ring and clouds in the CMZ, which will trigger a starburst. As feedback, MHD disturbances induced by the SF activity or starburst propagate backward to the nucleus and focus on the cloud around Sgr A. This further enhances the implosive compression and causes nuclear activity. The present model thus solves the most important problem, that of the angular momentum, in the AGN fueling mechanism. The AGN (Sgr A) and starburst (Sgr B) trigger each other through echoing focusing of MHD waves, which realises mutual triggering at high efficiency and with a minimal energy requirement. The present idea and method could also be applied more generally to gain insight into the detailed mechanism of the AGN-SB connection in external galaxies. \vskip 5mm \noindent{\bf Data availability} No data are available online. \noindent {\bf Acknowledgements}: The calculations were performed at the Astronomical Data Center (ADC) of the National Astronomical Observatory of Japan (NAOJ). The author would like to thank the anonymous referee for the valuable comments.
\section{INTRODUCTION} The global market for unmanned aerial vehicles (UAVs) remains in the development stage with a potential market of \$45 billion in the civil sector alone \cite{kovalev2019analysis}, showing the enormous economic potential of drone swarms. Teams of robots, and in particular quadcopters, are found to be useful in many applications such as search-and-rescue missions \cite{karaca2018potential}, emergency communication \cite{camara2014cavalry} or package delivery \cite{shakhatreh2019unmanned}, the reason for which lies at least partially in their potential for low-level distributed processing applications such as decentralized object tracking and detection. At the same time, the usage of drone swarms in the real world adds various additional coordination challenges \cite{chmaj2015distributed}. As UAVs in swarms will often operate in close vicinity to each other, an important challenge for the successful autonomous application of drone swarms is decentralized collision avoidance. The study of collision avoidance has a long history and begins with traditional path-planning and sensor-based methods. Existing collision avoidance algorithms based on path-planning typically require expensive planning as well as full state information, complicating their usage in large drone swarms \cite{mellinger2012mixed}. On the other hand, although decentralized sensor-based methods can achieve very good performance -- e.g. ORCA \cite{alonso2013optimal} -- the disadvantage of such methods lies in their rigidity. While such algorithms may provide provable safety for the assumed model, they often lead to deadlock situations. In this context, end-to-end reinforcement learning (RL) may be able to provide better performance both in terms of deadlock occurrence and speed \cite{long2018towards}. \begin{figure} \centering \vspace{0.2cm} \includegraphics[width=0.9\linewidth]{fig/fig1.pdf} \caption{Illustration of our proposed architecture (BICARL). A: The agent's own observations (orange block) are concatenated with observations from the $k$ nearest neighbors (blue block, $k = 1$) and passed to a feed-forward neural network to obtain an immediate action distribution. B: The corresponding situation where agents need to negotiate their way toward their targets while avoiding other dynamical agents. Each agent conditions its actions on observations of its $k$ nearest neighbors and its own state.} \label{fig:hlvl} \end{figure} In this work, we propose a scalable, biologically well-motivated design for learning collision avoidance using end-to-end RL, i.e. {B}iologically-{I}nspired {C}ollision {A}voidance using {RL} (BICARL). In our method, we transfer to quadcopters the biological insight that flocks of starlings (Sturnus vulgaris) interact only with a limited number of their nearest neighbors, in order to achieve decentralized collision avoidance that avoids crashes and deadlocks, while learning successful policies in 30 minutes on two commodity desktop computers using an Intel Core i7-8700K CPU, 16 GB of RAM and no GPUs. This provides a scalable end-to-end learning approach to decentralized collision avoidance, reaching the performance of other state-of-the-art algorithms. Even though our motion model is more complex than the models in prior works \cite{chen2017socially, everett2021collision}, we find that the added complexity allows improved collision avoidance performance and, at the same time, direct application to real-world quadrotors by combining with conventional low-level controllers.
As a result, we obtain a very practical deep RL approach to decentralized collision avoidance that remains scalable and applicable to arbitrary task specifications while requiring minimal information about the swarm, i.e. only from the nearest neighbor. Finally, we validate our algorithm both in simulation and in real-world application, verifying real-world applicability. \section{RELATED WORK} Traditional methods for collision avoidance include planning approaches with full state information \cite{hamer2018fast}, or hand-engineered sensor-based approaches using velocity obstacles \cite{fiorini1998motion} and potential fields \cite{sigurd2003uav}, see also \cite{hoy2015algorithms} for a survey. Though the field of planning remains an active area of research \cite{honig2018trajectory}, typical disadvantages of planning-based approaches are the requirement of full state information and high computational cost, barring scalable application in decentralized drone swarms. On the other hand, sensor-based methods may be applied in decentralized swarms, but have the drawback of potential deadlocks. Amongst the sensor-based methods, optimal reciprocal collision avoidance (ORCA) is a method based on velocity obstacles \cite{alonso2013optimal}, while force-based motion planning (FMP) is a state-of-the-art method based on potential fields \cite{semnani2019forcebased}. \paragraph{Learning-based approaches} Recently, there have been many attempts to use machine learning for collision avoidance \cite{long2018towards, willemsen2021mambpo}, typically outperforming traditional state-of-the-art methods in terms of success rate and speed by learning a collision avoidance rule via reinforcement learning \cite{chen2017decentralized}, though many prior works focus on simplified dynamics models and consider application only to small swarms or single drones \cite{kahn2017uncertainty}. The GA3C-CADRL algorithm \cite{everett2021collision} applies LSTMs in a single integrator model to learn a reaction to the positions and velocities of all other agents in order to jointly avoid collisions. From the imitation learning realm, GLAS \cite{riviere2020glas} focuses on imitating decisions of a global privileged planner with a safety module. \cite{semnani2020multi} extends GA3C-CADRL to 3D dynamics and fuses FMP \cite{semnani2019forcebased} with RL in a hybrid algorithm. Finally, \cite{wang2020two} applies imitation learning on ORCA for initialization, and then refines using RL. All of the aforementioned approaches except for \cite{chen2017socially} remain conditioned on information of all agents' (relative) positions and velocities. Similar to our work, \cite{chen2017socially} proposes RL-based collision avoidance via nearest neighbor information in the single integrator model, though their approach remains limited to very small swarms and in \cite{everett2021collision} was found to become stuck in large swarms. \paragraph{Biological inspiration} In the field of robot navigation, there exists a great number of works inspired by biology. To name a few, one can find navigation algorithms inspired by bugs \cite{mcguire2019comparative}, optical flow navigation inspired by honey bees \cite{green2008optic} or rule-based swarming models inspired by collective motion \cite{vicsek2012collective}, see also \cite{hoy2015algorithms} for a review. In our work, we give a biological motivation for the nearest neighbor method.
To be precise, we take inspiration from the behavior of flocks of starlings following a topological nearest-neighbor interaction rule in order to achieve robust swarm flight behavior \cite{young2013starling}. In prior works, this type of biological insight has inspired flocking control design \cite{liu2020leader} and real-world deployment of drone swarms \cite{petravcek2020bio}. Somewhat more related, \cite{zhu2020multi} implements end-to-end-learned flocking control based on classical swarming models. However, their focus remains on flocking, and their observation model assumes full observability and is therefore not decentralized. \section{MODEL} In this work we consider partially observable stochastic games (POSG) as the typical setting of multi-agent reinforcement learning. Formally, in our setting a POSG is a tuple $(I, X, U, Z, T, r, p, \gamma)$. The index set $I = \{1, \ldots, N\}$ is the set of agents, $X$ is the state space, $U = \bigotimes_{i \in I} U_i$ is the joint action space and $Z = \bigotimes_{i \in I} Z_i$ is the joint observation space, where $U_i$ and $Z_i$ denote the action and observation spaces of each agent respectively. In our work, we use a deterministic transition function $T \colon X \times U \to X$ and a random observation emission density $p \colon Z \times X \to \mathbb R_{\geq 0}$. The reward function $r \colon X \times U \to \mathbb R^N$ and the discount factor $\gamma \in (0,1)$ give rise to the maximization objective \begin{align} J_i(\boldsymbol \pi) = \mathbb E_{\boldsymbol \pi} \left[ \sum_{t=0}^\infty \gamma^t r_i(x_t, u_t) \mid x_0 \sim \mu_0 \right] \end{align} of agent $i$ with initial state distribution $\mu_0$ over joint policies $\boldsymbol \pi = (\pi^1, \ldots, \pi^N)$ with $\pi^i \colon U_i \times Z_i \to \mathbb R_{\geq 0}$ and \begin{align} z_t \sim p(\cdot \mid x_t), \quad u_t^i \sim \pi^i(\cdot \mid z_t^i), \quad x_{t+1} = T(x_t, u_t) \end{align} for $t \geq 0, i \in I$ and $u_t \equiv (u_t^1, \ldots, u_t^N), z_t \equiv (z_t^1, \ldots, z_t^N)$. \subsection{Dynamics} In our work, we will consider both a 2D and a 3D case. Perhaps the simplest model studied in multi-agent collision avoidance is the 2D single integrator model used in most prior works such as \cite{alonso2013optimal, everett2021collision}, where the states $x_t \equiv (\mathbf p^i_t, \mathbf p^{i,*}_t)_{i \in I}$ consist of the $\mathbb R^2$-valued position $\mathbf p^i_t = (x^i_t, y^i_t)$ as well as the goal position $\mathbf p^{i,*}_t = (x^{i,*}_t, y^{i,*}_t)$ of each agent $i$. As actions $u_t^i \equiv \mathbf v^i_t$, each agent may choose its $\mathbb R^2$-valued velocity $\mathbf v^i_t$ directly as an action under the constraint $\lVert \mathbf v^i_t \rVert_2 \leq v_{\mathrm{max}}$, leading to the deterministic transition function defined by \begin{align} \mathbf p^i_{t+1} = \mathbf p^i_t + \Delta t \cdot \mathbf v^i_t \end{align} for time step size $\Delta t \geq 0$. We propose a more complex model with some degree of momentum by using the following modified double integrator model, where the state $x_t \equiv (\mathbf p^i_t, \mathbf v^i_t, \theta^i_t, \omega^i_t, \mathbf p^{i,*}_t)_{i \in I}$ is given not only by the positions, but also by the $\mathbb R^2$-valued velocities $\mathbf v^i_t$ as well as the $\mathbb R$-valued yaw angle and its associated angular rate $\theta^i_t, \omega^i_t$. Note that this model can alternatively be understood as part of our algorithm on top of the single integrator model, i.e.
the algorithm keeps track of any additional states and simulates the modified double integrator dynamics to choose velocities at every time step. Therefore, results for this more complex dynamics model are nonetheless applicable and comparable to the single integrator case. An action $u_t^i \equiv (\tilde {\mathbf v}^i_t, \tilde {\omega}^i_t)$ of agent $i$ with chosen target velocity $\tilde {\mathbf v}^i_t \in \mathbb R^2$ and target angular velocity $\tilde {\omega}^i_t \in \mathbb R$ leads to the deterministic update \begin{align} \mathbf p^i_{t+1} &= \mathbf p^i_t + \mathbf v^i_t \cdot \Delta t , \\ \mathbf v^i_{t+1} &= \mathbf v^i_t + c_v (\mathbf R(\theta^i_t) \cdot \tilde {\mathbf v}^i_t - \mathbf v^i_t) \cdot \Delta t , \\ \theta^i_{t+1} &= \theta^i_t + \omega^i_t \cdot \Delta t , \\ \omega^i_{t+1} &= \omega^i_t + c_\omega (\tilde {\omega}^i_t - \omega^i_t) \cdot \Delta t \end{align} with $\lVert \tilde {\mathbf v}^i_t \rVert_2 \leq v_{\mathrm{max}}$, $\left| \tilde {\omega}^i_t \right| \leq \omega_{\mathrm{max}}$ and yaw rotation matrix \begin{align} \mathbf R(\theta^i_t) = \begin{bmatrix} \cos \theta^i_t & -\sin \theta^i_t \\ \sin \theta^i_t & \cos \theta^i_t \end{bmatrix} \, . \end{align} Importantly, although the yaw angle is not strictly required, we add it because it empirically yields significantly better performance, as the added model richness allows agents to implicitly save information without requiring e.g. recurrent policy architectures. This model of medium complexity will at the same time allow us to directly use the desired velocity output as a set point of more traditional low-level quadrotor controllers such as PID controllers. Note that depending on the specific task, we add task-specific transitions for the goal position, see the section on experiments. For the 3D case, we simply add another $z$-coordinate to position and velocity that remains unaffected by the rotation matrix. It should be further noted that we observe no discernible difference when applying a small amount of noise to the dynamics. \subsection{Observation model} We let the observations of an agent $i$ be randomly given by the own position and velocity as well as the relative bearing, distance and velocity of other agents inside of the sensor range $K > 0$ and the goal, i.e. \begin{multline} z_t^i \equiv (\hat{\mathbf p}^i_t, \hat{\mathbf v}^i_t, d^{i,*}_t, \phi^{i,*}_t, \\ \{ (d^{i,j}_t, \phi^{i,j}_t, \mathbf v^{i,j}_t) \mid j \neq i \colon \lVert \mathbf p^j_t - \mathbf p^i_t \rVert_2 \leq K \}) \end{multline} where the observations are Gaussian distributed according to \begin{align} \hat{\mathbf p}^i_t &\sim \mathcal N(\mathbf p^i_t, \sigma_p^2), \quad \hat{\mathbf v}^i_t \sim \mathcal N(\mathbf v^i_t, \sigma_v^2), \\ d^{i,*}_t &\sim \mathcal N(\lVert \mathbf p^{i,*}_t - \mathbf p^i_t \rVert_2, \sigma_d^2), \\ \phi^{i,*}_t &\sim \mathcal N(\varphi^{i,*}_t, \sigma_\phi^2), \\ d^{i,j}_t &\sim \mathcal N(\lVert \mathbf p^j_t - \mathbf p^i_t \rVert_2, \sigma_d^2), \\ \phi^{i,j}_t &\sim \mathcal N(\varphi^{i,j}_t, \sigma_\phi^2), \\ \mathbf v^{i,j}_t &\sim \mathcal N(\mathbf v^{j}_t - \mathbf v^{i}_t, \sigma_v^2) \end{align} with noise standard deviations $\sigma_p, \sigma_v, \sigma_d, \sigma_\phi > 0$ and bearing angles $\varphi^{i,*}_t = \arctantwo(y^{i,*}_t - y^i_t, x^{i,*}_t - x^i_t)$, $\varphi^{i,j}_t = \arctantwo(y^j_t - y^i_t, x^j_t - x^i_t)$, where $\arctantwo(y,x)$ is defined as the angle between the positive $x$-axis and the ray from $0$ to $(x,y)$.
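A minimal sketch of the modified double-integrator transition and of the noisy observation model above is given in the following listing. It is illustrative only -- the actual environment is implemented in Unity ML-Agents, see the methodology section -- and the 2D single-agent form, the command clipping, and the default noise values (taken loosely from Table~\ref{table:hyperparameters}) are assumptions made for the sake of a self-contained example.
\begin{verbatim}
import numpy as np

def transition(p, v, theta, omega, v_cmd, omega_cmd,
               dt=0.02, c_v=1.0, c_w=1.0, v_max=30.0, w_max=15.0):
    """One step of the modified double integrator (2D, single agent)."""
    v_cmd = v_cmd * min(1.0, v_max / (np.linalg.norm(v_cmd) + 1e-9))
    omega_cmd = float(np.clip(omega_cmd, -w_max, w_max))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    p_new = p + v * dt
    v_new = v + c_v * (R @ v_cmd - v) * dt     # relax towards rotated target
    theta_new = theta + omega * dt
    omega_new = omega + c_w * (omega_cmd - omega) * dt
    return p_new, v_new, theta_new, omega_new

def observe(i, p, v, goal, K=10.0, sig_p=1e-3, sig_v=1e-2,
            sig_d=1e-3, sig_phi=1e-4, rng=None):
    """Noisy observation of agent i: own state, goal, in-range neighbors."""
    rng = rng if rng is not None else np.random.default_rng()
    d_goal = np.linalg.norm(goal[i] - p[i])
    phi_goal = np.arctan2(goal[i][1] - p[i][1], goal[i][0] - p[i][0])
    own = [rng.normal(p[i], sig_p), rng.normal(v[i], sig_v),
           rng.normal(d_goal, sig_d), rng.normal(phi_goal, sig_phi)]
    neighbors = []
    for j in range(len(p)):
        d_ij = np.linalg.norm(p[j] - p[i])
        if j != i and d_ij <= K:
            phi_ij = np.arctan2(p[j][1] - p[i][1], p[j][0] - p[i][0])
            neighbors.append((rng.normal(d_ij, sig_d),
                              rng.normal(phi_ij, sig_phi),
                              rng.normal(v[j] - v[i], sig_v)))
    return own, neighbors
\end{verbatim}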
Note further that the observations allow application of e.g. ORCA \cite{alonso2013optimal} and FMP \cite{semnani2019forcebased}. In the 3D case, we additionally observe the $z$-coordinate difference to the target and other agents with Gaussian noise of standard deviation $\sigma_p$. \subsection{Reward function} As the reward function for all of our experiments, for each agent $i \in I$ we define a term for reaching the goal position and a term for avoiding collisions by \begin{multline} r_i(x_t, u_t) = c_p \langle \mathbf v^i_t, \mathbf p^{i,*}_t - \mathbf p^i_t \rangle \\ - c_c \sum_{j \in I \setminus \{ i \}} \mathbf 1 \left( \lVert \mathbf p^j_t - \mathbf p^i_t \rVert_2 \leq C \right) \end{multline} with desired target position $\mathbf p^{i,*}_t \in \mathbb R^2$, avoidance radius $C \geq 0$ and reward / collision penalty coefficients $c_p, c_c \geq 0$, where $\langle \cdot, \cdot \rangle$ denotes the standard dot product. Note that this objective is selfish, i.e. agents are only interested in their own collisions and in reaching their own goal. This manual choice of reward function alleviates the well-known multi-agent credit assignment problem in multi-agent RL \cite{hernandez2019survey}, as shared, cooperative swarm reward functions are difficult to learn from due to the increasing noise from other agents' behaviors as the number of agents rises. \section{METHODOLOGY} In this work, we propose a biologically-inspired design principle for learning collision avoidance algorithms. Recently, it was found that swarms of starlings observe only up to seven neighboring birds to obtain robust swarm flight behavior \cite{young2013starling}. This suggests that a similar observation reduction should be sufficient for collision avoidance. Further, it is well known that the multi-agent RL domain suffers from the combinatorial nature of multi-agent problems \cite{hernandez2019survey}. Hence, the reduction of observations to e.g. the closest $k$ agents can greatly help learning effective behavior. It is known that tractable exact solution methods and theoretical guarantees in the context of POSGs are scarce even in the fully cooperative case \cite{zhang2021multi}. Instead, we apply independent learning, i.e. we ignore the non-stationarity of other agents and solve the resulting separate single-agent problems via RL \cite{tan1993multi}. Furthermore, we share a policy between all agents via parameter sharing \cite{gupta2017cooperative} and use the PPO algorithm to learn a single, shared policy \cite{schulman2017proximal}. For faster learning, we collect rollout trajectories from our simulation in parallel on two machines and use the RLlib \cite{liang2018rllib} implementation of PPO. We implemented our environment in Unity ML-Agents \cite{juliani2018unity}. \begin{table} \centering \caption{Hyperparameters and parameters used in all experiments.} \label{table:hyperparameters} \begin{tabular}{@{}ccc@{}} \toprule Symbol & Function & Value \\ \midrule $\Delta t$ & Time step size & \SI{0.02}{\second} \\ $c_v$ & Velocity coefficient & \SI{1}{\per \second} \\ $c_\omega$ & Angular rate coefficient & \SI{1}{\per \second} \\ $\sigma_p, \sigma_v$ & Noise standard deviations & \SI{1}{\milli\metre}, \SI{10}{\milli\metre}\\ $\sigma_d, \sigma_\phi$ & Noise standard deviations & \SI{1}{\milli\metre}, \SI{0.1}{\milli\metre}\\ $C$ & Avoidance radius & \SI{7}{\metre} \\ $c_p$ &
Reward coefficient & \SI{0.3}{\second \per \square \metre} \\ $c_c$ & Penalty coefficient & $1$ \\ $v_{\mathrm{max}}$ & Maximum velocity & \SI{30}{\metre \per \second} \\ $\omega_{\mathrm{max}}$ & Maximum angular velocity & \SI{15}{\per \second} \\ $K$ & Sensor range & \SI{10}{\metre} \\ $k$ & Number of considered neighbors & 1 \\ $\gamma$ & Discount factor & $0.99$\\ \\ \midrule & PPO & \\ \midrule $l_{r}$ & Learning rate & \num{5e-5}\\ $\beta$ & KL coefficient & $0.2$ \\ $\epsilon$ & Clip parameter & $0.3$ \\ $B$ & Training batch size & $50000$ \\ $B_{m}$ & SGD Mini batch size & $2500$ \\ $M$ & SGD iterations & $20$ \\ \bottomrule \end{tabular} \end{table} We parametrize our policy by a simple feedforward network consisting of two hidden layers with 64 nodes and ReLU activations, except for the output layer, which outputs the parameters of a diagonal Gaussian distribution over actions. Actions are clipped after sampling to fulfill the constraints. We preprocess observations by reducing them to the information of the nearest $k$ neighbors in $L_2$ norm on the positions $\mathbf p^i_t$. All results in this work are for $k=1$, i.e. we use the observations $(d^{i,j}_t, \phi^{i,j}_t, \mathbf v^{i,j}_t)$ of the nearest neighbor $j$. Crucially, this observation reduction works well in our experiments, and it allows us to show that neighbor information limited to the nearest neighbor is indeed sufficient for collision avoidance. An advantage of our design is that the resulting policy itself is very simple and memoryless. Note that our policy architecture is very small and thus allows sufficiently fast computation on low-powered microcontrollers and UAVs such as the Crazyflie \cite{giernacki2017crazyflie} used in our real-world experiments. In comparison, previous works use Deep Sets \cite{zaheer2017deep} or LSTMs \cite{everett2021collision} to process a variable number of other agents, which scales worse in large swarms and is more computationally costly to learn and apply. The scenarios considered in this work include formation change and package delivery. In the formation change scenario, all drones start on a circle and have their goal set to the opposite side. In the package delivery scenario, drones are continuously assigned new random package locations and destinations after picking up and dropping off packages, where the locations and destinations are uniformly distributed e.g. on a line, on a circle or in a rectangle. \section{EXPERIMENTS AND RESULTS} \begin{table} \centering \caption{Performance of BICARL (ours), ORCA \cite{alonso2013optimal}, and FMP \cite{semnani2019forcebased} in the package delivery task, averaged over 4 runs of 100 seconds.} \label{table:circle-comparison} \begin{tabular}{@{}ccccccc@{}} \toprule Test setup & \multicolumn{3}{c}{Average collected packages} \\ \midrule \# agents & BICARL & ORCA & FMP \\ \midrule 4 & \textbf{15.25} & 1.5 & 3 \\ 6 & \textbf{8.66} & 1.16 & 2 \\ 8 & \textbf{7.62} & 1.25 & 2.12 \\ 10 & \textbf{5.8} & 0 & 1.2 \\ 12 & \textbf{4.41} & 0 & 0.41 \\ 14 & \textbf{4.14} & 0 & 0.25 \\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.55\linewidth]{fig/stuck_fmp.pdf} \caption{A constellation where FMP is stuck in package delivery: at this number of agents, the computed forces nullify each other, resulting in freezing agents.
The red squares visualize the current goal of each agent and the blue lines signify which agent is assigned which goal.} \label{fig:fmp_stuck_circle} \end{figure} \begin{table*} \centering \caption{Performance analysis of BICARL (ours), ORCA \cite{alonso2013optimal}, and FMP \cite{semnani2019forcebased} in circle formation change, averaged over 5 runs.} \label{table:comparison} \renewcommand{\arraystretch}{1.21} \begin{tabular}{@{}cllccccccccc@{}} \toprule \multicolumn{3}{c}{Test setup} & \multicolumn{3}{c}{Extra time to goal (\si{\second})} & \multicolumn{3}{c}{Minimal distance to other agents (\si{\metre})} & \multicolumn{3}{c}{Extra travelled distance (\si{\metre})} \\ \midrule \multicolumn{3}{c}{\# agents} & BICARL & ORCA & FMP & BICARL & ORCA & FMP & BICARL & ORCA & FMP \\ \midrule \multicolumn{3}{c}{5} & 4.02 & 14.52 & \textbf{1.86} & \textbf{12.38} & 4.94 & 6.18 & 1.28 & 1.17 & \textbf{1.11} \\ \multicolumn{3}{c}{10} & 4.51 & 15.31 & \textbf{1.95} & \textbf{7.01} & 4.78 & 3.78 & 1.26 & 1.65 & \textbf{1.15} \\ \multicolumn{3}{c}{15} & 4.87 & 14.98 & \textbf{2.15} & \textbf{6.14} & 3.42 & 4.44 & 1.27 & 1.82 & \textbf{1.19} \\ \multicolumn{3}{c}{20} & 6.51 & 18.68 & \textbf{2.34} & 3.14 & \textbf{4.74} & 3.87 & 1.45 & 2.7 & \textbf{1.26} \\ \multicolumn{3}{c}{25} & 7.52 & 20.13 & \textbf{2.28} & 3.3 & \textbf{4.7} & 3.34 & 1.60 & 3.56 & \textbf{1.23}\\ \multicolumn{3}{c}{30} & 7.42 & 31.51 & \textbf{3.66} & 3.52 & \textbf{4.7} & 2.94 & 1.61 & 4.68 & \textbf{1.38} \\ \multicolumn{3}{c}{35} & \textbf{7.81} & 41.15 & N/A & 2.52 & \textbf{4.72} & N/A & \textbf{1.60} & 5.71 & N/A \\ \multicolumn{3}{c}{40} & \textbf{9.18} & 45.64 & N/A & 2.35 & \textbf{2.75} & N/A & \textbf{1.79} & 10.17 & N/A \\ \multicolumn{3}{c}{45} & \textbf{8.91} & 76.25 & N/A & 1.94 & \textbf{3.21} & N/A & \textbf{1.76} & 8.14 & N/A \\ \multicolumn{3}{c}{50} & \textbf{10.1} & 81.03 & N/A & 1.67 & \textbf{2.66} & N/A & \textbf{1.87} & 17.95 & N/A \\ \bottomrule \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig/traj_drones_81012.png} \caption{Simulated trajectories of agents for $1$-nearest neighbor BICARL with 2D dynamics. Left: 8 agents; Middle: 10 agents; Right: 12 agents.} \label{fig:traj81012} \end{figure} For all of our experiments, we use the parameters and hyperparameters indicated in Table~\ref{table:hyperparameters}. We apply curriculum learning \cite{bengio2009curriculum} to improve learning performance, increasing the number of agents continuously while training. The single resulting policy is then evaluated for a varying number of agents to gauge generality and scalability. We consider a collision to occur when agents are closer than $\SI{1.5}{\metre}$ and accordingly tuned the penalty radius during training to $C=\SI{7}{\metre}$ to achieve reasonable behavior for up to $50$ agents. \subsection{Simulation Results} We trained on two commodity desktop computers equipped with an Intel Core i7-8700K CPU, 16 GB RAM and no GPUs. We successfully and consistently learned good policies in 30 minutes, i.e. after about $1.4 \cdot 10^{6}$ time steps. The reason for the fast training lies in the parallelization of experience collection, as we can run multiple Unity simulations at the same time to collect data, and the simulation in Unity is fast. We first compare our results to the algorithms FMP \cite{semnani2019forcebased} and ORCA \cite{alonso2013optimal}.
As mentioned earlier, for comparison we may also consider our dynamics as part of the algorithm on top of the simpler single integrator dynamics. \paragraph{Learned collision avoidance behavior} In the package delivery scenario, we place the agents on a circle of fixed radius and gradually increase the number of agents. As soon as an agent comes closer than $\SI{3.5}{\metre}$ to its goal, we sample a new goal on the circle. In Table~\ref{table:circle-comparison}, it can be observed that the average number of packages collected per drone decreases as the number of drones increases, as the drones will increasingly be in each other's way. Further, for the package delivery task, FMP and ORCA eventually run into a deadlock. An example for FMP can be seen in Fig.~\ref{fig:fmp_stuck_circle}. In this case, our methodology provides robust, collision-free behavior that avoids getting stuck. In the formation change scenario, during training we start with 4 diametrically opposed agents on a circle of radius $\SI{70}{\metre}$ and gradually increase the size of the swarm up to 40. It can be seen in Fig.~\ref{fig:traj81012} that rotation around a fixed direction emerges. Increasing the number of agents leads to situations where ORCA and FMP get stuck. \cite{trautman2010unfreezing} demonstrates that the solution is cooperative collision avoidance. In line with this finding, our learned policy is able to capture such cooperation, i.e. one drone gives way to another, as can be seen in Fig.~\ref{fig:same_target} and the supplementary videos. Overall, from Table~\ref{table:comparison} we see that our solution is competitive, especially for many agents. Note that extra time to goal and travelled distance are measured relative to the baseline where drones fly in straight lines to the goal. Although FMP achieves very good results for small numbers of agents, at more than 35 agents FMP becomes stuck, while our method learns a robust behavior that works for large numbers of agents. Here, we tuned the FMP parameters to obtain reasonably smooth flight. While improved results for FMP could be possible, additional parameter tuning would be required. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig/2A-Package-Collection.png} \caption{Simulated flight behavior of two agents in the package delivery scenario with overlapping goals $(1^*, 2^*)$. In FMP, the drones become stuck, while in BICARL (our approach) one drone yields and the other drone collects the package, obtaining a new goal. The red squares and blue lines have the same meaning as in Fig.~\ref{fig:fmp_stuck_circle}. Time progresses from left to right.} \label{fig:same_target} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig/dis_drones_101520.png} \caption{Simulated minimum inter-agent distances achieved in the circle formation change scenario. The red line indicates the radius considered as a collision. Left: 10 agents; Middle: 15 agents; Right: 20 agents.} \label{fig:dis_101520} \end{figure} In Fig.~\ref{fig:dis_101520}, we find that our method successfully avoids collisions with other agents while reaching the goals. During training, collisions may be caused by other nearby agents regardless of an agent's own behavior. As a result, this often provides a negative feedback signal even if the drone itself is not responsible for the collision, resulting in behavior where agents keep clear of other agents even at distances well beyond the avoidance radius (here $C=\SI{7}{\metre}$).
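For concreteness, the evaluation metrics of Table~\ref{table:comparison} can be computed from recorded trajectories as in the following sketch. The listing is illustrative only, not the evaluation script used for the reported numbers; in particular, the normalisation of the extra travelled distance (here the ratio of travelled to straight-line distance) and the goal-reaching threshold are assumptions.
\begin{verbatim}
import numpy as np

def episode_metrics(traj, goals, v_max=30.0, dt=0.02, goal_radius=1.5):
    """Metrics as in Table III from one episode.

    traj:  array of shape (T, N, 2), agent positions over T time steps.
    goals: array of shape (N, 2), fixed goal position of each agent.
    Extra time and distance are measured against an agent flying the
    straight line to its goal at maximum speed.
    """
    T, N, _ = traj.shape
    straight = np.linalg.norm(goals - traj[0], axis=-1)                 # (N,)
    travelled = np.linalg.norm(np.diff(traj, axis=0), axis=-1).sum(0)   # (N,)
    reached = np.linalg.norm(traj - goals[None], axis=-1) < goal_radius # (T,N)
    t_goal = np.where(reached.any(0), reached.argmax(0) * dt, np.inf)
    extra_time = t_goal - straight / v_max
    extra_dist = travelled / straight
    # minimum pairwise inter-agent distance over the whole episode
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    d[:, np.arange(N), np.arange(N)] = np.inf
    return extra_time.mean(), d.min(), extra_dist.mean()
\end{verbatim}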
Analogously to the 2D case, we can consider formation change in the 3D case. In Fig.~\ref{fig:traj3D_81012} the trajectories of our learned policy are depicted. It can be seen that our methodology remains successful in guiding the agents to their goal destinations. Furthermore, to make use of the additional vertical space, agents begin flying above and below each other. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig/traj_drones_3D81012.png} \caption{Simulated trajectories of agents for $1$-nearest neighbor BICARL with 3D dynamics. Agents have free space above their starting position and use it to navigate past each other. Left: 8 agents; Middle: 10 agents; Right: 12 agents.} \label{fig:traj3D_81012} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{fig/benchmark_LSTM_std.png} \caption{Learning curve: average achieved sum of rewards per episode over total time steps, with the variance over three random seeds shown as the shaded region. It can be seen that our model learns significantly faster. Left: 2 agents; Middle: 4 agents; Right: 6 agents.} \label{fig:lstm_bicarl} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{fig/3D-6A.png} \caption{Real-world circle formation change with the policy for the 3D case embedded on a swarm of 6 Crazyflie nano-quadcopters. The drones successfully switch their positions to the antipodal points on a circle by engaging in 3D collision avoidance. Time progresses from left to right.} \label{fig:real} \end{figure*} \paragraph{Comparison to LSTM-based policies} We compare our observation model with the LSTM variant proposed in \cite{everett2021collision}. We use an LSTM observation filter in conjunction with the two-layer architecture and run several tests with different parameters (e.g. the number of hidden units) for 2, 4 and 6 agents, comparing the average achieved reward in each case. The learning curves in Fig.~\ref{fig:lstm_bicarl} suggest that our observation model achieves comparable results and faster learning. As expected, training LSTMs is slow and our method speeds up learning greatly, especially for many agents. While this does not exclude that, in general and with improved hyperparameter configurations, the LSTM observation model could be superior, our method has fewer parameters to fine-tune, has a constant computation cost independent of the number of agents, and hence is cheaper to both train and apply in the real world. Although in \cite{everett2021collision} it has been noted that learned collision avoidance algorithms such as \cite{chen2017socially} based on neighborhood information tend to get stuck in large swarms or cause collisions, we obtain a seemingly opposite result. The reason is that our double integrator model together with the added yaw angle allows agents to avoid getting stuck by implicitly saving information or communicating with each other via the yaw angle. Since the problem at hand is multi-agent, a general optimal solution should act depending on the whole history of observations and actions. If two drones are stuck, they can therefore remember past information and change their reaction, leading to better results. Indeed, in our experiments we found that the yaw angle is crucial to obtaining good performance, leading us to the conclusion that a sufficiently rich motion model may in fact ease the learning of good collision avoidance behavior.
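Before turning to the hardware experiments, the $k$-nearest-neighbor preprocessing and the two-hidden-layer Gaussian policy described in the methodology section can be summarized by the following minimal PyTorch sketch. It is illustrative only -- the actual training uses the RLlib implementation of PPO and the Unity ML-Agents environment -- and the zero-padding of missing neighbors as well as the example dimensions are assumptions rather than implementation details of BICARL.
\begin{verbatim}
import numpy as np
import torch
import torch.nn as nn

def nearest_neighbor_obs(own_obs, neighbors, k=1):
    """Keep the k nearest neighbors (by observed distance) and concatenate
    their (distance, bearing, relative velocity) to the own observation,
    as illustrated in Fig. 1A.  Missing neighbors are zero-padded."""
    nearest = sorted(neighbors, key=lambda n: n[0])[:k]
    flat = [x for (d, phi, dv) in nearest for x in (d, phi, *np.ravel(dv))]
    flat += [0.0] * (4 * k - len(flat))
    own = np.hstack([np.atleast_1d(np.asarray(o, dtype=np.float32))
                     for o in own_obs])
    return np.hstack([own, np.asarray(flat, dtype=np.float32)])

class GaussianPolicy(nn.Module):
    """Two hidden layers of 64 units with ReLU and a diagonal Gaussian head."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.mean = nn.Linear(64, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        h = self.body(obs)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

# Example: own obs (p, v, d*, phi*) = 6 values plus 4 values for k = 1
# neighbor gives obs_dim = 10; action = (target velocity, target yaw rate).
# policy = GaussianPolicy(obs_dim=10, act_dim=3)
# dist = policy(torch.as_tensor(obs_vec))
# action = dist.sample()        # clipped afterwards to satisfy constraints
\end{verbatim}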
\subsection{Real World Experiments} We validate the real-world applicability of our policies by performing experiments on swarms of micro UAVs. We downscale lengths and speeds by a factor of $10$ (since the indoor drones used are only $\SI{0.1}{\metre}$ in length). We directly apply the desired velocity as a set point for the low-level controller instead of the acceleration used in conventional double integrator models, since acceleration is not easily regulated due to the non-linearity of typical quadrotor dynamics. \paragraph{Hardware Setup} For our real-world experiments, we use a fleet of Crazyflie nano-quadcopters \cite{giernacki2017crazyflie} with indoor positioning via the Lighthouse infrared positioning system and an extended Kalman filter \cite{mueller2015fusing} with covariance correction \cite{mueller2017covariance}. Since the Crazyflies cannot sense each other, we simulate the inter-drone observations by exchanging information via TDMA-based peer-to-peer radio communication to broadcast the estimated position and velocity of each drone. Although the sensor range $K$ in the simulation can model a limited range of wireless communication, in our real indoor experiments there is no such limitation. Instead, each drone ignores all information other than the observations it would receive under the simulated sensor model. As low-level controller, we use the default cascaded PID controller of the Crazyflie, setting desired velocities and yaw rates from the output of our policy directly as the set point of the controller. Note that our code runs completely on-board, except for starting and stopping experiments. \paragraph{Evaluation} We find that the policies obtained in simulation work as expected for two to six drones in reality, and we expect similar results for more real drones. In Fig.~\ref{fig:real}, it can be seen that the drones successfully complete formation changes. Using our modified double integrator model, we easily achieve stable results in real world tests, as the cascaded PID low-level controller is able to track the desired velocities while stabilizing the system. Due to the simulation-to-reality gap stemming from model simplifications, we can observe slight control artefacts (oscillations). Nonetheless, the combination of a traditional controller with the learned policy successfully realizes the desired collision avoidance behavior, and the on-board inference time of the policy remains below $\SI{1}{\milli \second}$, compared to $\SI{30}{\micro \second}$ on our PCs. As a result, by using our motion model of medium complexity, we obtain a practical approach to decentralized quadrotor collision avoidance. \section{CONCLUSION} To summarize, in this work we have demonstrated that information about the $k$ nearest neighbors, non-recurrent policies and a sufficiently rich motion model are enough to find robust collision avoidance behavior. We have designed a high-level model that allows application of the learned policy directly to the real world in conjunction with e.g. a standard cascaded PID controller. In the large swarm case and for collecting packages, our scalable and decentralized methodology appears to show competitive performance. Furthermore, our method avoids getting stuck and is very fast to learn. Interesting future work could be to consider external static or dynamic obstacles such as walls. One could combine our approach for $k>1$ with Deep Sets \cite{zaheer2017deep}, mean embeddings \cite{huttenrauch2019deep} or attention mechanisms \cite{manchin2019reinforcement} to further improve learning behavior.
Finally, it may be of interest to further reduce the simulation-to-reality gap by using a more detailed simulation that includes e.g. 12-state quadrotor dynamics, a simulated PID controller, motor lag, and detailed drag models. \bibliographystyle{IEEEtran}
\section{Introduction} It is known that in nature a quantum particle has corpuscular and wave properties, which are incompatible with each other from the viewpoint of classical physics. So the creators of quantum mechanics (QM), whose goal was to develop a universal theory to describe nature both on the atomic and macroscopic levels, faced an extremely complex problem -- they had to find a way to reconcile these seemingly incompatible properties within the framework of a single theory and, moreover, to find such a balance between these properties that this theory would indeed be universal. As it turned out, QM is not a universal theory. Its superposition principle is incompatible with the principles of macroscopic realism, as was shown by Schr\"{o}dinger in his famous Cat paradox, as well as by Feynman in his analysis of the double-slit thought experiment. Thus, the creators of quantum mechanics did not find the desired balance between the wave and corpuscular properties of a quantum particle: it proved to be violated in favor of the wave properties. At present there is a dominant point of view, according to which QM should abandon the conception of closed systems and treat the Cat paradox as a measurement problem. Of course, there are other points of view, but none of them puts in doubt the validity of the modern formulation of the quantum mechanical superposition principle. But our goal is to prove just the opposite: it is this formulation of the superposition principle that needs revision, not the concept of closed systems. QM, with this formulation, is logically contradictory; as a consequence, its logically consistent interpretation is also impossible. \section{Where and why modern quantum theory violates the natural harmony between the corpuscular and wave properties of a quantum particle} Let us now elucidate the question of where and why modern quantum theory improperly describes the harmony existing in nature between the corpuscular and wave properties of a quantum particle. For this purpose we have to analyze such quantum mechanical concepts as ``state vectors" and ``self-adjoint operators". This is important for our study because the position and momentum operators, together with other one-particle self-adjoint operators, represent the corpuscular properties of a quantum particle, while state vectors represent its wave properties. According to QM, a quantum state of a closed system -- a particle plus an external field described by the potential energy operator entering the Hamiltonian -- is specified, at time $t$, by the time-dependent state vector (wave function) that obeys the corresponding Schr\"{o}dinger equation. Such states are called pure states in order to distinguish them from mixed states defined by the density operator. All possible pure states of a particle, at time $t$, form a Hilbert space where the superposition principle operates. Thus, the starting situation is as follows: the canonical operators that represent the corpuscular properties of a quantum particle act in the Hilbert space that represents the wave properties of this particle. This means that in order to elucidate the compatibility in QM of the wave and corpuscular properties of the particle, we must verify the compatibility of the properties of the canonical operators and the Hilbert space. For this purpose we have to begin with the quantum mechanical superposition principle, which underlies the notion of a Hilbert space.
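For concreteness, recall the standard Schr\"{o}dinger representation of the canonical pair on the space $L^2(\mathbb{R})$ of square-integrable wave functions (textbook relations quoted here only to fix notation):
\begin{equation*}
(\hat{x}\psi)(x)=x\,\psi(x),\qquad (\hat{p}\psi)(x)=-i\hbar\,\frac{d\psi(x)}{dx},\qquad [\hat{x},\hat{p}]=i\hbar .
\end{equation*}
With this notation fixed, let us turn to the superposition principle itself.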
According to the modern formulation of this principle, a sum (``superposition") of two or more pure states of a quantum particle is also its pure state; and vice versa, any pure state of a quantum particle can be represented as a sum of two or more of its other pure states. At first glance, the superposition principle affects only the properties of a Hilbert space and has nothing to do with the properties of the canonical operators. But this is not so. In essence, this formulation requires the canonical operators to act irreducibly in the space of pure states of a particle (in other words, it requires that the Schr\"{o}dinger representation be irreducible). Otherwise, the space of pure states would be a direct sum of nontrivial orthogonal subspaces invariant with respect to the action of these operators and, thus, the superposition principle would have to distinguish between pure states belonging to different orthogonal subspaces, as well as the superpositions of such states. {\it This requirement is precisely the weak point in modern quantum theory where the natural harmony between the corpuscular and wave properties of a quantum particle is violated.} The point is that, if we accept this formulation of the superposition principle as flawless, then the irreducibility of the Schr\"{o}dinger representation must also be accepted as unquestionable truth. But the modern proof of the irreducibility of the Schr\"{o}dinger representation is questionable because it ignores the unboundedness of the position and momentum operators. As was stressed by F. Strocchi (see p.~2 in \cite{Str1}), ``without the condition of {\it boundedness} the whole linear structure of the observables is in question". Modern quantum theory overcomes this difficulty as follows (see, e.g., \cite{Str1,Str2,Pri,Bri}): at the first step, on the basis of the Stone-von Neumann theorem it is rigorously proved that the (bounded) Weyl exponentials $e^{i\hat{x}u}$ and $e^{i\hat{p}v}$ of the (unbounded) operators $\hat{x}$ and $\hat{p}$ act irreducibly in the space of pure states ($u$ and $v$ are real parameters); at the second step, by making use of the heuristic ``operational definition of observables", this result is extended to the usual $\hat{x}$ and $\hat{p}$ operators. In effect, the second step is based on the following heuristic reasoning (see, e.g., F. Strocchi \cite{Str2}): due to the scale bounds of experimental apparatuses, one actually measures only {\it bounded} functions of $x$ and $p$; thus, $x$ and $p$ are not observables in the operational sense -- the unbounded $\hat{x}$ and $\hat{p}$ are rather (physically harmless) extrapolations of their bounded Weyl counterparts that fall under the ``operational definition of observables". However, such an approximation of the position operator is ``physically harmless" only for a particle with a {\it discrete} energy spectrum. Since a particle with such a spectrum has {\it bound} stationary states, the probability of finding a particle with a given energy at infinity is zero. As a consequence, the (usual) position operator is ``effectively bounded" and, hence, its approximation by the corresponding (bounded) Weyl counterpart is indeed justified. As regards a particle with a continuous energy spectrum, it has {\it unbound} stationary states: for a given energy, the probability of finding a particle at infinity is nonzero and, hence, the unboundedness of $\hat{x}$ is irremovable.
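For reference, the bounded Weyl exponentials satisfy the Weyl form of the canonical commutation relations (a standard relation, quoted here without proof),
\begin{equation*}
e^{i\hat{x}u}\,e^{i\hat{p}v}=e^{-i\hbar uv}\,e^{i\hat{p}v}\,e^{i\hat{x}u},
\end{equation*}
and it is to these unitaries, not to the unbounded operators $\hat{x}$ and $\hat{p}$ themselves, that the Stone-von Neumann theorem directly applies.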
In quantum one-particle phenomena with a continuous energy spectrum, a time-dependent physical (normalized) state of a particle can represent a superposition of two or more wave packets (of finite size) which eventually move away from each other by an arbitrarily large distance. Such superpositions appear in one-particle scattering processes as well as in decay phenomena. This takes place, for example, at the final stage of scattering a particle on a 1D potential barrier, when the state of a particle represents the superposition of the transmitted and reflected wave packets; the distance between the ``centers of mass" of these packets grows infinitely in time. According to the operational definition of observables, the particle's coordinate cannot be determined as a physical observable for the entire ensemble of scattered particles, since the experimenter would have to use an infinite-size device to measure it. Our next step is to show that the Schr\"{o}dinger representation is reducible in the distant past and in the distant future of this scattering process; there exists a superselection rule that restricts the action of the superposition principle in this process. \section{On the distant past and future of scattering a particle on a 1D potential barrier}\label{cont} Let $\hat{H}$ be the Hamiltonian describing the scattering of a particle on a 1D ``short-range" potential barrier $V(x)$ which is nonzero in the interval $[-a,a]$, and let $|\Psi_0\rangle$ be the initial state of the particle; without loss of generality we will assume that $V(x)$ is such that the particle has no bound states. According to quantum scattering theory \cite{Tay,Ree}, the initial state uniquely determines the in-asymptote $|\Psi_{in}\rangle=\hat{\Omega}_+^\dag |\Psi_0\rangle$, the out-asymptote $|\Psi_{out}\rangle=\hat{\Omega}_-^\dag |\Psi_0\rangle$ and the scattering state $|\Psi\rangle=e^{-i\hat{H}t/\hbar}|\Psi_0\rangle$ that ``interpolates" between them; here $\hat{\Omega}_{\mp}=\lim_{t\to \pm\infty} e^{i\hat{H}t/\hbar}e^{-i\hat{H}_0t/\hbar}$ are the in and out M{\o}ller wave operators associated with the limits $t\to -\infty$ and $t\to +\infty$, respectively; $\hat{H}_0$ is the free one-particle Hamiltonian. In this case, $|\Psi_{out}\rangle=\hat{S}|\Psi_{in}\rangle$, where $\hat{S}=\hat{\Omega}_{-}^\dag\hat{\Omega}_{+}$ is a linear unitary scattering operator. Two limiting cases of this scattering process are in the focus of our attention: the infinitely distant past, when the particle is still free and moving toward the barrier, and the infinitely distant future, when it is again free and moving away from the barrier. In our approach, the properties of the Hilbert space associated with these two limiting cases should play a key role in the quantum description of this scattering process. Note that in the distant past the particle can approach the barrier both from the left and from the right. Similarly, in the distant future, it can move away from the barrier both to the left and to the right. In both limiting cases, the probability of finding the particle in the barrier region $[-a,a]$ is zero. Let $|\Psi_{past}\rangle$ and $|\Psi_{future}\rangle$ denote the scattering state $|\Psi\rangle$ in the distant past and future, respectively. From the above it follows that, in the general case, we have to distinguish between the state vectors $|\Psi_{past}^{left}\rangle$ and $|\Psi_{past}^{right}\rangle$ to describe a particle that approaches the barrier from the left and from the right, respectively.
Similarly, we have to distinguish between the state vectors $|\Psi_{future}^{left}\rangle$ and $|\Psi_{future}^{right}\rangle$ to describe a particle moving away from the barrier to the left and to the right, respectively. Thus, assuming that all these state vectors are normalized, in the general case we have $|\Psi_{past}\rangle=a_p |\Psi_{past}^{left}\rangle+b_p |\Psi_{past}^{right}\rangle$ and $|\Psi_{future}\rangle=a_f |\Psi_{future}^{left}\rangle+b_f |\Psi_{future}^{right}\rangle$, where $|a_p|^2+|b_p|^2=1$ and $|a_f|^2+|b_f|^2=1$. We assume that the wave functions $\Psi_{past}^{left}(x,t)$ and $\Psi_{future}^{left}(x,t)$ belong, respectively, to the subspaces ${\cal{H}}_{past}^{left}$ and ${\cal{H}}_{future}^{left}$ of infinitely differentiable wave functions which are identically equal to zero in the interval $[-a,\infty)$ and tend exponentially to zero when $x\to -\infty$ and $x\to -a$. Similarly, $\Psi_{past}^{right}(x,t)$ and $\Psi_{future}^{right}(x,t)$ belong, respectively, to the subspaces ${\cal{H}}_{past}^{right}$ and ${\cal{H}}_{future}^{right}$ of infinitely differentiable functions which are equal to zero in the interval $(-\infty,a]$ and tend exponentially to zero when $x\to a$ and $x\to\infty$. It is evident that $\langle\Psi_{past}^{left}|\Psi_{past}^{right}\rangle=0$ and $\langle\Psi_{future}^{left}|\Psi_{future}^{right}\rangle=0$. And, since this is valid for all state vectors of these subspaces, ${\cal{H}}_{past}^{left}\bot{\cal{H}}_{past}^{right}$ and ${\cal{H}}_{future}^{left}\bot{\cal{H}}_{future}^{right}$. All this means that all four subspaces are invariant with respect to the action of the canonical operators $\hat{x}$ and $\hat{p}$ (see also Property B in the next section). As a consequence, the Hilbert space ${\cal{H}}_{past}$ of one-particle states in the distant past is the direct sum of the subspaces ${\cal{H}}_{past}^{left}$ and ${\cal{H}}_{past}^{right}$ of the states of a particle moving toward the barrier from the left and from the right, respectively: ${\cal{H}}_{past}={\cal{H}}_{past}^{left}\oplus {\cal{H}}_{past}^{right}$. Similarly, the Hilbert space ${
\cal{H}}_{future}$ of one-particle states in the distant future is the direct sum of the subspaces ${\cal{H}}_{future}^{left}$ and ${\cal{H}}_{future}^{right}$ of the states of a particle moving away from the barrier to the left and to the right, respectively: ${\cal{H}}_{future}={\cal{H}}_{future}^{left}\oplus {\cal{H}}_{future}^{right}$. (Note that, when the course of time is reversed, the subspaces corresponding to the ``past" and to the ``future" change places.) So, the Schr\"{o}dinger representation is reducible in the distant past and in the distant future of this scattering process! The unboundedness of the position operator is essential in this scattering process: this operator cannot be approximated by the corresponding Weyl exponential. From the viewpoint of the operational definition of observables, the particle's coordinate is not a measurable ``observable" for the whole ensemble of particles, both in the distant past and in the distant future of this ensemble (see also Property A in the next section). For example, let us consider this process in the limit $t\to\infty$ when $a_f\neq 0$ and $b_f\neq 0$. It is evident that in this case the distance between the wave packets $\Psi_{future}^{left}(x,t)$ and $\Psi_{future}^{right}(x,t)$ tends to infinity. Thus, in order to measure the coordinates of all scattered particles, described by these two wave packets, an experimentalist has to use a device of infinite size. Since this situation is non-physical, the coordinates of scattered particles localized to the left of the barrier and to the right of it should be measured separately, with the help of two different devices. A similar situation arises in the limit $t\to-\infty$ when $a_p\neq 0$ and $b_p\neq 0$. Our next step is to show that there is a superselection rule that restricts the validity of the superposition principle in the distant past and future of this scattering process. \section{Superselection rule for scattering a particle on a 1D potential barrier}\label{new} The following three properties point to the existence of this superselection rule. {\bf Property A: Any superposition of state vectors from ${\cal{H}}_{past}^{left}$ and ${\cal{H}}_{past}^{right}$ represents a mixed vector state, rather than a pure vector state. The same concerns the superposition of state vectors from ${\cal{H}}_{future}^{left}$ and ${\cal{H}}_{future}^{right}$.} Let us take the state of a particle in the distant future in the form \begin{eqnarray*} |\Psi_{future}\rangle= e^{i\alpha}\sqrt{W_{left}}\cdot|\Psi^{left}_{future}\rangle+e^{i\beta}\sqrt{W_{right}}\cdot |\Psi^{right}_{future}\rangle; \end{eqnarray*} here $0<W_{left}<1$ and $0<W_{right}<1$; $W_{left}+W_{right}=1$; $\alpha$ and $\beta$ are real constants. Since the wave functions $\Psi^{left}_{future}(x,t)$ and $\Psi^{right}_{future}(x,t)$ are localized in non-overlapping spatial regions, the cross terms $\langle\Psi_{future}^{left}|\hat{A}|\Psi_{future}^{right}\rangle$ vanish (see Property B below), so that for any self-adjoint operator $\hat{A}$ we have \begin{eqnarray} \label{2} \langle\Psi_{future}|\hat{A}|\Psi_{future}\rangle= W_{left}\cdot\langle\Psi_{future}^{left}|\hat{A}|\Psi_{future}^{left}\rangle + W_{right}\cdot\langle\Psi_{future}^{right}|\hat{A}|\Psi_{future}^{right}\rangle. \end{eqnarray} As is seen from Exp.~(\ref{2}), the average value of $A$ for the whole quantum ensemble of particles is the sum of the average values (multiplied by the corresponding weights $W_{left}$ and $W_{right}$) of this observable for the subensembles of particles moving on different sides of the barrier. Besides, Exp.
(\ref{2}) does not depend on the phases $\alpha$ and $\beta$, pointing to the fact that these phases are unobservable and the state $|\Psi_{future}\rangle$ represents, in effect, a superposition of mutually incoherent states. All this means that the state vector $|\Psi_{future}\rangle$ describes a mixed state, rather than a pure one. In order to distinguish mixed states described by state vectors from those described by density operators, we will call them ``mixed vector states". Accordingly, ``usual" pure states will be called ``pure vector states". {\bf Property B: Transitions between the subspaces ${\cal{H}}_{past}^{left}$ and ${\cal{H}}_{past}^{right}$ as well as between ${\cal{H}}_{future}^{left}$ and ${\cal{H}}_{future}^{right}$, under the action of the operator of any observable, are forbidden.} Indeed, for any self-adjoint operator $\hat{A}$ and state vectors $|\Psi^{left}_{past}\rangle$, $|\Psi^{right}_{past}\rangle$, $|\Psi^{left}_{future}\rangle$ and $|\Psi^{right}_{future}\rangle$ we have $\langle\Psi^{left}_{past}|\hat{A}|\Psi^{right}_{past}\rangle= \langle\Psi^{left}_{future}|\hat{A}|\Psi^{right}_{future}\rangle=0$. These equalities also mean that any self-adjoint operator $\hat{A}$ leaves these four subspaces invariant. {\bf Property C: There exists an operator $\hat{T}$ such that the subspaces ${\cal{H}}^{left}_{past}$, ${\cal{H}}^{left}_{future}$, ${\cal{H}}^{right}_{past}$ and ${\cal{H}}^{right}_{future}$ are its eigenspaces belonging to its point spectrum.} Indeed, let $\hat{T}=\hat{P}_r - \hat{P}_l$ where $\hat{P}_r$ and $\hat{P}_l$ are projection operators defined as multiplication operators in the $x$-representation: $\hat{P}_r=\theta(x+a)$ and $\hat{P}_l=\theta(a-x)$; $\theta(x)$ is the Heaviside step function. That is, in the Schr\"{o}dinger representation, $\hat{T}$ acts as multiplication by the function $\theta(x+a)-\theta(a-x)$. It is evident that \begin{eqnarray*} \hat{T}|\Psi^{right}_{past}\rangle=+|\Psi^{right}_{past}\rangle,\mbox{\hspace{3mm}} \hat{T}|\Psi^{right}_{future}\rangle=+|\Psi^{right}_{future}\rangle;\mbox{\hspace{5mm}} \hat{T}|\Psi^{left}_{past}\rangle=-|\Psi^{left}_{past}\rangle,\mbox{\hspace{3mm}} \hat{T}|\Psi^{left}_{future}\rangle=-|\Psi^{left}_{future}\rangle. \end{eqnarray*} Thus, the subspaces ${\cal{H}}^{right}_{past}$ and ${\cal{H}}^{right}_{future}$ are the eigenspaces of the operator $\hat{T}$ corresponding to its eigenvalue $+1$; the subspaces ${\cal{H}}^{left}_{past}$ and ${\cal{H}}^{left}_{future}$ are its eigenspaces corresponding to its eigenvalue $-1$. The existence of such properties gives grounds to say (see, e.g., \cite{Hor1} as well as \cite{Ear}) that in the Hilbert spaces ${\cal{H}}_{past}$ and ${\cal{H}}_{future}$ there acts a dichotomous superselection rule with the superselection operator $\hat{T}$; the role of superselection (coherent) sectors in ${\cal{H}}_{past}$ is played by its subspaces ${\cal{H}}^{left}_{past}$ and ${\cal{H}}^{right}_{past}$, while in ${\cal{H}}_{future}$ the superselection sectors are its subspaces ${\cal{H}}^{left}_{future}$ and ${\cal{H}}^{right}_{future}$. This superselection rule restricts the action of the superposition principle: \begin{itemize} \item[$\bullet$] A superposition of pure vector states from the same coherent sector is also a pure vector state from this sector. \item[$\bullet$] A superposition of pure vector states from different coherent sectors is a mixed vector state.
\end{itemize} As regards the physical origin of this rule, according to probability theory the different physical conditions (contexts) in the remote spatial regions on the different sides of the potential barrier ``prepare" {\it different} statistical ensembles of particles. As is stressed in \cite{Khr}, ``Two collectives of particles moving under two macroscopically distinct contexts form two different statistical ensembles"; and further, {\it ``probabilistic data generated by a few collectives\ldots cannot be described by a single Kolmogorov space" (ibid)} (see also \cite{Acc}). Thus, we have found a dichotomous-context-induced superselection rule. So far we have analyzed only the limiting cases $t\to -\infty$ and $t\to\infty$ of this scattering process. As regards its intermediate stage, in the course of scattering the Schr\"{o}dinger dynamics crosses the boundaries of the coherent sectors. Of importance is that, for the ensemble of particles impinging on the barrier from the left, the operator $\hat{S}$ transforms the pure vector state $|\Psi^{left}_{past}\rangle$ (which describes the ensemble of incident particles) into a superposition of the pure vector state $|\Psi^{left}_{future}\rangle$ (which describes the sub-ensemble of reflected particles) and the pure vector state $|\Psi^{right}_{future}\rangle$ (which describes that of transmitted particles). That is, in the course of this scattering process a pure vector state is transformed into a mixed vector state! Note that we deal with a {\it closed} quantum system; our approach does not use an environment or other external factors in order to induce such a state transformation. Note that the superselection rule allows the calculation of the expectation values of observables only for pure vector states. For example, for the case considered in the previous paragraph we may calculate such values for the asymptotic state $|\Psi^{left}_{past}\rangle$ as well as for $|\Psi^{left}_{future}\rangle$ and $|\Psi^{right}_{future}\rangle$, but not for their superposition $|\Psi^{left}_{future}\rangle+|\Psi^{right}_{future}\rangle$. All observables for the asymptotic states $|\Psi^{left}_{future}\rangle$, $|\Psi^{right}_{future}\rangle$ and $|\Psi^{left}_{past}\rangle$ can be measured directly, because they are not hidden by interference effects. Thus, we arrive at an unusual situation: (a) physical observables are quite allowed for the very state $|\Psi^{left}_{past}\rangle$, but they are forbidden for the states evolving from it in the course of scattering, because the final state of this evolution is the mixed state $|\Psi^{left}_{future}\rangle+|\Psi^{right}_{future}\rangle$; (b) observables are allowed both for the state $|\Psi^{right}_{future}\rangle$ and for its preceding states (which must exist, from considerations of the continuity of evolution over time), but these states are not described in the modern model of this scattering process; (c) observables are also allowed both for the state $|\Psi^{left}_{future}\rangle$ and (again, for continuity reasons) for its preceding states but, again, these states are unknown. So, all observables and characteristic times can be introduced only for the transmission and reflection sub-processes. But their dynamics is not described in the modern model of this process, which means that this model is not yet complete. It needs to be extended to describe the quantum dynamics of these two subprocesses at all stages of scattering.
Note also that, because of the interference between the subprocesses, all of their observables and characteristic times can be measured only indirectly. Obviously, this can be done only on the basis of the completed model of this process. \section{Conclusion} It is shown that the modern proof of the irreducibility of the Schr\"{o}dinger representation, based on the approximation of the usual (unbounded) canonical operators by the corresponding (bounded) Weyl exponentials, is valid only for one-particle phenomena in which the particle has a discrete energy spectrum. As for one-particle quantum phenomena with a continuous energy spectrum, here the unboundedness of the position operator plays an essential role and, as a consequence, this proof is erroneous. We considered the simplest process from this class -- scattering a particle on a 1D potential barrier -- and showed that there exists a superselection rule that restricts the action of the superposition principle in this process. By this rule, quantum mechanics must distinguish between pure vector states and mixed vector states. Otherwise, a logically consistent interpretation of vector states (wave functions), and of quantum mechanics itself, is impossible in principle. \section{Acknowledgments} I would like to thank Professor Bruno Nachtergaele for his useful remark on the Stone-von Neumann theorem, as well as Professor Karl Hess for his remark concerning some text typos in the paper arXiv:1412.7657.
\def\LabelFig#1#2{ \refstepcounter{figure}\count100=\thefigure \def\thefigure{\the\count100 #1}\label{#2} \addtocounter{figure}{-1}} \begin{document} \thesaurus{09(06.09.1;06.15.1;06.18.2;03.13.4)} \title{Inferring the equatorial solar tachocline from frequency splittings} \author{T. Corbard \and G. Berthomieu \and J. Provost \and P. Morel} \institute{Laboratoire G.-D. Cassini, CNRS UMR 6529, Observatoire de la C\^ote d'Azur, BP 4229, 06304 Nice Cedex 4, FRANCE } \offprints{T. Corbard} \date{Received 18 July 1997; accepted 24 October 1997} \maketitle \begin{abstract} Helioseismic inversions, carried out for several years on various ground-based and spatial observations, have shown that the solar rotation rate presents two principal regimes: a quasi-rigid rotation in the radiative interior and a latitude-dependent rotation in the whole convection zone. The thin layer, named the solar tachocline, between these two regimes is difficult to infer through inverse techniques because of the ill-posed nature of the problem, which requires regularization techniques that, in their global form, tend to smooth out any high gradient in the solution. Thus, most of the previous attempts to study the rotation profile of the solar tachocline have been carried out through forward modeling. In this work we show that some appropriate inverse techniques can also be used, and we compare the ability of three 1D inverse techniques, combined with two automatic strategies for the choice of the regularization parameter, to infer the solar tachocline profile in the equatorial plane. Our work, applied to the LOWL two-year dataset (LOWL is an abbreviation for low degree, denoted by L), argues in favor of a very sharp ($0.05\pm0.03R_\odot$) transition zone located at $0.695\pm0.005R_\odot$, which is in good agreement with the previous forward analyses carried out on Global Oscillations Network Group (GONG), Big Bear Solar Observatory (BBSO) and LOWL datasets. \keywords{Sun: interior -- Sun: oscillations -- Sun: rotation -- methods: numerical} \end{abstract} \section{Introduction} Helioseismic inversions of the solar p-mode frequencies split by rotation have shown that there is, at the base of the convection zone, a thin transition layer separating two regimes of rotation: a strong differential rotation in the convection zone and a quasi-rigid rotation in the radiative interior (e.g. Thompson et al. \cite{Science}; Corbard et al. \cite{meAA}). This layer, called the tachocline, is supposed to play an important role in the solar dynamo, in the transport of angular momentum and in the mixing of chemical elements. Its position $r_c$ and thickness $w$ give constraints on the theories describing its structure and evolution (Spiegel \& Zahn \cite{tacho1}; Gough \& Sekii \cite{tacho2}). Different estimations of these parameters have been obtained so far, mostly by using forward methods (Kosovichev \cite{koso}; Charbonneau et al. \cite{char}; Basu \cite{Basu}). The aim of this work is to test and compare the ability of some inversion methods to infer the location and the width of the solar tachocline, and then to apply these methods to helioseismic data. We compare three 1D least-squares methods. They differ essentially by the means used to regularize the ill-posed inverse problem of inferring the equatorial solar rotation rate from the observed frequency splittings.
The first method is the most commonly used Regularized Least-Squares (RLS) method with Tikhonov regularization (\cite{Tikhonov}), the second one is the Modified Truncated Singular Value Decomposition (MTSVD) introduced by Sekii and Shibahashi (\cite{MTSVD1}), which uses a regularization term of the same form but with a discrete truncation parameter instead of the continuous Tikhonov regularization parameter. The third method, introduced by Hansen \& Mosegaard (\cite{PPTSVD}), is called Piecewise Polynomial TSVD (PP-TSVD) and is a modification of the MTSVD method that can preserve discontinuities of the solution. In Sect.~\ref{sec:Model}, we briefly recall the inverse problem and define our parameterization of the tachocline. Section~\ref{sec:methods} presents the two strategies studied in this work for inferring the rapid variation of the rotation. We test these methods by inverting artificial data in Sect.~\ref{sec:tests} and then, in Sect.~\ref{sec:lowl}, we use this study in order to infer the location and thickness of the solar tachocline in the equatorial plane from data observed by the LOWL instrument (Tomczyk et al. \cite{Tomczyk}). \section{Direct analysis and parameterization of the tachocline} \label{sec:Model} Frequency splittings $\Delta\nu_{nlm}=\nu_{nlm}-\nu_{nl-m}$ between modes with the same radial order $n$ and degree $l$ but different azimuthal orders $m$ are induced by the solar rotation $\Omega(r,\theta)$, expressed as a function of the radius $r$ and colatitude $\theta$. For a slow rotation, assumed to be symmetric about the equator, and for moderate or high degree modes, these splittings are given by: \begin{equation}\label{eq:int2D} \Delta\nu_{nlm}\!=m\!\int_0^{{\pi\over 2}}\!\!\!\!\int_0^{R_\odot}\!\!\! K_{nl}(r)P_l^m(\cos\theta)^2 \ \Omega(r,\theta)\ \sin\theta\ dr\ d\theta, \end{equation} where $K_{nl}(r)$ are the so-called rotational kernels that can be calculated for each mode from a solar model (Morel et al. \cite{updated}). In the following, they are assumed to be known exactly. There exist additional terms that are not taken into account in Eq.~(\ref{eq:int2D}) but, as discussed in Corbard et al. (\cite{meAA}), they do not influence the inversion above $0.4R_\odot$. As the aim of this work is not to sound the rotation of the core, Eq.~(\ref{eq:int2D}) is a good approximation. $P_l^m(\cos\theta)$ are normalized Legendre functions. Their asymptotic properties lead, as discussed by Antia et al. (\cite{Antia}), to the following expression that shows the sectoral (i.e. $l=m$) mode splittings as weighted averages of the equatorial rotation rate $\Omega_{eq}(r)=\Omega(r,90^{\circ})$: \begin{equation}\label{eq:int} \Delta\nu_{nll}\simeq l\int_0^{R_\odot} K_{nl}(r)\ \Omega_{eq}(r)\ dr. \end{equation} We note that the validity of this 1D approximation is $l$-dependent. Indeed, the higher the degree, the more the latitudinal kernel $P_l^l(\cos\theta)^2\sin\theta$ is peaked at the equator. Following Charbonneau et al. (\cite{char}), we define the location and the width of the transition zone in the equatorial plane as the parameters $\hat r_c$ and $\hat w$, respectively, of the following $erf$ function which fits the rotation law in this plane: \begin{equation}\label{eq:erf} \Omega_{eq}(r)=\hat\Omega_0+{1\over 2}(\hat\Omega_1-\hat\Omega_0) \left(1+erf\left({r-\hat r_c\over0.5 \hat w}\right)\right). \end{equation} Here $\hat \Omega_0$ and $\hat \Omega_1$ represent the mean values of the rotation in the radiative interior and in the convection zone respectively.
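As an illustration of the parameterization of Eq.~(\ref{eq:erf}) and of the fit of an inverted profile by an $erf$ function, the short sketch below (written with standard scientific Python routines; it is only a schematic stand-in for our actual inversion and fitting codes, and the noise added here is a crude placeholder for the errors propagated through the inversion) recovers the four $erf$-parameters from a noisy equatorial profile, using the test values of Sect.~\ref{sec:tests}:

\begin{verbatim}
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def omega_eq(r, omega0, omega1, rc, w):
    # Equatorial rotation profile of Eq. (3): an erf step of width w at rc.
    return omega0 + 0.5 * (omega1 - omega0) * (1.0 + erf((r - rc) / (0.5 * w)))

r = np.linspace(0.4, 0.8, 50)                         # radius grid (units of R_sun)
true_profile = omega_eq(r, 425.0, 460.0, 0.69, 0.05)  # nHz, test values of Sect. 4
noisy = true_profile + np.random.normal(0.0, 1.0, r.size)  # crude error stand-in
popt, _ = curve_fit(omega_eq, r, noisy, p0=[420.0, 455.0, 0.7, 0.08])
omega0_bar, omega1_bar, rc_bar, w_bar = popt          # inferred erf-parameters
\end{verbatim}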
In order to compare different 1D inverse methods, we have built several sets of theoretical sectoral frequency splittings that correspond to different given rotation laws with fixed parameters $r_c$, $w$, $\Omega_0$, $\Omega_1$, to which a function of the colatitude is added in order to mimic the latitudinal differential rotation of the convection zone: \begin{equation}\label{eqn:law} \Omega(r,\theta)\!\!=\!\!\Omega_0\!+\!{1\over 2} (\Omega_1\!-\!A\!\cos^2\!\!\theta\!-\!B\!\cos^4\!\!\theta-\Omega_0)\!\! \left(\!\!1\!\!+erf\!\!\left(\!\!{r-r_c\over 0.5w}\!\!\right)\!\right) \end{equation} Evidently, for any choice of the constants $A$ and $B$, the parameters to be recovered for these rotation laws are $\hat r_c=r_c$, $\hat{w} =w$, $\hat\Omega_0=\Omega_0$ and $\hat\Omega_1=\Omega_1$. We compute the splittings $\Delta\nu_{nll}$ from Eq.~(\ref{eq:int2D}) for a set of modes corresponding to the set of LOWL data used in Corbard et al. (1997) and we add a normally distributed noise $\delta\nu_{nll}\in {\cal N}(0,\sigma_{nl})$. For each mode $(n,l)$ the standard deviation of the noise $\sigma_{nl}$ has been taken equal to: \begin{equation}\label{eq:err} \sigma_{nl}={\bar\sigma_{nl}\over\sqrt{k_\sigma}}, \end{equation} where $\bar\sigma_{nl}$ is the error derived from the observers' uncertainties for a splitting $\Delta\nu_{nll}$, and $k_\sigma$ is an integer used to vary the level of the noise that we introduce in the data. In doing this, we take into account the fact that the error on the observed splitting varies with the frequency and the degree of the mode, which is certainly more realistic than taking the same average standard deviation for all the modes. From those noisy splittings, the equatorial rotation profile is obtained by inverting Eq.~(\ref{eq:int}), and this profile is then fitted by the $erf$ function of Eq.~(\ref{eq:erf}), leading to the parameters $\bar r_c$, $\bar w$, $\bar\Omega_0$, $\bar\Omega_1$ which will be compared to the initial parameters. \section{Strategies for inferring rapid variations of the rotation} \label{sec:methods} The three inverse methods used in this work are detailed in Appendix \ref{app:methods}. They all use a grid of $50$ points in radius distributed according to the density of the turning points of the observed modes. The most important difficulty in inferring the thickness of the tachocline with inverse methods results from the fact that solving Eq.~(\ref{eq:int}) is an ill-posed problem; this is strengthened by the fact that the rotational kernels give redundant information about the outer part of the Sun whereas they have only low amplitude in the solar core for the observed mode set. Numerically, this produces a high value of the condition number (defined as the maximum singular value divided by the smallest singular value) of the discretized problem Eq.~(\ref{eq:discret}) (typically $\Lambda_{max}/\Lambda_{min} \simeq 2\times 10^8$ in our implementation) and the singular values decay rapidly. This high value of the condition number means that the solution of the initial problem is highly sensitive to the numerical errors and the noise contained in the data. Therefore we have to introduce some a priori knowledge of the rotation profile. Unfortunately this regularization tends to smooth out every rapid variation in the solution.
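The role played by the small singular values, and the way a global regularization damps them, can be illustrated schematically with a standard-form Tikhonov solution written through the SVD of the discretized kernel matrix (this is only an illustration with an identity smoothing operator, not necessarily the exact regularization term detailed in Appendix~\ref{app:methods}):

\begin{verbatim}
import numpy as np

def tikhonov_solution(A, b, lam):
    # Standard-form Tikhonov solution written through the SVD of the
    # discretized kernel matrix A.  Small singular values are damped by
    # the filter factors s**2 / (s**2 + lam**2); this damping is what
    # smooths the solution.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filter_factors = s**2 / (s**2 + lam**2)
    return Vt.T @ (filter_factors * (U.T @ b) / s)

# The condition number quoted in the text is simply s.max() / s.min().
\end{verbatim}

The filter factors $\Lambda_i^2/(\Lambda_i^2+\lambda^2)$ show explicitly that increasing $\lambda$ suppresses the components associated with small singular values, which is precisely what smooths out any rapid variation such as the tachocline.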
By using a global regularization, we make the implicit assumption that the real rotation is smooth everywhere, and therefore the information about the thickness of a rapid variation of the rotation profile is not directly readable from the solutions obtained by classic inversions. There are however several ways of overcoming these difficulties. \subsection{Local deconvolution of the result obtained from linear inversions: the use of averaging kernels} The first way is to have a good understanding of the process by which the inversion smoothes the solution: using this information, we may be able to invert this process and to acquire a more realistic view of the rotation. This is what Charbonneau et al. (\cite{char}) have done in combination with the so-called Subtractive Optimally Localized Averages (SOLA) method (Pijpers \& Thompson \cite{SOLA1}, \cite{SOLA2}). This can be generalized to any linear inversion such as the RLS method used in this work. The solution $\bar\Omega(r_0)$ obtained at a target location $r_0$ can be viewed as a weighted average of the `true rotation' $\Omega(r)$, the weighting function being the averaging kernel $\kappa(r,r_0)$ that can always be estimated at any $r_0$: \begin{equation}\label{eq:avk} \bar\Omega(r_0)=\int_0^{R_\odot}\kappa(r,r_0)\Omega(r)\ dr. \end{equation} If we suppose that the averaging kernels obtained at any depth can be approximated by a translation of the averaging kernel obtained at the middle of the transition, i.e. $\kappa(r,\hat r_c)$, then we can define $\kappa_c$ by $\kappa_c(r-\hat r_c)\equiv\kappa(r,\hat r_c)$ and Eq.~(\ref{eq:avk}) reduces to a convolution equation: \begin{equation} \bar\Omega(r_0)=\!\!\int_0^{R_\odot}\!\!\kappa_c(r-r_0)\Omega(r)\ dr \Leftrightarrow \bar\Omega(r)=\kappa_c(r)*\Omega(r) \end{equation} Finally, if the `true rotation' can be well approximated by an $erf$ function of the form given by Eq.~(\ref{eq:erf}), and if we approximate the kernel $ \kappa_c(r-r_0)$ by a Gaussian function of the form: \begin{equation} \kappa_c(r-r_0)\simeq \exp\left[-(r-r_0)^2/\Delta_r^2\right], \end{equation} then the inferred solution is also an $erf$ function of the form of Eq.~(\ref{eq:erf}) but with a larger width $\bar{w}$. A simple deconvolution gives the following relation between the searched width $\hat{w}$ and the inferred width $\bar{w}$: \begin{equation}\label{eq:correction} \hat{w}=\bar{w}_c\equiv\sqrt{{\bar{w}}^2-4\Delta_r^2}, \end{equation} which defines the corrected inferred width $\bar{w}_c$. This result is valid only under a large number of assumptions that may be quite distant from reality. In particular, the reduction to a convolution form is certainly not exact because of the extent of the averaging kernels, which tends to increase rapidly toward the solar core. Moreover, the profile of the rotation rate may be much more complicated than a simple $erf$ function. However, the tachocline is thin and the averaging kernels have nearly the same profile over its whole extent. Thus this is certainly a good approach to get a quantitative idea of how the inversion enlarges the `true rotation' transition. We note that if we obtain $\Delta_r>\bar{w}/2$ this certainly means that some of the previous assumptions are not valid. In this work, we have applied this `deconvolution method' to the solutions obtained by Tikhonov inversions computed as explained in Appendix \ref{app:Ti}. We estimate that this cannot be done for the MTSVD method because the corresponding averaging kernels are less well peaked and exhibit a more oscillatory behavior (see Fig.
\ref{fig:avk} hereafter). \subsection{Non-linear regularization} The second way to estimate the location and thickness of the tachocline is to build inverse methods that are capable of producing solutions with steep gradients. The idea is to apply a local regularization instead of the global Tikhonov regularization term. This leads to a non-linear problem and piecewise smooth solutions. This approach has recently found useful applications in image processing for edge-preserving regularization (Aubert et al. \cite{i3s}) and total variation (TV) denoising (Vogel \& Oman \cite{Vogel1}, \cite{Vogel2}). In particular, the TV of $f$ is defined as the 1-norm of the first derivative of $f$, and this is the definition of smoothness that we use in the PP-TSVD inverse method. Therefore, the results obtained by this method, detailed in Appendix \ref{app:M-PPTSVD}, represent a first attempt to use this class of inversion with non-linear regularization on helioseismic data. \section{Tests with artificial data: results and discussion}\label{sec:tests} \subsection{The key: how to choose regularization parameters}\label{sec:key} Whichever regularized inverse method we use, a very important point is the choice of the regularization parameter, which can be a discrete truncation parameter $k$ (MTSVD, PP-TSVD, Eq.~(\ref{eq:tsvd})) or a continuous parameter $\lambda$ (Tikhonov, Eq.~(\ref{eq:Ti})). This choice is especially important if we want to infer a quantity like the width of a zone with high gradients, which is directly affected by the regularization. Several methods for choosing the regularization parameter have been proposed that tend to establish a balance between the propagation of input errors and the regularization (see e.g. Badeva \& Morozov (\cite{Morozov}), Thompson \& Craig (\cite{ThompsonAM2}) and Hansen (\cite{L-curve}, \cite{HansenTools}) for a general review, and Thompson (\cite{ThompsonAM1}), Barrett (\cite{Barrett}) and Stepanov \& Christensen-Dalsgaard (\cite{Stepanov}) for applications in helioseismic inversions). In this work we test and compare the ability of two of these automatic strategies, namely the L-curve criterion (Hansen \cite{L-curve}) and the Generalized Cross Validation (GCV) criterion (Wahba \cite{Wahba}; Golub et al. \cite{Golub}), to reproduce a good estimation of the tachocline profile from noisy data. The importance of the choice of the regularization parameter can be illustrated by the following figures (Figs. \ref{fig:good}, \ref{fig:bad}, \ref{fig:goodk}, \ref{fig:badk}) where the results of the fit of the solution by an $erf$ function are plotted as a function of the regularization parameter. \begin{figure} \LabelFig{a-d}{fig:good} \psscalefirst \centerline{ \psfig{figure=6651f1.ps,height=8cm,angle=-90}} \caption[]{Inferred parameters $\bar\Omega_0$, $\bar\Omega_1$, $\bar r_c$, $\bar{w}$ and corrected inferred parameter $\bar w_c$ against the logarithm of the Tikhonov regularization parameter $\lambda$. Error bars result from the fit of the solution by an $erf$ function, taking into account the propagation of noise through the inverse process but not the existing correlations between the results obtained at two different radii. The initial parameters are indicated by dashed lines. The GCV and L-curve choices are shown by the full star and the circle respectively.
The input rotation law was not dependent on the latitude ($A=B=0$) and the level of noise was small ($k_\sigma=10$).} \end{figure} Figure \ref{fig:good} represents the variation of the four $erf$-parameters $\bar\Omega_0$, $\bar\Omega_1$, $\bar r_c$ and $\bar{w}$ deduced from a Tikhonov inversion as a function of the logarithm of the regularization parameter. The four initial parameters were $\Omega_0=425$ nHz, $\Omega_1=460$ nHz, $r_c=0.69R_\odot$ and $w=0.05R_\odot$. In this case, called the `ideal case' in the following, the added errors were small ($k_\sigma$=10) and the initial rotation law was not dependent on the latitude ($A=B=0$). The choices given by the L-curve and GCV strategies are shown by the full star and the circle respectively. In addition we have plotted the corrected inferred width $\bar{w}_c$ given by Eq.~(\ref{eq:correction}) and computed by systematically calculating the averaging kernel at $r_0=\bar r_c$ (as shown in Fig. \ref{fig:avk}a for the GCV choice). The GCV criterion always leads to a lower regularization than the L-curve choice and thus tends to reduce the smoothing of the solution. In most of our tests, as in Figs. \ref{fig:good}a, c, d, the GCV choice corresponds to a point where the errors deduced from the fit become small, whereas the L-curve criterion gives a point beyond which a rapid variation of the fitted parameters with increasing regularization occurs. The fact that the values of the fitted parameters are nearly constant between these two points shows that, for this level of noise, the method is robust in the sense that the choice of the precise value of the regularization parameter is not a crucial point: any choice that tends to establish a balance between the propagation of input errors and the regularization is able to produce good results. Let us now look at the behavior of this method for a more realistic example. For this we take a level of noise similar to the one given by the observers ($k_\sigma=1$) and we build frequency splittings of sectoral modes by taking into account a latitudinal variation of the rotation rate in the convection zone close to that derived by 2D inversions. We have set $A=55$ nHz and $B=75$ nHz, which are mean values derived from observations of the plasma motion at the solar surface (Snodgrass \& Ulrich \cite{Snod:Ul}). This choice of the input rotation law and errors is referred to as the `realistic case' in the following. Eq.~(\ref{eq:int2D}) with $m=l$ has been used to compute the frequency splittings of sectoral modes, and 1D Tikhonov inversions have been performed again in order to infer the equatorial rotation rate from Eq.~(\ref{eq:int}). \begin{figure} \LabelFig{a-d}{fig:bad} \psscalefirst \centerline{ \psfig{figure=6651f2.ps,height=8cm,angle=-90}} \caption[]{The same as in Fig \ref{fig:good} but with more realistic input errors ($k_\sigma=1$) and an input rotation profile with a latitudinal variation in the convection zone ($A=55$ nHz, $B=75$ nHz).} \end{figure} Figure \ref{fig:bad} represents the results of these inversions in the same form as Fig. \ref{fig:good} and for the same initial $erf$-parameters. There are two essential points to be seen in this figure. The parameter $\Omega_0$ in Fig. \ref{fig:bad}a is systematically under-estimated by about $4$ nHz. A detailed analysis shows that this effect is strongly related to the introduction of a latitudinal variation of the rotation rate in the convection zone.
The assumption, used in the 1D inversions, that sectoral modes are sensitive only to the equatorial component of the rotation rate is not valid for low degree $l$ modes (e.g. Antia et al. \cite{Antia}), and these modes sound the deep interior. This may explain some perturbation in the determination of the parameter $\Omega_0$, which represents the mean value of the rotation rate in the radiative interior. The difference between splittings of sectoral modes computed from Eq.~(\ref{eq:int}) and Eq.~(\ref{eq:int2D}) is below 1 nHz for the observed modes having their turning points above $0.4R_\odot$. The large resulting difference in $\Omega_0$ is due to the fact that high $l$ sectoral modes see only the equatorial rotation rate and thus fix the inferred value $\bar\Omega_1$ equal (or nearly equal, as in Fig. \ref{fig:bad}b) to the initial value $\Omega_1$, while lower degree sectoral modes are sensitive to the differential rotation of the convection zone, and this effect can only be accounted for in the inverse rotation law by a substantial lowering of $\bar\Omega_0$. Furthermore we have checked that two rotation laws with the same $\Omega_1$ but with a difference of $4$ nHz in $\Omega_0$, and two rotation laws with the same $\Omega_0$ but with or without latitudinal variation in the convection zone, induce a difference of the same order in the sectoral-mode frequency splittings. The second important point is that, in Figs. \ref{fig:bad}c, d, the estimation $\bar{w}$ of the width of the tachocline increases rapidly between the GCV and the L-curve points whereas its location $\bar r_c$ decreases rapidly from $0.688R_\odot$ down to $0.674R_\odot$. As in Fig. \ref{fig:good}d, the deconvolution made by using averaging kernels tends to correct this behavior for the estimation of the width but, in this case, the GCV choice remains over-estimated by about $0.015R_\odot$ and the L-curve choice is still very distant from the initial value. Tests made with different input parameters
show that, as in Figs. \ref{fig:bad}c, d and for that level of noise, the GCV choice is always better than the L-curve choice for the estimation of the location and the width of the tachocline. This point will be illustrated and discussed in the next section for the estimation of widths between $0.03$ and $0.11R_\odot$. \begin{figure} \LabelFig{a-d}{fig:goodk} \psscalefirst \centerline{ \psfig{figure=6651f3.ps,height=8cm,angle=-90}} \caption[]{The same as in Fig. \ref{fig:good} (`ideal case') but for the MTSVD (full line) and PP-TSVD (dashed line) methods and against the truncation parameter $k$. The L-curve choice for the MTSVD method is outside the plot on panel b.} \end{figure} \begin{figure} \LabelFig{a-d}{fig:badk} \psscalefirst \centerline{ \psfig{figure=6651f4.ps,height=8cm,angle=-90}} \caption[]{The same as in Fig. \ref{fig:bad} (`realistic case') but for the MTSVD (full line) and PP-TSVD (dashed line) methods and against the truncation parameter $k$. The L-curve choice for the MTSVD method is outside the plot on panels b, c and d.} \end{figure} Similar figures (Figs. \ref{fig:goodk}, \ref{fig:badk}) can be plotted for the MTSVD and PP-TSVD methods, where the continuous regularization parameter is replaced by the discrete truncation parameter. Results obtained in the `realistic case' (Fig. \ref{fig:badk}) again have a larger dispersion and exhibit the same systematic deviation for the determination of $\Omega_0$. Another interesting point is that, as shown in Figs. \ref{fig:goodk}d, \ref{fig:badk}d and also in the next section, the PP-TSVD method tends to give an under-estimation of the width whereas the MTSVD method tends to give an over-estimation of this parameter. This may be very useful in order to bracket the true width. For these two methods, the choice of the optimal truncation parameter $k$ through the L-curve criterion requires the evaluation of the curvature of the discrete L-curve. This can be done carefully by an appropriate 2D curve fitting. Nevertheless our experience shows that it is difficult to do this systematically with the same fit procedure for any level of noise and input rotation law. Furthermore, even when this is done carefully, this choice leads to results for the tachocline profile that are always worse than the ones obtained from the GCV choice. Thus, in the following, results are shown only with the GCV criterion for the MTSVD and PP-TSVD methods. \begin{figure} \LabelFig{a-c}{fig:sol} \psscalefirst \centerline{ \psfig{figure=6651f5.ps,height=8cm,angle=-90}} \caption[]{Solutions obtained between $0.4$ and $0.8R_\odot$ from the three inverse methods with the GCV choice of regularization parameters. The input rotation law was the same as in Figs. \ref{fig:bad}, \ref{fig:badk} (`realistic case'). The equatorial component of the initial law is shown by the dashed line whereas the fits of the inverse solutions are shown by full lines.} \end{figure} Figure \ref{fig:sol} shows the solutions obtained from the three methods with the GCV choices indicated in Figs. \ref{fig:bad} and \ref{fig:badk}. The error bars on the PP-TSVD method (Fig. \ref{fig:sol}c) were obtained by assuming that the method is linear, i.e. the dependence of $\vec{H}$ (defined in Eq.~(\ref{eq:non_lineaire})) on the data vector $\vec W$ is neglected. This is indeed not the case, and a Monte-Carlo approach for estimating the errors may be more realistic. We note however that the two other methods (Tikhonov and MTSVD) are linear only for a given regularization parameter.
Since this parameter is chosen through automatic strategies, it also depends on the data. Thus, strictly speaking, these methods are also non-linear methods. Nevertheless, the automatic choices are built so that they are not too sensitive to small changes in the data, and this justifies the linear approximation. \begin{figure} \LabelFig{a and b}{fig:avk} \psscalefirst \centerline{ \psfig{figure=6651f6.ps,height=6cm,width=6cm,angle=-90}} \caption[]{Averaging kernels computed at $r_0=\bar r_c$. For the Tikhonov method the dashed line represents the Gaussian approximation of the kernel used for the local deconvolution of the solution shown in Fig. \ref{fig:sol}a.} \end{figure} The corresponding averaging kernels computed at $r=\bar r_c$ (Fig. \ref{fig:avk}) show that, whereas the Gaussian approximation is rather good for the Tikhonov method, the large oscillations in the convection zone obtained for the MTSVD method make the use of a local deconvolution difficult in that case. \subsection{Tests for widths between $0.03$ and $0.11$ $R_\odot$} An important point is to test the ability of a method to give a good estimation of the $erf$-parameters for a large domain of variation of the width of the tachocline. We first study in Fig. \ref{fig:comp} the behavior of the different methods and automatic strategies between the `ideal case' and the `realistic case' for one realization of the input errors. Then, in Fig. \ref{fig:histo500}, we have carried out a Monte-Carlo approach in order to have a better estimation of the errors on the widths deduced from the fit of the solutions for the `realistic case'. \begin{figure} \LabelFig{a-c}{fig:comp} \psscalefirst \centerline{ \psfig{figure=6651f7.ps,height=11cm,width=6cm,angle=-90}} \caption[]{Difference between the inferred width and the initial width ($\delta w=\bar{w}-w$) against the initial width for the PP-TSVD (triangles) and MTSVD (circles) methods, both computed with the GCV choice for the truncation parameter. Squares are for the Tikhonov method with the GCV criterion (full line) and the L-curve criterion (dashed line). For this latter method we plot the difference between the corrected inferred width and the initial width ($\delta w=\bar{w}_c-w$). {\bf a} $k_\sigma=10$, $A=B=0$ as in Figs. \ref{fig:good}, \ref{fig:goodk} (`ideal case'); {\bf b} $k_\sigma=1$, $A=B=0$; {\bf c} $k_\sigma=1$, $A=55$, $B=75$ as in Figs. \ref{fig:bad}, \ref{fig:badk} (`realistic case')} \end{figure} Figure \ref{fig:comp} shows the inferred width $\bar{w}$ (for the MTSVD and PP-TSVD methods) and the corrected inferred width $\bar{w}_c$ (for the Tikhonov method) as functions of the initial width $w$ and for one realization of the input errors. Figure \ref{fig:comp}a represents the same example as Figs.~\ref{fig:good}, \ref{fig:goodk} (`ideal case'), in Fig. \ref{fig:comp}b we increase the level of noise ($k_\sigma=1$), and finally we set an input rotation law with a latitudinal dependence in the convection zone so that Fig. \ref{fig:comp}c corresponds to the same example as Figs. \ref{fig:bad}, \ref{fig:badk} (`realistic case'). In Fig. \ref{fig:comp}a, the results for $\bar{w}$ fit the real value within $0.02R_\odot$ except for PP-TSVD and widths above $0.09R_\odot$, and the two regularization procedures (L-curve and GCV) give almost the same result.
The comparison of Figs.~\ref{fig:comp}a and \ref{fig:comp}b clearly indicates that the results obtained for the Tikhonov method with the L-curve criterion (dashed curves) are very sensitive to the level of noise and are not adapted to the actual errors of observed data. The deconvolution method using Tikhonov inversion with the GCV criterion appears to be the least sensitive to the noise level and the most stable for widths between $0.03$ and $0.11R_\odot$. We see again that the results obtained from MTSVD and PP-TSVD lead respectively to an over-estimation and an under-estimation of the real width. Figure~\ref{fig:comp}c illustrates the effect of a latitudinal dependence of the rotation in the convection zone: an increasing over-estimation of $w$ from the Tikhonov method with GCV criterion and a generally larger dispersion of the results. \begin{figure} \LabelFig{}{fig:histo500} \psscalefirst \centerline{ \psfig{figure=6651f8.ps,height=8cm,angle=-90}} \caption[]{The same as in Fig. \ref{fig:comp}c (`realistic case') but each point is the mean value of the results obtained for 500 realizations of input errors. Error bars represent a $68.3\%$ confidence interval on $w$.} \end{figure} In Fig.~\ref{fig:histo500}, we have performed $500$ realizations of input errors for each initial width, and each point shown in this figure represents the mean value of the $500$ inferred or corrected inferred widths for a given initial width and a given method. Error bars represent a $68.3\%$ confidence interval which contains the $341$ inferred widths nearest to the mean value, but they are not necessarily symmetric around this value. This study shows that the Tikhonov and PP-TSVD methods with the GCV criterion are the most reliable for estimating the width in the most realistic case. They lead, respectively, to an over-estimation and an under-estimation of the width of at most about $0.01R_\odot$ for initial widths between $0.03R_\odot$ and $0.11R_\odot$. In that range, the standard deviation obtained for $500$ realizations of input errors is around $0.02R_\odot$ for the Tikhonov method and much larger (up to $0.05R_\odot$ for $w=0.11R_\odot$) for the PP-TSVD method, which therefore appears to be well adapted only to the inference of very sharp transitions. Let $\omega_i$ represent the widths deduced from $N_r$ hypothetical (non-observed) realizations of the unknown true width $\hat\omega$. In the Monte-Carlo method we suppose that we can approximate the distribution of $(\hat\omega-\omega_i,\ i=1,..,N_r)$ by the distribution of $(\omega_o-\tilde\omega_i,\ i=1,..,N_r)$, where $\omega_o$ is the width deduced from the observed dataset and $\tilde\omega_i$ are the widths deduced from datasets built by setting $\hat\omega=\omega_o$ in the model. Since we cannot ensure that $\omega_o$ is very close to $\hat\omega$, the underlying assumption is that, in the range of uncertainty concerning $\hat\omega$ (say $0.03-0.11R_\odot$), the way in which errors propagate through the inverse process does not vary rapidly (see e.g. Press et al. \cite{NumRec}). The fact that, in Fig. \ref{fig:histo500}, the error bars grow rapidly with the initial width for the PP-TSVD method makes it difficult to use the Monte-Carlo results to estimate the statistical behavior of this method. There are nevertheless two factors that may introduce bias in these estimations of the errors on the inferred widths. First, the existing correlations between the inferred rotation values obtained at two different radii are not taken into account in the fit of the solution by an $erf$-function.
Secondly, for the PP-TSVD method, the non-linearity of the method is not taken into account in the estimation of the propagation of noise through the inverse process. Making the fit in the right way, i.e. taking into account correlations, may lead to a lower dispersion of the results and then our estimation of the error on the inferred widths may be over-estimated. Nevertheless, the effects of these two approximations are not easy to estimate a priori and need a more complete analysis in future work. \begin{table*} \caption[]{Inferred $erf$-parameters obtained from LOWL data. The L-curve criterion has not been used for methods with discrete truncation parameters.} \begin{flushleft} \begin{tabular}{llllllll} \hline\noalign{\smallskip} Methods & \multicolumn{2}{l}{$\bar\Omega_0$ (nHz)} & &\multicolumn{2}{l}{$\bar\Omega_1$ (nHz)}& $\bar r_c/R_\odot$ & $\bar w_{(c)}/R_\odot$ \\ \noalign{\smallskip} \cline{2-3}\cline{5-6} \noalign{\smallskip} & GCV & L-curve & & GCV & L-curve & GCV & GCV \\ \noalign{\smallskip} \hline \noalign{\smallskip} Tikhonov & $429.3\pm 0.5$ & $427.9\pm 0.3$ & &$457.7\pm 0.3$ & $460.4\pm 0.4$ & $0.693\pm 0.002$ & $0.067\pm 0.010$ \\ MTSVD & $429.4\pm 0.7$ & - & &$457.0\pm 0.5$ & - &$0.693\pm 0.003$ & $0.062\pm 0.009$ \\ PP-TSVD & $429.6\pm 0.2$ & - & &$456.4\pm 0.3$ & - &$0.693\pm 0.009$ & $0.031\pm 0.017$ \\ \noalign{\smallskip} \hline \end{tabular} \end{flushleft} \end{table*} \section{Results for LOWL data}\label{sec:lowl} \begin{figure} \LabelFig{}{fig:sol_lowl} \psscalefirst \centerline{ \psfig{figure=6651f9.ps,height=8cm,angle=-90}} \caption[]{Equatorial tachocline profiles obtained from LOWL data by PP-TSVD (triangles) and Tikhonov (squares) methods with GCV criterion. Error bars represent the $1\sigma$ errors estimated on the solution by assuming the linearity of the inversions. The full and dashed curves represent respectively the fit of the PP-TSVD and Tikhonov solutions by an $erf$-function between $0.4$ and $0.8R_\odot$.} \end{figure} This section gives the results obtained from the two years (2/26/94-2/25/96) observations by the LOWL instrument in Hawaii (Tomczyk et al. \cite{Tomczyk}; Corbard et al. \cite{meAA}). These data contain $1102$ modes with degrees up to $l=99$ and frequencies between $1200$ and $3500$ $\mu$Hz. For each mode $(n,l)$, individual splittings are given by, at best, five a-coefficients of their expansion on orthogonal polynomials defined by Schou et al. (\cite{SCDT}). For this work, we assume that the previous simulations provide an estimation of the bias introduced by the methods and we use these values in order to correct the inferred tachocline parameters. This supposes the closeness of the model used in the simulation to the reality and a good estimation of the errors in the data. Furthermore, we use the sum of odd a-coefficients as a first approximation for the sectoral splittings i.e. $\Delta\nu_{nll}\simeq a^{nl}_1+a^{nl}_3+a^{nl}_5$. This approximation is exact for all the rotation laws such that $a^{nl}_{2j+1}=0\ \forall\ j>2$ (which is the case for the rotation laws Eq. \ref{eqn:law} used in our model). When this is not the case the latitudinal kernel associated to $a^{nl}_1+a^{nl}_3+a^{nl}_5$ is less peaked at the equator than the one associated to the sectoral splittings (i.e. $P_l^l(\cos\theta)^2\sin\theta$, see Sect.~\ref{sec:Model}) and thus $\hat\Omega_1$ represents a latitudinal average of the rotation in a larger domain around the equator. 
However the kernel associated to the sum of three a-coefficients is less $l$-dependent. Results obtained by the three methods are summarized in Table 1. They are in very good agreement for the location of the tachocline and the mean values of the rotation rate in the radiative interior and convection zone but more dispersive concerning the determination of the width. The tests discussed above have shown that this may be related to the level of noise contained in the data. The equatorial tachocline profiles obtained by Tikhonov and PP-TSVD methods with GCV criterion are shown in Fig. \ref{fig:sol_lowl}. According to the previous sections, we use the GCV choice in order to infer the location and the width of the equatorial tachocline. Nevertheless, for $\Omega_0$ and $\Omega_1$ the L-curve choices may be useful in order to see the amplitude of the variation of the inferred parameters against the regularization parameter. The errors cited in this Table are just the result of the fit of the solution by the $erf$-function. The variation of the inferred $erf$ parameters against the regularization, as shown by Fig. \ref{fig:lowlreg} for the Tikhonov method, and the previous Monte-Carlo simulations can help us to estimate error bars that may be more realistic. \begin{figure} \LabelFig{}{fig:lowlreg} \psscalefirst \centerline{ \psfig{figure=6651f10.ps,height=8cm,angle=-90}} \caption[]{ Variation of the inferred parameters $\bar\Omega_0, \bar\Omega_1, \bar r_c,\bar{w}$ and $\bar{w}_c$ as a function of the logarithm of the regularization parameter for the Tikhonov inversion of LOWL data. Graph markers have the same meaning as in Fig. \ref{fig:good}. The L-curve choice of $\bar{w}$ is outside the plot on panel d.} \end{figure} Figure \ref{fig:lowlreg}a shows that the evaluation of the mean value of the rotation rate in the radiative interior ($\hat\Omega_0$) is not much sensitive to the regularization. Nevertheless, we have shown in Sect.~\ref{sec:key} that this parameter tends to be systematically under-estimated of about $4$ nHz because of the influence of the latitudinal variation of the rotation in the convection zone on the low $l$ sectoral splittings. For the sum $a_1^{nl}+a_3^{nl}+a_5^{nl}$ the latitudinal kernel is less $l$-dependent so that this systematic offset may be smaller than $4$ nHz. We take this effect into account by increasing the estimation of the error and our final interval for this parameter becomes: $427.5\le\hat\Omega_0\le434.5$ nHz. The mean value of the equatorial rotation rate in the convection zone is less subject to systematic errors but may be under-estimated by the GCV choice (c.f. Figs. \ref{fig:bad}b, \ref{fig:badk}b, \ref{fig:sol}). The difference between the GCV choice and the L-curve choice is about $3$ nHz on Fig. \ref{fig:lowlreg}. Thus we estimate that $\hat\Omega_1=459.0\pm 1.5$ nHz. We note that we do not attempt to use the points of the solution found under $0.4R_\odot$ or above $0.8R_\odot$ (c.f. Fig. \ref{fig:sol_lowl}). Therefore $\hat \Omega_1$ does not take into account the eventual rapid variation of the rotation near the surface or at $0.9R_\odot$ (Antia et al. \cite{Antia}) and $\hat \Omega_0$ is not sensitive to the core rotation. The ratio $q=\hat\Omega_0/\hat\Omega_1$ obtained from helioseismic data is an important test for the theories of the tachocline dynamics. Spiegel and Zahn's (\cite{tacho1}) theory leads to $q=0.90$ whereas Gough's (\cite{Gough85}) one leads to $q=0.96$. 
Our results give $0.93<q<0.95$ which is intermediate between the two theoretical estimates. Similar results have already been pointed out by Gough \& Sekii (\cite{tacho2}). For the estimation of $\hat r_c$, we find in Fig. \ref{fig:lowlreg} that the L-curve criterion leads to a lower value than the GCV criterion as we had found in Fig. \ref{fig:bad}. As discussed in Sect.~\ref{sec:key}, we think that the GCV choice is more reliable but may lead to an under-estimation of about $0.002R_\odot$. Therefore our final estimation for the location of the center of the tachocline in the equatorial plane is: $\hat r_c=0.695\pm 0.005R_\odot$. This value, estimated in the equatorial plane, is intermediate between the two values previously obtained by forward methods (c.f. Table 2). We note however that whereas our work just look for the equatorial component of the tachocline, the previous works assume that the solar tachocline presents the same profile at any latitude. This may lead to bias if, as suggested by Charbonneau et al. (\cite{char}) from LOWL data, the tachocline is prolate i.e. is located deeper at the equator than at higher latitudes. The tests discussed in the previous sections show that the L-curve choice is not reliable for the estimation of the width and suggest three ways for estimating the width of the tachocline from GCV criterion: First, the true value is supposed to lie between the MTSVD and PP-TSVD estimations. That gives $0.031R_\odot\leq\hat{w}\leq 0.062R_\odot$. Secondly, for the Tikhonov method, since the error bars have roughly of the same amplitude in the whole range $0.03-0.11R_\odot$ of initial widths (Fig. \ref{fig:histo500}) , we can use the Monte-Carlo simulation. Near $w=0.07R_\odot$ (the inferred value reported in Table 1 being $\bar{w}=0.067$), Fig. \ref{fig:histo500} shows that the Tikhonov method leads in mean to a systematic over-estimation of about $0.005R_\odot$ with a $68.3\%$ confidence interval around $\pm 0.02R_\odot$. Thus we obtain by this way $\hat{w}\simeq 0.062\pm 0.020R_\odot$. Thirdly, the PP-TSVD method is though to produce, in mean, an under-estimation of the width of about $0.01R_\odot$ but with a larger dispersion of the results for the large widths so that we are not allowed to use straightforwardly our Monte-Carlo simulation. The $68.3\%$ confidence intervals plotted in Fig. \ref{fig:histo500} indicate that the PP-TSVD method can lead to an inferred width around $0.03R_\odot$ (which is the value obtained from LOWL data) for initial widths up to $0.08R_\odot$. Therefore the interpretation of the result obtained by this method is not easy. This may indicate that the method is better suited to the search of transition zones known a priori to be very thin (searching for a width lower than $0.05R_\odot$ for example). Nevertheless, all the above discussions indicate $0.020\le\hat{w}\le 0.070R_\odot$ as a reasonable interval for the true width, deduced from PP-TSVD method. All these approaches are globally consistent but lead to a relatively large dispersion of the results. Therefore our final estimation of the width of the solar tachocline in the equatorial plane is: $\hat w=0.05\pm0.03R_\odot$. This estimation is in very good agreement with the result obtained by Charbonneau et al. (\cite{char}) and remains compatible with the value given by Kosovichev (\cite{koso}) (c.f. Table 2). \begin{table} \caption[]{Comparison of our results with previous forward analysis. Charbonneau et al. 
(\cite{char}) and our work are for the same LOWL dataset (2/26/94-2/25/96) whereas Kosovichev (\cite{koso}) has used the 198
.5cm \begin{centering} \epsfig{file=fig4.eps,height=8.5cm,width=6cm,angle=-90} \caption{Total neutrino and antineutrino luminosity as a function of time.} \end{centering} \end{figure} \bigskip \section{Discussion and conclusions} From the previous results on the neutrino luminosities one can easily estimate the total energy released by the conversion process to be of the order of $10^{53}$ erg. It is thus powerful enough to be compared with the energy released within the most violent and, to some extent, still mysterious explosions of the Universe, i.e., SNe and long GRBs. It is then clearly tempting to associate at least {\it some of these explosions} to the formation of a strange star (the first proposals about this connection were presented many years ago \cite{DeRujula:1987pg,Cheng:1995am}, see also the more recent Refs. \cite{Bombaci:2000cv,Ouyed:2001cg,Berezhiani:2002ks} ). In particular, the formation of strange quark matter (regardless of its absolute stability) could provide an additional energy injection which triggers the explosion of core collapse supernovae \cite{Gentile:1993ma,Drago:1997tn,Sagert:2008ka,Fischer:2010wp,Drago:2008tb,Pagliara:2009dg}. None of the presently available simulations could produce explosions for high mass progenitors (with masses larger than $\sim 20 M_{\odot}$), and the possible appearance of quark matter could help in solving this problem. Apart from the energy released in the birth of a strange star, also the temporal delay with respect to the collapse of the progenitor star and the birth of the neutron star could possibly explain some of the puzzling observations connected to SNe and GRBs. In Ref.~\cite{DeRujula:1987pg} a two-neutrino-burst scenario is proposed for the neutrino signal of SN1987A: the data of the LSD detector would indicate a burst of neutrinos that occurred $\sim 5$ h before the well-known K2, IMB and Baksan neutrino events. The second burst, suggests the author, could be, for instance, associated with the birth of a strange star. It is widely accepted that long GRBs are phenomena connected with the collapse of massive stars and that they are intimately connected to SNe. In some cases, a sizable temporal delay (ranging from hours to years) between a SN and the subsequent GRB was inferred from the data (see \cite{Berezhiani:2002ks}). In those cases the second explosive event, i.e. the GRB, could be associated with the conversion of a neutron star into a star containing quark matter as proposed in \cite{Berezhiani:2002ks}. The puzzling observations mentioned before (the LSD neutrino signal and the long time delay between SN and GRB) have been, however, under debate for many years and none of them is considered to be statistically robust. Therefore they do not provide a clear proof of the existence of quark matter in astrophysical systems. A more direct and clean analysis can instead be performed just by considering the light curves of the prompt emissions of GRBs: It seems again that, at least in some cases, after a first burst a second burst occurs which is delayed by up to hundreds of seconds with respect to the first \cite{Drago:2005rc}. Between the two bursts a quiescent time is present during which it is likely that the inner engine is not active. In \cite{Drago:2005rc}, by performing a statistical analysis using the
sample of GRB light curves of the BATSE satellite, hints are presented in favor of the interpretation of long quiescent times as periods during which the inner engine is indeed dormant. A spectacular event of this type was recently detected by the Swift satellite \cite{Zhang:2011vk}: The second burst is $11$ min delayed with respect to the first one. Such a long quiescent time challenges popular models for the GRB inner engine, i.e., the collapsar model \cite{Woosley:1993wj} and the protomagnetar model \cite{Metzger:2010pp}. In \cite{Zhang:2011vk}, the following scenario is proposed: The first burst is generated by a rapidly rotating magnetar, and the second burst is due to a delayed collapse of the star into a black hole (if enough mass accretes onto the magnetar, about $1 M_{\odot}$). Recent numerical simulations of the accretion induced collapse of a neutron star indicate, however, that these events could be sources of short GRBs instead of the long ones \cite{Giacomazzo:2012bw}. Here we speculate that those double bursts could be instead related to the conversion of a neutron star to a strange star. Within the protomagnetar model of long GRBs, the source of energy is provided by the rotational energy of the star and for the prompt emission, in addition to the spin down rate, also the neutrino wind released by the hot surface of the star is crucial: A high neutrino luminosity implies a large value for the mass loss rate which inhibits the mechanism at the origin of the gamma radiation. Only when the neutrino luminosity drops to a critical value, in the untrapping regime, is the prompt emission of the GRB realized. The strange star at birth could then, in principle, generate a new burst. The quiescent time would correspond, in this scenario, to the time needed to trigger the conversion process (for instance, because of the spinning down, the central density increases and at some point nucleation can start as computed in \cite{Yasutake:2004kx}). A detailed numerical study of this possibility is an important outlook of this work. Clearly, the main motivation of this paper is to show that the conversion process of a neutron star into a strange star generates a strong neutrino signal which is relevant from the phenomenological point of view. Being the first quantitative study in this problem, several assumptions had to be adopted. For studying quantitatively the phenomenological consequences of this process it is important to improve our calculation especially by introducing the chemical potential of electron neutrinos and the lepton number diffusion equation. Modeling the burning of the material left after the combustion would also be important for obtaining a better estimate of the duration of the neutrino signal. Finally, testing our theoretical results by means of a direct neutrino detection is clearly very difficult: If such processes really occur in the Universe, their rate is probably significantly lower than that of core collapse SN events, therefore making a detection highly improbable. On the other hand, we have plenty of data on long GRBs: We have suggested that these data already contain some interesting information which would possibly indicate that strange quark matter is really formed in compact stellar objects. We thank A. Drago for valuable discussions. G.P. acknowledges financial support from the Italian Ministry of Research through the program \textquotedblleft Rita Levi Montalcini\textquotedblright. The work of F.K.R. 
is supported by the Deutsche Forschungsgemeinschaft (DFG) via the Emmy Noether Program (RO 3676/1-1), and by the ARCHES prize of the German Ministry of Education and Research (BMBF). \bigskip
\section{Introduction} Einstein-Podolsky-Rosen (EPR) steering, which describes how the state of one subsystem in an entangled pair is manipulated by local measurements performed on the other part, was proposed by Schr\"{o}dinger in 1935 \cite{E.P.C, E.P.C1}. The EPR steering differs from both entanglement and the Bell nonlocality, because it has inherent asymmetric features. Therefore, the EPR steering has potential applications in the one-side device-independent quantum key distribution \cite{CBEG}. Recently, quantum steering and its asymmetry have been theoretically studied \cite{HMW,Skrzypczyk, Kocsis, Adesso2015, steering3, steering4,MWZZ,steering5,SLCC} and experimentally demonstrated \cite{Walborn,steering2,Bowles,VHTE,VHTSS,Handchen,BWSR,SKMJ,DJSS,TEVH,Xiao2017,SWNW} in different quantum systems. However, little is known about behaviors of quantum steering in relativistic settings. Most recently, Navascues and Perez-Garcia studied quantum steering between space-like separated parties in the frame of algebraic quantum field theory \cite{steeringrqi}. In addition, more attention has been given to the dynamics of quantum steering under the influence of the dynamical Casimir effect \cite{steeringrqi1}, the Hawking radiation \cite{steeringrqi2}, and relativistic motions \cite{steeringrqi3}. Since a realistic quantum system cannot be prepared and transmitted in a curved spacetime without any gravitational and relativistic effects, the study of quantum steerability in a relativistic framework is necessary. Such studies are of practical and fundamental importance to understand the influence of gravitational effects on the steerability-type quantum resource when the parties involved are located at large distances in the curved space time \cite{DEBA,DEBT,satellite1,satellite2,kerr}. It has been shown that the curved background spacetime of the Earth affects the running of quantum clocks \cite{Alclock}, is employed as witnesses of general relativistic proper time in laser interferometric \cite{Zych}, and influences the implementation of quantum metrology \cite{MADE, MADE2} in satellite-based setups. Furthermore, Kish and Ralph found that there would be inevitable losses of quantum resources in the estimation of the Schwarzschild radius \cite{SPK}. We studied how the curved background spacetime of the Earth influences the satellite-based quantum clock synchronization \cite{wangsyn}. Most recently, an experimental test of photonic entanglement in an accelerated setting was realized \cite{RQI8}, where a genuine quantum state of entangled photon pairs was exposed to different accelerations. In this work, we present a quantitative investigation of Gaussian quantum steerability for correlated photon pairs which are initially prepared in a two-mode squeezed state in the curved background spacetime of the Earth. We assume that one of entangled photons is sent to Alice (at the Earth station) and the other propagates to Bob (at the satellite). During this propagation, the photons' wave-packet will be deformed by the curved background spacetime of the Earth, and these deformations effects on the quantum state of the photons can be modeled as a lossy quantum channel \cite{MANI, wangsyn}. Since the initial state is Gaussian and the transformations involved are linear and unitary, we can restrict our state to the Gaussian scenarios and employ the covariance matrix formalism. 
We calculate the Gaussian quantum steering from Alice to Bob, which quantifies to what extent Bob's mode can be steered by Alice's measurements. We also discuss Gaussian quantum steering from Bob to Alice to verify the asymmetric property of steerability in the curved spacetime. This work is organized as follows. In section II, we introduce the quantum field theory of a massless uncharged bosonic field which propagates from the Earth to a satellite. In section III, we briefly introduce the definition and measure of the bipartite Gaussian quantum steering. In section IV, we show a scheme to test quantum steering between the Earth and satellites and study the behaviors of quantum steering in the curved spacetime. The last section is devoted to a brief summary. Throughout the whole paper we employ the natural units $G = c =\hbar= 1$. \section{Light wave-packets propagating in the curved space-time \label{tools}} In this section we will describe the propagation of photons from the Earth to satellites under the influence of the Earth's gravity \cite{DEBT}. The Earth's spacetime can be approximately described by the Kerr metric \cite{Visser}. For the sake of simplicity, our work will be constrained to the equatorial plane $\theta=\frac{\pi}{2}$. The reduced metric in Boyer-Lindquist coordinates $(t,r,\phi)$ reads \cite{Visser} \begin{align}\label{metric} ds^2=&\, -\Big(1-\frac{2M}{r} \Big)dt^2+\frac{1}{\Delta}dr^2 \nonumber \\ &\,+\Big(r^2+a^2+\frac{2Ma^2}{r}\Big) d\phi^2 - \frac{4Ma}{r} dt \, d\phi, \\ \Delta=&\,1-\frac{2M}{r}+\frac{a^2}{r^2}, \end{align} where $M$, $r$, $J$, $a=\frac{J}{M}$ are the mass, radius, angular momentum and Kerr parameter of the Earth, respectively. A photon is sent from Alice on Earth's surface to Bob at time $\tau_A$, Bob will receive this photon at $\tau_B=\Delta\tau+\sqrt{f(r_B)/f(r_A)}\tau_A$ in his own reference frame, where $f(r_A)=1-\frac{r_S}{r_A}$ and $f(r_B)=1-\frac{r_S}{r_B}$. Here $r_S=2M$ is the Schwarzschild radius of the Earth and $\Delta\tau$ is the propagation time of the light from the Earth to the satellite by taking curved effects of the Earth into account. In general, a photon can be modeled by a wave packet of excitations of a massless bosonic field with a distribution $F^{(K)}_{\Omega_{K,0}}$ of mode frequency $\Omega_{K}$ and peaked at $\Omega_{K,0}$ \cite{ULMQ,TGDT}, where $K=A,B$ denote the modes in Alice's or Bob's reference frames, respectively. The annihilation operator of a photon for an observer far from Alice or Bob takes the form \begin{equation} a_{\Omega_{K,0}}(t_K)=\int_0^{+\infty}d\Omega_K e^{-i\Omega_K t_K}F^{(K)}_{\Omega_{K,0}}(\Omega_K)a_{\Omega_K}. \label{wave} \end{equation} Alice's and Bob's operators in Eq. (\ref{wave}) can be used to describe the same optical mode in different altitudes. By considering the curved spacetime of the Earth, the wave packet received is modified. The relation between $a_{\Omega_A}$ and $a_{\Omega_B}$ was discussed in \cite{DEBT,DEBA,wangsyn}, and can be used to calculate the relation between the frequency distributions $F^{(K)}_{\Omega_{K,0}}$ of the photons before and after the propagation \cite{DEBT,DEBA,wangsyn} \begin{eqnarray} F^{(B)}_{\Omega_{B,0}}(\Omega_B)=\sqrt[4]{\frac{f(r_B)}{f(r_A)}}F^{(A)}_{\Omega_{A,0}}\left(\sqrt{\frac{f(r_B)}{f(r_A)}}\Omega_B\right).\label{wave:packet:relation} \label{fab} \end{eqnarray} From Eq. (\ref{fab}), we can see that the effect induced by the curved spacetime of the Earth cannot be simply corrected by a linear shift of frequencies. 
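To get a feeling for the size of the purely gravitational part of this effect, the short Python sketch below evaluates the static factor $\sqrt{f(r_A)/f(r_B)}$ by which the peak frequency of the received packet is rescaled according to Eq.~(\ref{fab}). It ignores the orbital-motion corrections introduced below, and the numbers (the Schwarzschild radius $r_S\simeq 9$~mm, $r_A\simeq 6371$~km and a few representative satellite heights) are rough illustrative values, not parameters taken from this work.
\begin{verbatim}
# Illustrative only: static gravitational rescaling of the received peak
# frequency, Omega_{B,0} = sqrt(f(r_A)/f(r_B)) * Omega_{A,0}, f(r) = 1 - r_S/r.
import math

def redshift_factor(h, r_A=6.371e6, r_S=9.0e-3):
    """sqrt(f(r_A)/f(r_B)) for a satellite at height h (metres) above the ground."""
    f = lambda r: 1.0 - r_S / r
    return math.sqrt(f(r_A) / f(r_A + h))

for h in (4.0e5, 2.0e7, 3.6e7):   # roughly LEO, GPS-like and geostationary heights
    print(f"h = {h:.1e} m   Omega_B0/Omega_A0 - 1 = {redshift_factor(h) - 1:+.3e}")
\end{verbatim}
The resulting fractional shifts are tiny (of order $10^{-10}$), yet, as stressed above, they cannot be removed by a simple linear rescaling of frequencies.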
Therefore, it may be challenging to compensate the transformation induced by the curvature in realistic implementations. Indeed, such a nonlinear gravitational effect is found to influence the fidelity of the quantum channel between Alice and Bob \cite{DEBT,DEBA,wangsyn}. It is always possible to decompose the mode $\bar{a}^{\prime}$ received by Bob in terms of the mode $a^{\prime}$ prepared by Alice and an orthogonal mode $a_{\bot}^{\prime}$ (i.e. $[a^{\prime},a_{\bot}^{\prime\dagger}]=0$) \cite{PPRW} \begin{eqnarray} \bar{a}^{\prime}=\Theta a^{\prime}+\sqrt{1-\Theta^2}a_{\bot}^{\prime},\label{mode:decomposition} \end{eqnarray} where $\Theta$ is the wave packet overlap between the distributions $F^{(B)}_{\Omega_{B,0}}(\Omega_B)$ and $F^{(A)}_{\Omega_{A,0}}(\Omega_B)$, \begin{eqnarray} \Theta:=\int_0^{+\infty}d\Omega_B\,F^{(B)\star}_{\Omega_{B,0}}(\Omega_B)F^{(A)}_{\Omega_{A,0}}(\Omega_B),\label{single:photon:fidelity} \end{eqnarray} and we have $\Theta=1$ for a perfect channel. From this expression we can see that the spacetime curvature of the Earth affects the fidelity $\mathcal{F}=|\Theta|^2$ as well as the quantum resource of EPR steering. We assume that Alice employs a real normalized Gaussian wave packet \begin{eqnarray} F_{\Omega_0}(\Omega)=\frac{1}{\sqrt[4]{2\pi\sigma^2}}e^{-\frac{(\Omega-\Omega_0)^2}{4\sigma^2}}\label{Bobpacket}, \end{eqnarray} with wave packet width $\sigma$. In this case the overlap $\Theta$ is given by \eqref{single:photon:fidelity}, where we have extended the domain of integration to the whole real axis. We note that the integral should be performed over strictly positive frequencies. However, since $\Omega_0\gg \sigma$, it is possible to include negative frequencies without affecting the value of $\Theta$. Using Eqs. \eqref{wave} and \eqref{Bobpacket} one finds that \cite{DEBT,DEBA,wangsyn} \begin{eqnarray} \label{theta} \Theta=\sqrt{\frac{2}{1+(1+\delta)^2}}\frac{1}{1+\delta}e^{-\frac{\delta^2\Omega_{B,0}^2}{4(1+(1+\delta)^2)\sigma^2}}\label{final:result}, \end{eqnarray} where the parameter $\delta$ quantifying the frequency shift is defined by \begin{equation} \delta=\sqrt[4]{\frac{f(r_A)}{f(r_B)}}-1=\sqrt{\frac{\Omega_B}{\Omega_A}}-1. \end{equation} The expression for $\frac{\Omega_B}{\Omega_A}$ in the equatorial plane of the Kerr spacetime has been given in \cite{kerr}, \begin{equation}\label{aw} \frac{\Omega_B}{\Omega_A}=\frac{1+\epsilon \frac{a}{r_B}\sqrt{\frac{M}{r_B}}}{C\sqrt{1-3\frac{M}{r_B}+ 2\epsilon\frac{a}{r_B}\sqrt{\frac{M}{r_B}}}}, \end{equation} where $C=[1-\frac{2M}{r_A}(1+2a {\omega})+\big(r^2_A+a^2-\frac{2Ma^2}{r_A}\big){\omega}^2]^{-\frac{1}{2}}$ is a normalization constant, $\omega$ is the Earth's equatorial angular velocity, and $\epsilon=\pm1$ labels the direction of the orbit (i.e., $\epsilon=+1$ when the satellite co-rotates with the Earth). In the Schwarzschild limit $a, \omega\rightarrow0$, Eq. (\ref{aw}) coincides with the result found in \cite{DEBT}, which is \begin{equation} \frac{\Omega_B}{\Omega_A}=\sqrt{\frac{1-\frac{2M}{r_A}}{1-\frac{3M}{r_B}}}. \end{equation} Since $(r_A \omega)^2>a\omega$, we can retain terms up to second order in $r_A\omega$. Expanding Eq. (\ref{aw}) we obtain the following perturbative expression for $\delta$.
This perturbative result does not depend on whether the Earth and the satellite are co-rotating or not \begin{eqnarray}\label{bw} \nonumber\delta&=&\delta_{Sch}+\delta_{rot}+\delta_h\\ \nonumber&=&\frac{1}{8}\frac{r_S}{r_A}\big(\frac{1-2\frac{h}{r_A}}{1+\frac{h}{r_A}} \big)-\frac{(r_A\omega)^2}{4}-\frac{(r_A\omega)^2}{4}\big(\frac{3}{4}\frac{r_S}{r_A}-\frac{4Ma}{\omega r_A^3}\big), \end{eqnarray} where $h=r_B-r_A$ is the height between Alice and Bob, $\delta_{Sch}$ is the first order Schwarzschild term, $\delta_{rot}$ is the lowest order rotation term and $\delta_h$ denotes all higher order correction terms. If the parameter $\delta=0$ (the satellite moves at the height $h\simeq\frac{r_A}{2}$), we have $\Theta=1$. That is to say, the received photons at this height will not experience any frequency shift, and the effects of gravity of the Earth and the effects of special relativity completely compensates each other. \section{Gaussian quantum steering} In this section we briefly review the measurement of quantum steering for a general two-mode Gaussian state $\rho_{AB}$. The character of a bipartite Gaussian state ${\rho _{AB}}$ can be described by its covariance matrix (CM) \begin{equation}\label{CM} \sigma_{AB} = \left( {\begin{array}{*{20}{c}} A & C \\ {{C^{\sf T}}} & B \\ \end{array}} \right), \end{equation} with elements ${\sigma _{ij}} = \text{Tr}\big[ {{{\{ {{{\hat R}_i},{{\hat R}_j}} \}}_ + }\ {\rho _{AB}}} \big]$. Here the submatrices $A$ and $B$ are the CMs correspoding to the reduced states of $A$'s and $B$'s subsystems, respectively. The \textit{bona fide} condition should be satisfied for a physical CM, which is \begin{equation}\label{bonafide} {\sigma _{AB}} + i\,({\Omega _A} \oplus {\Omega _B}) \ge 0. \end{equation} Let us continue by giving the definition of steerability. For a bipartite state, it is steerable from $A$ to $B$ \textit{iff} it is \textit{not} possible for every pair of local observables $R_A \in \mathcal{M}_A$ on $A$ and $R_{B}$ (arbitrary) on $B$, with respective outcomes $r_A$ and $r_{B}$, to express the joint probability as \cite{HMWS} $P\left( {{r_A},{r_{B}}|{R_A},{R_{B}},{\rho _{AB}}} \right) = \sum\limits_\lambda {{\wp_\lambda }} \, \wp\left( {{r_A}|{R_A},\lambda } \right)P\left( {{r_{B}}|{R_{B}},{\rho _\lambda }} \right)$. That is to say, there exists at least one measurement pair between $R_A$ and $R_{B}$ that can violate this expression when ${\wp_\lambda }$ is fixed across all measurements. Here ${\wp_\lambda }$ and $\wp \left( {{r_A}|{R_A},\lambda }\right)$ are arbitrary probability distributions and $P\left( {{r_{B}}|{R_{B}},{\rho _\lambda }} \right)$ is a probability distribution restricted to the extra condition of being evaluated on a quantum state $\rho_\lambda$. It has been proven that a necessary and sufficient condition for Gaussian $A\to B$ steerability is \textit{iff} the condition \begin{equation}\label{nonsteer} {\sigma _{AB}} + i\,({0_A} \oplus {\Omega _B}) \ge 0, \end{equation} is violated \cite{HMWS}. To quantify how much a bipartite Gaussian state with CM $\sigma_{AB}$ is steerable (by Gaussian measurements on Alice's side), the following quantity has been performed \cite{IKAR} \begin{equation}\label{GSAB} {\cal G}^{A \to B}(\sigma_{AB}):= \max\bigg\{0,\,-\sum_{j:\bar{\nu}^{B}_j<1} \ln(\bar{\nu}^{B}_j)\bigg\}\,, \end{equation} where $\bar{\nu}^{B}_j$ are the symplectic eigenvalues of the Schur complement of $A$ in the covariance matrix $\sigma_{AB}$. 
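As a concrete illustration of Eq.~(\ref{GSAB}) (and not the computation performed in this work), the short sketch below evaluates the $A\to B$ steering of a two-mode Gaussian state directly from its covariance matrix: one takes the Schur complement of the $A$ block and, since only one mode remains, its single symplectic eigenvalue is simply $\sqrt{\det M}$.
\begin{verbatim}
# Illustration of the A->B Gaussian steering measure for a two-mode
# covariance matrix ordered as (x_A, p_A, x_B, p_B); not the code used here.
import numpy as np

def gaussian_steering_A_to_B(sigma_AB):
    A = sigma_AB[:2, :2]
    B = sigma_AB[2:, 2:]
    C = sigma_AB[:2, 2:]
    M = B - C.T @ np.linalg.inv(A) @ C      # Schur complement of the A block
    nu = np.sqrt(np.linalg.det(M))          # symplectic eigenvalue of the remaining mode
    return max(0.0, -np.log(nu))

# example: a pure two-mode squeezed state with squeezing parameter s
s = 1.0
cs, sn = np.cosh(2 * s), np.sinh(2 * s)
sz = np.diag([1.0, -1.0])
sigma = np.block([[cs * np.eye(2), sn * sz],
                  [sn * sz,        cs * np.eye(2)]])
print(gaussian_steering_A_to_B(sigma))      # steering grows with the squeezing s
\end{verbatim}
The same number is obtained from the determinant form derived next.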
By defining the Schur complement $\det{\sigma_{AB}} = \det{A} \det{M^{B}_{\sigma}}$ and employing the R\'enyi-$2$ entropy, Eq. (\ref{GSAB}) can be written as \begin{eqnarray}\label{GSAC} {\cal G}^{A \to B}(\sigma_{AB})& =& \nonumber\mbox{$\max\big\{0,\, \frac12 \ln {\frac{\det A}{\det \sigma_{AB}}}\big\}$}\\ &=& \max\big\{0,\, {\cal S}(A) - {\cal S}(\sigma_{AB})\big\}\,, \label{GS1} \end{eqnarray} where the R\'enyi-$2$ entropy ${\cal S}$ reads ${\cal S}(\sigma) = \frac12 \ln( \det \sigma)$ \cite{renyi} for a Gaussian state with CM $\sigma$. However, unlike quantum entanglement, quantum steering is asymmetric \cite{IKAR}. To obtain the measurement of Gaussian steering $B\rightarrow A$, one can swap the roles of $A$ and $B$ and get an expression like Eq. (\ref{GSAC}). \section{The influence of gravitational effects on quantum steerability and entanglement} In this section we propose a scheme to test large distance quantum steering between the Earth and satellites and discuss how quantum steerability is affected by the curved spacetime of the Earth. Firstly, we consider a pair of entangled photons which are initially prepared in a two-mode squeezed state with modes $b_1$ and $b_2$ at the ground station. Then we send one photon with mode $b_1$ to Alice. The other photon in mode $b_2$ propagates from the Earth to the satellite and is received by Bob. Due to the curved background spacetime of the Earth, the wave packet of photon with mode $b_2$ is deformed. Finally, one can test how the quantum state of Alice's photon is manipulated by local Gaussian measurements performed by Bob at the satellite and verifies the quantum steerability from $b_2$ to $b_1$, and vice versa. Considering that Alice receives the mode $b_1$ and Bob receives the mode $b_2$ at different satellite orbits, we should take the curved spacetime of the Earth into account. As discussed in \cite{DEBT,DEBA,wangsyn}, the influence of the Earth's gravitational effect can be modeled by a beam splitter with orthogonal modes $b_{1\bot}$ and $b_{2\bot}$. The covariance matrix of the initial state is given by \begin{equation}\label{initialcov} \Sigma^{b_1b_2b_{
1\bot}b_{2\bot}}_0=\left( \begin{array}{cc} \tilde\sigma(s) &0 \\ 0 & I_4 \end{array}\right), \end{equation} where ${I}_4$ denotes the $4\times4$ identity matrix and $\tilde\sigma{(s)}$ is the covariance matrix of the two-mode squeezed state \begin{equation} \tilde\sigma(s)=\left( \begin{array}{cc} \cosh{(2s)} {I}_2&\sinh{(2s)}\sigma_z \\ \sinh(2s)\sigma_z &\cosh{(2s)} {I} _2 \end{array}\right), \end{equation} where $\sigma_z$ is the Pauli matrix and $s$ is the squeezing parameter. The effect induced by the curved spacetime of the Earth on Bob's mode $b_2$ can be modeled as a lossy channel, which is described by the transformation \cite{DEBT,DEBA,wangsyn} \begin{eqnarray} \bar{b}_2&=&\Theta_2\,b_2+\sqrt{{1-\Theta_2^2}}b_{2\bot}, \end{eqnarray} while the mode $b_1$ received by Alice is unaffected because Alice stays at the ground station. This process can be represented as a mixing (beam splitting) of the modes $b_1(b_2)$ and $b_{1\bot}(b_{2\bot})$. Therefore, for the entire state, the symplectic transformation can be encoded in the Bogoliubov transformation \begin{equation} S=\left( \begin{array}{cccc} {I}_2 &0& 0&0 \\ 0&\Theta_2 {I} _2 &0&\sqrt{ 1-\Theta_2^2} {I} _2\\ 0 &0& -{I}_2&0 \\ 0&\sqrt{ 1-\Theta_2^2} {I} _2 &0&-\Theta_2 {I} _2 \end{array}\right).\nonumber \end{equation} The final state $\Sigma^{b_1b_2b_{1\bot}b_{2\bot}}$ after the transformation is $\Sigma^{b_1b_2b_{1\bot}b_{2\bot}}=S\,\Sigma_0^{b_1b_2b_{1\bot}b_{2\bot}}\,S^{T}$. Then we trace over the orthogonal modes $b_{1\bot},b_{2\bot}$ and obtain the covariance matrix $\Sigma^{b_1b_2}$ for the modes $b_1$ and $b_2$ after the propagation \begin{equation}\label{gst} \Sigma^{b_1b_2}=\left( \begin{array}{cc} (1+2\sinh^2s ) {I}_2 &\sinh{(2s)}\,\Theta_2\,\sigma_z \\ \sinh{(2s)}\,\Theta_2\,\sigma_z &(1+2\sinh^2s\,\Theta_2^2 )\, {I}_2 \end{array}\right). \end{equation} The form of the two-mode squeezed state under the influence of the Earth's gravity is given by Eq. (\ref{gst}). Then, employing the measure of Gaussian steering, we obtain an explicit expression for the $b_1\rightarrow b_2$ Gaussian steering in the curved spacetime of the Earth \begin{eqnarray}\label{gaussian5} {\cal G}^{b_1\to b_2} &=& \mbox{$\max\big\{0,\, \ln {\frac{ 1+2\sinh^2(2s)}{1+2(1- \Theta_2^2)\sinh^2s}}\big\}$}. \end{eqnarray} We notice that the wave packet overlap $\Theta$ in the above equation is determined by the parameters $\delta$, $\sigma$ and $\Omega_{B,0}$. Since the Schwarzschild radius of the Earth is $ r_s= 9$ mm, we have $\delta\sim-\frac{1}{2}(\frac{r_s}{r_B}-\frac{r_s}{r_A}) \sim 10^{-10}$. Here we consider a typical PDC source with a wavelength of 598 nm (corresponding to the peak frequency $\Omega_{B,0}= 500$ THz) and Gaussian bandwidth $\sigma=1$ MHz [48, 49]. Under these constraints, $\delta\ll(\frac{\delta\Omega_{B,0}}{\sigma})^2\ll1$ is satisfied. Therefore, the wave packet overlap $\Theta$ can be expanded in the parameter $\delta$. Keeping terms up to second order, we obtain $\Theta \sim1-\frac{\delta^2\Omega_{B,0}^2}{8\sigma^2}$. Eq. (\ref{gaussian5}) then takes the following form at second order in the parameter $\delta$ \begin{equation}\label{g0} {\cal G}^{b_1\to b_2}\simeq\max\{0,{\cal G}_0-\frac{\delta^2\Omega_{B,0}^2}{2\sigma^2}\sinh^2(s)\}, \end{equation} where higher order contributions are neglected. To ensure the validity of the perturbative expansion, we should estimate the value of the last term in Eq. (\ref{g0}).
Considering $\frac{\delta^2\Omega^2_{B,0}}{2\sigma^2}\sim1.25\times10^{-7}$, we find that even if the value of the squeezing parameter is $s\ll7.6$ (corresponding to $\sinh^2(s)\ll10^6$), the perturbative expansion is valid. Therefore, we can safely prelimit the value of the squeezing parameter as $s<3$ hereafter. In the case of flat spacetime, this expression reduces to ${\cal G}_0:=\ln{[1+2\sinh^2(2s)]}$. As showed in Eq. (\ref{g0}), the Gaussian steering $b_1\to b_2$ not only depends on the squeezing parameter, the peak frequency, and the Gaussian bandwidth, but also the height of the orbiting satellite. This means that the curved spacetime of the Earth will influence the $b_1\to b_2$ steerability because the parameter $\delta$ contains the height $h$ of the satellite. It is clear that $\delta$ approaches to a constant value when the height $h\rightarrow\infty$ and the squeezing parameter $s$ is a fixed value. Therefore, quantum steering ${\cal G}^{b_1\to b_2}$ also becomes a constant. \begin{figure}[tbp] \centering \centerline{\includegraphics[width=7.0cm]{Fig1.eps}} \caption{(Color online) The Gaussian steering ${\cal G}^{b_1\to b_2}$ of two-mode squeezed state as a function of the squeezing parameter $s$ for different peak frequencies, $\Omega_2=0.6$ (green dashed line), $\Omega_2=1$ (red dashed line) and $\Omega_2=1.4$ (violet dotted line), respectively. The orbit height of the satellite and the Gaussian bandwidth are fixed as $h=20000$km and $\sigma=1$. }\label{f1} \end{figure} For convenience, we will work with dimensionless quantities by rescaling the peak frequency and the Gaussian bandwidth \begin{equation} \Omega \rightarrow \tilde{\Omega}\equiv\frac{\Omega}{\Omega_{B,0}}, \sigma \rightarrow \tilde{\sigma}\equiv\frac{\sigma}{\sigma_0}, \end{equation} where $\Omega_{B,0}=500$THz and $\sigma_0=1$ MHz. For simplicity, we abbreviate the dimensionless parameter $\tilde{\Omega}$ as $\Omega_2$ and abbreviate $\tilde{\sigma}$ as $\sigma$, respectively. In Fig. (1) we plot the Gaussian steering ${\cal G}_{b_1\to b_2}$ as a function of the squeezing parameter $s$ for the fixed orbit height $h=20000$ km and Gaussian bandwidth $\sigma=1$. We can see that quantum steering monotonically increases with the increase of the squeezing parameter $s$. It is also shown that, comparing with the peak frequency parameter, the Gaussian steering $\mathcal{G}^{b_1\to b_2}$ changes more for different squeezing parameters, which indicates that the initial quantum resource plays a more important role in the quantum steering. The Gaussian steering ${\cal G}_{b_1\to b_2}$ in terms of the orbit height $h$ and the Gaussian bandwidth $\sigma$ for the fixed values $s=1$ and $\Omega_2=1$ has been shown in Fig. (2). We can see that the quantum steerability ${\cal G}_{b_1\to b_2}$ decreases with increasing the Gaussian bandwidth $\sigma$. In addition, comparing with the squeezing parameter, the Gaussian steering is not easy to change with changing orbit height parameter and Gaussian bandwidth. This allows us to choose appropriate physical parameters and perform more reliable quantum steering tasks between the Earth to a satellite. \begin{figure}[tbp] \centering \includegraphics[height=2.3in, width=2.6in]{Fig2.eps} \caption{(Color online) The Gaussian steering ${\cal G}_{b_1\to b_2}$ in terms of the orbit height $h$ and the Gaussian bandwidth $\sigma$, for the fixed values $s=1$ and $\Omega_2=1$. } \end{figure} \begin{figure}[tbp] \centerline{\includegraphics[width=7.7cm]{Fig3.eps}} \caption{ (Color online). 
The Gaussian steering ${\cal G}^{b_1\to b_2}$ (green lines) and ${\cal G}^{b_2\to b_1}$ (orange lines) as functions of the height between Alice and Bob under the influence of the Earth's gravity. Here the Gaussian bandwidth of the initial state is fixed as $\sigma=1$, the dimensionless peak frequencies of the mode $b_2$ are fixed as (a) $\Omega_{2}=0.6$ and (b) $\Omega_{2}=1$, and the squeezing parameter is $s=1$. }\label{f1} \end{figure} One of the most distinctive properties of quantum steering is its asymmetry, which has recently been experimentally demonstrated in flat spacetime \cite{VHTE,VHTSS}. To understand this property in the curved spacetime, we also calculate the steerability ${\cal G}^{b_2\to b_1}$, which is \begin{eqnarray} {\cal G}^{b_2\to b_1} &=& \mbox{$\max\big\{0,\, \ln {\frac{ 1+2\sinh^2(2s)\Theta_2^2}{1+2(1- \Theta_2^2)\sinh^2s }}\big\}$}. \end{eqnarray} Similarly, this equation can be rewritten in its perturbative expansion form as \begin{eqnarray} {\cal G}^{b_2\to b_1}\simeq\max\Big\{0,\; {\cal G}_0-\frac{\delta^2\Omega_{B,0}^2}{2\sigma^2}\Big(\sinh^2{(s)}+\frac{\sinh^2{(2s)}}{\cosh{(4s)}}\Big)\Big\}. \end{eqnarray} This equation gives us a quantitative way to evaluate the contribution of the curved background spacetime of the Earth to the steering in the $b_2\to b_1$ scenario when the satellite is far away from the Earth. It clearly shows that ${\cal G}^{b_2\to b_1}$ is equal to ${\cal G}_0$ when $\delta\rightarrow0$, which means that the effect induced by the curved background spacetime of the Earth vanishes in this limit. The typical distance between the ground station and a geostationary satellite is about $3.6\times10^4$ km, which corresponds to $r_B = 4.237\times10^4$ km for the satellite, while current GPS (Global Positioning System) satellites orbit at $r_B \approx 2.7\times10^4$ km. For these distances the influence of the relativistic disturbance of the spacetime curvature on quantum steerability cannot be ignored for quantum information tasks at the current level of technology \cite{satellite1,satellite2,MJAG}. Hence, in this work the plotting range of the satellite height is constrained to the geostationary height. In Fig. (3), we plot the quantum steerability ${\cal G}^{b_1\to b_2}$, as well as ${\cal G}^{b_2\to b_1}$, of the final state as a function of the height $h$. The plot range is limited to geostationary Earth orbits, $r_B(GEO)=r_A+35784$ km. Here, the range of the peak frequency parameter $\Omega_2$ is fixed from $0.6$ to $1$ to satisfy $\delta\ll(\frac{\delta\Omega_2}{\sigma})^2\ll1$. It is shown that both the $b_1\rightarrow b_2$ and $b_2\rightarrow b_1$ steering increase over a specific range of the height parameter $h$ and then gradually approach a finite value with increasing $h$. This is because the total frequency shift in Eq. (\ref{aw}) includes both the Schwarzschild term and the rotation term. The parameter $\delta$ in the Kerr spacetime is $\delta=\frac{1}{8}\frac{r_S}{r_A}\big(\frac{1-2\frac{h}{r_A}}{1+\frac{h}{r_A}} \big)$, which is different from the Schwarzschild case $\delta_{Sch}^{'}=-\frac{r_S}{4r_A}\frac{h}{(r_A+h)}$ \cite{DEBT, DEBA} since special relativistic effects are involved \cite{kerr}. When the satellite moves at the height $h=\frac{r_A}{2}$, the Schwarzschild term $\delta_{Sch}$ vanishes and the photons received at the satellite experience only a very small frequency shift dominated by special relativistic effects; therefore the lowest order rotation term $\delta_{rot}$ needs to be considered.
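The qualitative behaviour described here can be reproduced by directly evaluating the expressions quoted above. The Python sketch below combines the first two terms of the perturbative $\delta(h)$, the overlap $\Theta$ of Eq.~(\ref{theta}) and the two steering expressions; the parameter values are rough illustrative numbers (a 500 THz peak over a 1 MHz bandwidth and $s=1$), the higher-order $\delta_h$ corrections are dropped, and this is not the code used to produce the figures.
\begin{verbatim}
# Illustrative evaluation of delta(h), Theta and the two steering
# directions as functions of the satellite height (not the paper's code).
import numpy as np

r_A   = 6.371e6                    # Earth radius [m]
r_S   = 9.0e-3                     # Schwarzschild radius of the Earth [m]
omega = 2 * np.pi / 86164.0 / 2.998e8   # angular velocity / c, so r_A*omega is dimensionless
ratio = 5.0e8                      # Omega_{B,0}/sigma (500 THz peak / 1 MHz bandwidth)
s     = 1.0                        # squeezing parameter

def delta(h):                      # delta_Sch + delta_rot, higher orders dropped
    d_sch = 0.125 * (r_S / r_A) * (1 - 2 * h / r_A) / (1 + h / r_A)
    return d_sch - 0.25 * (r_A * omega) ** 2

def theta(d):                      # wave packet overlap
    u = 1 + (1 + d) ** 2
    return np.sqrt(2 / u) / (1 + d) * np.exp(-d**2 * ratio**2 / (4 * u))

def steering(th):                  # the two quoted closed-form expressions
    g12 = np.log((1 + 2 * np.sinh(2*s)**2) / (1 + 2 * (1 - th**2) * np.sinh(s)**2))
    g21 = np.log((1 + 2 * np.sinh(2*s)**2 * th**2) / (1 + 2 * (1 - th**2) * np.sinh(s)**2))
    return max(0.0, g12), max(0.0, g21)

for h in (5.0e5, 0.5 * r_A, 2.0e7, 3.6e7):    # heights in metres
    g12, g21 = steering(theta(delta(h)))
    print(f"h = {h:9.3e} m   G(b1->b2) = {g12:.4f}   G(b2->b1) = {g21:.4f}")
\end{verbatim}
In this sketch the steering is largest near $h\approx r_A/2$, where $\delta$ is closest to zero, and both directions fall off slowly at larger heights, in line with the discussion above.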
In addition, we can see that both ${\cal G}^{b_1\to b_2}$ and ${\cal G}^{b_2\to b_1}$ decrease with increasing $h$ after reaching their peaks. This is why we say that the gravitational frequency shift acts as a lossy channel. The degree of this loss of quantum steering depends on the dimensionless peak frequency of mode $b_2$, which means that the lossy channel does not depend only on the curvature of the Earth. In fact, the peak of the quantum steering reflects the fact that the frequency of the photon received by the satellite changes from blue-shifted to red-shifted, which causes the Gaussian steering between the photon pair to first increase and then decrease with increasing height \cite{kerr}. In the Schwarzschild limit $a, \omega\to 0$, the frequency shift simplifies to $\frac{\Omega_B}{\Omega_A}=\sqrt{\frac{1-\frac{2M}{r_A}}{1-\frac{3M}{r_B}}},$ from which we can see that the photon received at the satellite does not experience any frequency shift at $h=\frac{r_A}{2}$. On the other hand, the frequency of a photon received at an orbit with height $h<\frac{r_A}{2}$ is blue-shifted, while the frequencies of photons received at heights $h>\frac{r_A}{2}$ are red-shifted. For this reason, the photons experience different frequency shifts when the satellite is located at different heights in the Kerr spacetime. Therefore, the Gaussian steering increases at the beginning, reaches its peak value (corresponding to a satellite at height $h\approx\frac{r_A}{2}$, i.e. the parameter $\delta=0$), and then decreases with increasing height. To quantify the degree of asymmetry of the steerability under the Earth's spacetime curvature, we calculate the Gaussian steering asymmetry \begin{equation} {\cal G}^{\Delta}=| {\cal G}^{b_1\to b_2}-{\cal G}^{b_2\to b_1}|, \end{equation} and plot it as a function of the peak frequency $\Omega_2$ and the height $h$ of the satellite in Fig. (4). This allows us to obtain a better understanding of how the peak frequency $\Omega_2$ and the Earth's gravitation affect the steering asymmetry. It is shown that ${\cal G}^{\Delta}$ is close to zero, i.e., the steerability is almost symmetric, when the height parameter $h\to0$ or the peak frequency $\Omega_2\to0$, because ${\cal G}^{b_1\to b_2}\approx{\cal G}^{b_2\to b_1}$ in these two cases. In addition, the steering asymmetry monotonically increases with increasing orbit height $h$ of the satellite. The physical reason is that the gravitational field reduces the quantum resource \cite{MAK}, and it affects the two directions of steering differently \cite{steeringrqi2}. Furthermore, it is not difficult to infer that if the gravitational field is strong enough, or if Bob is close to the horizon of a black hole, the gravitational frequency shift should lead to complete asymmetry: Alice can steer Bob but Bob cannot steer Alice at all. \begin{figure}[tbp] \centering \includegraphics[height=2.1in, width=2.3in]{Fig4} \caption{(Color online) The Gaussian steering asymmetry ${\cal G}^{\Delta}$ as a function of the height $h$ of the satellite and the peak frequency $\Omega_2$ of mode $b_2$. The Gaussian bandwidth and the squeezing parameter are fixed as $\sigma=1$ and $s=1$, respectively. }\label{f3} \end{figure} \vspace*{0.8cm} \section{Conclusions} In conclusion, we have studied Gaussian steering for a two-mode Gaussian state when one of the modes propagates from the ground to a satellite.
We found that the frequency shift induced by the curved spacetime of the Earth reduces the steering-type quantum correlation between the photon pair when one of the entangled photons is sent to the Earth station and the other photon is sent to the satellite. In addition, the influence of the spacetime curvature on the steering in the Kerr spacetime is very different from the non-rotating case because special relativistic effects are involved. We also found that the Gaussian steering is more sensitive to the initial squeezing parameter than to the gravitational effect and the other parameters. Although the gravitational effect of the Earth is small, it leads to an asymmetry of the Gaussian steering between the photon pair. This is because the gravitational field affects the steering of the downlink setup more strongly than that of the uplink setup, which results in an increase of the quantum steering asymmetry. Therefore, we can conclude that the effects induced by the curved spacetime of the Earth generate quantum steering asymmetry. Finally, the peak value is found to mark the critical point at which the received photons change from being blue-shifted to being red-shifted. According to the equivalence principle, the effects of acceleration are equivalent to the effects of gravity, so our results could in principle be applied to the dynamics of quantum steering under the influence of acceleration. Since realistic quantum systems always exhibit gravitational and relativistic features, our results should be significant both for guiding the realization of quantum information protocols, such as quantum key distribution between the Earth and satellites, and for a general understanding of quantum steering in relativistic quantum systems. \begin{acknowledgments} This work is supported by the Hunan Provincial Natural Science Foundation of China under Grant No. 2018JJ1016, and by the National Natural Science Foundation of China under Grants No. 11675052 and No. 11475061. \end{acknowledgments}
\section{Introduction} Cameron and Ku~\cite{MR2009400} proved a version of the Erd\H{o}s-Ko-Rado theorem for permutations. In this paper we give an alternate proof of this theorem which is substantially different from the one given by Cameron and Ku. The Erd\H{o}s-Ko-Rado theorem~\cite{MR25:3839} is a central result in extremal combinatorics. There are many interesting proofs and extensions of this theorem; for a summary see~\cite{MR86a:05004}. The Erd\H{o}s-Ko-Rado theorem gives a bound on the size of a family of intersecting $k$-subsets of a set and describes exactly which families meet this bound. \begin{theorem}(Erd\H{o}s, Ko and Rado~\cite{MR25:3839}) Let $k,n$ be positive integers with $n > 2k$. If $\cA$ is a family of $k$-subsets of $\{1,\dots,n\}$ such that any two sets from $\cA$ have non-trivial intersection, then $|\cA| \leq {n-1 \choose k-1}$. Moreover, $|\cA| = {n-1 \choose k-1}$ if and only if $\cA$ is the collection of all $k$-subsets that contain a fixed $i \in \{1,\dots,n\}$. \end{theorem} The Erd\H{o}s-Ko-Rado theorem has been extended to objects other than subsets of a set. For example, Hsieh~\cite{MR0382015} and Frankl and Wilson~\cite{MR867648} give a version for intersecting subspaces of a vector space over a finite field, Berge~\cite{MR0389636} proves it for intersecting integer sequences, Rands~\cite{MR84i:05024} extends it to intersecting blocks in a design and Meagher and Moura~\cite{MR2156694} prove a version for partitions. The extension we give here is to intersecting permutations. Let $S(n)$ be the symmetric group on $\{1,\dots,n\}$. Permutations $\pi, \sigma \in S(n)$ are said to be \textsl{intersecting} if $\pi(i) = \sigma(i)$ for some $i \in \{1,\dots,n\}$. Similar to the case for subsets of a set, there are obvious candidates for maximum intersecting systems of permutations; these are the sets \begin{eqnarray}\label{eq:maxindy} S_{i,j} = \{ \pi \in S(n) : \pi(i)=j \}, \quad i,j \in \{1,\dots,n\}. \end{eqnarray} These sets are the cosets of a stabiliser of a point. \begin{theorem}(Cameron and Ku~\cite{MR2009400})\label{thm:main} Let $n\geq 2$. If $S \subseteq S(n)$ is an intersecting family of permutations then: \begin{enumerate}[(a)] \item $|S| \leq (n-1)!$. \item if $|S|=(n-1)!$ then $S$ is a coset of a stabiliser of a point. \end{enumerate} \end{theorem} The proof given by Cameron and Ku uses an operation called \textsl{fixing} which is similar to the shifting operation used in the original proof of Erd\H{o}s-Ko-Rado. They show that a maximum intersecting family of permutations is closed under this fixing operation. Assuming that the family contains the identity permutation, and thus that each permutation in the family has a fixed point, they next consider the set system formed by the sets of fixed points of the permutations in the family. Cameron and Ku prove that if the family of permutations is closed under the fixing operation, then this set system is an intersecting set system. Finally, they prove the result by showing that if a family of intersecting permutations has size $(n-1)!$, then the sets in the intersecting set system must all intersect in the same point. Our proof uses a graph called the \textsl{permutation graph} which appears in the paper by Cameron and Ku. This graph is a union of graphs in an association scheme; we use properties of this association scheme, together with information about the representation theory of the symmetric group, to get the result.
This approach has been used to prove the standard Erd\H{o}s-Ko-Rado theorem for sets~\cite[Section 5.4]{mikethesis} and also to prove versions of the Erd\H{o}s-Ko-Rado theorem for other objects such as the $3 \times 3$ uniform partitions and vector spaces over a finite field~\cite{MR2260847}. It is interesting that this method also works for permutations and hoped that this method can be generalized to other objects. The proof we give only applies for $n>6$, for smaller $n$ the result can be verified using GAP~\cite{GAP4}. \section{The Clique-Coclique Bound} In this section we give a proof of the clique-coclique bound for the union of graphs in an association scheme. Although this bound is not new, it was originally proven by Delsarte~\cite{MR0384310}, and an alternate proof for vertex-transitive graphs is given by Cameron and Ku~\cite{MR2009400}, the proof given here is new. Let $\cA=\{\seq A0d\}$ be an association scheme with $d$ classes on $v$ vertices and let $v_i$ be the valency of the $i$-th graph. Denote the principal matrix idempotents of the association scheme by $\seq E0d$ and let $m_i$ be the dimension of the eigenspace belonging to $E_i$. We note that \[ E_0 =\frac1v J \] where $J$ is the all-ones matrix. \begin{theorem}(Delsarte~\cite[Theorem 3.9]{MR0384310}) \label{thm:ccl} Let $\cA$ be an association scheme on $v$ vertices and let $X$ be the union of some of the graphs in the scheme. If $C$ is a clique and $S$ is an independent set in $X$, then \begin{equation}\label{ineq:cliquecoclique} |C|\,|S| \le v. \end{equation} If equality holds and $x$ and $y$ are the respective characteristic vectors of $C$ and $S$, then \[ x^TE_jx\, y^TE_jy =0 \quad \textrm{for all } j>0. \] \end{theorem} \proof We have the following fundamental identity (see~\cite[Section 12.6]{MR1220704}): \[ \sum_{i=0}^d\frac1{vv_i}x^TA_ix\, A_i = \sum_{j=0}^d \frac1{m_j}x^TE_jx\, E_j \] from which it follows that \begin{equation} \label{xaya} \sum_{i=0}^d\frac1{vv_i}x^TA_ix\, y^TA_iy = \sum_{j=0}^d \frac1{m_j}x^TE_jx\, y^TE_jy. \end{equation} Now suppose $C$ is a clique and $S$ is an independent set in $X$, and let $x$ and $y$ be their respective characteristic vectors. The graph $X$ is a union of graphs in the scheme, if $A_i$ is one of the graphs in this union then $A_iy=0$ otherwise $A_ix=0$. So for all $i>0$, \[ x^TA_ix\,y^TA_iy=0, \] and hence the left side of Equation~\eqref{xaya} is \begin{equation} \label{xxyy} \frac1{v}x^Tx\,y^Ty =\frac{|C|\,|S|}{v}. \end{equation} For all $j$, the matrix $E_j$ is positive semidefinite and therefore \[ x^TE_jx\, y^TE_jy \ge0. \] Consequently the right side of Equation~\eqref{xaya} is bounded below by its first term: \begin{equation} \label{xex} x^TE_0x\, y^TE_0y =\frac{1}{v^2}x^TJx\, y^TJy =\frac{|C|^2|S|^2}{v^2}. \end{equation} It follows from \eqref{xxyy} and \eqref{xex} that $|C|\,|S|\le v$, as required. If equality holds the remaining condition follows immediately.\qed We will prove a simple, but useful corollary of this result. \begin{corollary}\label{cor:ccleql} Let $X$ be a union of graphs in an association scheme with the property that the clique-coclique bound holds with equality. Assume that $C$ is a maximum clique and $S$ is a maximum independent set in $X$ with characteristic vectors $x$ and $y$ respectively. If $E_j$ are the idempotents of the association scheme, then for $j>0$ at most one of the vectors $E_jx$ and $E_jy$ is not zero. \end{corollary} \proof If $j>0$, then \[ x^TE_jx\, y^TE_jy =0. 
\] Since $E_j$ is positive semidefinite, $z^TE_jz=0$ if and only if $E_jz=0$. \qed \section{The Permutation Graph} For a positive integer $n$ define the \textsl{permutation graph} $P(n)$ to be the graph whose vertex set is the set of all permutations of an $n$-set and vertices $\pi$ and $\sigma$ are adjacent if and only if they are not intersecting, that is $\pi(i) \neq \sigma(i)$ for all $i \in \{1, \dots ,n\}$. The intersecting families of permutations are exactly the independent sets in $P(n)$. We will show that the size of the maximum independent set in $P(n)$ is $(n-1)!$ and the only sets that meet this bound are the sets $S_{i,j}$ from Equation~\ref{eq:maxindy}. Let $d(n)$ be the number of derangements of an $n$-set (that is the number permutation with no fixed points), then the graph $P(n)$ is $d(n)$-regular. The number of derangements of a set of size $n$ is defined by the following recursive formula \begin{align}\label{eq:derangements} d(n)=(n-1)\left( d(n-1)+d(n-2)\right) \end{align} with $d(1)=0$ and $d(2)=1$. The permutation graph is a vertex-transitive graph, in fact, $P(n)$ is a Cayley graph whose connection set is the set of all derangements. Since this set is closed under conjugation, $P(n)$ is a \textsl{normal} Cayley graph (for more on normal Cayley graphs see~\cite[Section 5.2]{MR1468789}). Further, the graph $P(n)$ is a union of graphs in the association scheme known as the \textsl{conjugacy class scheme} on $S(n)$. The conjugacy class scheme can be constructed for any group $G$ and is an association scheme on the elements of $G$. Using the regular representation each element of $G$ can be expressed as a $|G| \times |G|$ permutation matrix. For any conjugacy class $C$ in $G$ define $A_C$ to be the sum of the permutation matrices for all the elements in the conjugacy class. Then \[ \mathcal{A} = \{A_C: C \textrm{ a conjugacy class in } G\} \] is the conjugacy class scheme on $G$ (for more on the conjugacy class scheme see~\cite[page 54]{MR882540}). If $\mathcal{A}$ is the conjugacy class scheme for the symmetric group $S(n)$, then the adjacency matrix of $P(n)$ is the sum of $A_{C}$ over all conjugacy classes $C$ of derangements. Since $P(n)$ is the sum of graphs in an association scheme the clique-coclique bound (Inequality~\ref{ineq:cliquecoclique}) holds. With this bound, it is straightforward to get the first statement of Theorem~\ref{thm:main}. This proof of the bound in Theorem~\ref{thm:main} is included in~\cite[Theorem 5]{MR2009400} and it was also shown by Deza and Frankl~\cite{MR0439648}. \begin{theorem}\label{thm:clique} The size of a maximum clique in $P(n)$ is n. \end{theorem} \proof A clique in $P(n)$ can have no more than $n$ vertices. This is clear since the image of 1 (or any other element in $\{1,\dots,n\}$) must be distinct for each permutation in the clique. Further, each row of a Latin square of order $n$ is a permutation in $S_n$ and the set of all rows in a Latin square of order $n$ is a clique of size $n$ in $P(n)$. Since a Latin square of order $n$ exists for every $n$ the theorem holds. \qed \begin{theorem} The size of a maximum independent set in $P(n)$ is $(n-1)!$. \end{theorem} \proof Since the graph $P(n)$ is a union of graphs in an association scheme the clique-coclique bound holds for $P(n)$, that is \[ \alpha(P(n)) \leq \frac{|V(P(n))|}{\omega(P(n))}. \] {}From Theorem~\ref{thm:clique}, $\omega(P(n)) =n$ and hence \[ \alpha(P(n)) \leq (n-1)!. 
\] Finally, the sets $S_{i,j}$ from Equation~\ref{eq:maxindy} are independent sets of size $(n-1)!$. \qed \section{Eigenvalues of $P(n)$} In this section we will find two eigenvalues of the adjacency matrix of $P(n)$. Eigenvalues of the adjacency matrix of $P(n)$ will simply be refer to as the eigenvalues of $P(n)$. \begin{lemma} For all positive integers $n$ \[d(n) \quad \textrm{ and } \quad -\frac{d(n)}{n-1} \] are eigenvalues for $P(n)$. \end{lemma} \proof Consider the independent set $S_{n,n}$ as defined in Equation~\ref{eq:maxindy}. The partition \[ \{S_{n,n}, V(P(n)) \setminus S_{n,n}\} \] is the orbit partition of $S(1) \times S(n-1)$ acting on the vertices of $P(n)$, hence it is an equitable partition. The quotient graph of $P(n)$ with respect to this partition is \[ \left( \begin{array}{cc} 0 & d(n) \\ \frac{d(n)}{n-1} & d(n) - \frac{d(n)}{n-1} \end{array} \right). \] The eigenvalues of this quotient graph are $d(n)$ and $-\frac{d(n)}{n-1}$. Since the partition is equitable these are also eigenvalues for the graph $P(n)$. \qed Since $P(n)$ is a $d(n)$-regular graph, $d(n)$ is the largest eigenvalue of $P(n)$. By Equation~\ref{eq:derangements} \[ -\frac{d(n)}{n-1} = -(d(n-1)+d(n-2)) \] so this eigenvalue is also an integer. The eigenvalues of a graph can be used to find bounds on the size of the maximum independent sets. In particular, if $X$ is a $d$-regular vertex-transitive graph with least eigenvalue $\tau$ then \[ \alpha(X) \leq \frac{|V(X)|}{1-\frac{d}{\tau}}. \] This is known as the \textsl{ratio bound for independent sets} (see~\cite[Lemma 9.6.2]{MR1829620} for a proof). Ku~\cite{MR2302532} has conjectured that the least eigenvalue of $P(n)$ is $-\frac{d(n)}{n-1}$. If this is true, then the ratio bound gives the first part of Theorem~\ref{thm:main}. The eigenvalues of a graph in a conjugacy class scheme and the idempotents of the conjugacy class scheme can be determined by the character table of the group. We will state these formulas for the conjugacy class scheme on the symmetric group. It is well-known that each irreducible character of $S(n)$ corresponds to an integer partition of $n$. To denote that $\la$ is an integer partition of $n$, we write $\la \vdash n$. If $\lambda \vdash n$, we will represent the character of $S_n$ corresponding to $\lambda$ by $\chi_\la$. Each partition $\la$ of $n$ corresponds to a module, we will call this the $\la$-module. For more on the representation theory of the symmetric group see~\cite[Chapter 4]{MR1153249}. For each $\la \vdash n$ there is a principal idempotent in the scheme. This idempotent is the $n! \times n!$ matrix whose entries are given by \begin{align}\label{eq:proj} (E_\la)_{\pi,\sigma} = \frac{\chi_\la(1)}{n!} \chi_\la(\pi^{-1}\sigma) \end{align} where $\pi,\sigma \in S(n)$. For $C$ a conjugacy class in $S(n)$ the eigenvalues of $A_C$ are \[ p_C^{\,\lambda} = \frac{|C|}{\chi_\lambda(1)} \chi_\lambda (c), \quad c \in C \] where $\lambda$ ranges over all partitions of $n$ (for a proof of this see~\cite[Chapter II, Section 2.7]{MR882540}). It follows from this that the eigenvalues of $P(n)$ are \[ \sum_{C}p_C^{\,\lambda}, \quad \lambda \vdash n \] where the sum is taken over all conjugacy classes of derangements. For the partition $\la = [n]$ the value of $p_C^{\,[n]}$ is $|C|$ and thus \[ \sum_{C} p_C^{\,[n]} = \sum_{C} |C| = d(n) \] where the sum is taken over all conjugacy classes of derangements. 
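As a numerical aside (not needed for the argument), the two eigenvalues obtained above are easy to confirm for small $n$. The sketch below, which assumes a Python environment with NumPy, builds the adjacency matrix of $P(n)$ for $n=5$, checks the recursion for $d(n)$, and verifies that $d(n)$ and $-\frac{d(n)}{n-1}$ occur in the spectrum.
\begin{verbatim}
# Illustrative check of the eigenvalues d(n) and -d(n)/(n-1) of P(n), small n.
import numpy as np
from itertools import permutations

def derangement_numbers(N):
    d = [1, 0, 1] + [0] * (N - 2)            # d(0)=1, d(1)=0, d(2)=1
    for n in range(3, N + 1):
        d[n] = (n - 1) * (d[n - 1] + d[n - 2])
    return d

n = 5
perms = list(permutations(range(n)))
# adjacency: two permutations are adjacent when they disagree everywhere
A = np.array([[int(all(p[k] != q[k] for k in range(n))) for q in perms]
              for p in perms])

d = derangement_numbers(n)[n]
assert A.sum(axis=1).tolist() == [d] * len(perms)    # P(n) is d(n)-regular
eigs = np.linalg.eigvalsh(A)
for value in (d, -d / (n - 1)):
    assert np.any(np.isclose(eigs, value))
print("d(5) =", d, "and both d(n), -d(n)/(n-1) are eigenvalues of P(5)")
\end{verbatim}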
For any $x \in S(n)$ the value of $\chi_{[n-1,1]}(x)$ is one less than the number of fixed points in $x$, so for $C$ any conjugacy class of derangements $p_C^{\,[n-1,1]} = -\frac{|C|}{n-1}$. Thus \[ \sum_{C} p_{C}^{\,[n-1,1]} = \sum_{C} -\frac{|C|}{n-1} = -\frac{d(n)}{n-1} \] where, again, the sum is taken over all conjugacy classes of derangements. \section{The $(n-1)$-module} For a subset $S \subseteq S(n)$ let $v_S$ be the characteristic vector of $S$ and if $S$ is one of the independent sets $S_{i,j}$ defined in Equation~\ref{eq:maxindy}, then we will simply denote $v_S$ by $v_{i,j}$. Throughout this section $\one$ will denote the all-ones vector; the length of $\one$ will be clear from context. We will first show that for any maximum independent set $S$ the vector $v_S-\frac{1}{n}\one $ is in the module corresponding to the representation $[n-1,1]$. The next step will be to prove that the vectors $v_{i,j}-\frac{1}{n}\one$ span the $[n-1,1]$-module. Finally we show that any characteristic vector of a maximum independent set that is in this span must be one of $v_{i,j}$ for $i,j \in \{1,\dots,n\}$. \begin{lemma}\label{lem:module} Let $n$ be an integer with $n>6$. Let $S$ be a maximum independent set in $P(n)$ and $v_S$ be the characteristic vector of $S$. Then the vector $v_S-\frac{1}{n}\one$ is in the $[n-1,1]$-module. \end{lemma} \proof First, a simple calculation shows that $v_S-\frac{1}{n}\one$ is orthogonal to $\one$, so this vector has no component in the $[n]$-module. For $\la$ an integer partition of $n$ let $\chi_\la$ be the character of $S_n$ corresponding to $\la$. For $C$ a maximum clique in $P(n)$ define \[ \chi_\lambda(C) = \sum_{x \in C} \chi_\lambda(x). \] If $\chi_\lambda(C) \neq 0$, then by Equation~\ref{eq:proj} $E_\la v_C \neq 0$. By Corollary~\ref{cor:ccleql}, this implies that $E_\la v_S = 0$, which in turn implies that $E_\la (v_S -\frac{1}{n}\one)= 0$, for all partitions $\la \neq [n]$. This means that the vector $v_S -\frac{1}{n}\one$ is orthogonal to the $\la$-module. If this is true for every partition $\la \vdash n$ except $[n-1,1]$, then for every maximum independent set $S$ the vector $v_S-\frac{1}{n}\one$ is in the $[n-1,1]$-module. To prove the lemma, we will therefore show that for every $\la \vdash n$ with $\la \neq [n-1,1]$ there is a maximum clique $C$ such that $\chi_\la (C) \neq 0$. For $n>6$ there is a decomposition of the complete digraph on $n$ vertices into $n-1$ directed cycles~\cite{MR1986837}. Each of these directed cycles is a cycle of length $n$ in $S_n$. Moreover, no two cycles in the decomposition share a directed edge, so the corresponding permutations are pairwise adjacent in $P(n)$ (if two of them agreed at a point $i$, the two directed cycles would share the edge leaving $i$); since $n$-cycles have no fixed points, each of them is also adjacent to the identity. Let $T$ be the $n$-clique whose elements are the $n$-cycles in this decomposition together with the identity of $S(n)$. Since every $x \in T$, except the identity, is an $n$-cycle, the value of $\chi_\lambda(x)$ is the same for every such $x$ and every $\la \vdash n$. Thus \begin{eqnarray*} \chi_\lambda(T)&=& \sum_{x \in T} \chi_\lambda(x) \\ &=& \chi_\lambda(1) + (n-1)\chi_\lambda(x) \quad \textrm{ $x$ an $n$-cycle.} \end{eqnarray*} Further, $\chi_\lambda(x) \in \{0,1,-1\}$ for every character $\chi_\lambda$ when $x$ is an $n$-cycle (for a proof of this see~\cite{MR1093239}). Since $\chi_\lambda(1)$ is positive, if $\chi_\lambda(T)=0$, then $\chi_\lambda(x) =-1$ and $\chi_\lambda(1) = n-1$. For $n>6$ the only partitions of $n$ with $\chi_\lambda(1) = n-1$ are $[n-1,1]$ and $[2,1^{n-2}]$. If $n$ is even, then for $x$ an $n$-cycle $\chi_{[2,1^{n-2}]}(x) = 1$ so $\lambda$ must be $[n-
1,1]$. Finally, if $n$ is odd we need to prove that $\lambda$ is $[n-1,1]$. To do this we construct a clique $T$ with $\chi_{[2,1^{n-2}]}(T) \neq 0$. Consider an $n \times n$ Latin square with the first row $(1,2,\dots ,n)$ and the second row $(2,1,n,3,4, \dots, n-1)$. Such a Latin square exists since any Latin rectangle can be extended to a Latin square~\cite{MR0013111}. The rows of this Latin square will the be permutations in our clique. The first row corresponds to the identity permutation, the second to an odd permutation. The first row will contribute $n-1$ to the sum $\chi_{[2,1^{n-2}]}(T)$ and the second row will contribute 1. Each of the last $n-2$ permutations will contribute no less than $-1$ to the sum so the sum cannot be 0. \qed Next we give a basis for the $[n-1,1]$-module. \begin{lemma}\label{lem:basis} For any $i,j \in \{1, \dots ,n-1\}$ let $v_{i,j}$ denote the characteristic vector of the independent set $S_{i,j} =\{\pi \in S(n) : \pi(i)=j\}$. The vectors $v_{i,j}-\frac{1}{n}\one$ form a basis for the $[n-1,1]$-module. \end{lemma} \proof From Lemma~\ref{lem:module}, the vectors $v_{i,j}-\frac{1}{n}\one$ are elements in the $[n-1,1]$-module. The dimension of the $[n-1,1]$-module is $(n-1)^2$, so we only need to show that these vectors are linearly independent. Since $\one \not\in \mathrm{span} \{v_{i,j}: i,j \in\{1,\dots,n-1\} \}$, it is enough to show that the vectors $v_{i,j}$ are linearly independent. Order the pairs in $\{1, \dots ,n-1\}$ so that pair $(i,j)$ occurs before $(k,\ell)$ if $i < k$ or if $i=k$ and $j < \ell$. Let $H$ be a $01$-matrix with size $n! \times (n-1)^2$ defined as follows: the columns are indexed by the pairs from the $(n-1)$-set in the above ordering and the rows are indexed by all the permutations of an $n$-set. The $(\pi, (i,j))$-entry of $H$ is 1 if and only if $\pi(i)=j$. Let $I_{n}$ be the $n\times n$ identity matrix and $J_n$ the $n\times n$ all-ones matrix. The adjacency matrix of the complete graph on $n$ vertices is $K_n = J_n -I_n$. It is not hard to see with the given ordering on the pairs that \[ H^T H = (n-1)!I_{(n-1)^2} + (n-2)! (K_{n-1}\otimes K_{n-1}). \] Since 0 is not an eigenvalue of this matrix, it has rank $(n-1)^2$. Finally, the rank of $H$ is equal to the rank of $H^T H$ and the result holds. \qed \section{Proof of Theorem~\ref{thm:main}} Let $H$ be the $n! \times (n-1)^2$ matrix whose rows are the elements of the symmetric group on $n$ points and columns are the ordered pairs from $\{1,\dots,n-1\}$ with the $(\pi, (i,j))$ position of $H$ equal to 1 if $\pi(i)=j$ and zero otherwise. Denote the columns of $H$ by $h_{i,j}$. By Lemma~\ref{lem:basis} the vectors \[ h_{i,j} -\frac{1}{n}\one, \quad i,j \leq n-1 \] are a basis for the $[n-1,1]$-module. By Lemma~\ref{lem:module}, for any independent set $S$, the vector $v_S - \frac{1}{n}\one$ is in the $[n-1,1]$-module. In particular, it is in \[ \mathrm{span}\left\{ h_{i,j} -\frac{1}{n}\one : i,j\in \{1,\dots,n-1\} \right\}. \] This implies that the characteristic vector of any maximal independent set is in the span of column space of $H$ and $\one$. Let $\sigma$ be the identity permutation on the $n$-set and let $N(\sigma)$ denote the set of permutations adjacent to $\sigma$ in $P(n)$ (these are the derangements). 
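Before analysing submatrices of $H$, we note that the structure of $H$ is easy to explore by computer for small $n$. The sketch below (an illustration only, assuming a Python environment with NumPy) confirms the identity for $H^TH$ from Lemma~\ref{lem:basis} and the rank $(n-1)^2$ for $n=5$.
\begin{verbatim}
# Illustrative check of H^T H = (n-1)! I + (n-2)! (K_{n-1} x K_{n-1}).
import numpy as np
from itertools import permutations
from math import factorial

n = 5
perms = list(permutations(range(1, n + 1)))
# columns indexed by pairs (i,j), i,j in {1,...,n-1}, i as the outer index
pairs = [(i, j) for i in range(1, n) for j in range(1, n)]
H = np.array([[int(p[i - 1] == j) for (i, j) in pairs] for p in perms])

K = np.ones((n - 1, n - 1), dtype=int) - np.eye(n - 1, dtype=int)  # K_{n-1}
expected = factorial(n - 1) * np.eye((n - 1) ** 2, dtype=int) \
         + factorial(n - 2) * np.kron(K, K)
assert (H.T @ H == expected).all()
assert np.linalg.matrix_rank(H) == (n - 1) ** 2
print("H^T H has the stated form and rank(H) =", (n - 1) ** 2)
\end{verbatim}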
Consider three submatrices of $H$: \begin{enumerate}[(a)] \item $N$, the submatrix of $H$ whose rows are the permutations in $N(\sigma)$, \item $M$, the submatrix of $N$ whose columns are all the pairs $(i,j)$ with $i,j \in \{1,\dots,n-1\}$ and $i \neq j$, \item $W$, the submatrix of $H$ whose columns are all the pairs $(i,i)$ with $i\in \{ 1,\dots,n-1\}$. \end{enumerate} If the columns of $H$ are arranged so that the first $n-1$ columns correspond to the pairs $(i,i)$ for $i =1,\dots,n-1$, and the rows are arranged so the first row corresponds to the permutation $\sigma$ and the next $d(n)$ rows correspond to the neighbours of $\sigma$, then $H$ has the following block structure: \begin{center} \begin{tabular}{|c|c|} \hline 1 & 0 \\ \hline 0 & $M$ \\ \hline $H_1$ & $H_2$ \\ \hline \end{tabular} \end{center} and the first $n-1$ columns form the matrix $W$. \begin{lemma} For all $n$ the rank of $M$ is $(n-1)(n-2)$. \end{lemma} \proof The matrix $K_{n-1} \otimes I_{n-2}$ has rank $(n-1)(n-2)$; we show it is a submatrix of $M$. To find this submatrix, we reorder the rows and columns of $M$. Order the pairs from $\{1,\dots,n-1\}$ so that the pair $(i, i+j \pmod{n-1})$ occurs before $(k, k+\ell\pmod{n-1})$ if $i < k$ or $i=k$ and $j < \ell$. Order the columns of $M$ with this ordering. Next we define an ordering on a subset of derangements. Let $a \in \{1, \dots ,n-1\}$ and $b \in \{1, \dots ,n-2\}$. Define a permutation $\pi_{a,b}$ of $\{1, \dots ,n\}$ by setting, for $i\in \{1, \dots ,n-1\}$, \[ \pi_{a,b}(i) = \left\{ \begin{array}{ll} n & \textrm{if } a = i; \\ i+b & \textrm{if } a \neq i \textrm{ and } i+b < n; \\ i+b+1 \pmod{n} & \textrm{if } a \neq i \textrm{ and } i+b \geq n. \end{array} \right. \] Note that the value of $\pi_{a,b}(n)$ is forced. Order these permutations so that $\pi_{a_1,b_1}$ occurs before $\pi_{a_2,b_2}$ if $a_1 < a_2$ or $a_1 = a_2$ and $b_1 < b_2$. Consider the submatrix of $M$ induced by the rows corresponding to the permutations $\pi_{a,b}$ for $a \in \{1, \dots ,n-1\}$ and $b \in \{1, \dots ,n-2\}$. This submatrix of $M$ is $K_{n-1} \otimes I_{n-2}$. Since $M$ has exactly $(n-1)(n-2)$ columns and this submatrix has full rank, the rank of $M$ is $(n-1)(n-2)$. \qed \begin{table} \begin{center} \begin{tabular}{cc cc:cc:cc} $(a,b)$& $\pi_{a,b}$ & 1\sto 2 & 1\sto 3 & 2\sto 3 & 2\sto 1 & 3\sto 1 & 3\sto 2 \\ \cline{3-8} (1,1) & (1,4,2,3)&\multicolumn{1}{|c} 0&0 & 1&0 & 1&\multicolumn{1}{c|} 0 \\ (1,2) & (1,4,3,2)&\multicolumn{1}{|c} 0&0 & 0&1 & 0& \multicolumn{1}{c|}1 \\ \cdashline{3-8} (2,1) & (1,2,4,3)&\multicolumn{1}{|c} 1&0 & 0&0 & 1& \multicolumn{1}{c|}0 \\ (2,2) & (1,3,2,4)&\multicolumn{1}{|c} 0&1 & 0&0 & 0& \multicolumn{1}{c|}1 \\ \cdashline{3-8} (3,1) & (1,2,3,4)&\multicolumn{1}{|c} 1&0 & 1&0 & 0& \multicolumn{1}{c|}0 \\ (3,2) & (1,3,4,2)&\multicolumn{1}{|c} 0&1 & 0&1 & 0& \multicolumn{1}{c|}0 \\ \cline{3-8} \end{tabular} \end{center} \caption{The submatrix of $M$ for $n=4$.} \end{table} \begin{lemma}\label{lem:kernel} If $y$ is in the kernel of $N$, then $Hy$ lies in the column space of $W$. \end{lemma} \proof Assume $y$ is in the kernel of $N$. Let $y_M$ denote the vector of length $(n-1)(n-2)$ formed by taking the final $(n-1)(n-2)$ entries of $y$. Then \[ 0 = Ny = [0| M ]y = My_M. \] Since $M$ has rank $(n-1)(n-2)$, the last $(n-1)(n-2)$ entries of $y$ are all 0. Thus $Hy$ is in the column space of $W$. \qed Let $[N|1]$ be the $d(n) \times ((n-1)^2+1)$ matrix with a column of ones added to $N$, and $[M|1]$ the $d(n) \times ((n-1)(n-2)+1)$ matrix with a column of ones added to $M$.
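As an aside, the rank of $M$ established above is also easy to confirm numerically for small $n$; the brief sketch below is an illustration only and assumes a Python environment with NumPy.
\begin{verbatim}
# Illustrative check that M (rows: derangements, columns: pairs (i,j) with
# i,j <= n-1 and i != j) has full column rank (n-1)(n-2).
import numpy as np
from itertools import permutations

for n in (4, 5, 6):
    derangements = [p for p in permutations(range(1, n + 1))
                    if all(p[k] != k + 1 for k in range(n))]
    cols = [(i, j) for i in range(1, n) for j in range(1, n) if i != j]
    M = np.array([[int(p[i - 1] == j) for (i, j) in cols]
                  for p in derangements])
    assert np.linalg.matrix_rank(M) == (n - 1) * (n - 2)
print("rank(M) = (n-1)(n-2) confirmed for n = 4, 5, 6")
\end{verbatim}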
As above, for a length $(n-1)^2+1$ vector $y$, the vector formed by the last $(n-1)(n-2)+1$ entries of $y$ will be denoted by $y_{[M|1]}$. \begin{lemma}\label{lem:kernelwith1} If $y$ is in the kernel of $[N|1]$, then $y_{[M|1]}$ is a scalar multiple of \[ (1,1,\dots,1,-(n-2)). \] \end{lemma} \proof As in the previous lemma, \[ 0 = [N|1]y = [0|M|1]y = [M|1]y_{[M|1]}. \] Since $M$ has full column rank, the dimension of the kernel of $[M|1]$ is at most 1. Each row of $M$ has exactly $n-2$ entries equal to one and all other entries zero, so the vector $(1,1,\dots,1,-(n-2))$ is in the kernel of $[M|1]$ and is a basis for the kernel of $[M|1]$. \qed We now have all the tools to prove the second statement of Theorem~\ref{thm:main}. \noindent\textsl{Proof of Theorem~\ref{thm:main}. } Let $S$ be an independent set of size $(n-1)!$ in $P(n)$. Assume that the identity permutation $\sigma$ is in $S$ and let $v_S$ be the characteristic vector of $S$. By Lemma~\ref{lem:module}, $v_S$ is in the $\textrm{span}\{\one, h_{i,j}: i,j \leq n-1 \}$. We consider two cases, first when $v_S$ is in $\textrm{span}\{h_{i,j}: i,j \leq n-1\}$ and second when it is not. {\bf case 1.} Assume $v_S \in \textrm{span}\{h_{i,j} : i,j=1,\dots,n-1\}$, or, equivalently, that $v_S=Hy$ for some vector $y$. Since $S$ is an independent set no neighbours of $\sigma$ can be in $S$ and $Ny=0$. By Lemma~\ref{lem:kernel}, $v_S=Wx$ for some vector $x$. For any $i \in \{1, \dots ,n-1\}$ assume the $i$-th entry of the vector $x$ is non-zero. As $n\geq 3$, there is a permutation $\pi$ with $\pi(i)=i$ and no other fixed points. This means that the entry in the row corresponding to $\pi$ of $v_S$ must be equal to the $i$-th entry of $x$. Since $v_S$ is a 01-vector, $x$ must also be a 01-vector. Further, since $n \geq 4$, for every pair of distinct $i,j \in \{1, \dots ,n-1\}$ there is a permutation $\pi$ that fixes $i$ and $j$ but no other points. If the $i$-th and $j$-th entries of $x$ are both non-zero then the entry in the row corresponding to $\pi$ of $v_S$ is 2. Since $v_S$ must be a 01-vector, there is only one non-zero entry in $x$. Thus $v_S$ is one of the columns of $W$ and $S = S_{i,i}$ for some $i \in \{1,\dots,n-1\}$. {\bf case 2.} Assume $v_S$ is not in the column space of $H$. Equivalently, there is some vector $y$ such that $v_S = [H|1]y = Hy_H + c\one$ where $y_H$ denotes the vector formed from the first $(n-1)^2$ entries of $y$ and $c$ is a non-zero constant. As in case 1, no neighbours of $\sigma$ are in $S$ so $[N|1]y=0$. By Lemma~\ref{lem:kernelwith1} there is a non-zero $c$ such that \[ y_{[M|1]} = -\frac{c}{(n-2)}(1,1,\dots,1,-(n-2)). \] This determines all entries, upto multiplication by a constant, of $y$ except the first $n-1$. For each $i \leq n-1$ there is a permutation $\pi$ with $\pi(i)=i$ and no other fixed points. If $y_i$ is the $i$-th entry of $y$ then the entry in $v_S$ corresponding to $\pi$ is \[ y_i + (n-3)\left(-\frac{c}{n-2}\right) +c \] which must be either 0 or 1. This implies that \[ y_i = -\frac{c}{n-2} \textrm{ or } y_i =1 - \frac{c}{n-2}. \] Since $n\geq 4$ for any distinct pair $i,j \in \{1,\dots,n-1\}$ there is a permutation that fixes both $i$ and $j$ and no other points. If both $y_i$ and $y_j$ are equal to $1-\frac{c}{n-2}$, then the entry in the vector $v_S$ which corresponds to this permutation is \[ 2\left(1-\frac{c}{n-2}\right) + (n-4)\left(-\frac{c}{n-2}\right) + c =2 \] which is a contradiction since $v_S$ is a 01-vector. 
Thus at most one of the first $n-1$ entries of $y$ is $1-\frac{c}{n-2}$. Next, assume that exactly one of the first $n-1$ entries is $1-\frac{c}{n-2}$. Since $\sigma \in S$, the entry of $v_S$ corresponding to $\sigma$, which is the sum of the first $n-1$ entries of $y$ plus $c$, must equal 1. But this means that \[ 1-\frac{c}{n-2} + (n-2)\left(-\frac{c}{n-2}\right) + c =1, \] which implies that $c=0$, a contradiction. Hence, all the entries of $y$, except the last, are $-\frac{c}{n-2}$. Using again that the entry of $v_S$ corresponding to $\sigma$ is 1, we get \[ (n-1)\left(-\frac{c}{n-2}\right) + c = 1, \] which implies that $c=-(n-2)$. For case 2 there is therefore only one possibility for $y$, namely \[ y =(1,1,\dots,1,-(n-2)). \] Every row of $[H|1]$ that corresponds to a permutation mapping $n$ to $n$ has exactly $n-1$ entries equal to one among its first $(n-1)^2$ entries, and all the other rows have exactly $n-2$ such entries; in every row the final entry, coming from the appended column of ones, equals 1. {}From this it follows that $[H|1]y$ has entry $(n-1)-(n-2)=1$ on the rows of the first kind and $(n-2)-(n-2)=0$ on the others, so $[H|1]y =v_S$ is the characteristic vector of the set $S_{n,n}$. \qed \section{Further Work} We have only considered the simplest version of the Erd\H{o}s-Ko-Rado theorem. The full version of the Erd\H{o}s-Ko-Rado theorem is concerned with $t$-intersecting subsets. For an integer $t$, subsets $A,B \subseteq \{1,\dots,n\}$ are \textsl{$t$-intersecting} if $|A \cap B| \geq t$. \begin{theorem}(Erd\H{o}s-Ko-Rado~\cite{MR25:3839})\label{thm:fullekr} Let $t\leq k \leq n$ be positive integers. Let $\cA$ be a family of pairwise $t$-intersecting $k$-subsets of $\{1,\dots,n\}$. There exists a function $f(k,t)$ such that for $n \geq f(k,t)$ \[|\cA| \leq {n-t \choose k-t}.\] Moreover, a $t$-intersecting family $\cA$ meets this bound if and only if $\cA$ is the collection of all $k$-subsets that contain a fixed $t$-subset. \end{theorem} Permutations $\pi, \sigma \in S(n)$ are \textsl{$t$-intersecting} if \[ |\{i \in \{1,\dots,n\} : \pi(i)=\sigma(i)\}| \geq t. \] Again, there is an obvious family of candidates for the maximum system of $t$-intersecting permutations. Assume \[ A = \{(x_i,y_i) : i=1,\dots,t \;\mathrm{ and }\; x_i,y_i \in \{1,\dots,n\} \} \] with $x_i \neq x_j$ and $y_i \neq y_j$ for all $i\neq j$. Then the family \[ S_{A} = \{\pi : \pi(x_i) = y_i \;\mathrm{ for \;all }\; (x_i,y_i) \in A\} \] is $t$-intersecting and $|S_A|= (n-t)!$. Deza and Frankl~\cite{MR0439648} conjecture that Theorem~\ref{thm:fullekr} can also be extended to families of $t$-intersecting permutations. \begin{conj}(Deza and Frankl~\cite{MR0439648}) For $n$ sufficiently large, the size of the maximum set of permutations of an $n$-set that are pairwise $t$-intersecting is $(n-t)!$. \end{conj} Cameron and Ku note that their method cannot be extended to $t$-intersecting permutations. It is possible that the proof presented in this paper can be extended as follows. Define a graph $P_t(n)$ whose vertices are the permutations of an $n$-set, where two vertices are adjacent if they agree on no more than $t$ points. Note that $P(n) = P_0(n)$. The graph $P_t(n)$ is a sum of all $A_C$ where $C$ is a conjugacy class in which the elements have no more than $t$ fixed points. The graph $P_t(n)$ is vertex transitive, so we have that \[ \alpha(P_t(n))\omega(P_t(n)) \leq n!. \] This is Equation 3 in Deza and Frankl~\cite{MR0439648}. They also note that if there exists a sharply $2$-transitive set of permutations of $\{1,\dots,n\}$ (for example the affine group $AGL(1,n)$ when $n$ is a prime power), then there is a clique of size $n(n-1)$ in $P_1(n)$ and we obtain the corresponding bound for $2$-intersecting permutations.
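To illustrate the last remark: for a prime power $n$, the affine maps $x \mapsto ax+b$ over the field of order $n$ form a sharply $2$-transitive set, hence a clique of size $n(n-1)$ in $P_1(n)$. The sketch below (an illustration only, assuming a Python environment with the standard library) checks this for $n=5$ and prints the resulting clique-coclique bound $(n-2)!$.
\begin{verbatim}
# Illustrative check: AGL(1,5) = { x -> a*x + b mod 5, a != 0 } is a clique
# of size n(n-1) in P_1(n), so alpha(P_1(n)) <= n!/(n(n-1)) = (n-2)!.
from math import factorial

n = 5
affine = [tuple((a * x + b) % n for x in range(n))
          for a in range(1, n) for b in range(n)]

def agreements(p, q):
    return sum(int(x == y) for x, y in zip(p, q))

assert len(affine) == n * (n - 1)
assert all(agreements(p, q) <= 1
           for k, p in enumerate(affine) for q in affine[k + 1:])
print("clique of size", len(affine),
      "so alpha(P_1(%d)) <= %d" % (n, factorial(n) // (n * (n - 1))))
\end{verbatim}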
We conjecture that the shifted characteristic vector of a $2$-intersecting permutation family lies in a union of modules. Define the \textsl{depth} of a partition $\lambda \vdash n$ with $\la=(\la_1,\la_2,\dots,\la_k)$ to be $n-\la_1$. \begin{conj} Let $v_S$ be the characteristic vector of a maximum independent set in $P_1(n)$. Then the vector $v_S -\frac{|S|}{n!}\one$ lies in the sum of the modules whose partitions have depth no more than 2. That is the sum of the following modules \[ [n],\quad [n-1,1],\quad [n-2,2], \quad [n-2,1,1]. \] \end{conj} The dimensions of the sum of these modules and the dimension of the span of $v_{A} -\frac{|S|}{n!}\one$ agree for $n=4,5,6$ where $A=\{(i,j),(k,\ell)\}$. This conjecture can be generalized to $t$-intersecting permutation systems. \begin{conj} Let $v_S$ be the characteristic vector of a maximum independent set in $P_t(n)$. Then the vector $v_S -\frac{|S|}{n!}\one$ lies in the sum of the modules whose partitions have depth no more than $t$. \end{conj} Finally, the proof of the Erd\H{o}s-Ko-Rado theorem for permutations given in this paper is an application of a method that has been used to prove the Erd\H{o}s-Ko-Rado theorem for set systems and its analogue for intersecting vector spaces over a finite field. Another direction for this work is to apply this method to other objects such as perfect matchings and uniform partitions with a plan of developing a more general theory of Erd\H{o}s-Ko-Rado theorems.
\section{Introduction} This paper is concerned with solutions of the stationary incompressible Navier-Stokes equations \begin{equation} \label{SNS1} \begin{cases} -\De u + (u \cdot \nabla) u + \nabla p =0, \\ \div u = 0 \end{cases} \end{equation} in $\Om= \mathbb{R}^3 \setminus \{ 0\}$ that satisfies% \begin{equation} \label{SNS2} |u(x)| \le \frac {C_0}{|x|}, \quad (x \in \Om), \end{equation} for some $C_0>0$. Here $u : \Om \to \mathbb{R}^3$ is the velocity field and $p:\Om \to \mathbb{R}$ is the pressure. The usual regularity theory implies that $u$ is smooth. In fact, by \cite{Sverak-Tsai}, \begin{equation} \label{SNS3} |\nabla^k u(x)| \le \frac {C_k}{|x|^{k+1}}, \quad (x \in \Om), \end{equation} for some $C_k= C_k(C_0)$, for all $k \in \mathbb{N}$. The system \eqref{SNS1} enjoys the \emph{scaling property}: If $(u,p)$ is a solution pair in $\Om$, then for any $\la>0$, \[ u^\la(x)=\la u(\la x), \quad p^\la (x) = \la^2 p(\la x) \] is also a solution pair in $\Om$. A solution pair $(u,p)$ is called \emph{self-similar} if $(u^\la,p^\la)=(u,p)$ for all $\la>0$. In this case, $u$ and $p$ are homogeneous of degree $-1$ and $-2$, respectively, \[ u(x) = \frac 1{|x|} u\bke{ \frac x{|x|}},\quad p(x) = \frac 1{|x|^2} p\bke{ \frac x{|x|}}. \] A solution pair $(u,p)$ is called \emph{discretely self-similar (DSS)} if $(u^\la,p^\la)=(u,p)$ for one $\la>1$. In this case, $u$ may not be minus one homogeneous, but if $u \in H^1_\text{loc}(\Om)$ then it enjoys the estimates \eqref{SNS2} and \eqref{SNS3}. It is determined by its value in the annulus $B_\la \setminus B_1$, where $B_r = \bket{x \in \mathbb{R}^3: \ |x| <r }$. A special family of solutions of \eqref{SNS1}-\eqref{SNS2} is the \emph{Landau solutions} or \emph{Slezkin-Landau solutions}, computed by Slezkin \cite{Slezkin} in 1934 (see \cite{Galaktionov} for English translation), and by Landau in 1944 \cite{Landau}. Landau's computation can be found in standard textbooks \cite[\S23]{Landau-Lifshitz} and \cite[\S4.6]{Bat}. The solutions were also independently found by Squire \cite{Squire} in 1951, and more recently revisited in Tian and Xin \cite{Tian-Xin} and Cannone and Karch \cite{MR2034160}. These solutions are self-similar and axisymmetric with no swirl. In spherical coordinates $(\rho, \theta, \phi)$ with \begin{equation} \label{spherical-coordinates} (x_1,x_2,x_3) = (\rho \sin \phi \cos \th, \rho \sin \phi \sin \th, \rho \cos \phi), \end{equation} and basis vectors \[ e_\rho = \frac x\rho, \quad e_\theta = (-\sin \th, \cos \th, 0), \quad e_\phi = e_\th \times e_\rho, \] a function $f$ is called \emph{axisymmetric} if $f=f(\rho, \phi)$ is independent of $\th$, and a vector field $u$ is \emph{axisymmetric} if it is of the form \[ u = u_\rho(\rho, \phi) e_\rho + u_\th(\rho, \phi) e_\th + u_\phi(\rho, \phi) e_\phi \] with components $u_\rho$, $u_\th$ and $u_\phi$ independent of $\th$. It has \emph{no swirl} if the swirl component $u_\th$ is zero. Both classes of axisymmetric flows and axisymmetric flows with no swirl are invariant under \eqref{SNS1}: If $(u,p)$ is axisymmetric, then the left side of \eqref{SNS1} is also axisymmetric. Similarly if $u$ has no swirl. Thus these two classes are preserved under time evolution if we add $\partial_t u$ to the left side of \eqref{SNS1}$_1$. 
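For reference, the coordinate conventions in \eqref{spherical-coordinates} and the accompanying basis vectors can be checked numerically. The following sketch is an illustration only; it assumes a Python environment with NumPy, and the sample point is arbitrary. It verifies that $x=\rho e_\rho$ and that $(e_\rho, e_\theta, e_\phi)$ is an orthonormal frame with $e_\phi = e_\th \times e_\rho$.
\begin{verbatim}
# Illustrative numerical check of the spherical-coordinate conventions above.
import numpy as np

rho, theta, phi = 2.0, 0.7, 1.2        # arbitrary sample point
x = rho * np.array([np.sin(phi) * np.cos(theta),
                    np.sin(phi) * np.sin(theta),
                    np.cos(phi)])

e_rho   = x / rho
e_theta = np.array([-np.sin(theta), np.cos(theta), 0.0])
e_phi   = np.cross(e_theta, e_rho)

frame = np.column_stack([e_rho, e_theta, e_phi])
assert np.allclose(frame.T @ frame, np.eye(3))          # orthonormal frame
assert np.allclose(e_phi, [np.cos(phi) * np.cos(theta),
                           np.cos(phi) * np.sin(theta),
                           -np.sin(phi)])                # usual polar unit vector
print("coordinate frame checks out at the sample point")
\end{verbatim}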
The Landau solutions, denoted by $U^a$ with parameter $a>1$, are \begin{equation} \label{Landau-sol} { U^a = \frac 2{\rho}\bke{\frac {a^2-1}{(a-\cos \phi)^2} -1} e_\rho + 0 e_\th- \frac {2\sin \phi} {\rho(a-\cos \phi)} e_\phi,} \quad P^a =\frac {4(a\cos \phi -1)}{\rho^2(a-\cos \phi)^2}. \end{equation} It can also be written as \begin{equation} \label{Landau-Psi} U^a = \curl (\Psi^a e_\th), \quad \Psi^a = \frac {2\sin \phi }{a-\cos \phi}. \end{equation} The Landau solution $U^a$ satisfies the (inhomogeneous) Navier-Stokes equations with delta force at the origin, \begin{equation} \label{Landau-eq} -\De u + (u \cdot \nabla) u + \nabla p ={\vec\be \de_0}, \quad \div u = 0 \end{equation} in $\mathbb{R}^3$, where $\de_0$ is the Dirac delta function at the origin, $\vec\be= \beta e_3$ and \[ \beta= \beta_0(a) = 16 \pi \bke{a + \frac 12 a^2 \log \frac {a-1}{a+1} + \frac {4a}{3(a^2-1)}}, \] see \cite[Lemma 8.2]{nslec}. The function $a\in (1,\infty] \mapsto \beta_0 \in [0,\infty)$ is strictly decreasing, one to one and onto. Note that $\beta_0(a)$ and the bound $C_0$ in \eqref{SNS2} for $U^a$ go to infinity as $a \to 1_+$. In the literature, instead of $a$, one sometimes uses $\vec\beta \in \mathbb{R}^3$ as the parameter and denotes the Landau solution as $U^{\vec\be}$ or $U^\be$. The basis $\{e_1,e_2,e_3\}$ is then changed accordingly so that $e_3$ is in the direction of $\vec\beta$. Landau solutions appear as the asymptotic leading terms of solutions of \eqref{SNS1} in exterior domains in $\mathbb{R}^3$: Nazarov and Pileckas \cite{NP00} derived asymptotic expansion for solutions of \eqref{SNS1} satisfying the bounds \eqref{SNS2}-\eqref{SNS3} under smallness conditions, but the leading term was less explicit. Korolev and \v Sver\'ak \cite{Korolev-Sverak} showed that the leading term of a small solution must be a Landau solution. This result was extended to small time-periodic solutions by Kang, Miura and Tsai \cite{KMT12}, identifying that the leading spatial term is a fixed time-independent Landau solution. Decaster and Iftimie \cite{MR3610933} extends the asymptotic results of \cite{NP00,Korolev-Sverak} to stationary solutions in an exterior domain with minus-three homogeneous force fields. The existence of solutions with minus-three homogeneous \emph{axisymmetric} force fields in the whole space $\mathbb{R}^3$ is addressed by Shi \cite{Shi}. The Landau solutions are also useful to describe the local behavior near a singularity. Indeed, it was proved by Miura and Tsai \cite{MT12} that the leading term of point singularity like $|x|^{-1}$ at $x=0$ of the Navier-Stokes flow is also given by a Landau solution provided it is small enough. See Hishida \cite{Hishida} for a survey including stationary Navier–Stokes flows around a rotating body. The papers \cite{Slezkin, Landau, Squire, Tian-Xin, MR2034160} study self-similar solutions of \eqref{SNS1} in the axisymmetric class. In the axisymmetric class, \eqref{SNS1} is reduced to an ODE system, and can be analyzed by ODE techniques. This has been extended by Li, Li and Yan \cite{MR3744383,MR3770045,Li-Li-Yan3} to axisymmetric self-similar solutions with point singularities at the north and south poles on $\mathbb{S}^2$. Without assuming axisymmetry, it has been shown by \v Sver\'ak \cite{Sverak2011} that, if a solution of \eqref{SNS1} in $\Om$ is self-similar, then it must be a Landau solution. His analysis reduces \eqref{SNS1} to a PDE system on the unit sphere $\mathbb{S}^2$ using the self-similarity assumption. 
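The explicit formulas \eqref{Landau-sol}-\eqref{Landau-Psi} can be verified symbolically. The sketch below is an illustration only and assumes a Python environment with SymPy; it checks that $U^a$ is divergence free and that its components agree with $\curl(\Psi^a e_\th)$, using the standard axisymmetric spherical-coordinate expressions for the divergence and the curl (recalled in \eqref{eq2.9} below).
\begin{verbatim}
# Illustrative symbolic check of the Landau solution formulas:
# div U^a = 0 and U^a = curl(Psi^a e_theta), away from the origin.
import sympy as sp

rho, phi, a = sp.symbols('rho phi a', positive=True)
c, s = sp.cos(phi), sp.sin(phi)

U_rho = (2 / rho) * ((a**2 - 1) / (a - c)**2 - 1)
U_phi = -2 * s / (rho * (a - c))
Psi   = 2 * s / (a - c)

# axisymmetric divergence in spherical coordinates
div = sp.diff(rho**2 * U_rho, rho) / rho**2 \
    + sp.diff(s * U_phi, phi) / (rho * s)
assert sp.simplify(div) == 0

# components of curl(Psi e_theta) for an axisymmetric Psi
curl_rho = sp.diff(Psi * s, phi) / (rho * s)
curl_phi = -sp.diff(rho * Psi, rho) / rho
assert sp.simplify(curl_rho - U_rho) == 0
assert sp.simplify(curl_phi - U_phi) == 0
print("U^a is divergence free and equals curl(Psi^a e_theta)")
\end{verbatim}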
To take one step further, we would like to ask: Do we have any strictly DSS solution of \eqref{SNS1}? This is motivated by the following. \begin{conjecture} A nonzero solution of \eqref{SNS1} in $\mathbb{R}^3 \setminus \{0\}$ satisfying the bound \eqref{SNS2} must be a Landau solution. \end{conjecture} In this conjecture, no assumption is made on self-similarity or axisymmetry. The conjecture is first formulated in \cite{Sverak2011}, and is known to be true if the constant $C_0$ in \eqref{SNS2} is sufficiently small by Korolev-\v Sver\'ak \cite{Korolev-Sverak} and Miura-Tsai \cite{MT12} via different proofs. For ``large'' solutions, the paper \cite{Sverak2011} by \v Sver\'ak excludes a counterexample in the class of self-similar flows. One is naturally led to look for a counterexample in the class of DSS flows with axisymmetry. Such solutions may come in two kinds. Solutions of the first kind occur as bifurcations of Landau solutions. Solutions of the second kind occur in isolated island(s) in the class of DSS axisymmetric flows and stay away from the Landau solutions. This paper investigates the first kind only. There is a rich literature on bifurcations of fluid PDEs. The most relevant to us is that of the Couette-Taylor flows by Velte \cite{Velte66}, presented in Temam \cite[II.4]{Temam} (see also \cite{nslec}). Under the ansatz of discrete self-similarity with DSS factor $\la>1$, a solution $u$ is determined by its value in the region \[ \bket{x \in \mathbb{R}^3: \ 1 \le |x| \le \la }, \] with the boundary condition \[ u( x) = \la u(\la x),\quad ( |x|=1). \] Assuming axisymmetry in spherical coordinates $(\rho, \theta, \phi)$, the components $u_\rho$, $u_\th$, and $u_\phi$ do not depend on $\th$, and there is an axisymmetric stream function $\psi$ so that $u_\rho e_\rho + u_\phi e_\phi=\curl (\psi e_\th)$. Introduce the new variable \[ \tau = \ln \rho, \quad 0 \le \tau \le \ln \la, \] and let \[ \underline\psi(\tau,\phi):=\psi(\rho,\phi),\quad \chi(\tau,\phi):=\rho u_{\theta}(\rho,\phi) . \] They satisfy periodic boundary condition in $\tau$ and Dirichlet/Neumann boundary conditions in $\phi$. The solution pair corresponding to a Landau solution $U^a$ is $(\Psi^a,0)$. We study the nonlinear equations for perturbations of $(\Psi^a,0)$. The null space of its linearized operator always contains $(\frac{\partial }{\partial a} \Psi_a,0)$. Looking for saddle-node type bifurcations, we try to find an additional element of the null space by varying the parameters $a$ and $\la$. Recall that $\beta_0(a) \to \infty $ as $ a \to 1_+$. Bifurcation does not occur for $a$ sufficiently large by \cite{Korolev-Sverak,MT12}, and is more likely to happen for $a$ close to $1$. We will also consider the general eigenvalue problems. Because the functions are periodic in the radial variable $\tau$, we can consider the restrictions of these eigenvalue problems to Fourier subspaces of $\tau$. This paper contains both analytical and numerical results. Analytically, we show that the inclusion of the swirl component $u_\th$ does not enhance the bifurcation. In other words, if a bifurcation does occur, it can happen with $u_\th=0$. Numerically, we show evidences that bifurcation does not occur if $a\ge 1.01$. Therefore, our results suggest that there is no axisymmetric discretely self-similar solution curve emanated from a Landau solution, when $a\ge 1.01$. The rest of the paper is organized as follows. We first comment on time dependent settings in Subsection \ref{sec1.1}. 
In Section \ref{sec2}, we deduce the nonlinear equations for DSS axisymmetric solutions in terms of the swirl component and the stream function. In Section \ref{sec3}, we identify the linearization of the above system and rewrite them in similarity variables. By restricting to Fourier subspaces of the radial variable, we change its eigenvalue problem to eigenvalue problems for ordinary differential operators. In Section \ref{sec4}, we analyze the eigenvalue problems for the swirl operator, and prove Theorem \ref{eig.sM.thm}, which implies that the swirl component does not enhance bifurcation. In Section \ref{sec5}, weanalyze the eigenvalue problems for the stream operator, and provide numerical evidence for no bifurcation of Landau solutions in the class of DSS axisymmetric flows with no swirl when $a\geq 1.01$. \subsection{Comments on time dependent settings}\label{sec1.1} To place Landau solutions in a time dependent setting, we may consider \begin{equation}\label{0521} \partial_t u - \De u + u \cdot \nabla u + \nabla p = \be(t) e_3\de_0, \quad \div u=0. \end{equation} A stationary solution is $u(x,t)=U^a(x)$ with $\be(t)\equiv \be_0(a)$. The $L^2$-stability of such a fixed Landau solution is studied by Cannone and Karch \cite{MR2034160}. Even for constant $\beta$, there are more possibilities of bifurcation for time dependent $u$. When $\be(t)$ is time dependent, $\partial_a \Psi$ plays a more significant role. In this case, in the equation of the difference $\tilde u(t) = u(t) - U^{a(t)}$, the term $- a'(t) \partial_a U^{a}$ appears on the right side. Eq.~\eqref{0521} may not seem physical. However, we may study solutions of a coupled system of Navier-Stokes equations with another equation (for heat, magnetic field, etc), and use \eqref{0521} as a model or a truncated version. Let us consider time dependent solutions of \eqref{0521} with constant $\be(t)\equiv \be_0(a)$ and examine what kind of equations we will get under similar change of variables as in Section \ref{sec3}. We keep $\partial_t u$ in our derivation, starting from \eqref{NSaxisym1}-\eqref{NSaxisym3}. It creates $\partial_t \hat{A}\psi$ in the equation for the stream function and $\partial_t u_\th$ in the equation for the swirl component. (See Section \ref{sec3} for the definitions of $\hat{A}$.) Proceeding the linearization around a Landau solution $(U^a, P^a)$ as in Section \ref{sec3}, we obtain in \eqref{operator.form} and \eqref{eig.prob} the time-dependent stream and swirl operators, $\hat{\mathfrak{L}^t}$ and $\hat{\mathcal{M}^t}$, defined by \begin{align*} \hat{\mathfrak{L}^t} = \hat{\mathfrak{L}} + \partial_t \hat{A}, \quad \hat{\mathcal{M}^t} = \hat{\mathcal{M}} + \partial_t. \end{align*} (See Section \ref{sec3} for the definitions of $\hat{\mathfrak{L}}$ and $\hat{\mathcal{M}}$.) In addition to the similarity radial variable $\tau=\ln \rho$ to be defined in \eqref{ch.var} and suitable for DSS perturbations, we introduce a similarity time variable $s$, \begin{align*} s= \rho^{-2}t, \quad \rho^2\partial_t = \partial_s. \end{align*} The factor $\rho^{-2}$ is needed to fit the scaling. Since $s=s(t,\rho)$, the derivative $\rho\partial_\rho$ in the time-dependent case contains an extra term with derivative in $s$: $\rho\partial_\rho=\partial_\tau -2s\partial_s$. If we define $\xi(t,\rho,\phi) = \zeta(s,\tau,\phi)$ and $\mathfrak{L}^t \zeta = \rho^4 \hat{\mathfrak{L}^t} \xi$, then $\mathfrak{L}^t$ contains up to third derivative in $s$. 
One can transfer the eigenvalue problem of $\mathfrak{L}^t$ into a first order system in $s$ and analyze a new eigenvalue problem. Of this new problem, $s$-independent solutions are exactly those to be considered in Sections \ref{sec4} and \ref{sec5}. We may also consider $s$-periodic solutions of this new system, corresponding to a Hopf bifurcation. However, periodicity in $s$ does not mean periodicity in $t$ or DSS in $t$. Thus Hopf bifurcation is possible, but requires clarification of its meaning. Motivated by this, we will also consider purely imaginary eigenvalues in Sections \ref{sec4} and \ref{sec5}, but those results have no implication on Hopf bifurcation. \section{Equations for DSS axisymmetric flows}\label{sec2} In this section, we describe the stationary Navier-Stokes equations for DSS, axisymmetric flows. Indeed, an axisymmetric flow $u$ can be written in terms of a stream function $\psi$ and a swirl velocity $u_\th$, \[ u = \curl(\psi e_\th) + u_\th e_\th. \] Also, the axisymmetry reduces our domain % to a two dimensional space. % As a consequence, stationary Navier-Stokes equations with axisymmetry reduce to the equation for $\psi$ and $u_\th$ and we impose natural boundary conditions for smooth solutions on the restricted domain. \subsection{The equations for DSS axisymmetric steady flows} In this subsection, we introduce the Navier-Stokes equations for axisymmetric steady flows in the spherical coordinates. We first recall the time-dependent Navier-Stokes equations in the spherical coordinates $(\rho,\th,\phi)$ without axisymmetry assumption (see \cite[Appendix 2]{Bat}) \vspace{-3mm} \begin{multline*} \partial_t{u_{\rho}}+ ({u}\cdot\nabla) u_{\rho}-\frac{u^2_{\phi}}{\rho}-\frac{u^2_{\theta}}{\rho} =-\frac{1}{\rho_0}\partial_\rho p \\ +\nu\bke{\Delta u_{\rho}-\frac{2u_{\rho}}{\rho^2}-\frac{2}{\rho^2\sin\phi}\partial_{\phi}(\sin\phi\, u_{\phi})-\frac{2}{\rho^2\sin\phi}\partial_{\theta} u_{\theta}}, \end{multline*} \vspace{-5mm} \begin{multline*} \partial_t{u_{\phi}}+ ({u}\cdot\nabla) u_{\phi}+\frac{u_\rho u_{\phi}}{\rho}-\frac{u^2_{\theta}\cot\phi}{\rho} =-\frac{1}{\rho_0\rho}\partial_{\phi} p \\ +\nu\bke{\Delta u_{\phi}+\frac{2}{\rho^2}\partial_{\phi} u_{\rho}-\frac{u_{\phi}}{\rho^2\sin^2\phi} -\frac{2\cos\phi}{\rho^2\sin^2\phi}\partial_{\theta}u_{\theta}}, \end{multline*} \vspace{-5mm} \begin{multline*} \partial_t{u_{\theta}}+ ({u}\cdot\nabla) u_{\theta}+\frac{u_{\theta}u_\rho}{\rho}+\frac{u_{\phi}u_{\theta}\cot\phi}{\rho} =-\frac{1}{\rho_0\rho\sin\phi}\partial_{\theta} p \\ +\nu\bke{\Delta u_{\theta}+\frac{2}{\rho^2\sin\phi}\partial_{\theta} u_{\rho}+\frac{2\cos\phi}{\rho^2\sin^2\phi}\partial_{\theta}u_{\phi} -\frac{u_{\theta}}{\rho^2\sin^2\phi}}, \end{multline*} $$ \nabla\cdot {u} =\frac{1}{\rho^2}\partial_{\rho}(\rho^2u_{\rho})+\frac{1}{\rho\sin\phi}\partial_{\phi}(\sin\phi\, u_{\phi})+\frac{1}{\rho\sin\phi}\partial_{\theta}u_{\theta} =0. $$ Here, $\nu>0$ is the viscosity constant and $\rho_0>0$ is the constant density. We set $\nu=\rho_0=1$ below. 
For axisymmetric steady flows, the components $u_\rho$, $u_\phi$ and $u_\th$ do not depend on $t$ or $\th$, and the above system becomes \begin{align} \label{NSaxisym1} &({b}\cdot \nabla) u_{\rho}-\frac{u^2_{\phi}}{\rho}-\frac{u^2_{\theta}}{\rho} =-\partial_\rho p +\Delta u_{\rho}-\frac{2u_{\rho}}{\rho^2}-\frac{2}{\rho^2\sin\phi}\partial_{\phi}(\sin\phi\,u_{\phi}), \\ &({b}\cdot \nabla) u_{\phi}+\frac{u_\rho u_{\phi}}{\rho}-\frac{u^2_{\theta}\cot\phi}{\rho} =-\frac{1}{\rho}\partial_{\phi} p + \Delta u_{\phi}+\frac{2}{\rho^2}\partial_{\phi} u_{\rho}-\frac{u_{\phi}}{\rho^2\sin^2\phi}, \\ \label{NSaxisym3} &({b}\cdot\nabla) u_{\theta}+\frac{u_{\theta}u_\rho}{\rho}+\frac{u_{\phi}u_{\theta}\cot\phi}{\rho} = \Delta u_{\theta} -\frac{u_{\theta}}{\rho^2\sin^2\phi}, \\ \label{NSaxisym4} &\qquad\qquad\qquad\partial_{\rho}(\rho^2\sin\phi\, u_{\rho})+\partial_{\phi}(\rho \sin\phi\, u_{\phi}) =0, \end{align} where ${b} = u_\rho e_\rho + u_\phi e_\phi$. We consider the system on the domain \[ (\rho,\phi) \in (0,\infty)\times (0,\pi). \] The natural boundary conditions for a smooth axisymmetric vector field $u$ are% \EQN{ \label{u.bc} \partial_\phi u_\rho = u_\th = u_\phi = \partial_\phi p = 0, \quad \text{at } \rho>0, \text{ and at } \phi=0,\pi. } As we look for DSS solutions, we also impose the DSS boundary conditions \EQN{ \label{u.bc2} u_\rho(\rho,\phi) = \la u_\rho(\la \rho,\phi),\quad u_\phi(\rho,\phi) = \la u_\phi(\la \rho,\phi),\quad u_\th(\rho,\phi) = \la u_\th(\la \rho,\phi), } for some $\la>1$ to be chosen. We will only consider DSS solution $u \in H^1_\text{loc}(\Om)$ for $\Om=\mathbb{R}^3 \setminus \{ 0\}$. By regularity theory, $u\in L^\infty_\text{loc}$ and hence satisfies the bounds \eqref{SNS2} and \eqref{SNS3}. \subsection{The stream function} In this subsection we address the existence of a stream function $\psi$ such that $b=\curl(\psi e_\th)$. Since $b$ is a divergence-free vector field, $b$ can be written as a curl of some vector potential $F$. Recall in spherical coordinates, (see \cite[Appendix 2]{Bat}) \EQN{ \label{eq2.9} \nabla \times {\bf F} &=\frac{1}{\rho \sin \ph}\bke{\partial_\phi (F_\th \sin \phi) - \partial_\th F_\phi} e_\rho + \frac{1}{\rho}\bke{\frac 1{\sin \phi} \partial_\th F_\rho - \partial_\rho (\rho F_\th)} e_\phi \\ &\quad + \frac 1 \rho \bke{\partial_\rho (\rho F_\phi) - \partial_\phi F_\rho} e_\th. } Since $b$ is axisymmetric, we can choose axisymmetric $F$. Indeed, we can take $F= \psi e_\th$, $F_\rho=F_\psi=0$ and $F_\th=\psi=\psi(\rho,\phi)$ with \EQN{ \label{u.psi.relation} {b} = u_\rho e_\rho + u_\phi e_\phi &=\curl (\psi e_\th) ,\\ u_{\rho} =\frac 1{\rho\sin\phi} \partial_\phi (\psi \sin \phi) ,&\qquad u_{\phi} = -\frac {1}{\rho } \partial_\rho ( \rho \psi ). } We now show the global existence of $\psi$. For this purpose, we introduce the \emph{Stokes stream function} $\tilde\psi(\rho,\phi)$ (\cite[p.78]{Bat}) defined on $(\rho,\phi) \in (0,\infty)\times (0,\pi)$ by (\cite[(2.2.14)]{Bat}) \EQN{ \label{eq2.8} \rho^2\sin\phi\, u_{\rho} = \partial_\phi \tilde\psi, \quad -\rho \sin\phi\, u_{\phi}= \partial_\rho\tilde \psi, } which exists by the divergence-free condition \eqref{NSaxisym4}. It relates to $\psi$ by \EQN{\label{rel.psi} \psi(\rho,\phi) = \frac 1{\rho\sin \phi}\, \tilde \psi(\rho,\phi) . } Since $u$ satisfies the bound \eqref{SNS2}, $u_\rho,u_\phi \in L^\infty(\Pi)$, $\Pi = (1,\la) \times (0,\pi)$, although $u_\rho$ may be discontinuous at $\phi=0,\pi$. Thus, $\tilde \psi\in W^{1,\infty}(\Pi)$ and is hence continuous in $\bar \Pi$. 
By \eqref{eq2.8}, $\partial_\phi \tilde\psi = \partial_\rho \tilde\psi =0$ when $\phi=0,\pi$. Now, as a consequence of the divergence free condition, we obtain \[ 0 = \int_{ |x|<\rho} \div u \, dx = \int_{ |x|=\rho} u \cdot e_\rho dS_x = \int_0^{2\pi}\!\! \int_0^{\pi} u_\rho \rho^2 \sin (\phi )d\phi d \th =2 \pi \int_0^{\pi} \partial_\phi \tilde\psi\, d\phi, \] and hence $\tilde \psi (\rho,0) = \tilde\psi(\rho,\pi)$ for any $\rho>0$. Since solutions to \eqref{eq2.8} are invariant under adding a constant, we can set $\tilde \psi (\rho,0) = \tilde\psi(\rho,\pi)=0$. Therefore, we obtain \[ \psi(\rho,\phi) = \frac 1{\rho\sin\phi} \int_0^\phi \rho^2\sin \phi' u_\rho(\rho,\phi') d\phi' \in L^\infty(\Pi). \] The above argument uses the axisymmetry but not the DSS condition. \subsection{The equations for $(\psi,u_\th)$} In this subsection, we reduce the equations \eqref{NSaxisym1}-\eqref{NSaxisym4} for velocity $u$ to a system for $\psi$ and $u_\th$. The boundary conditions for $u_{\rho}$ and $u_{\phi}$ in \eqref{u.bc} are equivalent to the following boundary conditions for $\psi$ \EQN{ \label{psi.bc} \partial_\rho (\rho\psi)|_{\phi=0,\pi}=\partial_\phi \bke{ \frac 1{\sin \phi} \partial_\phi (\psi \sin \phi )}\bigg |_{\phi=0,\pi}= 0. } Note that \[ \div (\psi e_\th) = 0, \quad \curl (\psi e_\th) = b,\quad (0<\phi<\pi). \] Since $b \in H^1_\text{loc}(\Om)$, we have $\psi e_\th \in W^{1,6}_{\text{loc}}(\Om)$ and hence $\psi e_\th $ is locally H\"older continuous in $\Om$. In particular, \EQN{ \label{psi.bc1a} \psi(\rho,0) = \psi(\rho,\pi) = 0. } On the other hand, the DSS boundary condition \eqref{u.bc2} implies \EQN{ \label{psi.bc2a} \psi(\rho,\phi) = \psi(\la \rho,\phi). } In order to derive the equation for $\psi$, we first consider the equation for the vorticity. For axisymmetric flow $u$, by \eqref{eq2.9} the vorticity in the spherical coordinates is \EQ{ \om = \curl u = \om_\rho e_\rho + \om_\th e_\th + \om_\phi e_\phi } with \EQN{ \label{om.formula} \om_\rho =\frac{\partial_\phi (u_\th \sin \phi)}{\rho \sin \phi}, \quad \om_\phi = - \frac{1}{\rho}\,{ \partial_\rho (\rho u_\th)} , \quad \om_\th = \frac 1 \rho \bke{\partial_\rho (\rho u_\phi) - \partial_\phi u_\rho} . } With $b= \curl F$, $F=\psi e_\th$ and $\div F=0$, \EQ{ \om_\th e_\th = \curl b = \curl \curl F = - \De F + \nabla \div F = -\De (\psi e_\th) . } We introduce the operator $\hat A$ for $f=f(\rho,\phi)$, \[ \hat{A} f = - \Delta_{\text{as}} f +\frac{f}{\rho^2\sin^2\phi}, \quad -\De (f e_\th) = (\hat{A}f)e_\th, \] where $\Delta_{\text{as}}$ is usual $\Delta$ restricted to axisymmetric functions, \[ \Delta_{\text{as}} f=\frac{1}{\rho^2}\partial_{\rho}(\rho^2\partial_{\rho}f) +\frac{1}{\rho^2\sin\phi}\partial_{\phi}(\sin\phi\partial_{\phi} f). \] Thus \[ \om_\th e_\th = -\De (\psi e_\th) =(\hat A \psi)e_\th, \quad \om_\th = \hat A \psi. \] Recall the vorticity equation \[ - \De \om+ (u \cdot \nabla) \om = ( \om \cdot \nabla) u. \] The $\om_\th$ component satisfies \EQ{ \hat A \om_{\theta} +(u\cdot\nabla)\om_{\theta} + \frac{u_{\theta}\om_{\rho}}{\rho}+ \frac{u_{\theta}\om_{\phi}}{\rho}\cot\phi =(\om\cdot\nabla)u_{\theta} + \frac{u_{\rho}\om_{\theta}}{\rho}+\frac{u_{\phi}\om_{\theta}}{\rho}\cot\phi. 
} Replacing $\om_\th = \hat A \psi$ and replacing $\om_\rho$ and $\om_\phi$ by \eqref{om.formula} with $ d(u_\th) = \om_\rho e_\rho + \om_\phi e_\phi= \curl (u_\th e_\th)$, we get the equation for $\psi$ \begin{multline} \label{psi.eq.full} \hat{A}^2\psi +(b\cdot\nabla)\hat{A}\psi + \frac{u_{\theta}\partial_{\phi}(u_{\theta}\sin\phi)}{\rho^2\sin\phi}- \frac{u_{\theta}\partial_{\rho}(\rho u_{\theta})}{\rho^2}\cot\phi \\ =(d(u_\th) \cdot\nabla)u_{\theta} + \frac{\hat{A}\psi} {\rho} \left(u_{\rho}+u_{\phi}\cot\phi\right). \end{multline} The $u_\th$ equation \eqref{NSaxisym3} becomes \EQN{ \label{uth.eq} \hat{A}u_{\theta} + (b\cdot\nabla)u_{\theta} +\frac{u_{\theta}}{\rho} \left(u_{\rho}+u_{\phi}\cot\phi\right) % % % = 0. } Now, we consider the boundary conditions for $\psi$. Note that \eqref{psi.bc} and \eqref{psi.bc1a} are equivalent to \EQN{ \label{psi.bc1} \psi|_{\phi=0,\pi}=\hat A \psi |_{\phi=0,\pi}= 0, } while \eqref{psi.bc2a} implies \EQN{ \label{psi.bc2} \psi(\rho,\phi) = \psi(\la \rho,\phi), \quad \hat A \psi(\rho,\phi) =\la^2 \hat A \psi(\la\rho,\phi). } The system \eqref{psi.eq.full}-\eqref{uth.eq} for $(\psi,u_\th)$ with the relations \eqref{u.psi.relation} and \eqref{om.formula} is self-contained with the boundary conditions \eqref{u.bc}, \eqref{u.bc2} for $u_{\theta}$ and \eqref{psi.bc1}, \eqref{psi.bc2} for $\psi$. It is the system for DSS, axisymmetric, steady Navier-Stokes flows which will be studied in the remaining part of this paper for a possible bifurcation of the Landau solutions. In the special case of an axisymmetric flow $u$ with no swirl, i.e, $u_\th=0$ and $u=b$, we no longer need \eqref{uth.eq}. In this case, our system of equations reduces to \EQN{ \label{psi.eq} \hat{A}^2 \psi + b \cdot \nabla \hat{A}\psi - \frac{\hat{A}\psi}\rho(u_\rho+u_\phi \cot \phi) = 0 } with the boundary conditions \eqref{psi.bc1} and \eqref{psi.bc2} for $\psi$. \section{The linearization} \label{sec3} In this section we deduce the nonlinear equations for perturbations of Landau solutions and consider the linearization around Landau solutions. We will in particular study its kernel in the axisymmetric DSS class. \subsection{Perturbation of Landau solutions} Recall \eqref{Landau-sol}-\eqref{Landau-Psi} that the Landau solution $U^a$ with parameter $a>1$ is given by \EQN{ \label{Landau-sol2} U_\rho^a &= \frac 2{\rho}\bke{\frac {a^2-1}{(a-\cos \phi)^2} -1} =\frac 1{\rho^2\sin\phi} \partial_\phi (\Psi^a \rho \sin \phi) \\ U_\phi^a &= - \frac {2\sin \phi} {\rho(a-\cos \phi)} =- \frac {1}{\rho \sin\phi} \partial_\rho (\Psi^a \rho \sin \phi) \\ \Psi^a & = \frac {2\sin \phi }{a-\cos \phi}. } For the convenience, we drop the index $a$ in $U^a$, $U_\rho^a$, $U_\phi^a$ and $\Psi^a$ below. Note that these solutions are self-similar, axisymmetric, steady flows with no swirl, so that they solve \eqref{psi.eq} with $\psi = \Psi$ and $b= U$: \EQN{ \label{Psi.eq} \hat{A}^2 \Psi + U \cdot \nabla \hat{A}\Psi - \frac{\hat{A}\Psi}\rho(U_\rho+U_\phi \cot \phi) = 0. } Denote a perturbation from a Landau solution by% \EQ{ \psi=\Psi+\xi, \quad u = U + u_\th e_\th + v. } The component $v=v_\rho e_\rho+ v_\phi e_\phi$ has no swirl and is determined by $\xi$ \EQ{ v(\xi) = \curl (\xi e_\th)= \frac 1{\rho^2\sin\phi}\partial_{\phi}(\xi\rho\sin\phi) e_{\rho} -\frac 1{\rho\sin\phi}\partial_{\rho}(\xi\rho\sin\phi) e_{\phi} . 
} Subtracting the equation \eqref{Psi.eq} from \eqref{psi.eq.full}, the system \eqref{psi.eq.full}-\eqref{uth.eq} for the perturbation pair $(\xi, u_\th)$ can be written in the following form \begin{equation} \label{operator.form} \left\{ \begin{split} \hat {\mathfrak{L}} \xi &= N_1(\xi,u_{\theta})\\ \hat {\mathcal{M}} u_{\theta} &= N_2(\xi,u_{\theta}). \end{split} \right . \end{equation} Here, the linear operators $\hat\mathfrak{L}$ and $\hat\mathcal{M}$ are defined by \EQ{ \hat{\mathfrak{L}} \xi &=\hat{A}^2\xi +(U\cdot\nabla)\hat{A}\xi + (v(\xi)\cdot\nabla)\hat{A}\Psi - \frac{\hat{A}\xi}{\rho}(U_{\rho}+U_{\phi}\cot\phi) -\frac {\hat{A}\Psi}{\rho}(v_{\rho}+v_{\phi}\cot\phi), } and \EQ{ \hat{\mathcal{M}} u_{\theta} &=\hat{A} u_{\theta} + (U\cdot\nabla)u_{\theta} +\frac{u_{\theta}}{\rho}(U_\rho+ U_{\phi}\cot\phi). } The non-linear mappings $N_1$ and $N_2$ are given by \EQ{ N_1(\xi,u_{\theta}) =& -(v(\xi)\cdot\nabla)\hat{A}\xi+\frac{\hat{A}\xi}\rho (v_\rho+v_\phi \cot \phi)\\ &-\frac{u_{\theta}\partial_{\phi}(u_{\theta}\sin\phi)}{\rho^2\sin\phi} + \frac{u_{\theta}\partial_{\rho}(\rho u_{\theta})}{\rho^2}\cot\phi + (d(u_\theta)\cdot\nabla)u_{\theta}, } and \EQ{ N_2(\xi,u_{\theta})=-(v(\xi)\cdot\nabla)u_{\theta} -\frac{u_\th}{\rho}(v_\rho+ v_\phi\cot\phi). } It is important to observe that the linear part of \eqref{operator.form} is decoupled: $\hat{\mathfrak{L}}$ acts only on $\xi$ while $\hat{\mathcal{M}}$ acts only on $u_\theta$. It will be convenient to call $\hat{\mathfrak{L}}$ the \emph{stream operator} and $\hat{\mathcal{M}}$ the \emph{swirl operator}. Note that, with $\curl U = \Om_\th e_\th$, \EQN{\label{0904a} (U_\rho+U_\phi \cot \ph) % = \frac 2{\rho} \frac {(a\cos \phi-1)}{(a-\cos \phi)^2}, \quad \hat A \Psi = \Om_\th = \frac{4(a^2-1)\sin \phi}{ \rho^2 (a-\cos \phi)^3}. } We will be looking for nonzero $(\xi,u_\th)$ satisfying \eqref{operator.form} under the boundary conditions \begin{align} &\xi|_{\phi=0,\pi} = \hat{A}\xi|_{\phi=0,\pi} = 0 , \label{bc.xi.phi}\\ & \xi(\rho,\phi) = \xi(\la \rho,\phi), \quad \hat{A}\xi(\rho,\phi) = \la^2 \hat{A}\xi(\la \rho,\phi), \label{bc.xi.rho}\\ &u_{\theta}|_{\phi = 0,\pi} = 0, \quad % u_{\theta}(\rho,\phi) = \la u_{\theta}(\la \rho,\phi). \label{bc.utht} \end{align} To this end, we look for nontrivial kernel of its linear part, \EQN{ \label{eig.prob} \begin{cases} \hat \mathfrak{L} \xi = 0 \\ \hat \mathcal{M} u_{\theta} =0 \end{cases} } under the same boundary conditions \eqref{bc.xi.phi}-\eqref{bc.utht}. This system includes two parameters $a$ and $\la$. \subsection{Similarity variables} % In this subsection, we introduce \emph{similarity variables} so that our system \eqref{operator.form} becomes periodic in the radial variable. It will enable us in next subsection to restrict \eqref{eig.prob} to Fourier subspaces of the radial variable and reduce our problem to one-variable problems. Since $u$ is $\la$-DSS, both $\xi$ and $\rho u_{\theta}$ are $\la$-DSS of degree zero, in the sense that \begin{equation*} \left\{ \begin{split} &\xi(\la \rho, \phi)=\xi( \rho, \phi), \\ &(\rho u_{\theta})(\la \rho, \phi)=\la\rho u_{\theta}(\la \rho, \phi) =(\rho u_{\theta})(\rho, \phi), \end{split} \right . \quad \forall \rho>0, \ \forall \phi \in(0,\pi). \end{equation*} Introduce the new variable \EQN{\label{ch.var} \tau = \ln \rho, } and define the functions $\zeta$ and $\chi$ \EQ{ \zeta(\tau,\phi):=\xi(\rho,\phi),\quad \chi(\tau,\phi):=\rho u_{\theta}(\rho,\phi) . } They are both periodic in $\tau$ with period $\ln \la$. 
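The explicit expressions in \eqref{0904a}, which enter the coefficients of $\hat{\mathfrak{L}}$, can also be verified symbolically. The short sketch below is an illustration only and assumes a Python environment with SymPy; it checks the formula for $U_\rho+U_\phi\cot\phi$ and the formula for $\hat{A}\Psi$, using the definition of $\hat{A}$ given above.
\begin{verbatim}
# Illustrative symbolic check of (0904a) for the Landau profile.
import sympy as sp

rho, phi, a = sp.symbols('rho phi a', positive=True)
c, s = sp.cos(phi), sp.sin(phi)

U_rho = (2 / rho) * ((a**2 - 1) / (a - c)**2 - 1)
U_phi = -2 * s / (rho * (a - c))
Psi   = 2 * s / (a - c)

lhs1 = U_rho + U_phi * c / s                    # U_rho + U_phi*cot(phi)
rhs1 = (2 / rho) * (a * c - 1) / (a - c)**2
assert sp.simplify(lhs1 - rhs1) == 0

def laplace_as(f):            # Laplacian restricted to axisymmetric functions
    return sp.diff(rho**2 * sp.diff(f, rho), rho) / rho**2 \
         + sp.diff(s * sp.diff(f, phi), phi) / (rho**2 * s)

A_hat_Psi = -laplace_as(Psi) + Psi / (rho**2 * s**2)     # \hat{A} Psi
rhs2 = 4 * (a**2 - 1) * s / (rho**2 * (a - c)**3)
assert sp.simplify(A_hat_Psi - rhs2) == 0
print("(0904a) verified symbolically")
\end{verbatim}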
By the periodicity, we can restrict our domain to \EQ{ (\tau,\phi) \in (0,\ln\la) \times (0,\pi). } Let \[ \mathfrak{L} \zeta = \rho^4\hat\mathfrak{L} \xi, \quad \mathcal{M} \chi = \rho^3\hat\mathcal{M} u_{\theta}. \] In the new variables, the problem \eqref{eig.prob} on $(0,\ln\la) \times (0,\pi)$ becomes \EQN{\label{eig.new.coord} \begin{cases} \mathfrak{L} \zeta = 0\\ \mathcal{M} \chi =0 . \end{cases} } The induced boundary conditions will be discussed in Section \ref{BC.tau}. \subsection{Invariance of Fourier subspaces under the operators $\mathfrak{L}$ and $\mathcal{M}$} By the periodicity of $\zeta$ and $\chi$ in $\tau$, the linear system \eqref{eig.new.coord} can be considered in each Fourier mode, which leads to a family of 1D linear systems. In other words, we consider \eqref{eig.new.coord} on each subfamily of functions % \EQ{ \mathcal{F}_n = \bket{ h(\phi) e^{in\si\tau} }, \quad \si = \frac{2\pi}{\ln \la}, \quad n\in \mathbb{Z}. } For this purpose, we first need the invariance of $\mathcal{F}_n$ under the operations $\mathfrak{L}$ and $\mathcal{M}$. In other words, if $\zeta$ and $\chi$ are in $\mathcal{F}_n$, then \[ \mathfrak{L} \zeta \in \mathcal{F}_n \quad\text{and}\quad \mathcal{M} \chi \in \mathcal{F}_n. \] Note that functions of the form $f(\tau) e^{ik\phi}$ are not preserved under $\mathfrak{L}$ and $\mathcal{M}$ whose coefficients depend on $\phi$. \begin{proposition}\label{invariance} The function spaces $\mathcal{F}_n$, $n \in \mathbb{Z}$, are invariant under the linear operators $\mathfrak{L}$ and $\mathcal{M}$. \end{proposition} Consider the linear operator $\mathfrak{L}$ first. We will decompose it in the form \EQ{ \mathfrak{L} = (A+B)A + C, } and then show the invariance of $\mathcal{F}_n$ under the operators $A$, $B$ and $C$. \begin{lemma}\label{linear.op} The linear operator $\mathfrak{L}$ can be written as \begin{eqnarray*} \mathfrak{L} = (A+B)A + C, \end{eqnarray*} where \EQ{ A\zeta &= -(\partial_{\tau\tau}+\partial_{\tau}) \zeta -\partial_{\phi}\bke{\frac 1{\sin \phi}\partial_{\phi}(\zeta \sin \phi ) } \\ B\zeta &=2\left(\frac{a^2-1}{(a-\cos\phi)^2}+1\right)\partial_{\tau}\zeta -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi}\zeta +\left(2-\frac{4a^2+2a\cos\phi-6}{(a-\cos\phi)^2}\right)\zeta \\ C\zeta &= - \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} (\partial_{\phi}\zeta +\cot\phi \zeta)+\frac{12(a^2-1)\sin^2\phi}{(a-\cos\phi)^4}(\partial_{\tau}\zeta+\zeta). }% In particular, $A$ satisfies $A\zeta(\tau,\phi) = \rho^2\hat{A} \xi(\rho,\phi)$ where $\rho=e^{\tau}$ and $\zeta(\tau,\phi) = \xi(\rho,\phi)$. \end{lemma} \begin{proof} Recall that \begin{eqnarray*} \hat\mathfrak{L} \xi = \hat{A}^2 \xi + (U \cdot\nabla )\hat{A} \xi + (v(\xi)\cdot\nabla) \hat{A}\Psi - \frac {\hat{A}\xi}{\rho}(U_{\rho} +U_{\phi}\cot \phi) - \frac {\hat{A}\Psi}{\rho}(v_{\rho}(\xi) + v_{\phi}(\xi)\cot \phi) \end{eqnarray*} and \EQ{ \hat{A}\xi = -\frac 1{\rho^2}\partial_{\rho}(\rho^2 \partial_{\rho} \xi) - \frac 1{\rho^2\sin\phi} \partial_{\phi}(\sin \phi \partial_{\phi} \xi) + \frac 1{\rho^2\sin^2\phi} \xi. 
} Under the change of variable $\rho= e^{\tau}$ with \eqref{ch.var}, $\partial_{\rho} = e^{-\tau}\partial_{\tau}$, and \EQN{\label{operator.A} \rho^2 \hat{A} \xi &= -\partial_{\rho}(\rho^2 \partial_{\rho} \xi) - \frac 1{\sin\phi} \partial_{\phi}(\sin \phi \partial_{\phi} \xi) + \frac 1{\sin^2\phi} \xi\\ &= -(\partial_{\tau\tau}+\partial_{\tau}) \zeta -(\cot \phi \partial_{\phi}+\partial_{\phi\phi})\zeta + \frac 1{\sin^2\phi} \zeta\\ &=-(\partial_{\tau\tau}+\partial_{\tau}) \zeta -\partial_{\phi}\bke{\frac 1{\sin \phi}\partial_{\phi}(\zeta \sin \phi ) } =: A \zeta . } Using for $F=F(\tau,\phi)$, \[ -e^{k\tau}(\partial_{\tau\tau}+\partial_{\tau})(e^{-k\tau}F) =-(\partial_{\tau\tau}+\partial_{\tau})F +2k\partial_{\tau}F - k(k-1)F, \] we have \EQN{\label{Arho.commute} e^{k\tau}A(e^{-k\tau}F) = [A +2k\partial_{\tau} - k(k-1)]F. } Thus, with $k=2$ and $F=A\zeta$, \EQ{ \rho^4 \hat{A}^2 \xi = e^{2\tau} A(e^{-2\tau}A\zeta) = (A + 4\partial_{\tau}-2) A\zeta. } Now, we consider the terms $\rho^4 (U\cdot \nabla) \hat{A}\xi$ and $\rho^4 (v(\xi)\cdot\nabla) \hat{A}\Psi$. Using the gradient in the spherical coordinates \EQ{ \nabla = e_{\rho}\partial_{\rho} +e_{\theta}\frac 1{\rho\sin\phi}\partial_{\theta}+ e_{\phi}\frac 1{\rho} \partial_{\phi}, } we have for any $V(\phi)=V_\tau(\phi)e_\tau+V_\phi(\phi)e_\phi$ with $e_\tau = e_\rho$, \EQ{\rho^4 \left(\frac {V(\phi)}{\rho} \cdot \nabla\right) g(\rho,\phi) &= \rho^3 (V(\phi)\cdot \nabla) g = \rho^3 \bke{V_\tau(\phi) \partial_{\rho} + V_\phi(\phi)\frac 1{\rho} \partial_{\phi} } g\\ &= e^{2\tau} (V_\tau(\phi) \partial_{\tau} + V_\phi(\phi) \partial_{\phi} ) G\\ &= (V_\tau(\phi) \partial_{\tau} + V_\phi(\phi) \partial_{\phi} - 2V_\tau(\phi)) (e^{2\tau}G), } where $G(\tau,\phi)=g(\rho,\phi)$. As a consequence, plugging $V=\tilde{U}$, $g=\hat A \xi$ and $e^{2\tau}G=A\zeta$ where \EQN{\label{td.U} \tilde{U}(\phi) := \rho U = 2\left(\frac{a^2-1}{(a-\cos\phi)^2}-1\right)e_{\rho} -\frac {2\sin\phi}{(a-\cos\phi)}e_{\phi} = \tilde U_\tau (\phi) e_\tau + \tilde U_\phi (\phi) e_\phi, } we get \EQ{ \rho^4 (U \cdot\nabla) \hat{A} \xi &= (\tilde{U}_\tau \partial_{\tau}+\tilde{U}_\phi \partial_{\phi}) A \zeta -2\tilde U_\tau A\zeta\\ &= \left( 2\left(\frac{a^2-1}{(a-\cos\phi)^2}-1\right)\partial_{\tau} -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi} \right) A \zeta -4\left(\frac{a^2-1}{(a-\cos\phi)^2}-1\right)A\zeta.\\ } Similarly, we obtain \EQ{ \rho^4 (v(\xi)\cdot\nabla) \hat{A}\Psi &=(\tilde v_{\tau} \partial_{\tau} + \tilde v_{\phi} \partial_{\phi} - 2\tilde v_{\tau}) (A\Psi )\\ &=[( \partial_{\phi}\zeta +\zeta\cot\phi ) \partial_{\tau} -(\partial_{\tau}\zeta+\zeta) \partial_{\phi} - 2( \partial_{\phi}\zeta +\zeta\cot\phi )] A\Psi. } Here, $\tilde v=\tilde v_{\tau}e_{\tau}+\tilde v_{\phi}e_{\phi}$ is defined by \EQ{ \tilde v(\zeta)(\tau,\phi) &= \rho v(\xi)(\rho,\phi) \\ &=\rho\left( \frac 1{\rho^2\sin\phi} \partial_{\phi}(\xi\rho\sin\phi)e_{\rho} -\frac 1{\rho\sin\phi} \partial_{\rho}(\xi\rho\sin\phi)e_{\phi}\right)\\ &= ( \partial_{\phi}\zeta +\zeta\cot\phi ) e_{\tau} -(\partial_{\tau}\zeta+\zeta) e_{\phi}. } Finally, the remaining terms are rewritten as \EQ{ &\rho^4 \left(-\frac{\hat{A}\xi}{\rho}(U_{\rho} + U_{\phi}\cot\phi)-\frac{\hat{A}\Psi}{\rho}(v_{\rho}(\xi)+v_{\phi}(\xi)\cot\phi) \right)\\ &= - A\zeta(\tilde U_{\tau} +\tilde U_{\phi}\cot \phi) - A\Psi (\tilde v_\tau + \tilde v_\phi \cot\phi) \\ &= -\frac{2(a\cos\phi -1)}{(a-\cos\phi)^2}A\zeta - (\partial_{\phi}\zeta -\cot \phi\partial_{\tau}\zeta)A\Psi. 
} Note that (see also \eqref{0904a}) \EQ{ A\Psi = \rho^2 \hat{A}\Psi = \rho \bke{\partial_\rho (\rho U_\phi) - \partial_\phi U_\rho} = 2\partial_\phi \bke{\frac {a^2-1}{(a-\cos \phi)^2} -1} = \frac {4(a^2-1 )\sin \phi}{(a-\cos \phi)^3}. } We now collect all terms in $\rho^4\hat{\mathfrak{L}}$. In order to write $\rho^4\hat{\mathfrak{L}}$ in the form $(A+B)A+C$, we need \EQ{ B &=-2 + 4\partial_{\tau} +\tilde{U}_\tau \partial_{\tau}+\tilde{U}_\phi \partial_{\phi} -2\tilde U_\tau -\frac {2(a\cos\phi-1)}{(a-\cos\phi)^2}\\ &=2\left(\frac{a^2-1}{(a-\cos\phi)^2}+1\right)\partial_{\tau} -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi} -4\left(\frac{a^2-1}{(a-\cos\phi)^2}-\frac12\right) -\frac {2(a\cos\phi-1)}{(a-\cos\phi)^2}\\ &=2\left(\frac{a^2-1}{(a-\cos\phi)^2}+1\right)\partial_{\tau} -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi} +2 -\frac{4a^2+2a\cos\phi-6}{(a-\cos\phi)^2} } and \EQ{ C &=(\partial_{\tau}A\Psi- 3A\Psi) (\partial_{\phi} +\cot\phi ) -(\partial_{\phi}A\Psi -A\Psi\cot \phi)(\partial_{\tau}+1) \\ &=- \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} (\partial_{\phi} +\cot\phi )-\bke{\partial_{\phi}\frac{4(a^2-1)\sin\phi}{(a-\cos\phi)^3} -\frac{4(a^2-1)\cos\phi}{(a-\cos\phi)^3}}(\partial_{\tau}+1) \\ &=- \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} (\partial_{\phi} +\cot\phi )+\frac{12(a^2-1)\sin^2\phi}{(a-\cos\phi)^4}(\partial_{\tau}+1). } This completes the proof of the lemma. \end{proof} \begin{lemma}\label{invariance.sL} The function spaces $\mathcal{F}_n$, $n \in \mathbb{Z}$, are invariant under the operators $A$, $B$, $C$ and hence $\mathfrak{L}$. The restriction $\mathfrak{L}_n$ of $\mathfrak{L}$ on $\mathcal{F}_n$ in the sense of \EQ{ \mathfrak{L}(h(\phi)e^{in\si\tau})=(\mathfrak{L}_nh)(\phi)e^{in\si\tau} } is given by \[ \mathfrak{L}_n = (A_n+B_n)A_n + C_n, \] where \EQN{\label{op.ABCn} A_n h &=((n\sigma )^2- in\sigma )h -\partial_{\phi}\bke{\frac 1{\sin \phi}\partial_{\phi}(h \sin \phi ) } \\ B_nh &=\left( 2 i n\sigma\left(\frac{a^2-1}{(a-\cos\phi)^2}+1\right) -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi} +2 -\frac{4a^2+2a\cos\phi-6}{(a-\cos\phi)^2} \right)h\\ C_nh &=\left(- \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} \partial_{\phi} +\frac{12i n\sigma(a^2-1)\sin^2\phi}{(a-\cos\phi)^4} +\frac{12(a^2-1)(1-a\cos\phi)}{(a-\cos\phi)^4} \right)h } are the restrictions of the operators $A$, $B$, and $C$ on $\mathcal{F}_n$, respectively. \end{lemma} \begin{proof} For a function $\zeta\in \mathcal{F}_n$, $\zeta(\tau,\phi)=h(\phi)e^{in\sigma\tau}$, we have \EQ{ A [he^{in\sigma\tau}] &= -(\partial_{\tau\tau}+\partial_{\tau}) (he^{in\sigma\tau}) -\partial_{\phi}\bke{\frac 1{\sin \phi}\partial_{\phi}(he^{in\sigma\tau} \sin \phi ) }\\ &=\bke{((n\sigma )^2- in\sigma )h -\partial_{\phi}\bke{\frac 1{\sin \phi}\partial_{\phi}(h \sin \phi ) }}e^{in\sigma\tau}\\ &= (A_nh)e^{in\sigma\tau}. } Similarly, we find the restrictions $B_n$ and $C_n$, \EQ{ B [he^{in\sigma\tau}] &=\left( 2 i n\sigma\left(\frac{a^2-1}{(a-\cos\phi)^2}+1\right) -\frac {2\sin\phi}{(a-\cos\phi)}\partial_{\phi} +2 -\frac{4a^2+2a\cos\phi-6}{(a-\cos\phi)^2} \right)he^{in\sigma\tau}\\ &=(B_nh)e^{in\sigma\tau} } and \EQ{ &C [he^{in\sigma\tau}] \\ &=\left(- \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} (\partial_{\phi} +\cot\phi )+\frac{12(a^2-1)\sin^2\phi}{(a-\cos\phi)^4}(\partial_{\tau}+1)\right) he^{in\sigma \tau}\\ &=\left(- \frac{12(a^2-1)\sin\phi}{(a-\cos\phi)^3} \partial_{\phi} +\frac{12i n\sigma(a^2-1)\sin^2\phi}{(a-\cos\phi)^4} +\frac{12(a^2-1)(1-a\cos\phi)}{(a-\cos\phi)^4} \right)h e^{in\sigma \tau}\\ &=(C_nh) e^{in\sigma \tau}.
} Obviously, each subspace $\mathcal{F}_n$ is invariant under $A$, $B$ and $C$. \end{proof} Similar to getting the invariance of $\mathcal{F}_n$ under the operator $\mathfrak{L}$, we prove its invariance under $\mathcal{M}$. \begin{lemma}\label{invariance.sL'} The spaces $\mathcal{F}_n$, $n\in\mathbb{Z}$, are invariant under the operator $\mathcal{M}$, and the restriction $\mathcal{M}_n$ of $\mathcal{M}$ on $\mathcal{F}_n$ can be written as \EQ{ \mathcal{M}_n= A_n+ E_n, } where $A_n$ is defined as in Lemma \ref{invariance.sL} and, with $\tilde U$ given by \eqref{td.U}, \EQN{\label{En} E_n g = \left(in\sigma(\tilde{U}_{\tau}+2) + \tilde{U}_{\phi}\partial_{\phi} +\tilde U_\phi \cot\phi \right) g. } \end{lemma} \begin{proof} Recall for $\chi = \rho u_\theta$, \EQ{ \mathcal{M}\chi =\rho^3\hat{\mathcal{M}}u_{\theta}. } The right hand side can be written in the new variables $(\tau,\phi)$ as, using \eqref{Arho.commute}, \EQ{ \rho^3\hat{\mathcal{M}}u_{\theta} &=e^{\tau} \left(A+\tilde{U}_{\tau} \partial_{\tau}+ \tilde{U}_{\phi}\partial_{\phi} +\tilde U_{\tau} + \tilde U_\phi \cot\phi\right)(e^{-\tau}\chi) \\ &=\left(A +2\partial_{\tau} +\tilde{U}_{\tau} \partial_{\tau}+ \tilde{U}_{\phi}\partial_{\phi} + \tilde U_\phi \cot\phi\right)\chi \\ &= (A+E)\chi, } where $E=(\tilde{U}_{\tau}+2) \partial_{\tau}+ \tilde{U}_{\phi}\partial_{\phi} + \tilde U_\phi \cot\phi$. By Lemma \ref{invariance.sL}, $\mathcal{F}_n$ is invariant under the operator $A$ with its restriction given by $A_n$. Therefore, it is enough to show its invariance under $E$. Since \EQ{ E(g(\phi)e^{in\sigma\tau}) &=\left(in\sigma(\tilde{U}_{\tau}+2) + \tilde{U}_{\phi}\partial_{\phi} +\tilde U_\phi \cot\phi \right) g(\phi) e^{in\sigma\tau}, } $\mathcal{F}_n$ is invariant under $E$ and the restricted operator $E_n$ is given as in \eqref{En}. \end{proof} By Lemma \ref{invariance.sL} and Lemma \ref{invariance.sL'}, Proposition \ref{invariance} holds true. It reduces the system of equations \eqref{eig.new.coord} with two variables to a family of systems with one variable $\phi\in (0,\pi)$ \EQN{\label{eig.cF_n}\begin{cases} \mathfrak{L}_n h = (A_n+B_n) A_n h + C_n h = 0 \\ \mathcal{M}_n g = (A_n +E_n) g = 0. \end{cases}} When we study them, we keep the parameter $a$ but replace $\la$ by $\si$. \subsection{Induced boundary conditions} \label{BC.tau} In the previous subsection, we get a family of linear systems of ordinary differential equations \eqref{eig.cF_n}. Now, we find the corresponding boundary conditions. For $\zeta(\tau,\phi) =\xi(\rho,\phi)$, the boundary condition \eqref{bc.xi.phi}-\eqref{bc.xi.rho} for $\xi$ becomes \EQN{\label{zeta.bc3} \zeta|_{\phi=0,\pi}=A \zeta |_{\phi=0,\pi}=0, } \EQN{ \label{zeta.bc2} \zeta(\tau,\phi) = \zeta(\tau+\ln\la,\phi). } If $\zeta\in \mathcal{F}_n$, i.e., $\zeta=h(\phi)e^{in\si\tau}$, the boundary condition \eqref{zeta.bc3}-\eqref{zeta.bc2} reduces to % \EQN{\label{zeta_n.bc} h|_{\phi=0,\pi}=A _0 h |_{\phi=0,\pi}=0, } which implies $A_n h|_{\phi=0,\pi}=0$, where $A_n$, $n\in\mathbb{Z}$, is defined in Lemma \ref{invariance.sL}. \medskip On the other hand, for $\chi(\tau,\phi)=\rho u_{\theta}(\rho,\phi)$, the boundary condition \eqref{bc.utht} implies \EQN{\label{bc.chi} \chi|_{\phi=0,\pi} = 0,\quad \chi(\tau,\phi) = \chi(\tau+\ln\la,\phi). } If $\chi\in \mathcal{F}_n$, $\chi(\tau,\phi) = g(\phi) e^{in\si\tau}$, then \eqref{bc.chi} reduces to \EQN{\label{bc.g} g|_{\phi=0,\pi} = 0. 
} \subsection{Function spaces for the operators $\mathfrak{L}_n$ and $\mathcal{M}_n$} \label{ftnsp} We have considered $\mathfrak{L}_n$ and $\mathcal{M}_n$ as differential operators. We now consider their domains and ranges. The base space is $X_0=L^2((0,\pi),\sin\phi \,d\phi)$, which is the Hilbert space equipped with the natural inner product: \EQN{ (g,f)_{X_0} = \int_0^\pi g(\phi)\overline{f(\phi)} \sin\phi \,d\phi. } We will also use an $a$-dependent inner product \EQN{\label{X0a} (g,f)_{X_0^a} = \int_0^\pi g(\phi)\overline{f(\phi)} (a-\cos\phi)^2 \sin\phi \,d\phi. } They are equivalent but the constant depends on $a$. Define the space $X_1$ by \EQN{\label{spX1} X_1 = \left\{g\in L^1_\text{loc}(0,\pi) : \norm{g}_{X_1}^2 = \int_0^\pi \bke{|g'(\phi)|^2 + \frac{|g(\phi)|^2}{\sin^2\phi}} \sin\phi d\phi \,<\infty \right\}, } which will work as the domain of the operator $\mathcal{M}_n$, $n\in \mathbb{Z}$. Obviously, $X_1\subset X_0$. Furthermore, any function in $X_1$ is continuous on $(0,\pi)$ and vanishes at the boundary. \begin{lemma}\label{prop.sp.X1} If $g\in X_1$, then $g\in C_0([0,\pi])$. More precisely, $g$ satisfies \[ \norm{g}_{C(0,\pi)} {\ \lesssim \ } \norm{g}_{X_1}, \quad \lim_{\phi \to 0_+} g(\phi) = 0\quad\text{and}\quad \lim_{\phi \to \pi_-} g(\phi) = 0. \] \end{lemma} \begin{proof} By the change of variable $t = \ln|\csc \phi- \cot \phi|$, $dt = \frac 1{\sin\phi} d\phi$, and $G(t)= g(\phi)$, we have \[ \norm{g}_{X_1}^2=\int_0^\pi \bke{|g'(\phi)|^2 + \frac{|g(\phi)|^2}{\sin^2\phi}} \sin\phi d\phi = \int_{ \mathbb{R}} |G'(t)|^2 + |G(t)|^2 dt = \norm{G}_{H^1(\mathbb{R})}^2. \] By the Sobolev embedding, $G$ and hence $g$ are bounded and continuous, with \[ \norm{g}_{C(0,\pi)} =\norm{G}_{C(\mathbb{R})} {\ \lesssim \ } \norm{G}_{H^1(\mathbb{R})} = \norm{g}_{X_1}. \] Furthermore, $\lim_{\phi \to 0_+,\pi_-}g(\phi) = \lim_{t \to -\infty,\infty}G(t)=0$. \end{proof} We now consider the operators $\mathcal{M}_n$ and start with $\mathcal{M}_0$. \begin{lemma}\label{th3.6} The weight $\om(\phi)=(a-\cos\phi)^2$ satisfies \[ b[g,f]= \int_0^\pi (\mathcal{M}_0 g)\bar f(\phi) \om(\phi) \sin \phi \,d\phi =\overline{ b[f,g]}, \] for any $f,g \in C^2([0,\pi])\cap X_1$. It is the unique choice up to a constant factor. \end{lemma} \begin{proof} Recall $\mathcal{M}_0=A_0+E_0$, \[ \mathcal{M}_0 g = - \partial_\phi \bke{ \frac 1{\sin \phi} \partial_\phi(g\sin \phi )} + \frac {\tilde U_\phi}{\sin \phi} \partial_\phi(g\sin \phi ). \] Thus, using Lemma \ref{prop.sp.X1}, \[ b[g,f] = \int_0^\pi \frac 1{\sin \phi} \partial_\phi(g\sin \phi ) (\partial_\phi + \tilde U_\phi)\bke{ \bar f \sin \phi \,\om} d \phi. \] We would have \[ b[g,f] =\int_0^\pi \frac 1{\sin \phi} \partial_\phi(g\sin \phi ) \partial_\phi \bke{ \bar f \sin \phi} \om\, d \phi, \] which is symmetric, if $\tilde U_\phi \om + \om'=0$. Since $\tilde U_\phi =-\frac{2\sin \phi}{a -\cos \phi}$, $\om(\phi)=C(a-\cos\phi)^2$. \end{proof} This lemma motivates the definition of the space $X_0^a$ as $b[g,f]=(\mathcal{M}_0g,f)_{X_0^a}$. It also allows us to consider $\mathcal{M}_0$ as a linear operator from $X_1$ to its dual space $X_1'$ by \[ (\mathcal{M}_0 g)(f) = \mathfrak{B}[g,f],\quad \forall f,g\in X_1, \] where the bilinear form $\mathfrak{B}$ is \[ \mathfrak{B}[g,f] := \int_0^{\pi} \partial_{\phi}(g\sin\phi) \partial_{\phi}(\bar{f}\sin \phi) \frac{(a-\cos\phi)^2}{\sin\phi}\, d\phi. \] Note that $\mathfrak{B}[g,g] {\ \lesssim \ } a^2 \norm{g}_{X_1}^2$. 
Since $\tilde U_\tau \in L^\infty(0,\pi)$, the difference \[ \mathcal{M}_n -\mathcal{M}_0 =((n\si)^2 -in\si + in\si (\tilde U_\tau +2))I = ((n\si)^2 + in\si (\tilde U_\tau +1))I \] is also well-defined from $X_1$ to $X_1'$: For each $g\in X_1$, $(\mathcal{M}_n-\mathcal{M}_0)g$ is in $X_1'$, mapping any $f\in X_1$ into \[ ((\mathcal{M}_n -\mathcal{M}_0)g)(f) :=((\mathcal{M}_n -\mathcal{M}_0)g,f)_{X_0^a} = ((n\si)^2g -in\si g + in\si (\tilde U_\tau +2)g, f)_{X_0^a}\in \mathbb{C}. \] As for $\mathcal{M}_0$, an element $g$ of the kernel of $\mathcal{M}_n:X_1 \to X_1'$, $n\in \mathbb{Z}$, satisfies $\mathcal{M}_n g = 0$ in the sense that \[ (\mathcal{M}_n g)(f) = (\mathcal{M}_0 g)(f) +((\mathcal{M}_n -\mathcal{M}_0)g)(f) =0, \quad\forall f\in X_1. \] \medskip To find a suitable domain and range for the operator $\mathfrak{L}_0$, we first consider the operator $A_0$. It can be defined on $X_1$ with values in $X_1'$, \EQ{ (A_0h)(f) &=\mathfrak{B}_0[h,f] := \int_0^\pi \partial_{\phi}(h\sin\phi)\partial_{\phi}(\overline{f}\sin\phi) \,\frac{1}{\sin\phi}\, d\phi, } for any $h,f\in X_1$. Following the proof of Lemma \ref{th3.6} with $\tilde U_\phi$ replaced by zero and $\om(\phi)=1$, we get $(A_0h)(f) = (A_0h,f)_{X_0}$ (not $X_0^a$) for any $h,f \in C^2([0,\pi])\cap X_1$. On the other hand, the operators $B_0:X_1\to X_1'$ and $C_0:X_1\to X_1'$ are also well-defined by \[ (B_0h)(f) = (B_0h, f)_{X_0} , \quad (C_0h)(f) = (C_0h, f)_{X_0}, \quad \forall f,h\in X_1. \] Therefore, $\mathfrak{L}_0h = (A_0+B_0)(A_0h) + C_0h$ can be defined on \[ X_3 = \{h \in X_1: \ A_0 h \in X_1\}, \] and $\mathfrak{L}_0 : X_3 \to (X_1)'$ can be defined as \EQN{\label{sL_0.weakdef} (\mathfrak{L}_0 h)(f) = \mathfrak{B}_0[A_0h,f] + (B_0(A_0h), f)_{X_0} + (C_0h, f)_{X_0}, } for any $h$ in $X_3$ and $f$ in $X_1$. Then, \[ \mathfrak{L}_0 h = 0 \] holds in the sense that $h\in X_3$ satisfies \[ (\mathfrak{L}_0h)(f) = 0, \quad \forall f\in X_1. \] We note that $h\in X_3$ implies $A_0h\in X_1$ and $h\in X_1$, and therefore $h$ satisfies the boundary conditions \eqref{zeta_n.bc} by Lemma \ref{prop.sp.X1}. Finally, defining $A_n-A_0$, $B_n-B_0$, and $C_n-C_0$ on $X_1$ by \EQ{ ((A_n-A_0)h)(f) &=((A_n-A_0)h,f)_{X_0},\\ ((B_n-B_0)h)(f) &=((B_n-B_0)h,f)_{X_0},\\ ((C_n-C_0)h)(f) &=((C_n-C_0)h,f)_{X_0} } for any $f$ and $h$ in $X_1$, we can easily check that $A_n$, $B_n$, and $C_n$ map from $X_1$ to $X_1'$. Furthermore, since \[ \norm{(A_n-A_0)h}_{X_1} = \norm{((n\si)^2 - in\si)h}_{X_1} \leq (|n\si|^2 + |n\si|) \norm{h}_{X_1}, \] for $h\in X_1$, $A_0 h \in X_1$ is equivalent to $A_n h\in X_1$ for all $n\in \mathbb{Z}$. This implies that $\mathfrak{L}_n:X_3 \to X_1'$ is well-defined. \begin{remark}$\mathcal{M}_0$ is self-adjoint over $X_0^a$ while $A_0$ is self-adjoint over $X_0$ since $\mathfrak{B}$ and $\mathfrak{B}_0$ are symmetric. Hence their eigenvalues are all real. \end{remark} \begin{remark}The eigenvalue problems $\mathfrak{L}_n h = \mu h $ and $\mathcal{M}_ng = \mu g$ correspond to $\hat{\mathfrak{L}}_n\xi =\mu \rho^{-4} \xi$ and $\hat \mathcal{M}_n u_\th = \mu \rho^{-2} u_\th$. The weights $\rho^{-4}$ and $\rho^{-2}$ are needed to fit the scaling property of DSS perturbations.
\end{remark} \section{Zero and purely imaginary eigenvalues of swirl operators} \label{sec4} In this section, we prove that the trivial solution $g=0$ is the only solution $g\in X_1$ to \EQ{\begin{cases} \mathcal{M}_ng = \mu g,\\ g|_{\phi=0,\pi} = 0, \end{cases}} for either $\mu=0$ or $i\mu \in \mathbb{R}$, and for all $n\in \mathbb{Z}$, $a>1$ and $\si>0$. Note that $\mathcal{M}_n$ depends on both $a>1$ and $\si>0$ for $n\not=0$, while $\mathcal{M}_0$ depends on $a$ only. Recall that $\mathcal{M}_n: X_1 \to X_1'$ is defined for $g,f\in X_1$ by \EQ{ (\mathcal{M}_ng)(f) &= (\mathcal{M}_0g)(f) + ((\mathcal{M}_n-\mathcal{M}_0)g)(f) \\ &= \int_0^{\pi} \partial_{\phi}(g\sin\phi) \partial_{\phi}(\bar{f}\sin \phi) \frac{(a-\cos\phi)^2}{\sin\phi} d\phi +\big((n\si)^2g + in\si(\tilde{U}_{\tau}+1)g,f \big)_{X_0^a}, } where $(,)_{X_0^a}$ is defined in \eqref{X0a}. If such a solution $g$ exists, it satisfies the zero boundary condition by Lemma \ref{prop.sp.X1}. The following is the main theorem of this section. \begin{theorem}\label{eig.sM.thm} For any $a>1$, $\si>0$, the operator $\mathcal{M}_n = A_n+E_n: X_1 \to X_1'$ for any $n\in \mathbb{Z}$ does not have a zero eigenvalue, nor any purely imaginary eigenvalue. \end{theorem} \begin{proof} Fix $a$, $\si$, and $n$. Any eigenfunction $g\in X_1$ of $\mathcal{M}_n$ with eigenvalue $\mu$ satisfies $(\mathcal{M}_n g)(g) = \mu (g,g)_{X_0^a}$. Suppose that $\mu$ is either zero or purely imaginary. Taking the real parts, we get \begin{align*} \Re ((\mathcal{M}_ng)(g)) = 0. \end{align*} However, \EQ{ 0=\Re ((\mathcal{M}_ng)(g)) =\int_0^{\pi} |\partial_{\phi}(g\sin\phi)|^2 \frac{(a-\cos\phi)^2}{\sin\phi} d\phi +\big((n\si)^2g,g\big)_{X_0^a}. } Hence $\partial_{\phi}(g\sin\phi)=0$ on $(0,\pi)$, so $g\sin\phi$ is constant; since $g$ vanishes at $\phi=0,\pi$ by Lemma \ref{prop.sp.X1}, we conclude that $g=0$, which is not an eigenfunction. \end{proof} \begin{remark} Theorem \ref{eig.sM.thm} implies that, if there is a bifurcation curve originating from a Landau solution in the class of DSS, axisymmetric steady flows along a zero eigenfunction of the linearized operator, then the swirl component of the eigenfunction is zero. In view of the nonlinear system \eqref{operator.form} for the perturbation $(\xi,u_\th)$, the solutions on the curve sufficiently close to the Landau solution must have zero swirl components. \end{remark} \section{Analysis of stream operators} \label{sec5} In this section, we analyze the kernel of $\mathfrak{L}_0$ and the eigenvalues of $\mathfrak{L}_n$ both analytically and numerically, with the help of asymptotic analysis. These operators are defined in Lemma \ref{invariance.sL}. Since the Landau solutions \eqref{Landau-sol} form a continuous family with parameter $a$, one expects and can verify that $\mathfrak{L}_0$ has $\partial_a\Psi$ in its kernel. We will first prove that this is the only element in the kernel of $\mathfrak{L}_0$ up to a constant multiple. Then, we present numerical evidence that $\mathfrak{L}_n$ has no zero eigenvalue when $n$ is a non-zero integer, and that there are no purely imaginary eigenvalues. \subsection{Linear operator $\mathfrak{L}_0$ and its kernel} In this subsection, we consider the linear operator $\mathfrak{L}_0$ \EQ{ \mathfrak{L}_0 = (A_0+B_0)A_0 +C_0. } Note that it depends on the parameter $a\in (1,\infty)$ but not on $\si$. Since the Landau solutions \eqref{Landau-sol} form a continuous family, parametrized by $a$, of explicit axisymmetric self-similar solutions to (SNS), one expects and can verify that $\mathfrak{L}_0$ has $\partial_a\Psi$ in its kernel; indeed, since $\Psi$ satisfies \eqref{Psi.eq} for every $a>1$, differentiating \eqref{Psi.eq} in $a$ formally gives $\hat{\mathfrak{L}}\,\partial_a\Psi=0$, and since $\partial_a\Psi$ is independent of $\tau$ it lies in the $n=0$ mode.
In fact, $\partial_a\Psi$ is the unique eigenfunction up to a constant factor, which is reasonable in view of the rigidity result of \cite{Sverak2011} since the zero mode $n=0$ corresponds to $(-1)$-homogeneous functions in $\mathbb{R}^3$. Recall \eqref{Landau-sol2}, \[ \Psi(\phi)=\frac{2 \sin \phi}{a-\cos \phi}, \quad \partial_a\Psi=\frac{-2 \sin \phi}{(a-\cos \phi)^2}. \] \begin{theorem}\label{simple.eigenvalue} For any $1<a<\infty$, the kernel of $\mathfrak{L}_0: X_3 \to X_1'$ is spanned by $\partial_a\Psi$. There is no strictly generalized eigenfunction. \end{theorem} To prove this theorem, we perform the following change of variables: \EQN{\label{variable.z} \cos \phi = z, \quad -\sin \phi \partial_z = \partial_{\phi}. } Then, we can write $\mathfrak{L}_0$ in a simpler form. \begin{lemma}\label{sL_0.z} Let $z,\phi$ satisfy \eqref{variable.z}. For $h\in C^4_{\text{loc}}(0,\pi)$, we have \EQN{\label{tdL0} \frac 1{\sin\phi}\mathfrak{L}_0 h = \tilde L_0 H, \quad H(z) =\sin\phi \,h(\phi), } where the linear operator $\tilde L_0$ is defined by \EQ{ \tilde L_0 H = ((1-z^2)H'+2zH-f_0H)''',\quad f_0(z) = \frac{2(1-z^2)}{a-z}. } \end{lemma} \begin{remark}Here we understand \eqref{tdL0} in the pointwise sense. Note that $\norm{h}_{X_1}^2=\int_{-1}^1 (dH/dz)^2dz$. Note $f_0(z) = \Psi(\phi)\sin \phi$, and $H_0(z) = \partial_a f_0(z) =\partial_a\Psi(\phi)\sin \phi$ will appear in \eqref{H0z.def}. \end{remark} \begin{proof} Under the change of variable \eqref{variable.z}, $A_0 h$ (see \eqref{op.ABCn}) can be written as \EQN{\label{A_0.z} A_0 h = -\frac{d}{d\phi}\bke{\frac1{\sin\phi}\frac{d}{d\phi}(h\sin\phi )} = -\sqrt{1-z^2} H''(z). } This implies that \EQN{\label{A_02.z} A_0^2 h = A_0(A_0 h) = -\sqrt{1-z^2} \partial_z^{(2)}(\sin\phi A_0h) = \sqrt{1-z^2} ((1-z^2) H'')''. } In a similar way, $B_0A_0$ and $C_0$ can also be written as \EQN{\label{B_0.z} B_0(A_0 h) &= -\frac{2\sin\phi}{a-\cos\phi}\partial_{\phi}(A_0 h) + V(A_0 h)\\ &= \frac{2\sin^2\phi}{a-\cos\phi}(-\sqrt{1-z^2} H'')' - V\sqrt{1-z^2} H''\\ &= \frac{2(1-z^2)}{a-z}\bke{-\sqrt{1-z^2}H'''+\frac{z}{\sqrt{1-z^2}}H''} -V\sqrt{1-z^2} H'' } with $V(\phi)= 2 -\frac{4a^2+2a\cos\phi-6}{(a-\cos\phi)^2}=2 -\frac{4a^2+2az-6}{(a-z)^2}$, and \EQN{\label{C_0.z} C_0 h & = -\frac {12(a^2-1)}{(a-\cos\phi)^3} \left(\sin\phi \, h' -\frac{1-a\cos\phi}{a-\cos\phi}h\right)\\ & = -\frac {12(a^2-1)}{(a-\cos\phi)^3} \left(-\sin^2\phi \partial_z(\frac{H}{\sqrt{1-z^2}}) -\frac{1-az}{a-z}\frac{H}{\sqrt{1-z^2}}\right)\\ & = \frac {12(a^2-1)}{(a-z)^3} \left(\sqrt{1-z^2} H' +\frac{H}{\sqrt{1-z^2}} \left(z +\frac{1-az}{a-z}\right)\right)\\ &= \frac {12(a^2-1)\sqrt{1-z^2}}{(a-z)^3} \left( H' +\frac{H}{a-z}\right).\\ } Summing up \eqref{A_02.z}, \eqref{B_0.z}, and \eqref{C_0.z}, we can rewrite $\frac {1}{\sin\phi}\mathfrak{L}_0 h$ as \EQ{ \frac {1}{\sin\phi}\mathfrak{L}_0 h = (1-z^2)H''''-(4z+f_0)H''' -\frac{6(z^2-2az+1)}{(a-z)^2} H'' + \frac{12(a^2-1)}{(a-z)^3}H' + \frac{12(a^2-1)}{(a-z)^4}H, } which matches the right hand side of \eqref{tdL0}. \end{proof} Now, we prove that $\partial_a\Psi$ is the unique solution of $\mathfrak{L}_0 h = 0$ up to a constant factor. \begin{proof}[Proof of Theorem \ref{simple.eigenvalue}] If $h \in X_3$ is in the kernel of $\mathfrak{L}_0$, i.e., $\mathfrak{L}_0 h\equiv0$, we have $h \in C^\infty_\text{loc}(0,\pi)$ by standard regularity theory. By Lemma \ref{sL_0.z}, $\mathfrak{L}_0 h(\phi) =0$ on $(0,\pi)$ is equivalent to $\tilde L_0 H(z)=0$ on $(-1,1)$ for $H(z) = \sin\phi \, h(\phi)\in C^\infty_\text{loc}(-1,1)$ and $z= \cos\phi$.
Then, \begin{align} \label{exp.tdL0} &\tilde L_0 H(z)= ((1-z^2)H'+2zH-f_0H)'''=0 \nonumber\\ &\iff (1-z^2)H'+2zH-f_0H = a_0 + a_1z + a_2 z^2 \quad \text {on } (-1,1), \end{align} for some constants $a_0$, $a_1$, and $a_2$, where $f_0(z) = \frac{2(1-z^2)}{a-z}$. Furthermore, since $h\in X_3$, the function $H(z)=\sin\phi \, h(\phi)$ satisfies $H\in C^1([-1,1])$, $H(-1)=H(1)=0$. Indeed, $H\in C_0([-1,1])$ follows from Lemma \ref{prop.sp.X1}. Moreover, applying Lemma \ref{prop.sp.X1} to $A_0h$, we have $A_0h\in C_0([0,\pi])$ and hence $\sqrt{1-z^2}H''(z) \in C_0([-1,1])$ by \eqref{A_0.z}. Then, writing $H'(z) = H'(0) + \int_0^z \frac{\sqrt{1-w^2} H''(w)}{\sqrt{1-w}\sqrt{1+w}} dw$, the integrability of $\frac{\sqrt{1-w^2} H''(w)}{\sqrt{1-w}\sqrt{1+w}}$ in $[-1,1]$ gives $H'\in C([-1,1])$. Now, taking limits on \eqref{exp.tdL0} as $z\to \pm 1$, we obtain \EQ{ a_0 \pm a_1 + a_2 =0 \implies a_1=0, \ a_2=-a_0. } In other words, \eqref{exp.tdL0} becomes \EQN{\label{eq1} (1-z^2)H' + 2zH -f_0H = a_0(1-z^2). } We can take the derivative of \eqref{eq1} in $z$ to get \[ (1-z^2)H'' +2H - f_0 H' -f_0' H = -2a_0 z. \] Taking again limits as $z\to \pm 1$ and using $f_0(\pm 1)=H(\pm 1)=0$ and $\sqrt{1-z^2}H''(z) \in C_0([-1,1])$, we get \[ 0 + 0 - 0 - 0 = \mp 2a_0. \] Thus $a_0=0$. In other words, to solve $\tilde L_0 H=0$ under the given boundary conditions, it is enough to solve \eqref{eq1} with $a_0=0$. We now look for an integrating factor $k(z)$ such that \EQN{\label{L0.factor1} (1-z^2)H' + 2zH -f_0H =(1-z^2)k^{-1} \frac d{dz} (kH). } Since $f_0(z) = \frac{2(1-z^2)}{a-z}$, \[ \frac{k'}k = \frac{2z -f_0}{1-z^2} = \frac{2z }{1-z^2} + \frac{2}{z-a}. \] Therefore, $k$ satisfies \[ \ln k = \int \frac{2z }{1-z^2} + \frac{2}{z-a} dz =- \ln (1-z^2) + 2 \ln |z-a|+c, \] so that it can be chosen as \[ k= \frac {(a-z)^2}{1-z^2}. \] This implies that the solutions of $\tilde L_0 H =0$ with $H(\pm1) =0$ are \EQN{\label{H0z.def} H = CH_0, \quad H_0(z) =-2k^{-1} = \frac {-2(1-z^2) }{(a-z)^2} } for some constant $C$. Since $H_0(\cos\phi)=\sin \phi \,\partial_a\Psi(\phi)$, we have $h=C \partial_a\Psi$, i.e., any solution $h\in X_3$ of $\mathfrak{L}_0 h=0$ is a multiple of $\partial_a \Psi$. Suppose now we have a generalized eigenfunction $h \in X_3$ satisfying \EQN{\label{sL0h=paPsi} \mathfrak{L}_0 h = \frac12 \partial_a \Psi. } By Lemma \ref{sL_0.z}, $H(z) = h(\phi)\sin \phi$ satisfies \[ \tilde L_0 H =\frac 1{ \sin \phi} \mathfrak{L}_0 h = \frac1{2 \sin \phi} \partial_a \Psi = \frac1{2 \sin^2 \phi} H_0(z) = \frac {-1}{(a-z)^2}. \] Using the formula for $\tilde L_0$ in Lemma \ref{sL_0.z} and integrating three times, we get \[ (1-z^2)H'+2zH-f_0H = G(z) := a_0 + a_1z + a_2 z^2 -(a-z)[ \ln (a-z)-1] \] for some constants $a_0,a_1,a_2$. By the argument in the first part of the proof, we have $G(\pm 1)=0$ and $G'(\pm 1)=0$. The conditions $G(\pm 1)=0$ give \[ a_0+a_1+a_2 = (a-1)[\ln(a-1)-1], \quad a_0-a_1+a_2 = (a+1)[\ln(a+1)-1], \] hence $2a_1=2+(a-1)\ln(a-1)-(a+1)\ln(a+1)$. The conditions $G'(\pm 1)=0$ give \[ a_1+2a_2 =- \ln(a-1), \quad a_1-2a_2 =- \ln(a+1), \] hence $2a_1=- \ln(a-1)- \ln(a+1)$. These two equations for $2a_1$ give \[ f(a):=\frac2a+\ln(a-1)-\ln(a+1)=0. \] But $\lim_{a\to 1^+} f(a)=-\infty$, $\lim_{a\to \infty}f(a)=0$ and $f'(a)=\frac2{a^2(a^2-1)}>0$ for $a>1$. Hence $f(a)<0$ in $(1,\infty)$ and there is no solution $h \in X_3$ of \eqref{sL0h=paPsi}. This completes the proof of Theorem \ref{simple.eigenvalue}.
\end{proof} \begin{remark} \label{sL_0.z2} In view of \eqref{L0.factor1}, we can factorize $\tilde L_0$, \EQN{ \tilde L_0 H(z) = \partial^3_z \bke{ (1-z^2)H_0 \partial_z \frac H{H_0}} = \partial^3_z \bket{ \frac{ (1-z^2)^2}{(a-z)^2} \partial_z \bke{\frac {(a-z)^2}{1-z^2} H }}. } \end{remark} \subsection{Eigenvalues of $\mathfrak{L}_0$} In the following two subsections we study numerically the general eigenvalue problem for the linear operator $\mathfrak{L}_n$, $n \in \mathbb{Z}$. This is necessary even for the numerical study of the zero eigenvalue of $\mathfrak{L}_n$, because of numerical errors. The focus of our study is to examine whether the real parts of all eigenvalues are positive, except for the zero eigenvalue associated with $\partial_a \Psi$. A positive result would imply that there is no nontrivial zero eigenfunction and no purely imaginary eigenvalue, and would be evidence that there is no bifurcation from the Landau solutions. Consider the general eigenvalue problem for the linear operator $\mathfrak{L}_n$: \EQ{\begin{cases} \mathfrak{L}_n h =\mu h\\ h|_{\phi=0,\pi}=A_n h|_{\phi=0,\pi} = 0. \end{cases}} In this problem, we look for an eigenvalue $\mu\in \mathbb{C}$ and an eigenfunction $h\in X_3$ such that for any $f\in X_1$, we have \[ (\mathfrak{L}_n h)(f) = \mu(h,f)_{X_0}. \] Recall the definitions of the spaces $X_0$, $X_1$, $X_3$ and of $\mathfrak{L}_n:X_3\to X_1'$ in Section \ref{ftnsp}. By the decomposition $\mathfrak{L}_n = (A_n+B_n)A_n + C_n$ in Lemma \ref{invariance.sL}, we can rewrite the eigenvalue problem as \EQN{\label{eq5.10} \begin{pmatrix} I & -A_n\\ A_n +B_n & C_n \end{pmatrix} Y =\mu \begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix} Y, \quad Y= \begin{pmatrix} A_n h \\ h\end{pmatrix}. } This formulation seems natural because the two components of $Y$ live in the same space $X_1$. It is convenient for the numerical study as it changes a fourth-order system into a second-order one. We will apply a finite difference scheme to a discretized version of \eqref{eq5.10}. As a stationary Navier-Stokes flow satisfying the bound \eqref{SNS2} has higher regularity \eqref{SNS3}, we can show sufficient regularity of the solution for the convergence of the finite difference scheme. We consider two cases: one is for $n=0$ and the other is for $n\neq 0$. We study the case $n=0$ in this subsection and the case $n \not = 0$ in the next subsection. We first describe our numerical observations for the case $n=0$: \begin{enumerate} \item The smallest absolute value of the eigenvalues is close to $0$, and the second smallest is away from $0
$. This agrees with Theorem \ref{simple.eigenvalue} that the eigenvalue $0$ of $\mathfrak{L}_0$ only has the eigenfunction $\partial_a\Psi$ up to a constant multiple. It also suggests that there is no strictly generalized eigenfunction $h(\phi)$ with $\mathfrak{L}_0 h = \partial_a\Psi$. \item All eigenvalues are \emph{real and (almost) nonnegative.} \end{enumerate} We cannot explain the second numerical observation above. One might guess that $\mathfrak{L}_0$ is self-adjoint in $$L^2(0,\pi; w(\phi)\,d\phi)$$ for some weight function $w(\phi)$, but this is disproved by the following lemma. \begin{lemma} The bilinear form \[ B(g,h) = \int_0^\pi g (\mathfrak{L}_0 h) w(\phi)\,d\phi,\quad g,h \in C^\infty_c(0,\pi), \] is not symmetric for any smooth weight $w(\phi) >0$ in $(0,\pi)$: For any such weight $w$, there are $g,h\in C^\infty_c(0,\pi)$ so that $B(g,h)\not= B(h,g)$. \end{lemma} \begin{proof} Change variables \[ z = \cos \phi, \quad G(z) = g(\phi)\sin \phi, \quad H(z)= h(\phi)\sin \phi, \quad S(z) = \sin \phi. \] By Lemma \ref{sL_0.z} and Remark \ref{sL_0.z2}, \EQ{ B(g,h) &= \int_{-1}^1 \frac {G}S \, \bke{S \tilde L_0 H}\frac w S\, dz = \int_{-1}^1 \frac {Gw}S \, \tilde L_0 H\, dz \\ &= \int_{-1}^1 \frac {Gw}S \,\partial_z^3 \bke{ Q \partial_z (k H)}\, dz, } where \[ Q= \frac{ (1-z^2)^2}{(a-z)^2}, \quad k = \frac {(a-z)^2}{1-z^2} . \] Suppose $w=kS W$ for some $W>0$. Then \[ B(g,h) = \int_{-1}^1 kGW \,\partial_z^3 \bke{ Q \partial_z (k H)}\, dz = \int_{-1}^1 kG \, L (kH)\, dz = \int_{-1}^1 L^*(kG ) \, kH\, dz, \] where \[ Lu = W\partial_z^3 \bke{ Q \partial_z u} = W\bke{Q u^{(4)} + 3Q' u''' + 3 Q''u'' + Q''' u'}, \] \EQ{ L^*u &=\partial_z \bke{Q\partial_z^3 \bke{ W u}} \\ &= QW u^{(4)} +(Q'W+ 4QW') u''' + (3 Q'W'+6QW'')u'' \\ & \qquad \qquad \quad + (3Q'W''+4QW''') u' + (QW''')' u. } For $B(g,h)$ to be symmetric, we need $L=L^*$, and hence their coefficients should match. Matching $u'''$ coefficients, \[ 3Q'W = Q'W+ 4QW', \quad \text{i.e.}\quad WQ'= 2 QW'. \] We get $2W'/W = Q'/Q$, $W^2=cQ$. We may choose $c=1$ and hence \[ W=Q^{1/2} = \frac{ 1-z^2}{a-z} = z+a - \frac{a^2- 1}{a-z}. \] Matching $u$ coefficients, $0= (QW''')' $, hence \[ c = Q W''' = W^2 \frac{-6(a^2-1)}{(a-z)^4}, \] which is a contradiction. The lemma is proved. \end{proof} We formulate a conjecture. \begin{conjecture}\label{conj.sL0} For all $a>1$, all nonzero eigenvalues of the linear operator $\mathfrak{L}_0$ are real and positive. \end{conjecture} In the finite difference scheme applied to the second order ODE system \eqref{eq5.10}, we first introduce a finite-dimensional approximate eigenvalue problem for the operator $\mathfrak{L}_0$ \EQN{\label{ex.sL0} \begin{pmatrix} I & -A_0\\ A_0 +B_0 & C_0 \end{pmatrix} Y =\mu \begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix} Y. } This finite-dimensional problem is obtained by approximating the eigenvalue problem at a finite number of points $\{\phi_k\}_{k=0}^{N+1}$ on the interval $[0,\pi]$ defined by % \EQ{ \phi_k = \frac{\pi}{N+1}k =: \delta k, \quad k=0,\cdots,N+1. } The boundary conditions with $\phi_0=0$, and $\phi_{N+1}=\pi$ give \EQN{\label{bc.matrix} h(\phi_0)=h(\phi_{N+1}) =0,\quad A_0h(\phi_0)=A_0h(\phi_{N+1})=0. } The first and the second derivatives are approximated by \EQN{\label{der.hd} h'(\phi_k) \sim \frac{h_{k+1}-h_{k-1}}{2\delta},\quad h''(\phi_k) \sim \frac{h_{k+1}-2h_{k}+h_{k-1}}{\delta^2}, \quad \forall k=1,\cdots,N } where $h_k=h(\phi_k)$. 
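Although the computations reported below were carried out in MATLAB, the structure of the scheme is simple to reproduce. The following is a minimal illustrative sketch in Python/NumPy (it is not the code used to produce the tables below); the interior-grid treatment of the boundary conditions \eqref{bc.matrix}, the use of dense matrices with a dense generalized eigensolver, and the filtering of the spurious infinite eigenvalues caused by the singular right-hand matrix are assumptions of this sketch only.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigvals

# Illustrative sketch only; the tables in this paper were produced with
# MATLAB's eig/eigs.  Assemble central-difference approximations of A_0,
# B_0, C_0 on the interior grid phi_k = k*pi/(N+1), k = 1,...,N, and solve
# the generalized eigenvalue problem
#   [ I       -A_0 ] Y = mu [ 0  0 ] Y .
#   [ A_0+B_0  C_0 ]        [ 0  I ]
def build_operators(a, N):
    delta = np.pi / (N + 1)
    phi = delta * np.arange(1, N + 1)      # interior points phi_1, ..., phi_N
    s, c = np.sin(phi), np.cos(phi)
    # first/second derivative matrices; the zero boundary values at phi_0 = 0
    # and phi_{N+1} = pi are imposed implicitly by dropping boundary columns
    D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * delta)
    D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
          + np.diag(np.ones(N - 1), -1)) / delta**2
    # A_0 h = -h'' - cot(phi) h' + h / sin^2(phi)
    A0 = -D2 - np.diag(c / s) @ D1 + np.diag(1 / s**2)
    B0 = np.diag(-2 * s / (a - c)) @ D1 \
         + np.diag(2 - (4 * a**2 + 2 * a * c - 6) / (a - c)**2)
    C0 = np.diag(-12 * (a**2 - 1) * s / (a - c)**3) @ D1 \
         + np.diag(12 * (a**2 - 1) * (1 - a * c) / (a - c)**4)
    return A0, B0, C0

def eigenvalues_L0(a, N):
    A0, B0, C0 = build_operators(a, N)
    I, Z = np.eye(N), np.zeros((N, N))
    M = np.block([[I, -A0], [A0 + B0, C0]])   # left-hand 2N x 2N matrix
    B = np.block([[Z, Z], [Z, I]])            # singular right-hand matrix
    mu = eigvals(M, B)                        # generalized eigenvalues
    return mu[np.isfinite(mu)]                # drop the spurious infinite ones

mu = eigenvalues_L0(a=1.1, N=320)
print(np.sort(mu.real)[:2])   # first and second minima of the real parts
\end{verbatim}
The operators for $n\neq 0$ can be treated in exactly the same way, since the additional $in\si$-terms in \eqref{op.ABCn} are multiplication operators and only contribute diagonal matrices.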
Based on these approximations, the operators $A_0$, $B_0$ and $C_0$ can be expressed as $N\times N$ matrices acting on $(h_1,\ldots,h_N)^T$ via \eqref{der.hd}, and \eqref{ex.sL0} becomes an eigenvalue problem for a $2N\times 2N$ matrix with the eigenvector $Y \in \mathbb{C}^{2N}$. Now, we find the eigenvalue $\mu$ in \eqref{ex.sL0} with the assistance of MATLAB, using the commands \texttt{eig} and \texttt{eigs}. Tables \ref{min0} and \ref{2min0} list the first and second minima of the real parts of the eigenvalues of $\mathfrak{L}_0$, respectively. Recall that $\mathfrak{L}_0$ depends on $a$ but not on $\si$. The notation 4.3375e+06 means $4.3375\cdot 10^{+06}$. \begin{table}[H] \caption{Minimum of real parts of eigenvalues of $\mathfrak{L}_0$}\label{min0} \begin{center} \begin{tabular}{|l||*{6}{c|}} \hline \diagbox{$N$}{$a$} & 1.001 & 1.01 & 1.1 & 1.2 & 2\\ \hline 100 & -4.3375e+06 & -526.5826 & -0.4929 & -0.1113 & -0.0066\\ 320 & -4.9314e+04 & -19.8387 & -0.0465 & -0.0108 & -6.5386e-04\\ 640 & -5.9662e+03 & -4.0271 & -0.0116 & -0.0027 & -1.6395e-04\\ 900 & -0.24404e+03 & -1.9419 & -0.0059 & -0.0014 & -8.2981e-05\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[H] \caption{Second minimum of real parts of eigenvalues of $\mathfrak{L}_0$}\label{2min0} \begin{center} \begin{tabular}{|l||*{6}{c|}} \hline \diagbox{$N$}{$a$}& 1.001 & 1.01 & 1.1 & 1.2 & 2\\ \hline 100 & 11.9690& 11.7248& 18.7715& 20.3521& 23.0242\\ 320 & 11.9535& 13.3592& 19.1610& 20.4829& 23.0448\\ 640 & 11.9611& 14.9310& 19.1929& 20.4937& 23.0465\\ \hline \end{tabular} \end{center} \end{table} As mentioned at the beginning of this subsection, we find that all eigenvalues of $\mathfrak{L}_0$ for the specific choices of $a$ and $N$ in the tables are real-valued. To further support the observation about real eigenvalues, we fix $N=640$, consider the additional values $a=10, 10^2, 10^4, 10^6$, and still obtain only real eigenvalues. In this investigation, we used the command \texttt{eig} to obtain an array of all ($2N$) eigenvalues, \texttt{imag} to extract the imaginary parts, and \texttt{min} and \texttt{max} to see that, indeed, all imaginary parts are zero. (MATLAB returned exact 0, not something like $10^{-6}$.) We then obtain the first and second minima of the real parts of all eigenvalues of $\mathfrak{L}_0$, using the command \texttt{eig}. For comparison, we also applied the above procedure to $\mathfrak{L}_1$, and found that $\mathfrak{L}_1$ has eigenvalues with non-zero imaginary parts. Thus the observation that $\mathfrak{L}_0$ has only real eigenvalues should not be due to a coding error. In the tables, $0$ does not appear as an eigenvalue, although in theory it is an eigenvalue with eigenfunction $\partial_a\Psi$. Instead, the minimum of the real parts of the eigenvalues of $\mathfrak{L}_0$ is negative, and it approaches $0$ very quickly as $N$ increases (at least for $a\ge1.1$). Also, the second minimum is quite far from the first minimum. Thus, we guess that the true eigenvalue corresponding to the negative real part is $0$. To confirm this, we compute the cosine of the angle between the approximate eigenfunctions and the true eigenfunction $\partial_a\Psi$. As we see in Table \ref{angle}, the cosine of the angle is almost $1$, which means the approximate eigenfunction is essentially the same as the true eigenfunction.
\begin{table}[H] \caption{Cosine of angle between the approximated eigenfunction and $\partial_a\Psi$}\label{angle} \begin{center} \begin{tabular}{|l||*{6}{c|}} \hline \diagbox{$N$}{$a$}& 1.001 & 1.01 & 1.1 & 1.2 & 2\\ \hline 640 & 0.9988 & 1 & 1& 1 & 1\\ \hline \end{tabular} \end{center} \end{table} Moreover, only one eigenvalue is close to zero. This is evidence that $\partial_a\Psi$ is, up to constant multiples, the unique eigenfunction of $\mathfrak{L}_0$ for the eigenvalue $0$. Also, we observe that all the other eigenvalues are positive, which provides evidence for Conjecture \ref{conj.sL0}. \subsection{Eigenvalues of $\mathfrak{L}_n$, $n\not =0$} In this subsection we study the eigenvalues of $\mathfrak{L}_n$ in the second case $n\not =0$. First, we claim that it is enough to consider $n\in \mathbb{N}$, i.e., $n>0$, because the $n$-mode and the $(-n)$-mode have the same real parts of eigenvalues. \begin{lemma}\label{restictN} $\mu$ is an eigenvalue of $\mathfrak{L}_n$ iff $\bar{\mu}$ is an eigenvalue of $\mathfrak{L}_{-n}$. In particular, $\mathfrak{L}_{n}$ and $\mathfrak{L}_{-n}$, $n\in \mathbb{Z}\setminus\{0\}$, share the same real parts of eigenvalues. \end{lemma} \begin{remark} The analogue of this lemma also holds for $\mathcal{M}_n$, $n\in \mathbb{Z}\setminus\{0\}$, i.e., $\mu$ is an eigenvalue of $\mathcal{M}_n$ iff $\bar{\mu}$ is an eigenvalue of $\mathcal{M}_{-n}$. \end{remark} \begin{proof} Recall the decomposition $\mathfrak{L}_n = (A_n+B_n)A_n + C_n$ in Lemma \ref{invariance.sL}. The operators $A_n$, $B_n$ and $C_n$ defined by \eqref{op.ABCn} satisfy \[ \overline{A_n h } = A_{-n} \bar h,\quad \overline{B_n h } = B_{-n} \bar h,\quad \overline{C_n h } = C_{-n} \bar h. \] This shows \EQ{ \overline{\mathfrak{L}_n h} = \mathfrak{L}_{-n} \bar{h}. } This equality implies that \EQN{ \mathfrak{L}_n h = \mu h \iff \mathfrak{L}_{-n}\bar{h} = \bar{\mu}\bar{h}, } and the statement of the lemma follows. \end{proof} By Lemma \ref{restictN}, it is enough to consider the eigenvalue problem for the linear operator $\mathfrak{L}_n$ only for $n\in \mathbb{N}$. The next lemma shows that it suffices to consider $n=1$. \begin{lemma}\label{restictn=1} An eigenpair $(\mu,h)$ of $\mathfrak{L}_n$ for parameters $(a,\si)$ is also an eigenpair of $\mathfrak{L}_1$ for parameters $(a,n\si)$. \end{lemma} This is because $n$ appears in the expressions of $A_n$, $B_n$, $C_n$ and $\mathfrak{L}_n$ only through the product $n\si$. Therefore, it is enough to consider the eigenvalues of $\mathfrak{L}_1$ for any $a>1$ and $\si>0$. Our numerical evidence suggests the following. \begin{conjecture}\label{conj.sL1} For all $a>1$ and $\si>0$, all eigenvalues of the linear operator $\mathfrak{L}_1$ have positive real parts. \end{conjecture} We provide some numerical evidence for this conjecture. \medskip \textbf{Step 1}.\ Finding the eigenvalues of the linear operator $\mathfrak{L}_1$. As we did for $\mathfrak{L}_0$, by the finite difference scheme the eigenvalue problem for $\mathfrak{L}_1$ can be written in the form of a matrix equation: \EQN{\label{ex.sL} \begin{pmatrix} I & -A_1\\ A_1 +B_1 & C_1 \end{pmatrix} Y =\mu \begin{pmatrix} 0 & 0\\ 0 & I \end{pmatrix} Y.
} \begin{table}[H] \begin{center} \caption{Minimum of real parts of eigenvalues of $\mathfrak{L}_1$, $N=640$}\label{min.sL1} \begin{tabular}{|l||*{8}{c|}} \hline \diagbox[width=12mm] {$\si$}{$a$} & 1.001 & 1.01 & 1.1 & 1.2 & 1.5 & 2 & 5 & 10\\ \hline .001 & -5931.5 & -2.8719 &-0.0101 & -0.0025 & -0.0005 & -0.00015 & -0.00001 & 0.000001\\ \hline 0.01 & -0.0030 & 11.9444 & 0.1391 & 0.0209 & 0.0025 & 0.00091 & 0.00053 & 0.00051\\ \hline 0.1 & 12.0776 & 11.9809 & 11.7450 & 2.5152 & 0.2995& 0.1077 & 0.0546 & 0.0511\\ \hline 1 & 21.9890 & 22.0544 & 21.6837 & 20.8485 & 17.5098 & 10.5236 & 6.3884 & 6.0898\\ \hline 10 & 10911 & 10913 & 10975 & 11013 & 11048 & 11024 & 10706 & 10547\\ \hline 50 & 6272600 & 6273900 & 6281400 &6285600 &6291300 & 6293900 & 6288200 & 6279700\\ \hline \end{tabular} \end{center} \end{table} Then, with the assistance of MATLAB, we obtain Table \ref{min.sL1} of the minimum of the real parts of the eigenvalues of $\mathfrak{L}_1$. From the values in the table, we observe that the minimum of the real parts of the eigenvalues is positive except when $\si\ll 1$. Thus, this supports Conjecture \ref{conj.sL1} for $\si\gtrsim 1$. Also, as $\si$ increases, the minimum of the real parts of the eigenvalues also increases. When $\si\ll 1$, Table \ref{min.sL1} suggests that we may have eigenvalues with negative real parts. However, it is possible that the negative approximate eigenvalues are due to approximation errors. This is supported by the following comparisons with the case $n=0$. Recall that \EQ{ \mathfrak{L}_1 h &=(A_1+B_1) A_1h +C_1h\\ &=(A_0 + B_0 + \si^2 I)(A_0h + \si^2h) +\si^2 \left(1+\frac {2(a^2-1) }{(a-\cos \phi)^2}\right)h+ C_0 h\\ &\quad +i\si\left(\frac {2(a^2-1) }{(a-\cos \phi)^2}(A_0h + \si^2h)-B_0h \right) + 12 i\si \frac{(a^2-1)\sin^2\phi}{(a-\cos \phi)^4}h\\ &= [(A_0+B_0)A_0 +C_0]h+Th = \mathfrak{L}_0 h+Th } where \EQ{ Th =& \ i\si\left(\frac {2(a^2-1) }{(a-\cos \phi)^2} A_0h- B_0h+ \frac{12(a^2-1)\sin^2\phi}{(a-\cos \phi)^4}h\right)\\ &+\si^2\left(2 A_0h + B_0h+ h+\frac {2(a^2-1) }{(a-\cos \phi)^2}h\right) +i\si^3\frac {2(a^2-1) }{(a-\cos \phi)^2}h+ \si^4h \\ =:& \ \si T_1h + \si^2T_2h+\si^3T_3h+\si^4T_4h. } Since $T=O(\si)$, it can be considered as a perturbation of $\mathfrak{L}_0$ for sufficiently small $\si$. In other words, $\mathfrak{L}_1=\mathfrak{L}_0+T(\si)$ is a perturbation of $\mathfrak{L}_0$ when $\si\ll 1$. By perturbation theory, we expect the eigenvalues of $\mathfrak{L}_1$ to be perturbations of those of $\mathfrak{L}_0$. This is evidenced by Table \ref{comparison}: for sufficiently small $\si$ (in particular $\si=0.001$), the operator $\mathfrak{L}_1$ numerically has a negative minimum of the real parts of the eigenvalues because $\mathfrak{L}_0$ already has a negative numerical minimum of the real parts of its eigenvalues. \begin{table}[H] \caption{Comparison between the minima of real parts of eigenvalues. The notation 5.9662e+03 means $5.9662\cdot 10^{+03}$.
}\label{comparison} \begin{center} \begin{tabular}{|p{1.8cm}||*{7}{c|}} \hline &\diagbox{$N$}{$a$}& 1.001 & 1.01 & 1.1 & 1.2 & 2\\ \hline \multirow{2}{*}{$n=0$} & 640 & -5.9662e+03 & -4.0271 & -0.0116 & -0.0027 & -1.6395e-04\\ & 900 & -0.24404e+03 & -1.9419 & -0.0059 & -0.0014 & -8.2981e-05\\ \hline \multirow{2}{\linewidth}{$n=1$ $\si=0.001$} & 640 &-5.9315e+03 & -2.8719 &-0.0101 & -0.0025 & -0.00015\\ & 900 & -2.3882e+03 & -0.5854 & -0.0044 & -0.001 & -7.2192e-05\\ \hline \end{tabular} \end{center} \end{table} \medskip \textbf{Step 2}.\ Asymptotic analysis of the minimum of the real part of the eigenvalues of $\mathfrak{L}_1$ for small $\si$. \smallskip To resolve the issue for $\si \ll 1$ mentioned in the end of the previous step, we revise our numerical scheme based on asymptotic analysis. Since we already know that $(0,\partial_a\Psi)$ is an eigen-pair of $\mathfrak{L}_0$, we decompose an eigenfunction $h$ of $\mathfrak{L}_1$ as \EQ{ h = \partial_a\Psi + \eta, \quad \int_0^{\pi} \partial_a\Psi\cdot \eta\, d\phi =0. } One may add a weight like $\sin \phi$ in the orthogonality condition. We skip it for simplicity. Noting $\mathfrak{L}_0 \partial_a\Psi = 0$, we have the equation for the perturbation $\eta$: \begin{equation}\label{eta.eq} \begin{cases} \mathfrak{L}_0 \eta + T(\partial_a\Psi + \eta) = \mu (\partial_a\Psi + \eta)\\ \int_0^{\pi} \partial_a\Psi\cdot \eta\, d\phi =0\\ A_0\eta|_{\phi=0,\pi} = \eta|_{\phi=0,\pi} =0. \end{cases} \end{equation} By matching the order of each term in $\si \ll 1$, we expect the expansion of $\mu$ and $\eta$ as \EQN{ \mu = \sum_{k=1}^{\infty} \mu_k \si^k,\quad \eta = \sum_{k=1}^{\infty} \eta_k \si^k, } and the equation of order $\si^1$ from \eqref{eta.eq} is \EQN{\label{Osi-eq} \mathfrak{L}_0 \eta_1 &= -T_1 \partial_a\Psi+ \mu_1 \partial_a\Psi\\ &=i\left(-\frac {2(a^2-1) }{(a-\cos \phi)^2}A_0\partial_a\Psi + B_0\partial_a\Psi - \frac{12(a^2-1)\sin^2\phi}{(a-\cos \phi)^4}\partial_a\Psi\right) + \mu_1 \partial_a\Psi. } Since $\Re(\mathfrak{L}_0\eta_1) = \mathfrak{L}_0(\Re\eta_1)$, $\partial_a\Psi$ is a real-valued function, and the operators $A_0$ and $B_0$ map real-valued functions to real-valued, the real-part of \eqref{Osi-eq} is \EQN{\label{eig.eta1}\begin{cases} \mathfrak{L}_0 \Re(\eta_1) =\Re(\mu_1) \partial_a\Psi\\ \int_0^{\pi} \partial_a\Psi \cdot \Re(\eta_1) d\phi = 0\\ A_0\Re(\eta_1)|_{\phi=0,\pi}=\Re(\eta_1)|_{\phi=0,\pi} = 0. \end{cases}} By Theorem \ref{simple.eigenvalue}, the solution of \eqref{eig.eta1} should be \begin{align}\label{sol.first.order} (\Re \mu_1, \Re \eta_1)=(0,0). \end{align} Indeed, when $\Re(\mu_1)= 0$, the solutions $\Re (\eta_1)$ of the first equation with the boundary conditions are constant multiples of $\partial_a\Psi$. Then, by the orthogonality condition, we obtain $\Re \eta_1=0$. On the other hand, when $\Re(\mu_1)\neq 0$, we have no solution because of the non-existence of strictly generalized eigenfunction for zero eigenvalue. 
We also observed \eqref{sol.first.order} numerically by solving \EQN{\label{matrix.sL1.asymp} \begin{pmatrix} I & -A_0 & 0\\ A_0+B_0 & C_0 & -\partial_a\Psi\\ 0 & (\partial_a\Psi)^T & 0 \end{pmatrix} Y_1 =\begin{pmatrix} 0\\ 0 \\0 \end{pmatrix}, } where $Y_1\in \mathbb{R}^{2N+1}$ is the discretization of $\Re (A_0 \eta_1, \eta_1, \mu_1)^T$, \[ Y_1 = \bke{ A_0\Re(\eta_1)(\phi_1), \ldots, A_0\Re(\eta_1)(\phi_N), \Re(\eta_1)(\phi_1) , \ldots, \Re(\eta_1)(\phi_N) , \Re(\mu_1) }^T, \] and for $\partial_a\Psi$ in \eqref{matrix.sL1.asymp}, we use \EQ{ \partial_a\Psi = (\partial_a\Psi(\phi_1),\ldots,\partial_a\Psi(\phi_N))^T. } The system \eqref{matrix.sL1.asymp} consists of $2N+1$ equations. The first $N$ equations force $Y_1$ to be of the form $Y_1=(A_0 \xi, \xi, \nu)^T$. The next $N$ equations correspond to the first equation in \eqref{eig.eta1}. The last equation in \eqref{matrix.sL1.asymp} corresponds to the orthogonality condition in \eqref{eig.eta1}. We omit our numerical results for \eqref{sol.first.order} as we have given a proof of it. \medskip Now, consider the real part of the equation of order $\si^2$ from \eqref{eta.eq}: \EQN{\label{2nd.ord.eq} \mathfrak{L}_0 \Re(\eta_2) -\Re(\mu_2)\partial_a\Psi =& \ -T_2\partial_a\Psi+\Im(T_1)\Im(\eta_1)-\Im(\mu_1)\Im(\eta_1)\\ =& \ -(2A_0 + B_0)\partial_a\Psi - \left(1 + \frac {2(a^2-1) }{(a-\cos \phi)^2}\right)\partial_a\Psi \\ &+ \left( \frac {2(a^2-1) }{(a-\cos \phi)^2} A_0- B_0 +12\frac{(a^2-1)\sin^2\phi}{(a-\cos \phi)^4}I \right)\Im(\eta_1)\\ &- \Im(\mu_1)\Im(\eta_1). } Here, $\Re(\eta_2)$ and $\Re(\mu_2)$ are unknown and ($\Im(\mu_1)$, $\Im(\eta_1)$) can be obtained by solving the imaginary part of the $O(\si)$-equation \eqref{Osi-eq}. Also, we compute \EQ{ (2A_0+B_0)\partial_a\Psi &= -\frac{4 (a^2-1) \sin\phi}{(a-\cos\phi)^4} =\frac{2 (a^2-1) }{(a-\cos\phi)^2}\partial_a\Psi. } In the same way as for the real part of the $O(\si)$-equation, we solve \eqref{2nd.ord.eq} under the following boundary and orthogonality conditions: \EQ{ A_0\Re(\eta_2)|_{\phi=0,\pi}=\Re(\eta_2)|_{\phi=0,\pi} =0, \quad \int_0^{\pi} \partial_a\Psi\cdot \Re(\eta_2) d\phi =0. } \begin{table}[H] \caption{Values of $\Re(\mu_2)$}\label{mu2} \begin{center} \begin{tabular}{|l|*{7}{c|}} \hline \diagbox{$N$}{$a$}& 1.001 & 1.01& 1.1& 1.2& 2 & 10 & 100\\ \hline 320 & 40.4784& 13.2605& 6.8694 &6.0795 &5.2064&5.0067&5.0001\\ \hline 640 & 34.7380& 13.1886& 6.8677 &6.0790 &5.2063&5.0067&5.0001\\ \hline 1000 & 33.9805& 13.1748& 6.8674 &6.0788 &5.2063&5.0067&5.0001\\ \hline 2000 & 33.6191& 13.1677& 6.8673 &6.0788 &5.2063&5.0067&5.0001\\ \hline 3000 & 33.5545& 13.1664& 6.8672 &6.0788 &5.2063&5.0067&5.0001\\ \hline \end{tabular} \end{center} \end{table} Then, we obtain Table \ref{mu2} for the values of $\Re(\mu_2)$. Note that the eigenvalue $\mu$ of the linear operator $\mathfrak{L}_1$ satisfies \EQN{\label{mu} \Re(\mu) \sim \Re(\mu_2) \si^2 } because $\Re(\mu_1)=0$. Thus, the positivity of the values of $\Re(\mu_2)$ implies that $\Re(\mu)$ is positive for sufficiently small $\si$. This provides numerical evidence for the desired spectral property of $\mathfrak{L}_n$, $n\neq 0$, even in the case $\si\ll 1$. On the other hand, in Table \ref{mu2} we observe that, as $a$ goes to infinity, the value of $\Re(\mu_2)$ stabilizes. Also, for sufficiently small $a$, as $N$ goes to infinity, $\Re(\mu_2)$ is relatively stable. Considering \eqref{mu}, the value of $\Re(\mu)$ is numerically stable for sufficiently small $\si$.
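For the reader's convenience, we indicate how the bordered linear systems arising at each order in $\si$ can be solved; the following Python/NumPy fragment is an illustrative sketch only (the computations above were done in MATLAB), and the Riemann-sum treatment of the orthogonality condition is an assumption of the sketch. With \texttt{rhs}$=0$ it corresponds to \eqref{matrix.sL1.asymp}, while with \texttt{rhs} equal to a discretization of the right-hand side of \eqref{2nd.ord.eq} (assembled from $\Im(\eta_1)$ and $\Im(\mu_1)$, which can be obtained by an analogous solve for the imaginary part of \eqref{Osi-eq}) the returned value approximates $\Re(\mu_2)$.
\begin{verbatim}
import numpy as np

# Illustrative sketch only.  A0, B0, C0 are the N x N finite-difference
# matrices of the previous sketch, dPsi is the grid vector of
# d_a Psi(phi) = -2 sin(phi)/(a - cos(phi))^2, rhs has length N, and delta
# is the grid spacing (its value is immaterial in the homogeneous
# orthogonality row).
def bordered_solve(A0, B0, C0, dPsi, rhs, delta):
    N = A0.shape[0]
    col = dPsi.reshape(N, 1)
    M = np.block([
        [np.eye(N),        -A0,            np.zeros((N, 1))],  # defines A0*eta
        [A0 + B0,           C0,            -col            ],  # L0*eta - mu*dPsi = rhs
        [np.zeros((1, N)),  delta * col.T, np.zeros((1, 1))],  # orthogonality (Riemann sum)
    ])
    b = np.concatenate([np.zeros(N), rhs, [0.0]])
    sol = np.linalg.solve(M, b)     # assumes the discrete matrix is nonsingular
    return sol[N:2 * N], sol[-1]    # (eta on the grid, mu)
\end{verbatim}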
\subsection{Summary} In this section, we analyzed the kernel and eigenvalues of $\mathfrak{L}_n$ both analytically and numerically, with the help of asymptotic analysis. Here is a summary for all $a>1$ and $\si>0$: \begin{enumerate} \item We proved that the kernel of $\mathfrak{L}_0$ is spanned by $\partial_a \Psi$. We also proved that $\mathfrak{L}_0$ is not symmetric with respect to any weight. \item We presented numerical evidence that $\mathfrak{L}_0$ only has real eigenvalues, and the second smallest eigenvalue is positive. \item For $\mathfrak{L}_n$, $n\not =0$, we presented numerical evidence that $\mathfrak{L}_n$ has complex eigenvalues, and the real parts of all its eigenvalues are positive. In particular, it has no zero eigenvalue nor purely imaginary eigenvalue. \item Our numerical observations support the conclusion that there is no bifurcation for $1.01<a<\infty$. Conjectures \ref{conj.sL0} and \ref{conj.sL1} suggest that Landau solutions are stable under DSS no-swirl axisymmetric perturbations. \end{enumerate} \section*{Acknowledgments} The research of both Kwon and Tsai was partially supported by NSERC grant RGPIN-2018-04137 (Canada). \bibliographystyle{abbrv}
\section{Introduction} Throughout this paper, we assume that $(R,\frak{m})$ is a Cohen-Macaulay (abbreviated to CM) local ring of positive dimension $d$ with infinite residue field and $I$ an $\frak{m}$-primary ideal. The ideals of the form $I^{m+1}:I^m=\{x\in R\vert\ \ xI^m\subseteq I^{m+1}\}$ increase with $m$. The union of this family was first studied by Ratliff and Rush [\ref{Rr}]. Let us denote $\widetilde{I}=\cup_{m\geq 1}(I^{m+1}:I^m)$. The ideal $\widetilde{I}$ is called {\it the Ratliff-Rush ideal associated} with $I$ or {\it the Ratliff-Rush closure} of $I$. Ratliff and Rush showed that $\widetilde{I}$ is the largest ideal $J$ for which $J^m=I^m$ for all large $m$, and hence that $\widetilde{\widetilde{I}}=\widetilde{I}$. More generally, they proved that $\widetilde{I^m}=\cup_{k\geq 1}(I^{m+k}:I^k)$ and $\widetilde{I^m}=I^m$ for all large $m$; in particular, it holds that $$\widetilde{I^m}=\cup_{k\geq 1}(I^{m+k}:(x_1^k,...,x_d^k)),$$ where $x_1,...,x_d$ is a system of parameters contained in $I$. An ideal $I$ for which $\widetilde{I}=I$ is called {\it Ratliff-Rush closed}. There exist many Ratliff-Rush closed ideals, for example, all radical ideals and all integrally closed ideals. In [\ref{Hls}], Heinzer, Lantz and Shah showed that the depth of the associated graded ring $gr_I(R)=\bigoplus_{m\geq 0}\dfrac{I^m}{I^{m+1}}$ is positive iff all powers of $I$ are Ratliff-Rush ideals. For example, all powers of an ideal are Ratliff-Rush closed whenever the ideal is generated by a regular sequence. Recall that an ideal $J\subseteq I$ is called a reduction of $I$ if $I^{m+1}=JI^{m}$ for some non-negative integer $m$. A reduction $J$ is called a minimal reduction if $J$ is minimal with respect to inclusion; under our assumptions, it is generated by a regular sequence. These concepts were first introduced and studied by Northcott and Rees [\ref{Nr}]. If $J$ is a reduction of $I$, define {\it the reduction number} of $I$ with respect to $J$, denoted by $r_J(I)$, to be $\min\{m \vert\ \ I^{m+1}=JI^m\}$. The reduction number of $I$ is defined by $r(I)=\min\{r_J(I)\vert\ \ J$ is a minimal reduction of $I\}$. The notion of minimal reduction can be given for filtrations and the extension is clear in the case of the Ratliff-Rush filtration. Since $\widetilde{I^m}=I^m$ for large $m$, a minimal reduction $J$ of $I$ is a minimal reduction with respect to the Ratliff-Rush filtration. Rossi and Swanson [\ref{Rs}] set $$\widetilde{r_J(I)}=\min\{m\vert\ \ \widetilde{I^{n+1}}=J\widetilde{I^n}\ \ {\rm for}\ n\geq m\},$$ and called it the {\it Ratliff-Rush reduction number} of $I$ with respect to $J$. It is not clear whether $\widetilde{I^{m+1}}=J\widetilde{I^m}$ for some integer $m$ implies that $\widetilde{I^{n+1}}=J\widetilde{I^n}$ for all $n\geq m$. We remark that, in fact, $\widetilde{I}\widetilde{I^m}$ is not necessarily equal to $\widetilde{I^{m+1}}$. Recall that an element $x$ of the ideal $I$ is said to be a {\it superficial element} for $I$ if there exists a non-negative integer $k$ such that $(I^{m+1}:x)\cap I^k=I^m$ for all $m\geq k$; under our assumptions, there then exists a non-negative integer $k_0$ such that $(I^{m+1}:x)=I^m$ for all $m\geq k_0$. A set of elements $x_1,...,x_s\in I$ is a {\it superficial sequence} of $I$ if $x_i$ is a superficial element of $I/{(x_1,...,x_{i-1})}$ for $i=1,...,s$. Swanson [\ref{S}] proved that if $x_1,...,x_d$ is a superficial sequence of $I$, then $J=(x_1,...,x_d)$ is a minimal reduction of $I$.
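Before continuing, we illustrate the Ratliff-Rush closure with a simple example. Let $R=k[[x,y]]$ with $k$ an infinite field, and let $I=(x^4,x^3y,xy^3,y^4)$, an $\frak{m}$-primary ideal. Then $x^2y^2\notin I$, while $$x^2y^2\cdot x^4=(x^3y)^2,\quad x^2y^2\cdot x^3y=x^4\cdot xy^3,\quad x^2y^2\cdot xy^3=(x^3y)(y^4),\quad x^2y^2\cdot y^4=(xy^3)^2,$$ so that $x^2y^2I\subseteq I^2$ and hence $x^2y^2\in I^2:I\subseteq\widetilde{I}$. Thus $I$ is not Ratliff-Rush closed. We now return to superficial sequences.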
Elias [\ref{E}] defined a superficial sequence $x_1,...,x_s$ of $I$ to be {\it tame} if $x_i$ is a superficial element of $I$ for all $i=1,...,s$. He also proved that a tame superficial sequence always exists. The main aim of this paper is to settle the question raised by Rossi and Swanson in [\ref{Rs}, Question 4.6]. For any unexplained notation or terminology, we refer the reader to [\ref{Bh}] and [\ref{Hs}]. \section{The results} We shall need some auxiliary results. The following result was essentially proven in [\ref{K}, Lemma 3] and [\ref{C}, Theorem 2]. \begin{lemma} Let $x,x_1,...,x_s\in\frak{m}$ be an $R$-regular sequence and $\frak{a}=(x_1,...,x_s)$. Then the following conditions hold. \begin{itemize} \item[(i)] ${\frak{a}}^{n+1}:x_i={\frak{a}}^n$ for all $n\in\mathbb{N}$ and all $i$ $(1\leq i\leq s)$. \item[(ii)] ${\frak{a}}^{n}:x={\frak{a}}^n$ for all $n\in\mathbb{N}$. \end{itemize} \end{lemma} The next result is known. For the proof see [\ref{Mc}, Ch.VIII] and [\ref{P}, Theorem 2.2]. \begin{lemma} Let $m$ be a non-negative integer. Then the following conditions hold. \begin{itemize} \item[(i)] $\widetilde{I^{m+1}}:x=\widetilde{I^m}$ for every superficial element $x\in I$. \item[(ii)] $\widetilde{I^{m+1}}:J=\widetilde{I^m}$ for every minimal reduction $J$ of $I$. \item[(iii)] $\widetilde{I^{m+1}}:I=\widetilde{I^m}$. \item[(iv)] $J\widetilde{I^{m+1}}:x=\widetilde{I^{m+1}}$ for every minimal reduction $J$ of $I$ and every superficial element $x\in J$. \end{itemize} \end{lemma} \begin{lemma} Let $(R,\frak{m})$ be a CM local ring of dimension two and let $x_1, x_2$ be a superficial sequence on $I$ with $J=(x_1,x_2)$. Then $J^{n+1}\widetilde{I^{m}}:x_1=J^n\widetilde{I^m}$ for all $m,n\in\mathbb{N}$. \end{lemma} \begin{proof} We will prove the claim by induction on $n$. The case $n=0$ follows from Lemma 2.2. Assume that $n\geq 1$; clearly $J^n\widetilde{I^m}\subseteq J^{n+1}\widetilde{I^{m}}:x_1$. Suppose $yx_1\in J^{n+1}\widetilde{I^{m}}$; then we have $yx_1=\alpha_1x_1+\alpha_2x_2$ for some $\alpha_1,\alpha_2\in J^{n}\widetilde{I^{m}}$. Thus $x_1(y-\alpha_1)=x_2\alpha_2\in x_2J^{n}\widetilde{I^{m}}$ and since $x_1, x_2$ is a regular sequence, we obtain $y-\alpha_1=tx_2$ for some $t\in R$. Since $x_1(y-\alpha_1)=tx_1x_2\in x_2J^{n}\widetilde{I^{m}}$ and $x_2$ is a non-zerodivisor, it follows that $tx_1\in J^{n}\widetilde{I^{m}}$ and so $t\in J^{n}\widetilde{I^{m}}:x_1$. Therefore, by the induction hypothesis, $t\in J^{n-1}\widetilde{I^{m}}$ and hence $y\in J^n\widetilde{I^{m}}$, as required. \end{proof} The following result was proved by Rossi and Swanson [\ref{Rs}]. We reprove it here with a simplified proof. \begin{proposition} Let $(R,\frak{m})$ be a CM local ring of dimension two and let $x_1, x_2$ be a superficial sequence on $I$ with $J=(x_1,x_2)$. If $r_J(I)=m$, then $\widetilde{r_J(I)}\leq m$. \end{proposition} \begin{proof} Let $n$ be an integer such that $n\geq m$. Since $\widetilde{I^{n+1}}=I^{n+k+1}:(x_1^k,x_2^k)$ for all large $k$, we have $\widetilde{I^{n+1}}=J^{k+1}I^n:(x_1^k,x_2^k)\subseteq J^{k+1}\widetilde{I^{n}}:x_1^k$. By using Lemma 2.3, we have $\widetilde{I^{n+1}}\subseteq J\widetilde{I^{n}}$ and clearly $J\widetilde{I^{n}}\subseteq\widetilde{I^{n+1}}$. It therefore follows that $\widetilde{I^{n+1}}=J\widetilde{I^{n}}$ for all $n\geq m$ and so $\widetilde{r_J(I)}\leq m$. \end{proof} \begin{remark} Let $(R,\frak{m})$ be a CM local ring of dimension two and $J$ a minimal reduction of $I$.
If $r_J(I)=m$ and $\widetilde{I^m}=I^m$, then by Proposition 2.4, we have $\widetilde{I^n}=I^n$ for all $n\geq m$. \end{remark} Let $G=\bigoplus_{m\geq 0}G_m$ be a Noetherian graded ring where $G_0$ is an Artinian local ring, $G$ is generated by $G_1$ over $G_0$ and $G_{+}=\bigoplus_{m>0}G_m$. Let $H_{G_{+}}^i(G)$ denote the $i$-th local cohomology module of $G$ with respect to the graded ideal $G_+$ and set $a_i(G)=\max\{m\vert\ \ H_{G_{+}}^i(G)_m\neq 0\}$, with the convention $a_i(G)=-\infty$ if $H_{G_{+}}^i(G)=0$. The Castelnuovo-Mumford regularity is defined by $reg(G):=\max\{a_i(G)+i\vert\ \ i\geq 0\}$. The following result can also be deduced from [\ref{Drt}, Theorem 2.4]. \begin{proposition} Let $(R,\frak{m})$ be a CM local ring of dimension two and $J$ a minimal reduction of $I$ such that $r_J(I)=m$. If $\widetilde{I^m}=I^m$, then $r_J(I)=reg(gr_I(R))$. \end{proposition} \begin{proof} By using Remark 2.5, we have $\widetilde{I^n}=I^n$ for all $n\geq m$ and so, for every superficial element $x\in I$, we have $(x)\cap I^{n+1}=xI^n$ for all $n\geq m$. Therefore, by [\ref{T1}, Proposition 4.7], $reg(gr_I(R))\leq m$, and by [\ref{T}, Proposition 3.2] we have $r_J(I)\leq reg(gr_I(R))$ (see also [\ref{M}, Lemma 1.2]). Hence $r_J(I)=reg(gr_I(R))$. \end{proof} Let $(R,\frak{m})$ be a Noetherian local ring with $\operatorname{dim} R>0$ and $I$ an ideal of $R$. A system of homogeneous elements $y^{*}_1,...,y^{*}_t$ in $gr_{I}(R)$ is called a filter-regular sequence if $$y^{*}_i\notin\bigcup_{\frak{p}\in\operatorname{Ass}(gr_{I}(R)/{(y^{*}_1,...,y^{*}_{i-1})gr_{I}(R)})\setminus V(gr_{I}(R)_{+})}\frak{p}$$ for $i=1,...,t$ (see [\ref{T}]). Trung in [\ref{T1}, Lemma 6.2] proved that $y_1,...,y_t$ is a superficial sequence of $I$ if and only if $y_1^{*},...,y_t^{*}$ forms a filter-regular sequence of $gr_{I}(R)$, where $y_i^{*}=y_i+{I}^2$. A sequence $y^{*}_1,...,y^{*}_t$ which is filter-regular in any order is called an unconditioned filter-regular sequence. \begin{lemma} Let $(R,\frak{m})$ be a CM local ring and $I$ an $\frak{m}$-primary ideal of $R$. Then every minimal reduction $J$ of $I$ can be generated by a tame superficial sequence of $I$. \end{lemma} \begin{proof} Let $J$ be a minimal reduction of $I$. Since the residue field is infinite, it is well known that there exists a superficial sequence $y_1,...,y_d$ in $I$ such that $J=(y_1,...,y_d)$; see, for instance, Section 8.6 in [\ref{Hs}]. Since $y^{*}_1,...,y^{*}_d$ forms a filter-regular sequence in $gr_I(R)$, by [\ref{Tz}, Proposition 1.2] $Q=(y^{*}_1,...,y^{*}_d)$ admits a system of generators of length $d$, say $z^{*}_1,...,z^{*}_d$, which forms an unconditioned $Q$-filter-regular sequence. Since $z_1,...,z_d$ is in particular a maximal superficial sequence contained in the minimal reduction $J$, it follows that $J=(z_1,...,z_d)$ is generated by a tame superficial sequence. \end{proof} \begin{lemma} Let $(R,\frak{m})$ be a CM local ring of dimension $d\geq 3$, $I$ an $\frak{m}$-primary ideal and $x,x_1,...,x_s$ a tame superficial sequence of $I$. If $\frak{a}=(x_1,...,x_s)$, then ${\frak{a}}^n\widetilde{I^{m+1}}:x={\frak{a}}^n\widetilde{I^m}$ for all non-negative integers $m,n$. \end{lemma} \begin{proof} We proceed by induction on $n$. The case $n=0$ is trivial. Now we assume that $n\geq 1$; by the induction hypothesis, ${\frak{a}}^n\widetilde{I^{m+1}}:x\subseteq{\frak{a}}^{n-1}\widetilde{I^{m+2}}:x={\frak{a}}^{n-1}\widetilde{I^{m+1}}$. Thus the argument is finished once we prove the following:\\ $(x_1,...,x_r)^{n-1}\widetilde{I^{m+1}}\cap({\frak{a}}^n\widetilde{I^{m+1}}:x)\subseteq {\frak{a}}^n\widetilde{I^m}$ for all $r$ $(0\leq r\leq s)$. Again, we proceed by induction on $r$. For $r=0$, we take $(x_1,...,x_r)$ to be the zero ideal. Assume $r\geq 1$ and let $y$ be an element of $(x_1,...,x_r)^{n-1}\widetilde{I^{m+1}}\cap({\frak{a}}^n\widetilde{I^{m+1}}:x)$. We can write $y=\alpha+x_r\beta$, where $\alpha\in(x_1,...,x_{r-1})^{n-1}\widetilde{I^{m+1}}$ and $\beta\in (x_1,...,x_r)^{n-2}\widetilde{I^{m+1}}$. Now let $\frak{b}=(x_1,...,\widehat{x_r},...,x_s)$; then $xy=x\alpha+xx_r\beta\in{\frak{a}}^n\widetilde{I^{m+1}}={\frak{b}}^n\widetilde{I^{m+1}}+x_r{\frak{a}}^{n-1}\widetilde{I^{m+1}}$. Thus we can find an element $z$ of ${\frak{a}}^{n-1}\widetilde{I^{m+1}}$ such that $xy-x_rz=x\alpha+x_r(x\beta-z)\in{\frak{b}}^n\widetilde{I^{m+1}}$. Since ${\frak{b}}^n\widetilde{I^{m+1}}\subseteq{\frak{b}}^{n-1}\widetilde{I^{m+2}}$ and $\alpha\in{\frak{b}}^{n-1}\widetilde{I^{m+1}}$, we get $x\alpha\in{\frak{b}}^{n-1}\widetilde{I^{m+2}}$. 
Hence $x_r(x\beta-z)\in{\frak{b}}^{n-1}\widetilde{I^{m+2}}$ and, by the induction hypothesis on $n$, we have $x\beta-z\in{\frak{b}}^{n-1}\widetilde{I^{m+1}}$ and so $x\beta\in{\frak{a}}^{n-1}\widetilde{I^{m+1}}$ (as ${\frak{b}}^{n-1}\widetilde{I^{m+1}}\subseteq{\frak{a}}^{n-1}\widetilde{I^{m+1}}$). Again, by the induction hypothesis on $n$, we have $\beta\in{\frak{a}}^{n-1}\widetilde{I^{m}}$. Therefore $x\alpha=xy-xx_r\beta\in{\frak{a}}^n\widetilde{I^{m+1}}$ and so $\alpha\in({\frak{a}}^n\widetilde{I^{m+1}}:x)\cap(x_1,...,x_{r-1})^{n-1}\widetilde{I^{m+1}}$. Thus, by the induction hypothesis on $r$, we have $\alpha\in{\frak{a}}^n\widetilde{I^{m}}$, so that $y=\alpha+x_r\beta$ is contained in ${\frak{a}}^n\widetilde{I^{m}}$, as desired. \end{proof} \begin{lemma} Let $(R,\frak{m})$ be a CM local ring of dimension $d\geq 3$, $I$ an $\frak{m}$-primary ideal, and $x_1,x_2,...,x_d$ a tame superficial sequence of $I$. If $J=(x_1,x_2,...,x_d)$, then $J^{n+1}\widetilde{I^{m}}:x_1=J^n\widetilde{I^{m}}$ for all non-negative integers $m,n$. \end{lemma} \begin{proof} Let us proceed by induction on $n$. The case $n=0$ follows by Lemma 2.2. Let $n\geq 1$. We have $x_1(J^{n+1}\widetilde{I^{m}}:x_1)=J^{n+1}\widetilde{I^{m}}\cap(x_1)=(J_1^{n+1}\widetilde{I^{m}}+x_1J^n\widetilde{I^{m}})\cap(x_1)= J_1^{n+1}\widetilde{I^{m}}\cap(x_1)+x_1J^n\widetilde{I^{m}}=x_1(J_1^{n+1}\widetilde{I^{m}}:x_1)+x_1J^n\widetilde{I^{m}}$, where $J_1=(x_2,...,x_d)$. Therefore, by using Lemma 2.8, we have $x_1(J^{n+1}\widetilde{I^{m}}:x_1)=x_1J_1^{n+1}\widetilde{I^{m-1}}+x_1J^n\widetilde{I^{m}}=x_1J^n\widetilde{I^{m}}$ (as $J_1^{n+1}\widetilde{I^{m-1}}\subseteq J^n\widetilde{I^{m}}$). Hence $J^{n+1}\widetilde{I^{m}}:x_1=J^n\widetilde{I^{m}}$, as desired. \end{proof} \begin{theorem} Let $(R,\frak{m})$ be a CM local ring of dimension $d\geq 3$, $I$ an $\frak{m}$-primary ideal, and $x_1,x_2,...,x_d$ a tame superficial sequence of $I$. If $J=(x_1,x_2,...,x_d)$, then $\widetilde{r_J(I)}\leq r_J(I)$. \end{theorem} \begin{proof} Let us write $r_J(I)=m$; we prove that $\widetilde{I^{m+1}}=J\widetilde{I^m}$. For large $k$, we have $\widetilde{I^{m+1}}=I^{m+k+1}:(x_1^k,x_2^k,...,x_d^k)$, in particular $\widetilde{I^{m+1}}=J^{k+1}I^m:(x_1^k,x_2^k,...,x_d^k)$, as $I^{m+n}=J^nI^m$ for all non-negative integers $n$. Therefore, by using Lemma 2.9, we have $\widetilde{I^{m+1}}=J^{k+1}I^m:(x_1^k,...,x_d^k)\subseteq J^{k+1}\widetilde{I^{m}}:x_1^k=J\widetilde{I^{m}}$. Hence $\widetilde{I^{m+1}}\subseteq J\widetilde{I^{m}}$ and so $\widetilde{I^{m+1}}=J\widetilde{I^{m}}$. The same argument, applied with any $n\geq m$ in place of $m$, gives $\widetilde{I^{n+1}}=J\widetilde{I^{n}}$ for all $n\geq m$, so that $\widetilde{r_J(I)}\leq m$. This completes the proof. \end{proof} \begin{remark} Let $(R,\frak{m})$ be a CM local ring of dimension $d\geq 3$, $x_1,x_2,...,x_d$ a tame superficial sequence of $I$ and $J=(x_1,x_2,...,x_d)$. If $r_J(I)=m$ and $\widetilde{I^m}=I^m$, then by Theorem 2.10, we have $\widetilde{I^n}=I^n$ for all $n\geq m$. \end{remark} The following example shows that $I^m$ is not necessarily Ratliff-Rush closed even if $m\geq r_J(I)$. The computations are performed by using Macaulay2 [\ref{Gs}]. \begin{example} Let $R=k[\![ x,y ]\!]$, where $k$ is a field, and $I=(x^7,x^6y,x^2y^5,y^7)$. Then $r(I)=3$ and $x^{17}y^4\in I^4:I\setminus I^3$. Hence $\widetilde{I^3}\neq I^3$. \end{example} The following example shows that the inequality in Theorem 2.10 can be strict. \begin{example} Let $R=k[\![ x,y ]\!]$, where $k$ is a field, and $I=(x^4,x^3y,xy^3,y^4)$. 
Then $r(I)=2$ and $e_2(I)=0$, where $e_2(I)$ is the second Hilbert coefficient, and so by Huckaba and Marley [\ref{Hm}, Corollary 4.13] we have $\widetilde{r_J(I)}\leq 1$ for every minimal reduction $J$ of $I$. \end{example} \begin{acknowledgement} This paper was written while I was visiting the University of Osnabr\"uck. I would like to thank the Institute of Mathematics of the University of Osnabr\"uck for its hospitality. I would also like to express my deep gratitude to Professor Louis Ratliff and Professor Tony Puthenpurakal for valuable suggestions. Finally, I am grateful to the referee for the careful reading of the manuscript and for helpful suggestions. \end{acknowledgement}
\section*{Results} Fig.~\ref{Fig1} shows a TF-$\mu$SR time spectrum collected at 150~mK in a field of 2~mT for a pure $\rm Ho_2Ti_2O_7$\ sample. This curve is representative of the data collected during this study. A rapid loss in asymmetry from an initial value of $\sim0.22$ occurs outside the time window of the MuSR spectrometer~\cite{Bramwell3,Lago,Blundell}. The slowly relaxing component of the data was fitted using Eq.~\ref{Exponential decay}. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\columnwidth]{Chang_Figure1.eps} \caption{\label{Fig1}\textbf{TF-$\boldsymbol{\mu}$SR time spectrum collected at 150~mK in a field of 2~mT for a pure Ho$\boldsymbol{_{2}}$Ti$\boldsymbol{_2}$O$\boldsymbol{_7}$ sample.} These results are representative of the data collected during this study.} \end{center} \end{figure} Fig.~\ref{Fig2} shows the temperature dependence of the muon relaxation rate $\lambda(T)$ for Ho$_{2-x}$Y$_{x}$Ti$_2$O$_7$\ extracted from fits to the $\mu$SR time data collected in 2~mT (see Methods and \cite{SuppNote}). For all the samples containing Ho, a nearly $T$-independent $\lambda(T)$ is observed at low temperature. As the temperature is raised there is a rapid increase in $\lambda(T)$ at some crossover temperature $T_{CR}$. This $T_{CR}$ increases from $\sim0.4$~K for the crystals with $x=1.6$ and 1.0 (data not shown) to 0.5~K for the samples with $x=0.1$ and 0.0. Above $T_{CR}$ the relaxation rate decreases with increasing temperature and has a similar $T$ dependence for all four samples containing Ho that were studied. For two samples ($x=0.1$ and 1.6) we also collected field-cooled-cooling data. In both cases a divergence between the zero-field-cooled warming (ZFCW) and the field-cooled cooling (FCC) curves appears at $T_{CR}$. For pure $\rm Y_2Ti_2O_7$\ a temperature-independent relaxation rate is measured over the whole temperature range (0.05 to 5~K) studied. \begin{figure}[tb] \begin{center} \includegraphics[width=0.7\columnwidth]{Chang_Figure2.eps} \caption{\label{Fig2} \textbf{Temperature dependence of the muon relaxation rate $\boldsymbol{\lambda(T)}$ extracted from the fits to the TF-$\boldsymbol{\mu}$SR time spectra collected in 2~mT for samples of Ho$\boldsymbol{_{2-x}}$Y$\boldsymbol{_{x}}$Ti$\boldsymbol{_2}$O$\boldsymbol{_7}$ with $\boldsymbol{x=0}$, 0.1, 1.6 and 2.0.} The closed symbols show the zero-field-cooled warming data and the open symbols show the field-cooled cooling data.} \end{center} \end{figure} In order to better understand the origins of these signals we have also collected relaxation data as a function of temperature in 2~mT for the pure~$\rm Ho_2Ti_2O_7$\ sample discussed above, covered with a silver foil 0.25~mm thick. This thickness of foil is expected to stop all the muons before they reach the sample. Muons implanted in silver have a negligible relaxation and so any relaxation must result from a combination of the externally applied field and/or field lines originating from the sample penetrating into the silver. The $\lambda(T)$ curve obtained in this way (see~\cite{SuppNote}) is very similar to the signal from the pure $\rm Ho_2Ti_2O_7$\ shown in Fig.~\ref{Fig2}a and demonstrates that at least some of the signal comes from fields within the silver, but that these fields are the result of the magnetic properties of the sample~\cite{SuppNote}. As a next step we then investigated the magnetic field dependence of the muon relaxation rate. Fig.~\ref{Fig3} shows $\lambda(B)$ for a sample with $x=0$ at selected temperatures. 
Studies were also made for samples with $x=0.1$, 1, 1.6 and 2. Following Bramwell \textit{et al}., linear fits to the $\lambda(B)$ data were made at each temperature. Using the gradient and intercept extracted from each fit, the effective magnetic charge $Q_{\rm{eff}}$ was obtained from $Q_{\rm{eff}}=2.1223m^{1/3}T^{2/3}$, where $m=(d\lambda(B)/dB)/\lambda_0$~\cite{Bramwell3}. For samples with $x=0$ and 0.1 the resulting values of $Q_{\rm{eff}}$ range from 4.5 to $7.5~\mu_B$\AA$^{-1}$ in the temperature regime in which Onsager's theory is expected to be valid, but increase rapidly as the temperature moves outside this range (see Fig.~\ref{Fig4}). \begin{figure}[tb] \begin{center} \includegraphics[width=0.6\columnwidth]{Chang_Figure3.eps} \caption{\label{Fig3}\textbf{Magnetic field dependence of the muon relaxation rate $\boldsymbol{\lambda(B)}$ for pure Ho$\boldsymbol{_{2}}$Ti$\boldsymbol{_2}$O$\boldsymbol{_7}$ at three different temperatures.} The values for $m=(d\lambda(B)/dB)/\lambda_0$ and the effective magnetic charge $Q_{\rm{eff}}$ shown in Fig.~\ref{Fig4} have been obtained from the straight-line fits to the data.} \end{center} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=0.7\columnwidth]{Chang_Figure4.eps} \caption{\label{Fig4}\textbf{$\boldsymbol{Q_{\rm{eff}}}$ versus $\boldsymbol{1/T}$ for samples of Ho$\boldsymbol{_{2-x}}$Y$\boldsymbol{_{x}}$Ti$\boldsymbol{_2}$O$\boldsymbol{_7}$ with $\boldsymbol{x=0}$ and 0.1.} The vertical dashed lines indicate the high- and low-temperature limits between which the Onsager theory is expected to be valid~\cite{Bramwell3} and the horizontal line marks the value $Q_{\rm{eff}}=4.6~\mu_B$\AA$^{-1}$~\cite{Castelnovo}. The inset shows $m(T)$ for the same data; the solid line shows $m=Q_{\rm{eff}}^3/T^{2}$ with $Q_{\rm{eff}}=5~\mu_B$\AA$^{-1}$. Also shown in both plots are the data of Bramwell \textit{et al}. from Ref.~\cite{Bramwell3}.} \end{center} \end{figure} At high temperature,
a linear field dependence for $\lambda(B)$ is also observed for the two samples with a much higher yttrium doping ($x=1$ and 1.6), but the calculated $Q_{\rm{eff}}$ is always greater than $\sim10~\mu_B$\AA$^{-1}$. For $x=1$ and 1.6, in the low-temperature regime $T < T_{CR}$, there is no systematic linear field dependence in $\lambda(B)$ and no signal that can be associated with magnetricity. We have also looked for a linear magnetic field dependence in $\lambda(B)$ for the pure $\rm Ho_2Ti_2O_7$\ sample covered in a thick (0.25~mm) silver foil. At higher temperatures $T > T_{CR}$ we observed a linear behaviour leading to a large $Q_{\rm{eff}}$ (i.e. $Q_{\rm{eff}}> 10~\mu_B$\AA$^{-1}$), but at low temperatures $T < T_{CR}$ we found no signature of magnetricity and could not obtain reliable linear fits to the $\lambda(B)$ data or physically acceptable values for $Q_{\rm{eff}}$. \section*{Discussion} We can draw a number of important conclusions from our work. Our results indicate that at higher temperatures, as suggested previously~\cite{Dunsiger,Blundell,Bramwell5}, the dominant contribution to the $\lambda(T)$ signal arises from stray fields from the magnetized spin ice that penetrate into the silver sample plate. The observation of a signal in a sample covered with thick Ag foil adds weight to this hypothesis. The sample coverage of the Ag backing plates used in our experiments was always approximately 50\%. It will be interesting to explore how this signal changes as this coverage is varied. It may also be important to consider the ratio between the surface area and the volume of the spin ice in these and other experiments. Differences between the bulk and surface conductivity of water ice are well documented~\cite{PetrenkoWhitworth} and it is likely that analogous processes operate in spin ice. In reply to the comments on their work, however, Bramwell \textit{et al}.~\cite{Bramwell5} make the point that a signal from muons implanted in the sample plate may not negate the important findings of their study. Our data are consistent with the suggestion made in Ref.~\cite{Bramwell5} that the Wien effect signal may arise from inside the sample or from within the Ag sample plate but at distances very close to the spin ice sample surface. We will return to this point later. First we note that the $\lambda(T)$ curve for pure $\rm Ho_2Ti_2O_7$\ follows closely the form expected for the magnetization of pure spin ice~\cite{Snyder2}, supporting the view that $\lambda(T)$ reflects the magnetization in all the samples studied. This then raises an interesting question concerning the low-temperature magnetic dynamics of spin ice. Recently there have been a number of experimental reports on the magnetic dynamics of spin ice (see for example~\cite{Giblin,Slobinsky,Yaraskavitch, Erfanifam, Petrenko}). In addition to the discussion of magnetic monopoles and the Wien effect~\cite{Castelnovo, Ryzhkin, Jaubert, Bramwell3}, authors have also considered the effects of thermal quenching~\cite{Castelnovo2}. A key component of the current theories of spin ice is that the magnetic response at low temperatures and small applied fields is limited to monopole motion. So, as the monopole density decreases, the characteristic time scales become longer. 
This view has recently been called into question following new low-temperature AC susceptibility measurements that exhibit an activated behaviour with energy barriers that are inconsistent with the present understanding of monopoles in spin ice~\cite{Yaraskavitch, Matsuhira, Quilliam}. Our results for the $x=1.6$ material, showing the survival of the ZFCW-FCC splitting in a sample with only 15\% Ho, add a further twist to this puzzle. Given the large number of non-magnetic ``defects'' on the corners of many of the tetrahedra in this diluted material, it is not easy to attribute the slow relaxation to a low monopole density. At such low concentrations of magnetic ions even the concepts of a spin ice and monopoles are questionable. It is conceivable that single-ion physics plays a more important role in the behaviour of the diluted materials. Our diffuse neutron-scattering studies of single-crystal Ho$_{2-x}$Y$_{x}$Ti$_2$O$_7$\ showed that at low temperature the scattering patterns are characteristic of a dipolar spin ice and appear to be unaffected by Y doping up to at least $x=1.0$~\cite{Chang}. One possible scenario is that effects, such as distortions in the local environment due to the variation in the size of the Ho$^{3+}$/Y$^{3+}$ ions~\cite{Snyder1}, produce energy barriers at low $T$ that exceed the cost of an isolated monopole. The slow dynamics and the ZFCW-FCC hysteresis at low temperatures would thus cross over from a regime where this behaviour is attributed to low monopole density to a regime where it is due to exceedingly slow single-ion physics. Alternatively, the long-range nature of the dipolar interactions may give rise to collective effects beyond the monopole description which introduce new energy barriers to spin flipping at very low temperatures that occur in both undiluted and diluted systems. The same qualitative form of the $\lambda(T)$ data for the samples with $x=0.1$ and 1.6 indicates that additional ingredients may be required to explain the low-$T$ behaviour in spin ice and that further studies on diluted samples are needed to fully understand the role played by factors such as impurities, dislocations, and surface effects on the low-temperature dynamics of spin ice. Returning to the question of magnetricity in spin ice, we note that in our $\mu$SR data the low-temperature signal that has previously been interpreted as a signature of magnetricity is seen in the $x=0$ and 0.1 samples and is not observed in the more dilute Ho$_{2-x}$Y$_{x}$Ti$_2$O$_7$\ materials. Within the $T$ range indicated by the dashed lines in Fig.~\ref{Fig4}, where the theory presented by Bramwell \textit{et al}. is expected to be valid, the value of $Q_{\rm{eff}}$ agrees with expectations. Following Blundell~\cite{Blundell} we also plot $m$ versus $T$. We see that the expected $m\propto T^{-2}$ dependence only holds over the same narrow $T$ range. Our experiments, including two separate runs on pure $\rm Ho_2Ti_2O_7$\ carried out three months apart, demonstrate the reproducibility of the data (see Fig.~\ref{Fig2}a). A realignment of the $\rm Ho_2Ti_2O_7$\ disks between runs also shows that the results are not particularly sensitive to the exact details of the sample geometry. Our results for the samples with a higher Y content and with the thick Ag foil demonstrate that the behaviour cannot be attributed to instrumental effects. The samples were made at Warwick~\cite{Balakrishnan} and are Ho- rather than Dy-based pyrochlores, eliminating the possibility of material-specific results. 
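For reference, the conversion from the fitted field dependence of $\lambda(B)$ to the effective magnetic charge used above can be sketched in a few lines. The snippet below is only a minimal illustration of the relation $Q_{\rm{eff}}=2.1223m^{1/3}T^{2/3}$ with $m=(d\lambda(B)/dB)/\lambda_0$; the numerical values are placeholders, the unit conventions are those assumed in Ref.~\cite{Bramwell3}, and this is not a reproduction of the analysis code used for the figures.
\begin{verbatim}
import numpy as np

# Placeholder example: lambda(B) at a single temperature (illustrative values)
T = 0.2                                         # temperature (K)
B = np.array([0.5, 1.0, 1.5, 2.0, 2.5])         # applied field (mT)
lam = np.array([0.21, 0.24, 0.27, 0.30, 0.33])  # relaxation rate (1/us)

# Straight-line fit lambda(B) = lambda_0 + slope * B, as in the text
slope, lam0 = np.polyfit(B, lam, 1)

# m = (d lambda / dB) / lambda_0, then Q_eff = 2.1223 * m^(1/3) * T^(2/3);
# the numerical prefactor carries the unit conventions of the original analysis
m = slope / lam0
Q_eff = 2.1223 * m**(1.0 / 3.0) * T**(2.0 / 3.0)
print(m, Q_eff)
\end{verbatim}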
In summary, transverse-field $\mu$SR experiments on Ho$_{2-x}$Y$_{x}$Ti$_2$O$_7$, including measurements on non
$, and $\eta^\text{r} = 1.0\times 10^{-5}$ as in Section~\ref{sec:mainSimulations}. Figure~\ref{fig:lambdaSweep} gives the average RMSE of the estimated AC demand across all days as a function of $\lambda$. The average RMSE of the AC demand decreases as $\lambda$ is increased from $1.0\times 10^{-7}$ and reaches a minimum of 211.3~kW at $\lambda = 0.005$. As $\lambda$ increases beyond $0.005$, the RMSE increases until it becomes relatively constant between $0.1$ and $1.0$. While we set $\lambda$ in Section~\ref{sec:mainSimulations} to allow a single model to dominate the DFS estimate if one model proved to be more accurate than the rest, Fig.~\ref{fig:lambdaSweep} indicates that tuning $\lambda$ on a set of days that are similar to the testing days may allow a reduction in the RMSE. \begin{figure} \centering \includegraphics[scale=1]{./Figures/lambdaSweep} \vspace{-5pt} \caption{Average RMSE of the estimated AC demand across all days as a function of $\lambda$, using Update Method 1, $\mathcal{M}^\text{Red}$, $\eta^\text{s} = 0.4$, and $\eta^\text{r} = 1.0\times 10^{-5}$.} \label{fig:lambdaSweep} \end{figure} \section{
Conclusions} \label{sec:conclusions} In this paper, we applied an online learning algorithm, DFS, which uses DMD together with the Fixed Share algorithm, to estimate the real-time AC demand on a distribution feeder using feeder demand measurements, weather data, and system models. Two implementations of algorithms based on DMD were developed and compared via case studies. Our results showed that DFS can effectively estimate the real-time AC demand on a feeder. DFS achieved a lower AC demand RMSE than the average across a set of Kalman filters. Compared with the most accurate Kalman filter selected \emph{ex post}, DFS generally resulted in a larger RMSE. However, DFS learns the most accurate model, or combination of models, in real time, whereas the best Kalman filter can only be chosen after the simulation. The performance of DFS depends heavily on the inclusion of models within its set. Including models that are inaccurate for the majority of the day degraded the algorithm's performance, as did removing models that were frequently weighted heavily. In this work, we separated the demand into only two components. However, the algorithm is applicable to scenarios with more than two components, assuming that we have at least one model of each demand component. As the number of components increases, it may become more difficult to disaggregate them, but these difficulties could be counteracted by incorporating more real-time measurements, e.g., the reactive power demand. Future work will develop improved AC demand models, investigate the relationship between the DMD and Kalman filter algorithms, and incorporate active control into the problem framework. \section*{Acknowledgments} We thank the Pacific Gas \& Electric Company for the commercial building electric load data. \bibliographystyle{./IEEEtran}
\section{Introduction} Radio pulsars tend to suddenly change their pulsation modes, which involves a change in sub-pulse drift and a change in average profile on a timescale of tens or hundreds of star spin periods, $P$ (Weltevrede 2016; Srostlik \& Rankin 2005). \nct{sr2005, w2016} In some modes, the pulse shapes are completely changed with every period (Hankins \& Wright 1980), and\nct{hw80} in other cases they maintain similarity throughout the mode. In PSR B1237$+$25, with a modulation period, $P_d$, of $\sim2.8P$, the pulses appear in a repeated sequence with a pulse component on the left side, then on the right side, and finally in the middle of the profile. In this same pulsar (Srostlik \& Rankin 2005),\nct{sr2005} the cessation of emission (nulling) has been observed to occur preferentially between the distinct pulsation modes. In several objects (e.g.~B1919$+$21; Pr\'oszy\'nski \& Wolszczan 1986)\nct{pw86} the flux modulations occur at fixed pulse longitude $\Phi$ with no drift; however, peculiar $180^\circ$ jumps in modulation phase are observed (also in B0320$+$39; Edwards et al.~2003).\nct{esv03} In several other objects (e.g.~PSR B1918$+$19; Rankin et al.~2013) \nct{rwb13} the drifting sub-pulses are observed only in the profile interior, whereas the fixed-$\Phi$ modulations are limited to the peripheral components. The trend for single pulse emission to move from one side of the profile to the other is evident. One side is then dominating, which creates pseudo-symmetric (antisymmetric) profiles with core and conal components, albeit with only the left or right part of the profile filled in (J2145$-$0750; Stairs et al.~1999; Dai et al.~2015). \nct{stc99, dhm15} In this paper a radio beam is presented that explains these phenomena. The radio beam has long been suspected to consist of two nested hollow cones (Rankin 1983), \nct{ran83} the radii of which have been derived from profile modelling (Rankin 1990; 1993). \nct{ran90, ran93} The only magnetospheric structure suggested for the observed cone size ratio was that of critical magnetic lines, located at $\theta_{cr}=(2/3)^{3/4}\theta_{\rm pc}=0.74\theta_{\rm pc}$, where $\theta_{\rm pc}$ is the angular polar cap radius (Wright 2003).\nct{wri03} Although the $\vec E \times \vec B$ drift has long been considered as the origin of simple sub-pulse drift (Ruderman \& Sutherland 1975; van Leeuwen et al.~2003; Maan 2019; McSweeney et al.~2019),\nct{rs75, maa2019, vanl2003, swee2019} the ansatz of the axially symmetric carousel of sparks made it difficult to interpret all the other phenomena. The pulsations themselves, and thus the modulations, could be interpreted as time variability (Clemens \& Rosen 2008),\nct{cr2008} relativistically outflowing layers (Kirk et al.~2002), \nct{ksg02} or the laterally moving substructure of the emission region. Time variability in the spectral space could also produce the pulsations with the emitted radio spectrum moving across the telescope band. In a parallel paper on B1919$+$21 (Dyks, van Straten, Primak et al., in preparation), a successful model of modulated pulsar polarization is set forth. The model supports the drift interpretation of pulse modulations, suggesting that they must correspond to the mapping of the observed flux straight on the lateral structure of the drifting beam. The basic polarization model involves a single radio beam of radius $\rho$ (grey circle in Fig.~\ref{visi}) that is drifting around the dipole axis and is probed by the line of sight once per spin period, $P$. 
This is the starting point from which a more realistic radio beam is invoked below. \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{visi2.ps} \end{center} \caption{Drift of a wide emission region (grey) around the dipole axis. Far from the profile centre (point A), the region is visible within a limited interval of drift cycle, and strong modulations appear. The extra beam (dashed) on the opposite side of the dipole axis is required to explain the jump by half of the modulation cycle. } \label{visi} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{beam9.ps} \end{center} \caption{Basic beam pattern ({\bf a}) with possible modifications. {\bf (a):} Fixed-altitude emission at all magnetic co-latitudes. The outer and inner circles correspond to the last open and critical B-field lines, respectively. {\bf (b):} Emission from the last open and critical lines within a finite altitude range. {\bf (c):} Arbitrary geometry of sub-beams. {\bf (d):} Radially extended azimuthal structure as invoked from PSR B0826$-$34.} \label{possi} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{beam7b.ps} \end{center} \caption{Head-on view of a generic radio pulsar beam (grey) invoked from pulse modulation properties. Thin solid sections present the paths of sightline for low-order resonant drift ($nP=mP_d$) and different impact angles, $\beta^\prime$. Numbers are consecutive azimuths in beam frame. The left-side cases show the asymmetry invoked for several pulsars. } \label{beam} \end{figure} \nct{esa2005} \section{Radio pulsar beam geometry} The modulation properties of PSR B1919$+$21 and B1237$+$25 (see Figs.~3 and 2 in Pr\'oszy\'nski \& Wolszczan 1986) \nct{pw86} provide fair guidance for the beam structure. In the profile periphery, the drifting beam of Fig.~\ref{visi} is visible only through a limited part of drift period, $P_d$, which corresponds to the grey circle passing through point A in Fig.~\ref{visi}. However, closer to the profile centre (thus closer to the dipole axis), this grey beam must be missing (since a peak flux is replaced with a minimum in the modulation pattern), and another beam must be located at a half modulation cycle later. Half of a modulation phase corresponds to $P_d/2$; therefore,~the near-axis beam must be located on the other side of the dipole axis (the dashed circle in Fig.~\ref{visi}). We must have an outer sub-beam and an inner sub-beam, each on the opposite side of the polar tube. In fact, the polar tube is predicted to be divided into the outer and inner part by the critical magnetic field lines defined by the Goldreich-Julian charge density. This leads to the geometry shown in Fig.~\ref{possi}a. If the altitude scaling is ignored ($\propto r^{1/2}$), the grey outer arc has a radial width of $1.5\theta_{\rm pc}-1.5\theta_{\rm cr}$. The illustrated structure presents an average beam and reflects only crucial features.\ For example, the partial overlap of the sub-beams in co-latitude $\theta$, likely resulting from magnetic line curvature, is ignored; however, this aspect is not important for understanding the key modulation properties. Another beam of Fig.~\ref{possi}b shows the radially extended case, in which the radio emission is limited to the critical and last open field lines. The beam consists of two half-rings that result from the flaring of B-field lines. In Fig.~\ref{beam} the azimuthal extent has been limited to allow for nulling in the case of near-central viewing. 
The beam is shown in the co-drifting frame, that is, the beam rotates around its centre (and dipole axis) at period $P_d$. In the case of resonant drift ($nP=mP_d$), the beam is passed along a fixed set of viewing paths that are shown in the figure for low resonance orders. The paths keep a constant impact angle, $\beta$ (angular distance from the drift or beam centre) and are marked with numbers that give their steadily increasing angle of orientation. The passage typically occurs either only from the numbered tip to the tip without the number or vice versa. To pass in both directions requires special conditions ($\beta=0$ and, for example, $P_d=n2P$). \subsection{Origin of pulsation modes and nulling} In the case of non-resonant sampling, the viewing paths may slowly rotate with respect to the beam (while keeping the fixed $\beta\ne0$). When the viewing path rotates through the horizontal orientation orthogonal to the beam symmetry plane (which is vertical in Fig~\ref{beam}), the pulsation mode is changed. When $\beta$ is small and the sub-beams do not span $180^\circ$ in azimuth, a null will be observed between pulsation modes. This is where nulls are indeed often observed in pulsars (e.g.~Fig. 5 in Srostlik \& Rankin 2005). \nct{sr2005} For $\beta=0$ and a uniform motion of viewing paths through the beam, only the nulling and two pulsation modes would be possible: `outer sub-beam--inner sub-beam' and `inner--outer'. However, for $\beta\ne 0$ at least four pulsation modes are possible: `outer--inner', `inner--outer', `outer--outer', and `inner' or `core'. Moreover, in the case of the resonant drift, the beam is sampled at a fixed set of paths with specific orientations. Then, the resonant set of viewing paths can define a number of profile shapes that are repeatedly observed in consecutive order. For example, in the top-right case in Fig.~\ref{beam}, three different pulse shapes will be observed. In the case of disturbance of the drift motion (e.g.~the charge flow stop and revival), the relative phase of the viewed paths and the beam orientation can change, which can also change the pulsation mode. For peripheral viewing ($\beta$ larger than the inner sub-beam radius), only the outer sub-beam can be detectable, which can lead to a nulling behaviour of a different type than for the central sightline passage. \subsection{Interpretation of PSR B1237$+$25} PSR B1237$+$25 exhibits a sequence of pulsation modes that are consistent with the not-quite-resonant rotation of the viewing path with respect to the beam in Fig.~\ref{beam}. In one of the modes, the components appear first at the leading side (LS) of the profile, then at the trailing side (TS), and finally in the centre. The sequence has been interpreted as a spiral motion (the S-burst in Hankins \& Wright 1980). \nct{hw80} Since $P_d\sim 2.8P$ is close to $3P$ in B1237$+$25, the burst can be tentatively interpreted with Fig.~\ref{beam}d. Let the first passage correspond to the viewing path marked $120$ (with the sightline moving towards the number). This produces the LS outer conal component and the inner conal component on the TS (see the top pulse in Fig.~2 of Hankins \& Wright). In the next pulse, the path at $240$ is followed (again, towards the end with the number), producing the LS inner conal component and the TS outer conal component. In the third period the inner cone-core complex is observed along path 0. 
Sometimes all five components are observed together (see Fig.~2 in Hankins \& Wright), which suggests that the inner and outer sub-beams temporarily and partially overlap in azimuth (as on the left side in Fig.~\ref{possi}c). The model can explain several other effects observed in PSR B1237$+$25, for example~the tendency for the appearance of component 1 together with 4, and of component 2 together with 5. The abnormal mode from Srostlik \& Rankin appears when the viewing moments pick up the LS of the inner cone-core complex. The modes are changed when the viewing path traverses through the blank space between the inner and outer sub-beams, which can lead to the intermodal nulling when this drift phase happens to be sampled. The modes can also change with partial nulls or without nulls. The latter may happen for several reasons, the most important being the asymmetry of the inner sub-beam shown in Fig.~\ref{beam}a: While passing through the vertical orientation, the outer half-cone is always visible, but the sightline gains (or loses) the view of the inner beam without a null. The conclusion is that the overall modulation properties of PSR B1237$+$25, along with several details, can be understood as the result of the sampling of the beam shown in Fig.~\ref{beam}. The spiral is not required (though two spiraling arcs could be inscribed into the grey sub-beams, sharing the basic properties of the zonal beam: imagine the thick solid arcs as having different $\theta$ at each end). \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{sinbis.ps} \end{center} \caption{Single pulses calculated for the beam types shown in grey at the top of Fig.~\ref{beam}. The figure shows several classical single pulse effects: left-right-middle modulation with core pulses (top left), jump by half of the modulation phase, with sporadic nulls in pulses 2 and 19 (top right), pulsation mode change at intermodal nulling (bottom left), and interior drift of sub-pulses flanked by fixed-longitude flux modulations (bottom right). The corresponding partial profiles are shown on top with a line and the average profiles for 2000 pulses in grey. Parameter values are (from left to right, top to bottom) $P_d=2.9, 4.2, 2.8, 1.9$, $\beta^\prime=-0.1, 0.1, 0, 0.4$, $(\phi_{\rm out}^{\rm min}, \phi_{\rm out}^{\rm max}) =(-70^\circ,70^\circ)$ everywhere, $ \phi_{\rm in}^{\rm min}=110^\circ$ everywhere, and $\phi_{\rm in}^{\rm max}=180^\circ$, except for the top right, where $\phi_{\rm in}^{\rm max}=250^\circ$; thus, only the top-right panel is for the symmetric beam (Fig.~\ref{beam}, top right). } \label{singles} \end{figure} \subsection{Examples of modelled pulsation modes} A simple code samples the beams of Fig.~\ref{beam} at rotation angles \begin{equation} \Phi_{\rm cut}=(2\pi/P_d)(i_p + W_j/360^\circ) + \phi_{\rm ct0} \ \rm [rad] ,\end{equation} where $P_d$ is in $P$, $i_p
$ is the pulse number, $\phi_{\rm ct0}$ (in radians) is a constant reference drift phase, and $W_j$ (in degrees) is the vector of longitudes (of pulse width length). The viewing path $(W_j,\beta)$ is then rotated by $\Phi_{\rm cut}$ while maintaining the fixed impact angle $\beta=\beta^\prime1.5 \theta_{\rm pc}$. The intra-beam polar coordinates $(\phi_b,\theta_b)$ are then calculated for each $W_j$, and the conditions to fall into the grey area of Fig.~\ref{beam} are applied, which includes the azimuthal span of the sub-beams: $\Delta\phi_{\rm out}$ and $\Delta\phi_{\rm in}$ or, more generally, the beam limiting azimuths $\phi_{\rm out}^{\rm min}$, $\phi_{\rm out}^{\rm max}$, $\phi_{\rm in}^{\rm min}$, and $\phi_{\rm in}^{\rm max}$. The size ratio of the radial zones is here assumed to be fixed by $\theta_{\rm cr}/\theta_{\rm pc}$ but in general involves $\theta_{\rm out}^{\rm min}$, $\theta_{\rm out}^{\rm max}$, $\theta_{\rm in}^{\rm min}$, and $\theta_{\rm in}^{\rm max}$. Asymmetries and overlaps of the sub-beams are controlled by the last eight parameters. Example code output is shown in Fig.~\ref{singles}. The top-right panel is for the symmetric beam of Fig.~\ref{beam}b, whereas the other cases are for the asymmetric beam of Fig.~\ref{beam}a. The top-left case ($P_d=2.9P$, $\beta^\prime=-0.1$) reveals the left-right-middle sequence that so misleadingly resembles the spiral in B1237$+$25 (Hankins \& Wright 1980). We note that there is a core component despite no core being in the beam. In the top-right case ($P_d=4.2P$, $\beta^\prime=0.1$) the symmetric beam results in the half-phase modulation jumps, such as those observed in B1919$+$21. We note the sporadic single pulse nulls at pulses 2 and 19. The bottom-left case ($P_d=2.8P$, $\beta^\prime=0.0$) shows clear changes in pulse modes separated by intermodal nulls, such as those observed in B1237$+$25 (Fig.~5 in Srostlik and Rankin). The bottom-right case shows a clear interior drift flanked by the fixed-longitude modulation, which is observed in several objects (e.g.~B1918$+$19). The pulsations have an obvious tendency to take on a quasi-conal look with several asymmetries, which are mostly related to the sub-beam being located on each side of the profile. Interestingly, pulses from an apparently inner cone are sometimes visible in several patterns, despite only one half-cone and a birthday-cake-like wedge being in the beam. They arise from cutting the corners of the sub-beams. The core component appears in many simulations even without any brightening at the beam centre and results from cutting through the central tip of the inner sub-beam. To obtain the main types of behaviour shown in Fig.~\ref{singles}, in particular the nulling separating core-dominated and cone-dominated pulsation modes, the beam must be asymmetric (Fig.~\ref{beam}a). Then, over some limiting azimuths the outer sub-beam may be sampled without the inner sub-beam visible, with mode transition occurring either at nulls or without nulls (bottom-left case in Fig.~\ref{singles}). Though not shown, the model also produces the antisymmetric profiles with only one side bright. The zonal structure of the beam implies correlations between a main pulse (MP) and an inter-pulse (IP), as well as anti-correlations (Weltevrede et al.~2012). \nct{wwj12} It is found that several classical types of single pulse behaviour, reported in a multitude of observations, can be interpreted with the sector (zonal) beam of Fig.~\ref{beam}. 
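The sampling procedure described above can be made concrete with a short script. The sketch below follows Eq.~(1) and the two-zone beam of Fig.~\ref{beam} under a flat, small-angle approximation; the parameter names mirror those in the text, but the longitude-to-offset scaling and the numerical values are illustrative assumptions, and this is not the code actually used to produce Fig.~\ref{singles}.
\begin{verbatim}
import numpy as np

# Illustrative beam parameters (co-latitudes in units of theta_pc)
theta_pc, theta_cr = 1.0, 0.74            # polar-cap and critical-line radii
R_out, R_in = 1.5 * theta_pc, 1.5 * theta_cr
phi_out = (-70.0, 70.0)                   # azimuthal span of outer half-ring (deg)
phi_in = (110.0, 180.0)                   # azimuthal span of inner wedge (deg)
P_d, beta_prime, phi_ct0 = 2.9, -0.1, 0.0
beta = beta_prime * R_out                 # fixed impact parameter of the sightline
W = np.linspace(-30.0, 30.0, 181)         # pulse longitudes (deg)

def in_range(phi, lo, hi):
    """True where azimuth phi (deg, in [0, 360)) lies in [lo, hi] modulo 360."""
    return (phi - lo) % 360.0 <= (hi - lo) % 360.0

def single_pulse(i_p):
    """Flux (0/1, uniform emissivity) along the viewing path for pulse i_p."""
    Phi_cut = 2.0 * np.pi / P_d * (i_p + W / 360.0) + phi_ct0   # Eq. (1)
    # Viewing-path point in the co-drifting beam frame (flat approximation):
    s = np.deg2rad(W) * 10.0 * R_out      # assumed longitude-to-offset scaling
    x = s * np.cos(Phi_cut) - beta * np.sin(Phi_cut)
    y = s * np.sin(Phi_cut) + beta * np.cos(Phi_cut)
    theta_b = np.hypot(x, y)
    phi_b = np.degrees(np.arctan2(y, x)) % 360.0
    outer = (theta_b >= R_in) & (theta_b <= R_out) & in_range(phi_b, *phi_out)
    inner = (theta_b < R_in) & in_range(phi_b, *phi_in)
    return (outer | inner).astype(float)

stack = np.array([single_pulse(i) for i in range(20)])  # 20 consecutive pulses
print(stack.sum(axis=1))                                # crude per-pulse fluence
\end{verbatim}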
To properly model the average profiles, a more realistic emissivity profile (non-uniform) must still be implemented. However, calculations made so far make it clear that the question of whether the average profile is brightest at the edges or at the centre clearly depends on the ratio $R_\phi=\Delta\phi_{\rm out}/\Delta\phi_{\rm in}$ and not only on the impact parameter. The average profiles shown in grey in Fig.~\ref{singles} correspond to the uniform and equal emissivity within the sub-beam zones. If the drift is not resonant, the strength ratio of central to peripheral components roughly corresponds to the ratio of the azimuthal extent of the sub-beams, $R_\phi=\Delta\phi_{\rm in}/\Delta\phi_{\rm out}$. The top-right case in Fig.~\ref{singles} has a boxy shape since $R_\phi=1$, whereas in the other cases $R_\phi=0.5$, hence, the average profiles have a bright periphery and correspond to a double type (or perhaps a blended multiple with bright edges). Beyond the proposed model, the diversity of average profiles may furthermore be increased by the dependence of magnetospheric current flow on dipole tilt and by polarization-mode-sensitive effects (absorption, scattering, refraction). The fractional azimuthal extent also affects the statistics of nulling. In the case of non-resonant drift and $\beta=0$, the fraction of nulls corresponds to the fraction of azimuths that are radio quiet on both sides of the beam (at $\phi$ and $\phi+\pi$). For $\beta\ne0$ the null fraction is no longer trivial to estimate because it depends on both the azimuthal and radial ($\Delta\theta$) extent of the sub-beams. For example, when the beam becomes narrow in azimuth ($\Delta\phi \ll \Delta\theta$), it is $\beta$ and $\Delta\theta$ that determine the null fraction. Furthermore, the model implies two types of nulling, central and peripheral. The second type appears when our line of sight is just grazing the beam, with $\theta_{\rm in}^{\rm max}< \beta < \theta_{\rm out}^{\rm max}$. In this case the nulling fraction essentially corresponds to $(2\pi-\Delta\phi_{\rm out})/2\pi$. A sample of nulling pulsars may thus contain objects that null in either way (central or peripheral). If the drift period is resonant, the null statistics and average profiles are affected by the ratio $P_d/P$. The null fraction and the profile shapes then depend on the absolute phase of drift with respect to star spin phase. Depending on the drift stability, both periodic and non-periodic nulls are possible (Basu et al.~2020). \nct{bmm20} The proposed geometry implies that, generally, the null fraction increases when the azimuthal extent of the sub-beams decreases. Space-charge limited flow models (discussed in the next section) predict that accelerating voltage decreases with distance from the main meridian ($\vec \Omega$-$\vec \mu$ plane). In older pulsars (closer to the death line) the pair production may thus be possible only close to the $\vec \Omega$-$\vec \mu$ plane. In such objects the sub-beams may then have a smaller azimuthal extent. This is in line with the finding, in Wang et al.~(2007), \nct{wmj07} that older pulsars tend to have larger null fractions; that paper also concludes that `nulling and mode changing are different manifestations of the same phenomenon', which is consistent with the model proposed here. 
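As a simple numerical illustration of the peripheral case: for the outer sub-beam span $\Delta\phi_{\rm out}=140^\circ$ used in Fig.~\ref{singles} ($\phi_{\rm out}^{\rm min}=-70^\circ$, $\phi_{\rm out}^{\rm max}=70^\circ$), a grazing sightline with $\theta_{\rm in}^{\rm max}<\beta<\theta_{\rm out}^{\rm max}$ gives, in this schematic uniform-emissivity picture, a nulling fraction of roughly $(2\pi-\Delta\phi_{\rm out})/2\pi=(360^\circ-140^\circ)/360^\circ\approx0.6$, i.e. about 60 per cent of the pulses would be nulls.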
It is unclear if the model applies to the extremely long nulling observed in state switching pulsars (Kramer et al.~2006; Young et al.~2015; Stairs et al 2019)\nct{klo06, yws15, slk19} because this requires special conditions: For the long nulls, the drift must be essentially resonant or extremely slow. Moreover, the spin down would have to depend on the drift phase. \subsection{Fine beam structure: Lesson from B0826$-$34} The observation of several sub-pulses in a single pulse implies that, at least for some objects, the sub-beams must be azimuthally structured (if the time modulation of the sub-beams is excluded). PSR B0826$-$34 offers insight here because its beam is nearly parallel to the rotation axis and the pulsar is viewed at a small viewing angle (Esamdin et al.~2005; Gupta et al.~2004).\nct{esa2005, ggks04} The low-flux minima separating the MP from the IP in this object have been interpreted as evidence for two nested carousels separated in co-latitude (Fig.~10 in Esamdin et al.). However, according to the generic beam of Fig.~\ref{beam}, the IP must correspond to the inner-sub-beam wedge and the MP to the outer half-cone sub-beam. The observed MP and IP are thus separated by the blank space that separates the sub-beams in magnetic azimuth. To account for the observed sub-pulses, the brightest parts of the beam extend radially from the dipole axis, as shown in Fig.~\ref{possi}d. There is just the radial (spoke-like) structure with the break in the azimuth (instead of two nested cones with the break in co-latitude). The data on B0826$-$34 (Fig.~1 in Esamdin et al.) also imply that the inner sub-beam has spectral properties that are quite different from those of the outer cone. This is another factor that affects the average profile shape. The back-and-forth motion of components suggests the sub-beams may wiggle back and forth instead of following the monotonic $\vec E \times \vec B$ rotation. \section{Structure of particle flow} \begin{figure} \begin{center} \includegraphics[width=0.46\textwidth]{bfield.ps} \end{center} \caption{Particle flow pattern suggested by the beam in Fig.~\ref{beam}. A reversed flow of emitting particles (with all arrows reversed) is also possible. The last open lines are dashed, the critical lines are dotted, and $\vec \mu$ is the dipole magnetic moment. } \label{bfield} \end{figure} Assuming that charges within the polar tube are generally capable of emitting detectable radio emission, the blank parts of the invoked beam (Fig.~\ref{beam}) can be interpreted as regions with particles flowing away from the observer. This leads to the particle flow pattern shown in Fig.~\ref{bfield}. The inward flow in regions A and C turns numerous exotic ideas into reasonable possibilities. If downward flow A is radiating, the radiation may be obscured at flow B, it can be scattered there, or it can be mirror-style reflected. It may also pass through to the other side, forming the off-pulse quasi-isotropic radiation. Extremely wide pulse components with strange absorption features (not only double notches) have been observed, for example~in PSR B0950$+$08 and B1929$+$10 (Rankin \& Rathnasree 1997; McLaughlin \& Rankin 2004),\nct{rr97, mr04} although not in other pulsars (e.g.~Vela, Kramer et al.~2002). \nct{kjv02} The asymmetry in Fig.~\ref{bfield} seems consistent with the non-universal character of this phenomenon. 
When different magnetic azimuths are considered, the emission from flow A is always directed towards the near-axial region, which leads to interesting ray focusing geometry. Moreover, it is now possible to assume that the radio signal is produced at high altitudes in a weaker magnetic field but is emitted inwards, where it is reflected or scattered outwardly. Similar bidirectional effects are expected for flow CD; however, since flow C is more vertical, its radiation is more likely to be reflected outwards by the star surface or a central plasma region. The loops of flow in Fig.~\ref{bfield} (such as AB between the periphery and centre of the polar tube) are generally consistent with expectations (Fig.~3 in Mestel \& Shibata 1994). \nct{ms94} However, the antisymmetry around $\vec \mu$ is yet to be understood. The local excess of charge density depends on whether local B-field lines bend towards the closest rotational pole or towards the equator. Therefore, the accelerating electric field $E_\parallel$ is opposite in two semicircles of a polar cap (Arons \& Scharlemann 1979; Mestel \& Shibata 1994). \nct{as79, ms94} When the semicircles are superposed on a ring in a polar tube, the pattern of Fig.~\ref{possi}a can be expected. While referring to the semicircular acceleration region, Beskin (2010, p.~109) \nct{bes2010} states that `accordingly, the radiation emissivity pattern should also have the form of a semicircle. However, this conclusion contradicts the observational data'. As shown above, the semicircle (and a half-ring) can be invoked from the checkerboard-like modulation pattern. Still, the semicircle of, say, poleward-bending B-field lines does not drift around the dipole axis. On the other hand, if the lateral drift is attributed to the outflowing plasma (rather than to the pattern of $E_\parallel$), it may be difficult to maintain the drifting sub-beams because the relativistic particles escape the magnetosphere in about $1/6$ of $P$. Finally, the ring of critical field lines is expected for small or mild dipole inclinations and depends on the outer gap activity. Instead, it is possible that the outer half-ring of the invoked beam corresponds to the return current sheet (see Fig.~1 by X.~Bai in Timokhin \& Arons 2013). \nct{ta13} \section{Conclusions} The radio pulsar beam is sector-structured both in azimuth and magnetic co-latitude and is indeed $\vec E \times \vec B$-drifting around the dipole axis. The sub-beam zones are essentially antisymmetric, as implied by the checkerboard-like modulation patterns observed in PSR B1919$+$21 and B1237$+$25. The long timescales of pulse moding and nulling result from the near resonance of $P_d$ and $P$, which causes a very slow rotation of viewing path through the beam. The established geometry produces pulsations that bear an obvious and striking resemblance to the single pulse observations, explain several types of typically observed behaviour, and allow for a much larger diversity of effects than the axially symmetric carousel. Moreover, the zonal beam is in line with the general structure of the magnetospheric charge density distribution. Although multiple questions about the beam details remain open, the beam is capable of explanations that are beyond the reach of the standard carousel model or the surface oscillation model, making these models obsolete. However, the results support the cone size origin proposed by Wright (2003). 
\nct{wri03} It is now possible to apply the proposed beam to several observed effects and objects, which should allow the beam geometry and behaviour to be better constrained. \begin{acknowledgements} This work was supported by grant 2017/\-25/\-B/\-ST9/\-00385 of the National Science Centre, Poland. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} In this paper we consider a family of Majumdar-Papapetrou (MP) type solutions \cite{MP} for a gravitating sigma-model (for a review see, for example, \cite{Cher}) with the action \ber{1.1a} S_\sigma=\int d^{d_0}x\sqrt{|g^0|}\biggl\{R[g^0]-\hat G_{AB} g^{0\mu\nu}\partial_\mu\sigma^A\partial_\nu\sigma^B - 2V(\sigma) -\sum_{s\in S}\varepsilon_s\e{-2U_A^s\sigma^A} g^{0\mu\nu} \partial_\mu\Phi^s\partial_\nu\Phi^s\biggr\}, \end{eqnarray} where $g^0=g^0_{\mu\nu}(x)dx^\mu\otimes dx^\nu$ is a metric on the $d_0$-dimensional manifold $M_0$, $R[g^0]$ is the scalar curvature of $g^0$, $\sigma=(\sigma^A)\in\mbox{\bf R}^N$ and $\Phi=(\Phi^s,s\in S)$ are sets of scalar fields ($S\ne\emptyset$), $V=V(\sigma)$ is a potential, $(\hat G_{AB})$ is a symmetric non-degenerate matrix, $U^s=(U_A^s)\in\mbox{\bf R}^N$ are vectors and $\varepsilon_s=\pm1$, $s\in S$. The sigma-model (\ref{1.1a}) arises in multidimensional gravitational models with scalar fields and fields of forms, when solutions with intersecting $p$-branes are considered \cite{IM4}-\cite{IMR} (the pure gravitational sector of the sigma-model was considered in \cite{Ber,RZ,IM0}). For $p$-brane applications (see, for example, \cite{IM4}-\cite{IMJ}) $g^0$ is Euclidean, $(\hat G_{AB})$ is positive-definite and $\varepsilon_s= -1$, if pseudo-Euclidean (electric and magnetic) $p$-branes in a pseudo-Euclidean space-time are considered. The sigma-model (\ref{1.1a}) may also be considered for a pseudo-Euclidean metric $g^0$ of signature $(-,+, \ldots, +)$. In this case, for a positive-definite matrix $(\hat G_{AB})$ and $\varepsilon_s= 1$, we get non-negative kinetic energy terms. The target space of the model is $(\mbox{\bf R}^K,{\cal G})$, where \ber{1.1} {\cal G}=\hat G_{AB}d\sigma^A\otimes d\sigma^B+ \sum_{s\in S}\varepsilon_s\e{-2U_A^s\sigma^A}d\Phi^s\otimes d\Phi^s. \end{eqnarray} It was proved in \cite{Cosm} that the target space ${\cal T}=(\mbox{\bf R}^K,{\cal G})$ is a homogeneous (coset) space $G/H$ ($G$ is the isometry group of ${\cal T}$, $H$ is the isotropy subgroup of $G$). ${\cal T}$ is symmetric (i.e. the Riemann tensor is covariantly constant: $\nabla_MR_{M_1M_2M_3M_4}[{\cal G}]=0$) if and only if \ber{1.2} (U^{s_1}-U^{s_2})(U^{s_1},U^{s_2})=0 \end{eqnarray} for all $s_1,s_2\in S$, i.e. when any two vectors $U^{s_1}$ and $U^{s_2}$, $s_1\ne s_2$, are either coinciding, $U^{s_1}=U^{s_2}$, or orthogonal, $(U^{s_1},U^{s_2})=0$, where the scalar product $(\cdot,\cdot)$ is defined as follows: \ber{1.2a} (U,U')=\hat G^{AB}U_AU'_B, \end{eqnarray} for $U,U'\in\mbox{\bf R}^N$, where $(\hat G^{AB})=(\hat G_{AB})^{-1}$. In our previous papers \cite{IM4}-\cite{IMR} we considered the orthogonal case \ber{1.3} (U^{s_1},U^{s_2})=0, \end{eqnarray} $s_1\ne s_2$, with $(U^{s_1},U^{s_1})\ne0$, $s_1,s_2\in S$, and, for zero potential $V=0$, obtained a family of exact solutions of the sigma-model (\ref{1.1a}) governed by $k$ harmonic functions, where $k$ is the number of $s\in S$ satisfying $\varepsilon_s(U^s,U^s)<0$. Using these sigma-model solutions we also found a family of solutions in multidimensional gravity with $p$-branes. For $p$-brane applications the orthogonality condition (\ref{1.3}) is equivalent to (orthogonal) intersection rules \cite{IM4,IM,IMC,IMR,AR,AEH}, also known as the no-force condition. In \cite{IMC} these solutions (sigma-model and $p$-brane ones) were also generalized to the case $V\ne0$, where $V=\sum_{a=1}^m A_a\e{u_A^a\sigma^A}$ and $(u^a,U^s)=0$, $a=1,\dots,m$, $s\in S$. 
Here we generalize the orthogonal $\sigma$-model solutions from \cite{IM4}-\cite{IMR} to a more general block-orthogonal case: \ber{2.3} S=S_1 \cup\dots\cup S_k, \qquad S_i \cap S_j = \emptyset, \quad i \neq j, \end{eqnarray} $S_i \ne \emptyset$, i.e. the set $S$ is a union of $k$ non-intersecting (non-empty) subsets $S_1,\dots,S_k$, and \ber{2.4} (U^s,U^{s'})=0 \end{eqnarray} for all $s\in S_i$, $s'\in S_j$, $i\ne j$; $i,j=1,\dots,k$. According to (\ref{2.4}) the set of vectors $(U^s,s\in S)$ has a block-orthogonal structure with respect to the scalar product (\ref{1.2a}): it is split into $k$ mutually orthogonal blocks $(U^s,s\in S_i)$, $i=1,\dots,k$. In this case solutions exist when the matrix of scalar products $B = ((U^{s_1},U^{s_2}))$ and the parameters $\varepsilon_s$ satisfy the relation \ber{2.10} \sum_{s'\in S}(U^s,U^{s'})\varepsilon_{s'}\nu_{s'}^2=-1 \end{eqnarray} for some set of real numbers $\nu_s$, $s \in S$. The number of independent harmonic functions is defined by the number of blocks in the matrix $B$. The paper is organized as follows. In Sect. 2 the sigma-model is considered and, in the block-orthogonal case, exact solutions governed by a set of harmonic functions are obtained. In Sect. 3 certain examples of solutions related to Lie algebras (e.g. finite-dimensional and hyperbolic ones) are considered and restrictions on the signature parameters $\varepsilon_s$ for different Lie algebras are derived. It is shown that affine Kac-Moody algebras do not appear in this scheme. Sect. 4 is devoted to the application of the sigma-model solutions to multidimensional models with intersecting $p$-branes. The block-orthogonal generalization of (orthogonal) $p$-brane MP type solutions is presented in Subsect. 4.2. (We note that block-orthogonal black hole and wormhole solutions with $p$-branes were recently considered in \cite{Br}; these solutions generalize the orthogonal black hole solutions \cite{CT,OS,AIV,O,BIM,IMJ}.) In Subsect. 4.3 the behavior of the Riemann tensor squared (Kretschmann scalar) for multicenter solutions is investigated and criteria for the existence of a horizon and for the finiteness of the Kretschmann scalar are established. In Sect. 5 intersection rules and some examples are considered (e.g. for $B_D$-models that may be relevant for future generalizations of $M$- and $F$-theories \cite{M-th,F-th} to dimensions $D > 12$). Here a dyon solution for $D=11$ supergravity with an electric 2-brane and a magnetic 5-brane is considered. This solution obeys the $A_2 = sl(3,\mbox{\bf C})$ intersection rule $3 \cap 6 = 1$ instead of the orthogonal one, $3 \cap 6 = 2$. An analogous dyon solution for $D=10$ IIA supergravity was recently considered in \cite{GrI}. We note that the intersecting $p$-brane solutions (see \cite{IM4,IM,IMC,IMR,AV,AR,AEH,St} and references therein) with ``orthogonal'' intersection rules correspond to the Lie algebras $A_1 \oplus \ldots \oplus A_1$, where $A_1 =sl(2,\mbox{\bf C})$. (The MP solution in this classification corresponds to the algebra $A_1$.) In supergravity models these solutions correspond to the so-called BPS saturated states preserving fractional supersymmetry \cite{DS}-\cite{Ga}. \section{$\sigma$-model solutions} Here we consider a family of solutions to the equations of motion of the sigma-model (\ref{1.1a}) in the block-orthogonal case (\ref{2.3}), (\ref{2.4}). 
The equations of motion corresponding to (\ref{1.1a}) have the following form \ber{2.5} R_{\mu\nu}[g^0]=\hat G_{AB}\partial_\mu\sigma^A\partial_\nu\sigma^B+ \frac{2V}{d_0-2}g_{\mu\nu}^0+\sum_{s\in S}\varepsilon_s \e{-2U_A^s\sigma^A}\partial_\mu\Phi^s\partial_\nu\Phi^s, \\ \nqq \label{2.6} \hat G_{AB}\triangle[g^0]\sigma^B-\frac{\partial V}{\partial\sigma^A}+ \sum_{s\in S}\varepsilon_sU_A^s\e{-2U_C^s\sigma^C} g^{0\mu\nu} \partial_\mu\Phi^s\partial_\nu\Phi^s =0, \\ \nqq \label{2.7} \partial_\mu\left(\sqrt{|g^0|}g^{0\mu\nu}\e{-2U_A^s\sigma^A} \partial_\nu\Phi^s\right)=0, \end{eqnarray} $s\in S$. Here $\triangle[g^0]$ is the Laplace-Beltrami operator corresponding to $g^0$. {\bf Proposition 1.} Let $(M_0,g^0)$ be Ricci-flat, \ber{2.8} R_{\mu\nu}[g^0]=0. \end{eqnarray} Then the field configuration \ber{2.9} g^0, \qquad \sigma^A=\sum_{s\in S}\varepsilon_sU^{sA}\nu_s^2\ln H_s, \qquad \Phi^s=\frac{\nu_s}{H_s}, \end{eqnarray} $s\in S$, satisfies the field equations (\ref{2.5})--(\ref{2.7}) with $V=0$ if the real numbers $\nu_s$ obey the relations (\ref{2.10}), $$ \sum_{s'\in S}(U^s,U^{s'})\varepsilon_{s'}\nu_{s'}^2=-1, $$ $s\in S$, the functions $H_s >0$ are harmonic, i.e. \ber{2.11} \triangle[g^0]H_s=0, \end{eqnarray} $s\in S$, and the $H_s$ coincide inside blocks: \ber{2.12} H_s=H_{s'} \end{eqnarray} for $s,s'\in S_i$, $i=1,\dots,k$. Proposition 1 can be readily verified by a straightforward substitution of (\ref{2.8})--(\ref{2.12}) into the equations of motion (\ref{2.5})--(\ref{2.7}). In the special (orthogonal) case, when every block contains only one vector (i.e. all $|S_i|=1$), Proposition 1 coincides with that of \cite{IMC}. In the general case the vectors inside each block $S_i$ are not orthogonal. The solution under consideration depends on $k$ independent harmonic functions. For a given set of vectors $(U^s,s\in S)$ the maximal number $k$ arises for the irreducible block-orthogonal decomposition (\ref{2.3}), (\ref{2.4}), when no block $(U^s,s\in S_i)$ can be split into two mutually orthogonal subblocks. We note that due to (\ref{2.4}) the relation (\ref{2.10}) may be rewritten as \ber{2.13} \sum_{s'\in S_i}(U^s,U^{s'})\varepsilon_{s'}\nu_{s'}^2=-1, \end{eqnarray} $s\in S_i$, $i=1,\dots,k$. Hence, the parameters $(\nu_s,s\in S_i)$ depend only on the vectors $(U^s,s\in S_i)$, $i=1,\dots,k$. \section{Parameters of solutions related to Lie algebras} Here we put \ber{3.1} (U^s,U^s)\ne0 \end{eqnarray} for all $s\in S$ and introduce the quasi-Cartan matrix $A=(A^{ss'})$, \ber{3.2} A^{ss'} \equiv \frac{2(U^s,U^{s'})}{(U^{s'},U^{s'})}, \end{eqnarray} $s,s'\in S$, which coincides with the Cartan matrix when the $U^s$, $s \in S$, are simple roots of some Lie algebra and $(\cdot,\cdot)$ is the standard bilinear form on the root space. From (\ref{2.4}) we get a block-orthogonal structure of $A$: \ber{3.3} A=\left(\begin{array}{ccc} A_{(1)}& \dots& 0\\ \vdots& \ddots& \vdots\\ 0& \dots& A_{(k)} \end{array}\right), \end{eqnarray} where $A_{(i)}=(A^{ss'},s,s'\in S_i)$, $i=1,\dots,k$. Here we tacitly assume that the set $S$ is ordered, $S_1<\dots<S_k$, and that the order in each $S_i$ is inherited from the order in $S$. For $\det A_{(i)}\ne0$ the relation (\ref{2.13}) may be rewritten in the equivalent form \ber{3.4} \varepsilon_s\nu_s^2(U^s,U^s)=-2\sum_{s'\in S_i}A_{ss'}^{(i)}, \end{eqnarray} $s\in S_i$, where $(A_{ss'}^{(i)})=A_{(i)}^{-1}$. Thus, eq. (\ref{2.13}) may be resolved in terms of $\nu_s$ for suitable $\varepsilon_s=\pm1$, $s\in S_i$. For $\det A_{(i)}=0$ there exist situations when eq. (\ref{2.13}) has no solutions even for complex $\nu_s$. 
Indeed, let us suppose that there exists a vector $a=(a_s,s\in S_i)$ satisfying the relations \ber{3.5} \sum_{s\in S_i}a_s A_{(i)}^{ss'}=0, \quad \sum_{s\in S_i}a_s \ne 0, \end{eqnarray} $s'\in S_i$ (eqs. (\ref{3.5}) imply $\det A_{(i)}=0$ and, hence, $\det A=0$). From (\ref{2.13}) and the first relation in (\ref{3.5}) we get $\sum_{s\in S_i} a_s=0$, which contradicts the second relation in (\ref{3.5}). In what follows we consider the block-orthogonal decomposition to be irreducible, i.e. for any $i$ the block $(U^s,s\in S_i)$ cannot be split into two mutually orthogonal subblocks. In this case any matrix $A_{(i)}$ is indecomposable (or irreducible) in the sense that there is no renumbering of the vectors which would bring $A_{(i)}$ to the block-diagonal form $A_{(i)}=\mathop{\rm diag}\nolimits(A'_{(i)},A''_{(i)})$. Let $A$ be a generalized Cartan matrix \cite{Kac,FS}. In this case \ber{3.6} A^{ss'}\in -\mbox{\bf Z}_+ \equiv\{0,-1,-2,\dots\} \end{eqnarray} for $s \ne s'$ and $A$ generates a symmetrizable generalized Kac-Moody algebra \cite{Kac,FS}. Now we fix $i\in\{1,\dots,k\}$. From (\ref{3.3}) and (\ref{3.6}) we get \ber{3.7} A_{(i)}^{ss'}\in-\mbox{\bf Z}_+, \end{eqnarray} $s,s'\in S_i$, $s \ne s'$. There are three possibilities for $A_{(i)}$: a) $\det A_{(i)}>0$, b) $\det A_{(i)}<0$ and c) $\det A_{(i)}=0$. For $\det A_{(i)}\ne0$ the corresponding Kac-Moody algebra is simple, since $A_{(i)}$ is indecomposable \cite{FS}. Now we analyze these three possibilities. \subsection{Finite dimensional Lie algebras} Let $\det A_{(i)}>0$. In this case $A_{(i)}$ is a Cartan matrix of a simple finite-dimensional Lie algebra and $A_{(i)}^{ss'}\in\{0,-1,-2,-3\}$, $s\ne s'$. The elements of the inverse matrix $A_{(i)}^{-1}$ are positive (see Ch.~7 in \cite{FS}) and hence we get from (\ref{3.4}) \ber{3.8} \varepsilon_s(U^s,U^s)<0, \end{eqnarray} $s\in S_i$. {\bf Example 1.} Let us consider the Cartan matrix \ber{3.9} A_{(i)}=\left(\begin{array}{cc} 2& -q\\ -1& 2 \end{array}\right), \end{eqnarray} $q=1,2,3$, corresponding to the Lie algebras $A_2=\mathop{\rm sl}\nolimits(3)$, $B_2=\mathop{\rm so}\nolimits(5)$ and $G_2$ respectively. The relations (\ref{3.4}) read in this case \ber{3.10} \varepsilon_1\nu_1^2(U^1,U^1)(q-4)=2q+4, \qquad \varepsilon_2\nu_2^2(U^2,U^2)(q-4)=6. \end{eqnarray} Here $S_i=\{1,2\}$. {\bf Example 2.} Let $A_{(i)}$ be the $r\times r$ Cartan matrix of the Lie algebra $A_r= \mathop{\rm sl}\nolimits(r+1)$, $r\ge2$. This matrix is described graphically by the Dynkin diagram pictured in Fig.~1. \begin{center} \begin{picture}(65,10) \put(5,5){\line(1,0){34}} \put(40,5){\makebox(0,0)[lc]{$\dots$}} \put(47,5){\line(1,0){13}} \put(5,5){\circle*{1}} \put(20,5){\circle*{1}} \put(60,5){\circle*{1}} \put(5,2){\makebox(0,0)[lc]{1}} \put(20,2){\makebox(0,0)[lc]{2}} \put(60,2){\makebox(0,0)[lc]{r}} \end{picture} \\[5pt] \small Fig.1. \it Dynkin diagram for the $A_r$ Lie algebra \end{center} (For $s\ne s'$, $A_{(i)}^{ss'}=-1$ if the nodes $s$ and $s'$ are connected by a line in the diagram and $A_{(i)}^{ss'}=0$ otherwise.) Using the relation for the inverse matrix $A_{(i)}^{-1}=(A_{ss'}^{(i)})$ (see Ch.~7 in \cite{FS}) \ber{3.11} A_{ss'}^{(i)}=\frac1{r+1}\min(s,s')[r+1-\max(s,s')] \end{eqnarray} we may rewrite (\ref{3.4}) as follows \ber{3.12} \varepsilon_s\nu_s^2(U^s,U^s)=s(s-1-r), \end{eqnarray} $s\in\{1,\dots,r\}=S_i$. \subsection{Hyperbolic Kac-Moody algebras} Now we consider the case $\det A_{(i)}<0$. 
Among such irreducible symmetrizable matrices satisfying (\ref{3.7}) there exists a large subclass of Cartan matrices corresponding to infinite-dimensional simple hyperbolic generalized Kac-Moody (KM) algebras of ranks $r=2,\dots,10$ \cite{Kac,FS}. {\bf Example 3.} Let \ber{3.13} A_{(i)}=\left(\begin{array}{cc} 2& -q_1\\ -q_2& 2 \end{array}\right), \quad q_1q_2>4, \end{eqnarray} $q_1,q_2\in \mbox{\bf N}$. This is the Cartan matrix of the hyperbolic KM algebra $H_{2}(q_1,q_2)$. From (\ref{3.4}) we get \ber{3.14} \varepsilon_s\nu_s^2(U^s,U^s)(q_1q_2-4)=2q_s+4, \end{eqnarray} $s\in\{1,2\}=S_i$. {\bf Example 4.} Let $A_{(i)}$ be the Cartan matrix corresponding to the $E_{10}$ hyperbolic KM algebra with the Dynkin diagram pictured in Fig.~2. \begin{center} \begin{picture}(90,20) \put(5,5){\line(1,0){80}} \put(5,5){\circle*{1}} \put(15,5){\circle*{1}} \put(25,5){\circle*{1}} \put(35,5){\circle*{1}} \put(45,5){\circle*{1}} \put(55,5){\circle*{1}} \put(65,5){\circle*{1}} \put(75,5){\circle*{1}} \put(85,5){\circle*{1}} \put(65,5){\line(0,1){10}} \put(65,15){\circle*{1}} \put(5,2){\makebox(0,0)[lc]{1}} \put(15,2){\makebox(0,0)[lc]{2}} \put(25,2){\makebox(0,0)[lc]{3}} \put(35,2){\makebox(0,0)[lc]{4}} \put(45,2){\makebox(0,0)[lc]{5}} \put(55,2){\makebox(0,0)[lc]{6}} \put(65,2){\makebox(0,0)[lc]{7}} \put(75,2){\makebox(0,0)[lc]{8}} \put(85,2){\makebox(0,0)[lc]{9}} \put(68,15){\makebox(0,0)[lc]{10}} \end{picture} \\[5pt] \small Fig.2. \it Dynkin diagram for the $E_{10}$ hyperbolic KM algebra \end{center} In this case we get from (\ref{3.4}) \cite{GrI} \ber{3.15} \frac12\varepsilon_s(U^s,U^s)\nu_s^2=30,61,93,126,160,195,231,153,76,115 \end{eqnarray} for $s=1,2,\dots,10$ respectively. In both examples of hyperbolic algebras the following relation is satisfied: \ber{3.16} \varepsilon_s(U^s,U^s) >0, \end{eqnarray} $s\in S_i$. This relation is valid in the general case, since $(A_{(i)}^{-1})_{ss'}\le0$, $s,s' \in S_i$, for any hyperbolic algebra \cite{NikP}. Hyperbolic KM algebras have appeared in different areas of mathematical physics, e.g. in ordinary gravity \cite{FF} (the ${\cal F}_3$ hyperbolic algebra), in supergravity \cite{J,Miz} (the $E_{10}$ hyperbolic algebra), \cite{Nic} (the ${\cal F}_3$ hyperbolic algebra), in string theory etc. (see also \cite{Nik} and references therein). In \cite{Nic} it was shown that the chiral reduction of simple ($N=1$) supergravity from four dimensions to one dimension gives rise to the hyperbolic algebra of rank 3 (namely ${\cal F}_3$). In \cite{IKM} an example of a cosmological solution in $D=11$ supergravity describing three Euclidean $p$-branes (two magnetic and one electric) with intersection rules corresponding to the hyperbolic KM algebra ${\cal F}_3$ was constructed. \subsection{Affine Kac-Moody algebras} Now we proceed to the degenerate case: $\det A_{(i)}=0$. Here we restrict ourselves to a subclass of affine KM algebras \cite{Kac,FS}. Unfortunately, the solutions considered above do not exist in the affine case. Indeed, any affine Cartan matrix satisfies the relations (\ref{3.5}) with $a_s > 0$ (the Coxeter labels) and, hence, the solutions are absent in this case. 
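As a simple computational cross-check of the relations of this section (not contained in the original derivation), one may evaluate the right-hand side of (\ref{3.4}), i.e. the negative row sums of the inverse (quasi-)Cartan matrix, directly. The following Python sketch assumes the node labelling of Fig.~2 for $E_{10}$; its output should reproduce, up to this labelling, the integers listed in (\ref{3.15}).
\begin{verbatim}
from sympy import Matrix

# Cartan matrix of E_10 with the node labelling of Fig. 2:
# nodes 1-9 form a chain, node 10 is attached to node 7.
n = 10
A = 2 * Matrix.eye(n)
edges = [(i, i + 1) for i in range(1, 9)] + [(7, 10)]
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = -1

# By (3.4), (1/2) eps_s (U^s, U^s) nu_s^2 = - sum_{s'} (A^{-1})_{s s'}.
Ainv = A.inv()
row_sums = [-sum(Ainv.row(s)) for s in range(n)]
print(row_sums)   # should match the list in (3.15), node by node
\end{verbatim}
Replacing the matrix $A$ by (\ref{3.9}) or (\ref{3.13}) reproduces Examples 1 and 3 in the same way; for instance, for (\ref{3.9}) with $q=1$ the negative row sums are $(-1,-1)$, in agreement with (\ref{3.10}).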
\section{Solutions with intersecting $p$-branes} \subsection{The model} Now we consider a multidimensional gravitational model governed by the action \cite{IMC} \ber{4.1} S=\int d^Dz\sqrt{|g|}\biggl\{R[g]-h_{\alpha\beta}g^{MN}\partial_M\varphi^\alpha \partial_N\varphi^\beta-\sum_{a\in\triangle}\frac{\theta_a}{n_a!} \exp[2\lambda_a(\varphi)](F^a)^2\biggr\} \end{eqnarray} where $g=g_{MN}dz^M\otimes dz^N$ is the metric, $\varphi=(\varphi^\alpha)\in\mbox{\bf R}^l$ is a vector of scalar fields, $(h_{\alpha\beta})$ is a symmetric non-degenerate $l\times l$ matrix $(l\in \mbox{\bf N})$, $\theta_a=\pm1$, $F^a=dA^a$ is $n_a$-form ($n_a\ge1$), $\lambda_a$ is a 1-form on $\mbox{\bf R}^l$: $\lambda_a(\varphi)=\lambda_{\alpha a}\varphi^\alpha$, $a\in\triangle$, $\alpha=1,\dots,l$. Here $\triangle$ is some finite set. In the models with one time all $\theta_a = 1$ when the signature of the metric is $(-1,+1, \ldots, +1)$. We consider the manifold \ber{4.2} M=M_0\times M_1\times\dots\times M_n, \end{eqnarray} with the metric \ber{4.3} g=\e{2\gamma(x)}g^0+\sum_{i=1}^n\e{2\phi^i(x)}g^i \end{eqnarray} where $g^0=g_{\mu\nu}^0(x)dx^\mu\otimes dx^\nu$ is a metric on the manifold $M_0$, and $g^i=g_{m_in_i}^i(y_i)dy_i^{m_i}\otimes dy_i^{n_i}$ is a metric on the manifold $M_i$ satisfying \ber{4.4} Ric[g^i]=\xi_ig^i, \end{eqnarray} $\xi_i=\mathop{\rm const}\nolimits$, $i=1,\dots,n$ ($Ric[g]$ denotes Ricci-tensor, corresponding to $g$). Thus, all internal spaces $(M_i,g^i)$, $i=1,\dots,n$, are Einstein ones. Any manifold $M_\nu$ is claimed to be oriented and connected and $d_\nu\equiv\dim M_\nu$, $\nu=0,\dots,n$. Let \ber{4.5} \tau_i \equiv\sqrt{|g^i(y_i)|}dy_i^1\wedge\dots\wedge dy_i^{d_i}, \quad \varepsilon(i)\equiv\mathop{\rm sign}\nolimits(\det(g_{m_in_i}^i))=\pm1 \end{eqnarray} denote the volume $d_i$-form and signature parameter respectively, $i=1,\dots,n$. Let $\Omega=\Omega_n$ be a set of all subsets of $\{1,\dots,n\}$, $|\Omega|=2^n$. For any $I=\{i_1,\dots,i_k\}\in\Omega$, $i_1<\dots<i_k$, we denote \ber{4.6} \tau(I)\equiv\tau_{i_1}\wedge\dots\wedge\tau_{i_k}, \quad d(I)\equiv\sum_{i\in I}d_i, \quad \varepsilon(I) \equiv \prod_{i \in I} \varepsilon(i). \end{eqnarray} We also put $\tau(\emptyset)= \varepsilon(\emptyset)= 1$ and $d(\emptyset)=0$. For fields of forms we consider the following composite electromagnetic ansatz \ber{4.7} F^a=\sum_{I\in\Omega_{a,e}}{\cal F}^{(a,e,I)}+ \sum_{J\in\Omega_{a,m}}{\cal F}^{(a,m,J)} \end{eqnarray} where \ber{4.8} {\cal F}^{(a,e,I)}=d\Phi^{(a,e,I)}\wedge\tau(I), \\ \nqq \label{4.9} {\cal F}^{(a,m,J)}=\e{-2\lambda_a(\varphi)}*(d\Phi^{(a,m,J)} \wedge\tau(J)) \end{eqnarray} are elementary forms of electric and magnetic types respectively, $a\in\triangle$, $I\in\Omega_{a,e}$, $J\in\Omega_{a,m}$ and $\Omega_{a,e}\subset\Omega$, $\Omega_{a,m}\subset\Omega$. In (\ref{4.9}) $*=*[g]$ is the Hodge operator on $(M,g)$. For scalar functions we put \ber{4.10} \varphi^\alpha=\varphi^\alpha(x), \quad \Phi^s=\Phi^s(x), \end{eqnarray} $s\in S$. Here and below \ber{4.11} S=S_e \cup S_m, \quad S_v=\bigcup_{a\in\triangle}\{a\}\times\{v\}\times\Omega_{a,v}, \end{eqnarray} $v=e,m$. Due to (\ref{4.8}) and (\ref{4.9}) \ber{4.12} d(I)=n_a-1, \quad d(J)=D-n_a-1, \end{eqnarray} for $I\in\Omega_{a,e}$, $J\in\Omega_{a,m}$. {\bf Remark 1.} It is more correct to write in (\ref{4.3}) $\hat{g}^{i}$ instead of $g^{i}$, where $\hat{g}^{i} = p_{i}^{*} g^{i}$ is the pullback of the metric $g^{i}$ to the manifold $M$ by the canonical projection: $p_{i} : M \rightarrow M_{i}$, $i = 1, \ldots, n$. 
Here we omit all ``hats'' (e.g. corresponding to volume $\tau$-forms) for simplicity. Let $d_0 \neq 2$ and \ber{4.13} \gamma=\gamma_0(\phi) \equiv \frac1{2-d_0}\sum_{j=1}^n d_j\phi^j, \end{eqnarray} i.e. the generalized harmonic gauge is used. Now we impose restriction on sets $\Omega_{a,v}$. These restrictions guarantee the block-diagonal structure of a stress-energy tensor (like for the metric) and the existence of $\sigma$-model representation \cite{IMC}. We denote $w_1\equiv\{i|i\in\{1,\dots,n\},\quad d_i=1\}$, and $n_1=|w_1|$ (i.e. $n_1$ is the number of 1-dimensional spaces among $M_i$, $i=1,\dots,n$). We also denote by $I \sqcup J$ the union of non-intersecting sets $I$ and $J$ {\bf Restriction 1.} For any $a\in\triangle$, $v\in\{e,m\}$, there are no $I,J\in\Omega_{a,v}$ such that \ber{r.1} I= \{i\} \sqcup (I \cap J), \qquad J= (I \cap J) \sqcup \{ j \} \end{eqnarray} for some $i,j \in w_1$, $i \neq j$. {\bf Restriction 2} (only for $d_0=1,3$). For any $a\in\triangle$ there are no $I\in\Omega_{a,m}$, $J\in\Omega_{a,e}$ such that \ber{r.2} \bar I=\{i\}\sqcup J \end{eqnarray} for $d_0 = 1$ and \ber{r.3} J=\{i\}\sqcup \bar I \end{eqnarray} for $d_0 = 3$, where $i \in w_1$ and $\bar I$ is defined as follows \ber{4.13a} \bar I\equiv\{1,\ldots,n\}\setminus I. \end{eqnarray} Restriction 1 is satisfied for $n_1 \leq 1$ and in the case when $|\Omega_{a,v}| \leq 1$ for all $a\in\triangle$, $v\in\{e,m\}$ (e.g. in the non-composite case). For $n_1\ge2$ it forbids certain pairs of two electric or two magnetic $p$-branes, corresponding to the same form ($F^a, a \in \triangle$). Restriction 2 is satisfied for $n_1=0$ or when $d_0 \neq 1,3$. For $n_1\ge1$ and $d_0 = 1,3$ it forbids certain electro-magnetic pairs, corresponding to the same form. It was proved in \cite{IMC} that equations of motion for the model (\ref{4.1}) and the Bianchi identities: $d{\cal F}^s=0$, $s\in S_m$, for fields from (\ref{4.3})--(\ref{4.13}), when Restrictions 1 and 2 are imposed, are equivalent to equations of motion for the $\sigma$-model (\ref{1.1a}) with $(\sigma^A)=(\phi^i,\varphi^\alpha)$, the index set $S$ from (\ref{4.11}), target space metric \ber{4.14} (\hat G_{AB})=\left(\begin{array}{cc} G_{ij}& 0\\ 0& h_{\alpha\beta} \end{array}\right), \end{eqnarray} with \ber{4.15} G_{ij}= d_i \delta_{ij}+\frac{d_i d_j}{d_0-2}, \end{eqnarray} the potential \ber{4.16} V=-\frac12\sum_{i=1}^n\xi_id_i\e{-2\phi^i+2\gamma_0(\phi)}, \end{eqnarray} vectors \ber{4.17} (U_A^s)=(d_i\delta_{iI_s},-\chi_s\lambda_{\alpha a_s}), \end{eqnarray} where $s=(a_s,v_s,I_s)$, $\chi_s=+1,-1$ for $v_s=e,m$ respectively \ber{4.i} \delta_{iI}= \sum_{j\in I}\delta_{ij} \end{eqnarray} is the indicator of $i$ belonging to $I$: $\delta_{iI}=1$ for $i\in I$ and $\delta_{iI}=0$ otherwise; and \ber{4.18} \varepsilon_s=(-\varepsilon[g])^{(1-\chi_s)/2}\varepsilon(I_s) \theta_{a_s}, \end{eqnarray} $s\in S$, $\varepsilon[g]\equiv\mathop{\rm sign}\nolimits\det(g_{MN})$. More explicitly (\ref{4.18}) reads \ber{4.18a} \varepsilon_s= \varepsilon(I_s) \theta_{a_s}, \quad v_s= e \\ \nqq \label{4.18b} \varepsilon_s= -\varepsilon[g] \varepsilon(I_s) \theta_{a_s} \quad v_s=m. \end{eqnarray} The scalar products (\ref{1.2a}) for vectors $U^s$ were calculated in \cite{IMC} \ber{4.19} (U^s,U^{s'})=d(I_s\cap I_{s'})+\frac{d(I_s)d(I_{s'})}{2-D}+ \chi_s\chi_{s'}\lambda_{\alpha a_s}\lambda_{\beta a_{s'}} h^{\alpha\beta} \equiv B^{ss'}, \end{eqnarray} where $(h^{\alpha\beta})=(h_{\alpha\beta})^{-1}$; $s=(a_s,v_s,I_s)$ and $s'=(a_{s'},v_{s'},I_{s'})$ belong to $S$. 
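To illustrate the scalar products (\ref{4.19}) in a concrete case, the following minimal Python sketch (an illustration only, specialized to $D=11$ with no dilatonic scalars, so that the $\lambda$-term in (\ref{4.19}) drops out) computes the matrix $(B^{ss'})$ and the quasi-Cartan matrix (\ref{3.2}) for an electric 2-brane and a magnetic 5-brane intersecting over a one-dimensional (time) manifold; the result is the $A_2$ Cartan matrix, in agreement with the dyon example of Sect.~5.
\begin{verbatim}
# Scalar products (4.19) for D = 11 supergravity (no scalar fields):
# (U^s, U^{s'}) = d(I_s cap I_{s'}) + d(I_s) d(I_{s'}) / (2 - D).
D = 11

def scalar_product(d_s, d_sp, d_int):
    return d_int + d_s * d_sp / (2.0 - D)

# electric 2-brane: d(I) = 3;  magnetic 5-brane: d(I) = 6;  d(I_e cap I_m) = 1
dims = [3, 6]
d_int = [[3, 1], [1, 6]]          # d(I_s cap I_{s'}); the diagonal is d(I_s)
B = [[scalar_product(dims[i], dims[j], d_int[i][j]) for j in range(2)]
     for i in range(2)]
# Quasi-Cartan matrix (3.2): A^{ss'} = 2 (U^s, U^{s'}) / (U^{s'}, U^{s'})
A = [[2 * B[i][j] / B[j][j] for j in range(2)] for i in range(2)]
print(B)   # [[2.0, -1.0], [-1.0, 2.0]]
print(A)   # the A_2 Cartan matrix
\end{verbatim}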
\subsection{Exact solutions with Ricci-flat spaces} Here we consider the case of Ricci-flat internal spaces, i.e. $\xi_i=0$, $i=1,\dots,n$, in (\ref{4.4}). The potential (\ref{4.16}) is trivial in this case and we may apply the results from Sect.2 to our multidimensional model (\ref{4.1}), when vectors $(U^s,s\in S)$ obey the block-orthogonal decomposition (\ref{2.3}), (\ref{2.4}) with scalar products defined in (\ref{4.19}). The solution reads: \ber{4.20} g=U\left\{g^0+\sum_{i=1}^n U_ig^i\right\}, \\ \nqq \label{4.21} U=\left(\prod_{s\in S}H_s^{2d(I_s)\varepsilon_s\nu_s^2}\right)^{1/(2-D)}, \\ \nqq \label{4.22} U_i=\prod_{s\in S}H_s^{2\varepsilon_s\nu_s^2\delta_{iI_s}}, \\ \nqq \label{4.22a} Ric[g^0]=Ric[g^1]=\dots=Ric[g^n]=0, \\ \nqq \label{4.23} \varphi^\alpha=-\sum_{s\in S}\lambda_{a_s}^\alpha\chi_s \varepsilon_s\nu_s^2\ln H_s, \\ \nqq \label{4.24} F^a=\sum_{s\in S}{\cal F}^s\delta_{a_s}^a, \end{eqnarray} where \ber{4.25} {\cal F}^s=\nu_sdH_s^{-1}\wedge\tau(I_s), \mbox{ for } v_s=e, \\ \nqq \label{4.26} {\cal F}^s=\nu_s(*_0dH_s)\wedge\tau(\bar I_s), \mbox{ for } v_s=m, \end{eqnarray} $H_s$ are harmonic functions on $(M_0,g^0)$ coinciding inside blocks of matrix $(B^{ss'})$ from (\ref{4.19}) ($H_s=H_{s'}$, $s,s'\in S_j$, $j=1,\dots,k$, see (\ref{2.3}), (\ref{2.4})) and relations \ber{4.27} \sum_{s'\in S} B^{ss'} \varepsilon_{s'}\nu_{s'}^2=-1 \end{eqnarray} on the matrix $(B^{ss'})$, parameters $\varepsilon_s$ (\ref{4.18}) and $\nu_s$ are imposed, $s\in S$, $i=1,\dots,n$; $\alpha=1,\dots,l$. Here $\lambda_a^\alpha= h^{\alpha\beta}\lambda_{\beta a}$, $*_0=*[g^0]$ is the Hodge operator on $(M_0,g^0)$ and $\bar I$ is defined in (\ref{4.13a}). In deriving the solution we used as in \cite{IMC} the relations for contravariant components of $U^s$-vectors: \ber{4.29a} U^{si}=\delta_{iI_s}-\frac{d(I_s)}{D-2}, \quad U^{s\alpha}=-\chi_s\lambda_{a_s}^\alpha, \end{eqnarray} $s=(a_s,v_s,I_s)$. Thus, we obtained the generalization of the solutions from \cite{IMC} to the block-orthogonal case (here we eliminate the misprint with sign in eq. (5.19) in \cite{IMC}). {\bf Remark 2}. The solution is also valid for $d_0=2$, if Restriction 2 is replaced by Restriction $2^{*}$. It may be proved using a more general form of the sigma-model representation (see Remark 2 in \cite{IMC}). {\bf Restriction 2${}^*$} (for $d_0=2$). For any $a \in \triangle$ there are no $I\in\Omega_{a,m}$, $J\in\Omega_{a,e}$ such that $\bar I = J$ and for $n_1 \geq 2$, $i,j\in w_1$, $i \neq j$, there are no $I\in\Omega_{a,m}$, $J\in\Omega_{a,e}$ such that $i\in I$, $j\in \bar J$, $I\setminus\{i\}= \bar J \setminus \{j\}$. \subsection{Behaviour of the Kretschmann scalar and horizon} Let $M_0=\mbox{\bf R}^{d_0}$, $d_0>2$ and $g^0=\delta_{\mu\nu}dx^\mu\otimes dx^\nu$. For \ber{4.28} H_s=1+\sum_{b\in X_s}\frac{q_{sb}}{|x-b|^{d_0-2}}, \end{eqnarray} where $X_s$ is finite non-empty subset $X_s\subset M_0$, $s\in S$, all $q_{sb}>0$, and $X_s=X_{s'}$, $q_{sb} = q_{s'b}$ for $b \in X_s=X_{s'}$, $s, s' \in S_j$, $j =1, \ldots, k$. The harmonic functions (\ref{4.28}) are defined in domain $M_0\setminus X$, $X=\bigcup_{s\in S}X_s$, and generate the solutions (\ref{4.20})--(\ref{4.27}). Denote $S(b)\equiv\{s\in S|\quad b \in X_s\}$. We also put $M_i=\mbox{\bf R}^{d_i}$, $R[g^i]= {\cal K}[g^i]=0$, $i=1,\dots,n$, where \ber{4.k} {\cal K}[g] \equiv R_{MNPQ}[g]R^{MNPQ}[g] \end{eqnarray} is the Kretschmann scalar (or Riemann tensor squared). 
Then for the metric (\ref{4.20}) we obtain \ber{4.29} {\cal K}[g] = \frac{C' + o(1)}{U^2|x-b|^4} = [C + o(1)]|x-b|^{4(d_0-2)\eta(b)} \end{eqnarray} for $x\to b \in X$, where \ber{4.30} \eta(b)\equiv\sum_{s\in S(b)}(-\varepsilon_s)\nu_s^2 \frac{d(I_s)}{D-2}- \frac{1}{d_0-2}, \end{eqnarray} and $C = C(b) \geq 0$ ($C = {\rm const}$). The relation for $C = C(b)$ is given in the Appendix. Relation (\ref{4.29}) may be obtained using the relation for ${\cal K}[g]$ in the Appendix of Ref. \cite{IMA}. In what follows we consider non-exceptional $b \in X$, defined by the relation $C = C(b) > 0$. {\bf Remark 3.} It follows from the Appendix that an exceptional point $b \in X$, defined by the relation $C = C(b) = 0$, appears iff (if and only if) \ber{4.30a} U(x) \sim c |x-b|^{ -2 \alpha}, \quad U(x) U_i(x) \sim c_i, \end{eqnarray} for $x\to b$, where $\alpha = 0,2$ and $c, c_i \neq 0$ are constants, $i=1,\dots, n$. Due to (\ref{4.29}) the metric (\ref{4.20}) has no curvature singularity when $x\to b \in X$, $C(b) > 0$, iff \ber{4.31} \eta(b)\ge 0. \end{eqnarray} From (\ref{4.30}) we see that the metric (\ref{4.20}) is regular at a ``point'' $b\in X$ for $\varepsilon_s=-1$ and large enough values of $\nu_s^2$, $s\in S(b)$. For $\varepsilon_s=+1$, $s\in S(b)$, we have a curvature singularity at a non-exceptional point $b\in X$. Now we consider a special case: $d_1=1$, $g^1=-dt\otimes dt$. In this case we have a horizon when $x\to b\in X$ iff \ber{4.32} \xi(b)\equiv \sum_{s\in S(b)}(-\varepsilon_s) \nu_s^2 \delta_{1I_s} -\frac1{d_0-2}\ge0. \end{eqnarray} This relation follows from the requirement of an infinite propagation time of light to $b \in X$. If $\varepsilon_s= -1$, $1 \in I_s$ for all $s\in S(b)$, we get \ber{4.33} \eta(b)<\xi(b), \end{eqnarray} $b\in X$. This follows from the inequalities $d(I_s)<D-2$ ($d_0>2$). We note that $g_{tt} \to 0$ for $x \to b \in X$ if (\ref{4.33}) is satisfied. This follows from the relation \ber{4.33a} g_{tt} \sim {\rm const} |x-b|^{2(d_0-2)(\xi(b)- \eta(b))}, \end{eqnarray} $x \to b$. {\bf Remark 4.} Due to relations (\ref{4.30a}) and (\ref{4.33a}) the point $b \in X$ is non-exceptional if $g^1=-dt\otimes dt$ and $1\in I_s$, $\varepsilon_s= -1$ for all $s \in S(b)$. Thus, for the metric (\ref{4.20}) with $H_s$ from (\ref{4.28}) there are two dimensionless indicators at a non-exceptional point $b\in X$: a) the horizon indicator $\xi(b)$ (corresponding to the time $t$) and b) the curvature singularity indicator $\eta(b)$. These indicators determine (under our assumptions) the existence of a horizon and the singularity of the Kretschmann scalar (when $(M_i,g^i)$ are flat, $i=1, \ldots,n$) at a non-exceptional $b \in X$. \subsection{Generalized MP solutions} Here we consider special black hole solutions for the model (\ref{4.1}) with all $\theta_a = 1$, $a \in \triangle$, when the signature of the metric $g$ is $(-1,+1, \ldots, +1)$. From the results of the previous subsections we get the following solution (with all $\varepsilon_s = \varepsilon(I_s)= -1$) \ber{4.34} g= \Bigl(\prod_{s \in S} H_s^{2d(I_s)\nu_s^2} \Bigr)^{1/(D-2)} \Bigl\{ \sum_{\mu=1}^{d_0} dx^{\mu} \otimes dx^{\mu} \nn - \Bigl(\prod_{s \in S} H_s^{-2 \nu_s^2 }\Bigr) dt \otimes dt + \sum_{i=2}^{n} \Bigl( \prod_{s\in S} H_s^{-2\nu_s^2\delta_{iI_s}}\Bigr) g^i \Bigr\}, \\ \label{4.35} \varphi^\alpha= \sum_{s\in S}\lambda_{a_s}^\alpha\chi_s \nu_s^2\ln H_s, \\ \label{4.36} F^a= \sum_{s\in S_e} \delta_{a_s}^a \nu_s dH_s^{-1} \wedge \tau(I_s) + \sum
_{s\in S_m} \delta_{a_s}^a \nu_s (*_0 dH_s) \wedge \tau(\bar{I}_s), \end{eqnarray} where $(M_i,g^i)$ are flat Euclidean spaces, $i = 2, \ldots, n$, \ber{4.37} 1 \in I_s, \end{eqnarray} $s \in S$, i.e. all branes have a common time submanifold $M_1 = \mbox{\bf R}$, the harmonic functions $H_s$ (\ref{4.28}) are coinciding inside blocks, i.e. $H_s=H_{s'}$, $s,s'\in S_j$, $j=1,\dots,k$, parameters $\nu_s$ satisfy the relations \ber{4.27a} \sum_{s'\in S} B^{ss'} \nu_{s'}^2=1 \end{eqnarray} with the matrix $(B^{ss'})$ from (\ref{4.19}), $*_0=*[g^0]$ is the Hodge operator on $(M_0 = \mbox{\bf R}^{d_0}, g^0 = \sum_{\mu=1}^{d_0} dx^{\mu} \otimes dx^{\mu})$, $\bar I$ is defined in (\ref{4.13a}), $\theta_a = 1$ for all $a \in \triangle$, and \ber{4.38} \eta(b) = \sum_{s\in S(b)}\nu_s^2\frac{d(I_s)}{D-2}- \frac1{d_0-2} \geq 0, \end{eqnarray} $b \in X$. This solution describes a set of extreme $p$-brane black holes with horizons at $b \in X$. The Riemann tensor squared has a finite limit at any $b \in X$. Calculation of the Hawking temperature corresponding to $b\in X$ using standard formula (see, for example, \cite{W,BIM}) gives us \ber{4.39} T_H(b)=0, \end{eqnarray} for any $b\in X$ satisfying $\xi(b) > 0$. {\bf MP solution.} The standard 4-dimensional Majumdar-Papapetrou solution \cite{MP} in our notations reads \ber{4.40} g=H^2g^0-H^{-2}dt\otimes dt, \\ \nqq \label{4.41} F=\nu dH^{-1} \wedge dt, \end{eqnarray} where $\nu^2 = 2$, $g^0 = \sum_{i=1}^{3} dx^i \otimes dx^i$ and $H$ is a harmonic function. We have one electric $0$-brane (point) ``attached'' to the time manifold; $d(I_s) =1$, $\varepsilon_s= -1$ and $(U^s,U^s) = 1/2$. In this case (e.g. for extremal Reissner-Nordstr\"om black hole) we get \ber{4.41a} \eta(b)=0, \qquad \xi(b)=1, \end{eqnarray} and $T_H(b)=0$, $b\in X$. {\bf D = 11 supergravity.} Let us consider the action (\ref{4.1}) without scalar fields and with the only one form $F^4$ of rank $4$. This action coincides with the truncated (i.e. without Chern-Simons term) bosonic part of $D=11$ supergravity action. In this case there are a lot of $p$-brane MP-type (multicenter extremal black hole) solutions with orthogonal intersection rules \cite{DS}-\cite{GKT}, e.g. (i) solution with one electric $2$-brane ($d(I_s)=3$) and $d_0 =8$; (ii) solution with one magnetic $5$-brane ($d(I_s)=6$) and $d_0 =5$; (iii) solution with one electric $2$-brane and one magnetic $5$-brane ($d(I_{s_1}\cap I_{s_2})=2$) and $d_0 =4$; (iv) solution with two electric $2$-branes ($d(I_{s_1}\cap I_{s_2})=1$) and $d_0 =5$; (v) solution with two magnetic $5$-branes ($d(I_{s_1}\cap I_{s_2})= 4$) and $d_0 =3$. In the examples (iii)-(v) the harmonic functions $H_{s_1}$ and $H_{s_2}$ from (\ref{4.28}) should have the coinciding sets of poles, i.e. $X_{s_1} = X_{s_2}$, to maintain the relation (\ref{4.38}). The addition of the Chern-Simons term does not destroy these solutions. In all these examples $\eta(b)=0$, $b \in X_{s}$ and $\nu_s^2 = 1/2$, $s \in S$. \subsection{Non-extremal $p$-brane black hole solutions} There exists a non-extremal generalization of the solution from the previous subsection, when all the poles in all harmonic functions are coinciding, say $b =0$, $b \in X$. 
The metric and scalar fields for this solution read \ber{4.42} g= \Bigl(\prod_{s \in S} H_s^{2 d(I_s)\nu_s^2/(D-2)} \Bigr) \biggl\{ \frac{dr \otimes dr}{1 - 2\mu / r^{\bar d}} + r^2 d \Omega^2_{d_0 -1} \nn \nn - \Bigl(\prod_{s \in S} H_s^{-2 \nu_s^2} \Bigr) \left(1 - \frac{2\mu}{r^{\bar d }} \right) dt \otimes dt + \sum_{i = 2}^{n} \Bigl(\prod_{s\in S} H_s^{-2 \delta_{iI_s} \nu_s^2} \Bigr) g^i \biggr\}, \\ \label{4.43} \varphi^\alpha= \sum_{s\in S} \nu_s^2 \chi_s \lambda_{a_s}^\alpha \ln H_s, \end{eqnarray} where \beq{4.44} H_s = 1 + \frac{q_s}{r^{\bar{d}}}, \end{equation} $s \in S$, $\bar{d} = d_0 -2$, $\mu > 0$, the parameters $\nu_s$ satisfy the relations (\ref{4.27a}) with the matrix $(B^{ss'})$ from (\ref{4.19}), and the parameters $q_s > 0$ coincide inside blocks, i.e. $q_s=q_{s'}$, $s,s'\in S_j$, $j=1,\dots,k$. Here $(M_i,g^i)$ are Ricci-flat Euclidean spaces, $i = 2, \ldots, n$, and $d \Omega^2_{d_0 -1}$ is the canonical metric on the $(d_0 -1)$-dimensional sphere $S^{d_0 -1}$. The fields of forms are given by (\ref{4.24})-(\ref{4.26}) with \ber{4.45} \Phi^s = \frac{\nu_s}{H_s^{'} }, \\ \nqq \label{4.46} H_s^{'}= \Bigl(1 - \frac{p_s}{H_s r^{\bar{d}}} \Bigr)^{-1} = 1 + \frac{ p_s}{ r^{\bar{d}} + q_s - p_s}, \\ \nqq \label{4.47} p_s = \sqrt{q_s (q_s + 2\mu)}, \end{eqnarray} $s \in S$. The solution describes a non-extremal charged intersecting $p$-brane black hole with horizon at $r^{\bar{d}} = 2\mu$. In the limit $\mu \to + 0$ this solution coincides with the 1-center extremal black hole solution from the previous subsection. Here we generalize the solution from \cite{Br}, where all $\nu_s^2$ coincide inside blocks. In our case the only restriction on $\nu_s$ is the relation (\ref{4.27a}). For orthogonal intersections it agrees with the solutions from \cite{CT}-\cite{O} ($d_1 = \ldots = d_n =1$) and \cite{BIM,IMJ}. The Hawking temperature corresponding to this non-extremal solution is \beq{4.48} T_H(0,\mu) = \frac{\bar{d}}{4 \pi (2 \mu)^{1/\bar{d}}} \prod_{s \in S} \left(\frac{2 \mu}{2 \mu + q_s}\right)^{\nu_s^2}. \end{equation} For $\mu \to + 0$ we get (in agreement with (\ref{4.39})) $T_H(0,\mu) \to 0$ for the extremal black hole configurations satisfying \beq{4.49} \xi(0) = \sum_{s\in S}\nu_s^2- \bar{d}^{-1} > 0. \end{equation} \section{Intersection rules and some examples} \subsection{Intersection rules} From the orthogonality relation (\ref{2.4}) and (\ref{4.19}) we get \ber{5.1} d(I_s \cap I_{s'})=\triangle(s,s') \end{eqnarray} where $s\in S_i$, $s'\in S_j$, $i\ne j$ and \ber{5.2} \triangle(s,s')\equiv\frac{d(I_s)d(I_{s'})}{D-2}- \chi_s\chi_{s'}\lambda_{a_s}\cdot\lambda_{a_{s'}}. \end{eqnarray} Here $\lambda\cdot\lambda'\equiv h^{\alpha\beta}\lambda_\alpha\lambda'_\beta$. Let \ber{5.3} N(a,b)\equiv\frac{(n_a-1)(n_b-1)}{D-2}-\lambda_a\cdot\lambda_b, \end{eqnarray} $a,b\in\triangle$. The matrix (\ref{5.3}) is called the fundamental matrix of the model (\ref{4.1}) \cite{IMJ}. For $s_1,s_2\in S$, $s_1\ne s_2$, the $\triangle$-symbol (\ref{5.2}) may be expressed by means of the fundamental matrix \cite{IMJ} \ber{5.4} \triangle(s_1,s_2)=\bar D\bar\chi_{s_1}\bar\chi_{s_2}+ \bar{n}_{a_{s_1}}\chi_{s_1}\bar\chi_{s_2}+ \bar{n}_{a_{s_2}}\chi_{s_2}\bar\chi_{s_1}+ N(a_{s_1},a_{s_2})\chi_{s_1}\chi_{s_2}, \end{eqnarray} where $\bar D=D-2$, $\bar n_a=n_a-1$, $\bar\chi_s=\frac12(1-\chi_s)$. 
More explicitly (\ref{5.4}) reads \ber{5.4a} \Delta(s_1,s_2) =N(a_{s_1},a_{s_2}), \quad v_{s_1}=v_{s_2}=e; \\ \nqq \label{5.4b} \Delta(s_1,s_2) = \bar{n}_{a_{s_1}}-N(a_{s_1},a_{s_2}), \quad v_{s_1}=e, \quad v_{s_2}=m; \\ \nqq \label{5.4c} \Delta(s_1,s_2) = \bar{D}-\bar{n}_{a_{s_1}}-\bar{n}_{a_{s_2}}+N(a_{s_1},a_{s_2}), \quad v_{s_1}=v_{s_2}=m. \end{eqnarray} This follows from the relations \ber{5.5} d(I_s)=\bar D\bar\chi_s+\bar n_{a_s}\chi_s, \end{eqnarray} equivalent to (\ref{4.12}). Let \ber{5.6} K(a)\equiv n_a-1-N(a,a)=\frac{(n_a-1)(D-n_a-1)}{D-2}+ \lambda_a\cdot\lambda_a, \end{eqnarray} $a\in\triangle$. The parameters (\ref{5.6}) play a rather important role in supergravity theories, since they are preserved under Kaluza-Klein reduction \cite{St} and define the norms of $U^s$ vectors: \ber{5.7} (U^s,U^s)=K(a_s), \end{eqnarray} $s\in S$. Here we put $K(a)\ne0$, $a\in\triangle$. Then, we obtain the general intersection rule formulas \ber{5.8} d(I_{s_1}\cap I_{s_2})=\triangle(s_1,s_2)+\frac12K(a_{s_2})A^{s_1s_2} \end{eqnarray} $s_1\ne s_2$, where $(A^{s_1s_2})$ is the quasi-Cartan matrix (\ref{3.2}) (see also (6.32) from \cite{IMJ}). \subsection{$B_D$-models and examples of solutions} Now we consider some examples of solutions from Sect. 4. These examples will be demonstrated for the so-called $B_D$-models. Action of the $B_D$-model reads \cite{IMJ} \ber{5.9} S_D=\int d^Dz\sqrt{|g|}\biggl\{R[g]+g^{MN}\partial_M\vec\varphi\partial_N\vec\varphi- \sum_{a=4}^{D-7}\frac1{a!}\exp[2\vec\lambda_a\vec\varphi](F^a)^2\biggr\}, \end{eqnarray} where $\vec\varphi=(\varphi^1,\dots,\varphi^l)\in\mbox{\bf R}^l$, $\vec\lambda_a= (\lambda_{a1},\dots,\lambda_{al})\in\mbox{\bf R}^l$, $l=D-11$, $\mathop{\rm rank}\nolimits F^a=a$, $a=4,\dots,D-7$. Here vectors $\vec\lambda_a$ satisfy the relations \ber{5.10} \vec\lambda_a\vec\lambda_b=N(a,b)-\frac{(a-1)(b-1)}{D-2}, \\ \nqq \label{5.11} N(a,b)=\min(a,b)-3, \end{eqnarray} $a,b=4,\dots,D-7$. The vectors $\vec\lambda_a$ are linearly dependent \ber{5.12} \vec\lambda_{D-7}=-2\vec\lambda_4. \end{eqnarray} For $D>11$ vectors $\vec\lambda_4,\dots,\vec\lambda_{D-8}$ are linearly independent. The model (\ref{5.9}) contains $l$ scalar fields with a negative kinetic term (i.e. $h_{\alpha\beta}=-\delta_{\alpha\beta}$ in (\ref{4.1})) coupled to $(l+1)$ forms. For $D=11$ ($l=0$) the model (\ref{5.9}) coincides with a truncated (without Chern-Simons term) bosonic sector of $D=11$ supergravity. For $D=12$ $(l=1)$ (\ref{5.9}) coincides with truncated $D=12$ model from \cite{KKP} (see also \cite{IMC}). The matrix (\ref{5.11}) is the fundamental matrix of the $B_D$-model. For $p$-brane worldsheets we have the following dimensions (see \cite{IMJ}) \ber{5.13} d(I)=3,\dots,D-8, \quad I\in\Omega_{a,e}, \\ \nqq \label{5.14} d(I)=D-5,\dots,6, \quad I\in\Omega_{a,m}. \end{eqnarray} Thus, there are $(l+1)$ electric and $(l+1)$ magnetic $p$-branes, $p=d(I)-1$. For $B_D$-model all $K(a)=2$. Now we are interested in the one-block solutions, where $A=(A^{s_1s_2})$ in (\ref{3.2}) is an irreducible Cartan matrix either of a simple finite dimensional Lie algebra or of a simple hyperbolic KM algebra. Since $K(a)=2$, we rewrite (\ref{5.8}) as the following: \ber{5.13a} d(I_{s_1}\cap I_{s_2})=\triangle(s_1,s_2)+A^{s_1s_2}, \end{eqnarray} $s_1\ne s_2$, and get $A^{s_1s_2}=A^{s_2s_1}$, i.e. the Cartan matrix is symmetric. \subsubsection{Finite dimensional Lie algebras} Here we put all $\theta_a = 1$, $a \in \triangle$. 
From $\varepsilon_s=-1$ we get $\varepsilon(I_s)=-1$ for an electric brane and $\varepsilon(I_s)= \varepsilon[g]$ for a magnetic one. In the finite-dimensional case we are led to the so-called simply-laced, or $A$-$D$-$E$, Lie algebras. The intersection rules are totally defined by the corresponding Dynkin diagram: $d(I_{s_1}\cap I_{s_2})= \triangle(s_1,s_2)-1$ when the vertices corresponding to $s_1$ and $s_2$ are connected by a line, and $d(I_{s_1}\cap I_{s_2})=\triangle(s_1,s_2)$ otherwise (since in the $A$-$D$-$E$ case $A^{s_1s_2}=0, -1$ for $s_1 \neq s_2$). {\bf Example for $A_2$.} Let us consider the $B_D$-model, $D\ge11$, $a\in\{4,\dots,D-7\}$, $g^3=-dt\otimes dt$, $d_1=a-2$, $d_2=D-2-a$, $d_0=3$, where $g^0,g^1,g^2$ are Ricci-flat. The $A_2$-solution describing a dyon configuration with an electric $d_1$-brane and a magnetic $d_2$-brane, corresponding to the $F^a$-form and intersecting in a 1-dimensional time manifold, reads: \ber{5.14a} g=H^2g^0-H^{-2}dt\otimes dt+g^1+g^2, \\ \nqq \label{5.15} F^a=\nu_1dH^{-1}\wedge dt\wedge\tau_1+\nu_2(*_0dH)\wedge\tau_1, \end{eqnarray} $\vec\varphi=0$, where $H$ is a harmonic function on $(M_0,g^0)$ and $\nu_1^2 = \nu_2^2 =1$. For $D=11$ we have $a=4$ and $d_1=2$, $d_2=5$. For $D=12$ we have two possibilities: a) $a=4$, $d_1=2$, $d_2=6$; b) $a=5$, $d_1=3$, $d_2=5$. The signature restrictions on $g^1$ and $g^2$ are the following: $\varepsilon(1)=+1$, $\varepsilon(2) = -\varepsilon[g]$. They are satisfied when $g^0$ and $g^1$ are Euclidean metrics. The 4-dimensional section of (\ref{5.14a}) coincides, for flat Euclidean $g^0$, with the MP solution (\ref{4.40}), (\ref{4.41}). Here the ``indicators'' of the solution coincide with the MP ones (\ref{4.41a}). Now we list the $A_2$ intersection rules for $D=11,12$. Here \ber{5.15a} A=B=\left(\begin{array}{cc} 2& -1\\ -1& 2 \end{array}\right). \end{eqnarray} ${\bf D=11}$ : \ber{5.17a} 3\cap3=0, \quad 3\cap6=1, \quad 6\cap6=3. \end{eqnarray} ${\bf D=12}$ : \ber{5.17b} 3\cap3=3\cap4=0, \quad 3\cap6=3\cap7=4\cap4=4\cap6=1, \nn 4\cap7=2, \quad 6\cap6 = 6\cap7= 3, \quad 7\cap7=4. \end{eqnarray} Here and in what follows we denote $n_1\cap n_2=n$ $\Leftrightarrow$ ($d(I_1)=n_1$, $d(I_2)=n_2$, $d(I_1\cap I_2)=n$). {\bf Remark 5.} The appearance of Lie algebras here is not accidental. It was shown in \cite{IMJ} that if $p$-branes have intersection rules (\ref{5.8}) governed by a Cartan matrix of some (semisimple) Lie algebra, then the equations of motion for the problems of cosmology and spherical symmetry may be reduced to integrable Euclidean Toda lattice equations corresponding to this Lie algebra. (For certain examples of solutions see, for example, \cite{LPX,LMPX,LMMP,GMel}.) Thus, for intersections related to Lie algebras we may find general spherically symmetric solutions which contain one-center MP-type solutions (e.g. extremal black hole configurations) as a special case. \subsubsection{Hyperbolic algebras} In the hyperbolic case all $\varepsilon_s=+1$, $s\in S_1 =S$, and, hence, the corresponding solutions with $H_s$ from (\ref{4.28}) are singular at $b \in X$ (in this case $b$ is non-exceptional). {\bf Example with $H_2(q,q)$.} For the Cartan matrix (\ref{3.13}) with $q_1=q_2=q \ge 3$ we obtain \ber{5.16} \nu_s^2=(q-2)^{-1}=1,\frac12,\frac13,\dots \end{eqnarray} for $q=3,4,5,\dots$; $s=1,2$. 
An example of the solution for $d_0=3$ with two electric $p$-branes, $p=d_1,d_2$, corresponding to $F^a$ and $F^b$ fields and intersecting in time manifold, is the following: \ber{5.17} g=H^{-2/(q-2)}g^0-H^{2/(q-2)}dt\otimes dt+g^1+g^2, \\ \nqq \label{5.18} F=\nu_{1}dH^{-1}\wedge dt\wedge\tau_1+ \nu_{2}dH^{-1}\wedge dt\wedge\tau_2, \\ \nqq \label{5.19} \vec\varphi=-(\vec\lambda_a+\vec\lambda_b)(q -2)^{-1}\ln H \end{eqnarray} where $d_1=a - 2$, $a=q+4$, $b\ge a$, $d_2=b-2$, $d_0 = 3$, $D=a+b$. Here $F=F^a+F^b$ for $a<b$ and $F=F^a$ for $a=b$. The signature restrictions are : $\varepsilon(1)= \varepsilon(2) = -1$. Thus, the space-time $(M,g)$ should contain at least three time directions. The minimal $D$ is 14. For $D=14$ we get $a=b=7$, $d_1=d_2=6$, $q=3$. In this case $6\cap6=1$. \subsubsection{Affine (forbidden) case} Affine Cartan matrices do not arise in our solutions. This means that some configurations are forbidden. Let us consider $A_1^{(1)}$ affine KM algebra with the Cartan matrix \ber{5.20} A=\left(\begin{array}{cc} 2& -2\\ -2& 2 \end{array}\right). \end{eqnarray} For $D=11$ the intersections: $3\cap6=0$, $6\cap6=2$, corresponding to the $A$-matrix (\ref{5.20}), are forbidden. For $D=12$ we get forbidden intersections: $3\cap6=3\cap7=4\cap4=4\cap6=0$, $4\cap7=1$, $6\cap6=6\cap7=2$, $7\cap7=3$. {\bf Remark 6.} Recently some new solutions in the affine case were obtained \cite{ETT}. (These solutions contain as a special case a solution in $D =11$ supergravity from \cite{GKT} with $6 \cap 6=2$.) The solutions from \cite{ETT} use some modified ansatz for fields of forms and do not belong to our scheme. This indicates that the sigma-model considered in this paper is not the most general one describing possible $p$-brane configurations. There are at least three possible ways of generalization: (i) ``non-block-diagonal'' metric instead of the ``block-diagonal'' metric (\ref{4.3}) may be considered (the first step in this direction was done in \cite{GR}); (ii) a bimetric sigma-model defined on the product of two base spaces ($M_{01},g^{01}$) and ($M_{02},g^{02}$) instead of ($M_{0},g^{0}$) with two sorts of $p$-branes governed by functions on $M_{01}$ and $M_{02}$ respectively may be constructed ; (iii) a sigma-model describing the action (\ref{4.1}) with Chern-Simons terms added may be also considered. Solution from \cite{ETT} seems to belong to the case (ii). We note also that our solutions in the special case of $D= 10,11$ supergravities are different from non-marginal bound state solutions (see, for example, \cite{O,ILPT,RT} and references therein) although the rules for binary intersections may look similar. These solutions probably need some extensions of the sigma-model along the lines (i) and (iii). \subsubsection{Other possibilities} We note that it is not obvious for quasi-Cartan matrix $A$ to be a Cartan one. Let us consider the example with \ber{5.21} A=\left(\begin{array}{cc} 2& 1\\ 1& 2 \end{array}\right). \end{eqnarray} For $D=11$ we obtain (non-Lie) dyon with \ber{5.22} 3\cap6=3 \end{eqnarray} and $\varepsilon_s=-1$, $\nu_s^2=1/3$, $s =1,2$ (see also \cite{Br}). Two other intersections: $3\cap3=2$, $6\cap6=5$, corresponding to $A$ are forbidden by Restriction 1 (this restriction follows from the block-diagonal form of metric $g$ and may be weakened for non-block-diagonal ansatz from \cite{GR}). 
For $d_0= 5$ the solution with $d_1=d_2=3$ reads \ber{5.23} g=H^{2/3}g^0+H^{-2/3}g^1+g^2, \nn F^4 = \nu_1dH^{-1}\wedge\tau_1+\nu_2*_0dH \end{eqnarray} with $H$ from (\ref{4.28}) and indicators $\xi(b)=1/3$, $\eta(b)=0$, $b\in X$, i.e. any point $a\in X$ corresponds to the horizon without a curvature singularity. Here we assume that $M_1 = \mbox{\bf R} \times M_1^{'}$, where $\mbox{\bf R}$ is the time manifold. \section{Conclusions} Thus, here, like in \cite{IM4}-\cite{IMR}, we considered the $\sigma$-model of a $p$-brane origin. We obtained new block-orthogonal exact solutions (\ref{2.8})--(\ref{2.12}) governed by a set of harmonic functions. These solutions crucially depend on the matrix of scalar products $(U^s,U^{s'})$, $s,s'\in S$, and parameters $\varepsilon_s=\pm1$, $s\in S$ (see (\ref{2.10})). In Sect. 3 we analyzed three possibilities, when quasi-Cartan matrix (\ref{3.2}) coincides with the Cartan matrix of a (simple): a) finite-dimensional Lie algebra; b) hyperbolic KM algebra; c) affine KM algebra. It was shown that the last possibility does not appear in our solutions, and all $\varepsilon_s=-1$ in the case a) and all $\varepsilon_s=+1$ in the case b). In Sect. 4 we applied this method for obtaining new MP type solutions in multidimensional gravity with fields of forms and scalar fields (see subsect. 4.2 and 4.4). In subsect. 4.3 for $d_0>2$ and harmonic functions from (\ref{4.28}) the indicators $\eta(b)$ and $\xi(b)$ were introduced. These indicators describe under certain assumptions the existence of a curvature singularity and a horizon for $x\to b$. In Sect. 5 intersection rules for fixed $A$-matrix are written (see (\ref{5.8})) and some examples of solutions and intersection rules for a chain of $B_D$-models, $D=11,12,\dots$, were suggested. Among the examples there are $A_2$, hyperbolic, affine (forbidden) intersections and ``non-Cartan'' ones. We note that the solutions obtained here may be also generalized to the case when some non-Ricci-flat internal spaces are added to $M$ as it was done in \cite{IMC}. \section{Appendix} Here we present the relation for the parameter $C = C(b)$, $b \in X$, from (\ref{4.29}) \ber{a.1} C = C_0 + C_1 + C_2, \\ \nqq \label{a.2} C_0 = 2 (d_0-1)(d_0 -2) \alpha^2 (\alpha -2)^2, \\ \nqq \label{a.3} C_1 = 4 [(d_0 - 1) \alpha^2 + (\alpha - 1)^2] \sum_{i=1}^{n} d_i \alpha_i^2, \\ \nqq \label{a.4} C_2 = 2 (\sum_{i=1}^{n} d_i \alpha_i^2)^2 - 2 \sum_{i=1}^{n} d_i \alpha_i^4, \end{eqnarray} where \ber{a.5} \alpha = \alpha(b) \equiv (d_0-2) \sum_{s\in S(b)}(-\varepsilon_s)\nu_s^2 \frac{d(I_s)}{D-2} = (d_0 -2) \eta(b) + 1, \\ \nqq \label{a.6} \alpha_i = \alpha_i(b) \equiv (d_0-2) \sum_{s\in S(b)}(-\varepsilon_s)\nu_s^2 \left[\delta_{iI_s} - \frac{d(I_s)}{D-2} \right], \end{eqnarray} $i=1, \dots,n$. It follows from definitions (\ref{a.1})-(\ref{a.4}) that $C \geq 0$ and \ber{a.7} C =0 \Leftrightarrow (\alpha = 0, 2, \quad \alpha_i =0, \ i=1, \dots,n). \end{eqnarray} Parameter $C$ appears in the Kretschmann scalar (\ref{4.29}) for the ``1-pole'' metric \ber{a.8} g_*= r^{-2 \alpha}[dr \otimes dr + r^2 d \Omega^2_{d_0 -1} ] + \sum_{i = 1}^{n} r^{2\alpha_i} g^i, \end{eqnarray} with $R[g^i]= {\cal K}[g^i]=0$, $i=1,\dots,n$. Using formulas from Appendix of \cite{IMA}, we obtain \ber{a.9} {\cal K}[g_*] = C r^{-4 +4 \alpha}. \end{eqnarray} \begin{center} {\bf Acknowledgments} \end{center} This work was supported in part by the DFG grant 436 RUS 113/236/O(R), by the Russian Ministry of Science and Technology and Russian Foundation for Basic Research. 
The authors are grateful to M.A. Grebeniuk, K.A. Bronnikov, S.-W. Kim for useful discussions and V.V. Nikulin for valuable information. One of us (V.D.I) also thanks S.-W. Kim for a kind hospitality during his stay in Ewha Womans University (Seoul, S. Korea). \small
\section{Introduction} Let $X$ and $Y$ be Banach spaces and denote by $\mathcal{L}(X,Y)$ (resp., $\operatorname{NA}(X,Y)$) the space of all bounded linear operators (resp., the set of all \emph{norm attaining operators}) from $X$ into $Y$. Recall that a bounded linear operator $T \in \mathcal{L}(X,Y)$ is said to be a \emph{norm attaining operator} if the operator norm $\|T\|$ is equal to $\max \{ \|T(x) \| : \|x\| \leq 1\}$. When $X=Y$, we simply denote them by $\mathcal{L}(X)$ and $\operatorname{NA}(X)$. One of the famous unsolved problems in Banach space theory is to characterize when the space $\mathcal{L}(X,Y)$ coincides with the set $\operatorname{NA}(X,Y)$, and some partial solutions to this problem have been given by J. Holub \cite{H2} and J. Mujica \cite{Muj}. To the best of our knowledge, the most general result, obtained recently in \cite{DJM}, states in particular the following: if $X$ is a reflexive space and $Y$ is an arbitrary Banach space for which either $X$ or $Y$ has the \emph{compact approximation property}, then $\mathcal{L}(X, Y) = \operatorname{NA}(X,Y)$ is equivalent to the statement that every operator from $X$ into $Y$ is compact. Thus, for a Banach space $X$ with the compact approximation property, the equality $\mathcal{L}(X) = \operatorname{NA}(X)$ implies that $X$ must be finite dimensional. In parallel with the study of norm attaining operators, there have been numerous studies on \emph{numerical radius attaining operators} \cite{Acosta1991, AR, AP1989, AP1989-2, BS, Cardassi, Paya}. Recall that the \emph{numerical radius of an operator} $T \in \mathcal{L}(X)$ is defined by \begin{equation}\label{eq:nu} \nu(T) := \sup \big\{ |x^*(T(x))|: (x, x^*) \in \Pi(X) \big\}, \end{equation} where $\Pi(X) := \{ (x, x^*) \in S_X \times S_{X^*}: x^*(x) = 1 \}$. We say that $T \in \mathcal{L}(X)$ is a \emph{numerical radius attaining operator} if there exists some element $(x,x^*) \in \Pi(X)$ such that $\nu(T) = |x^* (T(x))|$. We denote by $\operatorname{NRA}(X)$ the set of all numerical radius attaining operators on $X$. Among others, it is proved in \cite{AR} that if all the rank-one operators on a Banach space $X$ attain their numerical radii, then $X$ must be reflexive, which can be viewed as a version of the celebrated James' theorem \cite{James} for the numerical radius. One of the main motivations of this paper is to investigate Banach spaces $X$ for which $\mathcal{L}(X) = \operatorname{NRA}(X)$. In Section \ref{sec:1}, we shall observe that for a Banach space $X$, if $\mathcal{L}(X) = \operatorname{NRA}(X)$, then $X$ must be a separable Banach space, which is the numerical radius version of Kalton's result \cite{K}. Moreover, we prove that if $X$ has the compact approximation property and $\mathcal{L}(X) = \operatorname{NRA}(X)$, then $X$ must be a finite dimensional space. Following that, we turn to the case of $N$-homogeneous polynomials. In analogy with \eqref{eq:nu}, the \emph{numerical radius of an $N$-homogeneous polynomial} $P$ from a Banach space $X$ into itself is given by \[ \nu (P) := \sup \big\{ |x^*(P(x))|: (x, x^*) \in \Pi(X) \big\}. \] We say that $P$ is a \emph{numerical radius attaining $N$-homogeneous polynomial} if $\nu(P) = |x^* (P(x))|$ for some $(x,x^*) \in \Pi(X)$. Let us denote by $\mathcal{P}(^N X)$ and $\operatorname{NRA}(^N X)$ the Banach space of $N$-homogeneous polynomials from $X$ into $X$ and the set of all numerical radius attaining $N$-homogeneous polynomials, respectively. 
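As a concrete finite-dimensional illustration of the definition \eqref{eq:nu} (a minimal sketch, not used in the arguments below): for $X = \ell_2^n$ over $\mathbb{C}$, the pairs in $\Pi(X)$ are exactly $(x, \langle \cdot, x\rangle)$ with $\|x\|_2 = 1$, so $\nu(T)$ is the classical numerical radius of the matrix $T$, which can be computed from the rotation formula $\nu(T) = \max_{\theta} \lambda_{\max}\big((e^{i\theta}T + e^{-i\theta}T^*)/2\big)$, a consequence of the convexity of the numerical range.
\begin{verbatim}
import numpy as np

def numerical_radius(T, n_angles=2000):
    """Numerical radius of a complex matrix T acting on l_2^n.

    Uses nu(T) = max_theta lambda_max((e^{i theta} T + e^{-i theta} T^*)/2).
    """
    w = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * T + np.exp(-1j * theta) * T.conj().T) / 2.0
        w = max(w, np.linalg.eigvalsh(H).max())
    return w

# A nilpotent Jordan block: operator norm 1, but nu(T) = 1/2; in finite
# dimensions the supremum in (eq:nu) is attained by compactness of Pi(X),
# even though nu(T) may be strictly smaller than ||T||.
T = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
print(numerical_radius(T))   # ~0.5
\end{verbatim}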
In Section \ref{sec:2}, we give an improvement of the polynomial James' theorem for numerical radius proved in \cite{AGG2003} by using the refinement of James' theorem in \cite{JiM}. Similarly as in the previous section, we prove that for a Banach space $X$ with the compact approximation property, if $\mathcal{P}(^N X) = \operatorname{NRA}(^N X)$, then $X$ must be a finite dimensional space. Finally, Section \ref{sec:3} focuses on the denseness of the set of weakly (uniformly) continuous $2$-homogeneous polynomials whose Aron-Berner extensions attain their numerical radii. This can be viewed as a numerical radius version of the result in \cite{CLS2010} where it is proved that the set of $2$-homogeneous polynomials whose Aron-Berner extensions attain their norms is dense. \section{Numerical radius attaining operators}\label{sec:1} Let $X$ be a Banach space. Then it is clear that $\nu (T) \leq \|T\|$ for every $T \in \mathcal{L}(X)$ and that $\nu$ is a seminorm on $\mathcal{L}(X)$. The greatest constant $k \geq 0$ such that $k \|T\| \leq \nu(T)$ for every $T \in \mathcal{L}(X)$ is the well known constant which is called the \emph{numerical index} of $X$ and denoted by $n(X)$. Equivalently, \begin{equation*} n(X) = \inf \big\{ \nu(T): T \in \mathcal{L}(X), \|T\| = 1 \big\}. \end{equation*} If we consider \begin{equation*} \mathcal{Z}(X) := \big\{ S \in \mathcal{L}(X): \nu (S) = 0 \big\}, \end{equation*} which is a closed subspace of $\mathcal{L}(X)$, and the quotient space $\mathcal{L}(X)/\mathcal{Z}(X)$ endowed with the \emph{norm} $\nu(T+\mathcal{Z}(X)) = \inf \{ \nu (T-S): S \in \mathcal{Z}(X)\}$, then it is straightforward to check the following: \begin{itemize} \itemsep0.3em \item[(a)] $\nu (T+\mathcal{Z}(X)) = \nu(T)$ for every $T \in \mathcal{L}(X)$, and \item[(b)] $\nu (T+\mathcal{Z}(X)) \leq \|T + \mathcal{Z}(X)\| = \inf \{ \|T - S\|: S \in \mathcal{Z}(X)\}$ for every $T \in \mathcal{L}(X)$. \end{itemize} For Banach spaces $X$ and $Y$, recall that $\mathcal{L}(X,Y^*)$ is the dual of the projective tensor product space $X \ensuremath{\widehat{\otimes}_\pi} Y$. In our scenario, we shall observe that for a reflexive Banach space $X$ the normed space $(\mathcal{L}(X)/\mathcal{Z}(X), \nu)$ is the dual of a kind of tensor product space. To this end, consider the following normed space \begin{equation*} X \otimes_{\Pi} X^* := \Big\{ \sum_{i=1}^n \lambda_i x_i \otimes x_i^*: n \in \mathbb{N}, \ \lambda_i \in \mathbb{K}, \ (x_i, x_i^*) \in \Pi(X) \Big\} \end{equation*} endowed with the norm \begin{equation}\label{eq:Pi-norm} \|u\| := \inf \Big\{ \sum_{i=1}^n |\lambda_i|: u = \sum_{i=1}^n \lambda_i x_i \otimes x_i^*, \ (x_i, x_i^*) \in \Pi(X), \ \lambda_i \in \mathbb{K}, \ n \in \mathbb{N} \Big\}. \end{equation} We denote by $X \widehat{\otimes}_{\Pi} X^*$ the completion of $X \otimes_{\Pi} X^*$ with respect to the norm in \eqref{eq:Pi-norm}. Let, as usual, $\mathcal{B} (X \times X^*)$ stand for the set of all bilinear forms on $X \times X^*$ and consider the seminorm $\| \cdot \|_{\Pi}$ on $\mathcal{B} (X \times X^*)$ given by $\| B \|_{\Pi} := \sup \{ |B(x,x^*)|: (x,x^*) \in \Pi (X) \}$. \begin{theorem}\label{thm:isometry} Let $X$ be a reflexive Banach space. Then \[ (\mathcal{L}(X)/\mathcal{Z}(X), \nu) \stackrel{1}{=} (\mathcal{B}(X \times X^*), \|\cdot\|_{\Pi}) / \ker \| \cdot \|_{\Pi} \stackrel{1}{=} (X \widehat{\otimes}_{\Pi} X^*)^* \,\,\, (\text{isometric isomorphically}). 
\] \end{theorem} \begin{proof} We claim that the mapping $\Phi : (\mathcal{L}(X)/\mathcal{Z}(X), \nu) \rightarrow (\mathcal{B}(X \times X^*), \|\cdot\|_{\Pi}) / \ker \| \cdot \|_{\Pi}$ given by \[ \Phi (T + \mathcal{Z} (X)) := \varphi_T + \ker \| \cdot \|_{\Pi}, \] where $\varphi_T (x,x^*) = x^* (T(x))$, is an isometric isomorphism. It is not difficult to check that $\Phi$ is a well-defined linear operator. Note that for any $S \in \mathcal{Z}(X)$, we have $\varphi_S \in \ker \| \cdot \|_{\Pi}$. Conversely, if $\psi \in \ker \| \cdot \|_{\Pi}$, then the operator $L_{\psi} \in \mathcal{L}(X)$ given by $L_{\psi} (x) = \psi (x, \cdot)$ belongs to $\mathcal{Z}(X)$. Moreover, \begin{align*} \| \Phi (T + \mathcal{Z} (X)) \| = \| \varphi_{T} + \ker\| \cdot\|_{\Pi} \| &= \inf \{ \| \varphi_T + \psi \|_\Pi : \psi \in \ker \| \cdot \|_{\Pi} \} \\ &= \inf \{ \| \varphi_T + \varphi_S\|_{\Pi} : S \in \mathcal{Z} (X) \} \\ &= \inf \{ \nu (T+S) : S \in \mathcal{Z} (X)\} = \nu (T + \mathcal{Z} (X)). \end{align*} To see that $\Phi$ is surjective, take any $B \in \mathcal{B}(X\times X^*)$. As $X$ is reflexive, the operator $T_B$ given by $T_B (x) = B(x,\cdot)$ belongs to $\mathcal{L}(X)$. It is straightforward to check that $\Phi (T_B + \mathcal{Z}(X))= B + \ker \| \cdot\|_{\Pi}$. Thus, the claim is proved. Next, define $\Psi : (\mathcal{B}(X \times X^*), \|\cdot\|_{\Pi}) / \ker \| \cdot \|_{\Pi} \rightarrow (X \widehat{\otimes}_{\Pi} X^*)^*$ by \[ \Psi (\varphi + \ker \| \cdot \|_{\Pi} ) (u) := \sum_{i=1}^n \lambda_i \varphi (x_i, x_i^*) \] where $u = \sum_{i=1}^n \lambda_i x_i \otimes x_i^*$. The map $\Psi$ is a well-defined linear operator. Observe that \begin{align*} | \Psi (\varphi + \ker \| \cdot\|_{\Pi}) (u) | &= \Big | \sum_{i=1}^n \lambda_i \varphi (x_i, x_i^*) \Big| =\Big | \sum_{i=1}^n \lambda_i (\varphi + \xi) (x_i, x_i^*) \Big| \leq \sum_{i=1}^n |\lambda_i| \| \varphi + \xi \| \end{align*} for every $\xi \in \ker \| \cdot \|_{\Pi}$ and any representation $u = \sum_{i=1}^n \lambda_i x_i \otimes x_i^*$. It follows that \[ | \Psi (\varphi + \ker \| \cdot\|_{\Pi}) (u) | \leq \| \varphi + \xi\| \|u\| \text{ for every } \xi \in \ker \| \cdot \|_{\Pi}; \] hence $\|\Psi (\varphi + \ker \| \cdot\|_{\Pi})\| \leq \| \varphi + \ker \| \cdot \|_{\Pi}\|$. On the other hand, \begin{align*} \|\Psi (\varphi + \ker \| \cdot\|_{\Pi})\| &\geq \sup \{ | \Psi (\varphi + \ker \| \cdot\|_{\Pi}) (x \otimes x^*)| : (x, x^*) \in \Pi (X) \} \\ &= \sup \{ | \varphi (x, x^*)| : (x, x^*) \in \Pi (X) \} \\ &= \| \varphi \|_{\Pi}. \end{align*} Thus, $\|\Psi (\varphi + \ker \| \cdot\|_{\Pi})\| \geq \| \varphi + \ker \| \cdot \|_{\Pi}\|$. It remains to check that $\Psi$ is surjective. Let $L \in (X \widehat{\otimes}_{\Pi} X^*)^*$ be fixed. Note that $X {\otimes}_{\Pi} X^*$ is an algebraic subspace of the projective tensor product space $X \widehat{\otimes}_{\pi} X^*$. Considering an algebraic complement of $X {\otimes}_{\Pi} X^*$, we can consider the algebraic linear extension $\tilde{L}$ of $L \vert_{X {\otimes}_{\Pi} X^*}$ to $X \widehat{\otimes}_{\pi} X^*$ (which is not necessarily continuous). Now, consider the map $\dbtilde{L} : X \times X^* \rightarrow \mathbb{K}$ given by $\dbtilde{L}(x,x^*) = \tilde{L} (x \otimes x^*)$ which is a bilinear form on $X \times X^*$. Note that \[ |\dbtilde{L} (x,x^*)| = |\tilde{L}(x \otimes x^*)| = |L(x\otimes x^*)| \leq \|L\| \text{ for every } (x,x^*) \in \Pi(X); \] hence $\dbtilde{L}$ is a member of $(\mathcal{B}(X \times X^*), \|\cdot\|_{\Pi})$ with $\| \dbtilde{L} \|_{\Pi} \leq \| L \|$. 
As it is clear that $\Psi (\dbtilde{L} + \ker \| \cdot \|_{\Pi}) = L$, the surjectivity of $\Psi$ is proved. \end{proof} \begin{remark} We remark that the above Theorem \ref{thm:isometry} generalizes the observation in the proof of \cite[Proposition 23]{DKLM-21}, where the authors showed that if $X$ is a reflexive Banach space with $n(X) >0$, then $(\mathcal{L}(X), \nu)$ is isometrically isomorphic to $(\mathcal{B}(X \times X^*), \|\cdot\|_{\Pi})$ and to $(X \widehat{\otimes}_{\Pi} X^*)^*$. \end{remark} \begin{proposition}\label{prop:reflexive} Let $X$ be a Banach space. If $\mathcal{L}(X) = \operatorname{NRA}(X) $, then $(\mathcal{L}(X)/\mathcal{Z}(X), \nu)$ is reflexive. If, in addition, $n(X) >0$, then $\mathcal{L}(X)$ is reflexive. \end{proposition} \begin{proof} Notice that $X$ is reflexive due to \cite[Theorem 1]{AR} since, in particular, every rank-one operator attains its numerical radius. By Theorem \ref{thm:isometry}, we have that $(\mathcal{L}(X)/\mathcal{Z}(X), \nu)$ and $(X \widehat{\otimes}_{\Pi} X^*)^*$ are isometrically isomorphic. Take $L \in (X \widehat{\otimes}_{\Pi} X^*)^*$ and consider $T+\mathcal{Z}(X)$ the corresponding element in $(\mathcal{L}(X)/\mathcal{Z}(X), \nu)$. Say $\nu (T) = |x^* (T(x))|$ for some $(x,x^*) \in \Pi (X)$. Then \[ |L(x\otimes x^*)| = |x^* (T(x))| = \nu (T) = \nu (T + \mathcal{Z}(X)) = \| L \|, \] which implies that $L$ attains its norm. Since $L$ is chosen arbitrarily, this shows that $X \widehat{\otimes}_{\Pi} X^*$ is reflexive by James' theorem \cite{James}; hence so is $(\mathcal{L}(X)/\mathcal{Z}(X), \nu)$. \end{proof} N.J. Kalton proved in \cite[Theorem 2]{K} that if $\mathcal{L} (X)$ is reflexive, then $X$ must be a separable (reflexive) space. Having this fact in mind, one direct consequence of Proposition \ref{prop:reflexive} is that if $\mathcal{L}(X) = \operatorname{NRA}(X)$ and $n(X) >0$, then $X$ must be separable. However, the following result shows that the assumption that $n(X) >0$ is in fact superfluous. The proof is motivated by the one of aforementioned result of N.J. Kalton. \begin{proposition} Let $X$ be a Banach space. If $\mathcal{L}(X) = \operatorname{NRA}(X) $, then $X$ must be a separable reflexive space. \end{proposition} \begin{proof} As before, the reflexivity of $X$ follows from \cite[Theorem 1]{AR}. Thus, it is enough to show that $X$ is separable. Assume that $X$ is non-separable. By a result due to Chadwick \cite{C}, there exists non-trivial projections $Q_n$ on $X$ such that $\sum_{n=1}^{\infty} Q_n = \Id_X$ in the weak operator topology. Note that $\sup_{N \in \mathbb{N}} \left\| \sum_{n=1}^N Q_n \right\| < \infty$ by the Principle of Uniform Boundedness. Since $\mathcal{L}(X) = \operatorname{NRA}(X) $, we have from Theorem \ref{thm:isometry} and Proposition \ref{prop:reflexive} that $(\mathcal{L}(X) /\mathcal{Z}(X), \nu)$ is reflexive and its dual is isometrically isomorphic to $X \widehat{\otimes}_{\Pi} X^*$. Observe that $\langle u, \sum_{n=1}^k Q_n +\mathcal{Z}(X) \rangle \rightarrow \langle u, \sum_{n=1}^\infty Q_n + \mathcal{Z}(X) \rangle$ for each $u \in X \widehat{\otimes}_{\Pi} X^*$ as $k \rightarrow \infty$, that is, $\sum_{n=1}^{\infty} Q_n + \mathcal{Z}(X) = \Id_X + \mathcal{Z}(X)$ in the weak topology. It follows that there exists $S_n = \sum_{i=1}^n \lambda_i^{(n)} Q_i$ such that $S_n + \mathcal{Z}(X) \stackrel{\nu}{\longrightarrow} \Id_X + \mathcal{Z}(X)$ in $(\mathcal{L}(X) /\mathcal{Z}(X), \nu)$. 
Let us find a large $n \in \mathbb{N}$ so that \begin{equation*} \nu (S_n - \Id_X) = \nu ( (S_n + \mathcal{Z}(X)) - (\Id_X + \mathcal{Z}(X)) ) < 1. \end{equation*} Note that $Q_{n+1} \neq 0$ and take $x_0 \in X$ such that $Q_{n+1} (x_0) \in S_X$. Put $x_1 = Q_{n+1} (x_0)$ and let $x_1^* \in S_X^*$ so that $x_1^* (x_1) = 1$. Then we have that \[ |x_1^* ((S_n - \Id_X) (Q_{n+1} (x_0))) | = |x_1^* ((S_n - \Id_X) (x_1))| \leq \nu (S_n - \Id_X) < 1. \] On the other hand, \[ |x_1^* ((S_n - \Id_X) (Q_{n+1} (x_0))) | = |x_1^* ( S_n ( Q_{n+1} (x_0) ) ) - x_1^* (x_1) | = |x_1^* (x_1)| = 1 \] since $Q_{n+1} (X) \subseteq \ker S_n$. This is a contradiction, so $X$ must be separable. \end{proof} As already noted, if $\mathcal{L}(X) =\operatorname{NRA}(X)$, then $X$ must be reflexive. However, the converse is false. Indeed, it is observed also in \cite{AR} that given a reflexive Banach space $X$ with a Schauder basis, there exists an isomorphic copy $\tilde{X}$ of $X$ such that not every operator on $\tilde{X}$ is numerical radius attaining. The following result, in particular, shows that if $X$ is an infinite dimensional Banach space with Schauder basis, then there exists $T \in \mathcal{L}(X)$ which does not attain its numerical radius without considering renormings of $X$. \begin{theorem} \label{theorem:CAP+NRA-fin-dim} Let $X$ be a Banach space with the compact approximation property. If $\mathcal{L}(X) = \operatorname{NRA}(X) $, then $X$ must be finite dimensional. \end{theorem} \begin{proof} Assume that $X$ is infinite dimensional. As $X$ is reflexive, applying the Josefson-Nissenzweig theorem (see \cite[\S~XII]{Diestel_seq_series}), we take a sequence $(x_n)_{n=1}^{\infty} \subseteq S_X$ converging weakly to $0$. For each $n \in \mathbb{N}$, take $x_n^* \in S_{X^*}$ to be such that $x_n^*(x_n) = 1$. Note from Theorem \ref{thm:isometry} and Proposition \ref{prop:reflexive} that $X \widehat{\otimes}_{\Pi} X^*$ is reflexive and its dual is $(\mathcal{L}(X) /\mathcal{Z}(X), \nu)$. Passing to a subsequence if it is necessary, we can assume that $(x_n \otimes x_n^*)$ converges weakly to some $u \in X \widehat{\otimes}_{\Pi} X^*$. Observe that for any compact operator $T$ on $X$, \begin{equation}\label{eq:cpt} \langle u, T + \mathcal{Z}(X) \rangle = \lim_n \big\langle x_n \otimes x_n^*, T + \mathcal{Z}(X) \big\rangle = \lim_n x_n^*(T(x_n)) = 0. \end{equation} Now, let $0 < \varepsilon < 1/2$ and pick $u_0 = \sum_{j=1}^N \lambda_j v_j \otimes v_j^* \in X \widehat{\otimes}_{\Pi} X^*$ with $(\lambda_j) \in \mathbb{K}$ and $(v_j,v_j^*) \in \Pi(X)$ for every $j=1,\ldots, N$, to be such that $\|u - u_0\|_{\Pi} < \varepsilon$. Notice from \eqref{eq:cpt} that \begin{equation} \label{eq3} | \langle u_0, T + \mathcal{Z}(X) \rangle| \leq | \langle u, T + \mathcal{Z}(X) \rangle| + \varepsilon = \varepsilon, \ \forall \ T \in \mathcal{K}(X) \text{ with } \|T\| \leq 1. \end{equation} Since $X$ is a reflexive space with the compact approximation property, $X$ has in fact the metric compact approximation property (see, for instance, \cite[Proposition 1 and Remarks 1]{CJ}). Thus, there exists a net $(T_{\alpha})$ of norm-one compact operators such that $(T_{\alpha}) \rightarrow \Id_X$ in the compact open topology. 
It follows, therefore, that \begin{eqnarray*} | \langle u_0, \Id_X + \mathcal{Z}(X) \rangle | &=& \Big| \sum_{j=1}^N \lambda_j v_j^*(\Id_X(v_j)) \Big| \\ &=& \lim_{\alpha} \Big| \sum_{j=1}^N \lambda_j v_j^*(T_{\alpha} (v_j)) \Big| = \lim_{\alpha} \left| \langle u_0, T_{\alpha} + \mathcal{Z}(X) \rangle \right| \stackrel{(\ref{eq3})}{\leq} \varepsilon. \end{eqnarray*} This implies that \begin{equation*} | \langle u, \Id_X + \mathcal{Z}(X) \rangle | \leq | \langle u_0, \Id_X + \mathcal{Z}(X) \rangle| + \varepsilon \leq 2 \varepsilon < 1. \end{equation*} On the other hand, \begin{equation*} \langle u, \Id_X + \mathcal{Z}(X) \rangle = \lim_n \langle x_n \otimes x_n^*, \Id_X + \mathcal{Z}(X) \rangle = \lim_n x_n^*(x_n) = 1, \end{equation*} which is a contradiction. Therefore, $X$ must be finite dimensional. \end{proof} It is natural to ask whether, if every \emph{compact operator} on a Banach space $X$ (with the compact approximation property) attains its numerical radius, then $X$ must be finite dimensional. However, it is known \cite{Acosta1991, dGS} that every compact operator on $\ell_p$ with $1<p<\infty$ attains its numerical radius. Therefore, there is no compact operator version of Theorem \ref{theorem:CAP+NRA-fin-dim}. \section{Numerical radius attaining homogeneous polynomials}\label{sec:2} Let $X$ be a Banach space and $N \in \mathbb{N} \cup \{0\}$ be given. It is known that if every (finite-type) $N$-homogeneous polynomial on $X$ attains its numerical radius, then $X$ must be a reflexive space. More precisely, what is proved in \cite{AGG2003} is the following. For $x^*, x_1^*, \ldots, x_N^* \in X^*$ and $x_0 \in X$, let the notation $P_{x_1^*, \ldots, x_N^*, x^*; x_0}$ stand for the $(N+1)$-homogeneous polynomial on $X$ defined by \[ P_{x_1^*, \ldots, x_N^*, x^*; x_0} (x) = x_1^* (x) \cdots x_N^* (x) x^* (x) x_0. \] \begin{theorem}[\mbox{\cite[Theorem 4]{AGG2003}}]\label{thm:AGG2003} Let $X$ be a Banach space and $N \in \mathbb{N}\cup \{0\}$ be given. If there exist $x_0 \in X \setminus \{0\}$ and $x_1^*, \ldots, x_{N}^* \in X^* \setminus \{0\}$ such that $P_{x_1^*, \ldots, x_N^*, x^*; x_0}$ attains its numerical radius for every $x^* \in X^*$, then $X$ must be a reflexive space. \end{theorem} Nevertheless, there is a reflexive Banach space on which not every $N$-homogeneous polynomial is numerical radius attaining \cite[Example 1]{AGG2003}. We start this section by generalizing the above Theorem \ref{thm:AGG2003} by using a refined version of James' theorem \cite{JiM}, as is done for the case of linear operators in \cite{AGG2004}. \begin{theorem} Let $X$ be a Banach space and $N \in \mathbb{N}$ be given. Suppose that there exist $x_0 \in X \setminus \{0\}$ and $x_1^*,\ldots, x_{N-1}^* \in X^* \setminus \{0\}$ such that the set \[ \{ x^* \in X^* : P_{x_1^*,\ldots,x_{N-1}^*, x^*; x_0} \text{ attains its numerical radius} \} \] has a non-empty weak-star interior. Then $X$ must be reflexive. \end{theorem} \begin{proof} Consider the set \[ B = \{ x_1^* (z) \cdots x_{N-1}^* (z) z^* (x_0) z : (z, z^*) \in \Pi (X) \} \subseteq X. \] Then $P_{x_1^*,\ldots,x_{N-1}^*, x^*; x_0}$ attains its numerical radius if and only if $|x^*|$ attains its supremum on the set $B$. This implies that the following set \[ \{ x^* \in X^* : |x^*| \text{ attains its supremum on } B \} \] has a non-empty weak-star interior. 
By the same reasoning as in the proof of \cite[Theorem 3.1]{AGG2004}, it is enough to show that $\overline{\mathbb{D}} B:= \{ \lambda b : \lambda \in \mathbb{K}, |\lambda| \leq 1, b \in B \}$ contains an open ball, which, in turn, implies that there is an equivalent norm on $X$ which makes the set of norm attaining functionals contain a weak-star open set (see \cite[Proposition 3.2]{JiM}). Put $Q := x_1^* \cdots x_{N-1}^*$. If $Q(x_0) \neq 0$, then take any $\delta >0$ such that $|Q(x_0)|>\delta$. If not, then observe that, by applying the identity principle \cite[Proposition 5.7]{Mujicabook}, for any $\varepsilon >0$ there exist $u \in X$ and $\delta >0$ such that $\| u - x_0\| < \varepsilon$ and $|Q(u)| > \delta$. Since the first case can be deduced from the second case, we only consider the second case. Let $0< \varepsilon < \|x_0\|$, and find $u \in X$ and $\delta>0$ such that $\| u- x_0\| < \varepsilon$ and $|Q(u)|>\delta$. Take $0<s<1$ so that \begin{equation}\label{eq:s} \frac{s (\|x_0\| + \varepsilon)^N }{(\|x_0\| -\varepsilon) \delta} < \frac{1}{3}. \end{equation} Pick an element $y \in s u + r B_X$, where $0<r<1$ is sufficiently small so that \begin{equation}\label{eq:r1} 2s^{-1} r < \| x_0 \| - \varepsilon, \end{equation} \begin{align}\label{eq:r2} |Q(y)| &= |Q(su + rz)| \quad (\text{for some } z \in B_X ) \\ \nonumber &\geq s^{N-1} |Q(u)| - |f(r)| > s^{N-1} \delta - |f(r)| > 0, \end{align} where $f(r) = Q(su + rz) - s^{N-1} Q(u)$ (so that $|f(r)| \rightarrow 0$ as $r \rightarrow 0$), and \begin{equation}\label{eq:r3} \left| \frac{(s (\|x_0\| + \varepsilon) + r )^N}{(s^{N-1} \delta - |f(r)|)(\|x_0\| - 2 s^{-1}r -\varepsilon)} - \frac{s (\|x_0\| + \varepsilon)^N }{(\|x_0\| -\varepsilon) \delta} \right| < \frac{1}{3}. \end{equation} Take $y^* \in S_{X^*}$ so that $y^* (y) = \|y\|$, so $(\frac{y}{\|y\|}, y^*) \in \Pi (X)$. Note that \begin{align*} |y^* (x_0)| = |y^* (u+ x_0 - u)| &\geq |y^*(u)| - \|x_0 - u \| \\ &> |y^* (s^{-1} y - s^{-1} r z)| - \varepsilon \\ &\geq s^{-1} \|y\| - s^{-1} r - \varepsilon \geq \| x_0\| - 2 s^{-1} r - \varepsilon > 0. \end{align*} Since \[ y^* (x_0) y = \frac{\|y\|^N}{\prod_{i=1}^{N-1} x_i^* (y)} \Big[ \prod_{i=1}^{N-1} x_i^* \Big( \frac{y}{\|y\|}\Big) \Big] y^* ( x_0) \frac{y}{\| y\|} \in \left( \frac{\|y\|^N}{Q(y)} \right) B \] and $y^* (x_0) \neq 0$, we conclude that \[ y \in \left( \frac{\|y\|^N}{Q(y) y^* (x_0)} \right) B. \] Now, observe from \eqref{eq:s}-\eqref{eq:r3} and the fact $\|y\| \leq s(\|x_0\| +\varepsilon) +r$ that \begin{align*} \left| \frac{\|y\|^N}{Q(y) y^* (x_0)} \right| &\leq \frac{(s (\|x_0\| + \varepsilon) + r )^N}{(s^{N-1} \delta - |f(r)|)(\|x_0\| - 2 s^{-1}r -\varepsilon)} < \frac{s (\|x_0\| + \varepsilon)^N }{(\|x_0\| -\varepsilon) \delta} + \frac{1}{3} < \frac{2}{3}. \end{align*} This proves that $su + r B_X$ is contained in $\overline{\mathbb{D}} B$ and completes the proof. \end{proof} Next, we will obtain the polynomial version of Theorem \ref{theorem:CAP+NRA-fin-dim}. In order to do so, we follow an approach similar to the one in the preceding section. Given a Banach space $X$ and $N \in \mathbb{N}$, consider the closed subspace $\mathcal{Z} (^N X):= \{ P \in \mathcal{P}(^N X) : \nu(P) = 0\}$. Then the quotient space $(\mathcal{P}(^N X) / \mathcal{Z} (^N X), \nu)$ turns out to be a normed space endowed with $\nu (P + \mathcal{Z}(^N X)) = \inf \{ \nu (P-Q): Q \in \mathcal{Z}(^N X)\}$. 
Also, consider the following space of tensors: \[ (\otimes^N X ) \otimes_{\Pi} X^* := \Big\{ \sum_{i=1}^n \lambda_i (\otimes^N x_i ) \otimes x_i^* : (x_i,x_i^*) \in \Pi (X), \lambda_i \in \mathbb{K}, n \in \mathbb{N} \Big\}, \] endowed with the norm \[ \|u \| = \inf \Big \{ \sum_{i=1}^n |\lambda_i| : u = \sum_{i=1}^n \lambda_i (\otimes^N x_i ) \otimes x_i^*, \ (x_i,x_i^*) \in \Pi (X), \lambda_i \in \mathbb{K}, n \in \mathbb{N} \Big\} \] and denote by $(\otimes^N X ) \widehat{\otimes}_{\Pi} X^*$ its completion. The following duality result is a version of Theorem \ref{thm:isometry} for homogeneous polynomials. \begin{proposition}\label{prop:polyisometry} Let $X$ be a reflexive Banach space and $N \in \mathbb{N}$. Then $(\mathcal{P}(^N X) / \mathcal{Z} (^N X), \nu)$ is isometrically isomorphic to $((\otimes^N X ) \widehat{\otimes}_{\Pi} X^*)^*$. \end{proposition} \begin{proof} Even though the proof is very similar to that of Theorem \ref{thm:isometry}, we sketch it for the sake of completeness. Let $\Phi : (\mathcal{P}(^N X) / \mathcal{Z} (^N X), \nu) \rightarrow ((\otimes^N X ) \widehat{\otimes}_{\Pi} X^*)^*$ be the map defined as \[ \Phi (P + \mathcal{Z}(^N X)) (u) = \sum_{i=1}^n \lambda_i x_i^* (P(x_i)) \] for $u = \sum_{i=1}^n \lambda_i (\otimes^N x_i) \otimes x_i^*$. Then $\Phi$ is a well-defined linear operator. To see that $\Phi$ is a surjective isometry, observe that \begin{align*} | \Phi (P + \mathcal{Z}(^N X)) (u)| = \Big | \sum_{i=1}^n \lambda_i x_i^* (P(x_i)) \Big| \leq \nu(P) \sum_{i=1}^n |\lambda_i| \end{align*} for any representation $u = \sum_{i=1}^n \lambda_i (\otimes^N x_i)\otimes x_i^*$. Since $\nu(P) = \nu(P+\mathcal{Z}(^N X))$, we conclude that $\| \Phi (P+\mathcal{Z}(^N X))\| \leq \nu (P + \mathcal{Z}(^N X))$. Conversely, note that \[ |x^* (P(x))| = | \Phi (P + \mathcal{Z}(^N X) ) ( (\otimes^N x) \otimes x^* ) | \leq \| \Phi (P + \mathcal{Z}(^N X) ) \| \] for every $(x,x^*) \in \Pi (X)$; hence $\nu (P + \mathcal{Z}(^N X )) = \nu (P) \leq \| \Phi (P + \mathcal{Z}(^N X) ) \|$. This shows that $\Phi$ is an isometry. Finally, we claim that $\Phi$ is surjective. Let $L \in ((\otimes^N X ) \widehat{\otimes}_{\Pi} X^*)^*$ be given. Let $\tilde{L}$ be an algebraic linear extension of $L \vert_{(\otimes^N X ) {\otimes}_{\Pi} X^*}$ to $(\widehat{\otimes}_{N,s,\pi} X) \widehat{\otimes}_{\pi} X^*$ and define the map $\dbtilde{L}$ from $X$ into $X^{**} = X$ by \[ (\dbtilde{L}(x) )(x^*) = \tilde{L} ((\otimes^N x) \otimes x^*) \text{ for every } x \in X \text{ and } x^* \in X^*. \] Then $\dbtilde{L}$ is an $N$-homogeneous polynomial on $X$ satisfying that $\nu (\dbtilde{L}) \leq \| L \|$ and $\Phi (\dbtilde{L} + \mathcal{Z}(^N X)) = L$. This completes the proof. \end{proof} Arguing in the same way as in Proposition \ref{prop:reflexive}, we obtain the following. \begin{proposition}\label{prop:reflexive-poly} Let $X$ be a Banach space and $N \in \mathbb{N}$. If $\mathcal{P}(^N X) = \operatorname{NRA}(^N X)$, then $(\mathcal{P}(^N X)/\mathcal{Z}(^N X), \nu)$ is reflexive. \end{proposition} In \cite{CGKM2006}, the \emph{polynomial numerical index of order $N$}, denoted by $n^{(N)} (X)$, of a Banach space $X$ is introduced and investigated. Namely, \[ n^{(N)} (X) = \inf \{ \nu (P) : P \in \mathcal{P} (^N X), \|P\| = 1 \}. \] As a consequence, Proposition \ref{prop:reflexive-poly} implies that if $n^{(N)} (X) > 0$ and $\mathcal{P}(^N X) = \operatorname{NRA}(^N X)$, then $\mathcal{P}(^N X)$ is reflexive since $\mathcal{Z} (^N X) = \{ 0\}$. It might be worth mentioning that $n^{(N)}(X) > 0$ for every $N \in \mathbb{N}$ whenever $n(X) >0$ (see \cite[Proposition 2.5]{CGKM2006}). Now, we are ready to prove the following. 
The idea of the proof is similar to that of Theorem \ref{theorem:CAP+NRA-fin-dim}, but using slightly more carefully the Josefson-Nissenzweig theorem. Recall that a polynomial $P \in \mathcal{P}(^N X)$ is said to be \emph{weakly sequentially continuous} if the sequence $(P(x_n))$ is norm-convergent whenever a sequence $(x_n)$ is weakly convergent. Also, a polynomial $P \in \mathcal{P}(^N X)$ is said to be \emph{weakly (uniformly) continuous} if it is weakly uniformly continuous on bounded subsets of $X$. \begin{theorem}\label{theorem:CAP+NRA-fin-dim-poly} Let $X$ be a Banach space with the compact approximation property and $N \in \mathbb{N}$. If $\mathcal{P}(^N X) = \operatorname{NRA}(^N X)$, then $X$ must be finite dimensional. \end{theorem} \begin{proof} Assume that $X$ is an infinite dimensional Banach space. Note from Theorem \ref{thm:AGG2003} that $X$ is reflexive. Using the Josefson-Nissenzweig theorem, take a sequence $(x_n) \subseteq S_X$ converging weakly to some $x_\infty \in X$ with $\|x_\infty\| = 1/2$. For each $n \in \mathbb{N}$, pick $x_n^* \in S_{X^*}$ to be such that $x_n^*(x_n) = 1$, and choose $x_\infty^* \in S_{X^*}$ such that $x_\infty^* (x_\infty) = 1/2$. By Proposition \ref{prop:reflexive-poly}, $(\otimes^N X) \widehat{\otimes}_{\Pi} X^*$ is reflexive and its dual is $(\mathcal{P}(^N X)/\mathcal{Z}(^N X), \nu)$. Thus, we may assume that $(\otimes^N x_n) \otimes x_n^*$ converges weakly to some $u \in (\otimes^N X) \widehat{\otimes}_{\Pi} X^*$. Then for every weakly sequentially continuous polynomial $P$ in $\mathcal{P}(^N X)$ with $\|P\|=1$, we have that \begin{equation}\label{eq:u,P} |\langle u, P + \mathcal{Z}(^N X ) \rangle| = \lim_n | x_n^* (P(x_n)) | = \lim_n |x_n^* ( P(x_\infty)) | \leq \frac{1}{2^N}. \end{equation} Now, let $Q \in \mathcal{P}(^N X)$ with $\|Q\|=1$ be given. Using the metric compact approximation property of $X$, we can take a net $(P_\alpha)$ of weakly (uniformly) continuous (hence, weakly sequentially continuous) $N$-homogeneous polynomials on $X$ converging to $Q$ in the compact-open topology and satisfying $\|P_\alpha\| \leq 1$ (see \cite[Corollary 7]{Caliskan2004} and \cite[Proposition 2.1]{MV}). Since $u$ can be approximated by elements of the form $\sum_{i=1}^n \lambda_i (\otimes^N x_i) \otimes x_i^*$ with $(x_i, x_i^*) \in \Pi(X)$, we conclude that \begin{equation*} |\langle u, Q + \mathcal{Z}(^N X) \rangle| \leq \limsup_\alpha |\langle u, P_\alpha + \mathcal{Z}(^N X) \rangle| \stackrel{\eqref{eq:u,P}}{\leq} \frac{1}{2^N}. \end{equation*} As the above inequality holds for arbitrary $Q \in \mathcal{P}(^N X)$ with $\|Q \| = 1$, we have that \begin{equation}\label{eq:uQ} \sup \{ | \langle u, Q + \mathcal{Z}(^N X) \rangle | : Q \in \mathcal{P}(^N X), \|Q\| =1 \} \leq \frac{1}{2^N}. \end{equation} On the other hand, consider $P_\infty \in \mathcal{P}(^N X)$ given by $P_\infty (x) = x_\infty^* (x)^{N-1} x$ for every $x \in X$. It is clear that $\|P_\infty\| =1$. However, \[ \langle u, P_\infty + \mathcal{Z}(^N X) \rangle = \lim_n x_n^* ( P_\infty (x_n)) = \lim_{n} x_\infty^* (x_n)^{N-1} x_n^* (x_n) = x_\infty^* (x_\infty)^{N-1} = \frac{1}{2^{N-1}}. \] This contradicts \eqref{eq:uQ}, so we conclude that $X$ must be a finite dimensional space. \end{proof} In the previous section, we mentioned that every compact operator on $\ell_p$, $1<p<\infty$, attains its numerical radius. Not surprisingly, this result can be extended to the case of weakly sequentially continuous homogeneous polynomials. 
It is immediate that a weakly sequentially continuous bounded linear operator on $X$ is nothing but a completely continuous operator on $X$. The proof is similar to the argument in \cite{Acosta1991}, but we present the details for the sake of completeness. \begin{proposition}\label{prop:lp-poly} Given $1<p<\infty$ and $N \in \mathbb{N}$, every weakly sequentially continuous $N$-homogeneous polynomial on $\ell_p$ is numerical radius attaining. \end{proposition} \begin{proof} Let $P$ be a weakly sequentially continuous $N$-homogeneous polynomial on $\ell_p$ and take $(x_n, x_n^*) \in \Pi (\ell_p)$ so that $|x_n^* (P(x_n))| \rightarrow \nu(P) \neq 0$. Note that $x_n^* (i) = |x_n (i)|^{p/q} \alpha_n (i)$, where $\alpha_n (i) \in \mathbb{T}$ satisfies that $|x_n (i)| = \alpha_n (i) x_n (i)$ and $q$ is the conjugate exponent of $p$. Passing to a subsequence, we may assume that $(x_n)$ converges weakly to $x_\infty \in B_{\ell_p}$. Consider $x_\infty^* \in \ell_p^* = \ell_q$ given by $x_\infty^* (i) = |x_\infty (i)|^{p/q} \alpha (i)$, where $\alpha(i) \in \mathbb{T}$ satisfies that $|x_\infty (i)| = \alpha(i) x_\infty (i)$. Then one can deduce that $(x_n^*)$ converges weakly to $x_\infty^*$ and $x_\infty^* (x_\infty) = \| x_\infty^*\| \|x_\infty\|$. Moreover, since $P$ is weakly sequentially continuous, $|x_\infty^* (P(x_\infty)) | = \nu(P) \neq 0$. In particular, $x_\infty \neq 0$ and $x_\infty^* \neq 0$. Thus, \[ \frac{\nu(P)}{\|x_\infty^* \| \|x_\infty\|^N } = \frac{ |x_\infty^*(P(x_\infty) ) |}{\|x_\infty^*\| \|x_\infty\|^N } = \left| \frac{x_\infty^*}{\|x_\infty^*\|} \Big ( P \Big ( \frac{x_\infty}{\|x_\infty\|} \Big) \Big )\right| \leq \nu(P); \] hence $\|x_\infty\| = \|x_\infty^* \| = 1$ and $P$ attains its numerical radius at $(x_\infty, x_\infty^*) \in \Pi (\ell_p)$. \end{proof} One may ask whether a similar result can be obtained for Lipschitz functions from a Banach space $X$ into itself, as there is a notion of numerical radius for Lipschitz functions, the so-called \emph{Lipschitz numerical radius} (we refer the interested readers to \cite{CJT, KMMW, WHT}). However, this fails in a strong sense. Indeed, it is proved in \cite{CJT} that for any Banach space $X$ the set of Lipschitz numerical radius attaining Lipschitz functions on $X$ is not dense in the whole space of Lipschitz functions on $X$. \section{$2$-homogeneous polynomials whose Aron-Berner extensions attain their numerical radii}\label{sec:3} It is shown in \cite{AP1989} that for any Banach space $X$ the set of bounded linear operators on $X$ whose second adjoints attain their numerical radii is dense in $\mathcal{L}(X)$. Moreover, it is mentioned in \cite{AP1989-2} that, actually, it is true that the set of bounded linear operators on $X$ whose first adjoints attain their numerical radii is dense. As a matter of fact, those results are parallel versions of the results of J. Lindenstrauss \cite{Lin} and V. Zizler \cite{Zizler} on norm attaining operators. In the context of homogeneous polynomials, it was first observed in \cite{AGM} that the set of scalar-valued $2$-homogeneous polynomials whose Aron-Berner extensions attain their norms is dense in the whole space. Afterwards, this result was extended to vector-valued $2$-homogeneous polynomials in \cite{CLS2010}. Recall that the canonical extension of a bilinear mapping is obtained by weak-star density as follows (which also works for a general multilinear mapping) \cite{Arens}. 
For Banach spaces $X, Y$ and $Z$, and $B \in \mathcal{B}(X\times Y; Z)$, the space of all bounded bilinear mappings from $X \times Y$ into $Z$, the extension $\overline{B} \in \mathcal{B}(X^{**} \times Y^{**}; Z^{**})$ of $B$ is defined by \[ \overline{B} (x^{**}, y^{**}) \xlongequal{w^*} \lim_\alpha \lim_\beta B(x_\alpha, y_\beta) \] where $(x_\alpha) \subseteq X$ and $(y_\beta) \subseteq Y$ are nets converging weak-star to $x^{**} \in X^{**}$ and $y^{**} \in Y^{**}$, respectively. The \emph{Aron-Berner extension} of $P \in \mathcal{P}(^2 X)$ is the polynomial $AB(P) \in \mathcal{P}(^2 X^{**})$ given by $AB(P) (x^{**}) := \overline{B}(x^{**}, x^{**})$ where $B$ is the unique symmetric bilinear mapping from $X \times X$ into $X$ associated to $P$ \cite{AronBerner}. It is known that $\|P \| = \|AB(P)\|$ for every $P \in \mathcal{P}(^2 X)$ \cite{DG}. Let us denote by $\mathcal{P}_{wu}(^N X)$ the space of weakly (uniformly) continuous $N$-homogeneous polynomials from $X$ into $X$, that is, those that are weakly uniformly continuous on bounded subsets of $X$. Notice that the space $\mathcal{P}_{wu} (^1 X)$ coincides with the space of all compact operators on $X$. In this section, we prove that for any Banach space $X$ the set of $P \in \mathcal{P}_{wu}(^2 X)$ whose Aron-Berner extensions are numerical radius attaining is dense in $\mathcal{P}_{wu}(^2 X)$. In order to prove this, we need the following lemma, which is a slight modification of \cite[Lemma 3]{AP1989}. \begin{lemma}\label{lem:AcoPaya} Let $X$ be a Banach space and $P \in \mathcal{P}(^2 X)$. Suppose that $|u^* (P(u))| > \nu(P) - \alpha$ for some $(u,u^*) \in \Pi (X)$ and $\alpha >0$, and let $\delta > 0$. Define \[ P' (x) = P(x) + \lambda \delta^2 u^* (x)^2 u + \lambda \delta^2 (u^* (B(u,x)) )^2 u, \] where $\lambda \in \mathbb{T}$ satisfies that $u^* (P(u)) = \lambda |u^* (P(u))|$ and $B$ is the symmetric bilinear mapping corresponding to $P$. If $|y^* ( P' (y))| > \nu (P') - \rho$ for some $(y,y^*) \in \Pi(X)$ and $\rho >0$, then we have \[ \delta^2 |u^* (y)|^2 + \delta^2 |u^* (B(u, y))|^2 + \rho \geq -\alpha + \delta^2 + \delta^2 (\nu(P)-\alpha)^2. \] \end{lemma} \begin{proof} Note first that \begin{align*} \nu(P') \geq | u^* (P' (u))| &= | u^* (P(u)) + \lambda \delta^2 + \lambda \delta^2 u^* (P(u))^2 | \\ &= |u^* (P(u))| + \delta^2 + \delta^2 |u^* (P(u))|^2 \stackrel{\text{(I)}}{\geq} \nu(P) -\alpha + \delta^2 + \delta^2 (\nu(P) - \alpha)^2. \end{align*} Second, observe that \begin{align*} \nu(P') &< |y^* (P'(y))| + \rho \\ &\stackrel{\text{(II)}}{\leq} \nu (P) + \delta^2 |u^* (y)|^2 + \delta^2 |u^* (B(u,y))|^2 + \rho. \end{align*} Combining (I) with (II), we complete the proof. \end{proof} The proof of the following result is essentially based on an adaptation of Lindenstrauss' argument in \cite[Theorem 1]{Lin}. \begin{theorem}\label{thm:bilinear} Let $X$ be a Banach space. Then the set \[ \{ P \in \mathcal{P}_{wu}(^2 X) : AB(P) \in \operatorname{NRA} (^2 X^{**}) \} \] is dense in $\mathcal{P}_{wu}(^2 X)$. \end{theorem} \begin{proof} Let $P \in \mathcal{P}_{wu}(^2 X)$ with $\| P \|=1$ be given and let $B$ be the corresponding symmetric bilinear mapping. Given $0< \varepsilon<1$, choose decreasing sequences $(\alpha_n)$ and $(\delta_n)$ of positive numbers satisfying that \begin{equation}\label{eq:delta1} \sum_{j=1}^\infty (1+4^2) \delta_j^2 < \varepsilon, \quad \frac{1}{\delta_n^2} \sum_{j=n+1}^\infty (1+4^2) \delta_j^2 \rightarrow 0 \quad \text{and} \quad \frac{\alpha_n}{\delta_n^2} \rightarrow 0. 
\end{equation} Put $P_1 := P$ and inductively construct sequences $(P_n)$ in $\mathcal{P}(^2 X)$ and $(x_n, x_n^*) \in \Pi (X)$ satisfying \begin{align*} &|x_n^* (P_n (x_n))| > \nu (P_n) - \alpha_n, \\ &P_{n+1} (x) := P_n (x) + \lambda_n \delta_n^2 x_n^* (x)^2 x_n + \lambda_n \delta_n^2 (x_n^* (B_n (x, x_n)))^2 x_n, \end{align*} where $\lambda_n \in \mathbb{T}$ satisfies that $x_n^* (P_n (x_n)) = \lambda_n |x_n^* (P_n (x_n))|$ and $B_n$ is the symmetric bilinear mapping corresponding to $P_n$. Note that \[ \|P_2 \| \leq \| P_1 \| + \delta_1^2 + \delta_1^2 \|B_1\|^2 \leq 1 + (1+2^2 ) \delta_1^2. \] Also, \begin{align*} \|P_3\| \leq \|P_2\| + \delta_2^2 + \delta_2^2 \|B_2\|^2 &\leq 1 + (1+2^2 ) \delta_1^2 + \delta_2^2 + \delta_2^2 (2^2 (1 + (1+2^2 ) \delta_1^2 )^2 ) \\ &\leq 1 + (1+2^2 ) \delta_1^2 + (1 + 4^2) \delta_2^2, \end{align*} since $(1 + (1+2^2 ) \delta_1^2 ) < 2$. In this way, one can verify that \[ \|P_{n+1}\| \leq 1 + (1+2^2 ) \delta_1^2 + (1 + 4^2) \delta_2^2 + \cdots + (1 +4^2) \delta_{n}^2 \leq 1 + \sum_{j=1}^n (1+4^2) \delta_j^2 \leq 2 \] for each $n \in \mathbb{N}$. This also shows that \[ \| P_{n+k} - P_n \| \leq \sum_{j=n}^{n+k-1} (1+\|B_j\|^2) \delta_j^2 \leq \sum_{j=n}^{n+k-1} (1+4^2) \delta_j^2 \] since $\|B_j\| \leq 2 \|P_j\| \leq 2^2$. Thus, $(P_n)$ converges in norm to some $P_\infty \in \mathcal{P}(^2 X)$, and $\|P_\infty - P \| \leq \varepsilon$. Notice that each $P_{n}$ is weakly uniformly continuous on bounded sets; hence so is $P_\infty$. We claim that $AB(P_\infty) \in \operatorname{NRA} (^2 X^{**})$. Note that \begin{align*} |x_{n+k}^* (P_{n+1} (x_{n+k}))| &\geq \nu (P_{n+k}) - \| P_{n+k}-P_{n+1}\| \\ &\geq \nu (P_{n+1}) - 2 \|P_{n+k}-P_{n+1}\| \geq \nu (P_{n+1}) - 2 \sum_{j=n}^{n+k-1} (1+4^2) \delta_j^2. \end{align*} Applying Lemma \ref{lem:AcoPaya} with $P'=P_{n+1}$, we obtain that \[ \delta_n^2 |x_n^* (x_{n+k})|^2 + \delta_n^2 |x_n^* (B_n(x_n, x_{n+k}))|^2 + 2 \sum_{j=n}^{n+k-1} (1+4^2) \delta_j^2 \geq -\alpha_n + \delta_n^2 + \delta_n^2 (\nu(P_n)-\alpha_n)^2. \] Let $z$ and $\phi$ be cluster points of the sequences $(x_n)$ and $(x_n^*)$ in the weak-star topologies of $X^{**}$ and $X^{***}$, respectively. Letting $k \rightarrow \infty$, we have \begin{equation}\label{eq:delta_n} \delta_n^2 |z(x_n^*)|^2 + \delta_n^2 |[ \overline{B_n} (x_n, z)] (x_n^*) |^2 + 2 \sum_{j=n}^{\infty} (1+4^2) \delta_j^2 \geq -\alpha_n + \delta_n^2 + \delta_n^2 (\nu(P_n)-\alpha_n)^2, \end{equation} where $\overline{B_n}$ is the canonical extension of $B_n$. Note from the polarization inequality \cite[Theorem 2.2]{Mujicabook} that $\overline{B_n}$ converges in norm to $\overline{B_\infty}$, where $B_\infty$ is the symmetric bilinear mapping corresponding to $P_\infty$ and $\overline{B_{\infty}}$ is its canonical extension. Notice that $AB(P_\infty)$ is uniformly continuous on $(B_{X^{**}}, w^*)$. Applying the classical polarization formula \cite[Theorem 1.10]{Mujicabook}, we observe that $\overline{B_\infty} (x_n, z)$ converges in norm to $\overline{B_\infty} (z,z) = AB(P_\infty) (z)$. Dividing \eqref{eq:delta_n} by $\delta_n^2$, letting $n \rightarrow \infty$ and combining it with \eqref{eq:delta1}, we obtain that \begin{equation}\label{eq:phi_z_AB_P} |\phi(z)|^2 + |\phi (AB(P_\infty) (z))|^2 \geq 1 + \nu(P_\infty)^2. \end{equation} This shows that $|\phi(z)| = 1$ and $|\phi (AB(P_\infty) (z))| = \nu(P_\infty)$. Since $\nu(AB(P_\infty)) = \nu(P_\infty)$ \cite[Corollary 2.14]{CGKM2006}, we conclude that $AB(P_\infty)$ attains its numerical radius. 
\end{proof} \begin{remark} If we denote by $\mathcal{P}_{wsc}(^N X)$ the space of weakly sequentially continuous $N$-homogeneous polynomials (for its definition, see the paragraph before Theorem \ref{theorem:CAP+NRA-fin-dim-poly}), then we have the following: \[ \mathcal{P}_{wu} (^N X) \subseteq \mathcal{P}_{wsc}(^N X) \subseteq \mathcal{P} (^N X). \] We do not know if we can replace $\mathcal{P}_{wu} (^N X)$ in Theorem \ref{thm:bilinear} by $\mathcal{P}_{wsc}(^N X)$, since for a given $P \in \mathcal{P}_{wsc}(^N X)$, its extension $AB(P) \in \mathcal{P}(^N X^{**})$ is not necessarily weak-star to norm sequentially continuous (hence the inequality in \eqref{eq:phi_z_AB_P} is not clear). As a matter of fact, it can be deduced from \cite[Example 1.9]{Zal} that there exist a Banach space $X$ and a weakly sequentially continuous $2$-homogeneous polynomial $P$ on $X$ such that $AB(P)$ is not weak-star to norm sequentially continuous. \end{remark} It is observed in \cite{CLM2012} that if the dual space of a Banach space $X$ is separable and has the approximation property, then for any natural number $N \in \mathbb{N}$, the set of $N$-homogeneous polynomials from $X$ to a dual Banach space $Y^*$ whose Aron-Berner extensions attain their norms is dense. Indeed, their idea was to use the integral representation for elements in tensor product spaces by identifying polynomials between $X$ and $Y^*$ with elements in $C(B_{X^{**}} \times B_{Y^{**}})$ and using the Riesz representation theorem \cite[Theorem 2.2]{CLM2012} together with the compactness of $B_{X^{**}} \times B_{Y^{**}}$. Here, $B_{X^{**}}$ and $B_{Y^{**}}$ are endowed with their weak-star topologies. In order to adapt this to our situation, it would be natural to consider first the set $\Pi (X^{*})$ instead of $B_{X^{**}} \times B_{Y^{**}}$. However, the set $\Pi (X^*)$ cannot be a compact subset of $B_{X^*} \times B_{X^{**}}$, when both are endowed with their weak-star topologies, unless $X$ is finite dimensional, as the following result shows. \begin{proposition} Let $X$ be a Banach space. \begin{enumerate} \setlength\itemsep{0.3em} \item The set $\Pi (X^*)$ is a compact subset of $B_{X^*} \times B_{X^{**}}$ if and only if $X$ is finite dimensional, where both $B_{X^*}$ and $B_{X^{**}}$ are endowed with their weak-star topologies. \item The set $\Pi (X)$ is a compact subset of $B_X \times B_{X^*}$ if and only if $X$ is finite dimensional, where $B_X$ and $B_{X^*}$ are endowed with the weak topology and weak-star topology, respectively. \end{enumerate} \end{proposition} \begin{proof} (1): As the ``if'' part is clear, we prove the ``only if'' part. Assume to the contrary that $X$ is infinite dimensional. By applying the Josefson-Nissenzweig theorem, take $(x_n^*) \subseteq S_{X^*}$ so that $(x_n^*)$ converges weak-star to some $x_\infty^* \in X^*$ with $\|x_\infty^*\| = 1/2$. Take $x_n^{**} \in S_{X^{**}}$ so that $x_n^{**} (x_n^*) = 1$ for each $n \in \mathbb{N}$. If $\Pi (X^*)$ were compact, then we would have a subnet $(x_\alpha^*, x_\alpha^{**})$ which converges to some $(z^*, z^{**})$ in $\Pi (X^*)$. This implies, in particular, that $z^* = x_\infty^*$, which is a contradiction since $\|z^*\| = 1$ while $\|x_\infty^*\| = 1/2$. (2): We prove the ``only if'' part. Assume that $X$ is infinite dimensional and take a sequence $(x_n^*) \subseteq S_{X^*}$ so that $(x_n^*)$ converges weak-star to some $x_\infty^* \in X^*$ with $\|x_\infty^*\| = 1/2$ as above. Take $x_n \in S_X$ such that $|1- x_n^* (x_n)| < (1/(\sqrt{2}\,n))^2$ for each $n \in \mathbb{N}$. 
By the Bishop-Phelps-Bollob\'as theorem \cite{Bol}, there exist pairs $(y_n, y_n^*) \in \Pi (X)$ such that $\|y_n -x_n\| < 1/n + (1/n)^2$ and $\| y_n^* - x_n^* \| < 1/n$. If $\Pi (X)$ were compact, then there would be a subnet $(y_\alpha, y_\alpha^{*})$ which converges to some $(z, z^{*})$ in $\Pi (X)$. This implies that $z^* = x_\infty^*$, which contradicts $\|z^*\| = 1$. \end{proof} \subsection*{Acknowledgment} The author is grateful to Sheldon Dantas, Miguel Mart\'in and \'Oscar Rold\'an.
\section{Introduction} \subsection{Regularity constants} We are interested in obtaining estimates for the Neumann problem, namely \[ \begin{cases} \Delta u =0 \text{ in } E,\\[0.5em] \displaystyle \frac{\partial u}{\partial \nu}=g \text{ on } \partial E \end{cases} \] and $\displaystyle \int_{E}u(y)dy=0$, for domains $E$ of the form:\\ \begin{align}\label{eq:genericE} E=B(z_0,r_0)\setminus\bigcup_{k=1}^{n}\overline{B(z_k,r_k)}\subset \R^2. \end{align} Here, $\nu(x)$ is the unit outward normal, and $g\in C^{1,\alpha}\left( \bigcup_{k=0}^{n}\partial B(z_k, r_k)\right)$ for some $\alpha\in(0,1)$. The datum must be compatible with the equation: \begin{equation} \int_{\partial B(z_0,r_0)}g=\sum_{k=1}^{n}\int_{\partial B(z_k, r_k)}g. \label{nec} \end{equation} We find that the estimates do not blow up provided that the radii of the holes, their distance to the outer boundary and the distance between them do not become too small compared to the domain size. To obtain quantitative estimates, we assume throughout that \begin{align} \label{eq:d} \begin{gathered} \forall i\geq 1\medspace r_i\geq d,\\ \forall i\geq 1 \medspace B(z_i,r_i+d)\subset B(z_0,r_0), \ \text{and} \\ \min_{\substack{i,j\geq 1\\ i\neq j}} \dist(\overline {B(z_i,r_i)}, \overline{B(z_j,r_j)})\geq 2d, \end{gathered} \end{align} for some generic length $d$. We also set \begin{align} \label{def_C_P} C_P(E):= \sup \left \{ \|\phi\|_{L^2(E)}: \phi \in H^1(E) \text{ s.t. } \|D\phi\|_{L^2(E)}=1 \ \text{and}\ \int_E \phi =0\right \}, \end{align} \begin{gather}\label{eq:B} B =B(E) := |E|^{\frac{1}{2}}C_P(E)\Big (d^{-\frac{1}{2}}C_P(E)+d^{\frac{1}{2}}\Big )n^{\frac{1}{2}}r_0^{\frac{1}{2}}. \end{gather} \begin{thm} \label{finalthm} Let $B$ and $u$ be as above, then, we have:\\ $\left\|Du\right\|_{\infty(E)} \leq C (1+Bd^{-4}r_0)\left\|g\right\|_{\infty}+Cr_{0}^{\alpha}[g]_{0,\alpha}.$\\ $ [D u]_{0,\alpha(E)}\leq C(d^{-\alpha}+Bd^{-5}r_{0}^{2-\alpha})\left\|g\right\|_{\infty}+C[g]_{0,\alpha} .$\\ $ \left\|D^2 u\right\|_{\infty(E)} \leq C(d^{-1}+Bd^{-5}r_0)\left\|g\right\|_{\infty}+Cd^{\alpha-1}[g]_{0,\alpha}+C\left\|g'\right\|_{\infty}+Cr_{0}^{\alpha}[g']_{0,\alpha} . $\\ $ [D^2 u]_{0,\alpha(E)} \leq C(d^{-1-\alpha}+Bd^{-6}r_{0}^{2-\alpha})\left\|g\right\|_{\infty}+Cd^{-1}[g]_{0,\alpha}+Cd^{-\alpha}\left\|g'\right\|_{\infty}+C[g']_{0,\alpha}.$ \end {thm} It should be noted that the above theorem shows the dependence on $d$ and $r_0$ of the elliptic regularity constant in front of each of the seminorms $\left\|g\right\|_{\infty}$, $[g]_{0,\alpha}$, $\left\|g'\right\|_{\infty}$ and $[g']_{0,\alpha}$ separately, as opposed to just an estimate of the form $ \left\|D^2 u\right\|_{2,\alpha}\leq C(d,r_0)\left\|g\right\|_{2,\alpha}$, where much information is lost. This more complete understanding of the regularity theory is interesting in itself and might be relevant in applications. In particular, independent knowledge of the dependence on the derivatives of different order is necessary in any careful analysis of the scalings of the problem. As can be seen, here we give a deeper treatment to the regularity constants than the one that is done in \cite{CH19a}. For instance, in the proof of Lemma \ref{lemma1}, the estimates of the form $[\cdot]_{0,\alpha}\leq C(d,r_{max})\| \cdot \|_{L^1}$, show the dependence of $C(d,r_{max})$ on $d$ and $R$ in a more explicit and detailed way (compare with the proof of \cite[Lemma 5.4]{CH19a}). 
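To make the scalings in Theorem \ref{finalthm} easy to read off, the following minimal Python sketch simply tabulates the prefactors multiplying $\left\|g\right\|_{\infty}$ and $[g]_{0,\alpha}$ in the bound for $\left\|Du\right\|_{\infty(E)}$ as $d$ shrinks. It is bookkeeping only, not new mathematics: the constant $C$, the Poincar\'e constant $C_P(E)$ of \eqref{def_C_P} and the area $|E|$ are replaced by placeholder values, which is an assumption made purely for illustration.
\begin{verbatim}
# Bookkeeping for Theorem finalthm (illustration only): how the prefactors of
# ||g||_inf and [g]_{0,alpha} in the bound for ||Du||_inf scale with d.
# C, C_P(E) and |E| below are placeholders, NOT values computed for a real domain.
import math

def B_const(area, C_P, d, n, r0):
    # B = |E|^{1/2} C_P(E) ( d^{-1/2} C_P(E) + d^{1/2} ) n^{1/2} r0^{1/2}, cf. (eq:B)
    return math.sqrt(area)*C_P*(C_P/math.sqrt(d) + math.sqrt(d))*math.sqrt(n*r0)

def Du_prefactors(d, r0=1.0, n=1, C=1.0, C_P=1.0, alpha=0.5):
    area = math.pi*r0**2                  # crude placeholder for |E|
    B = B_const(area, C_P, d, n, r0)
    # ||Du||_inf <= C(1 + B d^{-4} r0) ||g||_inf + C r0^alpha [g]_{0,alpha}
    return C*(1 + B*r0/d**4), C*r0**alpha

for d in (0.5, 0.1, 0.02):
    print(d, Du_prefactors(d))
\end{verbatim}
With these placeholder values the coefficient of $\left\|g\right\|_{\infty}$ grows like $d^{-9/2}$ (through $Bd^{-4}$), while the coefficient of $[g]_{0,\alpha}$ stays bounded; for an actual domain $C_P(E)$ itself depends on the geometry.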
The motivation for studying the above problem comes from a cavitation problem analyzed in \cite{CH19a}, where the major difficulty lies in constructing a family of explicit admissible deformation maps producing round cavities of a certain size. For that, near each cavity point, one can define explicitly a radially symmetric deformation map creating cavities of the desired size. Now, to glue these local constructions together one can use the flow of Dacorogna and Moser \cite{DaMo90}, which yields the following free boundary problem \begin{align} \left\{ \begin{aligned} \operatorname{div} v_t&=0& &\text{in $ E(t)$,} \\ v_t(x) &=g_t(x)\nu(x) & &\text{on $ \partial E(t)$,} \end{aligned} \right. \end {align} where $E(t)=B(z_0(t),r_0(t))\setminus\bigcup_{k=1}^{n}\overline{B(z_k(t),r_k(t))}\subset \R^2.$ This reduces the problem (after using a Leray-type decomposition) to the study of the regularity of the solution to the Neumann problem in $E(t)$. Evidently, if we want to estimate $\left\|v_t\right\|_{\infty}$ and $\left\|Dv_t\right\|_{\infty}$ along the evolution, we need uniform-in-time control of each seminorm, and hence we need to know how each regularity constant depends on the domain. One could think that the circular shape is too restrictive for modelling cavitation phenomena, but it seems that the minimizers prefer to keep their round-shaped cavities (up to a critical load), as suggested in \cite{BaMu84} and \cite{HeSe13}. This makes the problem with circular holes, which is already challenging, also interesting, at least for that application. From the more pure side, working with holes that are circular allows for fine and more explicit calculations using singular integrals, leading to a better understanding of the dependence on the geometry. \subsection{Relation between the Dirichlet and Neumann problems} Our other result is a relation between harmonic extensions and harmonic functions with prescribed Neumann data. First, let us introduce the two fundamental kernels: \begin{equation}\label{kernelp} P_{r}(\phi)=\frac{1-r^2}{r^2+1-2r\cos(\phi)} \end{equation} \begin{equation}\label{kernelk} K_{r}(\phi)=\frac{r\sin\left(\phi\right)}{r^2+1-2r\cos(\phi)} . \end{equation} Now, let us recall that on the disk, the solution of the Dirichlet problem, namely: \begin{align} \label{bvpd} \left\{ \begin{aligned} \Delta u&=0& &\text{in $ B(0,1)$,} \\ u &=g & &\text{on $ \partial B(0,1)$,} \end{aligned} \right. \end {align} is given by: $$u(re^{i\phi})=\frac{1}{2\pi}P_r*g(\phi)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1-r^2}{1+r^2-2r\cos(\tau-\phi)}g(\tau)d\tau,$$ and the solution of the Neumann problem (with zero average): \begin{align} \label{bvpn1} \left\{ \begin{aligned} \Delta w&=0& &\text{in $ B(0,1)$,} \\ \frac{\partial w}{\partial \nu} &=g& &\text{on $ \partial B(0,1)$},\\ \int_{B(0,1)}w dx& =0, \end{aligned} \right. \end {align} is equal to: $$w(x)=-\frac{1}{\pi}\int_{-\pi}^{\pi}\log|x-y|g(\tau)d\tau \text{, where $y=(\cos(\tau),\sin(\tau))$}.$$ In particular, the solution of the problem: \begin{align} \label{bvpn2} \left\{ \begin{aligned} \Delta \omega&=0& &\text{in $ B(0,1)$,} \\ \frac{\partial \omega}{\partial \nu} &=g'& &\text{on $ \partial B(0,1)$},\\ \int_{B(0,1)}\omega dx& =0. \end{aligned} \right. 
\end {align} where $g'$ denotes the tangential derivative of $g$, is given by: $$\omega(x)=-\frac{1}{\pi}\int_{-\pi}^{\pi}\log|x-y|g'(\tau)d\tau=\frac{1}{\pi}K_r*g(\phi)=\frac{1}{\pi}\int_{-\pi}^{\pi}\frac{r\sin\left(\tau-\phi\right)}{r^2+1-2r\cos(\tau-\phi)}g(\tau)d\tau.$$ \begin{rem} The solutions to the exterior problem are very similar. \end{rem} The analysis in both \cite{CH19a} and Theorem \ref{finalthm} is made possible by the following more fundamental connection between the Dirichlet and Neumann problems, a result that is of independent interest and that we highlight as our second main theorem. \begin{thm} \label{rfthm} Let $u$ and $w$ be the unique solutions to \eqref{bvpd} and \eqref{bvpn1}, respectively. Then: $$D u(re^{i\phi})=-\frac{1}{r}\left(\frac{1}{\pi}K_r*g'(\phi)\right)e^{i\phi}+\frac{1}{r}\left(\frac{1}{2\pi}P_r*g'(\phi)\right)e^{i\left(\phi+\frac{\pi}{2}\right)},$$ $$D w(re^{i\phi})=\frac{1}{r}\left(\frac{1}{2\pi}P_r*g(\phi)\right)e^{i\phi}+\frac{1}{r}\left(\frac{1}{\pi}K_r*g(\phi)\right)e^{i\left(\phi+\frac{\pi}{2}\right)}.$$ \end {thm} \begin{cor} If $u$ and $\omega$ are the solutions to \eqref{bvpd} and \eqref{bvpn2}, then $$Du(re^{i\phi})=D\omega(re^{i\phi})\cdot e^{i\frac{\pi}{2}} \text{ \medspace$\forall r\in (0,1)$ \medspace $\forall \phi \in\mathbb{R}$}.$$ \end{cor} If $u=g$ on $\partial B(0,1)$, then the tangential derivative of $u$ is $g'$, that is, it coincides with the normal derivative of $\omega$. This suggests that, on $\partial B(0,1)$, the fact that $Du$ and $D\omega$ coincide up to a rotation by $\frac{\pi}{2}$ is to be expected. However, it is surprising that the connection carries through to the interior of the domain. \begin{rem} The analogous formulas for the exterior problem also hold. \end{rem} \begin{rem} One can think of the previous corollary as an analogue of the Cauchy-Riemann equations. More precisely, if $f(x+iy)=u(x,y)+iv(x,y)$ is analytic in the disk, then the Cauchy-Riemann equations are equivalent to $Du=e^{\frac{3}{2}i\pi}Dv$. So, if $u$ and $\omega$ are the solutions to \eqref{bvpd} and \eqref{bvpn2}, then $f=u-i\omega$ is analytic in the disk. Moreover: $$f(z)=\frac{1}{2\pi}\int_{\mathbb{T}}\frac{e^{i\tau}+z}{e^{i\tau}-z}g(\tau)d\tau,$$ which is the Schwarz integral formula for $g$ (i.e.\ a holomorphic function whose real part on the boundary is equal to $g$). The point is that the relation formula shows that the imaginary part is related to the solution of the Neumann problem. Actually, the latter holds for every smooth domain: to see this, it suffices to note that if $f$ is holomorphic in a smooth domain, then from the Cauchy-Riemann equations we get: $$ -\partial_{\nu}\left( Im(f)\right)= \langle\,-D(Im(f)),\nu \rangle =\langle\,-i\cdot D(Re(f)),-i\cdot\tau \rangle=\langle\,D(Re(f)),\tau \rangle = \partial_{\tau}\left( Re(f)\right),$$ where $\partial_{\nu}$ and $\partial_{\tau}$ are the normal and tangential derivatives. \end{rem} From the relation formulas, we deduce that we can study the regularity of the above convolutions to obtain the regularity of the harmonic functions. 
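Although it plays no role in the proofs, the second formula in Theorem \ref{rfthm} can be checked numerically for a concrete datum. The following short Python sketch is an illustration only: it assumes NumPy, uses the test datum $g(\tau)=\cos(2\tau)$ (chosen here for convenience), for which the zero-average Neumann solution on the unit disk is $w=(r^2/2)\cos(2\phi)$, and compares the two convolutions, evaluated by periodic quadrature, with the polar components of $Dw$ obtained from this closed form.
\begin{verbatim}
# Numerical sanity check of the second formula in Theorem rfthm (illustration only).
# Test datum g(t) = cos(2 t); then w = (r^2/2) cos(2 phi), whose radial and
# tangential derivative components are r*cos(2 phi) and -r*sin(2 phi).
import numpy as np

r, phi, k = 0.7, 0.4, 2
N = 2000
tau = -np.pi + 2*np.pi*np.arange(N)/N           # periodic quadrature nodes
wq = 2*np.pi/N                                   # quadrature weight

g = np.cos(k*tau)
den = r**2 + 1 - 2*r*np.cos(tau - phi)
P = (1 - r**2)/den                               # Poisson kernel P_r(tau-phi)
K = r*np.sin(tau - phi)/den                      # kernel K_r(tau-phi)

radial     = (1/r)*(wq/(2*np.pi))*np.sum(P*g)    # coefficient of e^{i phi}
tangential = (1/r)*(wq/np.pi)*np.sum(K*g)        # coefficient of e^{i(phi+pi/2)}

print(radial,     r**(k-1)*np.cos(k*phi))        # both approx  0.4877
print(tangential, -r**(k-1)*np.sin(k*phi))       # both approx -0.5021
\end{verbatim}
Since the integrands are smooth and periodic, the trapezoidal sums above are spectrally accurate, and the two printed pairs agree to machine precision, as predicted by the theorem.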
\section{Notation and Preliminaries} \subsubsection*{Function spaces and Green's function} We fix a value of $\alpha\in (0,1)$ and work with the norms $\left\|f\right\|_{\infty}:=\sup|f(x)|$ and \begin{align*} [f]_{0,\alpha}&:=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{\alpha}}, & \left\|f\right\|_{0,\alpha} &:=\left\|f\right\|_{\infty}+[f]_{0,\alpha},\\ [f]_{1,\alpha}&:=\sup_{x\neq y}\frac{|Df(x)-Df(y)|}{|x-y|^{\alpha}}, & \left\|f\right\|_{1,\alpha}&:=\left\|f\right\|_{\infty}+\left\|Df\right\|_{\infty}+[f]_{1,\alpha}. \end{align*} The function $g$ will belong to $$C_{per}^{0,\alpha}:=\{ g\in C_{loc}^{0,\alpha}(\mathbb{R}): g \text{ is $2\pi$-periodic} \}.$$ The inversion of $x\in \R^2$ with respect to $B(0,R)$ is $x^{*}=\frac{R^2}{|x|^2}x.$ Set $$\Phi(x) :=\frac{-1}{2\pi}\log(|x|), \quad \phi^x(y):=\frac{1}{2\pi}\log(|y-x^{*}|)-\frac{|y|^2}{4\pi R^2}, \quad G_{N}(x,y):=\Phi (x)-\phi^x(y).$$ The expression $u_{,\beta}$ stands for $\partial_{\beta} u=\frac{\partial u}{\partial x_\beta}$.\\ \section{Relation Formula} \begin{proof}[Proof of the Theorem \ref{rfthm}] Set $x=re^{i\phi}\in B(0,1)$ and $y=e^{i\tau}=(\cos(\tau),\sin(\tau))$. Let us prove the first formula. For that, let us start by computing the $x$ derivative of the Poisson kernel: $$D_x(P_r(\tau-\phi))=D_x\left( \frac{1-|x|^2}{|x-y|^2} \right)=-2\left( \frac{x(|x-y|^2+1-|x|^2)-y(1-|x|^2)}{|x-y|^4}\right) .$$ Now, for $x\in B(0,1)$, we have (due to the dominated convergence theorem): $$D_x(u)=\frac{1}{2\pi}\int_{-\pi}^{\pi}D_x\left(P_r(\tau-\phi)\right)g(\tau)d\tau .$$ In addition, the $x$ derivatives of $P_r(\tau-\phi)$ are given by (note that we use $\tau=(\tau-\phi)+\phi$ and $|x-y|^2=1+r^2-2r\cos(\tau-\phi)$): $$\frac{\partial }{\partial x_1}\left(P_r(\tau-\phi)\right)=-2\frac{\cos(\phi)(2r-(r^2+1)\cos(\tau-\phi))+\sin(\phi)(1-r^2)\sin(\tau-\phi)}{(1+r^2-2r\cos(\tau-\phi))^2}$$ $$ \frac{\partial }{\partial x_2}\left(P_r(\tau-\phi)\right)=-2\frac{\sin(\phi)(2r-(r^2+1)\cos(\tau-\phi))-\cos(\phi)(1-r^2)\sin(\tau-\phi)}{(1+r^2-2r\cos(\tau-\phi))^2}.$$ Furthermore: $$\int_{-\pi}^{\pi}\frac{2r-(r^2+1)\cos(\tau-\phi)}{(1+r^2-2r\cos(\tau-\phi))^2}g(\tau)d\tau=-\int_{-\pi}^{\pi}\frac{d}{d\tau}\left(\frac{\sin(\tau-\phi)}{1+r^2-2r\cos(\tau-\phi)}\right)g(\tau)d\tau$$ $$=\int_{-\pi}^{\pi}\frac{\sin(\tau-\phi)}{1+r^2-2r\cos(\tau-\phi)}g'(\tau)d\tau=\int_{-\pi}^{\pi}\frac{\sin(\tau)}{1+r^2-2r\cos(\tau)}g'(\tau+\phi)d\tau.$$ Moreover: $$ \int_{-\pi}^{\pi}\frac{(1-r^2)\sin(\tau-\phi)}{(1+r^2-2r\cos(\tau-\phi))^2}g(\tau)d\tau$$ $$=-\frac{1-r^2}{2r}\int_{-\pi}^{\pi}\frac{d}{d\tau}\left(\frac{1}{1+r^2-2r\cos(\tau-\phi)}\right)g(\tau)d\tau$$ $$=\frac{1}{2r}\int_{-\pi}^{\pi}\frac{1-r^2}{1+r^2-2r\cos(\tau-\phi)}g'(\tau)d\tau.$$ From the above, it is easy to conclude the validity of the first formula.\\ For proving the second formula, first note that the derivative of $w$ (times $-\pi$) is given by: $$-\pi Dw(x)=\int_{-\pi}^{\pi}g(\tau)\frac{x-y}{|x-y|^2}d\tau.$$ Now, the tangential component is equal to : $$\int_{-\pi}^{\pi}g(\tau)\frac{-\cos\left(\tau-\phi-\frac{\pi}{2}\right)}{r^2+1-2r\cos(\tau-\phi)})d\tau=-\frac{1}{r}\int_{-\pi}^{\pi}g(\tau)\frac{r\sin\left(\tau-\phi\right)}{r^2+1-2r\cos(\tau-\phi)}d\tau$$ On the other hand, the normal component is equal to : $$ \int_{-\pi}^{\pi}g(\tau)\frac{r-\cos(\tau-\phi)}{r^2+1-2r\cos(\tau-\phi)}d\tau=\frac{1}{r}\int_{-\pi}^{\pi}g(\tau)\frac{r^2-1}{r^2+1-2r\cos(\tau-\phi)}d\tau$$ $$-\int_{-\pi}^{\pi}g(\tau)\frac{r-\cos(\tau-\phi)}{r^2+1-2r\cos(\tau-\phi)}d\tau+\frac{1}{r}\int_{-\pi}^{\pi}g(\tau)d\tau.$$ 
From which the result follows by using that the integral of $g$ is equal to zero because $w$ is harmonic. \end{proof} \section{H\"older regularity of the convolutions} \begin{lem} \label{lemma3} Let $g\in C_{per}^{0,\alpha}$, $\phi\in[0,2\pi]$, $1<r_2<r_1$. Then: $$ |\omega(r_1e^{i\phi})-\omega(r_2e^{i\phi})|\leq Cr_1[g]_{0,\alpha}|r_1-r_2|^{\alpha},$$ where \begin{equation} \label{kernel1} \omega:=\int_{-\pi}^{\pi}g(\tau+\phi)\frac{r\sin(\tau)d\tau}{r^2+1-2r\cos(\tau)} \end{equation} \end {lem} \begin{proof} Note that: $$|\omega(r_1e^{i\phi})-\omega(r_2e^{i\phi})|=\left|\int_{r_2}^{r_1}\frac{\partial \omega}{\partial r}dr\right|\leq \int_{r_2}^{r_1}\left|\frac{\partial \omega}{\partial r}\right|dr.$$ On the other hand:$$ \frac{\partial \omega}{\partial r}(re^{i\phi})=\int_{-\pi}^{\pi}g(\tau+\phi)\frac{(1-r^2)\sin(\tau)d\tau}{((1-r)^2+2r(1-\cos(\tau)))^2}$$ $$=\int_{-\pi}^{\pi}(g(\tau+\phi)-g(\phi))\frac{(1-r^2)\sin(\tau)d\tau}{((1-r)^2+2r(1-\cos(\tau)))^2},$$ where we have used that $\sin(\tau)$ is odd. Moreover: $$\left|\int_{|\tau|\leq r-1}(g(\tau+\phi)-g(\phi))\frac{(1-r^2)\sin(\tau)d\tau}{((r-1)^2+2r(1-\cos(\tau)))^2}\right|$$ $$\leq\int_{|\tau|\leq r-1}\frac{2r_1(r-1)[g]_{0,\alpha}|\tau|^{1+\alpha}}{((r-1)^2+2r(1-\cos(\tau)))^2}\leq \int_{|\tau|\leq r-1}\frac{Cr_1[g]_{0,\alpha}(r-1)^{2+\alpha}}{(r-1)^4}d\tau$$ $$=Cr_1[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ Recall that $\frac{2}{\pi^2}|\tau|^2\leq 1-\cos(\tau) \leq \frac{1}{2}|\tau|^2$ for $\tau\in (-\pi,\pi)$. To estimate the rest of the integral, it suffices to note that: $$\left|\int_{ r-1 \leq|\tau|\leq \pi}(g(\tau+\phi)-g(\phi))\frac{(1-r^2)\sin(\tau)d\tau}{((r-1)^2+2r(1-\cos(\tau)))^2}\right|$$ $$\leq\int_{r-1\leq |\tau|\leq \pi}2r_1(r-1)[g]_{0,\alpha}\frac{|\tau|^{1+\alpha}}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau$$ $$\leq \int_{r-1\leq|\tau|\leq \pi}Cr_1(r-1)[g]_{0,\alpha}\frac{|\tau|^{1+\alpha}}{4|\tau|^4}d\tau \leq (r-1)Cr_1[g]_{0,\alpha}\int_{r-1\leq|\tau|\leq \pi}|\tau|^{\alpha-3}d\tau $$ $$\leq Cr_1(r-1)(r-1)^{\alpha-2}= Cr_1[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ Finally: $$|\omega(r_1e^{i\phi})-\omega(r_2e^{i\phi})| \leq \int_{r_2}^{r_1}\left|\frac{\partial \omega}{\partial r}\right|dr\leq Cr_1[g]_{0,\alpha}\int_{r_2}^{r_1}(r-1)^{\alpha-1}dr\leq Cr_1[g]_{0,\alpha}|r_1-r_2|^{\alpha}.$$ (Recall that $|x|^{\alpha}$ is locally H\"older continuous in $[0,\infty)$.) \end{proof} \begin{lem} \label{lemma4} Let $g\in C_{per}^{0,\alpha}$, $r>1$, $\omega$ as in \eqref{kernel1}, and $x_1,x_2\in\mathbb{R}^2$ such that $|x_1|=|x_2|=r$. 
Then: $$|\omega(x_1)-\omega(x_2)|\leq Cr^2[g]_{0,\alpha}(r-1)^{\alpha-1}|x_1-x_2|.$$ \end {lem} \begin{proof} Let $1<r\leq 2$ and $|\phi_1-\phi_2|\leq\pi$, if we define $K_r(\tau)=\frac{\sin(\tau)}{1+r^2-2r\cos(\tau)}$ then: $$\omega(re^{i\phi})=r\int_{-\pi}^{\pi}g(\tau+\phi)K_r(\tau)d\tau=-r\int_{-\pi}^{\pi}g(\tau)K_r(\phi-\tau)d\tau.$$ The derivative of $K_r$ is given by: $$ \frac{\cos(\tau)(1+r^2)-2r}{(1+r^2-2r\cos(\tau))^2}=\left(1-\frac{(1+r)^2(1-\cos(\tau))}{(r-1)^2+2r(1-\cos(\tau))}\right)(1+r^2-2r\cos(\tau))^{-1}.$$ Since: $$ \left|\frac{\cos(\tau)(1+r^2)-2r}{(r-1)^2+2r(1-\cos(\tau))}\right|\leq 1+\frac{(1+r)^2(1-\cos(\tau))}{2r(1-\cos(\tau))}\leq Cr,$$ \noindent we have: $$\left|\frac{\partial K_r}{\partial \tau}(\tau)\right|\leq \frac{Cr}{(r-1)^2+2r(1-\cos(\tau))}\leq C'r|\tau|^{-2}, \text{if }|\tau|\leq \pi.$$ Let $\rho=|\phi_1-\phi_2|\leq \pi$, then: $$\left|\frac{\partial \omega}{\partial \phi}\right|\leq r\left|\int_{-\pi}^{\pi}(g(\tau)-g(\phi))K_r'(\phi-\tau)d\tau\right|$$ $$ \leq Cr^2[g]_{0,\alpha}\int_{|\tau-\phi|\leq r-1}\frac{|\tau-\phi|^{\alpha}}{(r-1)^2}d\tau+ Cr^2[g]_{0,\alpha}\int_{r-1\leq|\tau-\phi|\leq \pi}|\phi-\tau|^{\alpha-2}d\tau$$ $$\leq Cr^2(r-1)^{\alpha-1}[g]_{0,\alpha}. $$ Now using the fundamental theorem of calculus: $$|\omega(re^{i\phi_1})-\omega(re^{i\phi_2})|\leq \int_{\phi_1}^{\phi_2}Cr^2(r-1)^{\alpha-1}[g]_{0,\alpha}d\phi$$ $$=Cr^2(r-1)^{\alpha-1}[g]_{0,\alpha}|\phi_1-\phi_2|\leq Cr^2(r-1)^{\alpha-1}[g]_{0,\alpha}|re^{i\phi_1}-re^{i\phi_2}|.$$ \end{proof} \begin{proposition} \label{prop5} Let $g\in C_{per}^{0,\alpha}$, $\omega$ as in \eqref{kernel1}, and $x_1,x_2\in\mathbb{R}^2$ such that $ 1<|x_2|\leq|x_1|\leq 2$. Then: $$|\omega(x_1)-\omega(x_2)|\leq C[g]_{0,\alpha}|x_1-x_2|^{\alpha}.$$ (i.e. $[\omega]_{0,\alpha}\leq C[g]_{0,\alpha}$). \end {proposition} \begin{proof} Set $x_1=r_1e^{i\phi_1}$, $x_2=r_2e^{i\phi_2}$, $|\phi_1-\phi_2|\leq \pi$, $\rho:=|x_1-x_2|$. \begin{description} \item[Case $r_1-1\geq \rho$:] by Lemmas \ref{lemma3} and \ref{lemma4} : $$ |\omega(x_1)-\omega(x_2)|\leq |\omega(r_1e^{i\phi_1})-\omega(r_1e^{i\phi_2})|+|\omega(r_1e^{i\phi_2})-\omega(r_2e^{i\phi_2})|$$ $$\leq Cr_1[g]_{0,\alpha}(r_1-1)^{\alpha-1}|r_1e^{i\phi_1}-r_1e^{i\phi_2}|+Cr_1[g]_{0,\alpha}||x_1|-|x_2||^{\alpha}$$ $$ \leq 2C[g]_{0,\alpha}\rho^{\alpha-1}(|r_1e^{i\phi_1}-r_2e^{i\phi_2}|+|r_2e^{i\phi_2}-r_1e^{i\phi_2}|)+2C[g]_{0,\alpha}|x_1-x_2|^{\alpha} $$ $$ \leq C[g]_{0,\alpha}(\rho^{\alpha-1}(\rho+\rho)+\rho^{\alpha}).$$ \item [Case $r_1-1< \rho$:] set $r:=1+\rho$. Note that since $r_2<r_1<2$, then $r=1+|x_1-x_2|<1+r_1+r_2\leq 5$ $$ |\omega(x_1)-\omega(x_2)|\leq |\omega(r_1e^{i\phi_1})-\omega(re^{i\phi_1})|+|\omega(re^{i\phi_1})-\omega(re^{i\phi_2})|+|\omega(re^{i\phi_2})-\omega(r_2e^{i\phi_2})| $$ $$ \leq 2\cdot 5C[g]_{0,\alpha}|r-r_1|^{\alpha}+5C[g]_{0,\alpha}(r-1)^{\alpha-1}|re^{i\phi_1}-re^{i\phi_2}|, $$ since $r_2>1$, then $r-r_2=\rho-(r_2-1)<\rho$. On the other hand: $|re^{i\phi_1}-re^{i\phi_2}|\leq |r-r_1|+|x_1-x_2|+|r_2-r|<3\rho$ and $(r-1)^{\alpha-1}=\rho^{\alpha-1}$ by definition of $r$. This completes the proof. \end{description} \end{proof} \begin{proposition} \label{prop6} Let $g\in C_{per}^{0,\alpha}$, $\omega$ as in \eqref{kernel1}, and $x_1,x_2\in\mathbb{R}^2$ such that $ 1<|x_2|\leq|x_1|\leq 2$. 
Then: $$ \left\|\omega\right\|_{\infty}\leq C[g]_{0,\alpha} .$$ \end {proposition} \begin{proof} Since $\sin(\tau)$ is odd, we may replace $g(\tau+\phi)$ by $g(\tau+\phi)-g(\phi)$ in \eqref{kernel1}, and then it is easy to see that: $$ |\omega|\leq C[g]_{0,\alpha}\int_{-\pi}^{\pi}\frac{|\tau|^{1+\alpha}}{|\tau|^2}d\tau\leq C[g]_{0,\alpha}.$$ \end{proof} \begin{lem} \label{lemma5} Let $x=re^{i\phi}$ and $y=e^{i\tau}$. Let $u$ be given by: \begin{equation}\label{PKer} u(re^{i\phi})=\frac{1-r^2}{2\pi}\int_{-\pi}^{\pi}\frac{g(\tau)d\tau}{|x-y|^2}, \end{equation} then: $\left\|u\right\|_{\infty}\leq C\left\|g\right\|_{\infty}.$ \end {lem} \begin{proof} This is immediate from the well-known formula (see \cite{Gamelin01}): \begin{equation}\label{Poisson} \frac{r^2-1}{2\pi}\int_{-\pi}^{\pi}\frac{d\tau}{1+r^2-2r\cos(\tau)}=\operatorname{sgn}(r-1). \end{equation} \end{proof} \begin{lem} \label{lemma6} Let $g\in C_{per}^{0,\alpha}$, $r>1$, $|\phi_1-\phi_2|\leq \pi$ and $u$ as in \eqref{PKer}. Then: $$|u(re^{i\phi_1})-u(re^{i\phi_2})|\leq C[g]_{0,\alpha}|re^{i\phi_1}-re^{i\phi_2}|^{\alpha}. $$ \end {lem} \begin{proof} First note that (thanks to \eqref{Poisson}): $$u(re^{i\phi})=\frac{1-r^2}{2\pi }\int_{-\pi}^{\pi}g(\tau)\frac{d\tau}{|x-y|^2}=\frac{1-r^2}{2\pi }\int_{-\pi}^{\pi}\frac{g(\tau+\phi)-g(\phi) }{1+r^2-2r\cos(\tau)}d\tau-g(\phi),$$ \noindent then: $$|u(re^{i\phi_1})-u(re^{i\phi_2})|\leq [g]_{0,\alpha}|\phi_1-\phi_2|^{\alpha}+\frac{r^2-1}{2\pi }\int_{-\pi}^{\pi}\frac{|g(\tau+\phi_1)-g(\tau+\phi_2)|}{1+r^2-2r\cos(\tau)}d\tau$$ $$\leq [g]_{0,\alpha}|\phi_1-\phi_2|^{\alpha}+[g]_{0,\alpha}|\phi_1-\phi_2|^{\alpha}\frac{r^2-1}{2\pi}\frac{2\pi}{r^2-1}\leq C'[g]_{0,\alpha}|re^{i\phi_1}-re^{i\phi_2}|^{\alpha}.$$ \end{proof} \begin{lem} \label{lemma7} Let $g\in C_{per}^{0,\alpha}$, $u$ as in \eqref{PKer}, $1<r_2<r_1\leq 2$. Then: $$ |u(r_1e^{i\phi})-u(r_2e^{i\phi})|\leq C[g]_{0,\alpha}|r_1-r_2|^{\alpha}.$$ \end{lem} \begin{proof} Note that: $$ \frac{d}{dr}\left( \frac{1-r}{1+r^2-2r\cos(\tau)} \right)=\frac{(r-1)^2-2(1-\cos(\tau))}{((r-1)^2+2r(1-\cos(\tau)))^2},$$ also: $$\frac{d }{dr}\left(\frac{(1+r)(1-r)}{(1-r)^2+2r(1-\cos(\tau))}\right)=(1+r)\frac{d}{dr}\left( \frac{1-r}{1+r^2-2r\cos(\tau)} \right)$$ $$+ \frac{1-r}{1+r^2-2r\cos(\tau)}. $$ We want to prove $\left|\frac{\partial u}{\partial r}\right|\leq C[g]_{0,\alpha}(r-1)^{\alpha-1}$ for $r\in (1,2)$. 
For that, it suffices to estimate the following integrals: $$ \left|(r-1)\int_{-\pi}^{\pi}(g(\tau+\phi)-g(\phi))\frac{d\tau}{(r-1)^2+2r(1-\cos(\tau))}\right|\leq C\pi^{\alpha}[g]_{0,\alpha}(r-1)\frac{2\pi}{r^2-1}$$ $$\leq C[g]_{0,\alpha} \leq C[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ Now let us estimate the second integral for $|\tau|\leq r-1$: $$ 2\left|\int_{|\tau|\leq r-1}(g(\tau+\phi)-g(\phi))\frac{1-\cos(\tau)}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau\right|$$ $$\leq C[g]_{0,\alpha}\int_{|\tau|\leq r-1}\frac{|\tau|^{\alpha+2}}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau$$ $$\leq C[g]_{0,\alpha}\int_{|\tau|\leq r-1}\frac{|\tau|^{\alpha+2}}{(r-1)^4}d\tau\leq C'[g]_{0,\alpha}\frac{(r-1)^{\alpha+3}}{(r-1)^4}=C'[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ Then for $r-1\leq |\tau|\leq \pi$: $$ 2\left|\int_{r-1\leq |\tau|\leq \pi}(g(\tau+\phi)-g(\phi))\frac{1-\cos(\tau)}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau\right|$$ $$\leq [g]_{0,\alpha}C\int_{r-1\leq |\tau|\leq \pi}\frac{|\tau|^{\alpha+2}}{(2|\tau|^2)^2}d\tau\leq C'((r-1)^{\alpha-1}-\pi^{\alpha-1})\leq C'[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ Finally, let us estimate the last integral for $|\tau|\leq r-1$: $$ (r-1)^2\left|\int_{|\tau|\leq r-1}\frac{g(\tau+\phi)-g(\phi)}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau\right|$$ $$\leq [g]_{0,\alpha}C(r-1)^2\int_{|\tau|\leq r-1}\frac{|\tau|^{\alpha}}{(r-1)^4}d\tau\leq C'[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ At last for $r-1\leq |\tau|\leq \pi$: $$ (r-1)^2\left|\int_{r-1\leq |\tau|\leq \pi}\frac{g(\tau+\phi)-g(\phi)}{((r-1)^2+2r(1-\cos(\tau)))^2}d\tau\right|$$ $$ \leq C[g]_{0,\alpha}(r-1)^2\int_{r-1\leq |\tau|\leq \pi}\frac{|\tau|^{\alpha}}{|\tau|^4}d\tau\leq C'[g]_{0,\alpha}(r-1)^2((r-1)^{\alpha-3}-\pi^{\alpha-3})$$ $$\leq C'[g]_{0,\alpha}(r-1)^{\alpha-1}.$$ In conclusion, we have: $$|u(r_1e^{i\phi})-u(r_2e^{i\phi})|=\left|\int_{r_2}^{r_1}\frac{\partial u}{\partial r}dr\right|\leq \int_{r_2}^{r_1}\left|\frac{\partial u}{\partial r}\right|dr\leq C[g]_{0,\alpha}\int_{r_2}^{r_1}(r-1)^{\alpha-1}dr$$ $$\leq C'[g]_{0,\alpha}|r_1-r_2|^{\alpha},$$ and the result follows from the above. \end{proof} \begin{proposition} \label{prop7} Let $g\in C_{per}^{0,\alpha}$, $u$ as in \eqref{PKer} $1<r_1\leq r_2\leq 2$, and $|\phi_1-\phi_2|\leq \pi$. Then: $$ |u(r_1e^{i\phi_1})-u(r_2e^{i\phi_2})|\leq C[g]_{0,\alpha}|r_1e^{i\phi_1}-r_2e^{i\phi_2}|^{\alpha}.$$ (i.e. $[u]_{0,\alpha(B(0,2)\setminus B(0,1))}\leq C[g]_{0,\alpha (\partial B(0,1))}$). 
\end {proposition} \begin{proof} Note that from the previous propositions we get: $$ |u(r_1e^{i\phi_1})-u(r_2e^{i\phi_2})|\leq |u(r_1e^{i\phi_1})-u(r_1e^{i\phi_2})|+|u(r_1e^{i\phi_2})-u(r_2e^{i\phi_2})|$$ $$\leq C[g]_{0,\alpha (\partial B(0,1))}|r_1e^{i\phi_1}-r_1e^{i\phi_2}|^{\alpha}+C[g]_{0,\alpha (\partial B(0,1))}|r_1e^{i\phi_2}-r_2e^{i\phi_2}|^{\alpha} $$ $$\leq C[g]_{0,\alpha (\partial B(0,1))}| r_1e^{i\phi_1}- r_2e^{i\phi_2}|^{\alpha}+C[g]_{0,\alpha (\partial B(0,1))}\left|r_2-r_1 \right|^{\alpha}$$ $$\leq C[g]_{0,\alpha (\partial B(0,1))}| r_1e^{i\phi_1}- r_2e^{i\phi_2}|^{\alpha},$$ because if $\theta$ is the angle between $r_1e^{i\phi_1}$ and $r_2e^{i\phi_2}$, we have: $$|r_1e^{i\phi_1}-r_2e^{i\phi_2}|^2 -| r_1e^{i\phi_1}- r_1e^{i\phi_2}|^2=r_2^2-r_1^2-2r_1r_2\cos(\theta)+2r_1^2\cos(\theta)$$ $$=(r_2-r_1)(r_1+r_2-2r_1\cos(\theta))\geq (r_2-r_1)^2\geq 0.$$ \end{proof} \begin{proposition} \label{prop8} Let $g\in C_{per}^{2,\alpha}$ and $u$ as in \eqref{PKer}, then (for $1<|x|<2$):\\ $\left\|Du\right\|_{\infty} \leq C (\left\|g\right\|_{\infty}+[g]_{0,\alpha}).$\\ $ [D u]_{0,\alpha}\leq C(\left\|g\right\|_{\infty}+[g]_{0,\alpha}). $\\ $ \left\|D^2 u\right\|_{\infty} \leq C(\left\|g'\right\|_{\infty}+[g']_{0,\alpha}+\left\|g''\right\|_{\infty}+[g'']_{0,\alpha}). $\\ $ [D^2 u]_{0,\alpha}\leq C(\left\|g'\right\|
_{\infty}+[g']_{0,\alpha}+\left\|g''\right\|_{\infty}+[g'']_{0,\alpha}). $ \end {proposition} \begin{proof} It follows from Theorem \ref{rfthm}, Proposition \ref{prop5}, Proposition \ref{prop6}, Lemma \ref{lemma5} and Proposition \ref{prop7} (note that we have used that $[fg]_{0,\alpha}\leq \left\|f\right\|_{\infty}[g]_{0,\alpha}+\left\|g\right\|_{\infty}[f]_{0,\alpha}$). \end{proof} \begin{proposition} \label{prop9} Let $g\in C^{1,\alpha}(\partial B_1)$ and $u(x)=\int_{\partial B_1}g(y)\log|y-x|dS(y)$, then (for $1<|x|<2$):\\ $\left\|Du\right\|_{\infty} \leq C (\left\|g\right\|_{\infty}+[g]_{0,\alpha}).$\\ $ [D u]_{0,\alpha}\leq C(\left\|g\right\|_{\infty}+[g]_{0,\alpha}). $\\ $ \left\|D^2 u\right\|_{\infty} \leq C(\left\|g\right\|_{\infty}+[g]_{0,\alpha}+\left\|g'\right\|_{\infty}+[g']_{0,\alpha}). $\\ $ [D^2 u]_{0,\alpha}\leq C(\left\|g\right\|_{\infty}+[g]_{0,\alpha}+\left\|g'\right\|_{\infty}+[g']_{0,\alpha}). $ \end {proposition} \begin{proof} It follows from Theorem \ref{rfthm}, Proposition \ref{prop5}, Proposition \ref{prop6}, Lemma \ref{lemma5} and Proposition \ref{prop7}. \end{proof} \section{H\"older regularity for the harmonic function in a holed domain} \label{se:movingdomain} Throughout this section we study the H\"older regularity of the classical 2D singular integrals in a generic annulus: \begin{align} \label{eq:Omega} \Omega:=\{x\in \mathbb{R}^2: R<|x|<R+d\}. \end{align} For calculations that have to be made away from $\partial \Omega$, we work in \begin{align} \Omega':=\{x\in \mathbb{R}^2: R+\frac{1}{3}d<|x|<R+\frac{2}{3}d\}. \end{align} The role of the generic length $d$ is that of giving a uniform lower bound for the width of an annular neighbourhood of the excised hole that is still contained in the domain. In Proposition \ref{prop10} negative powers of the radii of the holes are obtained. It is for this reason that in the final result (see \eqref{eq:d}) not only the distances between the holes but also their radii are assumed to be greater than the generic length $d$. In some intermediate results, knowing that the radius is greater than $d$ simplifies the estimates (e.g.\ in Lemma \ref{lemma2} we obtain $\|Du\|_\infty \leq CR\|f\|_\infty$ instead of $\|Du\|_\infty\leq C(R+d)\|f\|_\infty$). This is why the hypothesis $R\geq Cd$ is added throughout the whole section. \subsection{Estimates in the interior of the domain} \label{se:interior} The following regularity estimates for harmonic functions can be found in \cite[Thm.\ 2.2.7]{Evans10}. \begin{lem} \label{harm reg} Let $v$ be harmonic in $B(x,d)$, then:\\ $ \left\|v\right\|_{L^{\infty}(B(x,\frac{d}{2}))}\leq C d^{-2}\left\|v\right\|_{L^1(B(x,d))} .$\\ $ \left\|D^{\beta}v\right\|_{L^{\infty}(B(x,\frac{d}{2}))}\leq C d^{-2-|\beta|}\left\|v\right\|_{L^1(B(x,d))} .$ \end{lem} A careful inspection of the proof of \cite[Prop.\ 5.1]{CH19a} yields the following dependence on $R$ and $d$ in the H\"older interior estimates for harmonic functions. 
\begin{proposition} \label{prop2} Let $v$ be harmonic in $\Omega$ and $R\geq Cd$, then we have the following estimates:\\ $ \left\|v\right\|_{L^{\infty}(\Omega')}\leq C d^{-2}\left\|v\right\|_{L^1(\Omega)} .$\\ $ [v ]_{0,\alpha(\Omega')}\leq Cd^{-3}R^{1-\alpha}\left\|v\right\|_{L^1(\Omega)}.$\\ $ \left\|D^{\beta}v\right\|_{L^{\infty}(\Omega')}\leq C d^{-2-|\beta|}\left\|v\right\|_{L^1(\Omega)} .$\\ $ [ v ]_{1,\alpha(\Omega')}\leq Cd^{-4}R^{1-\alpha}\left\|v\right\|_{L^1(\Omega)}.$ \end{proposition} \begin{lem} \label{lemma1} Let $R\geq Cd$, $v$ be harmonic in $\Omega$ and $\zeta$ a cut-off function with support within $|x|<R+\frac{2}{3}d$ and equal to $1$ for $|x|\leq R+\frac{1}{3}d$, then:\\ $ [\Delta(v\zeta)]_{0,\alpha(\mathbb{R}^2)}\leq CR^{1-\alpha}d^{-5}\left\|v\right\|_{L^1(\Omega)}. $\\ $ \left\|\Delta(v\zeta)\right\|_{\infty (\mathbb{R}^2)}\leq Cd^{-4} \left\|v\right\|_{L^1(\Omega)} .$ \end{lem} \begin{proof} It is clear that we can choose $\zeta$ such that $|D^{k}\zeta|\leq C_kd^{-k}$ (and then $[\zeta]_{k,\alpha(\Omega')}\leq C_{k+1}d^{-k-1}R^{1-\alpha}$ since $\zeta\in C_{c}^{\infty}(B(0,R+d))$). Then, using Proposition \ref{prop2} and the estimates for $\zeta$ we get: $$|\Delta(v\zeta)|\leq 2|\nabla v \cdot \nabla \zeta| +|v\Delta \zeta|\leq C d^{-4}\left\|v\right\|_{L^1(\Omega)}.$$ On the other hand: $$ [\Delta(v\zeta)]_{0,\alpha(\Omega')}\leq 2[\nabla v \cdot \nabla \zeta]_{0,\alpha(\Omega')} +[v\Delta \zeta]_{0,\alpha(\Omega')}.$$ Now note that: $$ [v_{,\beta} \cdot \zeta_{,\beta}]_{0,\alpha(\Omega')}\leq [v_{,\beta}]_{0,\alpha(\Omega')}\left\|\zeta_{,\beta}\right\|_{\infty(\Omega')}+[\zeta_{,\beta}]_{0,\alpha(\Omega')}\left\|v_{,\beta}\right\|_{\infty(\Omega')}$$ $$\leq Cd^{-4}R^{1-\alpha}\left\|v\right\|_{L^1(\Omega)}\cdot d^{-1}+ Cd^{-2}R^{1-\alpha}\cdot d^{-3}\left\|v\right\|_{L^1(\Omega)}.$$ Furthermore: $$[v\Delta \zeta]_{0,\alpha(\Omega')}\leq [v]_{0,\alpha(\Omega')}\left\|\Delta\zeta\right\|_{\infty(\Omega')}+[\Delta\zeta]_{0,\alpha(\Omega')}\left\|v\right\|_{\infty(\Omega')}$$ $$ \leq Cd^{-3}R^{1-\alpha}\left\|v\right\|_{L^1(\Omega)}\cdot d^{-2}+Cd^{-3}R^{1-\alpha}\cdot d^{-2}\left\|v\right\|_{L^1(\Omega)}.$$ Hence: $$ [\Delta(v\zeta)]_{0,\alpha(\Omega')}\leq Cd^{-5}R^{1-\alpha}\left\|v\right\|_{L^1(\Omega)}.$$ Now if $x\in \Omega'$ and $y\in \mathbb{R}^2\setminus \overline{\Omega'}$, there exists $t\in (0,1)$ such that $z=tx+(1-t)y\in \partial\Omega'$, and then we have $$ |\Delta(v\zeta)(x)-\Delta(v\zeta)(y)|\leq |\Delta(v\zeta)(x)-\Delta(v\zeta)(z)|+|\Delta(v\zeta)(z)-\Delta(v\zeta)(y)|$$ $$=|\Delta(v\zeta)(x)-\Delta(v\zeta)(z)|\leq CR^{1-\alpha}d^{-5}\left\|v\right\|_{L^1(\Omega)}|x-z|^{\alpha}$$ $$=CR^{1-\alpha}d^{-5}\left\|v\right\|_{L^1(\Omega)}(1-t)^{\alpha}|x-y|^{\alpha}\leq CR^{1-\alpha}d^{-5}\left\|v\right\|_{L^1(\Omega)}|x-y|^{\alpha}$$ (clearly if $x,y\in \mathbb{R}^2\setminus \overline{\Omega'}$, then $|\Delta(v\zeta)(x)-\Delta(v\zeta)(y)|=0$). Finally, we get: $$[\Delta(\zeta v)]_{0,\alpha(\mathbb{R}^2)}\leq CR^{1-\alpha}d^{-5}\left\|v\right\|_{L^1(\Omega)}. $$ \end{proof} \subsection{Estimates near circular boundaries} \label{se:circles} \begin{proposition} \label{prop1} Let $v$ be harmonic in $\Omega$ and $\zeta$ be a cut-off function with support within $|x|<R+\frac{2}{3}d$ and equal to $1$ for $|x|\leq R+\frac{1}{3}d$.
Then, if $u=\zeta v$: $$u(x)=C-\int_{\partial B_R}\frac{\partial u}{\partial \nu}\left( \Phi(y-x)-\phi^{x}(y) \right)dS(y)-\int_{\Omega}\Delta u \left( \Phi(y-x)-\phi^{x}(y) \right)dy.$$ \end{proposition} \begin{proof} Let us proceed as in \cite{Evans10}: $$ \int_{\Omega\setminus B_{\varepsilon}(x)}\Delta u(y)\Phi(y-x)-u(y)\Delta_{y}\Phi (y-x) dy=\int_{\partial \Omega}\frac{\partial u}{\partial \nu}\Phi(y-x)-\frac{\partial \Phi}{\partial \nu}(y-x)u(y) dS(y)$$ $$+\int_{\partial B_{\varepsilon}(x)}\frac{\partial \Phi}{\partial \nu}(y-x)u(y)-\frac{\partial u}{\partial \nu}\Phi(y-x)dS(y),$$ letting $\varepsilon \rightarrow 0$ (and using the fact that $u$ vanishes outside $B_{R+\frac{2}{3}d}$), we get: $$ \int_{\Omega}\Delta u(y)\Phi(y-x) dy= \int_{\partial B_R}\frac{\partial \Phi}{\partial \nu}(y-x)u(y)-\frac{\partial u}{\partial \nu}\Phi(y-x) dS(y)-u(x).$$ Hence: $$ u(x)=\int_{\partial B_R}\frac{\partial \Phi}{\partial \nu}(y-x)u(y)-\frac{\partial u}{\partial \nu}\Phi(y-x) dS(y)-\int_{\Omega}\Delta u(y)\Phi(y-x) dy,$$ with the normal pointing outside $B_R$. Now (as can be seen in \cite{DiBenedetto09}), if a function $\phi^{x}(y)$ satisfies: \begin{align} \label{benedetto} \left\{ \begin{aligned} -\Delta_{y}\phi^{x}(y)&=k& &\text{if $y\in \Omega$,} \\ \frac{\partial \phi^{x}}{\partial \nu} &=\frac{\partial \Phi}{\partial \nu}(y-x) & &\text{if $y\in \partial B_R$,} \end{aligned} \right. \end{align} with $k$ being a constant, then: $$ \int_{\Omega}\Delta_{y}\phi^{x}(y)u(y)-\Delta u \phi^{x}(y)dy=\int_{\partial \Omega}u(y)\frac{\partial}{\partial \nu}\phi^{x}(y)-\phi^{x}(y)\frac{\partial u}{\partial \nu} dS(y)$$ $$=\int_{\partial B_R}\phi^{x}(y)\frac{\partial u}{\partial \nu}-u(y)\frac{\partial}{\partial \nu}\Phi(y-x) dS(y)=-k\int_{\Omega}u dy -\int_{\Omega}\Delta u \phi^{x}(y)dy,$$ \noindent where we have used \eqref{benedetto}. Finally, replacing in the expression for $u(x)$, we obtain: $$ u(x)=C-\int_{\partial B_R}\frac{\partial u}{\partial \nu}\left( \Phi(y-x)-\phi^{x}(y) \right)dS(y)-\int_{\Omega}\Delta u \left( \Phi(y-x)-\phi^{x}(y) \right)dy .$$ It is easy to see that $\phi^x(y)=\frac{1}{2\pi}\log(|y-x^{*}|)-\frac{|y|^2}{4\pi R^2}$ satisfies \eqref{benedetto} with $k=\frac{1}{\pi R^2}$: the first equation holds because $y\mapsto \log(|y-x^{*}|)$ is harmonic in $\Omega$ (since $|x^{*}|=R^2/|x|<R$) and $\Delta_{y}|y|^2=4$, while the Neumann condition follows from the identity $|x_1||x_2-x_1^{*}|=|x_2||x_1-x_2^{*}|$. \end{proof} \begin{proposition} \label{prop3} Let $f\in C_c^{0,\alpha}(\Omega')$, $R\geq Cd$ and $u=\int_{\mathbb{R}^2}f(y)\Phi(x-y)dy$, then:\\ $ \left\|D u\right\|_{\infty(\mathbb{R}^2)}\leq CR \left\|f\right\|_{\infty} .$\\ $[Du]_{0,\alpha (B(0,R+d)\setminus\overline{B(0,R)})}\leq CR^{1-\alpha}\left\|f\right\|_{\infty}.$\\ $ \left\|\partial_{\beta\gamma}^{2} u\right\|_{\infty(B(0,R+d)\setminus\overline{B(0,R)})}\leq CR^{\alpha}[f]_{0,\alpha(\mathbb{R}^2)}+\frac{\delta_{\beta\gamma}}{2}\left\|f\right\|_{\infty} .$\\ $ [D^2 u]_{0,\alpha(B(0,R+d)\setminus\overline{B(0,R)})}\leq C[f]_{0,\alpha(\mathbb{R}^2)}. $ \end{proposition} \begin{proof} Let us estimate the first derivative: $$|u_{,\beta}|\leq \left\|f\right\|_{\infty}\int_{\Omega'}\frac{dy}{|x-y|}\leq C\left\|f\right\|_{\infty}\int_{0}^{2R+\frac{5}{3}d}dr\leq CR\left\|f\right\|_{\infty} .$$ Now let us estimate the H\"older seminorm of the derivatives: let $$v_{\rho}=\int_{\mathbb{R}^2\setminus B(x,\rho)}f(y)\Phi_{,\beta}(x-y)dy,$$ with $\rho\in (0,2(R+d))$, then: $$|u_{,\beta}-v_{\rho}|\leq C\left\|f\right\|_{\infty}\int_{ B(x,\rho)}|x-y|^{-1}dy \leq C\left\|f\right\|_{\infty}\rho\leq C\left\|f\right\|_{\infty}\rho^{\alpha}R^{1-\alpha}.$$ On the other hand: $$\frac{\partial v_{\rho}}{\partial \gamma}=\int_{\mathbb{R}^2\setminus B(x,\rho)}f(y)\Phi_{,\beta\gamma}(x-y)dy-\int_{\partial B(x,\rho)}f(y)\Phi_{,\beta}(x-y)\nu_{\gamma}dS(y) ,$$ therefore (recall that $f$ is supported in $\Omega'\subset B(x,2(R+d))$): $$ \left|\frac{\partial v_{\rho}}{\partial \gamma}\right|\leq C\left\|f\right\|_{\infty}\left( \int_{B(x,2(R+d))\setminus B(x,\rho)}|x-y|^{-2}dy+\int_{\partial B(x,\rho)}|x-y|^{-1}dS(y)\right)$$ $$\leq C\left\|f\right\|_{\infty}\left(1+ \int_{B(x,2(R+d))\setminus B(x,\rho)}|x-y|^{-2}dy \right) $$ $$\leq C\left\|f\right\|_{\infty}\left(1+ \left|\log\left(\frac{R}{\rho}\right)\right| \right)\leq C\left\|f\right\|_{\infty}\left(1+ \left(\frac{R}{\rho}\right)^{1-\alpha} \right).$$ (Note that $\frac{R}{\rho}$ is bounded from below by a positive constant, since $\rho< 2(R+d)\leq CR$.) Finally, if $|x-y|=\rho$: $$|u_{,\beta}(x)-u_{,\beta}(y)|\leq |u_{,\beta}(x)-v_{\rho}(x)|+|v_{\rho}(x)-v_{\rho}(y)|+|v_{\rho}(y)-u_{,\beta}(y)|$$ $$\leq C\left\|f\right\|_{\infty}\rho^{\alpha}R^{1-\alpha}+C|x-y|\left\|f\right\|_{\infty}\left(1+ \left(\frac{R}{\rho}\right)^{1-\alpha}\right)$$ $$\leq C\left\|f\right\|_{\infty}\rho^{\alpha}R^{1-\alpha},$$ where we have used that $\rho\leq CR$. To prove the third estimate, first note that the second derivatives of $u$ are given by: $$u_{,\beta\gamma}=\lim_{\rho\rightarrow 0^+}\int_{\mathbb{R}^2\setminus B(x,\rho)}\Phi_{,\beta\gamma}(x-y)f(y)dy-\frac{\delta_{\beta\gamma}}{2}f.$$ Since $f\in C_{c}^{0,\alpha}$ (and using the fact that $\int_{\partial B(0,1)}\Phi_{,\beta\gamma}(z)dS(z)=0$, and $\int_{A}\Phi_{,\beta\gamma}(z)dz= 0$ if $A$ is any annulus centered at the origin), the absolute value of the singular integral is bounded by: $$ \left| \lim_{\rho\rightarrow 0^+}\int_{B(x,2R+\frac{5}{3}d)\setminus B(x,\rho)}(f(y)-f(x))\Phi_{,\beta\gamma}(x-y)dy \right|$$ $$\leq \lim_{\rho\rightarrow 0^+} [f]_{0,\alpha}\int_{\partial B(0,1)}|\Phi_{,\beta\gamma}(\omega)|dS(\omega)\int_{\rho}^{2R+\frac{5}{3}d}r^{\alpha-1}dr\leq CR^{\alpha}[f]_{0,\alpha};$$ this proves the third estimate (obviously we have $\left\|\frac{\delta_{\beta\gamma}}{2}f\right\|_{\infty}\leq \frac{1}{2}\left\|f\right\|_{\infty}$). To prove the last estimate, we proceed as in \cite[Thm.\ 2.6.4]{Morrey66}: first note that if we set $\Delta(x):=\Phi_{,ij}(x)$ (here $\Delta$ denotes Morrey's kernel, not the Laplacian), $\omega(x)=u_{,ij}(x)+\frac{\delta_{ij}}{n}f(x)$ with $n=2$, and $$\omega_{\rho}(x)=\int_{\mathbb{R}^n\setminus B(x,\rho)}\Delta(x-\xi)f(\xi)d\xi,$$ then: $$|\omega_{\sigma}(x)-\omega_{\rho}(x)|\leq \int_{B(x,\rho)\setminus B(x,\sigma)}|\Delta(x-\xi)|[f]_{0,\alpha}|x-\xi|^{\alpha}d\xi\leq CM_0[f]_{0,\alpha}\rho^{\alpha},$$ where $M_0=\sup_{|x|=1}|\Delta(x)|$. If we let $\sigma \rightarrow 0$, we obtain: $$|\omega(x)-\omega_{\rho}(x)|\leq CM_0[f]_{0,\alpha}\rho^{\alpha}.$$ Let $M=3R+3d$ and $M_1=\sup_{|x|=1}|\nabla\Delta(x)|$.
The derivatives of $\omega_{\rho}$ are given by: $$\omega_{\rho,\beta}(x)=\int_{\mathbb{R}^n\setminus B(x,\rho)}\Delta_{,\beta}(x-\xi)f(\xi)d\xi-\int_{\partial B(x,\rho)}\Delta(x-\xi)f(\xi)d\xi_{\beta}^{'}$$ $$= \int_{B(x,M)\setminus B(x,\rho)}\Delta_{,\beta}(x-\xi)(f(\xi)-f(x))d\xi+\int_{\partial B(x,M)}\Delta(x-\xi)(f(\xi)-f(x))d\xi_{\beta}^{'}$$ $$+\int_{\partial B(x,\rho)}\Delta(x-\xi)(f(x)-f(\xi))d\xi_{\beta}^{'}.$$ Note that: $$\int_{\partial B(x,M)}\Delta(x-\xi)f(\xi)d\xi_{\beta}^{'}=0.$$ Let $x,z\in B(0,R+d)$ and $\rho=|x-z|$, then: $$|\nabla \omega_{\rho}|\leq C(M_0+M_1)[f]_{0,\alpha}(\rho^{\alpha-1}+M^{\alpha-1})\leq C(M_0+M_1)[f]_{0,\alpha}\rho^{\alpha-1}.$$ Thus (applying the mean value theorem): $$|\omega(x)-\omega(z)|\leq |\omega(x)-\omega_{\rho}(x)|+|\omega_{\rho}(x)-\omega_{\rho}(z)| +|\omega_{\rho}(z)-\omega(z)|\leq C(M_0+M_1)[f]_{0,\alpha}\rho^{\alpha},$$ which yields $[\omega]_{0,\alpha}\leq C(M_0+M_1)[f]_{0,\alpha}$. \end{proof} \begin{lem} \label{lemma2} Let $u=\int_{\mathbb{R}^2}f(y)\log|x^{*}-y|dy$ with $f\in C_c^{0,\alpha}(B_{R+\frac{2}{3}d}\setminus\overline{B_{R+\frac{d}{3}}})$, $R\geq Cd$. Then:\\ $\left\| Du \right\|_{L^{\infty}(B_{R+d}\setminus \overline{B_R})}\leq C R\left\|f\right\|_{\infty}$.\\ $[ D u]_{0,\alpha(B_{R+d}\setminus \overline{B_R})} \leq CR^{2-\alpha}d^{-1}\left\|f\right\|_{\infty}$.\\ $\left\| D^2 u \right\|_{L^{\infty}(B_{R+d}\setminus \overline{B_R})} \leq CRd^{-1}\left\|f\right\|_{\infty}$.\\ $[ D^2 u]_{0,\alpha(B_{R+d}\setminus \overline{B_R})} \leq CR^{2-\alpha}d^{-2}\left\|f\right\|_{\infty}$. \end{lem} \begin{proof} Using the identity $|x_1||x_1^{*}-x_2|=|x_2||x_1-x_2^{*}|$, let us first note that: \begin{align} \label{log-reflection} \log|y-x^{*}|=\log|y^{*}-x|+\log|y|-\log|x|. \end{align} This implies that $$ u=C+\int_{\mathbb{R}^2}\log|x-y^{*}|f(y)dy-\log|x|\int_{\mathbb{R}^2}f(y)dy ,$$ \noindent and then: $$ |u_{,\beta}|\leq C\int_{\Omega'}\frac{|f(y)|dy}{|x-y^{*}|} + \frac{C}{|x|}\left\|f\right\|_{\infty}Rd\leq C\int_{\Omega'}\frac{|f(y)|dy}{|x|-|y^{*}|} + \frac{C}{|x|}\left\|f\right\|_{\infty}Rd$$ $$\leq CRd\frac{\left\|f\right\|_{\infty}}{R-\frac{R^2}{R+\frac{d}{3}}}+Cd\left\|f\right\|_{\infty}\leq CR\left\|f\right\|_{\infty}.$$ \noindent The other estimates are proved analogously (for the H\"older continuity we can use the same argument as in \cite[Prop.\ 5.1]{CH19a}). \end{proof} \begin{proposition} \label{prop4} Let $f\in C_c^{0,\alpha}(B_{R+\frac{2}{3}d}\setminus\overline{B_{R+\frac{d}{3}}})$, $R\geq Cd$ and $u=\int_{\mathbb{R}^2}f(y)G_N(x,y)dy$, then (in $B_{R+d}\setminus \overline{B_R}$):\\ $ \left\|Du\right\|_{\infty}\leq CR\left\|f\right\|_{\infty} .$\\ $ [D u]_{0,\alpha} \leq CR^{2-\alpha}d^{-1}\left\|f\right\|_{\infty}.$\\ $ \left\|D^2u\right\|_{\infty} \leq C(Rd^{-1}\left\|f\right\|_{\infty}+R^{\alpha}[f]_{0,\alpha}).$\\ $ [D^2 u]_{0,\alpha} \leq C(R^{2-\alpha}d^{-2}\left\|f\right\|_{\infty}+[f]_{0,\alpha}).$ \end{proposition} \begin{proof} It follows from Proposition \ref{prop3} and Lemma \ref{lemma2}. \end{proof} \begin{proposition} \label{prop10} Let $g\in C^{1,\alpha}(\partial B_R)$ and $u=\int_{\partial B_R}g\log|y-x|dS$, then (for $R<|x|<R+d$, with $d\leq R$):\\ $\left\|Du\right\|_{\infty} \leq C (\left\|g\right\|_{\infty}+R^{\alpha}[g]_{0,\alpha}).$\\ $ [D u]_{0,\alpha}\leq C(R^{-\alpha}\left\|g\right\|_{\infty}+[g]_{0,\alpha}). $\\ $ \left\|D^2 u\right\|_{\infty} \leq C(R^{-1}\left\|g\right\|_{\infty}+R^{\alpha-1}[g]_{0,\alpha}+\left\|g'\right\|_{\infty}+R^{\alpha}[g']_{0,\alpha}). $\\ $ [D^2 u]_{0,\alpha}\leq C(R^{-1-\alpha}\left\|g\right\|_{\infty}+R^{-1}[g]_{0,\alpha}+R^{-\alpha}\left\|g'\right\|_{\infty}+[g']_{0,\alpha}). $\\ \end{proposition} \begin{proof} It follows from Proposition \ref{prop9} by a rescaling argument. \end{proof} \begin{proposition} \label{prop11} Let $g\in C^{1,\alpha}(\partial B_R)$ and $u=\int_{\partial B_R}gG_N(x,y)dS(y)$, then:\\ $\left\|Du\right\|_{\infty(B(0,R+d)\setminus\overline{B(0,R)})} \leq C (\left\|g\right\|_{\infty}+R^{\alpha}[g]_{0,\alpha}).$\\ $ [D u]_{0,\alpha(B(0,R+d)\setminus\overline{B(0,R)})}\leq C(R^{-\alpha}\left\|g\right\|_{\infty}+[g]_{0,\alpha}) .$\\ $ \left\|D^2 u\right\|_{\infty(B(0,R+d)\setminus\overline{B(0,R)})} \leq C(R^{-1}\left\|g\right\|_{\infty}+R^{\alpha-1}[g]_{0,\alpha}+\left\|g'\right\|_{\infty}+R^{\alpha}[g']_{0,\alpha}) . $\\ $ [D^2 u]_{0,\alpha(B(0,R+d)\setminus\overline{B(0,R)})}\leq C(R^{-1-\alpha}\left\|g\right\|_{\infty}+R^{-1}[g]_{0,\alpha}+R^{-\alpha}\left\|g'\right\|_{\infty}+[g']_{0,\alpha}) .$ \end{proposition} \begin{proof} Thanks to \eqref{log-reflection} we have, for $y\in \partial B_R$: $$G_N(x,y)=-\frac{1}{\pi}\log|y-x|+\frac{1}{2\pi}\log\frac{|x|}{R}-\frac{|y|^2}{4\pi R^2}. $$ The estimates for $u$ then follow from Proposition \ref{prop10} and estimates for $\log|x|$ (for the H\"older continuity, we can proceed as in \cite[Prop.\ 5.1]{CH19a}). \end{proof} \subsection{A trace theorem and the $L^1$ norm} The proofs of the following two results can be found in \cite[Lemma 5.2, Prop.\ 5.5]{CH19a}. \begin{lem} \label{trace} Let $\phi\in H^{1}(B_{\rho_2}\setminus \overline{B_{\rho_1}})$ for some $0<\rho_1<\rho_2$. Then (for $i=1,2$): $$\int_{\partial B_{\rho_i}}\phi^2(x)dS(x)\leq \frac{8}{\rho_2-\rho_1}\int_{B_{\rho_2}\setminus \overline{B_{\rho_1}}}\phi^2(x)dx+4(\rho_2-\rho_1)\int_{B_{\rho_2}\setminus \overline{B_{\rho_1}}}|D\phi|^2(x)dx. $$ \end{lem} \begin{proposition} \label{prop12} Let $E$, $d$, and $B$ be as in \eqref{eq:genericE}, \eqref{eq:d}, and \eqref{eq:B}. Suppose \[ \begin{cases} \Delta u =0 \text{ in } E,\\[0.5em] \displaystyle \frac{\partial u}{\partial \nu}=g \text{ on } \partial E \end{cases} \] and $\displaystyle \int_{E}u(y)dy=0$. Then: $ \Vert u \Vert_{L^{1}(E)}\leq C\cdot B\Vert g\Vert_{\infty}. $ \end{proposition} \subsubsection*{Regularity near the holes} \begin{proposition} \label{prop13} Let $B$ and $u$ be as in \eqref{eq:B} and Proposition \ref{prop12}, then, if $A=\cup_{k=1}^{n}B(z_k,r_k+\frac{d}{3})\
1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. 
controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=0cm, yshift=-0.5cm] \mult(0.5,0)[1,1]; \draw (2,0) -- (2,-1); \doublesinglemap(1,-1)[\scriptstyle w]; \doublesinglemap(3,-1)[\scriptstyle w]; \draw (3,0) -- (3,-1); \draw (4,0) -- (4,-1); \mult(1.5,-2)[2,1.5]; \singledoublemap(2,-3.5)[\scriptstyle \tilde{w}]; \end{scope} \begin{scope}[xshift=4.8cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=5.7cm, yshift=-0.25cm] \map(0,0)[\scriptstyle \jmath]; \mult(0,-1)[1.5,1]; \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \draw (3.5,-1) -- (3.5,-2); \mult(0.75,-2)[2.75,2]; \singledoublemap(1.625,-4)[\scriptstyle \tilde{w}]; \end{scope} \begin{scope}[xshift=10.5cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=11.4cm, yshift=0cm] \map(0,0)[\scriptstyle \jmath]; \mult(1.5,-1)[2,1.5]; \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \draw (0,-1) -- (0,-2.5); \mult(0,-2.5)[2.5,2]; \singledoublemap(0.75,-4.5)[\scriptstyle \tilde{w}]; \end{scope} \begin{scope}[xshift=16.2cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=16.9cm, yshift=-0.25cm] \mult(1.5,-1)[2,1.5]; \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \draw (0,0) -- (0,-3.5); \singledoublemap(2,-2.5)[\scriptstyle \tilde{w}]; \draw (3,-3.5) -- (3,-5); \mult(0,-3.5)[2,1.5]; \end{scope} \begin{scope}[xshift=21.4cm, yshift=-2.8cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ the morphism $\tilde{\mu}$ is left $A$-linear. We next prove that $\tilde{\mu}$ is right $H$-colinear. By Remarks~\ref{w_E es lineal a izquierda} and~\ref{w'E es E_C lineal a izq} we know that $w$ and $\tilde{w}$ are right $H$-colinear. Since $\mu_E$ is also right $H$-colinear, we have $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. 
controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. 
(doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=0cm, yshift= -0.5cm] \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \mult(1.5,-1)[2,1.5]; \singledoublemap(2,-2.5)[\scriptstyle \tilde{w}]; \draw (2,-3.5) -- (2,-4.5); \comult(3,-3.5)[1]; \end{scope} \begin{scope}[xshift=4.8cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.7cm, yshift=-0.7cm] \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \mult(1.5,-1)[2,1.5]; \rcoaction(2.5,-2)[1,1]; \singledoublemap(2,-3)[\scriptstyle \tilde{w}]; \draw (4,-3) -- (4,-4); \end{scope} \begin{scope}[xshift=9.5cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=10.4cm, yshift=-0.2cm] \doublesinglemap(0,0)[\scriptstyle w]; \doublesinglemap(2,0)[\scriptstyle w]; \rcoaction(0.5,-1)[0,1]; \rcoaction(2.5,-1)[0,1]; \draw (0.5,-2) -- (0.5,-3); \braid(1.5,-2)[1]; \draw (3.5,-2) -- (3.5,-3); \mult(0.5,-3)[1,1]; \mult(2.5,-3)[1,1]; \singledoublemap(0.5,-4)[\scriptstyle \tilde{w}]; \draw (3,-4) -- (3,-5); \end{scope} \begin{scope}[xshift=14.4cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=15.2cm, yshift=0cm] \draw (0,0) -- (0,-1); \comult(1.5,0)[1]; \draw (3,0) -- (3,-1); \comult(4.5,0)[1]; \doublesinglemap(0,-1)[\scriptstyle w]; \draw (2,-1) -- (2,-2); \doublesinglemap(3,-1)[\scriptstyle w]; \draw (5,-1) -- (5,-3.5); \draw (0.5,-2) -- (0.5,-3.5); \braid(2,-2)[1.5]; \mult(0.5,-3.5)[1.5,1]; \mult(3.5,-3.5)[1.5,1]; \singledoublemap(0.75,-4.5)[\scriptstyle \tilde{w}]; \draw (4.25,-4.5) -- (4.25,-5.5); \end{scope} \begin{scope}[xshift=20.7cm, yshift=-2.8cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=21.6cm, yshift=0.3cm] \draw (0,0) -- (0,-3); \comult(1.5,0)[1]; \draw (3,0) -- (3,-1); \comult(4.5,0)[1]; \draw (4,-1) -- (4,-2); \draw (5,-1) -- (5,-3); \draw (1,-1) -- (1,-3); \braid(2,-1)[1]; \draw (2,-2) -- (2,-3); \braid(3,-2)[1]; \doublesinglemap(0,-3)[\scriptstyle w]; \doublesinglemap(2,-3)[\scriptstyle w]; \mult(4,-3)[1,1]; \mult(0.5,-4)[2,1]; \singledoublemap(1,-5)[\scriptstyle \tilde{w}]; \draw (4.5,-4) -- (4.5,-6); \end{scope} \begin{scope}[xshift=26.9cm, yshift=-3cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ as desired. 
\end{proof} \begin{theorem}\label{teo 1} Let $\tilde{\mu}$ and $\tilde{\nu}$ be as in Proposition~\ref{multiplicacon en Ec ot C es Ec lineal a izq}. The morphisms $$ \rho\colon H\ot A\longrightarrow A\qquad\text{and}\qquad f\colon H\ot H\longrightarrow A, $$ defined by $$ \rho\coloneqq (A\ot\epsilon)\xcirc \tilde{\mu}\xcirc \bigl(\eta_A\ot H\ot \jmath'_{\tilde{\nu}}\bigr)\quad\text{and}\quad f\coloneqq (A\ot\epsilon)\xcirc \tilde{\mu}\xcirc \bigl(\eta_A\ot H\ot \eta_A\ot H\bigr), $$ where $\jmath'_{\tilde{\nu}}\coloneqq (\mu_A\ot H)\xcirc (A\ot \tilde{\nu})$, satisfy the following properties: \begin{enumerate}[itemsep=0.7ex, topsep=1.0ex, label=\emph{(\arabic*)}] \item $\rho$ is a weak measure of $H$ on $A$, \item $f$ is a cocycle that satisfies the twisted module condition, \item $f = \mu_A\xcirc(A \ot\rho)\xcirc (f\ot \mu\ot A)\xcirc (\Delta_{H\ot H}\ot\eta_A)$, \item $\rho\xcirc (H\ot \eta_A) = \mu_A\xcirc (\rho\ot f) \xcirc (H\ot c\ot H)\xcirc (\Delta\ot \tilde{\nu})$, \item $\rho\xcirc (H\ot \eta_A) = \mu_A\xcirc (A\ot f)\xcirc (\tilde{\nu}\ot H)$, \item $(\mu_A\ot H)\xcirc (A\ot\rho\ot H)\xcirc (A\ot H\ot c)\xcirc (A\ot \Delta\ot A)\xcirc(\tilde{\nu}\ot A) = (\mu_A\ot H)\xcirc (A\ot \tilde{\nu})$. \end{enumerate} Moreover $$ \tilde{\mu}=\mu_{A\ot_{\rho}^f H},\qquad E \simeq A\times_{\rho}^f H \qquad \text{and} \qquad \gamma = w_{\gamma} \xcirc (\eta_A\ot H). $$ \end{theorem} \begin{proof} Since $\tilde{\nu} = \tilde{w}_{\gamma^{-1}}^E \xcirc \eta_E = (p_{\gamma^{-1}}^E\ot H)\xcirc \delta_E \xcirc \eta_E$, from the equality in Proposition~\ref{wbialgebras}(6), it follows that $\tilde{\nu} = (A\ot\Pi^{\hs L})\xcirc \tilde{\nu}$. Thus, by \cite{FGR}*{Theorems~3.11 and~4.2}, in order to prove the result it suffices to note that, by Proposition~\ref{multiplicacon en Ec ot C es Ec lineal a izq}, the map $\tilde{\mu}$ is left $A$-linear, right $H$-colinear and associative, the map $\tilde{\nu}$ is a preunit of $\tilde{\mu}$; and $\tilde{\mu}$ is normalized with respect to $\nabla_{\!\tilde{\nu}}$. \end{proof} \begin{remark}\label{calculos} Set $w\coloneqq w_{\gamma}^E$, $\tilde{w}\coloneqq \tilde{w}_{\gamma^{-1}}^E$ and $p\coloneqq p_{\gamma^{-1}}^E$. By Remark~\ref{w'E es E_C lineal a izq}, $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. 
controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. 
(doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=-1.2cm, yshift=-1.3cm] \node at (0,0){$\jmath'_{\tilde{\nu}}=$}; \end{scope} \begin{scope}[xshift=0cm, yshift=0cm] \draw (0,0) -- (0,-2); \unit(1.5,0); \singledoublemap(1,-1)[\scriptstyle \tilde{\nu}]; \mult(0,-2)[1]; \draw (2,-2) -- (2,-3); \end{scope} \begin{scope}[xshift=2.8cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=3.5cm, yshift=0cm] \draw (0,0) -- (0,-2); \unit(1.5,0); \singledoublemap(1,-1)[\scriptstyle \tilde{w}]; \mult(0,-2)[1]; \draw (2,-2) -- (2,-3); \end{scope} \begin{scope}[xshift=6.3cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=7.4cm, yshift=0cm] \map(0,0)[\scriptstyle \jmath]; \unit(1,0); \mult(0,-1)[1]; \singledoublemap(0,-2)[\scriptstyle \tilde{w}]; \end{scope} \begin{scope}[xshift=9.2cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=10.3cm, yshift=-0.5cm] \map(0.5,0)[\scriptstyle \jmath]; \singledoublemap(0,-1)[\scriptstyle \tilde{w}]; \end{scope} \begin{scope}[xshift=11.9cm, yshift=-1.3cm] \node at (0,0){.}; \end{scope} \end{tikzpicture} $$ Consequetly, by the definitions of $\tilde{\mu}$, $w$, $\tilde{w}$, $\rho$ and $f$, we have $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. 
controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. 
controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=-0.5cm, yshift=-1.3cm] \node at (0,0){$\rho =$}; \end{scope} \begin{scope}[xshift=-0.3cm, yshift= 0cm] \unit(1,2); \draw (1,1) -- (1,0); \draw (2,2) -- (2,0); \map(3.5,2)[\scriptstyle \jmath]; \singledoublemap(3,1)[\scriptstyle \tilde{w}]; \doublesinglemap(1,0)[\scriptstyle w]; \doublesinglemap(3,0)[\scriptstyle w]; \mult(1.5,-1)[2,1.5]; \singledoublemap(2,-2.5)[\scriptstyle \tilde{w}]; \draw (2,-3.5) -- (2,-4.5); \counit(3,-3.5); \end{scope} \begin{scope}[xshift=4.4cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.1cm, yshift=0.5cm] \unit(1,1); \draw (2,1) -- (2,0); \draw (3.5,1) -- (3.5,0); \map(3.5,0)[\scriptstyle \jmath]; \doublesinglemap(1,0)[\scriptstyle w]; \mult(1.5,-1)[2,1.5]; \singledoublemap(2,-2.5)[\scriptstyle \tilde{w}]; \draw (2,-3.5) -- (2,-4.5); \counit(3,-3.5); \end{scope} \begin{scope}[xshift=8.6cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=8.6cm, yshift=0.75cm] \unit(1,1); \map(1,0)[\scriptstyle \jmath]; \draw (2,1) -- (2,0); \map(2,0)[\scriptstyle \gamma]; \draw (3,1) -- (3,0); \map(3,0)[\scriptstyle \jmath]; \mult(1,-1)[1,1]; \draw (3,-1) -- (3,-2); \mult(1.5,-2)[1.5,1.5]; \rcoaction(2.25,-3)[0,1]; \counit(3.25,-4); \map(2.25,-4)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=12.5cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=11.5cm, yshift=0.3cm] \map(2,0)[\scriptstyle \gamma]; \map(3,0)[\scriptstyle \jmath]; \mult(2,-1)[1,1]; \map(2.5,-2)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=16.7cm, yshift=-1.3cm] \node at (0,0){and}; \end{scope} \begin{scope}[xshift=19cm, yshift=-1.3cm] \node at (0,0){$f = $}; \end{scope} \begin{scope}[xshift=19.1cm, yshift= 0.5cm] \unit(1,1); \draw (2,1) -- (2,0); \doublesinglemap(1,0)[\scriptstyle w]; \unit(3,1); \draw (4,1) -- (4,0); \doublesinglemap(3,0)[\scriptstyle w]; \mult(1.5,-1)[2,1.5]; \singledoublemap(2,-2.5)[\scriptstyle \tilde{w}]; \draw (2,-3.5) -- (2,-4.5); \counit(3,-3.5); \end{scope} \begin{scope}[xshift=23.8cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=23.8cm, yshift=1cm] \unit(1,1); \map(1,0)[\scriptstyle \jmath]; \draw (2,1) -- (2,0); 
\map(2,0)[\scriptstyle \gamma]; \unit(3,1); \draw (4,1) -- (4,0); \map(4,0)[\scriptstyle \gamma]; \map(3,0)[\scriptstyle \jmath]; \mult(1,-1)[1,1]; \mult(3,-1)[1,1]; \mult(1.5,-2)[2,1.5]; \rcoaction(2.5,-3)[0,1]; \counit(3.5,-4); \map(2.5,-4)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=28.7cm, yshift=-1.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=27.7cm, yshift=0.3cm] \map(2,0)[\scriptstyle \gamma]; \map(3,0)[\scriptstyle \gamma]; \mult(2,-1)[1,1]; \map(2.5,-2)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=31.3cm, yshift=-1.3cm] \node at (0,0){.}; \end{scope} \end{tikzpicture} $$ Moreover $\jmath_{\tilde{\nu}} = w\xcirc \tilde{w}\xcirc \jmath = \jmath$. \end{remark} \begin{remark}\label{prev} By Remark~\ref{igualdad 0} and the fact that $\jmath \xcirc p_{\gamma^{-1}}^E = q_{\gamma^{-1}}^E$, $\gamma\xcirc \eta = \eta _E$ and $\gamma * \gamma^{-1} = \gamma\xcirc \Pi^L$, we have $$ \jmath \xcirc p_{\gamma^{-1}}^E\xcirc \eta_E = q_{\gamma^{-1}}^E\xcirc \gamma\xcirc \eta = (\gamma * \gamma^{-1})\xcirc \eta = \gamma\xcirc \Pi^L\xcirc\eta = \gamma\xcirc \eta = \eta_E = \jmath\xcirc \eta_A. $$ Since $\jmath$ is a monomorphism, this implies that $p_{\gamma^{-1}}^E\xcirc \eta_E = \eta_A$. Consequently $$ p_{\gamma^{-1}}^E\xcirc \jmath = p_{\gamma^{-1}}^E\xcirc \mu_E \xcirc (\jmath \ot \eta_E) = \mu_A\xcirc (A \ot p_{\gamma^{-1}}^E) \xcirc (A \ot \eta_E) = \ide_A, $$ where the second equality holds by Proposition~\ref{q y multiplicacion nivel 1}. \end{remark} \begin{remark}By Theorem~\ref{teo 1} the map $\gamma$ satisfies Lemma~\ref{caso particular} and Proposition~\ref{multiplicacion y gamma}. We will use these facts in the proof of Proposition~\ref{es un H modulo algebra debil} and Lemma~\ref{algo}. \end{remark} \begin{proposition}\label{es un H modulo algebra debil} The algebra $A$ is a left weak $H$-module algebra. \end{proposition} \begin{proof} Since $\rho$ is a weak measure we know that the equality in Proposition~\ref{wbialgebras'}(2) is satisfied. Thus, in order to finish the proof it suffices to check that the equalities in items~(1), (3) and~(7)~of Proposition~\ref{wbialgebras'} also are satisfied. For the legibility in the diagrams we set $p\coloneqq p_{\gamma^{-1}}^E$ and $q\coloneqq q_{\gamma^{-1}}^E$. For the equality in Proposition~\ref{wbialgebras'}(1) we have $$ \rho\xcirc (\eta\ot A) = p\xcirc \mu_E \xcirc (\gamma\ot \jmath) \xcirc (\eta\ot A) = p\xcirc \jmath = \ide_A, $$ where the first equality holds by Remark~\ref{calculos}; the second one, since $\gamma\xcirc \eta = \eta_E$; and the last one, by Remark~\ref{prev}. We next prove that the equality in Proposition~\ref{wbialgebras'}(3) is true. We have $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. 
(#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. 
(#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=0cm, yshift= 0cm] \draw (1,0) -- (1,-1); \draw (0,0) -- (0,-1); \unit(2,0); \map(0,-1)[\scriptstyle \gamma]; \map(1,-1)[\scriptstyle \gamma]; \map(2,-1)[\scriptstyle \jmath]; \mult(1,-2)[1,1]; \map(1.5,-3)[\scriptstyle p]; \map(1.5,-4)[\scriptstyle \jmath]; \mult(0,-5)[1.5,1]; \map(0.75,-6)[\scriptstyle q]; \draw (0,-2) -- (0,-5); \end{scope} \begin{scope}[xshift=2.8cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=3.8cm, yshift=-1.5cm] \map(0,0)[\scriptstyle \gamma]; \map(1,0)[\scriptstyle \gamma]; \draw (0,-1) -- (0,-2); \map(1,-1)[\scriptstyle q]; \mult(0,-2)[1,1]; \map(0.5,-3)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=5.7cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=6.7cm, yshift=-1.5cm] \map(0,0)[\scriptstyle \gamma]; \map(1,-1)[\scriptstyle \gamma]; \draw (0,-1) -- (0,-2); \map(1,0)[\scriptstyle \Pi^{\!L}]; \mult(0,-2)[1,1]; \map(0.5,-3)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=8.6cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=9.6cm, yshift=-1.5cm] \draw (0,0) -- (0,-1); \map(0.5,-2)[\scriptstyle \gamma]; \map(1,0)[\scriptstyle \Pi^{\!L}]; \mult(0,-1)[1,1]; \map(0.5,-3)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=11.5cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=12.5cm, yshift=-1.5cm] \draw (0,0) -- (0,-1); \map(0.5,-3)[\scriptstyle \gamma]; \map(1,0)[\scriptstyle \Pi^{\!L}]; \mult(0,-1)[1,1]; \map(0.5,-2)[\scriptstyle \Pi^{\!L}]; \end{scope} \begin{scope}[xshift=14.4cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=15.4cm, yshift=-1cm] \map(0.5,-3)[\scriptstyle \gamma]; \mult(0,-1)[1,1]; \map(0.5,-2)[\scriptstyle \Pi^{\!L}]; \end{scope} \begin{scope}[xshift=17cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=17.8cm, yshift=-2cm] \mult(0,0)[1,1]; 
\map(0.5,-1)[\scriptstyle \gamma]; \map(0.5,-2)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=19.5cm, yshift=-3.5cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=20.2cm, yshift=-1.5cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \unit(2,0); \mult(0.5,-2)[1.5,1]; \map(1.25,-3)[\scriptstyle q]; \map(2,-1)[\scriptstyle \jmath]; \end{scope} \begin{scope}[xshift=23cm, yshift=-3.5cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first and last equality hold since $\jmath$ and $\mu_E$ are unitary and $\jmath\xcirc p = q$; the second, fourth and sixth one, by Remark~\ref{igualdad 0} and the fact that $\gamma * \gamma^{-1} = \gamma\xcirc \Pi^L$; the third one, by Proposition~\ref{multiplicacion y gamma}(1); and the fifth one, by Proposition~\ref{mu delta Pi^R, etc}. This, combined with Remark~\ref{calculos} and the fact that $\jmath\xcirc p = q$ and $\jmath$ is a monomorphism, proves Proposition~\ref{wbialgebras'}(3). Finally, the equality in Proposition~\ref{wbialgebras'}(6) is satisfied, since $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. 
(#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. 
(#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=0cm, yshift= 0cm] \map(0,0)[\scriptstyle \cramped{\Pi^{\!L}}]; \map(0,-1)[\scriptstyle \gamma]; \unit(1,0); \map(1,-1)[\scriptstyle \jmath]; \mult(0,-2)[1,1]; \map(0.5,-3)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=1.9cm, yshift=-2.1cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=3.1cm, yshift=-0.6cm] \map(0,0)[\scriptstyle \cramped{\Pi^{\!L}}]; \map(0,-1)[\scriptstyle \gamma]; \map(0,-2)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=4.2cm, yshift=-2.1cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=5.2cm, yshift=-0.6cm] \map(0,0)[\scriptstyle \gamma]; \map(0,-1)[\scriptstyle q]; \map(0,-2)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=6.1cm, yshift=-2.1cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=7.2cm, yshift=-1.1cm] \map(0,0)[\scriptstyle \gamma]; \map(0,-1)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=8.2cm, yshift=-2.1cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=9.2cm, yshift= 0cm] \draw (0,0) -- (0,-1); \map(0,-1)[\scriptstyle \gamma]; \unit(1,0); \map(1,-1)[\scriptstyle \jmath]; \mult(0,-2)[1,1]; \map(0.5,-3)[\scriptstyle p]; \end{scope} \begin{scope}[xshift=11cm, yshift=-2.1cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first and last equalities hold since $\jmath$ and $\mu_E$ are unitary; the second one, by Remark~\ref{igualdad 0} and the fact that $\gamma * \gamma^{-1} = \gamma\xcirc \Pi^L$; and the third one, since $p\xcirc q = q$. \end{proof} Next we are going to prove that $f$ is regular. By Theorem~\ref{teo 1}(3) we know that $f\xst u_2 = f$. Moreover, by Remark~\ref{a derecha implica aizquierda} and the equality in Proposition~\ref{wbialgebras'}(3), we also have $u_2\xst f = f$. Let $\sigma_E\colon H\ot H\to E$ be the morphism defined by $$ \sigma_E\coloneqq \mu_E\xcirc (\mu_E\ot \gamma^{-1}) \xcirc (\gamma\ot \Upsilon)\xcirc (\Delta\ot \gamma). $$ Clearly $\sigma_E = \bigl(\mu_E\xcirc (\gamma\ot \gamma)\bigr) \xst (\gamma^{-1}\xcirc \mu)$. By Remark~\ref{calculos} and \cite{AFGR2}*{Proposition~1.17} we know that $$ u_2 = p_{\gamma^{-1}}\xcirc \gamma\xcirc \mu\quad\text{and}\quad \sigma_E = \jmath \xcirc f. $$ Note that \begin{align*} &(q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu)\xst \sigma_E = (\jmath\xcirc u_2)\xst (\jmath\xcirc f) = \jmath\xcirc (u_2\xst f) = \jmath\xcirc f = \sigma_E \shortintertext{and} &\sigma_E\xst (q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu) = (\jmath\xcirc f)\xst (\jmath\xcirc u_2) = \jmath \xcirc (f\xst u_2) = \jmath\xcirc f = \sigma_E. \end{align*} \begin{remark}\label{para usar} Since $\sigma_E$ and $q_{\gamma^{-1}}\xcirc \gamma$ factorize through $\jmath$, $$ \delta_E\xcirc \sigma_E = (E\ot \Pi^{\hs L})\xcirc \delta_E\xcirc \sigma_E\quad\text{and}\quad \delta_E\xcirc q_{\gamma^{-1}}\xcirc \gamma= (E\ot \Pi^{\hs L})\xcirc \delta_E\xcirc q_{\gamma^{-1}}\xcirc \gamma. $$ \end{remark} \begin{lemma}\label{algo} Let $\sigma_E^{-1}\colon H\ot H\to E$ be the map defined by $\sigma_E^{-1}\coloneqq (\gamma \xcirc \mu)\xst \bigl(\mu_E\xcirc c\xcirc (\gamma^{-1}\ot \gamma^{-1})\bigr)$. The following equalities hold: \begin{equation}\label{A1} \sigma_E\xst \sigma_E^{-1} = q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu\qquad\text{and}\qquad \sigma_E^{-1}\xst \sigma_E = q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu \end{equation} \end{lemma} \begin{proof} First we show that the first equality in~\eqref{A1} is satisfied. 
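(As a guide for reading the diagrammatic computations below we note that, under the simplifying assumption that the underlying category is that of vector spaces over a field and $c$ is the flip map, the morphisms involved take the following form in Sweedler notation:
$$
\sigma_E(h\ot k) = \gamma(h_{(1)})\,\gamma(k_{(1)})\,\gamma^{-1}(h_{(2)}k_{(2)})
\qquad\text{and}\qquad
\sigma_E^{-1}(h\ot k) = \gamma(h_{(1)}k_{(1)})\,\gamma^{-1}(k_{(2)})\,\gamma^{-1}(h_{(2)}),
$$
so that the equalities in~\eqref{A1} are the weak analogue of the usual two-sided convolution invertibility of a cocycle, with $q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu$ playing the role of the convolution unit.)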
Since $\mu$ is comultiplicative, $$ (\gamma^{-1}\xcirc \mu)\xst (\gamma \xcirc \mu) = (\gamma^{-1}\xst \gamma) \xcirc \mu = \gamma\xcirc\Pi^{\hs R}\xcirc \mu. $$ We claim that $$ \bigl(\mu_E\xcirc (\gamma\ot \gamma)\bigr) \xst (\gamma\xcirc\Pi^{\hs R}\xcirc \mu) = \mu_E\xcirc (\gamma\ot \gamma). $$ In fact, this is true since $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-0.5*#4) arc (90:0:0.5*#3 and 0.5*#4) (#1,#2-0.5*#4) arc (90:180:0.5*#3 and 0.5*#4)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. 
controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \def\solbraid(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=8pt, shape=circle,draw]{$#3$} (#1-0.5,#2) .. controls (#1-0.5,#2-0.15) and (#1-0.4,#2-0.2) .. (#1-0.3,#2-0.3) (#1-0.3,#2-0.3) -- (nodemap) (#1+0.5,#2) .. controls (#1+0.5,#2-0.15) and (#1+0.4,#2-0.2) .. (#1+0.3,#2-0.3) (#1+0.3,#2-0.3) -- (nodemap) (#1+0.5,#2-1) .. controls (#1+0.5,#2-0.85) and (#1+0.4,#2-0.8) .. (#1+0.3,#2-0.7) (#1+0.3,#2-0.7) -- (nodemap) (#1-0.5,#2-1) .. controls (#1-0.5,#2-0.85) and (#1-0.4,#2-0.8) .. 
(#1-0.3,#2-0.7) (#1-0.3,#2-0.7) -- (nodemap) } \begin{scope}[xshift=0cm, yshift=-1.75cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \map(0,-2)[\scriptstyle \gamma]; \map(1,-2)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \mult(0,-3)[1,1]; \draw (0.5,-4) -- (0.5,-5); \map(2.5,-3)[\scriptstyle \Pi^{\!R}]; \map(2.5,-4)[\scriptstyle \gamma]; \mult(0.5,-5)[2,1.5]; \end{scope} \begin{scope}[xshift=3.6cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.4cm, yshift=-1.5cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \map(0,-2)[\scriptstyle \gamma]; \map(1,-2)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \draw (1,-3) -- (1,-5); \map(2.5,-3)[\scriptstyle \Pi^{\!R}]; \map(2.5,-4)[\scriptstyle \gamma]; \mult(1,-5)[1.5,1]; \mult(0,-6)[1.75,1]; \draw (0,-3) -- (0,-6); \end{scope} \begin{scope}[xshift=8cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=8.8cm, yshift=-1.5cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \map(0,-2)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \draw (1,-2) -- (1,-4); \map(2.5,-3)[\scriptstyle \Pi^{\!R}]; \map(1.75,-5)[\scriptstyle \gamma]; \mult(1,-4)[1.5,1]; \mult(0,-6)[1.75,1]; \draw (0,-3) -- (0,-6); \end{scope} \begin{scope}[xshift=12.3cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=13.1cm, yshift=-1.5cm] \comult(0.5,0)[1,1]; \comult(2.75,0)[1.5,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \comult(3.5,-1)[1,1]; \map(0,-2)[\scriptstyle \gamma]; \braid(2,-2)[1]; \draw (1,-2) -- (1,-4); \map(2,-3)[\scriptstyle \Pi^{\!R}]; \mult(3,-3)[1,1]; \draw (3.5,-4) -- (3.5,-6); \counit(3.5,-6); \draw (4,-2) -- (4,-3); \map(1.5,-5)[\scriptstyle \gamma]; \mult(1,-4)[1,1]; \mult(0,-6)[1.5,1]; \draw (0,-3) -- (0,-6); \end{scope} \begin{scope}[xshift=17.5cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=18.4cm, yshift=-3cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \map(0,-2)[\scriptstyle \gamma]; \map(1,-2)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \mult(0,-3)[1,1]; \counit(2.5,-3); \end{scope} \begin{scope}[xshift=21.8cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=22.8cm, yshift=-2.5cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-3); \map(1,-1)[\scriptstyle \Pi^{\!R}]; \mult(1,-2)[1,1]; \draw (2,0) -- (2,-2); \map(0,-3)[\scriptstyle \gamma]; \map(1.5,-3)[\scriptstyle \gamma]; \mult(0,-4)[1.5,1]; \end{scope} \begin{scope}[xshift=25.3cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=25.8cm, yshift=-1.5cm] \draw (0,-0.5) -- (0,-3); \draw (1,-2) -- (1,-3); \map(2,-2)[\scriptstyle S]; \unit(1.5,-0.5); \mult(0,-3)[1,1]; \draw (3,-0.5) -- (3,-3); \mult(2,-3)[1,1]; \comult(1.5,-1)[1,1]; \map(0.5,-4)[\scriptstyle \gamma]; \map(2.5,-4)[\scriptstyle \gamma]; \mult(0.5,-5)[2,1.5]; \draw (3,-1) -- (3, -2); \end{scope} \begin{scope}[xshift=29.3cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=30.1cm, yshift=-4cm] \map(0,0)[\scriptstyle \gamma]; \map(1,0)[\scriptstyle \gamma]; \mult(0,-1)[1,1]; \end{scope} \begin{scope}[xshift=31.8cm, yshift=-5.05cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first equality holds since $\mu_E$ is associative; the second one, by Proposition~\ref{multiplicacion y gamma}(2); the third one, by Proposition~\ref{Delta compuesto con Pi^R}(2); the fourth one, by 
Remark~\ref{ide ast Pi^R = ide and} and the fact that $\Delta$ is coassociative and $c$ is natural; the fifth one, by Proposition~\ref{mu Pi^R, etc}(2); the sixth one, by Proposition~\ref{Delta eta con S}(1); and the last one, by Lemma~\ref{caso particular}. For the sake of legibility in the following diagramas we set $\bar{\gamma}\coloneqq \gamma^{-1}$ and $q\coloneqq q_{\gamma^{-1}}^E$. In order to end the proof of the first equality in~\eqref{A1} it suffices to note that $$ \begin{tikzpicture}[scale=0.43] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-0.5*#4) arc (90:0:0.5*#3 and 0.5*#4) (#1,#2-0.5*#4) arc (90:180:0.5*#3 and 0.5*#4)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. 
controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \def\solbraid(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=8pt, shape=circle,draw]{$#3$} (#1-0.5,#2) .. controls (#1-0.5,#2-0.15) and (#1-0.4,#2-0.2) .. (#1-0.3,#2-0.3) (#1-0.3,#2-0.3) -- (nodemap) (#1+0.5,#2) .. controls (#1+0.5,#2-0.15) and (#1+0.4,#2-0.2) .. (#1+0.3,#2-0.3) (#1+0.3,#2-0.3) -- (nodemap) (#1+0.5,#2-1) .. controls (#1+0.5,#2-0.85) and (#1+0.4,#2-0.8) .. (#1+0.3,#2-0.7) (#1+0.3,#2-0.7) -- (nodemap) (#1-0.5,#2-1) .. controls (#1-0.5,#2-0.85) and (#1-0.4,#2-0.8) .. 
(#1-0.3,#2-0.7) (#1-0.3,#2-0.7) -- (nodemap) } \begin{scope}[xshift=0cm, yshift=-1.75cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \braid(2,-2)[1]; \draw (3,-1) -- (3,-2); \map(0,-2)[\scriptstyle \gamma]; \map(1,-2)[\scriptstyle \gamma]; \mult(0,-3)[1,1]; \map(2,-3)[\scriptstyle \bar{\gamma}]; \map(3,-3)[\scriptstyle \bar{\gamma}]; \draw (0.5,-4) -- (0.5,-5); \mult(2,-4)[1,1]; \mult(0.5,-5)[2,1.5]; \end{scope} \begin{scope}[xshift=3.9cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.8cm, yshift=-1.25cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1.5]; \comult(1,-2.5)[1,1]; \draw (2.5,0) -- (2.5,-1); \map(0,-2)[\scriptstyle \gamma]; \map(0.5,-3.5)[\scriptstyle \gamma]; \draw (0,-3) -- (0,-5.5); \map(1.5,-3.5)[\scriptstyle \bar{\gamma}]; \map(2.5,-2.5)[\scriptstyle \bar{\gamma}]; \mult(0.5,-4.5)[1,1]; \mult(0,-5.5)[1,1]; \draw (2.5,-3.5) .. controls (2.5,-5) and (1.5,-5) .. (1.5,-6.5); \mult(0.5,-6.5)[1,1]; \end{scope} \begin{scope}[xshift=8.2cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=9cm, yshift=-2cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-1.5); \braid(1,-1)[1]; \draw (2,0) -- (2,-1); \map(0,-1.5)[\scriptstyle \gamma]; \map(1,-2)[\scriptstyle \Pi^{\!L}]; \draw (0,-2.5) -- (0,-4); \map(1,-3)[\scriptstyle \gamma]; \map(2,-2.5)[\scriptstyle \bar{\gamma}]; \mult(0,-4)[1,1]; \mult(0.5,-5)[1,1]; \draw (2,-3.5) .. controls (2,-4) and (1.5,-4) .. (1.5,-5); \draw (2,-2) -- (2,-2.5); \end{scope} \begin{scope}[xshift=11.9cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=12.4cm, yshift=-2cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-3); \braid(1,-1)[1]; \draw (2,0) -- (2,-1); \map(1,-2)[\scriptstyle \Pi^{\!L}]; \draw (0,-2.5) -- (0,-3); \map(0.5,-4)[\scriptstyle \gamma]; \map(2,-2.5)[\scriptstyle \bar{\gamma}]; \mult(0,-3)[1,1]; \mult(0.5,-5)[1,1]; \draw (2,-3.5) .. controls (2,-4) and (1.5,-4) .. 
(1.5,-5); \draw (2,-2) -- (2,-2.5); \end{scope} \begin{scope}[xshift=15.25cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=15.7cm, yshift=-2.5cm] \comult(0.5,-1)[1,1]; \comult(1.25,0)[1.5,1]; \draw (0,-2) -- (0,-3); \braid(2,-1)[1]; \braid(1,-2)[1]; \draw (3,0) -- (3,-1); \draw (3,-2) -- (3,-3); \mult(0,-3)[1,1]; \map(2,-3)[\scriptstyle \gamma]; \map(3,-3)[\scriptstyle \bar{\gamma}]; \mult(2,-4)[1,1]; \counit(0.5,-4); \end{scope} \begin{scope}[xshift=19.55cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=19.95cm, yshift=-2.5cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (2,0) -- (2,-1); \comult(2,-2)[1,1]; \map(1.5,-3)[\scriptstyle \gamma]; \map(2.5,-3)[\scriptstyle \bar{\gamma}]; \mult(1.5,-4)[1,1]; \mult(0,-2)[1,1]; \counit(0.5,-4); \draw (0.5,-3) -- (0.5,-4); \end{scope} \begin{scope}[xshift=23.25cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=23.7cm, yshift=-3cm] \comult(0.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (2,0) -- (2,-1); \map(2,-2)[\scriptstyle \Pi^{\!L}]; \map(2,-3)[\scriptstyle \gamma]; \mult(0,-2)[1,1]; \counit(0.5,-3); \end{scope} \begin{scope}[xshift=26.65cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=27.2cm, yshift=-3cm] \draw (0,0) -- (0,-1); \map(1,0)[\scriptstyle \Pi^{\!L}]; \mult(0,-1)[1,1]; \map(0.5,-2)[\scriptstyle \Pi^{\!L}]; \map(0.5,-3)[\scriptstyle \gamma]; \end{scope} \begin{scope}[xshift=29.15cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=29.7cm, yshift=-3.5cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \Pi^{\!L}]; \map(0.5,-2)[\scriptstyle \gamma]; \end{scope} \begin{scope}[xshift=31.25cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=31.85cm, yshift=-3.5cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \map(0.5,-2)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=33.1cm, yshift=-5.05cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first equality holds since $c$ is natural and $\mu_E$ is associative; the second and sixth one, since $\gamma\xst \bar{\gamma} = \gamma\xcirc \Pi^L$; the third one, by Proposition~\ref{multiplicacion y gamma}(1); the fourth and seventh one, by Proposition~\ref{mu Pi^R, etc}(1); the fifth one, since $\Delta$ is coassociative and $c$ is natural; the eighth one, by Proposition~\ref{mu delta Pi^R, etc}; and the last one, by Remark~\ref{igualdad 0} and the fact that $\gamma\xcirc \Pi^L = \gamma\xst\bar{\gamma}$. We next prove the second equality in~\eqref{A1}. To begin with we have $$ \begin{tikzpicture}[scale=0.43] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-0.5*#4) arc (90:0:0.5*#3 and 0.5*#4) (#1,#2-0.5*#4) arc (90:180:0.5*#3 and 0.5*#4)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. 
controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. 
(doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \def\solbraid(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=8pt, shape=circle,draw]{$#3$} (#1-0.5,#2) .. controls (#1-0.5,#2-0.15) and (#1-0.4,#2-0.2) .. (#1-0.3,#2-0.3) (#1-0.3,#2-0.3) -- (nodemap) (#1+0.5,#2) .. controls (#1+0.5,#2-0.15) and (#1+0.4,#2-0.2) .. (#1+0.3,#2-0.3) (#1+0.3,#2-0.3) -- (nodemap) (#1+0.5,#2-1) .. controls (#1+0.5,#2-0.85) and (#1+0.4,#2-0.8) .. (#1+0.3,#2-0.7) (#1+0.3,#2-0.7) -- (nodemap) (#1-0.5,#2-1) .. controls (#1-0.5,#2-0.85) and (#1-0.4,#2-0.8) .. (#1-0.3,#2-0.7) (#1-0.3,#2-0.7) -- (nodemap) } \begin{scope}[xshift=0cm, yshift=-1.75cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \braid(0,-2)[1]; \draw (3,-1) -- (3,-2); \map(0,-3)[\scriptstyle \bar{\gamma}]; \map(1,-3)[\scriptstyle \bar{\gamma}]; \mult(0,-4)[1,1]; \map(2,-2)[\scriptstyle \gamma]; \map(3,-2)[\scriptstyle \gamma]; \draw (2.5,-4) -- (2.5,-5); \mult(2,-3)[1,1]; \mult(0.5,-5)[2,1.5]; \end{scope} \begin{scope}[xshift=3.9cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.9cm, yshift=-1.25cm] \comult(2,0)[1,1]; \braid(0,-1)[1.5]; \comult(1.5,-2.5)[1,1]; \draw (0,0) -- (0,-1); \map(0,-2.5)[\scriptstyle \bar{\gamma}]; \map(1,-3.5)[\scriptstyle \bar{\gamma}]; \draw (0,-3.5) ..controls (0,-5) and (1,-5) .. (1,-6.5); \map(2,-3.5)[\scriptstyle \gamma]; \map(2.5,-2)[\scriptstyle \gamma]; \mult(1,-4.5)[1,1]; \mult(1.5,-5.5)[1,1]; \draw (2.5,-1) -- (2.5,-2); \draw (2.5,-3) -- (2.5,-5.5); \mult(1,-6.5)[1,1]; \end{scope} \begin{scope}[xshift=8.3cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=9.3cm, yshift=-2cm] \comult(1.5,0)[1,1]; \draw (2,-1) -- (2,-1.5); \braid(0,-1)[1]; \draw (0,0) -- (0,-1); \draw (0,-2) -- (0,-2.5); \map(0,-2.5)[\scriptstyle \bar{\gamma}]; \map(1,-2)[\scriptstyle \Pi^{\!R}]; \draw (0,-3.5) .. controls (0,-4.25) and (0.5,-4.25) .. 
(0.5,-5); \map(1,-3)[\scriptstyle \gamma]; \map(2,-1.5)[\scriptstyle \gamma]; \mult(1,-4)[1,1]; \mult(0.5,-5)[1,1]; \draw (2,-2.5) -- (2,-4); \end{scope} \begin{scope}[xshift=12.2cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=13.2cm, yshift=-2cm] \comult(1.5,0)[1,1]; \draw (2,-1) -- (2,-3); \braid(0,-1)[1]; \draw (0,0) -- (0,-1); \draw (0,-2) -- (0,-2.5); \map(0,-2.5)[\scriptstyle \bar{\gamma}]; \map(1,-2)[\scriptstyle \Pi^{\!R}]; \draw (0,-3.5) .. controls (0,-4.25) and (0.5,-4.25) .. (0.5,-5); \map(1.5,-4)[\scriptstyle \gamma]; \mult(1,-3)[1,1]; \mult(0.5,-5)[1,1]; \end{scope} \begin{scope}[xshift=15.8cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=16.7cm, yshift=-2.5cm] \comult(2.5,-1)[1,1]; \comult(1.75,0)[1.5,1]; \draw (0,-2) -- (0,-3); \braid(0,-1)[1]; \braid(1,-2)[1]; \draw (0,0) -- (0,-1); \draw (3,-2) -- (3,-3); \mult(2,-3)[1,1]; \map(0,-3)[\scriptstyle \bar{\gamma}]; \map(1,-3)[\scriptstyle \gamma]; \mult(0,-4)[1,1]; \counit(2.5,-4); \end{scope} \begin{scope}[xshift=20.2cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=21.7cm, yshift=-2.5cm] \comult(1.5,0)[1,1]; \draw (2,-1) -- (2,-2); \braid(0,-1)[1]; \draw (0,0) -- (0,-1); \comult(0,-2)[1,1]; \map(-0.5,-3)[\scriptstyle \bar{\gamma}]; \map(0.5,-3)[\scriptstyle \gamma]; \mult(-0.5,-4)[1,1]; \mult(1,-2)[1,1]; \counit(1.5,-4); \draw (1.5,-3) -- (1.5,-4); \end{scope} \begin{scope}[xshift=24.3cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=25.4cm, yshift=-3cm] \comult(1.5,0)[1,1]; \draw (2,-1) -- (2,-2); \braid(0,-1)[1]; \draw (0,0) -- (0,-1); \map(0,-2)[\scriptstyle \Pi^{\!R}]; \map(0,-3)[\scriptstyle \gamma]; \mult(1,-2)[1,1]; \counit(1.5,-3); \end{scope} \begin{scope}[xshift=28cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=28.9cm, yshift=-3cm] \draw (1,0) -- (1,-1); \map(0,0)[\scriptstyle \Pi^{\!R}]; \mult(0,-1)[1,1]; \map(0.5,-2)[\scriptstyle \Pi^{\!R}]; \map(0.5,-3)[\scriptstyle \gamma]; \end{scope} \begin{scope}[xshift=30.5cm, yshift=-5.05cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=31.3cm, yshift=-3.5cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \Pi^{\!R}]; \map(0.5,-2)[\scriptstyle \gamma]; \end{scope} \begin{scope}[xshift=32.9cm, yshift=-5.05cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first equality holds since $c$ is natural and $\mu_E$ is associative; the second and sixth one, since $\bar{\gamma}\xst \gamma = \gamma\xcirc \Pi^{\hs R}$; the third one, by Proposition~\ref{multiplicacion y gamma}(2); the fourth and seventh one, by Proposition~\ref{mu Pi^R, etc}(2); the fifth one, since $\Delta$ is coassociative and $c$ is natural; and the last one, by Proposition~\ref{mu delta Pi^R, etc}. Thus the proof of the second equality in~\eqref{A1} follows, because $$ \begin{tikzpicture}[scale=0.44] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-0.5*#4) arc (90:0:0.5*#3 and 0.5*#4) (#1,#2-0.5*#4) arc (90:180:0.5*#3 and 0.5*#4)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. 
(#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. 
(#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \def\solbraid(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=8pt, shape=circle,draw]{$#3$} (#1-0.5,#2) .. controls (#1-0.5,#2-0.15) and (#1-0.4,#2-0.2) .. (#1-0.3,#2-0.3) (#1-0.3,#2-0.3) -- (nodemap) (#1+0.5,#2) .. controls (#1+0.5,#2-0.15) and (#1+0.4,#2-0.2) .. (#1+0.3,#2-0.3) (#1+0.3,#2-0.3) -- (nodemap) (#1+0.5,#2-1) .. controls (#1+0.5,#2-0.85) and (#1+0.4,#2-0.8) .. (#1+0.3,#2-0.7) (#1+0.3,#2-0.7) -- (nodemap) (#1-0.5,#2-1) .. controls (#1-0.5,#2-0.85) and (#1-0.4,#2-0.8) .. 
(#1-0.3,#2-0.7) (#1-0.3,#2-0.7) -- (nodemap) } \begin{scope}[xshift=0cm, yshift=0cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \mult(0,-2)[1,1]; \draw (3,-1) -- (3,-2); \map(0.5,-3)[\scriptstyle \gamma]; \draw (0.5,-4) -- (0.5,-5); \map(2.5,-3)[\scriptstyle \Pi^{\!R}]; \map(2.5,-4)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \mult(0.5,-5)[2,1.5]; \end{scope} \begin{scope}[xshift=3.6cm, yshift=-3.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.6cm, yshift=-1cm] \mult(0,0)[1,1]; \comult(0.5,-0.5)[1,1]; \draw (0,-1.5) -- (0,-2); \map(0,-2)[\scriptstyle \gamma]; \draw (0,-3) -- (0,-3.5); \map(1,-1.5)[\scriptstyle \Pi^{\!R}]; \map(1,-2.5)[\scriptstyle \gamma]; \mult(0,-3.5)[1,1]; \end{scope} \begin{scope}[xshift=6.7cm, yshift=-3.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=7.4cm, yshift=-2.2cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \end{scope} \begin{scope}[xshift=11cm, yshift=-3.3cm] \node at (0,0){and}; \end{scope} \begin{scope}[xshift=14cm, yshift=-0.5cm] \comult(0.5,0)[1,1]; \comult(2.5,0)[1,1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \mult(0,-2)[1,1]; \draw (3,-1) -- (3,-2); \map(2.5,-3)[\scriptstyle \bar{\gamma}]; \map(0.5,-3)[\scriptstyle \gamma]; \mult(2,-2)[1,1]; \mult(0.5,-4)[2,1.5]; \end{scope} \begin{scope}[xshift=17.6cm, yshift=-3.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=18.7cm, yshift=-1.5cm] \mult(0,0)[1,1]; \comult(0.5,-0.5)[1,1]; \map(1,-1.5)[\scriptstyle \bar{\gamma}]; \map(0,-1.5)[\scriptstyle \gamma]; \mult(0,-2.5)[1,1]; \end{scope} \begin{scope}[xshift=20.6cm, yshift=-3.3cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=21.3cm, yshift=-1.75cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \map(0.5,-2)[\scriptstyle q]; \end{scope} \begin{scope}[xshift=23cm, yshift=-3.3cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first and third equalities hold by Definition~\ref{weak bialgebra}(1); the second one, by Corollary~\ref{gamma xst (gamma xcirc Pi^R) = gamma}; and the last one, by Remark~\ref{igualdad 0}. \end{proof} \begin{remark}\label{esta donde debe} We have $$ \sigma_E^{-1}\xst (q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu) = \sigma_E^{-1}\xst \sigma^E \xst \sigma_E^{-1} = (q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu) \xst \sigma_E^{-1} = \sigma_E^{-1}, $$ where the last equality follows from the definition of $\sigma_E^{-1}$ and the fact that, by~Remark~\ref{igualdad 0}, equality $\gamma \xst \gamma^{-1}=\gamma\circ \Pi^L$ and Corollary~\ref{gamma xst (gamma xcirc Pi^R) = gamma}, $$ (q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu) \xst (\gamma \xcirc \mu) = \bigl((q_{\gamma^{-1}}\xcirc \gamma) \xst \gamma \bigr)\xcirc \mu = \bigl((\gamma\xcirc \Pi^{\hs L}) \xst \gamma \bigr)\xcirc \mu = \gamma \xcirc \mu. $$ \end{remark} \begin{remark}\label{use} Set $\bar{\sigma}_E\coloneqq \sigma_E^{-1}$. We have $$ \begin{tikzpicture}[scale=0.43] \def\unit(#1,#2){\draw (#1,#2) circle[radius=2pt] (#1,#2-0.07) -- (#1,#2-1)} \def\comult(#1,#2)[#3]{\draw (#1,#2)-- (#1,#2-1*#3/2) (#1,#2-1*#3/2) .. controls (#1+0.555*#3/2,#2-1*#3/2) and (#1+1*#3/2,#2-1.445*#3/2) .. (#1+1*#3/2,#2-2*#3/2) (#1,#2-1*#3/2) .. controls (#1-0.555*#3/2,#2-1*#3/2) and (#1-1*#3/2,#2-1.445*#3/2) .. (#1-1*#3/2,#2-2*#3/2)} \def\rcoaction(#1,#2)[#3,#4]{\draw (#1,#2)-- (#1,#2-2*#4/2) (#1,#2-1*#4/2) -- (#1+1*#4/2+#3*#4/2,#2-1*#4/2).. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-1.445*#4/2) .. (#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\transposition(#1,#2)[#3]{\draw (#1+#3,#2) .. 
controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3
) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\raction(#1,#2)[#3,#4]{\draw (#1,#2) -- (#1,#2-2*#4/2) (#1,#2-1*#4/2)--(#1+1*#4/2+#3*#4/2,#2-1*#4/2) .. controls (#1+1.555*#4/2+#3*#4/2,#2-1*#4/2) and (#1+2*#4/2+#3*#4/2,#2-0.555*#4/2) .. (#1+2*#4/2+#3*#4/2,#2)} \def\mult(#1,#2)[#3,#4]{\draw (#1,#2) arc (180:360:0.5*#3 and 0.5*#4) (#1+0.5*#3, #2-0.5*#4) -- (#1+0.5*#3,#2-#4)} \def\map(#1,#2)[#3]{\draw (#1,#2-0.5) node[name=nodemap,inner sep=0pt, minimum size=10.5pt, shape=rectangle, draw, rounded corners]{$#3$} (#1,#2)-- (nodemap) (nodemap)-- (#1,#2-1)} \def\laction(#1,#2)[#3,#4]{\draw (#1,#2) .. controls (#1,#2-0.555*#4/2) and (#1+0.445*#4/2,#2-1*#4/2) .. (#1+1*#4/2,#2-1*#4/2) -- (#1+2*#4/2+#3*#4/2,#2-1*#4/2) (#1+2*#4/2+#3*#4/2,#2)--(#1+2*#4/2+#3*#4/2,#2-2*#4/2)} \def\twisting(#1,#2)[#3]{\draw (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.25*#3,#2-0.8*#3) and (#1+0.45*#3,#2-0.69*#3) .. (#1+0.50*#3,#2-0.66*#3) (#1+0.1*#3,#2-0.8*#3) ..controls (#1+0.1*#3,#2-0.65*#3) and (#1+0.22*#3,#2-0.54*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.75*#3,#2-0.47*#3) and (#1+0.9*#3,#2-0.4*#3).. (#1+0.9*#3,#2-0.2*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.5*#3,#2-0.34*#3) .. controls (#1+0.6*#3,#2-0.27*#3) and (#1+0.65*#3,#2-0.2*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+#3,#2-#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+#3,#2) .. controls (#1+#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.3*#3,#2-0.2*#3) and (#1+0.46*#3,#2-0.31*#3) .. (#1+0.5*#3,#2-0.34*#3) (#1+0.1*#3,#2-0.2*#3) .. controls (#1+0.1*#3,#2-0.38*#3) and (#1+0.256*#3,#2-0.49*#3) .. (#1+0.275*#3,#2-0.505*#3) (#1+0.50*#3,#2-0.66*#3) .. controls (#1+0.548*#3,#2-0.686*#3) and (#1+0.70*#3,#2-0.8*#3)..(#1+0.9*#3,#2-0.8*#3) (#1+#3,#2-1*#3) .. controls (#1+#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3) (#1+0.72*#3,#2-0.50*#3) .. controls (#1+0.80*#3,#2-0.56*#3) and (#1+0.9*#3,#2-0.73*#3)..(#1+0.9*#3,#2-0.8*#3)(#1+0.72*#3,#2-0.50*#3) -- (#1+0.50*#3,#2-0.66*#3) -- (#1+0.275*#3,#2-0.505*#3) -- (#1+0.5*#3,#2-0.34*#3) -- (#1+0.72*#3,#2-0.50*#3)} \def\doublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. 
(#1+1,#2-1)} \def\doublesinglemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublesinglemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=9.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1,#2) .. controls (#1,#2-0.075) .. (doublesinglemapnode) (#1+1,#2) .. controls (#1+1,#2-0.075).. (doublesinglemapnode) (doublesinglemapnode)-- (#1+0.5,#2-1)} \def\singledoublemap(#1,#2)[#3]{\draw (#1+0.5,#2-0.5) node [name=doublemapnode,inner xsep=0pt, inner ysep=0pt, minimum height=8.5pt, minimum width=17pt,shape=rectangle,draw,rounded corners] {$#3$} (#1+0.5,#2)--(doublemapnode) (doublemapnode) .. controls (#1,#2-0.925)..(#1,#2-1) (doublemapnode) .. controls (#1+1,#2-0.925).. (#1+1,#2-1) } \def\counit(#1,#2){\draw (#1,#2) -- (#1,#2-0.93) (#1,#2-1) circle[radius=2pt]} \def\braid(#1,#2)[#3]{\draw (#1+1*#3,#2) .. controls (#1+1*#3,#2-0.05*#3) and (#1+0.96*#3,#2-0.15*#3).. (#1+0.9*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.8*#3)--(#1+0.9*#3,#2-0.2*#3) (#1,#2-1*#3) .. controls (#1,#2-0.95*#3) and (#1+0.04*#3,#2-0.85*#3).. (#1+0.1*#3,#2-0.8*#3) (#1,#2) .. controls (#1,#2-0.05*#3) and (#1+0.04*#3,#2-0.15*#3).. (#1+0.1*#3,#2-0.2*#3) (#1+0.1*#3,#2-0.2*#3) -- (#1+0.37*#3,#2-0.41*#3) (#1+0.62*#3,#2-0.59*#3)-- (#1+0.9*#3,#2-0.8*#3) (#1+1*#3,#2-1*#3) .. controls (#1+1*#3,#2-0.95*#3) and (#1+0.96*#3,#2-0.85*#3).. (#1+0.9*#3,#2-0.8*#3)} \def\cocicle(#1,#2)[#3]{\draw (#1,#2) .. controls (#1,#2-0.555*#3/2) and (#1+0.445*#3/2,#2-#3/2) .. (#1+#3/2,#2-#3/2) .. controls (#1+1.555*#3/2,#2-#3/2) and (#1+2*#3/2,#2-0.555*#3/2) .. (#1+2*#3/2,#2) (#1+#3/2,#2-#3/2) -- (#1+#3/2,#2-2*#3/2) (#1+#3/2,#2-#3/2) node [inner sep=0pt,minimum size=3pt,shape=circle,fill] {}} \begin{scope}[xshift=0cm, yshift= 0cm] \comult(0.5,0)[1]; \comult(2.5,0)[1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \doublesinglemap(0,-2)[\scriptstyle \bar{\sigma}_E]; \doublesinglemap(2,-2)[\scriptstyle \sigma_E]; \rcoaction(0.5,-3)[0,1]; \rcoaction(2.5,-3)[0,1]; \draw (0.5,-4) -- (0.5,-6); \map(1.5,-4)[\scriptstyle \Pi^{\!L}]; \draw (2.5,-4) -- (2.5,-5); \braid(1.5,-5)[1]; \draw (3.5,-4) -- (3.5,-6); \mult(0.5,-6)[1,1]; \mult(2.5,-6)[1,1] \end{scope} \begin{scope}[xshift=3.9cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=4.5cm, yshift=0cm] \comult(0.5,0)[1]; \comult(2.5,0)[1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \doublesinglemap(0,-2)[\scriptstyle \bar{\sigma}_E]; \doublesinglemap(2,-2)[\scriptstyle \sigma_E]; \rcoaction(0.5,-3)[0,1]; \rcoaction(2.5,-3)[0,1]; \draw (0.5,-4) -- (0.5,-5); \braid(1.5,-4)[1]; \draw (3.5,-4) -- (3.5,-5); \mult(0.5,-5)[1,1]; \mult(2.5,-5)[1,1]; \draw (1,-6) -- (1,-7); \map(3,-6)[\scriptstyle \Pi^{\!L}]; \end{scope} \begin{scope}[xshift=8.4cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=9.1cm, yshift=-0.5cm] \comult(0.5,0)[1]; \comult(2.5,0)[1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \doublesinglemap(0,-2)[\scriptstyle \bar{\sigma}_E]; \doublesinglemap(2,-2)[\scriptstyle \sigma_E]; \mult(0.5,-3)[2,1.5]; \rcoaction(1.5,-4)[0,1]; \draw (1.5,-5) -- (1.5,-6); \map(2.5,-5)[\scriptstyle \Pi^{\!L}]; \end{scope} \begin{scope}[xshift=12.7cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=13.4cm, yshift=-1.1cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \map(0.5,-2)[\scriptstyle q]; \rcoaction(0.5,-3)[0,1]; \draw (0.5,-4) -- (0.5,-5); \map(1.5,-4)[\scriptstyle \Pi^{\!L}]; \end{scope} \begin{scope}[xshift=15.8cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} 
\begin{scope}[xshift=16.6cm, yshift=-1.5cm] \mult(0,0)[1,1]; \map(0.5,-1)[\scriptstyle \gamma]; \map(0.5,-2)[\scriptstyle q]; \rcoaction(0.5,-3)[0,1]; \end{scope} \begin{scope}[xshift=20.15cm, yshift=-3.6cm] \node at (0,0){and}; \end{scope} \begin{scope}[xshift=22.2cm, yshift=0.5cm] \comult(0.5,0)[1]; \comult(2.5,0)[1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \doublesinglemap(0,-2)[\scriptstyle \bar{\sigma}_E]; \mult(2,-2)[1,1]; \map(2.5,-3)[\scriptstyle \gamma]; \map(2.5,-4)[\scriptstyle q]; \rcoaction(0.5,-3)[0,1]; \draw (0.5,-4) -- (0.5,-7); \draw (1.5,-4) -- (1.5,-5); \map(1.5,-5)[\scriptstyle \Pi^{\!L}]; \braid(1.5,-6)[1]; \rcoaction(2.5,-5)[0,1]; \mult(0.5,-7)[1,1]; \mult(2.5,-7)[1,1]; \draw (3.5,-6) -- (3.5,-7); \end{scope} \begin{scope}[xshift=26cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=26.8cm, yshift=0.95cm] \comult(0.5,0)[1]; \comult(2.5,0)[1]; \draw (0,-1) -- (0,-2); \braid(1,-1)[1]; \draw (3,-1) -- (3,-2); \doublesinglemap(0,-2)[\scriptstyle \bar{\sigma}_E]; \mult(2,-2)[1,1]; \map(2.5,-3)[\scriptstyle \gamma]; \map(2.5,-4)[\scriptstyle q]; \rcoaction(0.5,-3)[0,1]; \draw (0.5,-4) -- (0.5,-7); \draw (1.5,-4) -- (1.5,-6); \braid(1.5,-6)[1]; \rcoaction(2.5,-5)[0,1]; \mult(0.5,-7)[1,1]; \mult(2.5,-7)[1,1]; \draw (3.5,-6) -- (3.5,-7); \map(3,-8)[\scriptstyle \Pi^{\!L}]; \draw (1,-8) -- (1,-9); \end{scope} \begin{scope}[xshift=30.7cm, yshift=-3.6cm] \node at (0,0){=}; \end{scope} \begin{scope}[xshift=31.5cm, yshift=-2cm] \doublesinglemap(0,0)[\scriptstyle \bar{\sigma}_E]; \rcoaction(0.5,-1)[0,1]; \map(1.5,-2)[\scriptstyle \Pi^{\!L}]; \draw (0.5,-2) -- (0.5,-3); \end{scope} \begin{scope}[xshift=33.5cm, yshift=-3.6cm] \node at (0,0){,}; \end{scope} \end{tikzpicture} $$ where the first and fifth equality hold by Proposition~\ref{subalgebras}(1) and Remark~\ref{para usar}; the second one, since $\delta_E$ is multiplicative; the third one, by Lemma~\ref{algo}; the fourth one, by Remark~\ref{para usar}; and the last one, by Remark~\ref{esta donde debe} and the fact that $\delta_E$ is multiplicative. \end{remark} \begin{theorem}\label{se factoriza} The cocycle $f$ is invertible. \end{theorem} \begin{proof} In order to abbreviate expressions we set $U\! =\! \delta_E \xcirc \sigma_E$, $\bar{U}\! =\! \delta_E \xcirc \sigma_E^{-1}$ and $N\! =\! \delta_E\xcirc q_{\gamma^{-1}}^E\xcirc \gamma\xcirc \mu$. We have $$ \bar{U} = N\xst \bar{U} = ((E\ot \Pi^{\hs L})\xcirc \bar{U})\xst U\xst \bar{U} = ((E\ot \Pi^{\hs L})\xcirc \bar{U})\xst N = (E\ot \Pi^{\hs L})\xcirc \bar{U}, $$ where the first equality holds by Remark~\ref{esta donde debe} and the fact that $\delta_E$ is multiplicative; the second one, by the first part of Remark~\ref{use}; the third one, by Lemma~\ref{algo} and the fact that $\delta_E$ is multiplicative; and the last one, by the second part of Remark~\ref{use}. Consequently $\sigma^{-1}_E$ factorize through $\jmath$. Let $f^{-1}\colon H\ot H\to A$ be such that $\sigma^{-1}_E = \jmath\xcirc f^{-1}$. 
Since $\jmath$ is a monomorphism and \begin{align*} & \jmath\xcirc (f\xst f^{-1}) = (\jmath\xcirc f)\xst (\jmath\xcirc f^{-1}) = \sigma_E \xst \sigma_E^{-1} = q_{\gamma^{-1}}\xcirc \gamma\xcirc \mu = \jmath \xcirc p_{\gamma^{-1}}\xcirc \gamma\xcirc \mu \shortintertext{and} & \jmath\xcirc (f^{-1}\xst f) = (\jmath\xcirc f^{-1})\xst (\jmath\xcirc f) = \sigma_E^{-1} \xst \sigma_E = q_{\gamma^{-1}}\xcirc \gamma \xcirc \mu = \jmath \xcirc p_{\gamma^{-1}}\xcirc \gamma\xcirc \mu, \end{align*} we obtain that $f\xst f^{-1} = p_{\gamma^{-1}}\xcirc \gamma\xcirc \mu$ and $f^{-1} \xst f = p_{\gamma^{-1}}\xcirc \gamma\xcirc \mu$, as desired. \end{proof} \begin{theorem}\label{cleft implica crossed product con cociclo inversible} Let $(E,\jmath)$ be an $H$-cleft extension of $A$ by $H$ and let $\rho$, $f$ and $\tilde{\nu}$ be as in Theorem~\ref{teo 1}. Then $A$ is a weak $H$-module algebra via $\rho$, the hypotheses of Theorem~\ref{weak crossed prod} are fulfilled, the cocycle $f$ is invertible and $E\simeq A\times_{\rho}^f H$. \end{theorem} \begin{proof} By Theorem~\ref{teo 1}, Proposition~\ref{es un H modulo algebra debil} and Theorem~\ref{se factoriza}. \end{proof} \begin{remark} From Theorem~\ref{crossed prod cleft}, Remark~\ref{prod cruzados equiv equivale a extensiones equiv} and Theorem~\ref{cleft implica crossed product con cociclo inversible} it follows that if $H$ is a weak Hopf algebra and $A$ is an algebra, then the category of unitary crossed products of $A$ by $H$ with invertible cocycle and $A$ a weak $H$-module algebra, and the category of $H$-cleft extensions of $A$, are equivalent. \end{remark} \end{document}
\section{Introduction} A \emph{classical network} is a capacitated directed acyclic graph $((V,A),(C_a:a\in A))$, where $V$ and $A$ are the node and arc sets of the graph, respectively, and $C_a$ is the link capacity for arc $a \in A$. A \emph{broadcast network} is a classical network for which all source messages are \emph{collocated} at a single source node. Consider a general broadcast network with one source node $s$ and $K$ sink nodes $t_k$, $k=1,\ldots,K$ (see Figure~\ref{fig:broadNet}). The source node $s$ has access to a collection of \emph{independent} messages $\mathsf{W}_I=(\mathsf{W}_i:i \in I)$, where $I$ is a finite index set. The messages intended for the sink node $t_k$ are given by $\mathsf{W}_{I_k}$, where $I_k$ is a nonempty subset of $I$. When all messages from $\mathsf{W}_I$ are \emph{unicast} messages, i.e., each of them is intended for \emph{only} one of the sink nodes, it follows from the celebrated max-flow min-cut theorem \cite{For-CJM56} that routing can achieve the entire capacity region of the network. On the other hand, when some of the messages from $\mathsf{W}_I$ are \emph{multicast} messages, i.e., they are intended for \emph{multiple} sink nodes, the capacity region of the network is generally \emph{unknown} except when there is only one multicast message at the source node \cite{Ahl-IT00,Li-IT03,Koe-ToN03} or there are only two sink nodes ($K=2$) in the network \cite{Ere-WCCC03,Nga-ICCCAS04,Ram-CWIT05}. In this paper, we are interested in establishing strong network coding bounds for \emph{general} broadcast networks with multiple (multicast) messages and more than two sink nodes ($K \geq 3$). In particular, we are interested in network coding bounds that rely only on the \emph{cut} structure of the network. The rationale behind this particular interest is twofold. First, cuts are a well-understood combinatorial structure for networks. Second, the fact that the standard cut-set bounds \cite[Ch.~15.10]{Cov-B06} are \emph{tight} for the aforementioned special cases \cite{For-CJM56,Ahl-IT00,Li-IT03,Koe-ToN03,Ere-WCCC03,Nga-ICCCAS04,Ram-CWIT05} suggests that the cut, as a combinatorial structure, can be useful for more general broadcast-network coding problems as well. The starting point of this work is the following simple observation. For each $k =1,\ldots,K$, let $A_k$ be a ``basic" cut that separates the source node $s$ from the (single) sink node $t_k$. Then, for any nonempty subset $U \subseteq [K]:=\{1,\ldots,K\}$, the union $\cup_{k \in U}A_k$ is also a cut that separates the source node $s$ from the ``super" sink node $t_U$, whose intended messages are given by $\mathsf{W}_{\cup_{k \in U}I_k}$. By the standard cut-set bound \cite[Ch.~15.10]{Cov-B06}, we have \begin{align} R(\cup_{k \in U}I_k) \leq C(\cup_{k \in U}A_k) \label{eq:CSB} \end{align} for any achievable rate tuple $R_I:=(R_i:i \in I)$. Here, $R: 2^I \rightarrow \mathbb{R}^+$ is the \emph{rate} function that corresponds to the rate tuple $R_I$ and is given by \begin{align} R(I') := \sum_{i \in I'}R_i, \quad \forall I' \subseteq I, \label{eq:rate} \end{align} and $C: 2^A \rightarrow \mathbb{R}^+$ is the \emph{capacity} function of the network, where \begin{align} C(A') := \sum_{a \in A'}C_a, \quad \forall A' \subseteq A. \label{eq:cap} \end{align} Note that the above observation depends critically on the fact that all messages $\mathsf{W}_I$ are \emph{collocated} at the source node $s$.
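To make the bookkeeping behind \eqref{eq:CSB}--\eqref{eq:cap} concrete, the short Python snippet below evaluates the standard cut-set bounds on a small, purely illustrative instance; the capacities, demand sets, basic cuts and candidate rate tuple are hypothetical placeholders and not an example taken from this paper.
\begin{verbatim}
from itertools import combinations

# Hypothetical toy instance (illustrative placeholders only):
# arc capacities C_a, per-sink demands I_k, basic cuts A_k, candidate rates R_i.
C_a = {'a1': 1.0, 'a2': 2.0, 'a3': 1.5, 'a4': 1.0}
I_k = {1: {'w1'}, 2: {'w1', 'w2'}, 3: {'w2', 'w3'}}
A_k = {1: {'a1', 'a2'}, 2: {'a2', 'a3'}, 3: {'a3', 'a4'}}
R_i = {'w1': 0.5, 'w2': 1.0, 'w3': 0.5}

def R(msgs):            # rate function, cf. eq. (eq:rate)
    return sum(R_i[i] for i in msgs)

def C(arcs):            # capacity function, cf. eq. (eq:cap)
    return sum(C_a[a] for a in arcs)

def satisfies_standard_cut_set_bounds(K=3):
    """Check R(union of I_k) <= C(union of A_k) over all nonempty U, cf. eq. (eq:CSB)."""
    for r in range(1, K + 1):
        for U in combinations(range(1, K + 1), r):
            msgs = set().union(*(I_k[k] for k in U))
            arcs = set().union(*(A_k[k] for k in U))
            if R(msgs) > C(arcs):
                return False
    return True

print(satisfies_standard_cut_set_bounds())   # True for this toy instance
\end{verbatim}
A rate tuple that fails this check is certainly not achievable; the generalized cut-set bounds developed in the sequel supply further necessary conditions of the same cut-based form.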
When the messages are \emph{distributed} among several source nodes, it is well known that the union of several basic cuts may \emph{no longer} be a cut that separates the super source node from the super sink node and hence may not lead to any network coding bounds \cite{Kra-JNSM06}. \begin{figure}[!t] \centering \includegraphics[width=0.75\linewidth,draft=false]{Model.eps} \caption{Illustration of a general broadcast network.} \label{fig:broadNet} \end{figure} Based on the above discussion, it is clear that for broadcast networks the \emph{standard} cut-set bounds \cite[Ch.~15.10]{Cov-B06} are closely related to the \emph{union} as a specific set operation for combining different basic cuts of the network. Therefore, a natural question that one may ask is whether there are any other set operations (besides the union) that also lead to nontrivial network coding bounds. In this paper, we provide a positive answer to the above question by establishing a new set of network coding bounds for general broadcast networks. We term these bounds \emph{generalized} cut-set bounds based on the facts that: 1) they rely only on the cut structure of the network; and 2) the set operations within the rate and the capacity functions are \emph{identical}, though no longer just the union. Both features are shared with the standard cut-set bounds in \eqref{eq:CSB}. From the proof viewpoint, as we shall see, these bounds are established via \emph{Shannon-type} inequalities only. It is well known that all Shannon-type inequalities can be derived from the simple fact that Shannon entropy as a set function is \emph{submodular} \cite[Ch.~14.A]{Yeu-B08}. So, at heart, the generalized cut-set bounds are reflections of several new results that we establish on submodular function optimization. The rest of the paper is organized as follows. In Section~\ref{sec:Mod} we establish several new results on submodular function optimization, which we shall use to prove the generalized cut-set bounds. A new set of network coding bounds that relate \emph{three} basic cuts of the network is provided in Section~\ref{sec:GCSB3}. The proof of these bounds is rather ``hands-on" and hence provides a good illustration of the essential idea behind establishing the generalized cut-set bounds. In Section~\ref{sec:GCSBK}, a new set of network coding bounds that relate an arbitrary number $K$ of basic cuts of the network is provided, generalizing the bounds of Section~\ref{sec:GCSB3}. In Section~\ref{sec:CN}, the tightness of the generalized cut-set bounds is demonstrated via applications to \emph{combination} networks \cite{Nga-ITW04}. Finally, in Section~\ref{sec:Con} we conclude the paper with some remarks. \section{Modular and Submodular Functions}\label{sec:Mod} Let $S$ be a finite ground set. A function $f: 2^S \rightarrow \mathbb{R}^+$ is said to be \emph{submodular} if \begin{align} f(S_1)+f(S_2) &\geq f(S_1\cup S_2)+f(S_1\cap S_2), \quad \forall S_1,S_2 \subseteq S, \label{eq:submod2} \end{align} and is said to be \emph{modular} if \begin{align} f(S_1)+f(S_2) &= f(S_1\cup S_2)+f(S_1\cap S_2), \quad \forall S_1,S_2 \subseteq S. \label{eq:mod2} \end{align} More generally, let $S_k$, $k=1,\ldots,K$, be subsets of $S$. For any nonempty subset $U$ of $[K]$ and any $r\in[|U|]$, let \begin{align} S^{(r)}(U) :=\cup_{\{U' \subseteq U: |U'|=r\}}\cap_{k \in U'}S_k. \end{align} Clearly, we have \begin{align} \cup_{k \in U}S_k = S^{(1)}(U) \supseteq S^{(2)}(U) \supseteq \cdots \supseteq S^{(|U|)}(U)=\cap_{k\in U}S_k \label{eq:myOrd1} \end{align} for any nonempty $U \subseteq [K]$ and \begin{align} S^{(r)}(U') \subseteq S^{(r)}(U) \label{eq:myOrd2} \end{align} for any $\emptyset \subset U' \subseteq U \subseteq [K]$ and any $r\in[|U'|]$. Furthermore, it is known \cite[Th.~2]{Har-IT06} that \begin{align} \sum_{k\in U}f(S_k) &\geq \sum_{r=1}^{|U|}f(S^{(r)}(U)) \label{eq:submodK} \end{align} if $f$ is a submodular function, and \begin{align} \sum_{k\in U}f(S_k) &= \sum_{r=1}^{|U|}f(S^{(r)}(U)) \label{eq:modK} \end{align} if $f$ is a modular function. Note that the standard submodularity \eqref{eq:submodK} relates $S^{(r)}(U)$ for different $r$ but a \emph{fixed} $U$. To establish the generalized cut-set bounds, however, we shall need the following technical results on modular and submodular functions that relate $S^{(r)}(U)$ for not only different $r$ but also \emph{different} $U$. \begin{lemma}\label{lemma:1} Let $r'$ and $J$ be two integers such that $0 \leq r' \leq J \leq K$. We have \begin{align} \sum_{r=1}^{r'}f(S_r)+\sum_{r=r'+1}^{J}f(S_r\cup S^{(r'+1)}([r])) & \ge \sum_{r=1}^{r'}f(S^{(r)}([J]))+\sum_{r=r'+1}^{J}f(S^{(r'+1)}([r])) \label{eq:1a} \end{align} if $f$ is a submodular function, and \begin{align} \sum_{r=1}^{r'}f(S_r)+\sum_{r=r'+1}^{J}f(S_r\cup S^{(r'+1)}([r])) & = \sum_{r=1}^{r'}f(S^{(r)}([J]))+\sum_{r=r'+1}^{J}f(S^{(r'+1)}([r])) \end{align} if $f$ is a modular function. \end{lemma} Note that when $r'=0$, we have $S^{(r'+1)}([r]) = S^{(1)}([r]) = \cup_{k=1}^{r}S_k \supseteq S_r$ for any $r=1,\ldots,J$. In this case, the inequality \eqref{eq:1a} reduces to the trivial equality \begin{align} \sum_{r=1}^{J}f(S^{(1)}([r])) &= \sum_{r=1}^{J}f(S^{(1)}([r])). \end{align} On the other hand, when $r'=J$, the inequality \eqref{eq:1a} reduces to the standard submodularity \begin{align} \sum_{r=1}^{J}f(S_r) & \geq \sum_{r=1}^{J}f(S^{(r)}([J])). \end{align} For the general case where $0 < r' < J$, a proof of the lemma is provided in Appendix~\ref{app:pf-lemma1}. Let $S'_k:=S_k\cup S_0$ for $k=1,\ldots,K$. For any nonempty $U \subseteq [K]$ and any $r=1,\ldots,|U|$ we have \begin{align} S'^{(r)}(U) &=\cup_{\{U' \subseteq U: |U'|=r\}}\cap_{k \in U'}S'_k\\ &=\cup_{\{U' \subseteq U: |U'|=r\}}\cap_{k \in U'}(S_k\cup S_0)\\ & = \left(\cup_{\{U' \subseteq U: |U'|=r\}}\cap_{k \in U'}S_k\right)\cup S_0\\ &= S^{(r)}(U) \cup S_0.\label{eq:1c} \end{align} Applying Lemma~\ref{lemma:1} to $S'_k$, $k=1,\ldots,K$, and using \eqref{eq:1c}, we have the following corollary. \begin{coro}\label{cor:1} Let $r'$ and $J$ be two integers such that $0 \leq r' \leq J \leq K$, and let $S_0$ be a subset of $S$. We have \begin{align} \sum_{r=1}^{r'}f(S_r\cup S_0)&+\sum_{r=r'+1}^{J}f(S_r\cup S^{(r'+1)}([r])\cup S_0) \notag\\ & \ge \sum_{r=1}^{r'}f(S^{(r)}([J])\cup S_0)+\sum_{r=r'+1}^{J}f(S^{(r'+1)}([r])\cup S_0) \label{eq:2a} \end{align} if $f$ is a submodular function, and \begin{align} \sum_{r=1}^{r'}f(S_r\cup S_0)&+\sum_{r=r'+1}^{J}f(S_r\cup S^{(r'+1)}([r])\cup S_0)\notag\\ & = \sum_{r=1}^{r'}f(S^{(r)}([J])\cup S_0)+\sum_{r=r'+1}^{J}f(S^{(r'+1)}([r])\cup S_0) \end{align} if $f$ is a modular function. \end{coro} We shall also need the following lemma, for which a proof is provided in Appendix~\ref{app:pf-lemma2}. \begin{lemma}\label{lemma:2} Let $U$ and $T$ be two nonempty subsets of $[K]$. Write, without loss of generality, $T=\{t_1,\ldots,t_{|T|}\}$ where $1 \leq t_1 < t_2 < \cdots < t_{|T|} \leq K$.
Let $q$ and $r_q$ be two integers such that $1 \leq q \leq |U|$, $1 \leq r_q \leq |T|$, and $S^{(q)}(U) \subseteq S^{(r_q)}(T)$. We have \begin{align} \sum_{r=1}^{|T|}&f(S_{t_r})+r_qf(S^{(q)}(U))\notag\\ & \geq \sum_{r=1}^{r_q}\left(f(S^{(r)}(T))+f(S_{t_r}\cap S^{(q)}(U))\right)+\sum_{r=r_q+1}^{|T|}f(S_{t_r}\cap(S^{(q)}(U)\cup S^{(r_q+1)}(\{t_1,\ldots,t_r\}))) \label{eq:3a} \end{align} if $f$ is a submodular function, and \begin{align} \sum_{r=1}^{|T|}&f(S_{t_r})+r_qf(S^{(q)}(U))\notag\\ & = \sum_{r=1}^{r_q}\left(f(S^{(r)}(T))+f(S_{t_r}\cap S^{(q)}(U))\right)+\sum_{r=r_q+1}^{|T|}f(S_{t_r}\cap(S^{(q)}(U)\cup S^{(r_q+1)}(\{t_1,\ldots,t_r\})))\label{eq:3b} \end{align} if $f$ is a modular function. \end{lemma} For specific functions, let $\mathsf{Z}_S:=(\mathsf{Z}_i:i \in S)$ be a collection of jointly distributed random variables, and let $H(\mathsf{Z}_S)$ be the joint (Shannon) entropy of $\mathsf{Z}_S$. Then, it is well known \cite[Ch.~14.A]{Yeu-B08} that $H_\mathsf{Z}: 2^S \rightarrow \mathbb{R}^+$, defined by $H_{\mathsf{Z}}(S') := H(\mathsf{Z}_{S'})$ for all $S' \subseteq S$, is a submodular function.
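As a quick numerical illustration, the following Python check (run on a randomly generated joint distribution; it is not material from the paper) verifies the submodularity \eqref{eq:submod2} of the joint-entropy set function and the inequality \eqref{eq:submodK} for one hypothetical family of subsets $S_1,S_2,S_3$; its only purpose is to make the sets $S^{(r)}(U)$ and the role of submodularity concrete.
\begin{verbatim}
import itertools, math, random

random.seed(0)
n = 4                                     # four jointly distributed binary variables
p = [random.random() for _ in range(2 ** n)]
tot = sum(p)
p = [x / tot for x in p]                  # hypothetical joint pmf on {0,1}^n

def H(sub):
    """Joint entropy H(Z_{S'}) of the variables indexed by the set 'sub'."""
    marg = {}
    for idx, prob in enumerate(p):
        key = tuple((idx >> i) & 1 for i in sorted(sub))
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

subsets = [set(c) for r in range(n + 1) for c in itertools.combinations(range(n), r)]

# Submodularity (eq:submod2): H(S1) + H(S2) >= H(S1 union S2) + H(S1 intersect S2).
assert all(H(A) + H(B) >= H(A | B) + H(A & B) - 1e-9
           for A in subsets for B in subsets)

# Inequality (eq:submodK) for a hypothetical family S_1, S_2, S_3.
def S_r(r, family):                       # S^{(r)}(U): union of all r-wise intersections
    return set().union(*(set.intersection(*c)
                         for c in itertools.combinations(family, r)))

family = [{0, 1}, {1, 2}, {2, 3}]
lhs = sum(H(Sk) for Sk in family)
rhs = sum(H(S_r(r, family)) for r in range(1, len(family) + 1))
assert lhs >= rhs - 1e-9
print("entropy submodularity checks passed")
\end{verbatim}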
1}}(h_{2+\ep_1}(\lam))$. \noindent (3) Define $L_2(\lam)=D_0 L_1(\lam)(1+L_1(\lam))^{-1}$. We have $L_2(\lam)\in \Og^{(3)}_{\Hg_2}(h_2(\lam))$. Define $k_{\lam}(y)=|L_2(\lam)||\nu|(y)$. We have $k_\lam\in \HL$ and \reflm(important) implies \begin{align*} & \int_{\R^4}|(M_{\mu} D_0 \tilde{L_1}(\lam) M_{\nu})(x,y)|dy \\ & \leq \int_{\R^4} |(M_{\mu} D_0 (g_1 (\lam)\lam^2 P+ \lam^2 \Mg_1 + \tilde{\Mg}_2(\lam))(x,z)| |k_\lam(z)|dz\,. \end{align*} The right side is bounded by the sum of the quantities which are considered above and we obtain $\|(M_{\mu} D_0 \tilde{L}_1(\lam)M_{\nu})(x,y)\|_{\Lg_{2,1}} \leq C h_2(\lam)\lam^2\la \log \lam\ra$. Likewise we obtain for the derivatives that $\|\pa_\lam^j (M_{\mu} D_0 \tilde{L}_1(\lam)M_{\nu})(x,y)\|_{\Lg_{2,1}} \leq C h_2(\lam)\lam^{2-j}\la \log \lam\ra$. Combining these together implies that $(M_{\mu} D_0 \tilde{L_1}(\lam)M_{\nu})(x,y) \in \Og^{(3)}_{\Lg_{2,1}}(h_{2}(\lam)^2)$. The last statement of the lemma is obvious. \edpf For shortening formulas we denote $\Kg(\lam) = (\Mg(\lam)+S_1)^{-1}$. Recall \refeq(NMlam) and that $M_{\m}{D_0}M_{\nu} \in \Lg$ for $\m, \nu \in (L^2 \cap L^8)$ (\reflm(7-2)). \bglm \lblm(7-3) Let $\m, \nu \in (L^2 \cap L^8)$. Then, $M_{\m} \Kg(\lam) M_{\nu} = M_{\m} D_0M_{\nu} + \Og^{(3)}_{\Lg}(h_{2}(\lam))$ and $M_v \Kg(\lam) M_v$ is a good producer. \edlm \bgpf Let $\Kg_1(\lam)= \Kg(\lam)-D_0$, then \refeq(L1tL1-def) implies that \bqn \lbeq(7-3-2) \Kg_1(\lam)= -D_0(g_1 (\lam)\lam^2 P + \lam^2 \Mg_1+ \tilde{\Mg}_2(\lam))D_0 + \tilde{L}_1(\lam) \in \Og^{(3)}_{\Hg_2}(h_2(\lam)) \eqn and $(M_{\m} \Kg_1(\lam)M_{\nu})(x,y) \in \Og^{(3)}_{\Lg_1}(h_2(\lam))$. We prove $M_{\m}\Kg_1(\lam)M_{\nu}\in \Og^{(3)}_{\Lg_{2,1}}(h_2(\lam))$ which finishes the proof. \reflm(phDMp) with $D_0 \nu$ in replace of $\nu$ implies that all operators on the right of \refeq(7-3-2) except $g_1 (\lam) \lam^2 M_{\m}D_0 P D_0 M_\nu$ are of the class $\Og^{(3)}_{\Lg_{2,1}}(h_2(\lam))$ if sandwiched by $M_\m$ and $M_\n$. The latter operator is also of class $\Og^{(3)}_{\Lg_{2,1}}(h_2(\lam))$ since it can be written as $g_1 (\lam) \lam^2 \|V\|_1^{-1} (M_{\m}D_0 v)\otimes (M_{\nu} D_0 v)(y)$ and $M_{\m}D_0 v, M_{\nu} D_0 v\in L^1 \cap L^2$ by virtue of \reflm(7-2). This completes the proof. \edpf \section{The case $H$ has singularity of the first kind at zero} In this section we prove \refth(main-theorem)\, (2a). Thus, we assume $\ax^{\d}V \in (L^1\cap L^4)$ for some $\d=3+\ep$, $\ep>0$ and $H$ has singularities of the first kind at zero. Without losing generalities we may assume $0<\ep \leq 4$. We use the notation of \refdf(GT-1). Then $T_1= S_1P{S_1}\vert_{S_1 \HL}$ is of rank one and is invertible in $S_1\HL$. Hence, ${\textrm{rank}}\, T_1 = {\textrm {rank}}\, S_1=1$ and, since $\Mg_0$ is a real operator, we may choose a real function $\ph(x)$ as the basis vector of $S_1 \HL$ so that $S_1 = \ph \otimes \ph$. \bglm \lblm(B-inv) The operator $B(\lam)$ of \refeq(JN-B) for the pair $(\Mg(\lam), S_1)$ is invertible for small $\lam>0$ and $B(\lam)^{-1} = m(\lam)(\ph \otimes \p)$, where with an $\ep_1>1$, $c_1= |\la \ph, v\ra|^2/\|V\|_1>0$ and $c_2= - \la \ph, \Mg_1 \ph\ra$, \[ m(\lam)=\lam^{-2}\mu(\lam), \quad \m(\lam)= (c_1g_1(\lam)+ c_2+ \Og^{(3)}(h_{\ep_1}(\lam)))^{-1} \] and, $\m(\lam)$ is and Mikhlin multiplier. \edlm \bgpf We substitute \refeqs(NMlam,L1tL1-def) for $\Kg(\lam)$ in $B(\lam)= S_1 - S_1\Kg(\lam)S_1$. 
Then, the identity $D_0 S_1= S_1 D_0 = S_1$ and \reflm(M0-3) yield \[ B(\lam) = g_1 (\lam)\lam^2 S_1 P S_1- \lam^2 S_1 \Mg_1 S_1+ S_1\Og^{(3)}_{\Hg_2}(h_{2+\ep_1}(\lam))S_1 = \lam^{2}\m(\lam)^{-1}\ph\otimes \ph \,. \] The lemma follows. \edpf \reflm(JN) and \reflm(B-inv) imply that $\Mg(\lam)$ is invertible for small $\lam>0$ and \bqn \lbeq(B-2) \Mg(\lam)^{-1}= \Kg(\lam)+ \Kg(\lam)S_1B(\lam)^{-1}S_1\Kg(\lam), \quad \Kg(\lam) = (\Mg(\lam)+S_1)^{-1}. \eqn \bglm \lblm(7-4) Let $m(\lam)$ be as in \reflm(B-inv). Then modulo a good producer \bqn \lbeq(B-f) M_v \Mg(\lam)^{-1} M_v \equiv m(\lam)|v\ph\ra \la v\ph| \,. \eqn \edlm \bgpf By \reflm(7-3) $M_v \Kg(\lam)M_v $ is a good producer. \reflm(B-inv) implies \bqn \lbeq(B-fa) M_v \Kg(\lam)S_1B(\lam)^{-1}S_1\Kg(\lam)M_v =m(\lam)|M_v \Kg(\lam)\ph\ra \la M_v \Kg(\lam)^\ast \ph|\,. \eqn Let $w(\lam,x)= M_v \Kg(\lam)\ph(x)$. Since $\Kg(\lam)= (M_U + M_v G_0(\lam)M_v + S_1)^{-1}$ and $G_0(\lam)^\ast(x,y)= \overline{G_0(\lam)(x,y)}$, we have $\Kg(\lam)^\ast(x,y)= \overline {\Kg(\lam)(x,y)}$ and $M_v \Kg(\lam)^\ast \ph(x) = \overline{w(\lam,x)}$ (recall $\ph(x)$ is real). By \refeq(NMlam) and $D_0\ph = \ph$ \bqn \lbeq(B-3) w(\lam)= M_v\ph - g_1(\lam)\lam^2 M_v D_0P \ph - \lam^2 M_v D_0 \Mg_1 \ph -M_v D_0(\tilde{\Mg}_2(\lam)- \tilde{L}_1(\lam))\ph. \eqn Observe that the $\lam$-dependence of the first three terms on the right is explicit. Since $\ph \in \ax^{-2-\d/2} (L^2 \cap L^8)$ by virtue of \reflm(7-12), we have \\[2pt] (i) $M_v\ph =v\ph,\, M_v D_0P \ph ,\, M_v D_0 \Mg_1 \ph \in (L^1 \cap L^2)$ by \reflm(7-2) and \refeq(phDMp-2); \\[2pt] (ii) $M_v D_0(\tilde{\Mg}_2(\lam)- \tilde{L}_1)\ph\in \Og^{(3)}_{L^1\cap L^2} (h_{2+\ep_1}(\lam))$ for an $1<\ep_1<\d/2$ by \refeqs(phDMp-2,phDMp-3). \\[2pt] Thus, substituting \refeq(B-3) in $m(\lam)w(\lam) \otimes \overline{w(\lam)}$, we obtain from (i) and (ii) that \[ \refeq(B-fa)= m(\lam)|v \ph\ra \la v\ph| + \sum \m_{jk}(\lam)|\p_j \ra \la \overline{\p_k}| + \Og^{(3)}_{\Lg}(h_{\ep_1}(\lam))\,. \] with Mikhlin multipliers $\m_{jk}(\lam)$ and $|\p_j \ra \la \overline{\p_k}|\in \Lg$. Since $\ep_1>1$, \reflm(Funda) and \refprop(R-theo) imply that the last two terms on the right are good producers and \refeq(B-f) follows. \edpf \paragraph{\bf Proof of \refthb(main-theorem) (2a)} Since $W_{\pm}$ are isomtries of $L^2$, we may assume $p\not=2$. By virtue of \reflm(7-4), it suffices to prove \refth(main-theorem) (2a) for \bqn \lbeq(Wleq1) \W_{\leq{a}}^{(1)} u(x) = \int_0^\infty (G_0(-\lam)(v\ph\otimes v\ph) \Pi({\lam})u)(x)\m(\lam)\lam^{-1} \chi_{\leq {a}}(\lam)d\lam. \eqn By \refeq(WL) we have $\W_{\leq{a}}^{(1)}= \W^{(0,0)}_{\leq{a}}(\mu(\lam)v\ph\otimes v\ph)$. Since $ v\ph\otimes v\ph\in \Lg$ by virtue of \reflm(7-12) and $\m(\lam)$ is Mikhlin multiplier, \reflm(Funda) implies that $\W_{\leq{a}}$ is bounded in $L^p$ for $1<p<2$. We next prove that $\W_{\leq{a}}^{(1)}$ is unbounded in $L^p$ if $2<p<\infty$. Since $\tchi_{\leq{a}}(|D|)$ is bounded in $L^p$ for all $1\leq p \leq \infty$, it suffices to show this for $\tchi_{\leq{a}}(|D|)\W_{\leq{a}}^{(1)}$. 
By \refeq(WL) \bqn \lbeq(WL-sum) \tchi_{\leq{a}}(|D|)\W_{\leq{a}}^{(1)}u= \int_{\R^8} (v\ph)(z)(v\ph)(y) (\t_z K_{a,\leq}^{(0,0)}\t_{-y} \m(|D|)u) dz dy \eqn and \refeq(KT-2) implies that this is equal to $R_2u - R_1 u$ where $R_1$ and $R_2$ are the operators obtained from \refeq(WL-sum) by replacing $K_{a,\leq}^{(0,0)}$ by $(8\pi^2)^{-1}\tilde{T}_1$ and $(8\pi^2)^{-1}Q$ respectively where \begin{align} \tilde{T}_1 u(x) & = \int_{\R^4} (\t_z T_1 \t_{w}u)(x) \widehat{\tchi_{\leq{a}}}(z)\widehat{\chi_{\leq{a}}}(w)dz dw, \\ Q u(x)& =\left( \frac1{|x|^2}\ast \widehat{{\tchi}_{\leq{a}}}\right) \otimes \left(\frac1{|y|^2} \ast \widehat{\chi_{\leq {a}}})\right)\m(|D|)u(x). \end{align} Since that $T_1$ is bounded in $L^p$ for $2<p<\infty$ by \reflm(T), Minkowski's inequality implies so is $\tilde{T}_1$ and, hence, $R_1$. We show that $R_2$ is unbounded in $L^p$ for $2<p<\infty$. Let \[ F(x) = \int_{\R^4}\frac{(v\ph \ast \widehat{{\tchi}_{\leq {a}}})(z)}{|x-z|^2}dz, \ \ G(y) = \int_{\R^4}\frac{(v\ph \ast \widehat{\chi_{\leq{a}}})(z)}{|y-z|^2}dz, \ \ \ell(u)= \la u, \m(|D|) G \ra. \] Then, $F \in L^p\setminus\{0\}$ for $2<p<\infty$ and $R_2 u(x)= \int_{\R^8} F(x)G(y) \m(|D|)u(y)dy= \ell(u) F(x)$. It follows that, if $R_2$ were bounded in $L^p$, $\ell$ were bounded functional on $L^p$ and, by the Fourier inversion formula and Riesz' theorem, it must be that \[ \m(|D|)G(x)=\frac1{4\pi^2} \int_{\R^4}e^{ix\xi}\hat{G}(\xi)\m(|\xi|)d\xi \in L^q, \quad 1<q=p/(p-1)<2 \] which would imply by Hausdorff-Young's inequality that \bqn \lbeq(HY) \hat{G}(\xi)\mu(|\xi|)= \frac{\m(|\xi|)\widehat{v\ph}(|\xi|) \tchi(|\xi|)} {|\xi|^{2}}\in L^p. \eqn Since $\widehat{v\ph}\in C^{2}$ and and $|\m(\lam)|\geq C (|\log \lam|)^{-1}$ with $C>0$ for small $\lam>0$, \refeq(HY) could happen only when $\widehat{v\ph}(0)=\frac1{4\pi^2}\la v, \ph\ra =0$. But $S_1 P S_1 \not=0$ and $\la v, \ph\ra\not=0$. Hence $R_2$ must be unbounded in $L^p$ for any $2<p<\infty$. This completes the proof. \qed \section{The case $H$ has singularities of the second kind} \lbsec(second) In this section we prove \refth(main-theorem) (2b) assuming that $\ax^{\d} V \in (L^{1} \cap L^4)$, $\d=4+\ep$ {for an $\ep>0$} and that $H$ has singularities of the second kind at zero, viz. the projection $S_1$ onto $\Ker \Mg_0$ satisfies $S_1 P S_1\vert_{S_1\HL}=0$ which is equivalent to $S_1 P = P S_1 =0$ or $\la v, \ph\ra=\widehat{v\ph}(0)=0$ for all $\ph \in S_1\HL$. As in \refsec(generalities) we let ${\textrm{rank}}\, S_1=n$ and $\{\ph_1, \dots, \ph_n\}$ be an orthonormal basis of $S_1 \HL$. \bglm \lblm(S1P0) Let $\ph\in S_1\HL\setminus \{0\}$. Then, $\ph\in \ax^{-2-(\d/2)}(L^2\cap L^8)$. The function $u$ defined $u = N_0 v\ph$ is eigenfunction of $H$ with eigenvalue $0$. \edlm \bgpf \reflm(7-12) implies the first statement. Hence $\widehat{v\ph}\in H^{2+\d}(\R^4)$ and $\widehat{v\ph}(0)=0$. It follows that $|\xi|^{-2} \widehat{v\ph}(\xi)\in \HL$ and $u = N_0 v\ph \in \HL$. \reflm(7-12) implies $(-\lap + V)u(x)=0$. This proves the lemma. \edpf We study $\Mg(\lam)^{-1}$ for $0<\lam<4a$ for an arbitrarily small but fixed $0<a<1$. We apply \reflm(JN) for the pair $(A,S)=(\Mg(\lam), S_1)$. The following is Lemma 7.4 of \cite{EGG}. \bglm \lblm(10-1) If $H$ has singularities of the second kind at zero, then $S_1 \Mg_1 S_1\vert_{S_1\HL}$ is non-singular. Let $D_1$ denotes $(S_1 \Mg_1S_1\vert_{S_1\HL})^{-1}$. \edlm In what follows we denote $\Og_{\Hg_2}^{(j)}(\cdot)$ simply by $\Og^{(j)}(\cdot)$ for operators $T(\lam)$ in $S_1\HL$. 
We observe that $\ph_j \otimes \ph_k \in \Lg$ for $j,k=1, \dots, n$ and $T(\lam) \in \Og^{(j)}(f(\lam))$ if and only if \bqn \lbeq(observe) T(\lam) = \sum_{j,k=1}^n \a_{jk}(\lam) \ph_j \otimes \ph_k, \quad a_{jk}(\lam)\in \Og^{(j)}_{\C}(f(\lam))\,. \eqn \bglm \lblm(B-inv-2) The operator $B(\lam)$ for $(A,S)=(\Mg(\lam), S_1)$ is invertible and \bqn \lbeq(B-inv-2) B(\lam)^{-1}= \lam^{-2}(D_1 + \tilde{F}(\lam)), \quad \tilde{F}(\lam)\in \Og^{(3)}(\lam^2(\log \lam)^2)\,. \eqn \edlm \bgpf We recall that $\Kg(\lam)=(\Mg+ S_1)^{-1}$, \refeq(L1tL1-def) and \refeq(NMlam). Then, since $D_0S_1= S_1 D_0 = S_1$ and $S_1 P= PS_1=0$, we have on $B(\lam)= S_1 (\lam^2 \Mg_1 + \tilde{\Mg}_2(\lam)- \tilde L_1(\lam))S_1$ or \bqn B(\lam)= \lam^2 (S_1+ \lam^{-2}S_1 \tilde{\Mg}_2(\lam)S_1 D_1 - \lam^{-2}S_1 \tilde L_1(\lam)S_1 D_1)(S_1\Mg_1 S_1). \lbeq(BS1) \eqn Here $\lam^{-2} S_1\tilde L_1(\lam)S_1 D_1 = \Og^{(3)}(\lam^2\la \log \lam\ra^2)$ and $\lam^{-2}S_1 \tilde{\Mg}_2(\lam) S_1 = \Og^{(3)}(h_{2}(\lam))$ by virtue of \refeq(M-2a) with $\ep_1=2$ and that $v\ph\in \ax^{-2-\d}L^2$ for $\ph \in S_1\HL$, $\d>4$. Hence $B(\lam)$ is invertible and $B(\lam)^{-1}$ may be written in the form \refeq(B-inv-2). \edpf \bglm \lblm(Minv-case2) Modulo a good producer $M_v\Mg(\lam)^{-1}M_v \equiv \lam^{-2}M_v S_1 D_1 S_1 M_v$. \edlm \bgpf \reflm(JN) implies $\Mg(\lam)^{-1}= \Kg(\lam)+\Kg(\lam)S_1B^{-1}(\lam)S_1\Kg(\lam)$. $M_v\Kg(\lam)M_v$ is a good producer by \reflm(7-3). Let $\Ga(\lam)= (\lam^2 \Mg_1+ D_0^{-1}\tilde{\Mg}_2(\lam))D_0 -\tilde L_1(\lam)$. Then $\Ga(\lam)\in \Og^{(3)}_{\Hg_2}(\lam^2)$ and \refeq(NMlam), \reflm(S1P0) and \refeq(B-inv-2) imply \begin{align} & \Kg(\lam)S_1B^{-1}(\lam)S_1\Kg(\lam)= \lam^{-2}D_0(1 -\Ga(\lam))S_1 (D_1+\tilde{F}(\lam) )S_1 (1-\Ga(\lam)) \notag \\ & = \lam^{-2}S_1 D_1S_1 + \lam^{-2} S_1\tilde{F}(\lam)S_1 - \lam^{-2}D_0 \Ga(\lam)S_1 (D_1+ \tilde{F}(\lam))S_1 \notag \\ & - \lam^{-2}S_1 (D_1+ \tilde{F}(\lam))S_1 \Ga(\lam) + \lam^{-2}D_0 \Ga(\lam)S_1 (D_1+ \tilde{F}(\lam))S_1 \Ga(\lam) \,. \lbeq(32p) \end{align} We show that, if we sandwich the right of \refeq(32p) by $M_v$, all terms become good producers except $\lam^{-2}M_vS_1D_1S_1M_v$, which proves the lemma. \\[2pt] (i) Let $E_1(\lam)=\lam^{-2}M_vS_1 \tilde{F}(\lam)S_1 M_v$. By \refeq(B-inv-2), $E_1(\lam)= \sum_{j,k=1}^n (\log \lam)^2 \ta_{jk}(\lam) L_{jk}$ with $\ta_{jk}(\lam) \in \Og_{\C}^{(3)}(1)$ and $L_{jk}=(v\ph_j) \otimes (v\ph_k)\in \Lg$. Then, $\W_{\leq{a}}(E_1(\lam))=\sum_{j,k=1}^n \W^{(2,2)}_{\leq{a}}(\ta_{jk}(\lam)L_{jk})$ and is a good operator by virtue of \reflm(Funda) and $E_1(\lam)$ is a good producer (recall \refdf(Wtilde) for $\W_{\leq{a}}(\tilde\Ng(\lam))$). \noindent (ii) Let $E_2(\lam){=} \lam^{-2}M_v D_0 \Ga(\lam) S_1 D_1S_1 M_v $ and $c_{jk}= \la \ph_j, S_1\ph_k\ra$, $j,k=1, \dots, n$. Then, \bqn \lbeq(E-1) E_2(\lam) = \sum_{j,k=1}^n c_{jk} \lam^{-2}M_v D_0 \Ga(\lam)\ph_j \otimes (v\ph_k)\,. \eqn $\{c_{jk}\}$ is non-singular by \reflm(10-1) and $v\ph_k\in (L^1\cap L^2)$. We shall show \bqn \lam^{-2}M_v D_0 \Ga(\lam)\ph_j \in (L^1\cap L^2) + \Og^{(3)}_{L^1 \cap L^2}(\lam^2 \la \log\lam \ra^2)\,, \lbeq(Gamma-est) \eqn which will imply that $E_2(\lam)= \Lg + \Og^{(3)}_{\Lg}(\lam^2 \la \log\lam \ra^2)$ and $E_2(\lam)$ is a good producer by virtue of \refprop(R-theo) which holds actually with $\Og^{(2)}_{\Lg}(\lam^2 \la \log\lam \ra^2)$ in place of $\Og^{(2)}_{\Lg}(\lam^2 \la \log\lam \ra^2)$. 
We separately examine the operators on the right of \[ \lam^{-2}M_v D_0 \Ga(\lam)\ph_j= M_v D_0 \Mg_1 \ph_j + \lam^{-2}M_v D_0^{-1}\tilde{\Mg}_2(\lam)\ph_j- \lam^{-2}M_v D_0 \tilde L_1(\lam)\ph_j. \] and the following (a), (b) and (c) will jointly prove \refeq(Gamma-est). \noindent (a) $(M_v D_0 \Mg_1)\ph_j \in (L^1\cap L^2)$ by the first of \refeq(phDMp-2). \noindent (b) \reflm(M0-3) implies \refeq(phDMp-2) with $\m=\n=v$ and $\ep_1=2$ and, \refeq(phDMp-2) remains to hold if $D_0$ is replaced by $D_0^{-1}=\Mg_0 + S_1$ (see \refeq(7-1)). Hence, $\lam^{-2}M_v D_0^{-1}\tilde{\Mg}_2(\lam)\ph_j \in \Og^{(3)}_{L^1\cap L^2}(h_2(\lam))$. \noindent (c) $M_v \lam^{-2}\tilde L_1(\lam) \ph_j \in \Og^{(3)}_{L^1\cap L^2}(\lam^2 \la \log\lam \ra^2)$ by the remark below \refeq(phDMp-3). \noindent (iii) $E_3(\lam){=} \lam^{-2}M_v D_0 \Ga(\lam) S_1 \tilde{F}(\lam)S_1 M_v $ is given by \refeq(E-1) with $a_{jk}(\lam)$ of \refeq(B-inv-2) in place of $c_{jk}$. Then (a), (b) and (c) above imply that $E_3(\lam)\in \Og^{(3)}_{\Lg}(\lam^2 \la \log\lam \ra^2)$ and it is a good producer by virtue of \refprop(R-theo). \noindent (iv) $E_4(\lam){=} \lam^{-2} M_v S_1 (D_1+ \tilde{F}(\lam))S_1\Ga(\lam)M_v {=} \sum (c_{jk}+ a_{jk}(\lam))(v\ph_j) \otimes \lam^{-2}M_v \Ga(\lam)^\ast \ph_k$ is the sum of (almost) the adjoints of operators studied in (ii) and (iii). Then, the argument similar to the one used for proving \refeq(Gamma-est) impies that $\lam^{-2}M_v \Ga(\lam)^\ast \ph_k$ which is equal to $M_v(D_0 \Mg_1 + \lam^{-2}D_0 \tilde{\Mg}_2(\lam)^\ast D_0^{-1} + \lam^{-2}\tilde L_1(\lam)^\ast)\ph_k$ satisfies \bqn \lbeq(Gamma-ast) \lam^{-2}M_v \Ga(\lam)^\ast \ph_k\in (L^1 \cap L^2)+ \Og^{(3)}_{L^1\cap L^2}(\lam^2 \la \log\lam \ra^2). \eqn Hence, $E_4(\lam)$ is a good producer as in (iv) by virtue of \refprop(R-theo). \noindent (v) Finally we need show that $\lam^{-2}M_v D_0 \Ga(\lam)S_1 (D_1+ \tilde{F}(\lam))S_1 \Ga(\lam)M_v$ which is equal to \[ \sum (c_{jk}+ a_{jk}(\lam) |\lam^{-2}vD_0\Ga(\lam) \ph_j\ra \la M_v \Ga(\lam)^\ast \ph_k|\, \] is a good producer. However, this is obvious from estimates \refeq(Gamma-est) and \refeq(Gamma-ast) and \refprop(R-theo). This completes the proof of the lemma. \edpf \paragraph{\bf Proof of \refthb(main-theorem) (2b)} For shortening formulas we denote $f_j = v\ph_j, \ \ j=1, \dots, n$. \reflm(7-12) implies $f_j \in \ax^{-2-\d}(L^1 \cap L^4)$ and $PS_1=S_1P=0$ does \bqn \lbeq(fj) \int_{\R^4}f_j(x) dx =0, \ \ j=1, \dots, n. \eqn \noindent (i) We have $M_vS_1 D_1 S_1 M_v = \sum_{j,k=1}^n c_{jk} f_j \otimes f_k $ and thanks to \reflm(Minv-case2) \begin{align} \W_{\leq{a}} u(x) & \equiv \int_0^\infty (G_0(-\lam) \lam^{-2}M_vS_1 D_1 S_1 M_v \Pi(\lam)u)(x) \chi_{\leq a}(\lam) \lam d\lam \notag \\ & = \sum_{j,k=1}^n c_{jk}\int_{\R^4\times \R^4} f_j(z)f_k(w) (\t_z K_{a}^{(0,0)}\t_{-w} u)(x) dz dw \lbeq(chuto-1) \end{align} modulo a good operator. Then \refeq(WL) implies \bqn \W_{\leq{a}} u(x) \equiv \sum_{j,k=1}^n c_{jk}\W^{(0,0)}_{\leq{a}}(f_j \otimes f_k)u(x) \lbeq(chuto-2) \eqn and $\W_{\leq {a}}$ is bounded in $
L^p$ for $1<p<2$ by virtue of \reflm(Funda). \noindent (ii) We next show that the cancellation property \refeq(fj) widens the range of $p$ to $1<p<4$ for which $\W_{\leq {a}}$ is bounded in $L^p$. By the result in (i) we have only to show that $\W_{\leq {a}}$ is bounded in $L^p$ for $2<p<4$. Recall that $\tilde{\W}^{(0,0)}_{a,\leq}$ and $\tilde{\W}^{(0,0)}_{a,\geq}$ are defined by \refeq(WL) with $K^{(0,0)}_{a,\leq}$ and $K^{(0,0)}_{a,\geq}$ respectively in place $K_{a}^{(0,0)}$. Substituting $K_{a}^{(0,0)}=K^{(0,0)}_{a,\leq}+ K^{(0,0)}_{a,\geq}$ in \refeq(chuto-1), we have \bqn \W^{(0,0)}_{\leq a}(f_j \otimes f_k)= \tilde{\W}^{(0,0)}_{a,\leq}(f_j \otimes f_k) + \tilde{\W}^{(0,0)}_{a,\geq}(f_j \otimes f_k) {=} {\W}_{jk,\leq a}+ {\W}_{jk,\geq a} \lbeq(chuto-3) \eqn where definitions should be obvious. By virtue of \reflm(Kgeq) \begin{align} \W_{jk,\geq a}u(x) & \equiv \frac1{4\pi^2}\int_{\R^4\times \R^4} f_j(z)f_k(w) \t_z ((\widehat{\mu_{1,a}} \otimes \n^{(0,0)}_{a})\t_{-w} u)(x) dz dw \notag \\ & = \frac1{4\pi^2}(f_j\ast \widehat{{\m}_{1,a}})(x) \la {f}_k \ast \n^{(0,0)}_a, u\ra \lbeq(chu-3) \end{align} modulo a good operator. Since $f_j \in \ax^{-2-\d}(L^1 \cap L^4)$ we have $f_j\ast \widehat{\m_{1,a}} \in L^p$ for $1 \leq p \leq \infty$ from \refeq(mu-1-2). Recalling the definition of $\n^{(0,0)}_a$ in \reflm(mu-nu) and observing that $\Fg(\xi_j |\xi|^{-2})(x)$ is homogenous of order $3$ and $\widehat{\chi_{\leq a}}\in \Sg(\R^4)$, we see that \[ (\nabla_y\n^{(0,0)}_a)(y) = \frac{-i}{4\pi^2}\int_{\R^4} e^{-iy\xi}\chi_{\leq a}(\xi) \xi |\xi|^{-2} d\xi \in L^q, \quad \frac43< q\leq \infty. \] Then, since $\int_{\R^4} f_k(x)dx=0$, Taylor's formula and Minkowki's inequality imply \[ (f_k \ast \n^{(0,0)}_a)(y)= - \int_0^1 \left(\int_{\R^4}f_k(z)(z\cdot \nabla_y\n^{(0,0)}_a)(y-\th z) dz \right)d\th \in L^q, \quad \frac43< q\leq \infty. \] It follows that $\W_{jk,\geq{a}}$ is bounded in $L^p$ for $1\leq p<4$ for $j,k=1,\dots, n$. We next show that $\W_{jk,\leq{a}}$, $j,k=1,\dots, n$, are bounded in $L^p$ for $2<p<4$. We denote slightly formally the integral kernel of $T_1$ by $(x^2+y^2-i0)^{-1}|y|^{-2}$ and define \begin{align*} & K_{a,\leq,1}^{(0,0)}(x,y)= -\left(\frac{1}{8\pi^3(x^2-y^2+i0)y^2} \right) (\widehat{\tchi_{\leq a}}(x)\otimes \widehat{\chi_{\leq a}}(y)), \\ & K_{a,\leq,2}^{(0,0)}(x,y) = \left(\frac{1}{8\pi^3 x^2 y^2}\right) (\widehat{\tchi_{\leq a}}(x)\otimes \widehat{\chi_{\leq a}}(y)), \\ & \W_{jk,\leq {a}}^{(\ell)}u(x)=\int_{\R^4\times \R^4} f_j(z)f_k(w) (\t_z K^{(0,0)}_{a,\leq, \ell}\t_{-w} u)(x) dz dw, \quad \ell=1,2 \end{align*} so that $\W_{jk,\leq{a}}= \W_{jk,\leq {a}}^{(1)}+ \W_{jk,\leq {a}}^{(2)}$ by virtue of the first equation of \refeq(KT-2). By virtue of \reflm(T) $\W_{jk,\leq{a} }^{(1)}$ is bounded in $L^p$ for $2<p<\infty$. By the cancellation property \refeq(fj) we have as previously that, for $4/3<q \leq \infty$, \bqn \lbeq(wjka2) \W_{jk,\leq{a} }^{(2)}(x,y)=\frac{1}{8\pi^3} \left( \frac1{x^2}\ast (\widehat{\tchi_{\leq {a}}}\ast f_j) \right) \otimes \left( \frac1{y^2}\ast (\widehat{\chi_{\leq {a}}}\ast f_k)\right) \in L^q \otimes L^q\,. \eqn Hence $\W_{jk,\leq{a} }^{(2)}$ is bounded in $L^p$ for $4/3< p<4$ and $\W_{jk,\leq{a}}$ is bounded in $L^p$ for $2<p<4$. \noindent (iii) Next we show that if all $\ph \in S_1\HL$ satisfy the extra cancellation property \bqn \lbeq(but) \int_{\R^4} x_j v(x)\ph(x) dx=0, \ 1\leq j \leq 4,\ \ph \in S_1\HL, \eqn then $\W_{\leq{a}}$ is a good operator. 
It suffices by virtue of the result (i) to prove that $\W_{\leq{a}}$ is bounded in $L^p$ for $2<p<\infty$. In view of \refeq(chuto-2) and \refeq(chuto-3), we do this for $\W_{jk,\leq {a}}^{(\ell)}$, $\ell=1,2$ and $1\leq j,k \leq n$. If \refeq(fj) and \refeq(but) are satisfied, then Taylors formula produces \[ (f_k \ast \n^{(0,0)}_{a})(y) = \frac12 \int_0^1 (1-\th) \left( \int_{\R^4} f_k (z)\la z, (\nabla_y^2 \n^{(0,0)}_{a})(y-\th z)z\ra dz \right) d\th \] and, since $\xi_j \xi_l |\xi|^{-2}$ is homogeneous of order $0$, \[ (\pa_j \pa_l \n^{(0,0)}_{a})(y) = \frac{-1}{4\pi^2}\int_{\R^4} e^{-iy\xi}\chi_{\leq a}(\xi) \xi_j \xi_l |\xi|^{-2} d\xi \in L^q, \quad 1< q \leq \infty. \] It follows that $f_k \ast \n^{(0,0)}_{a}\in L^q$, $k=1, \dots, n$, for $1<q\leq \infty$ and \refeq(chu-3) implies that $\W_{jk,\geq{a}}$ are good operators. Under conditions \refeq(fj) and \refeq(but), \refeq(wjka2) can be likewise improved for all $ L^q$ for $1<q\leq \infty$ and $\W_{jk,\leq{a} }^{(2)}$ becomes good operators. Since $\W_{jk,\leq{a} }^{(1)}$ is bounded in $L^p$ for $2<p<\infty$ as is shown above, $\W_{jk, \leq{a}}$ is also bounded in $L^p$ for $2<p<\infty$. \noindent (iv) We finally show that $\W_{\leq{a}}$ is unbounded in $L^p$ for any $4<p<\infty$ unless \refeq(but) is satisfied for all $\ph \in S_1 \HL$. Of course it suffices to show this for an $a>0$ small enough. Since $\tchi_{\geq a}(|D|)$ is a good operator, \refeq(chuto-2) and \refeq(chuto-3) imply that it suffices to prove this for \[ \sum_{j,k=1}^n c_{jk} \W_{jk,\geq {a}} = \sum_{k=1}^n \p_{k,a} \otimes (f_k \ast \n^{(0,0)}_a), \quad \p_{k,a}= \sum_{j=1}^n c_{jk}(f_j\ast \widehat{\m_{1,a}})\,, \] where $\p_{k,a}\in L^p$ for $1\leq p \leq \infty$ and $1\leq k \leq n$ as is shown above. We first show that $\p_{1,a}, \dots, \p_{n,a}$ are linearly independent in $L^p(\R^4)$ for any $1<p<\infty$ when $a>0$ is sufficiently small. Suppose the contrary. Then, since $(c_{jk})$ is non-singular, $f_1\ast \widehat{{\m}_{1,a}}, \dots, f_n\ast \widehat{{\m}_{1,a}}$ are linearly dependent and hence, via Fourier transform, so are $\widehat{f_1}(\xi)\tchi_{\geq a_m}(\xi)|\xi|^{-2}, \dots, \widehat{f_n}(\xi)\tchi_{\geq a_m}(\xi)|\xi|^{-2}$. Then, for any decreasing sequence $a_1, a_2 \dots \to 0$, there exist null sets $N_1, N_2, \dots$ such that for any $m=1,2, \dots$ there exists $(\a_{m1}, \dots, \a_{mn} )\in {\mathbb S}^{n-1}$ such that $\sum {\a_{mj}}\widehat{f_j}(\xi)\tchi_{\geq a_m}(\xi)|\xi|^{-2} =0$ for $\xi \notin N_m$. Set $N= \cup_{m=1}^\infty N_m$. Then $N$ is still a null set and \bqn \lbeq(Nset) \sum {\a_{mj}} \widehat{f_j}(\xi)=0, \quad \xi \not\in N \ \mbox{and} \ |\xi|\geq a_m. \eqn Let, for $m=1,2, \dots$, $S_m$ the set of $(\a_{m1}, \dots, \a_{mn} )\in {\mathbb S}^{n-1}$ for which \refeq(Nset) is satisfied. Then $S_1 \subset S_2\supset \dots$ and they are non-empty compact subset. Hence $S= \cap_{m=1}^\infty S_m\not=\emptyset$ and, for $(\a_{1}, \dots, \a_{n} )\in S$, $\sum {\a_{j}} \widehat{f_j}(\xi)=0$ for all $\xi\not\in N$ and $f_1, \dots, f_n$ must be linearly dependent. But, this is a contradiction since $\ph_1, \dots, \ph_n$ are orthonormal and $\ph_j + w N_0 f_j=0$, $j=1, \dots, n$. Thus $\p_{1,a}, \dots, \p_{n,a}$ are linearly independent for some $a>0$. 
Then, if $\W_{\leq{a}}$ were bounded in $L^p$ for a $4<p<\infty$, the linear functional $\la u, (v\ph_k)\ast \n^{(0,0)}_{a}\ra$ must be continuous in $L^p$ for all $k=1, \dots, n$ by the Hahn-Banach theorem and hence $(v\ph_k)\ast \n^{(0,0)}_{a}\in L^q$, $q=p/(p-1)$ by the Riesz theorem. It follows $(v\ph)\ast \n^{(0,0)}_{a}\in L^q$ for all $\ph\in S_1\HL$. Then, since $1<q<4/3$, it must be by the Hausdorff-Young inequality that \[ (2\pi)^{-2}\Fg((v\ph)\ast \n^{(0,0)}_{a})(\xi)= \widehat{v\ph}(\xi)\widehat{\n^{(0,0)}_{a}}(\xi)= \widehat{v\ph}(\xi)|\xi|^{-2}\chi_{\leq a}(\xi) \in L^p. \] But this is impossible for any $4<p<\infty$ if $\ph\in S_1\HL$ does not satisfy \refeq(but) for a $j$ because $\pa_j \widehat{v\ph}(0)\not=0$ and $|\widehat{v\ph}(\xi)||\xi|^{-2}\geq C|\xi|^{-1}$ for a constant $C>0$ in an open conic subset set $\{|\xi|\leq a \colon \xi_j >\ep |\xi|\}$ for a $a>0$ and $\ep>0$. Thus, $\W_{\leq {a}}$ is unbounded in $L^p(\R^4)$ for any $4<p\leq \infty$ \qed \section{The case $H$ has singularities of the third kind} In this section we assume $\ax^{\d} V(x)\in (L^1 \cap L^4)$, $\d=4+\ep$ for an $\ep>0$ and that $H$ has singularities of the third kind: $S_1 P S_1\vert_{S_1\HL}$ is singular and $S_1 P S_1\vert_{S_1\HL}\not=0$. Let $S_2$ be the orthogonal projection in $S_1 \HL$ onto ${\Ker} (S_1 P S_1\vert_{S_1\HL})$ and denote $S_2 S_1$ in $\HL$ again $S_2$, viz. we consider $S_2$ is an orthogonal projection in $\HL$ which vanishes on $(S_2\HL)^{\perp}$. It is shown in Corollary 7.3 of \cite{EGG} that ${\textrm {rank}}\, (S_1 \ominus S_2) =1$. We take the orthonormal basis $\{\ph_1, \dots, \ph_n\}$ of $S_1\HL$ such that $\ph_1$ spans $(S_1 \ominus S_2)\HL$ and $\{\ph_2, \dots, \ph_n\}$ does $S_2\HL$. We denote $S_1 \ominus S_2= S_2^\perp$. We have \bqn \lbeq(cancellation-3) \int_{\R^4} v \ph_1 dx \not=0, \quad \int_{\R^4} v \ph_2 dx = \cdots = \int_{\R^4} v \ph_n dx =0. \eqn If we define $u_j(x) = N_0 v\ph_j(x)$, $u_2, \dots, u_n$ are eigenfunctions of $H$ with the eigenvalue $0$. We study $\Mg(\lam)^{-1}$ for small $\lam>0$ by using \reflm(JN), however, $B(\lam)^{-1}$ via the Feshbach formula. Recall that $B(\lam) = S_1 - S_1 \Kg(\lam) S_1$ and that $\Kg(\lam)=(\Mg(\lam)+ S_1)^{-1}$ has the expression \refeq(NMlam). Let \begin{gather*} \ta_{11}= S_2^\perp(g_1(\lam)P + \Mg_1)S_2^\perp, \ \ \ta_{12} = S_2^\perp \Mg_1 S_2, \ \ \ta_{21} = S_2 \Mg_1 S_2^\perp, \\ \ta_{22} = S_2 \Mg_1 S_2, \ \ L_2(\lam) = S_1(\tilde{\Mg}_2(\lam)+ \tilde{L}_1)S_1\,. \end{gather*} Then, in the decomposition $S_1\HL= S_2^\perp \HL \oplus S_2 \HL$ we have \begin{gather} \lbeq(B-matrix) B(\lam) = B_1(\lam) +L_2(\lam), \quad B_1(\lam)= \lam^2 \begin{pmatrix} \ta_{11} & \ta_{12} \\ \ta_{21} & \ta_{22} \end{pmatrix}\,. \end{gather} Note that $\ta_{11}$ is one dimensional and $\ta_{12}, \ta_{21}$ and $\ta_{22}$ are $\lam$-independent. \bglm We have for small $0<\lam$ that \begin{gather} \ta_{11}= (g_1(\lam)|\la \ph_1, v_1\ra|^2 + \la \ph_1, \Mg_1 \ph_1\ra)|\ph_1\ra \la \ph_1| \not = 0 \,. \lbeq(ta11) \\ L_2(\lam) = \sum_{j,k=2}^n e_{jk}(\lam) |\ph_j\ra \la \ph_k|, \quad e_{jk}(\lam)=\Og^{(3)}(\lam^4\la \log \lam\ra^2) \lbeq(L2lam4) \end{gather} \edlm \bgpf \refeq(ta11) is obvious. \reflm(7-12) implies $v\ph_j=\ax^{-2-\d}L^2$ for $j=1, \dots, n$ and, by virtue of \refeq(M-2a), $\la \ph_j, \tilde{\Mg}_2(\lam)\ph_k\ra= \Og^{(3)}(h_{4}(\lam))$. Hence $\la \ph_j |\tL_1| \ph_k\ra = \Og^{(3)}(\lam^4\la \log \lam \ra^2)$. 
\edpf It is shown in Lemma 7.4 of \cite{EGG} that $\ta_{22} = S_2 \Mg_1 S_2$ is invertible in $S_2 \HL$. It follows from \refeq(ta11) that $\ta_{11}- \ta_{12}\ta_{22}^{-1}\ta_{21} = g_1(\lam)(|\la \ph_1, v_1\ra|^2 + cg_1(\lam)^{-1}) |\ph_1\ra \la \ph_1|$ with a constant $c$ and $\ta_{11}- \ta_{12}\ta_{22}^{-1}\ta_{21}$ is invertible for small $\lam>0$ with the inverse \bqn d(\lam)= d_1(\lam)|\ph_1\ra \la \ph_1|, \quad d_1(\lam)= \frac{1|}{g_1(\lam)(|\la \ph_1, v_1\ra|^2 + cg_1(\lam)^{-1})}\,. \lbeq(d1-def) \eqn $d_1(\lam)$ is a Mikhlin multiplier. Then, $B_1(\lam)$ is invertible by \reflm(FS) and \begin{gather} B_1(\lam)^{-1}= \lam^{-2}({S_2(S_2 \Mg_1 S_2)^{-1}S_2}+ {d_1(\lam)}Q)\,, \lbeq(B1-inv) \\ Q=\begin{pmatrix} |\ph_1\ra \la \ph_1| & -|\ph_1\ra \la \ph_1|\ta_{12}\ta_{22}^{-1} \\ -|\ta_{22}^{-1}\ta_{21}\ph_1\ra \la \ph_1 & |\ta_{22}^{-1}\ta_{21}\ph_1\ra \la \ph_1| \ta_{12}\ta_{22}^{-1} \end{pmatrix}. \lbeq(B1-inv-a) \end{gather} Note that $Q$ is $\lam$-independent, ${\rm rank}\, Q=2$ and $Q= (\ph_1 \oplus \tph) \otimes (\ph_1 \oplus \tph)$ with $\tph=- \ta_{22}^{-1}\ta_{21}\ph_1$. Then, \refeq(L2lam4) and \refeq(B1-inv) implies $B(\lam)= (1 +L_2(\lam)B_1(\lam)^{-1})B_1(\lam)$ is invertible and $B(\lam)^{-1}$ is given by \bq B_1(\lam)^{-1} + L_3(\lam), \ L_3(\lam)= - B_1(\lam)^{-1}L_2(\lam)B_1(\lam)^{-1} + \Og^{(3)}(\lam^2\la \log\lam\ra^4). \lbeq(L3def) \eq Hece $\Mg(\lam)^{-1}$ exists by \reflm(JN) and $\Mg(\lam)^{-1}= \Kg(\lam) + \Kg(\lam)S_1 B(\lam)^{-1}S_1 \Kg(\lam)$. \bglm \lblm(last-reduction) Modulo a good producer $M_v \Mg(\lam)^{-1}M_v \equiv M_vS_1 B_1(\lam)^{-1} S_1 M_v$\,. \edlm \bgpf We have $M_v\Mg(\lam)^{-1}M_v= M_v\Kg(\lam)M_v+ M_v\Kg(\lam)S_1 B(\lam)^{-1}S_1 \Kg(\lam)M_v$. By \reflm(7-3) again $M_v \Kg(\lam) M_v$ is a good producer. Substituting $B(\lam)^{-1}$ by \refeq(L3def), we see that the second term on the right is equal to $ E_1(\lam) + E_2(\lam)$ where \[ E_1(\lam)=M_v\Kg(\lam)S_1 B_1 (\lam)^{-1}S_1 \Kg(\lam)M_v, \quad E_2(\lam) = M_v\Kg(\lam)S_1 L_3 (\lam)S_1 \Kg(\lam)M_v\,. \] \noindent (i) We first show that $E_2(\lam)$ is a good producer. We obtain by combining \refeq(L2lam4), \refeq(B1-inv) and \refeq(L3def) that $S_1 L_3(\lam)S_1= \sum_{j,k=1}^n (\log\lam)^2 g_{jk}(\lam) |\ph_j\ra \la \ph_k|$ with Mikhlin multipliers $g_{jk}(\lam)\in \Og^{(3)}_{\C}(1)$, $j,k=1, \dots, n$. We then recall \refeq(B-3) which implies that for $j=1, \dots, n$ \begin{gather} M_v \Kg(\lam)\ph_j= \p_{j0} + g_1 (\lam)\lam^2 \p_{j1}(x) + \lam^2 \p_{j2}(x) + \p_{j3}(\lam,x), \lbeq(vNlamph) \\ \p_{j0},\, \p_{j1},\, \p_{j2} \in (L^1\cap L^2), \quad \p_{j3} \in \Og^{(3)}_{L^1\cap L^2}(\lam^4 (\log\lam)^2)\,. \lbeq(vNlamph-1) \end{gather} Since the integral kernel of $\Kg(\lam)^\ast$ is the complex conjugate of $\Kg(\lam)$, $M_v\Kg(\lam)^\ast \ph_k$, $k=1, \dots, n$ is expressed similarly. Hence $E_2(\lam)= \sum (\log{\lam})^2 g_{jk}(\lam)(\Lg + \Og^{(3)}_{\Lg}(h_2(\lam)))$ and \reflm(Funda) for $(j,\ell)=(2,2)$ and \refprop(R-theo) imply that $E_2(\lam)$ is a good producer. \noindent ii) Define $B_2(\lam)= S_1 B_1 (\lam)^{-1}S_1$. By virtue of \refeqs(B1-inv,B1-inv-a) we have \bqn \lbeq(S1-coeff) B_2(\lam)= \sum_{j,k=1}^n \lam^{-2} f_{jk}(\lam)|\ph_j \ra \la \ph_k|, \quad f_{jk}(\lam) \in \Og^{(3)}_{\C}(1). 
\eqn Substituting $\Kg(\lam)$ by \refeq(NMlam) and using $D_0 S_1= S_1 D_0 = S_1$, we express $E_1(\lam)= E_{11}(\lam)+ E_{12}(\lam) + E_{13}(\lam) + E_{14}(\lam)$ where \begin{gather*} E_{11}= M_v (D_0 - D_0 L_1(\lam)) B_2(\lam)(D_0 - D_0 L_1(\lam))M_v, \ E_{12}= M_v \Kg(\lam) B_2(\lam) \tilde L_1(\lam)M_v, \\ E_{13}= M_v D_0 \tilde L_1(\lam) B_2(\lam) \Kg(\lam) M_v, \quad E_{14}= M_v D_0 \tilde L_1(\lam) B_2(\lam) \tilde L_1(\lam)M_v . \end{gather*} \noindent (a) We first show that $E_{12}(\lam)$ is a good producer. $E_{12}= \sum \lam^{-2}f_{jk}(\lam) (M_v \Kg(\lam)\ph_j) \otimes (M_v \tilde L_1(\lam)^\ast \ph_k)$ by \refeq(S1-coeff). We can apply \refeqs(vNlamph,vNlamph-1) to $M_v \Kg(\lam)\ph_j$ and we have $M_v \tilde L_1(\lam)^\ast \ph_k \in \Og^{(3)}_{\Hg\cap \Lg}(\lam^4(\log\lam)^2)$ by virtue of \refeqs(L1tL1-def,NMlam). It follows $E_{12}(\lam) \in \Og^{(3)}_{\Lg}(\lam^2(\log\lam)^2)$ and $E_{12}(\lam)$ is a good producer by virtue of \refprop(R-theo). Similar argument implies $E_{13}(\lam)$ and $E_{14}(\lam)$ are both good producers. \noindent (b) We have $E_{11}(\lam)=M_v B_2(\lam)M_v+ E_3(\lam)$ where $E_3(\lam)$ is defined by \[ E_3(\lam)= - M_vB_2 (\lam) L_1(\lam)M_v - M_vD_0 L_1(\lam)B_2 (\lam)M_v + M_vD_0 L_1(\lam) B_2 (\lam)L_1(\lam)M_v. \] We prove $E_3(\lam)$ is a good producer to finish the proof. Recalling \refeq(L1tL1-def) and that $v\ph \in \ax^{-2-\d}(L^1\cap L^4)$ for $\ph \in S_1 \HL$, we obtain as previously that, for $j=1, \dots, n$, \begin{gather*} \lam^{-2} M_vD_0 L_1(\lam) \ph_j(x)= g_1 (\lam)\tp_{j1}(x) + \tp_{j2}(x) + \tp_{j3}(\lam,x),\\ \tp_{j1}, \ \tp_{j2} \in (L^1\cap L^2), \ \tp_3(\lam) \in \Og^{(3)}_{L^1\cap L^2}(h_{2}(\lam)). \end{gather*} An obvious modification of the argument shows that similar expressions are satisfied by $\lam^{-2} M_v L_1(\lam)^\ast\ph_k$ for $k=1, \dots, n$. Combining these with \refeq(S1-coeff) produces the expression for $E_3(\lam)$: \[ E_3(\lam)= \sum_{j,k=1}^n (\log\lam)^2 h_{jk}(\lam)\Lg_{jk} + \Og^{(3)}_{\Lg}(\lam^2 \la \log\lam \ra^2) \] with $\Lg_{jk} \in \Lg$ and $h_{jk}(\lam)\in \Og_{\C}^{(3)}(1)$ for $1\leq j,k \leq n$. Thus, $E_3(\lam)$ is a good producer by \reflm(Funda) and \refprop(R-theo). \edpf \paragraph{\bf Proof of \refthb(main-theorem) (2c)} In view of \reflm(last-reduction), it suffices to prove that the operator $Z$ defined by \bqn \lbeq(Z) Zu(x)= \int_0^\infty G_0(-\lam) M_{v}S_1 B_1(\lam)^{-1}S_1 M_v \Pi(\lam)u(x) \chi_{\leq a}(\lam) \lam d\lam \eqn is bounded in $L^p$ for $1<p <2$ and unbounded for $2<p<\infty$. We substitute \refeq(B1-inv) for $B_1(\lam)^{-1}$, which makes $Z= Z_1+Z_2$ where $Z_1$ and $Z_2$ are produced by $\lam^{-2}S_2(S_2 \Mg_1S_2)^{-1}S_2$ and $\lam^{-2}d_1(\lam)(v(\ph_1+\tph))\otimes (v(\ph_1+\tph))$. We may repeat the argument of the previous section \refsec(second) to $Z_1$ with obvious modifications, which proves that $Z_1$ is bounded in $L^p$ for $1<p<4$ and unbounded for $4<p<\infty$ in general and, it becomes a good operator if all $\ph \in S_2\HL$ satisfy the extra cancellation property $\la v, y_j\ph\ra=0$, $j=1,\dots, 4$. The operator $Z_2$ is the same as the one defined by \refeq(Wleq1) if $\ph$ and $\m(\lam)$ are replaced by $\ph_1+\tph$ and $d_1(\lam)$ respectively. Then, the repetition of the argument below \refeq(Wleq1) implies that $Z_1$ is bounded in $L^p$ for $1<p<2$ and is unbounded for $2<p<\infty$. This completes the proof of \refth(main-theorem). \qed
\section{Appendix} \label{sec:appendix} \input{convergence-proof} \subsection{GPU implementations} \paragraph{Edge contraction} We use a specialized implementation of edge contraction based on Thrust~\cite{Thrust}, which is faster than performing it via general sparse matrix-matrix multiplication routines and, most importantly, has a smaller memory footprint, allowing us to run larger instances. We store the adjacency matrix $A = (I, J, C)$ in COO format, where $I$, $J$, $C$ contain the row indices, column indices and edge costs, respectively. The pseudo-code is given in Algorithm~\ref{alg:gpu-edge-contraction}. \begin{algorithm} \DontPrintSemicolon \caption{\texttt{GPU Edge-Contraction}} \label{alg:gpu-edge-contraction} \KwData{Adjacency matrix $A = (I, J, C)$, Contraction mapping $f : V \rightarrow V'$} \KwResult{Contracted adjacency matrix $A' = (I', J', C')$} \tcp{Assign new node IDs} $\hat{I}(e) = f(I(e)),\ \forall e$ \; $\hat{J}(e) = f(J(e)),\ \forall e$ \; \texttt{COO-Sorting}($\hat{I}, \hat{J}, C$)\; \tcp{Remove duplicates and add costs} $(I', J', C') =$ \texttt{reduce\_by\_key}(\; \nonl\Indp \texttt{keys}$ = (\hat{I}, \hat{J}), $\texttt{values}$ = C, $ \texttt{acc} $= +)$ \; \end{algorithm} \paragraph{Conflicted cycles} For detecting conflicted cycles we use specialized CUDA kernels. The pseudo-code for detecting 5-cycles is given in Algorithm~\ref{alg:gpu-5-cycles}. The algorithm searches for conflicted cycles in parallel in the positive neighbourhood $\NE^+$ of each negative edge. To efficiently check for intersection in Line~\ref{alg:check-intersect-5-cycles} we store the adjacency matrix in CSR format. \begin{algorithm} \DontPrintSemicolon \caption{\texttt{GPU Conflicted-5-Cycles}} \label{alg:gpu-5-cycles} \KwData{Adjacency matrix $A=(V,E,c)$} \KwResult{Conflicted cycles $Y$ in $A$} \tcp{Partition edges based on costs} $E^+ = \{ij \in E: c(ij) > 0\}$\; $E^- = \{ij \in E: c(ij) < 0\}$\; $Y = \varnothing $ \; \tcp{Check for attractive paths} \For{$v_1v_3 \in \NE^+(v_0) \times \NE^+(v_4): v_0v_4 \in E^-$ in parallel}{ \label{alg:check-intersect-5-cycles} \For{$v_2 \in \NE^+(v_1) \cap \NE^+(v_3)$}{ $Y = Y \cup \{v_0, v_1, v_2, v_3, v_4 \}$ \; } } \end{algorithm} \section{Conclusion} \label{sec:conclusion} We have demonstrated that multicut, an important combinatorial optimization problem for machine learning and computer vision, can be effectively parallelized on the GPU, producing solutions that are comparable to (or better than) those produced by state-of-the-art efficient heuristics running on the CPU. We expect the runtime gap to widen further in the future with the ever-increasing computing power of GPUs as compared to CPUs. We hope that our work will enable more compute-intensive applications of multicut, where until now the slower serial CPU codepath has hindered its adoption. \subsection{Proof of Theorem~\ref{thm:edge-triangle-agreement-convergence}} The proofs are a condensation and adaptation of the corresponding proofs in~\cite{tourani2018mplp++,kolmogorov2005convergent,kolmogorov2014new}. Changes are necessary since Algorithm~\ref{alg:parallel-message-passing} solves a different problem and uses different message-passing updates and schedules than the algorithms from~\cite{tourani2018mplp++,kolmogorov2005convergent,kolmogorov2014new}.
\begin{definition}[$\epsilon$-optimal local solutions] For $e \in E$ define \begin{equation} \OO^{\epsilon}_e(\lambda) := \{x \in \{0,1\} : x \cdot c^{\lambda}_e \leq \min(0,c^{\lambda}_e) + \epsilon\} \end{equation} and for $t \in T$ \begin{equation} \OO^{\epsilon}_t(\lambda) := \{x \in \MCD : c^{\lambda}_t(x) \leq \min_{x' \in \MCD} c^{\lambda}_t(x') + \epsilon\} \end{equation} to be the $\epsilon$-optimal local solutions. \end{definition} Hence, $\OO^0_e(\lambda) = \overline{c^{\lambda}_e}$ for $e \in E$ and likewise $\OO^0_t(\lambda) = \overline{c^{\lambda}_t}$ for $t \in T$. \begin{definition}[$\epsilon$-tolerance] The minimal value $\epsilon(\lambda)$ for which $\mathcal{O}^{\epsilon}(\lambda)$ has edge-triangle agreement is called the $\epsilon$-tolerance. \end{definition} \begin{definition}[Algorithm Mapping] Let \begin{enumerate}[(i)] \item $\HH_{E \rightarrow T}(\lambda)$ be the Lagrange multipliers that result from executing lines~\ref{alg:edge-to-triangle-start}-\ref{alg:edge-to-triangle-end} in Algorithm~\ref{alg:parallel-message-passing}, \item $\HH_{T \rightarrow E}(\lambda)$ be the Lagrange multipliers that result from executing lines~\ref{alg:triangle-to-edge-start}-\ref{alg:triangle-to-edge-end} in Algorithm~\ref{alg:parallel-message-passing}, \item $\HH = \HH_{T \rightarrow E} \circ \HH_{E \rightarrow T}$ be one pass of Algorithm~\ref{alg:parallel-message-passing}, \item $\HH^i(\cdot) = \underbrace{\HH(\HH(\ldots(\HH(\cdot))\ldots))}_{i \text{ times}}$ be the $i$-fold composition of $\HH$. \end{enumerate} \end{definition} Note that $\HH$ is indeed a well-defined mapping even though Algorithm~\ref{alg:parallel-message-passing} is parallel, since the updates do not depend on the order in which they are processed. \begin{lemma} \label{lemma:e->t} Let $\alpha \in (0,1]$ and let $\lambda$ be Lagrange multipliers. Let $e \in E$ and $t \in T$ with $e \subsetneq t$. Define new Lagrange multipliers as \begin{equation} \lambda'_{t',e'} = \begin{cases} \lambda_{t',e'} - \alpha c^{\lambda}_e, & e = e', t = t' \\ \lambda_{t',e'} , & e \neq e' \text{ or } t \neq t' \end{cases} \end{equation} \begin{enumerate}[(i)] \item $LB(c^{\lambda}) \leq LB(c^{\lambda'})$. \item $\OO_e(c^{\lambda}) \subseteq \OO_e(c^{\lambda'})$. \item \label{lemma:e->t:locally-optimal-intersection} $LB(c^{\lambda}) < LB(c^{\lambda'}) \Leftrightarrow \OO_e(c^{\lambda}) \cap \Pi_{t,e}(\OO_t(c^{\lambda})) = \varnothing$. \item $LB(c^{\lambda}) = LB(c^{\lambda'}) \Rightarrow \OO_t(c^{\lambda'}) \subseteq \OO_t(c^{\lambda})$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item If $c^{\lambda}_e \geq 0$ then $LB(c^{\lambda})_e = LB(c^{\lambda'})_e$ and $LB(c^{\lambda})_t \leq LB(c^{\lambda'})_t$ since $c^{\lambda}_{t,e} \leq c^{\lambda'}_{t,e}$. If $c^{\lambda}_e < 0$ then $LB(c^{\lambda})_e = c^{\lambda}_e < (1-\alpha) c^{\lambda}_e = LB(c^{\lambda'})_e$ and $LB(c^{\lambda'})_t = \min_{y \in \MC_{\Delta}} c^{\lambda'}_t(y) \geq \min_{y \in \MC_{\Delta}} c^{\lambda}_t(y) + \alpha c^{\lambda}_e = LB(c^{\lambda})_t + \alpha c^{\lambda}_e$; since the edge bound gains exactly $-\alpha c^{\lambda}_e$ while the triplet bound loses at most $-\alpha c^{\lambda}_e$, the sum does not decrease. \item It holds that $c^{\lambda'}_e = (1-\alpha) c^{\lambda}_e$. Hence, if $\alpha = 1$ then $\OO_e(c^{\lambda'}) = \{0,1\}$ and the claim trivially holds. Otherwise $\OO_e(c^{\lambda'}) = \OO_e(c^{\lambda})$. \item Assume $\OO_e(c^{\lambda}) \cap \Pi_{t,e}(\OO_t(c^{\lambda})) = \varnothing$. Assume first that $\alpha = 1$. Then it must hold that $\abs{\OO_e(c^{\lambda})} = 1$.
Let $\{y_e^*\} = \OO_e(c^{\lambda})$ and $y^*_t \in \argmin_{y \in \MC_{\Delta}} c^{\lambda}_t(y)$. Let $y'_e \in \argmin_{y \in \{0,1\}} c^{\lambda'}_e y$ and $y'_t \in \argmin_{y \in \MC_{\Delta}} c^{\lambda'}_t(y)$ such that $y'_e = \Pi_e(y'_t)$ (this is possible since $\OO_e(c^{\lambda'}) = \{0,1\}$ for $\alpha = 1$). Then \begin{multline} LB(c^{\lambda})_e + LB(c^{\lambda})_t = c^{\lambda}_e y_e^* + c^{\lambda}_t(y^*_t) \\ < c^{\lambda}_e y'_e + c^{\lambda}_t(y'_t) \\ = c^{\lambda'}_e y'_e + c^{\lambda'}_t(y'_t) = LB(c^{\lambda'})_e + LB(c^{\lambda'})_t\,. \end{multline} For $\alpha < 1$ the result follows from the above and the concavity of $LB$. Assume now $\OO_e(c^{\lambda}) \cap \Pi_{t,e}(\OO_t(c^{\lambda})) \neq \varnothing$. Choose $y_e^* \in \OO_e(c^{\lambda})$ and $y_t^* \in \OO_t(c^{\lambda})$ such that $y^*_t(e) = y_e^*$. Then it holds that \begin{multline} LB(c^{\lambda})_e + LB(c^{\lambda})_t = c^{\lambda}_e y_e^* + c^{\lambda}_t(y^*_t) \\ = c^{\lambda'}_e y_e^* + c^{\lambda'}_t(y^*_t) \geq LB(c^{\lambda'})_e + LB(c^{\lambda'})_t \end{multline} Since $LB$ is non-decreasing, it follows that $LB(c^{\lambda}) = LB(c^{\lambda'})$. \item If $c^{\lambda}_e = 0$ there is nothing to show since $\lambda' = \lambda$. Assume that $c^{\lambda}_e > 0$. Then it must hold that $0 \in \Pi_{t,e}(\OO_t(c^{\lambda}))$ due to~\ref{lemma:e->t:locally-optimal-intersection}. Since $c^{\lambda'}_{t}(e) > c^{\lambda}_{t}(e)$ and all other costs stay the same, it holds that \begin{equation} y_t \begin{cases} \in \OO_t(c^{\lambda'}), & y_t \in \OO_t(c^{\lambda}), y_t(e) = 0 \\ \notin \OO_t(c^{\lambda'}), & y_t \notin \OO_t(c^{\lambda}), y_t(e) = 0 \\ \notin \OO_t(c^{\lambda'}), & y_t \in \OO_t(c^{\lambda}), y_t(e) = 1 \\ \notin \OO_t(c^{\lambda'}), & y_t \notin \OO_t(c^{\lambda}), y_t(e) = 1 \end{cases}\,. \end{equation} Hence, the result follows. The case $c^{\lambda}_e < 0$ can be proved analogously. \end{enumerate} \end{proof} \begin{lemma} \label{lemma:t->e} Let $\alpha \in (0,1]$ and let $\lambda$ be Lagrange multipliers. Let $e \in E$ and $t \in T$ with $e \subsetneq t$. Define \begin{equation} \lambda'_{t',e'} = \begin{cases} \lambda_{t',e'} + \alpha m_{t \rightarrow e}(c^{\lambda}_t), & e = e', t = t' \\ \lambda_{t',e'} , & e \neq e' \text{ or } t \neq t' \end{cases} \end{equation} \begin{enumerate}[(i)] \item $LB(c^{\lambda}) \leq LB(c^{\lambda'})$. \item $\OO_t(c^{\lambda}) \subseteq \OO_t(c^{\lambda'})$. \item $LB(c^{\lambda}) < LB(c^{\lambda'})$ iff $\OO_e(c^{\lambda}) \neq \OO_e(c^{\lambda'})$. \item $ LB(c^{\lambda}) = LB(c^{\lambda'}) \Rightarrow \OO_e(c^{\lambda'}) \subseteq \OO_e(c^{\lambda}) $ \end{enumerate} \end{lemma} \begin{proof} Analogous to the proof of Lemma~\ref{lemma:e->t}. \end{proof} \begin{lemma} If $LB(c^{\lambda}) = LB(\HH(c^{\lambda}))$ then $\OO_e(\HH(c^{\lambda})) \subseteq \OO_e(c^{\lambda})$ for all $e \in E$. \end{lemma} \begin{proof} If $\OO_e(c^{\lambda}) = \{0,1\}$, there is nothing to show. Assume $\{0\} = \OO_e(c^{\lambda})$. Then $\HH_{E \rightarrow T}(c^{\lambda})_e = 0$ and since $\ldots$ The case $\{1\} = \OO_e(c^{\lambda})$ can be proved analogously. \end{proof} \begin{lemma} $\HH$ is a continuous mapping. \end{lemma} \begin{proof} All the operations used in Algorithm~\ref{alg:parallel-message-passing} are continuous, i.e.\ addition and subtraction, division by a constant and taking minima over elements for the min-marginals.
Hence, $\HH$, being the composition of such continuous operations, is continuous as well. \end{proof} \begin{lemma} The lower bound $LB$ from~\eqref{eq:dual-multicut} is continuous in $\lambda$. \end{lemma} \begin{proof} Taking minima and adding are continuous operations. Hence $LB$ is continuous as well. \end{proof} \begin{lemma} \label{lemma:lb-monotonicity} Each iteration of Algorithm~\ref{alg:parallel-message-passing} is non-decreasing in the lower bound $LB$ from~\eqref{eq:dual-multicut}. \end{lemma} \begin{proof} It is enough to prove that all edge to triplet operations (lines~\ref{alg:edge-to-triangle-start}-\ref{alg:edge-to-triangle-end}) have this property and that all triplet to edge operations (lines~\ref{alg:triangle-to-edge-start}-\ref{alg:triangle-to-edge-end}) do so as well. To this end we define $LB(\lambda)_e = \min(0,c^{\lambda}_e)$ and $LB(\lambda)_t = \min_{x \in \MCD} c^{\lambda}_t(x)$ to be the edge and triplet lower bounds. The overall lower bound $LB$ is clearly their sum. \begin{description} \item[Edge to triplet:] If $c^{\lambda}_e \geq 0$ the lower bound $LB(\lambda)_e$ is $0$ before and after the updates, while the lower bound for the triplets $LB(\lambda)_t$ with $e \subset t$ is non-decreasing (the difference in the corresponding $\lambda$ entries is positive). If $c^{\lambda}_e < 0$, the lower bound $LB(\lambda)_e$ increases by the corresponding change in $\lambda_{t,e}$. The lower bound for each affected triplet subproblem decreases by at most the change in $\lambda_{t,e}$, hence the overall change in the lower bound is non-negative. \item[Triplet to edge:] It suffices to prove that for any $\omega \in [0,1]$ the update operation $\lambda_{t,e} \mathrel{+}= \omega \cdot m_{t \rightarrow e}$ is non-decreasing in $LB(\lambda)_t + LB(\lambda)_e$. If $m_{t \rightarrow e} \geq 0$, the lower bound $LB(\lambda)_t$ stays equal, while $LB(\lambda)_e$ is non-decreasing. If $m_{t \rightarrow e} < 0$, the lower bound $LB(\lambda)_t$ increases by $\omega \cdot \abs{m_{t\rightarrow e}}$, while the lower bound $LB(\lambda)_e$ decreases by at most the same amount. \end{description} \end{proof} \begin{lemma} $\epsilon$-tolerance is continuous in $\lambda$. \end{lemma} \begin{proof} We first prove that for any arc-consistent subset $\xi$ the minimal $\epsilon$ for which $\xi \subseteq \OO^{\epsilon}(\lambda)$ is continuous. To this end, note that the minimal $\epsilon$ such that $\xi_e \subseteq \OO^{\epsilon}_e(\lambda)$ for an edge $e \in E$ can be computed as \begin{equation} \min\{\epsilon \geq 0 : \xi_e \subseteq \OO^{\epsilon}_e(\lambda)\} = \begin{cases} \max(0,-c^{\lambda}_e),& \xi_e = \{0\} \\ \max(0,c^{\lambda}_e),& \xi_e = \{1\} \\ \abs{c^{\lambda}_e},& \xi_e = \{0,1\} \end{cases} \end{equation} All these expressions are continuous, hence the minimal $\epsilon$ for any edge is continuous. A similar observation holds for triplets. Since the $\xi$-specific $\epsilon$ is the maximum over all edges and triplets, it is continuous as well. Since the $\epsilon$-tolerance is the minimum over all minimal $\xi$-specific $\epsilon$ and there is a finite number of arc-consistent subsets $\xi$, the result follows. \end{proof} \begin{lemma} For any edge costs $c \in \mathbb{R}^{E}$ there exists $M > 0$ such that $\norm{\HH^i(c)} \leq M$ for any $i \in \mathbb{N}$. \end{lemma} \begin{proof} Assume, for contradiction, that $(\HH^i(c))_{i \in \mathbb{N}}$ is unbounded. If all $\HH^i(c)_t$, $t \in T$, are bounded, then all $\HH^i(c)_e$ are bounded as well due to~\eqref{eq:reparametrization}. Hence, there must exist $t \in T$ such that $\HH^i(c)_t$ is unbounded.
Since $LB(\HH^i(c))_t$ is bounded below by Lemma~\ref{lemma:lb-monotonicity} and trivially above by $0$, it must hold that either \begin{enumerate}[(i)] \item there exists one edge $e \subsetneq t$ such that $\HH^i(c)_t(e)$ converges towards $-\infty$ on a subsequence, or \item there exists at most one edge $e \subsetneq t$ such that $\HH^i(c)_t(e)$ converges towards $\infty$ and there exist $e'\neq e'' \subsetneq t$ with $e' \neq e$ and $e'' \neq e$ such that $\HH^i(c)_t(e')$ and $\HH^i(c)_t(e'')$ converge towards $-\infty$ with $\HH^i(c)_t(e) - \HH^i(c)_t(e') \leq M'$ and $\HH^i(c)_t(e) - \HH^i(c)_t(e'') \leq M'$, where $M' > 0$ is a constant, since otherwise $LB(\HH^i(c))_t$ would converge to $-\infty$. \end{enumerate} Hence there must be at least twice as many Lagrange multipliers $\lambda_{t,e}$ converging towards $-\infty$ as there are multipliers converging towards $\infty$, and they do so at least at the same rate. Hence, there must be $\tilde{e} \in E$ such that on a subsequence $\HH^i(c)_{\tilde{e}}$ converges towards $-\infty$, contradicting that $LB(\HH^i(c))_e$ is bounded below by Lemma~\ref{lemma:lb-monotonicity}. \end{proof} \begin{lemma} \label{lemma:locally-optimal-inclusion} $LB(\HH(c^\lambda)) = LB(c^\lambda)$ implies $\OO^0(\HH(c^{\lambda})) \subseteq \OO^0(c^{\lambda})$. This means that at a fixed point no new locally optimal solutions are generated. If the tolerance is $\epsilon(c^{\lambda}) > 0$ then $\OO^0(\HH(c^{\lambda})) \subsetneq \OO^0(c^{\lambda})$. \end{lemma} \begin{proof} Let $\lambda'$ be the reparametrization produced by $\HH$. Assume that $LB(\HH(c^\lambda)) = LB(c^\lambda)$. We only prove the case of $e \in E$ with $\OO_e(c^{\lambda'}) \not\subseteq \OO_e(c^{\lambda})$; the case for triangles is similar. Assume that there exists $y^*_e \in \{0,1\}$ such that $y^*_e \in \OO^0_e(c^{\lambda'})$ but $y^*_e \notin \OO^0_e(c^{\lambda})$. For each $t \in T$ with $e \subsetneq t$ there exists $y^*_t \in \MC_{\Delta}$ such that $y^*_t(e) = y^*_e$ and $LB(c^{\lambda'})_t = c^{\lambda'}_t(y^*_t)$. Let $\overline{\lambda}$ be Lagrange multipliers such that \begin{equation} \overline{\lambda}_{t',e'} = \begin{cases} \lambda_{t',e'},& e \neq e'\\ \lambda'_{t',e'},& e = e'\\ \end{cases}\,. \end{equation} Then it holds that \begin{multline} LB(c^{\lambda}) = LB(c^{\overline{\lambda}}) \\ < c^{\overline{\lambda}}_e(y^*_e) + \sum_{t \in T: e \subsetneq t} c^{\overline{\lambda}}_t(y^*_t) + \sum_{e' \neq e} LB(c^{\overline{\lambda}})_{e'} + \sum_{t' \in T: e \not\subsetneq t'} LB(c^{\overline{\lambda}})_{t'} \\ = c^{\lambda'}_e(y^*_e) + \sum_{t \in T: e \subsetneq t} c^{\lambda'}_t(y^*_t) + \sum_{e' \neq e} LB(c^{\lambda'})_{e'} + \sum_{t' \in T: e \not\subsetneq t'} LB(c^{\lambda'})_{t'} \\ = LB(c^{\lambda'})\,. \end{multline} However, this contradicts $LB(c^{\lambda}) = LB(c^{\lambda'})$. This proves the first part of the lemma. Now additionally assume that $\epsilon(c^{\lambda}) > 0$. This means that there exists an arc-consistent $\xi \subseteq \OO^{\epsilon(c^{\lambda})}(c^{\lambda})$. Since $\epsilon(c^{\lambda}) > 0$ there must exist either $e \in E$ such that $\xi_e \not\subseteq \OO_e(c^{\lambda})$ or $t \in T$ such that $\xi_t \not\subseteq \OO_t(c^{\lambda})$. Let us assume this is the case for some $e \in E$ (the case $t \in T$ is analogous).
\end{proof} \section{Experiments} \label{sec:experiments} We evaluate solvers on multicut problems for neuron segmentation for connectomics in the fruit-fly brain~\cite{pape2017solving} and unsupervised image segmentation on Cityscapes~\cite{cordts2016cityscapes}. We use a single NVIDIA Volta V-100 (16GB) GPU for our GPU solvers unless otherwise stated and an AMD EPYC 7702 CPU for our CPU solvers. Our solvers are implemented using the CUDA~\cite{cuda} and Thrust~\cite{Thrust} GPU programming frameworks. \paragraph{Datasets} We have chosen two datasets containing the largest multicut problem instances we are aware of. \begin{description} \item[Connectomics] for neuron segmentation in the fruit-fly brain~\cite{pape2017solving}. The raw data is taken from the CREMI-challenge~\cite{cremi} acquired by~\cite{zheng2018complete} and converted to multiple multicut instances by~\cite{pape2017solving}. The majority of these instances are different crops of one global problem. There are 3 small ($0.4$--$0.6$ million edges), 3 medium ($4$--$5$ million edges) and 5 large ($28$--$650$ million edges) multicut instances. For the largest problem we use an NVIDIA Quadro RTX 8000 (48GB) GPU. \item[Cityscapes] Unsupervised image segmentation on $59$ high-resolution images ($2048 \times 1024$) taken from the validation set~\cite{cordts2016cityscapes}. Conversion to multicut instances is done by computing the edge affinities produced by~\cite{abbas2021combinatorial} on a grid graph with $4$-connectivity and additional, coarsely sampled longer-range edges. Each instance contains approximately $2$ million nodes and $9$ million edges. \end{description} \paragraph{Algorithms} As baseline methods we have chosen, to our knowledge, the fastest primal heuristics from the literature. \begin{description} \item[GAEC~\cite{keuper2015efficient}:] The greedy additive edge contraction corresponds to Algorithm~\ref{alg:edge-contraction} with choosing in each iteration a single edge with the highest positive cost to contract. We use our own CPU implementation that is around $1.5$ times faster than the one provided by the authors. \item[GAEC-KLj~\cite{keuper2015efficient}:] The Kernighan\&Lin with joins (KLj) algorithm performs local move operations which can improve the objective. To avoid large runtimes the output of GAEC is used for initialization. \item[GEF~\cite{levinkov2017comparative}:] The greedy edge fixation algorithm is similar to GAEC but additionally visits negative-valued (repulsive) edges and adds non-link constraints between their endpoints. \item[BEC~\cite{kardoost2018solving}:] Balanced edge contraction, a variant of GAEC which chooses edges to contract based on their cost normalized by the sizes of the two endpoints. \item[ICP~\cite{lange2018partial}:] The iterated cycle packing algorithm searches for cycles and greedily solves an associated packing problem, thereby approximately solving the multicut dual~\eqref{eq:dual-multicut}. \item[P:] Our purely primal Algorithm~\ref{alg:edge-contraction} using the maximum matching and spanning forest based edge contraction strategy. \item[PD:] Our primal-dual Algorithm~\ref{alg:primal-dual-parallel-multicut} which additionally makes use of dual information. \item[PD+:] Variant of PD in which we consider larger conflicted cycles for reparametrization, which can lead to better primal solutions. \item[D:] Our dual cycle separation algorithm followed by message passing, Algorithm~\ref{alg:parallel-message-passing}, producing lower bounds.
\end{description} \begin{table*} \centering \begin{tabular}{l rr rr rr rr} \toprule & \multicolumn{6}{c}{Connectomics} & \multicolumn{2}{c}{Cityscapes} \\ \cmidrule(lr){2-7} \cmidrule(lr){8-9} & \multicolumn{2}{c}{Small ($\times 3$)} & \multicolumn{2}{c}{Med. ($\times 3$)} & \multicolumn{2}{c}{Large ($\times 5$)} & \multicolumn{2}{c}{($\times 59$)} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} Method & Cost & t [s] & Cost & t [s] & Cost & t [s] & Cost & t [s] \\ \midrule \multicolumn{9}{c}{Primal} \\ \midrule GAEC-KLj~\cite{keuper2015efficient} & $ -\textbf{179377} $ & $3.8$ & $ -\textbf{922577} $ & $125$ & - & - & $-1857774$ & $47012$ \\ GAEC~\cite{keuper2015efficient} & $ -179368 $ & $0.4$ & $ -922437 $ & $4.7$ & $ -\textbf{151155} $ & $280$ & $-1826731$ & $13$ \\ GEF~\cite{levinkov2017comparative} & $-179340$ & $0.7$ & $-922316$ & $9.0$ & $-151092$ & $699$ & $-1743402$ & $14$ \\ BEC~\cite{kardoost2018solving} & $-178709$ & $0.5$ & $-919869$ & $5.6$ & $-150709$ & $309$ & $-1613014$ & $36$ \\ P & $-177973$ & $\textbf{0.1}$ & $-917312$ & $\textbf{0.6}$ & $-150466$ & $\textbf{6}$ & $-1711387$ & $\textbf{0.4}$ \\ PD & $-179130$ & $0.2$ & $-921749$ & $1.0$ & $-150907$ & $13$ & $-1845717$ & $1$ \\ PD+ & $-179131$ & $0.3$ & $-921957$ & $1.4$ & $-150928$ & $20$ & $-\textbf{1862092}$ & $2.2$ \\ \midrule \multicolumn{9}{c}{Dual} \\ \midrule ICP~\cite{lange2018partial} & $-179763$ & $0.8$ & $-924655$ & $11.3$ & $-151740$ & $1235$ & $-1930234$ & $41.1$ \\ D & $-\textbf{179729}$ & $\textbf{0.2}$ & $-\textbf{924165}$ & $\textbf{0.8}$ & $-\textbf{151727}$ & $\textbf{13}$ & $-\textbf{1928786}$ & $\textbf{1.3}$ \\ \bottomrule \end{tabular} \caption{Comparison of results on the Connectomics and Cityscapes datasets. We report average primal and dual costs and runtimes over all instances within each category. In terms of primal solutions, our primal-dual solvers (PD, PD+) achieve objectives close to or better than those of the sequential solvers while being substantially faster, especially on larger instances. Moreover, our parallel message passing approach (D) gives better lower bounds than ICP with a reduction in runtime of two orders of magnitude.} \label{tab:fruit-fly-cityscapes-results} \end{table*} \begin{figure} \centering \input{Figures/scatterplots/cityscapes/cityscapes_primal} \caption{Comparison of primal solutions on the Cityscapes dataset. Our purely primal algorithm (P) without message passing is $30\times$ faster than GAEC~\cite{keuper2015efficient} and GEF~\cite{levinkov2017comparative}, although with worse objective values. Incorporating dual information enables our solvers (PD, PD+) to exceed the quality of the sequential solvers while remaining faster. Error bars mark the $0.25$ and $0.75$ quantiles. (GAEC-KLj not shown due to its large runtime.) } \label{fig:cityscapes_primal_scatter} \end{figure} \begin{figure} \centering \input{Figures/scatterplots/cityscapes/cityscapes_dual} \caption{Comparison of lower bounds on the Cityscapes dataset. Our parallel message passing scheme (D) is more than an order of magnitude faster than ICP~\cite{lange2018partial} and gives slightly better lower bounds. Error bars mark the $0.25$ and $0.75$ quantiles.
} \label{fig:cityscapes_dual_scatter} \end{figure} \begin{figure*} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.9\linewidth]{Figures/results/cs_50_w_tooltip/GAEC.png} \caption{GAEC~\cite{keuper2015efficient}, cost = $-2455070$} \label{fig7:a} \vspace{4ex} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.9\linewidth]{Figures/results/cs_50_w_tooltip/P.png} \caption{P, cost = $-2347254$} \label{fig7:b} \vspace{4ex} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.9\linewidth]{Figures/results/cs_50_w_tooltip/PD.png} \caption{PD, cost = $-2499152$} \label{fig7:c} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=0.9\linewidth]{Figures/results/cs_50_w_tooltip/PD+.png} \caption{PD+, cost = $-\textbf{2523547}$} \label{fig7:d} \end{subfigure} \caption{Visual comparison of results on an instance from the Cityscapes dataset, highlighting the transitions between segments. Yellow arrows indicate incorrect regions. Notice that our purely primal algorithm (P) struggles to localize the sidewalks and trees. PD+ is able to detect an occluded car on the left side of the road which all other methods miss. (Best viewed digitally.)} \label{fig:cityscapes_results_comparison} \end{figure*} \paragraph{Discussion} Results on both the neuron segmentation and Cityscapes datasets are given in Table~\ref{tab:fruit-fly-cityscapes-results}. On the neuron segmentation problems we attain primal objectives very close to those produced by GAEC~\cite{keuper2015efficient} while being faster by more than an order of magnitude on large instances. For the Cityscapes dataset we achieve even better primal solutions than the sequential algorithms by incorporating dual information, while also being substantially faster. Our best solver (PD+) is more than $10^4$ times faster than GAEC-KLj~\cite{keuper2015efficient} and produces better solutions. Our second best solver (PD) is $13$ times faster than the second best CPU solver (GAEC) and also produces better solutions. Distributions of runtimes and primal resp.\ dual objectives for all instances of Cityscapes are shown in Figures~\ref{fig:cityscapes_primal_scatter} and \ref{fig:cityscapes_dual_scatter}. An example result comparison is given in Figure~\ref{fig:cityscapes_results_comparison}. Our dual algorithm produces speedups of almost two orders of magnitude and better lower bounds compared to the serial ICP~\cite{lange2018partial} on both datasets. \section{Introduction} \label{sec:introduction} Decomposing a graph into meaningful clusters is a fundamental problem in combinatorial optimization. The multicut problem~\cite{chopra1993partition} (also known as correlation clustering~\cite{bansal2004correlation}) is a popular approach to decompose a graph into an arbitrary number of clusters based on affinities between nodes. The multicut problem and its extensions such as higher order multicut~\cite{kim2011higher,kappes2016higher}, lifted multicut~\cite{keuper2015efficient}, (asymmetric) multiway cut~\cite{chopra1991multiway,kroeger2014asymmetric}, lifted disjoint paths~\cite{hornakova2020lifted} and joint multicut and node labeling~\cite{levinkov2017joint} have found numerous applications in machine learning, computer vision, biomedical image analysis, data mining and beyond. Examples include unsupervised image segmentation~\cite{alush2013break,andres2011closedness,yarkony2012fast,andres2013segment
ing}, instance-separating semantic segmentation~\cite{kirillov2017instancecut,abbas2021combinatorial}, multiple object tracking~\cite{tang2017multiple,hornakova2020lifted}, cell tracking~\cite{jug2016moral}, articulated human body pose estimation~\cite{insafutdinov2017art}, motion segmentation~\cite{keuper2018motion}, image and mesh segmentation~\cite{keuper2015efficient}, connectomics~\cite{andres2012globally,beier2017multicut,pape2017solving} and many more. Multicut and its extensions are NP-hard to solve~\cite{bansal2004correlation,demaine2006correlation}. Since large problem instances with millions or even billions of variables typically occur, powerful approximate algorithms have been developed~\cite{keuper2015efficient,swoboda2017dual,beier2014cut,beier2016efficient,levinkov2019comparative}. However, even simple heuristics such as~\cite{keuper2015efficient} require very large running times for very large instances. In particular, some instances, such as those investigated in~\cite{pape2017solving}, could not be solved in acceptable time (hence ad-hoc decomposition techniques were used). In other scenarios very fast running times are essential, e.g.\ when multicut is used in end-to-end training~\cite{song2019end,abbas2021combinatorial}. Hence, the need for parallelization arises, preferably on GPUs. The parallelism offered by GPUs is typically difficult to exploit due to irregular data structures and the inherently sequential nature of most combinatorial optimization algorithms. This makes the design of combinatorial optimization algorithms for GPUs challenging. An additional benefit of running our algorithms on the GPU is that memory transfers between CPU and GPU are avoided when they are used in a deep learning pipeline. Our contribution is a new primal-dual method that can be massively parallelized and run on the GPU. This results in faster runtimes than previous multicut solvers while still computing solutions whose objective values are similar to or better than those of CPU-based solvers. Yet, our approach is rooted in solving a principled polyhedral relaxation and yields both a primal solution and a dual lower bound. In particular, primal solution finding and approximate dual optimization are interleaved such that both components of our algorithm can profit from each other. In more detail, our algorithmic contribution can be categorized as follows. \begin{description} \item[\textit{Primal:} Edge Contraction:] Finding a primal solution relies, similarly to~\cite{keuper2015efficient}, on contracting edges that are highly likely to end up in the same component of the final clustering. To this end we propose to use a linear algebra approach, i.e.\ express edge contractions as matrix-matrix multiplications between the adjacency matrix and contraction matrices. This allows us to accelerate edge contraction by exploiting highly parallel matrix-matrix multiplication GPU primitives. \item[\textit{Dual:} Lagrange Relaxation \& Message Passing:] To find good edge contraction candidates, we consider approximately solving a polyhedral relaxation by searching for conflicted cycles, adding them to a Lagrange relaxation and updating the resulting Lagrange multipliers iteratively by message passing. While searching for conflicted cycles is easily parallelizable, we propose a new message passing scheme that is massively parallelizable yet yields monotonic increases of the dual lower bound, speeding up the scheme of~\cite{swoboda2017dual} by orders of magnitude.
\item[\textit{Recursive} Primal-Dual:] We recursively perform the above operations of finding and solving a Lagrange relaxation and contracting edges until no contraction candidates are available anymore, yielding the final graph decomposition. Hence, our algorithm goes beyond classical polyhedral approaches~\cite{swoboda2017dual,kappes2011globally,nowozin2009solution} that only consider the original graph. \end{description} On the experimental side we obtain primal solutions that are of comparable or better quality than those obtained by established high-quality heuristics~\cite{keuper2015efficient,lange2018partial} in a fraction of the execution time, together with dual lower bounds that help in estimating the quality of the solutions. We perform experiments on segmentation and connectomics problems containing up to $\mathcal{O}(10^8)$ variables. \section{Method} \label{sec:method} A \emph{decomposition (or clustering)} of a graph $G = (V,E)$ is a partition $\Pi = \{V_1, \ldots, V_k\} $ of the node set such that $V_1 \cup \ldots \cup V_k = V$ and $V_i \cap V_j = \varnothing$ $\forall i\neq j$. The \emph{cut} $\delta(V_1,\ldots,V_k)$ induced by a decomposition is the subset of those edges that straddle distinct clusters. Such edges are said to be \emph{cut}. See Figure~\ref{fig:multicut} for an illustration of a cut into three components. \begin{figure} \centering \input{Figures/multicut-example} \caption{ Decomposition of a graph into three components (\textcolor{mycolor1}{green}). The corresponding cut consists of the dashed edges straddling distinct components (\textcolor{red}{red}). } \label{fig:multicut} \end{figure} The space of all multicuts is \begin{equation} \mathcal{M}_G = \left\{ \delta(V_1,\ldots,V_k) : \begin{array}{c} k \in \mathbb{N} \\ V_1 \dot\cup \ldots \dot\cup V_k = V \end{array} \right\}\,. \end{equation} The associated minimum cost multicut problem is defined by an additional edge cost vector $c \in \mathbb{R}^{E}$. For any edge $vw = e \in E$, a negative cost $c_e < 0$ favours the nodes $v$ and $w$ being in distinct components, while a positive cost $c_e > 0$ favours these nodes lying in the same component. The minimum cost multicut problem is \begin{equation} \label{eq:multicut} \min_{y \in \mathcal{M}_G} \langle c, y \rangle\,. \end{equation} Below we detail the key components of our algorithm: Primal updates consist of edge contractions that correspond to building up increasingly large clusters. Dual updates optimize a Lagrange relaxation via message passing. Primal and dual updates are interleaved to yield our primal-dual multicut algorithm. We additionally detail how each operation can be done in a highly parallel manner. \subsection{Primal: Parallel Edge Contraction} The idea of edge contraction algorithms is to iteratively choose edges with large positive costs. Such edges prefer their endpoints to be in the same component, hence they are contracted and end up in the same cluster. Edge contraction is recursively performed until no contraction candidates are found. The special case of greedy additive edge contraction (GAEC)~\cite{keuper2015efficient} chooses in each iteration an edge with maximum edge weight for contraction and stops if each edge in the contracted graph has negative weight. \begin{definition}[Edge Contraction] \label{definition:edge-contraction} Let an undirected graph $G=(V,E,c)$ and a set of edges $S \subset E$ to contract be given.
\begin{enumerate}[(i)] \item The corresponding surjective \emph{contraction mapping} $f : V \rightarrow V'$ mapping the node set $V$ onto the contracted node set $V'$ is, up to isomorphism, uniquely defined by $f(u) = f(v) \iff \exists\ uv\textnormal{-path in } (V,S)$. The contracted edge set is $E' = \{f(u)f(v): f(u) \neq f(v),\ uv \in E\}$. \item The contracted edge weights are $c'(u'v') = \sum_{uv \in E: \{f(u), f(v)\} = \{u', v'\}} c(uv)$. \end{enumerate} \end{definition} \begin{figure} \centering \input{Figures/contraction-example} \caption{ Contraction of a graph with $S=\{cd, de\}$. Notice that edges $bd$ and $be$ become parallel edges after contraction and their costs are summed up. Also notice the presence of a self-loop in the contracted graph, whose cost indicates intra-cluster similarity. } \label{fig:contraction} \end{figure} An illustration of edge contraction is given in Figure~\ref{fig:contraction}. In order to perform edge contraction fast we will use a linear algebraic representation that allows us to use highly parallel matrix-matrix multiplication. \begin{definition}[Adjacency Matrix] Given a weighted graph $G=(V,E,c)$ its adjacency matrix $A \in \mathbb{R}^{V \times V}$ is defined by $A_{uv} = \begin{cases} c_{uv}, & uv \in E \\ 0,& \text{otherwise} \end{cases}$. \end{definition} The edge contraction is done with the help of an edge contraction matrix. \begin{definition}[Edge Contraction Matrix] Given a weighted graph $G=(V,E,c)$ and an edge set $S \subset E$ to contract, let $f$ be the contraction mapping and $V'$ the contracted node set. The edge contraction matrix $K_S \in \{0,1\}^{V \times V'}$ is defined as $(K_{S})_{uu'} = \begin{cases} 1,& f(u) = u'\\ 0,& \text{otherwise} \end{cases}$ \end{definition} \begin{lemma} \label{lemma:adjacency-matrix-contraction} Given a weighted graph $G=(V,E,c)$, an edge set $S \subset E$ to contract and an associated edge contraction mapping $f$, \begin{enumerate}[(i)] \item \label{item:adjacency-matrix-contraction} the adjacency matrix of the contracted graph is equal to $K_S^{\top} A K_S - \operatorname{diag}(K_S^{\top} A K_S)$, where $\operatorname{diag}(\cdot)$ is the diagonal part of a matrix, \item \label{item:diagonal-entry-contraction} it holds for the diagonal entries that $(K_S^{\top} A K_S)_{u'u'} = \sum_{uv \in E: u' = f(u) = f(v)} c(uv)$. \end{enumerate} \end{lemma} Lemma~\ref{lemma:adjacency-matrix-contraction}~(\ref{item:adjacency-matrix-contraction}) provides a way to compute the contracted graph in parallel by matrix-matrix multiplication. Lemma~\ref{lemma:adjacency-matrix-contraction}~(\ref{item:diagonal-entry-contraction}) allows us to efficiently judge whether the newly formed clusters decrease the multicut objective. A primal update iteration is given in Algorithm~\ref{alg:edge-contraction}, which performs edge contraction as in Lemma~\ref{lemma:adjacency-matrix-contraction} (\ref{item:adjacency-matrix-contraction}).
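As a concrete illustration of Lemma~\ref{lemma:adjacency-matrix-contraction}, the following small sketch (Python with SciPy on a made-up toy graph; not the GPU implementation used by our solver) builds the contraction matrix $K_S$ and computes $K_S^{\top} A K_S - \operatorname{diag}(K_S^{\top} A K_S)$:
\begin{verbatim}
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix, diags

# Toy weighted graph on nodes 0..3, stored as a symmetric adjacency matrix A.
edges = [(0, 1, 2.0), (1, 2, 3.0), (2, 3, -1.5), (0, 3, 0.5)]
n = 4
rows = [u for u, v, c in edges] + [v for u, v, c in edges]
cols = [v for u, v, c in edges] + [u for u, v, c in edges]
vals = [c for u, v, c in edges] * 2
A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# Contraction mapping f for S = {(1, 2)}: nodes 1 and 2 merge into new node 1.
f = np.array([0, 1, 1, 2])                 # old node id -> new node id
n_new = int(f.max()) + 1

# Contraction matrix K_S with (K_S)[u, f(u)] = 1.
K = csr_matrix((np.ones(n), (np.arange(n), f)), shape=(n, n_new))

M = K.T @ A @ K                            # K_S^T A K_S
A_contracted = M - diags(M.diagonal())     # drop the diagonal part
print(A_contracted.toarray())              # contracted edge costs (parallel edges summed)
print(M.diagonal())                        # costs of edges that became internal to a
                                           # cluster (each counted twice: A is symmetric)
\end{verbatim}
The diagonal entries of $K_S^{\top} A K_S$ are exactly the quantities used to judge whether the newly formed cluster decreases the objective.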
\begin{algorithm} \DontPrintSemicolon \caption{\texttt{Parallel-Edge-Contraction}} \label{alg:edge-contraction} \KwData{Graph $G=(V,E,c)$} \KwResult{Contracted Graph $G' = (V',E',c')$, contraction mapping $f:V \rightarrow V'$} Compute contraction set $S \subseteq E$\; Compute adjacency matrix $A$ from $G$\; Construct contraction mapping $f: V \rightarrow V'$\; Construct contraction matrix $K_S$\; $A' = K_S^{\top} A K_S - \operatorname{diag}(K_S^{\top} A K_S)$\; Compute contracted graph $G'=(V',E',c')$ from $A'$\; \end{algorithm} \paragraph{Finding contraction edge set $S$:} A vital step for ensuring a good primal update is selecting the set of edges $S$ to contract in Algorithm~\ref{alg:edge-contraction}. \begin{description} \item[Maximum positive weight edge (GAEC):] Choose in each iteration one edge with the largest positive cost for contraction. Algorithm~\ref{alg:edge-contraction} specializes to GAEC~\cite{keuper2015efficient} in this case. \item[Maximum matching:] Perform a fast maximum matching on the positive edges using a GPU version of the Luby-Jones handshaking algorithm~\cite{cohen2012graph} and select the matching edges for contraction. \item[Maximum spanning forest without conflicts:] Compute a maximum spanning forest on the positive edges with a fast GPU version of Bor\r{u}vka's algorithm~\cite{wen2011gpu_gems} to find initial contraction candidates. Iterate over all negative edges $(i, j)$ in $G$, find the unique path between $i$ and $j$ in the forest (if it exists) and remove the positive edge with the smallest cost on this path. This strategy ensures that the resulting join operation decreases the multicut objective. We make use of GPU connected components~\cite{ecl_cc2018} to check for the presence of these paths and to compute the final contraction mapping. \end{description} Since for efficiency our algorithm depends upon many simultaneous edge contractions, we do not use the GAEC strategy. We first find contraction edges via maximum matching and, if not enough can be found (i.e.\ fewer than $0.1 \abs{V}$), we switch to the spanning forest based approach. \subsection{Dual: Conflicted Cycles \& Message passing} Solving a dual of~\eqref{eq:multicut} helps in obtaining a lower bound on the objective value and also yields a reparametrization of the edge costs $c$ which can enable better primal updates. Our dual algorithm works on the cycle relaxation of the multicut problem~\cite{chopra1993partition}. For its solution we present massively parallel inequality separation routines that search for the most useful cycles and an efficient dual block coordinate ascent procedure for optimizing the resulting relaxation. \subsubsection{Cycle Inequalities \& Lagrange Relaxation} Since the multicut problem is NP-hard~\cite{bansal2004correlation,demaine2006correlation}, we cannot hope to obtain a tractable complete polyhedral description of $\mathsf{conv}(\mathcal{M}_G)$. A good relaxation for most practical problems is given in terms of cycle inequalities. Given a cycle $C = \{e_1,\ldots,e_l\}$ in $G$, a feasible multicut must either not cut $C$ at all or cut it at least twice, which is expressed by \begin{equation} \label{eq:cycle-inequality} \forall C \in \text{cycles}(G):\ \forall e \in C: y_e \leq \sum_{e' \in C\backslash \{e\}} y_{e'}\,. \end{equation} Cycle inequalities together with the binary constraints $y_e \in \{0,1\}$ actually define $\mathcal{M}_G$~\cite{chopra1993partition}.
In other words, when relaxing to $y_e \in [0,1]$ we obtain a linear programming relaxation of $\mathsf{conv}(\mathcal{M}_G)$ in which all integral points are valid multicuts. While cycle inequalities~\eqref{eq:cycle-inequality} give us a polyhedral relaxation of the multicut problem~\eqref{eq:multicut}, our algorithm operates on a Lagrange decomposition that was proposed in~\cite{swoboda2017message}. It consists of two types of subproblems joined together via Lagrange variables: (i)~edge subproblems for each edge $e \in E$ and (ii)~triplet subproblems for a subset of triplets $T \subset \begin{pmatrix} E \\ 3 \end{pmatrix}$. Cycles of length greater than three are triangulated to obtain triplets; this defines the same polyhedral relaxation as the one with all possible cycle inequalities~\eqref{eq:cycle-inequality}, without loss of generality~\cite{chopra1993partition}. First, we define the set of feasible multicuts on triplets: \begin{equation} \MC_\Delta = \left\{ (0,0,0), (1,1,0), (1,0,1), (0,1,1), (1,1,1)\right\} \end{equation} Given a set of triplets $T = \{t_1,\ldots, t_n\}$ our Lagrange decomposition is \begin{equation} \label{eq:dual-multicut} \max_{\lambda} \underbrace{ \sum_{e \in E} \min_{y \in \{0,1\}}c^{\lambda}_e(y) + \sum_{t \in T} \min_{y \in \MC_\Delta} c_t^{\lambda}(y) }_{ =: \text{LB}(\lambda)} \end{equation} where the \emph{reparametrized} edge and triplet costs are \begin{subequations} \label{eq:reparametrization} \begin{align} c_e^{\lambda}(y) &= c_e \cdot y_e + \sum_{t \in T: e \in t} \lambda_{t,e} \cdot y_e \\ c_t^{\lambda}(y) &= -\sum_{e \in t} \lambda_{t,e} \cdot y_e \end{align} \end{subequations} $\text{LB}(\lambda)$ in~\eqref{eq:dual-multicut} is a lower bound on the cost of the optimal multicut for any $\lambda$. For the optimal $\lambda$ in~\eqref{eq:dual-multicut} the lower bound equals the optimal value of the polyhedral relaxation~\cite{swoboda2017dual}. \subsubsection{Cycle Inequality Separation} For the dual problem~\eqref{eq:dual-multicut} one would in principle need to enumerate all possible cycle inequalities~\eqref{eq:cycle-inequality}. However, as mentioned in~\cite{lange2018partial}, we can restrict ourselves to conflicted cycles of $G$ for efficiency without loosening the relaxation. A cycle is called a conflicted cycle if it contains exactly one repulsive edge. \begin{definition}[Conflicted cycles] Denote by $E^+ = \{e \in E : c_e > 0\}$ the attractive and by $E^- = \{e \in E : c_e < 0\}$ the repulsive edges of $E$. Let $E_C$ be the set of edges in a cycle $C$. Then $\text{conf-cycles}(G) = \{C \in \text{cycles}(G): \lvert E_C \cap E^-\rvert = 1\}$. \end{definition} The search for conflicted cycles can be performed in parallel for each repulsive edge $ij \in E^-$ by finding a shortest path w.r.t.\ hop distance between $i$ and $j$ in the graph $(V,E^+)$, making good use of the parallelization capabilities of GPUs. \subsubsection{Dual Block Coordinate Ascent (DBCA)} DBCA (a.k.a.\ message passing) was studied in~\cite{swoboda2017message} for multicut. However, the resulting message passing schemes are not easily parallelizable. The underlying reason for the inherently sequential nature of these schemes is that the effectiveness of each proposed message passing operation depends on the previous ones having been executed. We propose a message passing scheme for multicut that is invariant to the message passing schedule, hence allowing parallel computation.
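For concreteness, the lower bound $\text{LB}(\lambda)$ in~\eqref{eq:dual-multicut} can be evaluated directly from the reparametrized costs~\eqref{eq:reparametrization}; the following minimal sketch (plain Python with a made-up, dense data layout; not the GPU implementation) illustrates this evaluation on a single conflicted triangle:
\begin{verbatim}
# LB(lambda) = sum_e min(0, c_e^lambda) + sum_t min_{y in MC_Delta} c_t^lambda(y).
MC_DELTA = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def lower_bound(edge_costs, triplets, lam):
    # edge_costs: dict edge -> cost c_e
    # triplets:   list of triples of edges
    # lam:        dict (triplet_index, edge) -> Lagrange multiplier lambda_{t,e}
    lb = 0.0
    # Edge subproblems: c_e^lambda = c_e + sum_{t: e in t} lambda_{t,e}.
    for e, c_e in edge_costs.items():
        c_e_lam = c_e + sum(lam.get((ti, e), 0.0)
                            for ti, t in enumerate(triplets) if e in t)
        lb += min(0.0, c_e_lam)
    # Triplet subproblems: c_t^lambda(y) = -sum_{e in t} lambda_{t,e} * y_e.
    for ti, t in enumerate(triplets):
        lb += min(-sum(lam.get((ti, e), 0.0) * y_e for e, y_e in zip(t, y))
                  for y in MC_DELTA)
    return lb

# Toy conflicted triangle: two attractive edges, one repulsive edge, lambda = 0.
edges = {("a", "b"): 2.0, ("b", "c"): 1.5, ("a", "c"): -1.0}
triplets = [(("a", "b"), ("b", "c"), ("a", "c"))]
print(lower_bound(edges, triplets, lam={}))  # prints -1.0 (only the repulsive
                                             # edge is locally cut)
\end{verbatim}
Message passing moves cost mass between the edge and triplet terms via $\lambda$ so that this bound increases monotonically.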
As in~\cite{swoboda2017message}, our scheme iteratively improves the lower bound~\eqref{eq:dual-multicut} by message passing between edges and triplets. For each message passing operation we need to compute min-marginals, i.e.\ the difference of the optimal costs of a subproblem obtained by fixing a specified variable to $1$ and to $0$. For edge subproblems, the min-marginal is just the reparametrized edge cost. For triangle subproblems it is given below. \begin{definition}[Marginalization for triangle subproblems] Let $t \in T$ be a triangle and let $e$ be an edge of $t$. Then \begin{equation} m_{t \rightarrow e}(c_t^{\lambda}) = \min_{\substack{y_e=1 \\ y \in \MC_\Delta}} c_t^{\lambda}(y) - \min_{\substack{y_e=0 \\ y \in \MC_\Delta}} c_t^{\lambda}(y) \end{equation} is called the \emph{min-marginal} for triangle $t$ and edge $e$. \end{definition} The message passing algorithm iteratively sets min-marginal differences to zero, first for edge subproblems and then for triplet subproblems, as described in Algorithm~\ref{alg:parallel-message-passing}. By sending messages back and forth the subproblems communicate their local optima and ultimately the min-marginals converge towards agreement. In~\cite{swoboda2017dual} it was shown that each such operation is non-decreasing in the dual lower bound, yielding an overall monotonic convergence of the dual lower bound. The edge to triangle message updates (lines~\ref{alg:edge-to-triangle-start}-\ref{alg:edge-to-triangle-end}) distribute messages uniformly among all triangles covering the respective edge. The message updates from triplets to edges (lines~\ref{alg:triangle-to-edge-start}-\ref{alg:triangle-to-edge-end}) are a generalization of~\cite{tourani2018mplp++}. They prevent lopsided message updates by repeatedly dampening min-marginal updates and only sending the full min-marginal to an edge once every other edge in the triangle has been updated. \begin{algorithm} \DontPrintSemicolon \caption{\texttt{Parallel-Message-Passing}} \label{alg:parallel-message-passing} \KwData{Graph $G=(V,E,c)$, triplets $T \subset \begin{pmatrix}E \\ 3 \end{pmatrix}$, Lagrange multipliers $\lambda$.} \KwResult{Updated Lagrange multipliers $\lambda$} \tcp{Messages from edges to triplets} \For{$e \in E$ in parallel} { $\alpha = c^{\lambda}_e$ \; \label{alg:edge-to-triangle-start} \For{$t \in T: e \in t$} { $\lambda_{t, e} -= \frac{\alpha}{\abs{\{t' \in T: e \in t'\}}}$ \; } \label{alg:edge-to-triangle-end} } \tcp{Messages from triplets to edges} \For{$t = \{ij, jk, ki\} \in T$ in parallel} { $\lambda_{t, ij} += \frac{1}{3} m_{t \rightarrow ij}(c^{\lambda}_t) $\; \label{alg:triangle-to-edge-start} $\lambda_{t, ik} += \frac{1}{2} m_{t \rightarrow ik}(c^{\lambda}_t) $\; $\lambda_{t, jk} += m_{t \rightarrow jk}(c^{\lambda}_t) $ \; $\lambda_{t, ij} += \frac{1}{2} m_{t \rightarrow ij}(c^{\lambda}_t) $\; $\lambda_{t, ik} += m_{t \rightarrow ik}(c^{\lambda}_t) $ \; $\lambda_{t, ij} += m_{t \rightarrow ij}(c^{\lambda}_t) $ \; \label{alg:triangle-to-edge-end} } \end{algorithm} \input{convergence} \subsection{Primal-Dual Updates} \begin{figure} \centering \input{Figures/primal-dual-example} \caption{Example iteration of our primal-dual multicut solver on a graph with \textcolor{orange}{repulsive} and \textcolor{mycolor1}{attractive} edges. Edge widths indicate absolute costs. First we detect conflicted cycles and triangulate them to get triplets (indicated by $\circlearrowright$). Next, the dual update reparametrizes the edge costs, which resolves the conflicted cycles.
Lastly, a primal update is performed by contracting attractive edges, yielding two clusters.} \label{fig:primal-dual} \end{figure} While the two building blocks of our multicut solver, i.e.\ edge contraction and cycle separation with message passing, can be used in isolation to compute a primal solution and dual lower bounds, we propose an interleaved primal-dual solver in Algorithm~\ref{alg:primal-dual-parallel-multicut}. \SetInd{0.25em}{0.75em} \begin{algorithm} \caption{\texttt{Primal-Dual Multicut}} \label{alg:primal-dual-parallel-multicut} \DontPrintSemicolon \KwData{Graph $G=(V,E,c)$} \KwResult{Contraction mapping $f : V \rightarrow V'$} \tcp{Initial contraction mapping} $f : V \rightarrow V$, $f(v) = v$ \ \ $\forall v \in V$\; \While{$G$ has positive edges without conflicts}{ $T = \texttt{Cycle-Separation}(V,E)$\; \For{iter= $1,\ldots,k$}{ $\lambda = \texttt{Message-Passing}(V, E, c, T)$\; $c = c^{\lambda}$\; } $(V,E,c), f' =\texttt{Edge-Contraction}(V,E,c^{\lambda})$\; $f(v) = f'(f(v))$ $\forall v \in V$\; } \end{algorithm} In each iteration we separate cycles, perform message passing and then, based on the reparametrized edge costs, perform parallel edge contraction, until no edge contraction candidate can be found anymore. This interleaved primal-dual update scheme has the following benefits: \begin{description} \item[Better edge contraction costs:] The reparametrization produces costs $c^{\lambda}$ that are more indicative of whether an edge is contracted or not in the final solution. In case the relaxation~\eqref{eq:dual-multicut} is tight, the sign of $c^{\lambda}_e$ perfectly predicts whether an edge separates two clusters or lies inside one. \item[Better cycle separation:] For fast execution times we do not separate cycles longer than a given length ($5$ in our case). Searching for cycles to separate in the contracted graph corresponds to longer cycles in the original graph, hence alleviating the need to perform a more exhaustive initial search. \end{description} Note that a valid lower bound can be obtained from Algorithm~\ref{alg:primal-dual-parallel-multicut} by recording~\eqref{eq:dual-multicut} after cycle separation and message passing in the first iteration.
\section{Related Work} \label{sec:related-work} \paragraph{Preprocessing and Inprocessing:} For fixing variables to their optimal values and shrinking the problem before or during optimization, persistency or partial optimality methods have been proposed in~\cite{alush2012ensemble,lange2018partial,lange2019combinatorial}. These methods apply a family of criteria that, when passed, prove that any solution can be improved if its values do not coincide with the persistently fixed variables. \paragraph{Primal heuristics:} For obtaining primal solutions without optimality guarantees or estimates of the distance to the optimum, a large number of methods have been proposed with different execution time/solution quality trade-offs. The first heuristic for multicut, the classical Kernighan\&Lin move-making algorithm, was originally proposed in~\cite{kernighan1970efficient} and slightly generalized in~\cite{keuper2015efficient}. The algorithm consists of trying various moves, such as joining two components or moving a node from one component to another, and performing sequences of moves that decrease the objective. The faster but simpler greedy additive edge contraction (GAEC) heuristic, a move-making algorithm that can only join components, was proposed in~\cite{keuper2015efficient}. It is used in~\cite{keuper2015efficient} to initialize the more complex Kernighan\&Lin algorithm. Variants involving different join selection strategies were proposed in~\cite{kardoost2018solving}. The greedy edge fixation algorithm~\cite{keuper2015efficient} generalizes GAEC in that it can additionally mark edges as cut, constraining their endpoints to be in distinct components. The more involved Cut, Glue \& Cut (CGC) move-making heuristic~\cite{beier2014cut} works by alternating bipartitioning of the graph and exchanging nodes between pairs of clusters. The latter operation is performed by computing a max-cut on a planar subgraph via reduction to perfect matching. CGC was extended to a more general class of possible ``fusion moves'' in~\cite{beier2016efficient}. A parallel algorithm for the simpler problem of unweighted correlation clustering was given in~\cite{pan2015parallel}. A comparative survey of some of the above primal heuristics is given in~\cite{levinkov2017comparative}. \paragraph{LP-based algorithms:} For obtaining dual lower bounds that estimate the distance to the optimum or even certify optimality of a solution, a number of LP-relaxation based algorithms have been proposed. These algorithms can be used inside branch and bound, and their computational results can be used to guide primal heuristics to provide increasingly better solutions. Quite surprisingly, it has been shown by~\cite{kappes2011globally,kim2011higher} that multicut problems of moderately large size can be solved with commercial integer linear programming (ILP) solvers like Gurobi~\cite{gurobi} in a cutting plane framework in reasonable time to global optimality. Column generation based on solving perfect matching subproblems has been proposed in~\cite{yarkony2012fast,lukasik2020benders}. Still, the above approaches break down when truly large-scale problems need to be solved, since the underlying LP-relaxations are still solved by traditional LP-solvers that do not scale linearly with problem size and are not explicitly adapted to the multicut problem.
Additionally, violated inequality separation (cutting planes) requires solving weighted shortest path problems, which is not possible in linear time. The message passing algorithm~\cite{swoboda2017message} approximately solves a dual LP-relaxation faster than traditional LP-solvers and has faster separation routines than those of primal LP-solvers as well, thereby scaling to larger problems. An even faster, but less powerful, approximate cycle packing algorithm was proposed in~\cite{lange2018partial}. \paragraph{Other Efficient Clustering Methods:} The mutex watershed~\cite{wolf2020mutex} and its generalizations~\cite{bailoni2019generalized} are closely related to the greedy edge fixation heuristic for multicut~\cite{levinkov2017comparative}. The corresponding algorithms can be executed faster than their multicut counterparts on the CPU, but are sequential. Fast GPU schemes~\cite{auer2012graph} were proposed for agglomerative clustering. Lastly, spectral clustering can be implemented on the GPU with runtime gains~\cite{jin2016,naumov2016parallel}. All these approaches, however, are not based on an energy minimization problem and hence do not come with the theoretical benefits that an optimization formulation offers.
\section{Introduction} Gravitational lensing and microlensing have been treated and employed as powerful tools to study astrophysical objects and phenomena (Schneider, Ehlers and Falco 1993; Mollerach, Esteban 2002). Among the many examples and applications one could mention the search for MACHO candidates in the Galactic halo using gravitational microlensing, with recent data pointing to their absence \cite{mil05}. Other applications of gravitational microlensing include (a) the study of stellar atmospheres (Cassan et al. 2004; Abe et al. 2003; Bryce et al. 2002), (b) planet searching (Bennet et al. 1999), (c) exotic matter searches such as magnetic masses \cite{rah03,nou97,lyn98} and wormholes (Safonova, Torres and Romero 2002), and (d) possible black hole detection based on the gravitational microlensing method (Agol et al. 2002). Furthermore, future observations are expected to rely on astrometric microlensing, which is hoped to improve on previous studies. Another application, which is the concern of this paper, is the detection of angular momentum by photometric and astrometric microlensing. This extra parameter could in turn enhance the detection of black hole candidates. \\ The outline of the paper is as follows: In Section \ref{photometry} we introduce photometric and astrometric microlensing. Section \ref{kerr} contains gravitational microlensing in the Kerr metric and the sensitivity of angular momentum detection to the lens parameters. The conclusion and summary are given in Section \ref{sum}. \section{Photometric and Astrometric Microlensing}\label{photometry} Simple microlensing events occur when the approximation of a point-like deflector and a point-like source in relative uniform motion is valid on the Galactic scale \cite{pac86}. At a given time $t$, the light magnification $A(t)$ of a point-like source located at lens-source distance $D_{ls}$ and observer-lens distance $D_{ol}$, induced by a point-like deflector of mass $M$, is given by: \begin{equation} A(t)={u^2(t)+2\over u(t)\sqrt{4+u^2(t)}}, \end{equation} where $u(t)$ is the impact parameter (the distance between lens and source in the lens plane), expressed in units of the ``Einstein radius'' $R_{E}$, which is given by: \begin{equation} R_{E} = \sqrt{{4GDM \over c^2 }}\;\;\;\; , \;\;\;\; D = {D_{ol}D_{ls}\over D_{os}}, \end{equation} where $G$ is the Newtonian gravitational constant and $c$ is the velocity of light. Assuming a source moving at a constant relative transverse speed $v_{T}$, reaching its minimum distance $u_{0}$ to the lens in the lens plane at time $t_{0}$, $u(t)$ is given by: \begin{equation} u(t) = \sqrt{u_0^2 + ({t-t_0 \over t_E})^2}\;\;\;\; , \;\;\;\; t_E = {R_E\over v_T}, \end{equation} where $t_E$, the ``Einstein crossing time'', is the only measurable parameter providing useful information about the lens in the simple microlensing approximation. Within this approximation, the light-curve of a microlensed star is fully determined by the parameters $F_b,u_0,t_0$ and $t_E$. In the next two subsections we introduce astrometric microlensing and the parallax effect, and their signatures in gravitational microlensing. \subsection{Astrometric microlensing in Schwarzschild space-time} For a point-like source at distance $u={b\over R_E}$ (in the lens plane) from a point-like gravitational lens, it is well known that the images are located at: \begin{equation} {u_{\pm}^{I}={u\pm\sqrt{u^2+4}\over2}}. \end{equation} The corresponding amplifications are given by: \begin{equation} A_{\pm}={1\over2}(1\pm{2+u^2\over u\sqrt{u^2+4}}).
\end{equation} The location of the center of images with respect to the lens, in the lens plane, is defined by: \begin{equation} {\bf R}_{image}={{{\bf u}_{+}^{I}|A_{+}|+{\bf u}_{-}^{I}|A_{-}|}\over {|A_{+}|+|A_{-}|}}. \end{equation} We are interested in the deflection of the combined images, i.e. the location of the centroid with respect to the source, so we transform ${\bf R}_{image}$ to ${\bf R^{'}}_{image}$: \begin{equation} {\bf R^{'}}_{image}={\bf R}_{image}-{\bf u}\;\;\; \Rightarrow \;\;\; {\bf R^{'}}_{image}={u\over u^2+2}\frac{{\bf u}}{u}. \label{c_image} \end{equation} As $u$ changes with time, ${\bf R^{'}}_{image}$ traces out a pattern. In the case of the Schwarzschild space-time the pattern is an ellipse. \\ To study the asymptotic behavior of standard astrometric microlensing, we examine equation (\ref{c_image}). If $u{\rightarrow}\infty$, the centroid shift falls off like ${1\over u}$; this can be compared with the photometric amplification excess $A-1$, which falls off like ${1\over u^4}$. These asymptotic results illustrate one of the important differences between astrometric and photometric microlensing, namely that the {\it centroid shift falls off much more slowly than the amplification}. In consequence, the cross section for astrometric events is much larger than that for photometric events \cite{mir96,dom00,hon01}. \subsection{Parallax effect in microlensing} If the variation of the Earth's velocity around the Sun is not negligible with respect to the projected transverse speed of the deflector, then the apparent trajectory of the deflector with respect to the line of sight is a cycloid instead of a straight line. The resulting amplification versus time is therefore affected by the so-called {\it parallax effect}. This effect is more easily observable for long duration events (several months), for which the change in the Earth's velocity is important \cite{alc95}. If ${\bf u}_D(t)$ is the position of the deflector in the deflector's transverse plane and ${\bf u}_E(t)$ is the intercept of the Earth-source line of sight with this plane, then: \begin{equation} {u(t)=\left|{\bf u}_{D}(t)-{\bf u}_{E}(t)\right|}, \end{equation} where \begin{equation} {\bf u}_D(t)=({t-t_0\over t_E}\cos\theta-u_0\sin\theta){\bf \hat{i}}+ ({t-t_0\over t_E}\sin\theta-u_0\cos\theta){\bf \hat{j}}, \end{equation} and $\theta$ is the angle between the projected lens trajectory and the projected major axis of the Earth's orbit in the deflector plane. Here $u_0$ is the closest approach of the lens to the Sun in the lens plane. Neglecting the Earth's orbital eccentricity, ${\bf u}_E(t)$ is given by: \begin{equation} {{\bf u}_E(t)={\delta u}\sin(\xi(t)){\bf \hat{i}}-{\delta u}\cos(\xi(t))\cos(\beta){\bf \hat{j}}}, \end{equation} where $\delta u={{a_\oplus (1-x)}/R_{E}}$ is the projection of the Earth's orbital radius onto the deflector plane in units of the Einstein radius and $x=\frac{D_{ol}}{D_{os}}$, $\beta$ is the angle between the ecliptic and deflector planes and $\xi(t)$ is the phase of the Earth relative to its position at ${\bf u}_E ={\delta u}\cos(\beta){\bf \hat{j}}$. The distortion of the light curve is important if the Earth's orbital velocity around the Sun is not negligible with respect to the projected transverse speed of the deflector: \begin{equation} {\tilde{v}}={R_E\over t_E(1-x)}={a_{\oplus}\over \delta u t_E}. \end{equation} Rahvar et al. (2003) proposed an observation
strategy, using alert and follow-up telescopes, that can put constraints on the mass and distance of the lens in the direction of the Magellanic Clouds. Adding a third telescope such as GAIA or SIM, one can break the degeneracy between the lens parameters through astrometric measurements \cite{rah05}. Fig. \ref{f1} shows the photometry and the astrometry of the center of images for the simple case and for the case including the parallax effect, for a typical lens located at $5$~kpc from the observer with the source star at $10$~kpc. \section{Microlensing in Kerr space} \label{kerr} Gravitational lensing in the field of a rotating star has been studied previously, for example in \cite{bra86,iba83,gli99}. Here we employ the formalism introduced by Ibanez (1983), in which the first-order approximation in $G$ of the Kerr metric was used. In the slow-motion or the fast-motion approximation, the bending angle is given by: \begin{equation} \label{dev_eq} \Delta {\bf n} = -\frac{2}{c}\int^{+\infty}_{-\infty}\nabla \phi dt +\frac{4G}{c^3 b^2}[\frac{2}{b^2} {\bf b}.({\bf n_i} \times {\bf S}){\bf b} -{\bf (n_i\times S)}], \end{equation} where ${\bf b}$ is the impact parameter, ${\bf S}$ the spin of the lens and ${\bf n_i}$ a unit vector representing the initial direction of the incoming photons toward the lens. We note that for ${\bf S}$ normal to the lens plane, the second term on the right-hand side of equation (\ref{dev_eq}) vanishes and we cannot observe the signature of the angular momentum on the position of the images. The relation between the positions of the image and the source in the lens plane, perpendicular to the line of sight, is given by: \begin{eqnarray} \label{proj1} x_0 &=& x + {{R_E}^2\over x_0^2+y_0^2}x_0 + {2R^2\over (x_0^2+y_0^2)^2}x^2_0 - {R^2\over x_0^2+y_0^2}. \\ \label{proj2} y_0 &=& y + {{R_E}^2\over x_0^2+y_0^2}y_0 + {2R^2\over (x_0^2+y_0^2)^2}x_0y_0, \end{eqnarray} where $R^2 = {4GS D\over c^3}$ and $S$ is the angular momentum projected onto the lens plane. $(x,y)$ and $(x_0, y_0)$ are the components of the source and image positions, respectively (the $y$-axis is chosen along the projection of the spin on the lens plane). In the lens plane, the ratio of the angular momentum term to the mass term in the deflection of the light ray is $R^2/({R_E}^2 x_0)=S/(M c x_0)=(v/c)({R_g}/{x_0})$, where $v$ is the rotation speed of the lens, $R_g$ is the gyration radius and $x_0$ is the position of the images. Since $v\ll c$ and $R_g\ll x_0$, the ratio of the spin term to the mass term should be smaller than one, which means that $S\ll M x_0 c$. For a typical lens of solar mass in the Milky Way, the images are produced at an astronomical distance from the lens; in other words, $S\ll 10^{50}~{\rm kg~m^2~s^{-1}}$. For lenses with larger masses, located at the Galactic scale, the upper limit on the spin will be higher. \\ For a given source, equations (\ref{proj1}) and (\ref{proj2}) in principle have three solutions, which means that rotating stars produce three images through lensing, where the third image is very close to the lens. Glicenstein (1999) has shown that the third image, having a very small impact parameter, is always eclipsed by the lens itself.
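Although not part of the original derivation, equations (\ref{proj1}) and (\ref{proj2}) lend themselves to a simple numerical treatment: the Schwarzschild images can be used as an initial guess and the equations iterated to convergence. The following minimal Python sketch illustrates this; the function names, the fixed number of iterations and the parameter values are illustrative assumptions only (all lengths are expressed in units of the Einstein radius, with a small spin term $R^2\ll R_E^2$ as required by $S\ll M x_0 c$).
\begin{verbatim}
import numpy as np

def schwarzschild_images(x, y, RE):
    """Seed images of a point source at (x, y) for a non-rotating lens (b > 0)."""
    b = np.hypot(x, y)                       # source offset in the lens plane
    r_plus = 0.5 * (b + np.sqrt(b**2 + 4 * RE**2))
    r_minus = 0.5 * (b - np.sqrt(b**2 + 4 * RE**2))   # negative: opposite side
    ux, uy = x / b, y / b                    # unit vector towards the source
    return [(r_plus * ux, r_plus * uy), (r_minus * ux, r_minus * uy)]

def kerr_images(x, y, RE, R, n_iter=200):
    """Fixed-point iteration of Eqs. (proj1)-(proj2): images (x0, y0) of (x, y)."""
    images = []
    for x0, y0 in schwarzschild_images(x, y, RE):
        for _ in range(n_iter):
            r2 = x0**2 + y0**2
            x0_new = x + RE**2 * x0 / r2 + 2 * R**2 * x0**2 / r2**2 - R**2 / r2
            y0_new = y + RE**2 * y0 / r2 + 2 * R**2 * x0 * y0 / r2**2
            x0, y0 = x0_new, y0_new
        images.append((x0, y0))
    return images

# Illustrative numbers only: RE = 1 and a small spin term R.
print(kerr_images(x=0.3, y=0.1, RE=1.0, R=0.1))
\end{verbatim}
The centroid shift of Section \ref{photometry} can then be evaluated from the two bright images exactly as in the Schwarzschild case, with the spin entering only through the small corrections above.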
Using equations (\ref{proj1}) and (\ref{proj2}), the amplification of the light of an image produced by a rotating lens is given by: \begin{equation} A = (1- {{R_E}^4\over b^4} - {4R^4\over b^6} - {4R^2{R_E}^2x_0\over b^6})^{-1} \end{equation} \subsection{Astrometric microlensing in Kerr space} As equations (\ref{proj1}) and (\ref{proj2}) are too complex to be solved analytically, we use the Schwarzschild solution as an initial guess in (\ref{proj1}) and (\ref{proj2}) and, through iteration, obtain the location of the images in the linearized Kerr metric. Fig. \ref{f2} shows the astrometry of the center of images and the photometry of a microlensing event by a rotating star. Similar to the case of the Schwarzschild metric, the parallax effect alters the path of the center of images in the Kerr metric. The astrometry in the Schwarzschild and Kerr metrics, with and without the parallax effect, is compared in Fig. \ref{f3}. Since the parallax is not an intrinsic effect, it alters the center of images by the same amount in both the Kerr and Schwarzschild spaces (Fig. \ref{f3}). Subtraction of the parallax effect from the path of the center of the images may help us to distinguish the Kerr from the Schwarzschild metric.\\ In practice, the astrometric measurements can be done by a ground-based telescope accompanied by a space-based interferometry telescope. The ground-based telescope signals the microlensing events and undertakes the photometric measurements, and the space-based telescope measures the displacement of the image centroids. We examine the sensitivity of the angular momentum detection as a function of the lens parameters. To do so we would need a Monte Carlo simulation generating the mass function as well as the corresponding angular momentum for a given direction in the Milky Way, and an observational efficiency to estimate the number of events. However, since the dependence of the angular momentum on the mass function of stars is not well known, we generate a uniform distribution of lens parameters and use the criterion that the difference between the astrometry of Kerr and ordinary lenses should be larger than the angular resolution of GAIA or SIM ($10~\mu$as). The fraction of events with detectable angular momentum is shown in Fig. \ref{f4}. It is seen that the angular momentum detection is correlated with all of the lens parameters. Increasing the spin, in contrast to the mass, raises the detection probability. Very short and very long Einstein crossing times are not favored for the spin detection; on the other hand, a small impact parameter is more convenient for our purpose. Increasing the distance of the lens from the observer also decreases the sensitivity of the angular momentum detection, as it decreases the displacement of the center of images (Fig. \ref{f4}). \section{Conclusion}\label{sum} In the present study we examined the possibility of detecting the angular momentum of rotating lenses by gravitational microlensing. Of the two methods, photometric and astrometric microlensing, the latter is more feasible. Future space-based interferometry telescopes such as GAIA and SIM can be used for this purpose. Studying the detection sensitivity in terms of the lens parameters showed that the centroid of images for lenses with angular momentum $S\gtrsim 10^{48}~{\rm kg~m^2~s^{-1}}$ deviates by more than $10~\mu$as from that of the Schwarzschild case.
A recent analysis of X-ray spectral data from {\it ASCA} and {\it RXTE} for the two black hole candidates GRO J1655-40 and 4U 1543-47 has estimated the corresponding dimensionless spin parameter $a_{*} = Sc/GM^2$ to be $\sim 0.7 - 0.75$ and $\sim 0.85-0.9$, respectively \cite{sha05}. Combined with our estimate of the spin detection limit, this implies that the mass of an extreme black hole $(a_{*} = 1)$ with observable angular momentum should be more than $10^3 M_{\odot}$. \section*{Acknowledgments} The authors would like to thank S.E. Vazquez, O. Wucknitz, L. Wisotzki and C. Fendt for their useful comments and for providing some of the references. M-NZ and H.E. also thank the University of Tehran for supporting this project under the grants provided by the research council.
\section{Introduction} The dynamics of reacting species presents several issues of great interest from a theoretical point of view~\cite{xin,Constantin:2000p15663,Ver_12}. Moreover, it is also a problem of wide application in many fields, from front propagation in gases~\cite{combustion} and chemical reactions in liquids~\cite{chem,e95} to the ecological dynamics of biological systems (e.g. plankton in the oceans)~\cite{bio,alplmb00,guasto2012fluid,korolev2010genetic,d2010fluid}. In the simplest model of reaction dynamics, the state of the system is described by a single scalar field $\theta({\bf r},t)$, which represents the concentration of products. The field $\theta$ vanishes in the regions filled with fresh material (the unstable phase), equals unity where only inert products are left (the stable phase) and takes intermediate values wherever reactants and products coexist, i.e., in the region where production takes place. In their seminal contributions, Fisher, Kolmogorov, Petrovskii and Piskunov~\cite{FKPP,f37} (FKPP) considered the simplest case of pure reaction/diffusion and proposed the so-called FKPP model \begin{equation} \partial_t \theta = D \Delta \theta + f(\theta)\,, \label{eq:rd} \end{equation} where $D$ is the molecular diffusivity and $f(\theta)$ describes the reaction process, which obviously depends on the phenomenon under investigation. In this work, as in the original works of FKPP, we focus on pulled reactions, e.g. the autocatalytic reaction $f=\alpha\theta(1-\theta)$, where $\alpha$ is the reaction rate and its inverse, $\tau=1/\alpha$, is the reaction time. However, most natural phenomena take place in deformable media like fluids and therefore transport properties cannot be ignored. If the medium is stirred, i.e. an Eulerian velocity field $\mathbf{u}(\mathbf{x})$ is present, Eq. (\ref{eq:rd}) can be generalized, in the incompressible case, to \begin{equation} \partial_t \theta + (\mathbf{u} \cdot \mathbf{\nabla})\theta=D \Delta \theta + f(\theta)\,. \label{eq:evol} \end{equation} The complete mathematical description of these phenomena is given by partial differential equations (PDEs) for the coupled evolution of the velocity field and of the concentration of the reacting species~\cite{combustion}. Therefore the above Eq.~(\ref{eq:evol}) should be coupled with the Navier-Stokes equations (usually in a non-trivial way). This is the general framework for treating engineering combustion problems in gases~\cite{Peters,prudhomme,poinsot}. In some cases, e.g.~\cite{vladimirova2003model}, the coupling can be simplified using a Boussinesq term.\\ In this work, as a further simplification, we assume that the reactants do not influence the velocity field, which evolves independently. In such a limit the dynamics is still non-trivial and is completely described by the above ARD Eq.~(\ref{eq:evol}) together with the proper definition of a given velocity field, $\mathbf{u}(\mathbf{x})$. This equation has been intensively studied in incompressible media~\cite{audoly,vladimirova2003flame,acvv1,ctvv}. In particular, the dependence of the front speed on $D$, $\alpha$ and the velocity field $\mathbf{u}(\mathbf{x})$ has been investigated~\cite{acvv}. On the contrary, in the case of compressible flows, the ARD problem has received little attention, and only recently in a mathematical framework~\cite{constantin2008propagation,benzi}.
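As an aside, the pulled-front dynamics of Eq.~(\ref{eq:rd}) is easy to reproduce numerically. The following minimal Python sketch (not the production code used in this work; the explicit scheme, grid sizes and parameter values are illustrative choices only) integrates the one-dimensional FKPP equation and recovers a front speed approaching the well-known FKPP value $v_0=2\sqrt{D\alpha}$ from below.
\begin{verbatim}
import numpy as np

# Minimal 1D FKPP solver for d_t theta = D d_xx theta + alpha*theta*(1-theta).
# Explicit Euler in time and centred second differences in space (illustrative
# choices only; the 2D computations in this paper use a higher-order scheme).
D, alpha = 1.0, 1.0
L_dom, N = 400.0, 2000
dx = L_dom / N
dt = 0.2 * dx**2 / D                   # well inside the diffusive stability limit
x = np.arange(N) * dx
theta = np.where(x < 50.0, 1.0, 0.0)   # burned material on the left

def laplacian(f):
    lap = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
    lap[0] = lap[-1] = 0.0             # keep the far boundaries frozen
    return lap

def burned_length(f):
    return np.sum(f) * dx              # tracks the position of the front

t_final = 100.0
pos0 = burned_length(theta)
for _ in range(int(t_final / dt)):
    theta = theta + dt * (D * laplacian(theta) + alpha * theta * (1.0 - theta))

v_measured = (burned_length(theta) - pos0) / t_final
print(v_measured, "vs FKPP speed 2*sqrt(D*alpha) =", 2.0 * np.sqrt(D * alpha))
\end{verbatim}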
Accounting for compressible flows is indeed not simple, but it is a relevant issue in combustion~\cite{Peters,poinsot}, in plankton dynamics in turbulent flows~\cite{lewis2000planktonic} and also in particle-laden flows, where the particle phase can be highly compressible even in incompressible flows because of inertia~\cite{Min_01,falkovich2001particles,bec_07}. While the passive scalar approximation for reactive species is hardly tenable in gas combustion phenomena, it may be considered appropriate in aqueous or liquid reactions (notably plankton in oceans) and for dilute particle-laden flows. In those cases, it may give some relevant insights into front propagation and can be used as a model for flame tracking in some limits~\cite{vladimirova2006flame}. Our aim is to investigate the effect of compressibility on the bulk burning rate of the reaction process by studying the following PDE: \begin{equation} \rho \left[ \frac{\partial \theta}{\partial t} + u_i \frac{\partial \theta}{\partial x_i} \right] = D_0 \frac{\partial^2 \theta}{\partial x_i^2} + f(\theta) \label{scalar} \end{equation} The scalar field $\theta$ represents the mass fraction of a single species of a binary mixture, while $u_i$ is the $i^{th}$ component of a given compressible flow, $D_0=\rho D$ is the diffusion coefficient (assumed to be constant), $f(\theta) = \dot \omega$ is the rate of production of the chosen species and $\rho$ is the non-constant density of the fluid.\\ The paper is organized as follows: Section~\ref{sec:2} is devoted to the presentation of the model and the principal aspects of the numerical computations. In Section~\ref{sec:3} we discuss the results for the front propagation in compressible shear flows. Section~\ref{sec:4} is devoted to the case of compressible cellular flows. Finally, in Section~\ref{sec:5}, we draw our conclusions. \section{The model\label{sec:2}} The PDE model described by Eq.~(\ref{scalar}) can be derived from the equation for the conservation of species, which is relevant for combustion dynamics~\cite{combustion,prudhomme}. Let us consider two species (namely $A$ and $B$) which diffuse and react together while they are passively transported by a compressible flow. With $\rho_A(x,y,t)$ the mass of species $A$ per unit volume, the conservation of species $A$ gives: \begin{equation} \frac{\partial{\rho_A}}{\partial t} + \frac{\partial}{\partial x_i} \left[{\rho_A (u_i+U_{A,i})} \right]=\dot \omega_A \end{equation} where $u_i$ is the $i^{th}$ component of the advective flow field, $U_{A,i}$ is the diffusion velocity of species $A$ and $\dot \omega_A$ is the rate of production. We define the mass fraction $Y_k=\rho_k/\rho$, where $\rho$ is the density of the mixture and $k=A,B$. The species conservation can be written in terms of the mass fractions as follows: \begin{equation} \frac{\partial{(\rho Y_k)}}{\partial t} + \frac{\partial}{\partial x_i} \left[{\rho Y_k (u_i+U_{k,i})} \right]=\dot \omega_k \label{generale} \end{equation} where $Y_A+Y_B=1$.
Moreover, if Fick's law is considered, the diffusion velocities can be defined as follows: \begin{equation} Y_A U_{A,i} = -Y_B U_{B,i}= - D \frac{\partial Y_A}{\partial x_i} \end{equation} In the following, we assume an auto-catalytic irreversible law $A+B \xrightarrow{} 2A$: \begin{equation} \dot \omega_A= \alpha {\rho_A \rho_B}=\alpha \rho^2 Y_A Y_B=\alpha \rho^2 Y_A (1-Y_A) \end{equation} where the constant $\alpha$ controls the speed of the reaction and, by definition, $\dot \omega_A=-\dot \omega_B$. Thus, the evolution of the mass fraction of species $A$ is completely described by the following PDE: \begin{equation} \rho \left[ \frac{\partial \theta}{\partial t} + u_i \frac{\partial \theta}{\partial x_i} \right]= D_0 \frac{\partial^2 \theta}{\partial x_i^2} + \alpha \rho^2 \theta (1-\theta )\,, \label{scalar2} \end{equation} which holds if we neglect the coupling between the species conservation equation and the energy conservation equation. This is the case in which the energy released by the reaction is negligible, so that the momentum and energy equations evolve independently. The left-hand side of Eq.~(\ref{scalar2}) is written in non-conservative form using the continuity equation of the mixture, and the product $\rho D=D_0$ is assumed constant (which is quite a reasonable hypothesis~\cite{Peters,prudhomme}). Since we are interested in front propagation, we consider the following geometry: \begin{equation} -\infty<x<\infty~~,~~0 \le y \le L \end{equation} For the sake of simplicity we assume periodic boundary conditions in the $y$-direction, together with $\theta(-\infty,y,t)=1$ (burned material, in combustion terminology) and $\theta(\infty,y,t)=0$ (fresh material). At $t=0$ the initial condition is given by: \begin{equation} \theta(x,y,t) = \left\{ \begin{array}{ll} 1 & \textrm{if $x<0$}\\ 0 & \textrm{if $x \geq 0$} \end{array} \right. \end{equation} Of course, different boundary and initial conditions may be of interest. For instance, if one is interested in quenching issues, appropriate initial conditions would have $\theta$ initially localized in a region of size $\ell$: \begin{eqnarray} \theta(x,y,0) &=& \left\{ \begin{array}{ll} 1 & \textrm{if $-\ell/2\leq x\leq \ell/2$}\\ 0 & \textrm{if $x<-\ell/2$ or $x>\ell/2$} \end{array} \right.\,. \end{eqnarray} Equation~(\ref{scalar2}) has been solved using an eighth-order central finite-difference scheme in space and a fourth-order Runge-Kutta integration in time. The grid size is sufficiently small to guarantee a good representation of the shear across the reacting region, and the convergence of the solutions has been verified. To accurately compute the asymptotic mean bulk burning rate, very long integration periods are required. The grid is remapped following the reacting front, and the computational domain is extended upstream and downstream from the reactive zone so that boundary effects are negligible. \section{Compressible shear flow\label{sec:3}} \begin{figure}[ht!] \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./spessori.eps} \caption{Shape of the active part of the front (we use the function $4\theta(1-\theta)$, which is maximal for $\theta=0.5$), for a fixed Peclet number ($Pe=100$) and for two different reaction rates. Taking as reference the incompressible test case ($\epsilon=0$), we can define two different thicknesses: the bare front thickness $\delta \sim \sqrt{D/\alpha}$ and the distance $\Delta$ between the tip and the tail of the reacting region.
For a slow reaction (upper panel, $Da=1$), we have approximately $\Delta \approx 10$ and $\delta \approx 6$. For a fast reaction (lower panel, $Da=100$), $\Delta \approx 0.8$ and $\delta \approx 0.15$. } \label{spessori} \end{figure} We investigate the effects of compressibility in a 2D steady-state shear flow using the velocity field \begin{equation} \bar{u}(x,y) = \left( \frac{ U_0 \sin \left(\frac{2 \pi y}{L}\right) }{ 1+\epsilon \sin \left( \frac{2 \pi x}{\lambda}\right) } , 0 \right)\,. \label{eq:kolm} \end{equation} Such a choice corresponds to a Kolmogorov flow with amplitude $U_0$ and wavelength $L$, perturbed by a steady wave of wavelength $\lambda$ and magnitude $\epsilon$ accounting for the compressibility of the flow. Let us stress that the perturbation is oriented along the direction of propagation of the reactive front, i.e. the $x$-axis.\\ In order to satisfy the continuity equation, $\partial_i(\rho u_i)=0$, it is necessary to impose a spatial dependence on $\rho$, namely $$ \rho(x) = \rho_0 \left[ 1+\epsilon \sin \left( \frac{2 \pi x}{\lambda}\right) \right] \,. $$ Finally, Eq.~(\ref{scalar2}) can be written in non-dimensional form: \begin{equation} \frac{\partial \theta}{\partial t^*} + u^*_i \frac{\partial \theta}{\partial x^*_i} = \frac{1}{\rho^* Pe } \frac{\partial^2 \theta}{\partial {x^*_i}^2} + {\rho^*} Da \theta (1-\theta ) \label{adimensionale} \end{equation} if we define $\rho^*=\rho/\rho_0$, $x_i^*=x_i/L$, $u_i^*=u_i/U_0$, $t^*=(tU_0)/L$. \\ The adimensional parameters $Pe=(\rho_0 U_0 L)/D_0$ and $Da=(L \alpha \rho_0)/U_0$ are the Peclet and Damk\"{o}hler numbers, which define the ratio between the diffusive and advective time scales and the ratio between the advective and reactive time scales, respectively. In the following, we drop the star notation and solve Eq.~(\ref{adimensionale}) focusing on regimes at high Peclet number, $Pe\gg1$. Varying the Damk\"{o}hler number in the range $Da \in [1,1000]$, we quantify the effects of $\lambda$ and $\epsilon$ on the asymptotic value of the bulk burning rate. The instantaneous bulk burning rate is: \begin{equation} v_f(t)=\int_0^{1} \int_{0}^{+\infty} \dot \omega~ dx dy=\int_0^{1} \int_{0}^{+\infty} Da \rho^2 \theta (1-\theta )~ dx dy\,, \label{vistantaneo} \end{equation} while the mean or asymptotic bulk burning rate is defined as the time average of $v_f(t)$ over a sufficiently long interval: \begin{equation} v =
\frac{1}{T}\int_0^{T} v_f(t) dt \label{vmedio} \end{equation} To shed some light on the role played by $\lambda$, we first run simulations in the absence of compressibility for two different Damk\"{o}hler numbers (slow and fast reaction) and for a fixed Peclet number. We characterize the thicknesses of the reactive front, $\Delta$ and $\delta$ (see Figure \ref{spessori} for their definition). From this figure, it is clear that the faster the reaction, the thinner the flame. We have then carried out simulations in which the compressibility is fixed ($\epsilon=0.5$) and $\lambda$ is chosen to be approximately greater than, smaller than, or between the two thicknesses computed in the case of zero compressibility.\\ In Figure \ref{fronti} we show how the geometrical aspect of the reactive front changes as $\lambda$ is varied. In the low-density zones, the front appears broader than in the high-density zones, due to the decrease of the local Peclet and Damk\"{o}hler numbers ($Pe_l=\rho Pe$, $Da_l=\rho Da$). The compressibility perturbation wrinkles the front in the small-wavelength limit, whereas for large wavelengths the front is only corrugated, since the entire reactive-diffusive front is embedded in a single wavelength. Nevertheless, even though the front is not stationary (even in a co-moving reference frame) and is noticeably distorted by the presence of compressibility, the mean velocity of propagation ($v$) does not change; see Figure \ref{speedlambda}. The wavelength of the perturbation controls the frequency and the magnitude of the instantaneous value of the front speed, but does not affect the mean value. Since the asymptotic propagation is not affected by $\lambda$, from now on we set $\lambda=1$ in all simulations. \begin{figure}[ht!] \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./spessorieps100.eps} \\ \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./spessorieps200.eps} \\ \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./spessorieps300.eps} \\ \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./spessorieps400.eps} \\ \caption{Snapshot of $4\theta(1-\theta)$ for a fixed Peclet number ($Pe=100$) and for Damk\"{o}hler number $Da=100$. Panel (a) refers to an incompressible simulation ($\epsilon=0$), while for the others $\epsilon=0.5$. For the compressible tests the characteristic length $\lambda$ is set to be approximately smaller than (panel (b), $\lambda=0.1$), between (panel (c), $\lambda=0.5$) or greater than (panel (d), $\lambda=4$) the two thicknesses $\Delta$ and $\delta$. } \label{fronti} \end{figure} \begin{figure}[ht!] \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./vf-a-1-l-1-4-15.eps} \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./vf-a-100-l-01-05-4.eps} \\ \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./mb-a-1-l-1-4-15.eps} \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./mb-a-100-l-01-05-4.eps} \\ \caption{(Color online) Front speed and burnt mass ($m_b(t)=\int_0^{t} v_f(t)~dt$) as a function of time for a fixed $Pe=100$, two different Damk\"{o}hler numbers ($Da=1$ in panels a,c; $Da=100$ in panels b,d) and different $\lambda$. } \label{speedlambda} \end{figure} In order to quantitatively characterize the effect of compressibility, we vary both the parameter $\epsilon$ and $Da$, at fixed Peclet number.
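Before turning to the quantitative comparison, we note that the flow and the diagnostics above are straightforward to set up. The following minimal Python sketch (illustrative only; grid sizes, the streamwise truncation and the dummy $\theta$ profile are our own choices) constructs the compressible Kolmogorov flow of Eq.~(\ref{eq:kolm}) together with the density $\rho(x)$, checks that $\partial_i(\rho u_i)=0$ holds by construction, and evaluates the instantaneous bulk burning rate of Eq.~(\ref{vistantaneo}).
\begin{verbatim}
import numpy as np

L, lam, U0, rho0, eps, Da = 1.0, 1.0, 1.0, 1.0, 0.5, 100.0
nx, ny = 512, 128
x = np.linspace(0.0, 8.0 * L, nx, endpoint=False)   # truncated streamwise extent
y = np.linspace(0.0, L, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")

rho = rho0 * (1.0 + eps * np.sin(2.0 * np.pi * X / lam))
ux = U0 * np.sin(2.0 * np.pi * Y / L) / (1.0 + eps * np.sin(2.0 * np.pi * X / lam))
uy = np.zeros_like(ux)

# rho * ux depends on y only, so div(rho u) = 0 holds by construction.
assert np.allclose(np.ptp(rho * ux, axis=0), 0.0)

# Instantaneous bulk burning rate for a given theta field (here a dummy smooth front).
theta = 0.5 * (1.0 - np.tanh((X - 4.0 * L) / 0.2))
dx, dy = x[1] - x[0], y[1] - y[0]
v_f = np.sum(Da * rho**2 * theta * (1.0 - theta)) * dx * dy
print("instantaneous bulk burning rate v_f =", v_f)
\end{verbatim}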
For this purpose, it is convenient to define the percentage difference of the mean asymptotic front speed between the compressible and the incompressible case as follows: \begin{equation} \Delta v_\% = 100 ~\frac{v-v^0}{v^0} \end{equation} where $v^0$ is the asymptotic bulk burning rate, as defined in (\ref{vmedio}), for the incompressible case ($\epsilon=0$). Results are shown in Figure \ref{mappe}.\\ In general, in the regimes investigated here, we observe that the presence of compressibility can slightly enhance the reaction process, and the effect grows when increasing both $\epsilon$ and $Da$. For a fixed characteristic reaction rate (see Figure \ref{mappe}.a), the numerical simulations suggest a quadratic power law for the velocity enhancement as a function of the parameter $\epsilon$, \begin{equation} \Delta v_\%\sim \epsilon^2. \end{equation} Instead, the dependence on the Damk\"{o}hler number is much weaker. As shown in Figure \ref{mappe}.b, the parameter $\Delta v_\%$ is always positive and it grows following (approximately) a logarithmic law: \begin{equation} \Delta v_\% \sim a\ln(Da)+b \end{equation} where $a$ and $b$ may depend on $\epsilon$. Therefore, even in the case of very strong compressibility ($\epsilon=0.5$) and very fast reaction ($\alpha=1000$), the difference never exceeds a modest $6\%$. \begin{figure}[ht!] \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./asintotico-Da.eps} \includegraphics[angle=0,scale=0.4,keepaspectratio=true]{./asintotico-eps.eps} \caption{ (Color online) Comparison of the front speed between compressible and incompressible shear flow for a fixed Peclet number ($Pe=100$) at different compressibility magnitudes $\epsilon$ (left) and for different Damk\"{o}hler numbers (right).} \label{mappe} \end{figure} The effect of the compressible wave perturbation therefore appears to be: i) a wrinkling of the front; ii) a second-order enhancement of its speed. \section{Compressible cellular flow}\label{sec:4} We now discuss the case of cellular flows, i.e., 2D steady flows of amplitude $U_0$ composed of counter-rotating vortices of size $L/2$. The compressibility is imposed in the following way: \begin{align} \rho(x,y) &= \rho_0 C(x,y) \\ \bar{u}(x,y) &= \left( \frac{ U_0 \sin \left(\frac{2 \pi x}{L}\right)\cos \left(\frac{2 \pi y}{L}\right) }{C(x,y)} , \frac{ -U_0 \cos \left(\frac{2 \pi x}{L}\right) \sin \left(\frac{2 \pi y}{L}\right) }{C(x,y)}\right) \end{align} We choose two different shapes for $C(x,y)$. In the first, which we call case (a), the density of the mixture is higher in the centre of the vortices:\\ \begin{align} C(x,y)=1+ \epsilon \left( \left| \sin\left(\frac{2 \pi x}{L}\right)\sin\left(\frac{2 \pi y}{L}\right) \right| -\frac{4}{\pi^2} \right) \end{align} In the second, which we call case (b), the density is higher at the periphery of the vortices: \begin{align} C(x,y)=1- \epsilon \left( \left| \sin\left(\frac{2 \pi x}{L}\right)\sin\left(\frac{2 \pi y}{L}\right) \right| -\frac{4}{\pi^2} \right) \end{align} \begin{figure}[ht!] \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./snap_campo_centro.eps} \includegraphics[angle=0,scale=0.5,keepaspectratio=true]{./snap_campo_periferia.eps} \caption{Compressible cellular flow: (a) the density is higher in the centre of the vortices; (b) the density is higher in the outer region.} \label{fig:cellular} \end{figure} The two different configurations are shown in Figure~\ref{fig:cellular}. The constant factor $\frac{4}{\pi^2}$ has been introduced in order to have a density perturbation whose spatial average is zero.
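As a quick consistency check (not part of the analysis that follows), the Python sketch below verifies numerically that the offset $4/\pi^2$ indeed makes the density perturbation zero on average, and constructs the case (a) and case (b) density and velocity fields. All grid sizes and parameter values are illustrative; note that $\rho\,\bar{u}$ reduces to the standard incompressible cellular flow, so $\partial_i(\rho u_i)=0$ is satisfied for any choice of $C(x,y)$.
\begin{verbatim}
import numpy as np

L, rho0, U0, eps = 1.0, 1.0, 1.0, 0.5
n = 1024
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

s = np.abs(np.sin(2.0 * np.pi * X / L) * np.sin(2.0 * np.pi * Y / L))
print(s.mean(), "vs", 4.0 / np.pi**2)        # both close to 0.405

C_a = 1.0 + eps * (s - 4.0 / np.pi**2)       # case (a): denser vortex cores
C_b = 1.0 - eps * (s - 4.0 / np.pi**2)       # case (b): denser vortex peripheries
for name, C in (("(a)", C_a), ("(b)", C_b)):
    rho = rho0 * C
    ux = U0 * np.sin(2.0 * np.pi * X / L) * np.cos(2.0 * np.pi * Y / L) / C
    uy = -U0 * np.cos(2.0 * np.pi * X / L) * np.sin(2.0 * np.pi * Y / L) / C
    # rho*(ux, uy) is the incompressible cellular flow, hence div(rho u) = 0.
    print("case", name, "mean density:", rho.mean())
\end{verbatim}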
As in the case of the shear flow, we study the dependence of the dynamics on the compressibility intensity $\epsilon$ and on the Damk\"{o}hler number, in the more realistic case of a fixed, high Peclet number. We consider a wider range of Damk\"{o}hler numbers, exploring the regimes $Da\ll1$, $Da\approx 1$ and $Da\gg1$. Nevertheless, we remain in regimes with $Pe\,Da>1$, which means that the characteristic diffusion time is always larger than the reaction time. Unlike the shear flow case, in the cellular flow $\Delta v_\%$ does not show a monotonic dependence on either $\epsilon$ or $Da$, as can be seen from Figure \ref{fig:cellular2}. Such a feature has also been observed for other configurations of $C(x,y)$ (simulations not shown here), confirming that the non-monotonic behaviour of $\Delta v_\%$ is related to the whirling geometry of the flow rather than to the choice of the density perturbation. In the slow-reaction regime ($Da \ll 1$), Fig. \ref{fig:cellular2} shows an almost Damk\"{o}hler-independent behaviour of $\Delta v_\%$ in both cases (a) and (b). On the other hand, in an intermediate range of Damk\"{o}hler numbers, where the combined effect of advection and reaction is more intriguing, the two flow configurations show opposite trends for $\Delta v_\%$: the reaction is faster when the density is higher in the centre (case (a)), whereas it is slower when the perturbation is at the periphery (case (b)). Such a behaviour is not surprising, since the interplay between reaction and diffusion in the presence of closed streamlines can lead to a non-trivial behaviour also in the case of incompressible flow~\cite{acvv1}, and the presence of variations in the density of the flow can act in a very non-intuitive way. \begin{figure}[ht!] \includegraphics[angle=0,scale=0.45,keepaspectratio=true]{./diff100_Da_tutti_centro.eps} \includegraphics[angle=0,scale=0.45,keepaspectratio=true]{./diff100_Da_tutti_periferia.eps} \\ \caption{(Color online) (a,b) Percentage difference of the mean asymptotic bulk burning rate between the compressible and incompressible test cases; $1/Pe=0.003$. In panel $(a)$ the density of the fluid is higher in the centre of the vortices, while in panel $(b)$ the density is higher at the periphery.} \label{fig:cellular2} \end{figure} \section{Conclusions}\label{sec:5} We have studied the propagation of fronts through an advection-diffusion-reaction equation where the nonlinear reaction term is given by the classical FKPP source term. The advective flow is generated by an imposed field which is perturbed by compressible waves. The compressibility is controlled by the parameter $\epsilon$. Two velocity fields have been considered: a shear flow and a cellular one. In the considered flows, the front can be strongly affected by compressibility and the compressibility forces a strong localization of the density, but the quantitative differences with respect to the incompressible model appear modest (of the order of a few percent). On the basis of previous studies, we do not think that the presence of chaos (turbulent fluctuations) should change the scenario much~\cite{ctvv}. Some comments are in order to discuss the apparent difference in the behaviour of $\Delta v_\%$ between the shear flow and the cellular flow (see Figs.~\ref{mappe} and \ref{fig:cellular2}). The streamlines in the two cases are very different, namely open and closed, respectively.
In the shear flow, the front propagation is only slightly modified by compressibility with respect to the incompressible case, since the front is mainly driven by the stream. On the other hand, closed streamlines trigger entangled mechanisms between reaction and diffusion which, coupled with the compressibility, generate highly non-trivial features. An example of this complicated behaviour can be found in the non-monotonic dependence of $\Delta v_\%$ on the Damk\"{o}hler number, or in the clear difference between cases (a) and (b) of the cellular flows considered here. Finally, it is interesting to note that a similar model has been recently used for the study of population dynamics in turbulent flows~\cite{benzi}: \begin{equation} \frac{\partial{C}}{\partial t} + {\bf \nabla} \cdot ({\bf u} C) =D_0 \nabla^2 C+ \mu C (1-C) \label{Bianco_Federicoeq6} \end{equation} where the scalar $C({\bf x},t)$ is the concentration of a population~\cite{benzi}, which is the equivalent of our $\rho \theta$ in Eq.~(\ref{scalar2}). When ${\nabla} \cdot {\bf u} \ne 0$, clustering of the population near compression regions $({\nabla}\cdot{\bf u} < 0)$ is observed. In those regions, the concentration can take values greater than one and the reaction rate in Eq.~(\ref{Bianco_Federicoeq6}) can be negative, so that the scalar $C({\bf x},t)$ is not a fractional parameter. Within this model, the authors linked changes in the overall carrying capacity of the ecosystem (i.e. the density of biological mass of the system) to the compressibility and its localisation effect. Yet, the present results show that the change of the carrying capacity is not due to density localisation, but rather to the choice of a different reaction term, which allows a negative rate in high-density zones. Indeed, in the present work we have a strong density localisation, but our FKPP model for a fractional parameter does not allow a negative rate. The result is that the average carrying capacity does not change even in the presence of compressible flows. The analysis of the present results for the Lagrangian displacement of passive reactive tracers and for irreversible reaction dynamics is ongoing. \section{Acknowledgements} We would like to thank Dr Guillaume Legros, P. Perlekar and Dr Roger Prud'homme for fruitful discussions. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:introduction} \gls{soc} designs include the processors and associated peripheral blocks of silicon-chip-based computers and are an intrinsic piece of modern computing, owing their often complex design to lifetimes of work by hundreds of hardware and software engineers. The \gls{soc} in a RaspberryPi\cite{BCM2836}, for example, includes four ARM processors, memory caches, graphics processors, timers, and all of the associated interconnect components. Measuring, analysing, and understanding the behavior of these systems is important for the optimization of cost, size, power usage, performance, and resilience to faults. Sampling the voltage levels of many individual wires is typically infeasible due to bandwidth and storage constraints, so sparser event-based measurements are often used instead, e.g. observations like ``\texttt{cache\_miss} @ \SI{123}{\nano\second}''. This gives rise to datasets of very long concurrent streams of binary occurrence/non-occurrence data, so an understanding of how these event measurements are related is key to the design optimization process. It is therefore desirable to have an effective estimate of the connectedness between bit vectors to indicate the existence of pairwise relationships. Given that a \gls{soc} may perform many different tasks, the relationships may change over time, which means that a windowed or, more generally, a weighted approach is required. Relationships between bit vectors are modelled as boolean functions composed of negation (NOT), conjunction (AND), inclusive disjunction (OR), and exclusive disjunction (XOR) operations, since this fits well with natural language and has previously been successfully applied to many different system types\cite{MeasureLogicComplexity}, e.g. relationships of the form ``\texttt{flush} occurs when \texttt{filled} AND \texttt{read\_access} occur together''. This paper provides the following novel contributions: \begin{itemize} \item A probabilistic model for \gls{soc} data which allows a large amount of representative data to be generated and compared on demand. \item An empirical study of the accuracy of several weighted correlation and similarity metrics when used for relationship estimation. \end{itemize} A collection of previous work is reviewed, and the metrics are formally defined with the reasoning behind them. Next, assumptions about the construction of \gls{soc} relationships are explained and the design of the experiment is described along with the method of comparison. Finally, results are presented as a series of \gls{pdf} plots and discussed in terms of their application. \section{Previous Work} \label{sec:previouswork} An examination of currently available hardware and low-level software profiling methods is given by Lagraa\cite{LagraaThesis}, which covers well-known techniques such as using counters to generate statistics about both hardware and software events -- effectively a low-cost data compression. Lagraa's thesis is based on profiling \gls{soc}s created specifically on Xilinx MPSoC devices, which, although powerful, means it may not be applied to data from non-\gls{fpga} sources such as designs already manufactured in silicon, which is often the end goal of \gls{soc} design. Lo et al\cite{MiningPastTemporalRules} described a system for describing behavior with a series of statements using a search-space exploration process based on boolean set theory.
While this work has a similar goal of finding temporal dependencies, it is acknowledged that the mining method does not perform adequately for the very long traces often found in real-world \gls{soc} data. Ivanovic et al\cite{TSAnalysisPossApp} review time series analysis models and methods, describing characteristic features of economic time series such as being drawn from noisy sources, high auto-dependence and inter-dependence, high correlation, and non-stationarity. \gls{soc} data is expected to have these same features, together with full binarization and much greater length. The expected utility approach to learning probabilistic models by Friedman and Sandow\cite{LearningProbabilisticModels} minimises the Kullback-Leibler distance between observed data and a model, attempting to fit that data using an iterative method. As noted by Friston et al\cite{BayesModelReduct}, fully learning all parameters of a Bayesian network through empirical observations is an intractable analytic problem which simpler non-iterative measures can only roughly approximate. The approach of modelling relationships as boolean functions has been used for measuring complexity and detecting patterns in a variety of fields, including complex biological systems from the scale of proteins to groups of animals\cite{InfoProcLivingSystems}. `Correlation' is a vague term which has several possible interpretations\cite{ThirteenWaysCorrelationCoefficient}, including treating data as high-dimensional vectors, sets, or population samples. A wide survey of binary similarity and distance measures by Choi et al\cite{SurveyBinarySimilarityMeasures} tabulates 76 methods from various fields and classifies them as either distance, non-correlation, or correlation based. A similarity measure is one where a higher result is produced for more similar data, whereas a distance measure will give a higher result for data which are further apart, i.e., less similar. The distinction between correlation and similarity can be shown with an example: if it is noticed over a large number of parties that the patterns of attendance of Alice and Bob are similar, then it may be inferred that there is some kind of relationship connecting them. In this case the attendance patterns of Alice and Bob are both similar and correlated. However, if Bob is secretly also seeing Eve, it would be noticed that Bob only attends parties if either Alice or Eve attends, but not both at the same time. In this case Bob's pattern of attendance may not be similar to that of either Alice or Eve, but will be correlated with both. It can therefore be seen that correlation is a more powerful approach for detecting relationships, although it typically involves more calculation. In a \gls{soc} design the functionality is split into a number of discrete logical blocks, such as a timer or an ARM processor, which communicate via one or more buses. The configuration of many of these blocks and buses is often specified with a non-trivial set of parameters which affects the size, performance, and cost of the final design. The system components are usually a mixture of hardware and software which should all work in harmony to achieve the designer's goal, and the designer will usually have in mind how this harmony should look. For example, the designer may have a rule that they would like to confirm, such as ``software should use the cache efficiently'', which will be done by analysing the interaction of events such as \texttt{cache\_miss} and \texttt{enter\_someFunction}.
By recording events and detecting inter-event relationships, the system designer can decide if the set of design parameters should be kept or changed\cite{paper0}, thus aiding the \gls{soc} design optimization process. \section{Metrics} \label{sec:metrics} A measured stream of events is written as $f_i$, where $i$ is an identifier for one particular event source such as \texttt{cache\_miss}; $f_i(t) = 1$ indicates that event $i$ was observed at time $t$, and $f_i(t) = 0$ indicates that $i$ was not observed at time $t$. A windowing or weighting function $w$ is used to create a weighted average of each measurement to give an expectation of an event occurrence. \begin{align} \label{eq:def_Ex} {\sEx}{\left[ f_i \right]} &= \frac{1}{\sum_{t} w(t)} \mathlarger{\sum}_{t} w(t) * f_i(t) \quad \in [0,1] \end{align} Bayes' theorem may be rearranged to find the conditional expectation. \begin{align} \label{eq:bayes} \Pr(X|Y) &= \frac{\Pr(Y|X) \Pr(X)}{\Pr(Y)} = \frac{\Pr(Y \cap X)}{\Pr(Y)}, \quad \text{if}\ \Pr(Y) \neq 0 \\ \label{eq:def_cEx} {\sEx}{\left[ f_x | f_y \right]} &:= \begin{dcases} \NaN &: {\sEx}{\left[ f_y\right]} = 0 \\ \frac{{\sEx}{\left[ f_x * f_y \right]}} {{\sEx}{\left[ f_y \right]}} &: \text{otherwise} \end{dcases} \end{align} It is not sufficient to look only at the conditional expectation to determine if $X$ and $Y$ are related. For example, the result $\Pr(X|Y) = 0.9$ may arise from $X$'s relationship with $Y$, but may equally arise from the case $\Pr(X) = 0.9$. A na\"ive approach might be to estimate how similar a pair of bit vectors are by counting the number of matching bits. The expectation that a pair of corresponding bits are equal is the Hamming similarity\cite{Hamming1950}, as shown in \fref{eq:def_Ham}. Where $X$ and $Y$ are typical sets\cite{InfoMacKay} this is equivalent to $\left| {\Ex}{\left[ X \right]} - {\Ex}{\left[ Y \right]} \right|$. The absolute difference $\left| X - Y \right|$ may also be performed on binary data using a bitwise XOR operation. \begin{align} \label{eq:def_Ham} \sHam(f_x, f_y) &:= 1 - {\sEx}{\left[ \left| f_x - f_y \right| \right]} \end{align} The dot in the notation is used to show that this measure is similar to, but not necessarily equivalent to, the standard definition. Modifications to the standard definitions may include disallowing $\NaN$, restricting or expanding the range to $[0,1]$, or reflecting the result. For example, reflecting the result of ${\sEx}{\left[ \left| f_x - f_y \right| \right]}$ in the definition of $\sHam$ gives a metric where $0$ indicates fully different and $1$ indicates exactly the same. A similar approach is to treat a pair of bit vectors as a pair of sets. The Jaccard index, first described for comparing the distribution of alpine flora\cite{JaccardAlpineFlora} and later refined for use on general sets, is defined as the ratio of the size of the intersection to the size of the union. Tanimoto's reformulation\cite{TanimotoClassifyPlants} of the Jaccard index, shown in \Fref{eq:tanimoto}, was given for measuring the similarity of binary sets.
\begin{align} \label{eq:tanimoto} J(X, Y) &= \frac{\left| X \cap Y \right|} {\left| X \cup Y \right|} = \frac{\left| X \cap Y \right|} {\left| X \right| + \left| Y \right| - \left| X \cap Y \right|} ,\quad \left| X \cup Y \right| \neq \varnothing \\ \label{eq:def_Tmt} \sTmt(f_x, f_y) &:= \frac{ {\sEx}{\left[ f_x * f_y \right]} } { {\sEx}{\left[ f_x \right]} + {\sEx}{\left[ f_y \right]} - {\sEx}{\left[ f_x * f_y \right]} } \end{align} Treating measurements as points in a bounded high-dimensional space allows the Euclidean distance to be calculated, then reflected and normalized to $[0,1]$ to show closeness rather than distance. This approach is common for problems where the alignment of physical objects is to be determined, such as facial detection and gene sequencing\cite{ClusteringByPassingMessages}. \begin{align} \label{eq:def_Cls} \sCls(f_x, f_y) &:= 1 - \sqrt{{\sEx}{\left[ \left| f_x - f_y \right|^2 \right]}} \end{align} It can be seen that this formulation is similar to using the Hamming distance, albeit growing quadratically rather than linearly as the number of identical bits increases. Another geometric approach is to treat a pair of measurements as bounded high-dimensional vectors and calculate the angle between them using the cosine similarity, as is often done in natural language processing\cite{Word2Vec} and data mining\cite{SemanticCosineSimilarity}. \begin{align} \label{eq:cosinesimilarity} \text{CosineSimilarity}_{X,Y} &= \frac{X \cdot Y}{\left| X \right| \left| Y \right|}, \ X,Y \neq 0 \quad \in [-1,1] \\ \label{eq:def_Cos} \sCos(f_x, f_y) &:= \frac{ {\sEx}{\left[ f_x * f_y \right]} } {\sqrt{ {\sEx}{\left[ f_x^2 \right]} } \sqrt{ {\sEx}{\left[ f_y^2 \right]} }} \quad \in [0,1] \end{align} The strict interval of the measured bit vectors, $f_x, f_y \in [0,1]$, means that $\sCos$ is always positive. The above metrics attempt to uncover relationships by finding pairs of bit vectors which are similar to each other. These may be useful for simple relationships of forms similar to ``\texttt{X} leads to \texttt{Y}'' but may not be useful for finding relationships which incorporate multiple measurements via a function of boolean operations such as ``\texttt{A} AND \texttt{B} XOR \texttt{C} leads to \texttt{Y}''. Treating measurement data as samples from a population invites the use of the covariance or the Pearson correlation coefficient as a correlation metric. The covariance, as shown in \Fref{eq:covariance}, between two bounded-value populations is also bounded, as shown in \Fref{eq:covariance_limits}. This allows the $\sCov$ metric to be defined, with negative correlations folded into $[0,1]$ by taking the absolute value. For binary measurements with equal weights $\sCov$ can be shown to be equivalent to the Pearson correlation coefficient. \begin{align} \label{eq:covariance} \cov(X, Y) &= {\Ex}{\left[ \left(X-{\Ex}{\left[ X \right]}\right) \left(Y-{\Ex}{\left[ Y \right]}\right) \right]} = {\Ex}{\left[ XY \right]} - {\Ex}{\left[ X \right]}{\Ex}{\left[ Y \right]} \\ \label{eq:covariance_limits} X,Y \in [0,1] &\implies \frac{-1}{4} \leq \cov(X,Y) \leq \frac{1}{4} \\ \label{eq:def_Cov} \sCov(f_x, f_y) &:= 4\ \Big| {\sEx}{\left[ f_x * f_y \right]} - {\sEx}{\left[ f_x \right]}{\sEx}{\left[ f_y \right]} \Big| \quad \in [0,1] \end{align} Using this definition it can be seen that if two random variables are independent then $\sCov(X,Y) = 0$; however, the reverse is not true in general, as the covariance of two dependent random variables may be $0$.
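For concreteness, the following minimal Python sketch implements the weighted expectation of \Fref{eq:def_Ex} and the metrics defined so far ($\sHam$, $\sTmt$, $\sCls$, $\sCos$ and $\sCov$). The function names, the uniform default weighting and the toy AND relationship are illustrative choices of this sketch, not the published experiment code.
\begin{verbatim}
import numpy as np

def ex(f, w):
    """Weighted expectation E[f] for a weighting function w, both arrays over t."""
    return np.sum(w * f) / np.sum(w)

def ham(fx, fy, w):   # Hamming-style similarity
    return 1.0 - ex(np.abs(fx - fy), w)

def tmt(fx, fy, w):   # Tanimoto / Jaccard-style similarity
    exy = ex(fx * fy, w)
    return exy / (ex(fx, w) + ex(fy, w) - exy)

def cls_(fx, fy, w):  # reflected, normalized Euclidean closeness
    return 1.0 - np.sqrt(ex(np.abs(fx - fy) ** 2, w))

def cos(fx, fy, w):   # cosine similarity
    return ex(fx * fy, w) / (np.sqrt(ex(fx**2, w)) * np.sqrt(ex(fy**2, w)))

def cov(fx, fy, w):   # bounded covariance, scaled to [0, 1]
    return 4.0 * np.abs(ex(fx * fy, w) - ex(fx, w) * ex(fy, w))

rng = np.random.default_rng(0)
n = 10_000
w = np.ones(n)                    # a sliding window would use non-uniform weights
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
y = a & b                         # a known AND relationship
c = rng.integers(0, 2, n)         # an unrelated measurement
for name, m in [("Ham", ham), ("Tmt", tmt), ("Cls", cls_),
                ("Cos", cos), ("Cov", cov)]:
    print(name, "related:", round(m(a, y, w), 3),
          "unrelated:", round(m(c, y, w), 3))
\end{verbatim}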
The definition of independence in \Fref{eq:independence} may be used to define a metric of dependence. \begin{align} \label{eq:independence} X \indep Y &\iff \Pr(X) = \Pr(X|Y) \\ \label{eq:def_Dep} \sDep(f_x, f_y) &:= \Bigg| \frac{{\sEx}{\left[ f_x | f_y \right]} - {\sEx}{\left[ f_x \right]}} {{\sEx}{\left[ f_x | f_y \right]}} \Bigg|, \quad \text{if}\ {\sEx}{\left[ f_x \right]} \leq {\sEx}{\left[ f_x | f_y \right]} \end{align} Normalizing the difference in expectation ${\sEx}{\left[ f_x | f_y \right]} - {\sEx}{\left[ f_x \right]}$ to the range $[0,1]$ allows this to be rearranged showing that $\sDep(X,Y)$ is an undirected similarity, i.e. the order of $X$ and $Y$ is unimportant. \begin{align} \sDep(f_x, f_y) &= \frac{{\sEx}{\left[ f_x | f_y \right]} - {\sEx}{\left[ f_x \right]}} {{\sEx}{\left[ f_x | f_y \right]}} = 1 - \frac{{\sEx}{\left[ f_x \right
]} {\sEx}{\left[ f_y \right]}} {{\sEx}{\left[ f_x * f_y \right]}} = \sDep(f_y, f_x) \end{align} The metrics defined above, $\sHam$, $\sTmt$, $\sCls$, $\sCos$, $\sCov$, and $\sDep$, all share the same codomain $[0,1]$, where $1$ means the strongest relationship. In order to compare these correlation metrics, an experiment has been devised to quantify their effectiveness, as described in \fref{sec:experiment}. \section{Experimental Procedure} \label{sec:experiment} This experiment constructs a large number of \gls{soc}-like systems according to a probabilistic structure and records event-like data from them. The topology of each system is fixed, which means the relationships between bit vectors in each system are known in advance of applying any estimation metric. The metrics above are then applied to the recorded data and compared to the known relationships, which allows the effectiveness of each metric to be demonstrated empirically. The maximum number of measurement nodes $2n_{\text{maxm}}$ is set to $100$ to keep the size of systems within reasonable limits. Each system is composed of $m = m_{\text{src}} + m_{\text{dst}}$ measurement nodes $e_{i \in [1, m]}$, each of either type `src' or `dst', arranged in a bipartite graph as shown in \Fref{fig:eg_soc_relest}. In each system the numbers of measurement nodes are chosen at random, $m_{\text{src}}, m_{\text{dst}} \sim {\operatorname{U}}{(1, n_{\text{maxm}})}$. Src nodes are binary random variables with a fixed density $\sim {\operatorname{Arcsin}}{(0,1)}$, where the approximately equal number of high- and low-density bit vectors represents the equal importance of detecting relationships and anti-relationships. The value of each dst node is formed by combining a number of edges $\sim {\operatorname{Lognormal}}{(0,1)}$ from src nodes. There are five types of systems, which relate to the method by which src nodes are combined to produce the value at a dst node. One fifth of systems use only AND operations ($\land$) to combine connections to each dst node, another fifth uses only OR ($\lor$), and another fifth uses only XOR ($\oplus$). The fourth type of system uniformly chooses one of the $\land$, $\lor$, $\oplus$ methods to give a mix of homogeneous functions for each dst node. The fifth type gets the value of each dst node by applying chains of operations $\sim {\operatorname{U}}{( \{ \land, \lor, \oplus \} )}$ to combine connections, implemented as a \gls{lha}. By keeping different connection strategies separate it is easier to see how the metrics compare for different types of relationships. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{eg_soc_relest.pdf} \caption{Example system with src and dst nodes connected via binary operations. \label{fig:eg_soc_relest}} \end{figure} The known relationships were used to construct an adjacency matrix where $K_{ij} = 1$ indicates that node $i$ is connected to node $j$, and $K_{ij} = 0$ otherwise. The diagonal is not used, as these tautological relationships would provide a perfect score with every metric without providing any new information about the metric's accuracy or effectiveness. Each metric is applied to every pair of nodes to construct an estimated adjacency matrix $E$. Each element $E_{ij}$ is compared with $K_{ij}$ to give an amount of \gls{TP} and \gls{FN} where $K_{ij} = 1$ or an amount of \gls{TN} and \gls{FP} where $K_{ij} = 0$.
For example, if a connection is known to exist ($K_{ij} = 1$) and the metric calculated a value of $0.7$, then the \acrlong{TP} and \acrlong{FN} values would be $0.7$ and $0.3$ respectively, with both \acrlong{TN} and \acrlong{FP} equal to $0$. Alternatively, if a connection is known not to exist ($K_{ij} = 0$), then \acrlong{TN} and \acrlong{FP} would be $0.3$ and $0.7$, with \acrlong{TP} and \acrlong{FN} equal to $0$. These are used to construct the confusion matrix and subsequently give scores for the \gls{TPR}, \gls{TNR}, \gls{PPV}, \gls{NPV}, \gls{ACC}, \gls{BACC}, \gls{BMI}, and \gls{MCC}. \begin{minipage}[t]{0.50\textwidth} \begin{align*} \text{TP} &= \sum_{i,j} {\min}{\left( K_{ij}, E_{ij} \right)} \\ \text{FP} &= \sum_{i,j} {\min}{\left( 1-K_{ij}, E_{ij} \right)} \\ \text{TPR} &= \frac{\text{TP}} {\text{TP} + \text{FN}} \\ \text{PPV} &= \frac{\text{TP}} {\text{TP} + \text{FP}} \\ \text{ACC} &= \frac{\text{TP} + \text{TN}} {\text{TP} + \text{FN} + \text{TN} + \text{FP}} \end{align*} \end{minipage}% \begin{minipage}[t]{0.50\textwidth} \begin{align*} \text{FN} &= \sum_{i,j} {\min}{\left( K_{ij}, 1-E_{ij} \right)} \\ \text{TN} &= \sum_{i,j} {\min}{\left( 1-K_{ij}, 1-E_{ij} \right)} \\ \text{TNR} &= \frac{\text{TN}} {\text{TN} + \text{FP}} \\ \text{NPV} &= \frac{\text{TN}} {\text{TN} + \text{FN}} \\ \text{BACC} &= \frac{\text{TPR} + \text{TNR}} {2} \\ \text{BMI} &= \text{TPR} + \text{TNR} - 1 \end{align*} \end{minipage} \begin{center} $ \displaystyle \begin{aligned} \text{MCC} &= \frac{\text{TP} \times \text{TN} - \text{FP} \times \text{FN}} {\sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})}} \end{aligned} $ \end{center} To create the dataset, $1000$ systems were generated, with $10000$ samples of each node taken from each system. This procedure was repeated for each metric for each system, and the \gls{pdf} of each metric's accuracy is plotted using \gls{kde} to give an overview of how well each performs over a large number of different systems. \section{Results and Discussion} \label{sec:results} The metrics defined in \fref{sec:metrics} function as binary classifiers; therefore, it is reasonable to compare their effectiveness using some of the statistics common for binary classifiers noted above. The \gls{TPR} measures the proportion of connections which are correctly estimated, and the \gls{TNR} similarly measures the proportion of non-connections correctly estimated. The \gls{PPV} and \gls{NPV} measure the proportion of estimates which correctly match the known connections and non-connections. \gls{ACC} measures the likelihood of an estimate matching a known connection or non-connection. For imbalanced data sets, \gls{ACC} is not necessarily a good way of scoring the performance of these metrics, as it may give an overly optimistic score. Normalizing \gls{TP} and \gls{TN} by the numbers of samples gives the \acrlong{BACC}\cite{PREPMt}, which may provide a better score for large systems where the adjacency matrices are sparse. The \acrlong{MCC} finds the covariance between the known and estimated adjacency matrices, which may also be interpreted as a useful score of metric performance. Youden's J statistic, also known as the \acrlong{BMI}, similarly attempts to capture the performance of a binary classifier by combining the sensitivity and specificity to give the probability of an informed decision. Each statistic was calculated for each metric for each system.
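The soft confusion-matrix construction described above can be summarized in a short Python sketch (illustrative only; the toy $K$ and $E$ matrices are our own). It accumulates the soft \gls{TP}, \gls{FP}, \gls{FN} and \gls{TN} amounts over the off-diagonal entries and derives the scores, with the \gls{MCC} numerator taken as $\text{TP}\times\text{TN}-\text{FP}\times\text{FN}$.
\begin{verbatim}
import numpy as np

def scores(K, E):
    """Soft confusion-matrix scores of an estimate E against known adjacency K."""
    mask = ~np.eye(K.shape[0], dtype=bool)   # ignore tautological self-relationships
    K, E = K[mask], E[mask]
    TP = np.sum(np.minimum(K, E))
    FP = np.sum(np.minimum(1 - K, E))
    FN = np.sum(np.minimum(K, 1 - E))
    TN = np.sum(np.minimum(1 - K, 1 - E))
    TPR, TNR = TP / (TP + FN), TN / (TN + FP)
    PPV, NPV = TP / (TP + FP), TN / (TN + FN)
    ACC = (TP + TN) / (TP + FN + TN + FP)
    BACC = (TPR + TNR) / 2
    BMI = TPR + TNR - 1
    MCC = (TP * TN - FP * FN) / np.sqrt(
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return dict(TPR=TPR, TNR=TNR, PPV=PPV, NPV=NPV,
                ACC=ACC, BACC=BACC, BMI=BMI, MCC=MCC)

# Toy example: three nodes, one known connection, one fuzzy estimate.
K = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
E = np.array([[0, 0.7, 0.2], [0.7, 0, 0.1], [0.2, 0.1, 0]], dtype=float)
print(scores(K, E))
\end{verbatim}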
Given the large number of systems of various types, \glspl{pdf} of these statistics are shown in \Fref{fig:results1}, where more weight on the right-hand side towards $1.0$ indicates a better metric. \Fref{fig:relest_all_TPR} shows that $\sCov$ and $\sDep$ correctly identify the existence of around $25\%$ of existing connections, while the other metrics identify many more connections. However, \fref{fig:relest_all_TNR} shows that $\sCov$ and $\sDep$ are much more likely to correctly identify non-connections than the other metrics, especially $\sHam$ and $\sCos$. For a metric to be considered useful for detecting connections, the expected value of both \gls{PPV} and \gls{NPV} must be greater than $0$, and \gls{ACC} must be greater than $0.5$. It can be seen in \fref{fig:relest_all_NPV} that all metrics score highly for estimating negatives; i.e., when a connection does not exist they give a result close to $0$. On its own this does not carry much meaning, as a constant $0$ will always give a correct answer. Similarly, a constant $1$ will give a correct answer for positive links, so the plots in the middle and right columns must be considered together with the overall accuracy to judge the usefulness of a metric.

Given that \gls{ACC} is potentially misleading for imbalanced data sets such as this one, it is essential to check against \gls{BACC}. $\sHam$ usually has an \gls{ACC} close to $0.5$, which alone indicates that it is close to useless for detecting connections in binary \gls{soc} data. The wider peaks of $\sCos$ and $\sTmt$ in both \gls{ACC} and \gls{BACC} indicate that these metrics are much more variable in their performance than the likes of $\sCls$, $\sCov$, and $\sDep$. In this pair of plots $\sCov$ and $\sDep$ both have much more weight towards the right-hand side, which indicates that these metrics are more likely to give a good estimate of connectedness. Finally, using \fref{fig:relest_all_MCC} and \fref{fig:relest_all_BMI} as checks, it can be seen again that $\sCov$ and $\sDep$ outperform the other metrics. \gls{MCC} actually has a codomain of $[-1,1]$, though the negative side is not plotted here; given that all metrics have weight on the positive side, all of the defined metrics contain at least some information about connectedness.

The overall results indicate that $\sHam$, $\sTmt$, $\sCos$ and $\sCls$ are close to useless for detecting connections in datasets resembling the \gls{soc} data model described above. A characteristic feature employed by both $\sTmt$ and $\sCos$ is the convolution $f_x * f_y$, whereas $\sHam$ and $\sCls$ employ an absolute difference $\left| f_x - f_y \right|$. The best-performing metrics, $\sCov$ and $\sDep$, have consistently higher accuracy scores and employ both the convolution and the product of expectations ${\sEx}{\left[ f_x \right]} {\sEx}{\left[ f_y \right]}$. The simplicity of these metrics allows hints about the system function to be found quickly in an automated manner, albeit without further information about the formulation or complexity of the relationships.

Any information which can be extracted from a dataset about the workings of its system may be used to ease the work of a \gls{soc} designer. For example, putting the results into a suitable visualization provides an easy-to-consume presentation of how related a set of measurements are during a given time window.
This allows the \gls{soc} designer to make a more educated choice about the set of design parameters in order to provide a better optimized design for their chosen market.

\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_TPR.pdf}
\caption{\acrlong{TPR}. \label{fig:relest_all_TPR}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_TNR.pdf}
\caption{\acrlong{TNR}. \label{fig:relest_all_TNR}}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_PPV.pdf}
\caption{\acrlong{PPV}. \label{fig:relest_all_PPV}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_NPV.pdf}
\caption{\acrlong{NPV}. \label{fig:relest_all_NPV}}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_ACC.pdf}
\caption{\acrlong{ACC}. \label{fig:relest_all_ACC}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_BACC.pdf}
\caption{\acrlong{BACC}. \label{fig:relest_all_BACC}}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_BMI.pdf}
\caption{\acrlong{BMI}. \label{fig:relest_all_BMI}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\linewidth]{relest_all_MCC.pdf}
\caption{\acrlong{MCC}. \label{fig:relest_all_MCC}}
\end{subfigure}%
\caption{\gls{kde} plots of score \gls{pdf}s averaged across all system types. More weight on the right-hand side is always better. \label{fig:results1}}
\end{figure}

\section{Conclusion}
\label{sec:conclusion}

The formulation and rationale behind six methods of measuring similarity or correlation to estimate relationships between weighted bit vectors have been given. The given formulations may also be applied more generally to bounded data in the range $[0,1]$, though this is not explored in this paper and may be the subject of future work. Other directions of future work include testing and comparing additional metrics or designing metrics specialized for this class of \gls{soc} data.

It has been shown that methods which are common in other fields, such as the Hamming distance, Tanimoto distance, Euclidean distance, or cosine similarity, are not well suited to low-cost relationship detection when the relationships are potentially complex. This result highlights a potential pitfall for data scientists working with related binary data streams who do not consider how the underlying system is constructed.

The metrics $\sCov$ and $\sDep$ are shown to consistently estimate the existence of relationships in \gls{soc}-like data with higher accuracy than the other metrics. This result gives confidence that detection systems may employ these approaches in order to make meaningful gains in the process of optimizing \gls{soc} behavior. By using more accurate metrics, unknown relationships may be uncovered, giving the \gls{soc} designer the information they need to optimize their designs and sharpen their competitive edge. The Python code used to perform the experiments is available online\cite{relest}.

\ifdefined\ShowReferences
\newpage
\bibliographystyle{splncs04}
\section{Introduction} Since the discovery of the Higgs boson (H) in 2012~\cite{Chatrchyan:2012ufa,Chatrchyan:2013lba,Aad:2012tfa}, enormous progress has been made in measuring its properties. The mass of the Higgs boson was measured to a per-mille precision level, and its interactions with vector bosons and fermions were established in multiple decay channels. However, the Higgs boson self-interaction and the energy potential of the Higgs field are not yet measured experimentally. A broad physics program is covered in the searches for di-Higgs (HH) production at the LHC. Measuring HH production is the only direct way to access the Higgs boson trilinear self-coupling \ensuremath{\lambda_{\PH\PH\PH}}. In the standard model (SM) the Higgs boson self-coupling and the structure of the scalar Higgs field potential are fully predicted in terms of the Higgs boson mass and the Fermi coupling constant. Any deviation from the predicted shape of the scalar potential can have fundamental implications on our understanding of the origin and the fate of the universe. Therefore, measuring the Higgs boson's trilinear self-coupling is of particular importance as it allows to characterize the Higgs field potential. At the LHC, HH production is a very rare process with a total production cross section roughly three orders of magnitude smaller than the one of the single H production. However, the direct relation to the scalar potential makes HH production very sensitive to contributions from physics beyond the SM (BSM). Some BSM models predict new spin-0 and spin-2 resonances with masses varying from 250 GeV to a few TeV, and which have a sizeable branching fraction (BR) to a pair of Higgs bosons. In addition, the effects of BSM physics in the quantum loops or through modification of the SM Higgs boson couplings could significantly enhance the nonresonant HH cross section and change the kinematic properties of the HH signal. This document reviews searches for HH production by the ATLAS~\cite{Aad:2008zzm} and CMS~\cite{Chatrchyan:2008aa} Collaborations. The most recent results using the full Run-II dataset of $\sqrt{s}$ = 13 TeV pp collisions with an integrated luminosity of about 140 \ensuremath{\mathrm{fb}^{-1}} are presented, and future prospects are discussed. \section{Overview of HH searches}\label{sec:overvew} Measuring the HH production requires reconstructing the decay products of the Higgs bosons. There is a rich variety of final states to explore at the LHC, however to keep the total branching fraction high the pursued searches largely rely on the final states where at least one of the Higgs bosons decays to a pair of b quarks. Different final states are complementary, and present a trade-off between the total branching fraction and background contamination associated to a particular final state. HH production dominantly occurs via gluon-gluon fusion (ggF), with the cross section predicted in the SM of $31.05^{+1.41}_{-1.99}$ fb~\cite{Grazzini:2018bsd,Borowka:2016ehy}, calculated at next-to-next-to-leading order (NLO) with the resummation at next-to-next-to-leading-logarithm accuracy and including top-quark mass effects at NLO. The second largest production mode, vector boson fusion (VBF), has a cross section of only $1.726\pm 0.036$ fb~\cite{Dreyer:2018qbw,Liu-Sheng:2014gxa}, calculated at next-to-NNLO, but gives a unique access to the coupling between a vector boson pair and a Higgs boson pair (VVHH). 
In addition, small deviations of the VVHH coupling with respect to the SM value lead to very large enhancements in the cross section allowing to constrain this coupling already with Run-II data of the LHC. The ATLAS and CMS Collaborations performed searches for HH production via ggF in the \ensuremath{\bbbar\bbbar}, \ensuremath{\bbbar\gamma\gamma}, \ensuremath{\bbbar\tautau}, \ensuremath{\bbbar\PV\PV}, \WWWW, $\ggWW$ final states at $\sqrt{s} = 13$ TeV using part of the Run-II data sample with an integrated luminosity of about 40 \ensuremath{\mathrm{fb}^{-1}} collected in 2016. Statistical combinations of these searches~\cite{Aad:2019uzh,Sirunyan:2018ayu} were performed, and observed (expected) upper limits at 95\% confidence level (CL) were set on the signal strength of HH production with respect to the SM expectation which correspond to 6.9 (10.0) in ATLAS and 22.5 (12.8) in CMS. The Higgs boson self-coupling was also constrained, and the 95\% CL observed (expected) allowed interval for the coupling modifier $\ensuremath{\kappa_{\lambda}}$, defined as a ratio of the measured $\ensuremath{\lambda_{\PH\PH\PH}}$ value to the value predicted in the SM, is $-5.0 < \ensuremath{\kappa_{\lambda}} < 12.0$ ($-5.8 < \ensuremath{\kappa_{\lambda}} < 12.0$) in ATLAS, and $-11.8 < \ensuremath{\kappa_{\lambda}} < 18.8$ ($-7.1 < \ensuremath{\kappa_{\lambda}} < 13.6$) in CMS. The new searches using full Run-II data sample are presented in the next Section~\ref{sec:run2}. \section{Searches for HH production with full LHC Run-II data }\label{sec:run2} \subsection{Nonresonant HH production in the final state with 4 leptons and 2 b jets} The first result of the search for nonresonant HH production where one Higgs boson decays to a Z boson pair which decays to 4 leptons (\ensuremath{\mathrm{l}}), where $\ensuremath{\mathrm{l}}$ is either an electron or a muon, and the other to a pair of b jets is presented by the CMS Collaboration~\cite{CMS:2020gxr}. The final state has a very small branching fraction of only 0.014\% but has small backgrounds and exhibits a clear signature of the $4\ensuremath{\mathrm{l}}$ mass peak. A signal event is required to have a least two Z boson candidates reconstructed from the pairs of isolated electrons or muons of opposite charges, and at least two jets which were identified as having originated from b quarks using multivariate (MVA) discriminants. The signal region is defined by requiring events to pass $115 <m_{4\ensuremath{\mathrm{l}}}<135 $ GeV. The dominant background is the SM single H production, followed by genuine nonresonant $\PZ\PZ^*$ events. To separate HH signal from the background a boosted decision tree (BDT) is trained using kinematic properties of signal events, the output of which is used in a multi-dimensional binned maximum likelihood fit to data to extract the results. Upper limits at 95\% CL are set on the signal strength of HH production with respect to the SM expectation at 30 (37), and $\ensuremath{\kappa_{\lambda}}$ is constrained to be within the observed (expected) range $-9~(-10.5) <\ensuremath{\kappa_{\lambda}} < 14~(15.5)$ at 95\% CL. While the constraints from this channel are weaker than the results obtained from the combination of the other channels with partial Run-II dataset presented in Section~\ref{sec:overvew}, this clean final state will increasingly become more important with more data. 
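For reference, the coupling-modifier limits quoted above and throughout this review assume the usual normalization of the trilinear coupling; the relations below are standard and are restated here for completeness rather than taken from any individual analysis:
\begin{equation}
\ensuremath{\kappa_{\lambda}} = \frac{\ensuremath{\lambda_{\PH\PH\PH}}}{\lambda^{\mathrm{SM}}_{\PH\PH\PH}},
\qquad
\lambda^{\mathrm{SM}}_{\PH\PH\PH} = \frac{\ensuremath{m_\PH}^{2}}{2 v^{2}} \approx 0.13,
\qquad
v = \left(\sqrt{2}\, G_{\mathrm{F}}\right)^{-1/2} \approx 246~\mathrm{GeV},
\end{equation}
so that $\ensuremath{\kappa_{\lambda}} = 1$ corresponds to the SM prediction.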
\subsection{HH production via VBF in the $\ensuremath{\bbbar\bbbar}$ final state} The first result of the search for HH production via VBF in the $\ensuremath{\bbbar\bbbar}$ final state is presented by the ATLAS Collaboration~\cite{Aad:2020kub}. The VBF production mode has unique sensitivity to the VVHH coupling modifier, $\ensuremath{\kappa_{2\PV}}$, which controls the coupling strength with respect to its SM value. In addition, this analysis targets the resonant production mode, and searches for spin-0 resonances with masses in the range 260-1000 GeV considering broad-width (10-20\% of resonant mass) and narrow-width (4 MeV) hypotheses. The 4 b jets are tagged using MVA techniques, and the b jet energy resolution is improved by about 10\% with a dedicated b jet energy regression trained with a BDT. The main challenge in this search is an accurate estimation of the background dominated by multijet production. The multijet background is modelled using data events with lower b jet multiplicity and reweighting them to model events with higher b jet multiplicity. The invariant mass of the HH system is reconstructed from the 4 b jets and is used as the final discriminant to extract the signal. The observed (expected) upper limits at 95\% CL are set on the cross section for nonresonant HH production via VBF at 840 (550) times the SM prediction, and also for resonant HH production via VBF as a function of resonance mass as shown on Fig.~\ref{fig:4b_res}. The 95\% CL observed (expected) allowed interval for the coupling modifier $\ensuremath{\kappa_{2\PV}}$ is $-0.43 < \ensuremath{\kappa_{2\PV}} < 2.56$ ($-0.55 < \ensuremath{\kappa_{\lambda}} < 2.72$). \begin{figure}[h!t] \centering \includegraphics[width=0.43\textwidth]{at_4b_fig_05a.pdf} \includegraphics[width=0.43\textwidth]{at_4b_fig_05b.pdf} \caption[Observed and expected upper limits at 95\% CL on the production cross section for resonant HH production via VBF as a function of resonant mass. The (a) narrow- and (b) broad-width resonance hypotheses are shown.]{\small Observed and expected upper limits at 95\% CL on the production cross section for resonant HH production via VBF as a function of resonant mass. The (a) narrow- and (b) broad-width resonance hypotheses are shown~\cite{Aad:2020kub}.} \label{fig:4b_res} \end{figure} \subsection{Nonresonant HH production via ggF and VBF in the $\ensuremath{\bbbar\gamma\gamma}$ final state} Search for nonresonant HH production via ggF and VBF in the final states with two photons and two bottom quarks is presented by the CMS Collaboration~\cite{Sirunyan:2020xok}. This final state has a tiny branching fraction of only 0.26\%, but offers a clean signature from Higgs boson decay to a pair of photons and excellent photon energy resolution. The energy resolution of b jets is improved by about 13\% by the deep neural network (DNN)-based b jet energy regression~\cite{Sirunyan:2019wwa}, which improves the dijet invariant mass of the SM HH signal by about 20\%. The main background is nonresonant $\ensuremath{\gamma\gamma\bbbar}$ production, followed by the single Higgs boson production in association with a top quark-ant
iquark pair ($\ensuremath{\mathrm{t} \bar{\mathrm{t}} \mathrm{H}}$). The analysis is optimized to be sensitive to the SM HH production, anomalous values of $\ensuremath{\kappa_{\lambda}}$, $\ensuremath{\kappa_{2\PV}}$, and other beyond SM signals described by effective field theory. Both ggF and VBF production modes are analyzed following similar strategies: the background is suppressed using MVA techniques, and the VBF and ggF signal regions are defined based on the MVA purity and reconstructed invariant mass of the HH system. Finally, the signal is extracted from a fit to the invariant masses of the Higgs boson candidates in the $\ensuremath{\mathrm{b}\bar{\mathrm{b}}}$ and $\gamma\gamma$ final states. The observed (expected) upper limit at 95\% CL on the HH production cross section corresponds to 7.7 (5.2) times the SM prediction for the ggF mode and 225 (208) for the VBF mode . The observed (expected) constraints at 95\% CL on $\ensuremath{\kappa_{\lambda}}$ and $\ensuremath{\kappa_{2\PV}}$ are $-3.3 < \ensuremath{\kappa_{\lambda}} < 8.5$ ($-2.5 < \ensuremath{\kappa_{\lambda}} < 8.2)$ and $-1.3 < \ensuremath{\kappa_{2\PV}} < 3.5$ ($-0.9 < \ensuremath{\kappa_{2\PV}} < 3.1$) as shown in Fig.~\ref{fig:2b2g}. This search has 4 times better sensitivity than the previous analysis~\cite{Sirunyan:2018iwt} benefiting equally from larger data sample, and the innovative analysis techniques. In addition, the search was combined with an analysis that targets $\ensuremath{\mathrm{t} \bar{\mathrm{t}} \mathrm{H}}$ where Higgs boson decays to a diphoton pair~\cite{Sirunyan:2020sum}, which allowed $\ensuremath{\lambda_{\PH\PH\PH}}$ and top Yukawa coupling to be measured simultaneously, and provide constraints applicable to a wide range of theoretical models, where both couplings have anomalous values. \begin{figure}[h!t] \centering \includegraphics[width=0.43\textwidth]{CMS-HIG-19-018_Figure_010.pdf} \includegraphics[width=0.43\textwidth]{CMS-HIG-19-018_Figure_014.pdf} \caption[Expected and observed 95\% CL upper limits on the product of the HH production cross section and branching fraction into the $\ensuremath{\bbbar\gamma\gamma}$ final state obtained for different values of $\ensuremath{\kappa_{\lambda}}$ (left) and $\ensuremath{\kappa_{2\PV}}$, denoted here as $\mathrm{c}_{2\PV}$, (right).]{\small Expected and observed 95\% CL upper limits on the product of the HH production cross section and branching fraction into the $\ensuremath{\bbbar\gamma\gamma}$ final state obtained for different values of $\ensuremath{\kappa_{\lambda}}$ (left) and $\ensuremath{\kappa_{2\PV}}$, denoted here as $\mathrm{c}_{2\PV}$, (right)~\cite{Sirunyan:2020xok}.} \label{fig:2b2g} \end{figure} \subsection{Resonant HH production in the boosted $\ensuremath{\bbbar\tautau}$ final state } Search for a heavy, narrow, scalar resonance with mass in a range 1-3 TeV, produced via ggF and decaying to HH is presented by the ATLAS Collaboration~\cite{Aad:2020ldt}. The final state of interest is a boosted $\ensuremath{\mathrm{b}\bar{\mathrm{b}}}$ pair and a hadronically decaying boosted $\tau^{+}\tau^{-}$. A new technique, di-$\tau$ tagger, is developed to reconstruct boosted $\tau$ pair as a large radius jet with two sub-jets of smaller radii. For the di-$\tau$ tagger a BDT is employed to reject background of quark- and gluon-initiated jets, and the tagger efficiency is measured in data. 
In the search for HH production the main background is from multi-jet production with misidentified di-$\tau$ objects, and $\PZ(\tau^{+}\tau^{-})$+jets production. To extract the HH signal a single-bin counting experiment is performed for every considered resonance mass hypothesis. Upper limits at 95\% CL are set on HH production cross section via narrow width scalar resonance as shown in Fig.~\ref{fig:res}. \begin{figure}[h!t] \centering \includegraphics[width=0.5\textwidth]{at_res_fig_13.pdf} \caption[Expected and observed 95\% CL upper limits on the production cross section of a heavy, narrow, scalar resonance decaying into HH in the boosted $\ensuremath{\bbbar\tautau}$ final state.]{\small Expected and observed 95\% CL upper limits on the production cross section of a heavy, narrow, scalar resonance decaying into HH in the boosted $\ensuremath{\bbbar\tautau}$ final state~\cite{Aad:2020ldt}. } \label{fig:res} \end{figure} \subsection{Heavy Higgs boson decay into two lighter Higgs bosons in the $\ensuremath{\bbbar\tautau}$ final state} A new search for the decay of a heavy resonance $\mathrm{X}$ into the Higgs boson and another resonance $\mathrm{Y}$ with a mass $\mathrm{m_Y} < \mathrm{m_X} - \ensuremath{m_\PH}$ is presented by the CMS Collaboration~\cite{CMS:2021hws}. It is motivated by the next-to-minimal supersymmetric SM model (NMSSM)~\cite{Ellwanger:2009dp}, and is the first search for this signature at the LHC. The branching fractions of the $\mathrm{Y}$ resonance into SM particles are expected to be similar to the Higgs boson ones, and a promising signature of the Higgs boson decay into a pair of tau leptons and the decay of the $\mathrm{Y}$ into a pair of b quarks is considered. The dijet mas resolution is improved by the DNN-based b jet energy regression~\cite{Sirunyan:2019wwa}, the di-$\tau$ mass resolution is improved with a likelihood based method~\cite{Bianchini:2014vza}, and a kinematic fit to the $\ensuremath{\bbbar\tautau}$ system is used for each considered $\mathrm{m_Y}$ and $ \mathrm{m_X}$ mass hypothesis. A DNN multi-classifier is trained to separate the HH signal from the different types of background classified as events containing genuine $\tau$ pairs, top quark pairs, events with quark or gluon induced jets misidentified as $\tau$, and other processes not included in the previous classes. The DNN output functions for background and signal classes are used in the maximum likelihood fit for the signal extraction. The search is performed in mass ranges of $\mathrm{m_X}$ $\in$ [240 GeV, 3 TeV] and $\mathrm{m_Y}$ $\in$ [60 GeV, 2.8 TeV]. No signal is observed in any of the investigated mass combinations and model-independent upper limits at 95\% CL are set on the production cross section $\mathrm{X} \rightarrow \mathrm{Y} \ensuremath{\mathrm{H}}$ times branching fractions as shown in Fig.~\ref{fig:xyh}. \begin{figure}[h!t] \centering \includegraphics[width=0.7\textwidth]{yxh.pdf} \caption[Expected and observed 95\% CL upper limits on production cross section $\mathrm{X}\rightarrow \mathrm{Y} \ensuremath{\mathrm{H}}$ times branching fractions for all tested $\mathrm{m_Y}$ values. In this figure $\mathrm{X}\rightarrow \mathrm{Y} \ensuremath{\mathrm{H}}$ is denoted as $\mathrm{H}\rightarrow \mathrm{h(\tau^{+}\tau^{-})} \mathrm{h_{S}}(\ensuremath{\mathrm{b}\bar{\mathrm{b}}})$, where h is the observed Higgs boson, H is a heavy Higgs boson and $\mathrm{h_S}$ is another neutral Higgs boson, following the notations of NMSSM. 
The limits for each corresponding mass value have been scaled by orders of ten as indicated in the annotations.]{\small Expected and observed 95\% CL upper limits on production cross section $\mathrm{X}\rightarrow \mathrm{Y} \ensuremath{\mathrm{H}}$ times branching fractions for all tested $\mathrm{m_Y}$ values~\cite{CMS:2021hws}. In this figure $\mathrm{X}\rightarrow \mathrm{Y} \ensuremath{\mathrm{H}}$ is denoted as $\mathrm{H}\rightarrow \mathrm{h(\tau^{+}\tau^{-})} \mathrm{h_{S}}(\ensuremath{\mathrm{b}\bar{\mathrm{b}}})$, where h is the observed Higgs boson, H is a heavy Higgs boson and $\mathrm{h_S}$ is another neutral Higgs boson, following the notations of NMSSM. The limits for each corresponding mass value have been scaled by orders of ten as indicated in the annotations. } \label{fig:xyh} \end{figure} \section{Future projections} The first results from the di-Higgs searches using the full Run-II dataset presented in Section~\ref{sec:run2} showed significant analysis improvements. During the next Run-III of the LHC, which will begin in 2022, about 150 $\ensuremath{\mathrm{fb}^{-1}}$ of data are expected to be collected. With the improvements seen for the first results with the full Run-II dataset, when combining data from Run-II and Run-III we can expect to reach much better precision on HH production than was originally expected at the LHC. The next phase of LHC, the High Luminosity LHC (HL-LHC) is scheduled to start in 2027 and collect between 3000-4000 $\ensuremath{\mathrm{fb}^{-1}}$ of data with $\sqrt{s}=14$ TeV over more than a decade of operation. The HL-LHC physics prospects summary~\cite{Dainese:2019rgk} shows that the expected sensitivity of a combined ATLAS and CMS HH measurement is on the threshold of a discovery with the expected precision on $\ensuremath{\kappa_{\lambda}}$ of 50\%. \section{Summary} Searches for di-Higgs production at the LHC using the full Run-II dataset were presented. The new results benefited not only from the larger collected datasets, but also from a wealth of innovative analysis techniques, and showed exploration of rare channels and new signatures as well a probe of the VBF production mode and the anomalous VVHH coupling. Numerous beyond SM hypotheses and coupling modifications were explored in the context of resonant and nonresonant Higgs boson pair production. While all the results are so far consistent with the SM predictions, the exploration of the HH production at the LHC has just started. Many new HH results with the full Run-II data will follow in the near future, and we have very exciting prospects for the future data taking runs of the LHC. \section*{References}
\section{Introduction} \qquad There is a great deal of field theory models describing a system in quenched random fields or coupling constants (\cite{Mezard}, \cite{Dotsenko}, \cite{CFT}, {\it etc}.). In solid state physics such models naturally arise the corresponding pure systems whenever impurities are introduced. It is interesting to extend randomness to other well-studied field theories, just as, for example in \cite{CFT}, disorder was implemented into minimal conformal models. As shown in \cite{Parisi} and subsequent papers stochastic equations as well as the field theories in presence of random external sources often prove to possess some hidden supersymmetry. Kurchan \cite{Kurchan} indorsed this result for spin glass dynamics. Because supersymmetry handle perturbative corrections, such random theories are especially interesting. This is what we will do in this paper. On the other hand in field theories with apparent space-time supersymmetry superpotential is holomorphic function not only of fields but also of coupling constants \cite{Seiberg}. Therefore couplings and fields enter potential on equal footing, so that it seems very natural to introduce random (gaussian) distribution of some couplings in the Lagrangian. But the power of supersymmetry is so strong that superpotential gets no quantum corrections \cite{Seiberg}, \cite{Intr}, i.e. provided that the coupling has no dynamical D-terms integrating it over solves the problem. In Section 2 we formulate four-dimensional supersymmetric Wess-Zumino theory in random field. In the context of replica method infrared fixed points of one-loop $\beta$-functions are found in Section 3. Analysis of these fixed points suggests two phases on the moduli space ${\cal M}=RP^2$. Numerical evaluation of the most general expressions is eventuated in the phase diagram which is illustrated by two simple examples of the fourth section. Section 5 is devoted to discussions and conclusions. \section{Wess-Zumino model perturbed by randomness} \qquad {}From above arguments it follows that the SUSY analog of a theory with disorder must contain dynamical terms for the random field. In the present paper we consider a four-dimensional Wess-Zumino model that is the supersymmetric counterpart of $\varphi^4$-model (the both theories are defined in the same critical dimension and the scalar potential after integrating auxiliary field in the former model is actually $\varphi^4$). Since, according to \cite{Intr}, Wess-Zumino theory is defined only as a low-energy field theory, we will study the Wilsonian effective action by integrating fast modes with momentum $\Lambda'<P< \Lambda$. Thereby, let us define a chiral superfield $\Phi=\varphi + \theta \psi + \theta^2 F$ and a random superfield $H$. In this notations the original action is\footnote{For the sake of simplicity the mass terms are omitted.}: \begin{eqnarray} S=\int d^4x d^2\theta d^2 \bar \theta (g \Phi^{+} \Phi - \Phi^{+}H-H^{+}\Phi+{1 \over u}H^{+}H) +\nonumber \\ + {1 \over 3!} \int d^4x d^2\theta ( \lambda_1' \Phi H^2 + \lambda_2' \Phi^2 H + \lambda_3' \Phi^3 + \lambda_4' H^3) + h.c. \label{action} \end{eqnarray} This action admits the following treatment. It may be obtained (for the certain set of parameters) from the usual Wess-Zumino action by the replacement $\Phi \rightarrow \Phi + H$, as one usually does in summation over local extremes \cite{Dotsenko}. One of the most powerful method to deal with random fields is the replica trick \cite{Mezard}, which we will use here to solve this "toy" model. 
It reduces to introducing $n$ copies (replicas) of our system, integrating $H$ field out, then solving $n$-replica problem and taking $n=0$ at the end of calculations. After replication the action (\ref{action}) takes the form : \begin{eqnarray} S=\int d^4x d^2\theta d^2 \bar \theta [ \sum_{a=1}^n (g \Phi_a^{+} \Phi_a - \Phi_a^{+}H - H^{+}\Phi_a)+ {1 \over u}H^{+}H] +\nonumber \\ +{1 \over 3!} \int d^4x d^2\theta [ \sum_{a=1}^n (\lambda_1' \Phi_a H^2 + \lambda_2' \Phi_a^2 H + \lambda_3' \Phi_a^3) + \lambda_4' H^3] + h.c. \label{replact} \end{eqnarray} As will be shown later the model depends only on the relative values of lambdas, so that one can put them small enough to determine $H$ field from the saddle point equation on D-term only: \begin{equation} H=u \sum_{a=1}^n \Phi_a \qquad and \qquad H^{+}=u \sum_{a=1}^n \Phi_a^{+} \label{h} \end{equation} Substituting it back into (\ref{replact}) yields: \begin{eqnarray} \begin{array}{c} S= \sum_{a,b=1}^n \int d^4x d^2\theta d^2 \bar \theta g_{ab} \Phi_a^{+} \Phi_b + \label{act} \\ +{1 \over 3!} \int d^4x d^2\theta ( \sum_{a,b,c=1}^n \lambda_1 \Phi_a \Phi_b \Phi_c + \sum_{a,b=1}^n \lambda_2 \Phi_a^2 \Phi_b + \sum_{a=1}^n \lambda_3 \Phi_a^3 ) + h.c. \end{array} \end{eqnarray} where $g_{aa}=g+3u$, $g_{a \ne b}=3u$ and three types of vertexes $\lambda_1=\lambda_1' u^2+\lambda_4' u^3$, $\lambda_2=\lambda_2' u$, $\lambda_3=\lambda_3'$ band differently replica indices. It is the action (\ref{act}) that we are going to study. \section{Fixed points of $\beta$-functions} \qquad Renormalisation group (RG) equations for $g_{ab}$ easily follow from the one-loop diagram for the pure Wess-Zumino theory \cite{Wess}: \begin{eqnarray} {dg_{ab} \over d \ln{\Lambda}} = {1 \over 288 \pi^2} \{9\lambda_3^2 g_{ab}^2 + 2\lambda_2^2\sum_{c,d=1}^n [(g_{ac}+ g_{bc})g_{cd}+g_{ac}g_{bd}]+ \nonumber \\ +3\lambda_2 \lambda_3\left[ \sum_{c=1}^n (g_{ac}^2+g_{bc}^2) + 2 g_{ab} \sum_{c=1}^n (g_{ac} + g_{bc}) \right] + 9 \lambda_1 \lambda_3 \sum_{c,d=1}^n (g_{ac}g_{ad}+g_{bc}g_{bd})\} \label{beta} \end{eqnarray} Taking into account the possible replica symmetry breaking we take the Parisi ansatz for $g_{ab}$ \cite{Mezard}: off-diagonal part of $g_{ab}$ is parametrized by internal function $g(x)$ defined on a unite interval $x \in [ 0,1 ]$ and diagonal part is $g_{aa} = \tilde g$. Replica-symmetric case is obtained by putting $g(x)=g=constant$. 
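Before introducing the general Parisi algebra, it is worth recording the replica-symmetric benchmark that the $n \rightarrow 0$ prescriptions used below should reproduce (this consistency check is added here and is not part of the original derivation). For $g_{aa}=\tilde g$ and $g_{a \ne b}=g$ one has
\begin{eqnarray}
\sum_{c=1}^n g_{ac} = \tilde g + (n-1) g \rightarrow \tilde g - g, \qquad
\sum_{c=1}^n g_{ac}^2 = \tilde g^2 + (n-1) g^2 \rightarrow \tilde g^2 - g^2 \qquad (n \rightarrow 0), \nonumber
\end{eqnarray}
in agreement with the limits $\tilde g - \bar g$ and $\tilde g^2 - \bar{g^2}$ quoted below, since $\bar g = g$ and $\bar{g^2}=g^2$ in the replica-symmetric case.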
Algebra of Parisi matrices ${\bf a}= (\tilde a, a(x))$ is defined by the multiplication rule \cite{Mezard}: \begin{eqnarray} {\bf c}={\bf ab}: \qquad \tilde c= \tilde a \tilde b - \int^1_0 dx a(x)b(x) \nonumber \\ c(x)=b(x) [\tilde a - \int^1_0 dx a(y) ] + a(x) [\tilde b - \int^1_0 dx b(y) ] -\label{mult} \\ -\int^x_0 dy (a(x)-a(y))(b(x)-b(y)) \nonumber \end{eqnarray} By means of this rule we get sums over replica indices that appear in (\ref{beta}) in the $n \rightarrow 0$ limit: \begin{equation} \sum_{b=1}^n g_{ac} \rightarrow \tilde g - \bar g \qquad \sum_{c,d=1}^n g_{ac} g_{cd} \rightarrow (\tilde g - \bar g)^2 \qquad \sum_{b=1}^n g^2_{ac} \rightarrow \tilde g^2 - \bar{g^2} \nonumber \end{equation} where \begin{equation} \bar g = \int^1_0 dx g(x) \qquad and \qquad \bar{ g^2} = \int^1_0 dx g^2(x) \nonumber \end{equation} A question arises, as usual in spin glass theory, to find infrared (IR) fixed points\footnote{The points where $\beta$-functions vanish.} of (\ref{beta}) which determine the dynamics of the system: \begin{eqnarray} {3 \over 2}\lambda_3^2 \tilde g^2 + (\lambda_2^2+3\lambda_1 \lambda_3) (\tilde g - \bar g)^2 + \lambda_2 \lambda_3 [2 \tilde g (\tilde g - \bar g) + \tilde g^2 - \bar{g^2}]=0 \label{rg} \\ {3 \over 2}\lambda_3^2 g^2(x) + (\lambda_2^2+3\lambda_1 \lambda_3) (\tilde g - \bar g)^2 + \lambda_2 \lambda_3 [2 g(x) (\tilde g - \bar g) + \tilde g^2 - \bar{g^2}]=0 \label{rgg} \end{eqnarray} For example, the $\lambda_2^2$-term is produced by the two non-vanishing (with the number of replicas) diagrams shown on Fig.1. \begin{figure} \epsfxsize 400pt \epsffile{loops.ps} \caption{Surviving (in the $n \rightarrow 0$ limit) $\lambda_2^2$-contributions.} \end{figure} These equations have two remarkable properties: they are homogeneous in $\lambda$ and $g$, i.e. depend only on the squares of the both. Such dependence on $\lambda$ tells us that zeroes of beta-functions (\ref{rg})-(\ref{rgg}) do not depend
on the values of the couplings themselves, but only on their mutual ratios, so that the moduli space of the theory is $RP^2$ instead of $R^3= \{ \lambda_1,\lambda_2,\lambda_3 \}$. Therefore, without loss of generality, we may put couplings very small keeping their ratios fixed. In this limit the results that we are going to obtain are exact. Moreover, in what follows we will assume $\lambda_3 \ne 0$, so that we can choose it to be $\lambda_3=1$ and denote $\lambda_2=\lambda$ and $\lambda_1=\mu$ (affined map). \footnote{If $\lambda_3 \ne 1$ then the right parameters are $\lambda = {\lambda_2 \over \lambda_3}$ and $\mu = {\lambda_1 \over \lambda_3}$.} The special case of $\lambda_3=0$ will be studied in the first example of Section 4. Quadratic dependence on $g$ in (\ref{rgg}) means that for each set of general characteristics, such as $\bar g$, $\bar{g^2}$ and $\tilde g$, there are only two possible values $g_{1,2}$ (if any) which the function $g(x)$ can take in a IR-fixed point. Moreover, the same must be true for $\tilde g$ because formally it also satisfies similar equation (\ref{rg}). We are free to chose $\tilde g = g_1$, for instance. Let us denote the measure of points on a unite interval of $x$ where $g(x)=g_1$ as $1-x_0$ and the measure of points where $g(x)=g_2$ as $x_0$. For example, it may be a stepwise distribution: \begin{eqnarray} g(x)=\left\{ \begin{array}{ccc} g_1, \qquad x_0<x<1 \\ g_2, \qquad 0<x<x_0 \label{param} \end{array} \right. \end{eqnarray} Thus we have two equations (\ref{rg})-(\ref{rgg}) on three quantities $g_{1,2}$ and $x_0$ with $\bar g$ and $\bar{g^2}$ depending on them. If $g_1$ and $g_2$ are not simultaneously equal to zero\footnote{Otherwise we get a trivial replica-symmetric fixed point.} then, actually, we have only two unknowns: $x_0$ and the ratio $p={g_2 \over g_1}$. In this notations (\ref{rg})-(\ref{rgg}) may be rewritten as: \begin{eqnarray} \left\{ \begin{array}{ccc} 1+({2\over3} \lambda^2 +2 \mu) x_0^2 (1-p)^2 + {2 \over 3} x_0 \lambda \left[ 2(1-p) + (1-p^2) \right] = 0 \label{last} \\ p^2+({2\over3} \lambda^2 +2 \mu) x_0^2 (1-p)^2 + {2 \over 3} x_0 \lambda \left[ 2p(1-p) + (1-p^2) \right] = 0 \end{array} \right. \end{eqnarray} determining both $p$ and $x_0$, and, consequently, the phase of the system. Curiously enough, for a given solution $p$ and $x_0$ we get the whole set of RG-fixed points $\{ \tilde g, g(x)\}$ differing in arbitrary factor. Of course, this degeneracy will be removed by higher loop corrections, so that particular value of the fixed point will be determined by the full perturbative expansion. At the one-loop approximation, the explicit data $(\tilde g, g(x))$ in the fixed point may be determined by the initial conditions $g$ and $u$. If for some set of couplings there is no solution to (\ref{last}) except the trivial one $\tilde g= g(x)=0$, we will refer to this point on the phase space $\{ \lambda, \mu \} \in {\cal M}=RP^2$ as a replica-symmetric point and will denote the corresponding phase "RS". Otherwise, replica symmetry is broken with $x_0$ being the solution of (\ref{last}), and the corresponding phase "RSB" looks like a spin glass system. Since (\ref{last}) must be solved by the same $p$, equating the solutions to each equation we get the relation between $x_0$ and $\{ \lambda, \mu\} \in {\cal M}$. 
Instead of writing the resulting complicated formulae (partly because it can not be resolved relatively $x_0$), we display it for $x_0=1$: \begin{equation} \frac{\lambda^2+3\mu+\lambda \pm \sqrt{{5 \over 2}\lambda^2 - {9 \over 2} \mu + {3 \over 2} \lambda}}{\lambda^2 +3 \mu -\lambda} = \frac{\lambda^2+3\mu-\lambda \pm \sqrt{{5 \over 2}\lambda^2 - {9 \over 2} \mu -{3 \over 2} \lambda}}{{3\over 2}+\lambda^2 +3 \mu - 3 \lambda} \label{gen} \end{equation} where sings in the both parts are taken independently. Replacing $\lambda \rightarrow x_0 \lambda$ and $\mu \rightarrow x_0^2 \mu$, we get the equation (\ref{gen}) for arbitrary $x_0$. This expression describes (part of) a curve in ${\cal M}$ that separates RS and RSB phases as shown on Fig.2. The shaded region indicate replica-symmetric phase and the unshaded region corresponds to replica symmetry breaking where there is a non-trivial solution to (\ref{gen}), and the trivial point $\tilde g = g(x) =0$ becomes unstable, as will be descanted in the second example of the next section. \begin{figure} \epsfxsize 400pt \epsffile{phases.ps} \caption{The Phase Diagram (not drawn in scale).} \end{figure} \section{Two simple examples} \begin{itemize} \item $\lambda_3=0$ \\ In this case the beta-functions (\ref{beta}) become \begin{eqnarray} {d \tilde g \over d \ln{\Lambda}} = {1 \over 48 \pi^2} \lambda_2^2(\tilde g - \bar g)^2 \nonumber \\ {d g(x) \over d \ln{\Lambda}} = {1 \over 48 \pi^2} \lambda_2^2(\tilde g - \bar g)^2 \nonumber \end{eqnarray} These equations may be easily integrated with the result: \begin{eqnarray} \tilde g_{\Lambda} = \tilde g_{ 0,\Lambda' } + {A \over 48 \pi^2}\lambda_2^2 \ln{{\Lambda \over \Lambda' }} \nonumber \\ g_{\Lambda} (x) = g_{ 0, \Lambda' } (x) + {A \over 48 \pi^2} \lambda_2^2 \ln{{\Lambda \over \Lambda'}} \nonumber \end{eqnarray} where a constant $A=(\tilde g - \bar g)^2$ is determined by initial conditions and remains unchanged during renormalisation group flow. Since for any $\lambda_2$ the only fixed point is $\tilde g = g(x)=0$, this phase is always replica-symmetric and is not as interesting as others. \\ \item $\lambda_2=0 \leftrightarrow \lambda=0$ \\ Equations (\ref{rg})-(\ref{rgg}) take the form: \begin{eqnarray} \left\{ \begin{array}{ccc} \tilde g^2 + 2 \mu (\tilde g - \bar g)^2 =0 \label{r20} \\ g^2 (x) + 2 \mu (\tilde g - \bar g)^2 =0 \label{r201} \end{array} \right. \end{eqnarray} for which $g_{1,2}= \pm g$ for some $g \ne 0$ in the SG phase. In parametrization (\ref{param}) \begin{equation} \bar g = (g_1-g_2)x_0 = 2gx_0 \qquad and \qquad \bar g^2 = (g_1^2-g_2^2)x_0 = 2g^2 x_0 \end{equation} Substitution it into (\ref{r20}) yields a nontrivial solution: \begin{equation} -8 \mu x_0^2 =1 \qquad or \qquad x_0 = {1 \over \sqrt{-8 \mu}} \label{s20} \end{equation} which exists only for $\mu < - {1 \over 8}$. It is the range of $\mu$ where the RSB phase can be found. Let us emphasize that exactly for these points in ${\cal M}$ the trivial fixed point $\tilde g = g(x)=0$ becomes unstable, for example, with respect to perturbations in $\tilde g$. To see this consider $\tilde g= \epsilon$: \begin{equation} {d \epsilon \over d \ln{\Lambda}} = \alpha \epsilon^2 \nonumber \\ \end{equation} where $\alpha<0$ if (\ref{s20}) is true (i.e. arbitrary small $\epsilon$ increases during the flow to low energies). This simple case illustrates the behavior of general system (\ref{last}). On the phase diagram it corresponds to $\mu$ axis where the both RS and RSB phases exist. 
\end{itemize}

\section{Summary}
\qquad Having started from the (space-time) supersymmetric Wess-Zumino model in a random and quenched background (\ref{action}), we have found that the renormalisation group equations (\ref{beta}) at a fixed point are quadratic homogeneous equations in the couplings and in $g$. The former property allowed us to take the couplings very small as well as to reduce the moduli space to ${\cal M}=RP^2$. There are two types of points (phases) on this moduli space, with replica symmetry either broken or unbroken. Though we have found all IR fixed points of the one-loop $\beta$-function, the stability of the nontrivial fixed points and the analytic RG flow towards them remain unexplored. Finally, it would be interesting to generalize this analysis to more complex supersymmetric theories as well as to find realistic models whose critical behavior corresponds to such theories.
\vskip5mm
\section*{Acknowledgments}
\qquad I would like to thank D.~Ivanov, A.~Marshakov and A.~Mironov for the wholehearted atmosphere and helpful conversations. I am especially indebted to A.~Morozov and I.~Polyubin for their stimulating suggestions and comments. I am also grateful to Vik.~Dotsenko, from whom I first learned what a spin glass is, and with whom many of the ideas presented here have been discussed. This work was supported in part by RFBR grant No. 96-15-96939.
\newpage
\section{Introduction} \label{sec:intro} \begin{figure}[th] \centering \includegraphics[width=\linewidth]{images/architecture.png} \vspace{0.5pt} \caption{A diagram of our proposed method. We add a new end-to-end trainable branch to the network (proxy head $\mathcal{H}$) that projects highly dimensional vectors $\mathbf{x}_i$ into very compact representations $\mathbf{z}_i$ ; we use the latter to compute one proxy descriptor $\mathbf{c}_i$ for each place in the mini-batch. We detach each proxy from the computation graph and cache it into a memory bank $\Omega$. Then, at the begining of each epoch, we construct an index upon $\Omega$, in which places are gathered together according to the similarity of their proxies. This index is used to sample mini-batches containing similar places, which yields highly informative pairs or triplets. We call this strategy Global Proxy-based Hard Mining (GPM).} \label{fig:arch} \end{figure} Visual place recognition (VPR) consists of determining the location of a place depicted in a query image by comparing it to a database of previously visited places with known geo-references. This is of major importance for many robotics and computer vision tasks, such as autonomous driving~\cite{chowdhary2013gps, maddern20171}, SLAM~\cite{milford2012seqslam, engel2014lsd}, image geo-localization~\cite{baik2020domain, hausler2021patch, wang2022transvpr} and 3D reconstruction~\cite{cieslewski2016point, sattler2017large}. Recently, advances in deep learning~\cite{menghani2021efficient} have made retrieval-based place recognition a preferable choice for efficient and large-scale localization. Current VPR techniques \cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, thoma2020soft, zhu2020regional, hausler2021patch, wang2022transvpr} use metric learning loss functions to train deep neural networks for VPR. These loss functions operate on the relationships between images in a mini-batch. As such, representations of images from the same place are brought closer and those from different places are distanced~\cite{musgrave2020metric}. For instance, in the most used architecture for VPR, NetVLAD~\cite{arandjelovic2016netvlad, liu2019stochastic, warburg2020mapillary, hausler2021patch, wang2022transvpr}, the network is trained using a triplet ranking loss function that operates on triplets, each of which consists of a query image, a positive image depicting the same place as the query, and a negative image depicting a different place. Moreover, the triples need to be informative in order for the network to converge~\cite{hermans2017defense}, meaning that for each query, the negative must be hard for the network to distinguish from the positive. To do so, these techniques rely on offline hard negative mining, where every image representation generated by the network is kept in a memory bank (cache), to be used offline (out of the training loop) to find the hardest negatives for each training query. Although offline mining allows the network to converge~\cite{warburg2020mapillary}, it involves a large memory footprint and computational overhead. Another approach for informative example mining is online hard negative mining (OHM)~\cite{hermans2017defense, wu2017sampling}, which consists of first forming mini-batches, by randomly selecting a subset of places from the dataset and sampling images from each of them. Then, in a later stage of the forward pass, select only the most informative triples (or pairs) present in the mini-batch and use them to compute the loss. 
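To illustrate this selection step, the following is a minimal PyTorch sketch of the widely used batch-hard variant of OHM~\cite{hermans2017defense}; the tensor names and the margin value are illustrative assumptions, and this is not the training code of any particular VPR method.

\begin{verbatim}
import torch

def batch_hard_triplet_loss(embeddings, place_ids, margin=0.1):
    """For every anchor in the mini-batch, keep only its hardest positive
    (same place, largest distance) and hardest negative (different place,
    smallest distance), then apply the triplet margin loss.

    embeddings: (B, d) image descriptors; place_ids: (B,) integer labels.
    """
    dist = torch.cdist(embeddings, embeddings)          # (B, B) distances
    same = place_ids.unsqueeze(0) == place_ids.unsqueeze(1)
    diag = torch.eye(len(place_ids), dtype=torch.bool,
                     device=dist.device)

    pos_mask = same & ~diag                             # positives, not self
    neg_mask = ~same                                    # negatives

    hardest_pos = (dist * pos_mask).max(dim=1).values   # farthest positive
    hardest_neg = (dist + (~neg_mask) * 1e9).min(dim=1).values  # closest negative

    return torch.relu(hardest_pos + margin - hardest_neg).mean()
\end{verbatim}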
Nevertheless, randomly constructed mini-batches can generate a large number of triplets (or pairs), most of which may be uninformative~\cite{hermans2017defense}. Yet selecting informative samples is crucial to robust feature learning~\cite{musgrave2020metric}. The advantage of OHM is that there is no memory bank (cache) and no out-of-the-loop mining step. However, as training progresses and the network eventually learns robust representations, the fraction of informative triplets (or pairs) within the randomly sampled mini-batches becomes limited (i.e., the network becomes good at distinguishing hard negatives). Therefore, it's recommended to use very large batch sizes~\cite{hermans2017defense} to potentially increase the presence of hard examples at each iteration. In this work, we propose a new globally informed mini-batch sampling technique, which instead of randomly sampling places at each iteration, it uses a proxy index to construct mini-batches containing visually similar places. The main idea behind our technique is the following: instead of caching highly dimensional individual image descriptors to mine hard negatives, we propose to add an auxiliary branch that computes compact place-specific representations that we call proxies. Thus, each place in the dataset can be globally represented by one low-dimensional proxy that can be effectively cached during the training. This allows us to build an index in which places are gathered in the same mini-batch according to the similarity of their proxies. Our technique involves negligible computational and memory overhead, while drastically improving performance. \section{Related Work} \label{sec:related} \subsection{Visual Place Recognition}\label{ssec:vpr} Most state-of
-the-art techniques in VPR~\cite{arandjelovic2016netvlad, liu2019stochastic, seymour2019semantically, warburg2020mapillary, kim2017learned, liu2020digging, hausler2021patch, wang2022transvpr} train the network with mini-batches of triplets of images. Such techniques employ offline hard negative mining to form informative triplets. This is done by storing in a memory cache all image representations generated during the training, and using $k$-NN to retrieve, for each training query, the hardest negatives among all references in the cache and form informative triplets (the hard negatives are the images that do not depict the same place as the query but are too close to it in the representation space). However, most SOTA methods generate highly dimensional representations during the training phase, for instance, techniques that rely on NetVLAD~\cite{arandjelovic2016netvlad} generate descriptors of size $d = 32768$. As a result, caching representations when training with large datasets such as Mapillary~SLS~\cite{warburg2020mapillary} or GSV-Cities~\cite{ali2022gsv} quickly becomes infeasible, because of both the computational overhead and the memory footprint of $k$-NN, which has a computational complexity of $\mathcal{O}(QRd)$ and a memory footprint of $\mathcal{O}(Rd)$~\cite{cunningham2021k}, where $R$ is the number of reference samples (cached representations), $d$ the dimensionality of each sample, and $Q$ is the number of queries to be searched. In \cite{thoma2020soft, arandjelovic2016netvlad, liu2019stochastic} the representations of all the training examples of Pitt250k dataset are cached. Then, after a fixed number of iterations, the training is paused and the cache is used to mine the hardest $10$ negatives for each training query (to form hard triplets). Importantly, the cache is recalculated every $250$ to $1000$ iterations. Warburg \emph{et al}\bmvaOneDot~\cite{warburg2020mapillary} trained NetVLAD on Mapillary-SLS, which is a dataset comprising $1.6$M images. Faced with the huge memory overhead, they used a subcaching strategy, where only a subset of the training images are cached, from which the hard negatives were periodically mined. Note that, if the NetVLAD representations of all images in MSLS dataset~\cite{warburg2020mapillary} were cached, the memory cache would be $196$GB in size. From the above, it is evident that the extra memory and computational cost of offline hard mining for VPR remains an issue to be addressed. \subsection{Deep Metric Learning}\label{ssec:dml} Place recognition networks are generally trained using ranking loss functions issued from deep metric learning~\cite{zhang2021visual}, such as triplet ranking loss~\cite{schroff2015facenet} and contrastive loss~\cite{thoma2020soft}. However, during the training, deep metric learning (DML) networks often generate very compact representations compared to VPR, ranging from $d =128$ to $d=512$~\cite{chen2021deep}. This makes any caching mechanism much less greedy and computationally inexpensive. Related to our work are DML approaches~\cite{ge2018deep, smirnov2018hard} that perform negative mining on class-level representations (a class could be regarded as the equivalent of a place in VPR), under the assumption that class-level similarity is a good approximation of the similarity between instances. Smirnov~\emph{et al}\bmvaOneDot~\cite{ge2018deep} developed a technique that constructs a hierarchical tree for the triplet loss function. 
The strategy behind their approach is to store class-level representations during the training, identify neighbouring classes and put them in the same mini-batch, resulting in more informative mini-batches that can be further exploited by online hard mining. Applying these techniques directly to train VPR networks would require caching highly dimensional image-level representations (e.g. $32$K for NetVLAD), which is not feasible when the training dataset contains thousands of different places.

\section{Methodology}
\label{sec:method}

As mentioned above, VPR techniques generate highly dimensional representations, making caching and hard mining with $k$-NN impractical for large-scale datasets. The complexity of $k$-NN depends linearly on the number of references $R$ that need to be cached and on their dimensionality $d$~\cite{cunningham2021k}, and the only purpose of the caching mechanism is to help retrieve hard examples. We therefore propose to feed the highly dimensional pooling representations (e.g. the resulting NetVLAD representations) into a separate branch ($\mathcal{H}$ in figure~\ref{fig:arch}) that we call the \textit{proxy head}. $\mathcal{H}$ is an end-to-end trainable module that learns place-specific compact vectors of significantly smaller dimension than the output of the pooling module. During each epoch, we capture and cache the semantics of each place (instead of each image) with one compact vector, acting as its global proxy. Therefore, the number of proxies to be cached is one order of magnitude smaller than the number of images in the training dataset.
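A minimal PyTorch sketch of such a proxy head and of the per-place proxy caching is given below; the single linear projection, the dimensions, and the helper names are illustrative assumptions rather than the exact architecture.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyHead(nn.Module):
    """Auxiliary branch mapping a high-dimensional pooled descriptor
    (e.g. a ~32k-D NetVLAD vector) to a compact, L2-normalized code."""
    def __init__(self, in_dim=32768, proxy_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, proxy_dim)

    def forward(self, x):                    # x: (B, in_dim)
        return F.normalize(self.proj(x), dim=1)

def cache_proxies(bank, place_ids, z):
    """Cache one proxy per place: average (and detach) the compact codes
    of all images of that place present in the current mini-batch."""
    for pid in place_ids.unique():
        bank[int(pid)] = z[place_ids == pid].mean(dim=0).detach().cpu()

# At the start of each epoch, a k-NN index built over the cached proxies
# in `bank` can be used to group visually similar places into the same
# mini-batch, as described above.
\end{verbatim}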
{R}, \label{31} \end{equation} see Figure \ref{pic7}, left. The restriction of the field \eqref{12} to every invariant leaf \eqref{31} is a node {\rm (}for $D_n${\rm )} or a focus {\rm (}for $D_f${\rm )} with spectrum $\varepsilon_1,\varepsilon_2$, and the geodesics $\gamma_{\alpha, \beta}^+ \in \Gam_0$ are projections of the integral curves of \eqref{12} on the leaves \eqref{31} to the $(x,y)$-plane {\rm (Figure \ref{pic7}, right)}. The geodesics $\gamma_{\alpha, \beta}^+ \in \Gam_0$ are timelike if $\alpha<0$, spacelike if $\alpha>0$ and isotropic if $\alpha=0$. \end{theorem} \begin{figure}[ht] \begin{center} \includegraphics[height=5.2cm]{pic7.eps} \end{center} \vspace{-1ex} \caption{The cases $D_n$ and $D_f$: invariant foliation of the field \eqref{12} (left), integral curves of the field \eqref{12} on an invariant leaf and their projection to the $(x,y)$-plane in the cases $D_n$ (center) and $D_f$ (right). } \label{pic7} \end{figure} \begin{proof} We start with the question of linearizability of the germ \eqref{16} at the origin. In the case $D_n$, the germ \eqref{16} is smoothly equivalent to the linear normal form \eqref{24} with $0 < \varepsilon_{1}, \varepsilon_{2} < \frac{1}{2}$ (Theorem~\ref{PT1} in Appendix~A), and the conjugating diffeomorphism has the form \eqref{25}. In the case $D_f$, Theorem~\ref{PT1} does not apply because the spectrum $(1, \varepsilon_1,\varepsilon_2)$ with \begin{equation} \varepsilon_{1}=a +bi,\ \ \varepsilon_2 = a - i b, \ \ a = \tfrac{1}{4}, \ \ b= \tfrac{1}{4}\sqrt{16\varepsilon - 1}, \ \ \varepsilon = \varepsilon_{1}\varepsilon_{2} > \tfrac{1}{16}, \label{32} \end{equation} has the resonance $2(\varepsilon_1+\varepsilon_2) = 1$ of the order $|s|=4$ impeding the linearizability in the $C^4$-smooth category. Consider along with \eqref{10} the resonances obtained by taking the real part of both sides of \eqref{10}. The spectrum \eqref{32} has five resonances of this type of order greater than one: $\operatorname{Re} (s_1 \varepsilon_1 + s_2 \varepsilon_2) = 1$, $|s|=s_1+s_2=4$. Thus in the case $D_f$, the germ \eqref{16} satisfies the condition of Theorem~\ref{PT3} (Appendix~A) with $k=1$, and consequently, it is $C^1$-smoothly equivalent to \begin{equation} \xi \partial_{\xi} + (a \eta + b \zeta) \partial_{\eta} + (a \zeta - b\eta) \partial_{\zeta} \label{33} \end{equation} with $a,b$ as in \eqref{32}. Thus in the case $D_n$ (resp. $D_f$) the germ \eqref{16} is ${C^{\infty}}$ (resp. $C^1$) smoothly equivalent to the linear normal form \eqref{24} (resp. \eqref{33}), and the conjugating diffeomorphism has the form \begin{equation} \begin{aligned} & x = \eta \varphi_1 + \zeta \varphi_2, \ \ p = \eta \varphi_3 + \zeta \varphi_4, \ \ v = \xi, \\ & \eta = x \psi_1 + p \psi_2, \ \ \zeta = x \psi_3 + p \psi_4, \ \ \xi = v, \end{aligned} \label{34} \end{equation} for some ${C^{\infty}}$ (resp. $C^1$) smooth functions $\varphi_i=\varphi_i (\xi,\eta,\zeta)$ and $\psi_i=\psi_i(x,v,p)$. By Lemma \ref{PL2} (Appendix~A), equations \begin{align} \label{35} \xi &= \alpha (|\eta|^{1/\varepsilon_1} + |\zeta|^{1/\varepsilon_2}), \\ \label{36} \xi &= \alpha (\eta^2+\zeta^2)^2, \end{align} where $\alpha \in \mathbb{R} P^1$, define invariant foliations of the linear fields \eqref{24} and \eqref{33}, respectively. Here $\alpha=\infty$ gives the $\xi$-axis in the normal coordinates, i.e., the $v$-axis in the $(x,v,p)$-space, and consequently, the origin in the $(x,y,p)$-space. Hence we shall consider both equations \eqref{35} and \eqref{36} with $\alpha \in \mathbb{R}$ only. 
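For instance, the invariance of \eqref{36} under the focus normal form \eqref{33} can be checked directly (a short verification added here for completeness). Differentiating along \eqref{33} gives
\begin{align*}
\frac{d}{dt}(\eta^2+\zeta^2) &= 2\eta(a\eta+b\zeta) + 2\zeta(a\zeta-b\eta) = 2a(\eta^2+\zeta^2), \\
\frac{d}{dt}(\eta^2+\zeta^2)^2 &= 4a(\eta^2+\zeta^2)^2 = (\eta^2+\zeta^2)^2 \qquad (4a=1),
\end{align*}
while $\dot \xi = \xi$. Hence the function $\xi - \alpha(\eta^2+\zeta^2)^2$ is reproduced under differentiation along \eqref{33}, and its zero set \eqref{36} is invariant; equation \eqref{35} is treated analogously.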
Substituting expressions \eqref{34} for $\xi, \eta, \zeta$ in \eqref{35} and \eqref{36}, gives the invariant foliation of the field \eqref{16} in the case $D_n$ and $D_f$ respectively: \begin{align} \label{37} v &= \alpha \bigl(|x \psi_1 + p \psi_2|^{1/\varepsilon_1} + |x \psi_3 + p \psi_4|^{1/\varepsilon_2}\bigr), \\ \label{38} v &= \alpha \bigl(|x \psi_1 + p \psi_2|^2+ |x \psi_3 + p \psi_4|^2\bigr)^2. \end{align} Note that the right-hand sides of \eqref{37} and \eqref{38} are at least $C^1$-smooth (in the case $D_n$, $0 < \varepsilon_1, \varepsilon_2 < \frac{1}{2}$). By the implicit function theorem, \eqref{37} and \eqref{38} can be solved in $v$ near the origin. In both cases we get the equation $v = \alpha Y_{\alpha}(x,p)$ with a $C^1$-smooth function $Y_{\alpha}$ vanishing at the origin. Substituting this expression into $y = \varepsilon x^2 + (1+v)p^2$, gives the invariant foliation \eqref{31} of the field \eqref{12}. Similarly to the case $Z$, one can divide the restriction of the field \eqref{12} to every invariant leaf by the common factor $p$ (see formula~\eqref{23}). Then the restriction of the field \eqref{12} to every invariant leaf \eqref{31} becomes a node (in the case $D_n$) and a focus (in the case $D_f$) with the spectrum $(\varepsilon_1,\varepsilon_2)$ (Figure \ref{pic7}, center, right). \end{proof} Figure \ref{pic8} presents computer generated pictures of geodesics on a surface endowed with the metric \eqref{11} with $\omega \equiv -1$ and $\Theta \equiv 0$ in the cases $D_s, D_n, D_f$. \begin{figure}[ht] \begin{center} \includegraphics[height=5.5cm]{pic8.eps} \end{center} \vspace{-1ex} \caption{ Geodesics (solid lines) in the metric \eqref{11} with $\omega \equiv -1$ and $\Theta \equiv 0$ in the cases $D_s$ (left), $D_n$ (center), $D_f$ (right). The dotted line is the discriminant curve. } \label{pic8} \end{figure} \section{Appendix A. Local normal forms of vector fields} Here we give a brief survey of local normal forms for vector fields in real phase space, which were used in this paper (for more details, see also surveys in \cite{AI, JBG}). All vector fields are supposed to be smooth (that means ${C^{\infty}}$ unless stated otherwise). By ${C^{\om}}$ we denote the class of analytic mappings. For convenience, we also present vector fields as autonomous differential equations, where the differentiation is by an auxiliary parameter playing a role of time. \begin{defin} {\rm Two vector fields $V_1$ and $V_2$ are $C^k$-smoothly (resp. topologically) equivalent, if there exists a $C^k$-diffeomorphism (resp. homeomorphism) $h \colon \mathbb{R}^n \to \mathbb{R}^n$ that conjugates their phase flows $g^t_1$ and $g^t_2$, i.e., $h \circ g^t_1 = g^t_2 \circ h$. Here $k$ is an integer number (finite-smooth equivalence) or $\infty$ (infinite-smooth equivalence) or $\omega$ (analytic equivalence). } \end{defin} \begin{defin} {\rm Two vector fields $V_1$ and $V_2$ are {\it orbitally} $C^k$-smoothly (resp. topologically) equivalent, if there exists a $C^k$-diffeomorphism (resp. homeomorphism) that conjugates their integral curves (orbits of their phase flows). } \end{defin} \begin{remark} {\rm The second definition slightly differs from the generally accepted definition of the orbital equivalence, where coincidence of the orientation of integral curves is also required. In fact, our definition is naturally related to directions fields, whose integral curves do not have a orientation a priori. 
} \label{RemA} \end{remark} A great deal of work has been done on bringing a vector field to a normal form in a neighborhood of its singular point under a chosen equivalence relation. In particular, the germ of a vector field at its singular point is called {\it linearizable} (in a certain category) if it is equivalent (in this category) to its linear part. Of course, not all vector fields are linearizable, and when a germ is not linearizable, the question is what normal form can be obtained. \begin{defin} {\rm Let $V$ be the germ of a vector field with singular point the origin $0$ in $\mathbb{R}^n$ and $\lambda = (\lambda_1, \ldots, \lambda_n) \in \mathbb{C}^n$ be the spectrum of $V$ at $0$. The germ $V$ is called {\it hyperbolic} if the spectrum $\lambda$ contains neither zero nor purely imaginary eigenvalues. The germ $V$ is called {\it partially hyperbolic} if the spectrum $\lambda$ contains at least one eigenvalue with non-zero real part. } \end{defin} \subsection{Hyperbolic germs} Let $V$ be the germ of a hyperbolic vector field at $0$. For topological equivalence the only local invariants are the scalars $a_i = \operatorname{sgn} (\operatorname{Re} \lambda_i)$. The Grobman-Hartman Theorem \cite{AI, Hartman} states that $V$ is topologically equivalent to the field $\dot \xi_i = a_i \xi_i$, $i=1, \ldots, n$. For $C^k$-smooth equivalence, resonances of two types play an important role: \begin{align} \label{39} \lambda_j - (s,\lambda) = 0, \ \ s_i \in \mathbb{Z}_+, \ \ j\in \{1,\ldots, n\}, \\ \label{40} \operatorname{Re} (\lambda_j - (s,\lambda)) = 0, \ \ s_i \in \mathbb{Z}_+, \ \ j\in \{1,\ldots, n\}, \end{align} where $(s,\lambda) = s_1\lambda_1 + \cdots + s_n\lambda_n$ is the standard scalar product. In both relations \eqref{39}, \eqref{40} the natural number $|s| = s_1 + \cdots + s_n$ is called the {\it order} of the resonance, and $\xi^s = \xi_1^{s_1} \cdots \xi_n^{s_n}$ is called the {\it resonant monomial}. In general, non-trivial resonances \eqref{39}, \eqref{40} of orders $|s|\ge 2$ are obstacles for the germ $V$ to be $C^k$-smoothly linearizable. \begin{theorem}[\cite{AI, Chen, Sternberg}] If the spectrum $\lambda$ does not have resonances \eqref{39} of any order $|s| \ge 2$, the germ $V$ is ${C^{\infty}}$-smoothly linearizable. Moreover, if $\lambda \in \mathbb{R}^n$ and has only trivial resonances \eqref{39}, the germ $V$ is ${C^{\infty}}$-smoothly equivalent to \begin{equation} \dot \xi_i = \lambda_i \xi_i, \ \ \ i=1, \ldots, n. \label{41} \end{equation} \label{PT1} \end{theorem} \vspace{-4ex} \begin{theorem}[\cite{AI, Chen, Hartman}] If $\lambda \in \mathbb{R}^n$, then the germ $V$ is ${C^{\infty}}$-smoothly equivalent to the field \begin{equation} \dot \xi_i = \lambda_i \xi_i + \sum_{s \in \mathbb{Z}_+^n} {a_{is}} \xi^s, \ \ \ i=1, \ldots, n, \label{42} \end{equation} where ${a_{is}} \neq 0$ only if the resonance \eqref{39} holds. \label{PT2} \end{theorem} In other words, Theorems \ref{PT1} and \ref{PT2} state that the normal forms \eqref{41}, \eqref{42} contain resonant monomials only. Here the linear terms $\lambda_i \xi_i$ correspond to trivial resonances \eqref{39}, while all remaining terms ${a_{is}} \xi^s$ correspond to non-trivial resonances \eqref{39} if they exist. Under some additional restrictions, Theorems \ref{PT1}, \ref{PT2} are valid in the ${C^{\om}}$ category. For instance, in the case $\lambda \in \mathbb{R}^n$ it is sufficient to require that all $\lambda_i$ have the same sign. 
However, if $\lambda_i$ have different signs, the restrictions are much stronger, especially if $\lambda$ has non-trivial resonances (Theorem~\ref{PT2}); see \cite{AI, JBG} and the references therein. On the other hand, the requirements become weaker if we consider $C^k$-smooth equivalence with $k<\infty$, see e.g. \cite{AI, Sam82, Sam96}. In this paper, we need the following result. \begin{theorem}[\cite{AI, Sell}] Suppose that for some positive integer $k$ the spectrum $\lambda$ has neither resonances \eqref{39} of order $2 \le |s|\le 2k$ nor resonances \eqref{40} of order $|s| = 2k$. Then $V$ is $C^k$-smoothly linearizable. \label{PT3} \end{theorem} We establish below the existence of invariant foliations of a hyperbolic linear vector field in 3-dimensional real phase space. Assume, without loss of generality, that one of the eigenvalues $\lambda_i$ is equal to 1. It is sufficient to consider the following two fields \begin{align} \label{43} \dot \xi_1 &= \lambda_1 \xi_1, \ \ \dot \xi_2 = \lambda_2 \xi_2, \ \ \dot \xi_3 = \xi_3, \\ \label{44} \dot \xi_1 &= (\alpha \xi_1 + \beta \xi_2), \ \ \dot \xi_2 = (\alpha \xi_2 - \beta \xi_1), \ \ \dot \xi_3 = \xi_3, \end{align} where $\lambda_{1},\lambda_{2}$ and $\alpha, \beta$ are real and non-zero. \begin{lemma} For any real constants $c_i$ the equations \begin{align*} c_1 |\xi_1|^{1/\lambda_1} + c_2 |\xi_2|^{1/\lambda_2} + c_3 \xi_3 &= 0,\\ c_1 (\xi_1^2+\xi_2^2)^{1/2\alpha} + c_3 \xi_3& = 0, \end{align*} define invariant surfaces of the fields \eqref{43} and \eqref{44}, respectively. \label{PL2} \end{lemma} The proof is trivial and is omitted. \subsection{Partially hyperbolic germs} Let $W^s$, $W^u$, $W^c$ be the stable, unstable and center manifolds of the partially hyperbolic germ $V$ at $0$, and let $d_i = \operatorname{dim} W^i$, $i=s,u,c$. Set $d=d_s+d_u$, then $d, d_c >0$ and $d+d_c=n$. One can choose local coordinates $(\xi_1, \ldots, \xi_{d_s}) \in W^s$, $(\xi_{d_s+1}, \ldots, \xi_{d}) \in W^u$, $\zeta = (\zeta_1, \ldots, \zeta_{d_c}) \in W^c$. \begin{theorem}[\cite{AI, HPS}] The germ $V$ is topologically equivalent to the direct product of the $d$-dimensional standard saddle (the first $d$ equations) and the restriction of $V$ to the center manifold (the last $d_c$ equations): \begin{equation*} \begin{aligned} \dot \xi_i = \xi_i, \ \, i = 1, &\ldots,d_s; \ \ \, \dot \xi_i = -\xi_i, \ \, i = d_s+1,\ldots,d; \\ &\dot \zeta_j = Z_j(\zeta), \ \, j = 1,\ldots,d_c. \end{aligned} \end{equation*} \label{PT4} \end{theorem} \medskip In this paper, we deal with a special class of partially hyperbolic vector fields, which have been studied by many authors, see e.g. \cite{Rouss,Takens}. From now on, we assume that all components of the germ $V$ belong to the ideal $I$ (in the ring of smooth functions vanishing at $0$) generated by two of them. More specifically, such a germ $V$ has the form \begin{equation} \dot \xi = v, \ \ \dot \eta = w, \ \ \dot \zeta_j = \alpha_j v + \beta_j w, \ \ j =1, \ldots, n-2, \label{48} \end{equation} where $v,w$ and $\alpha_j, \beta_j$ are smooth functions of the variables $\xi, \eta, \zeta_j$. The components of the germ \eqref{48} belong to the ideal $I = \<v,w\>$, and the spectrum of $V$ contains at most two non-zero eigenvalues: $\lambda = (\
lambda_1, \lambda_2, 0, \ldots, 0)$. We shall further assume that $\operatorname{Re} \lambda_{1}\neq 0$ and $\operatorname{Re} \lambda_{2} \neq 0$. Hence the center manifold $W^c = \{ v=w=0 \} \subset \mathbb{R}^n$ is a smooth manifold of codimension 2 and the restriction of the field $V$ to $W^c$ is identically zero, so $W^c$ consists of singular points of $V$. By Theorem \ref{PT4}, the germ $V$ is topologically equivalent to $$ \dot \xi = a_1 \xi, \ \ \ \dot \eta = a_2 \eta, \ \ \, \dot \zeta_j = 0, \ \ j =1, \ldots, n-2, $$ where $a_i= \operatorname{sgn} (\operatorname{Re} \lambda_i)$, $i=1,2$. For $C^k$-smooth classification of the germ \eqref{48}, we need to introduce two types of resonances between the non-zero eigenvalues $\lambda_{1}, \lambda_{2}$ being, in fact, partial cases of \eqref{39}: \begin{eqnarray} && s_1\lambda_1 + s_2\lambda_2 = 0, \ \ s_i \in \mathbb{Z}_+, \ \ i=1,2, \label{49}\\ && s_1\lambda_1 + s_2\lambda_2 = \lambda_j, \ \ s_i \in \mathbb{Z}_+, \ \ i,j=1,2. \label{50} \end{eqnarray} For simplifying the presentation, further we shall always assume that $|\lambda_1| \ge |\lambda_2|$ and exclude from consideration trivial resonances \eqref{49} ($s_1=s_2=0$) and \eqref{50} ($s_1=1$, $j=1$ or $s_2=1$, $j=2$). Then the absence of resonances \eqref{50} implies the absence of \eqref{49}. On the other hand, in the absence of \eqref{49}, resonances \eqref{50} may have only the form $\lambda_1=m\lambda_2$, for positive integers $m$. Given the germ \eqref{48} with $\operatorname{Re} \lambda_{1}\neq 0$ and $\operatorname{Re} \lambda_{2} \neq 0$ we choose local coordinates $\xi, \eta$ (called {\it hyperbolic variables}) and $\zeta = (\zeta_1, \ldots, \zeta_{n-2}) \in W^c$ (called {\it non-hyperbolic variables}) such that the ideal $I = \<\xi,\eta\>$, and consequently, the center manifold $W^c$ is given by $\xi=\eta=0$. The linearization of $V$ with respect to the hyperbolic variables has two eigenvalues which are continuous functions $\lambda_{1}(\zeta)$ and $\lambda_{2}(\zeta)$ of $\zeta \in W^c$. We have $\lambda_{j}(0)=\lambda_j$, $j=1,2$ and $\lambda_j$ as above. An analogue of Theorems \ref{PT1} and \ref{PT2} is the following. \begin{theorem}[\cite{GR, Rouss}] Suppose that between $\lambda_{1}(\zeta)$ and $\lambda_{2}(\zeta)$ there are no non-trivial resonances \eqref{49} of any order $|s| \ge 1$ for all $\zeta$ sufficiently close to zero. Then the germ \eqref{48} is ${C^{\infty}}$-smoothly equivalent to \begin{equation} \dot \xi = X, \ \ \, \dot \eta = Y, \ \ \, \dot \zeta_j = 0, \ \ j =1, \ldots, n-2, \label{51} \end{equation} where $X,Y$ are smooth functions of $\xi, \eta, \zeta_j$ such that the ideal $I = \<X,Y\> = \<\xi,\eta\>$. Moreover, if in addition the eigenvalues $\lambda_{1}, \lambda_{2} \in \mathbb{R}$, then \begin{equation} X = \lambda_1(\zeta)\xi + \varphi(\zeta)\eta^m, \ \ Y = \lambda_2(\zeta)\eta, \label{52} \end{equation} where $\varphi(\zeta) \not\equiv 0$ only if $\lambda_{1} = m \lambda_{2}$ with some natural $m \ge 1$. \label{PT5} \end{theorem} \begin{remark} {\rm If the pair $(\lambda_{1}, \lambda_{2})$ belongs to the Poincar\'e domain (i.e., $\lambda_{1}$ and $\lambda_{2}$ are real and of the same sign or complex conjugate), then the condition \begin{equation} s_1 \lambda_1(\zeta) + s_2\lambda_2(\zeta) \neq 0, \ \ \forall \, s_i \in \mathbb{Z}_+, \ \ i=1,2, \ \ \forall \, \zeta \in W^c, \label{53} \end{equation} follows from \eqref{49}. Moreover, in this case Theorem \ref{PT5} is valid in ${C^{\om}}$ category. 
However, if the pair $(\lambda_{1}, \lambda_{2})$ belongs to the Siegel domain (i.e., $\lambda_{1}$ and $\lambda_{2}$ are real and of different signs), the condition \eqref{53} is equivalent to $\lambda_{1}(\zeta):\lambda_{2}(\zeta) \equiv {\rm{const}}$ for all $\zeta \in W^c$. } \label{RemB} \end{remark} The conditions in Theorem \ref{PT5} become weaker if we consider $C^k$-smooth equivalence with $k<\infty$. Set \begin{equation*} N(k) = 2 \biggl[ (2k+1) \frac{\max |\operatorname{Re} \lambda_{1,2}|}{\min \, |\operatorname{Re} \lambda_{1,2}|} \, \biggr] + 2, \quad k \in \mathbb N, \end{equation*} where the square brackets denote the integer part of a number. \begin{theorem}[\cite{GR, Sam82}] For any $k\in \mathbb{N}$, the statements in Theorem \ref{PT5} still hold true if ${C^{\infty}}$ is replaced with $C^k$ and the inequalities $1 \le |s|$, $1 \le m$ are replaced with $1 \le |s| \le N(k)$, $1 \le m \le N(k)$, respectively. \label{PT55} \end{theorem} The normal form \eqref{51}, \eqref{52} can be further simplified. For our purposes, we are interested in the orbital normal form in the case where the resonance $\lambda_1(\zeta) = m\lambda_2(\zeta)$ holds at all $\zeta \in W^c$. Then, dividing by $\lambda_2(\zeta)$, from \eqref{51}, \eqref{52} we get the orbital normal form \begin{equation} \dot \xi = (m\xi + \psi(\zeta)\eta^m), \ \ \dot \eta = \eta, \ \ \, \dot \zeta_j = 0, \ \ j =1, \ldots, n-2, \label{54} \end{equation} where the smooth functions $\varphi(\zeta)$ and $\psi(\zeta)$ vanish simultaneously. The following lemma gives a simple geometric criterion for $\psi(\zeta) \equiv 0$, which is important for applications. \begin{lemma} Let $V$ be the germ of a field from Theorem~\ref{PT5} with the normal form \eqref{51}, \eqref{52}, and suppose that the resonance $\lambda_1(\zeta) = m\lambda_2(\zeta)$, $m>1$, holds at all points $\zeta \in W^c$. Then in the orbital normal form \eqref{54}, $\psi(\zeta) = 0$ if and only if $V$ has a $C^{m}$-smooth integral curve that passes through the corresponding point $\zeta \in W^c$ with the tangential direction parallel to the eigenvector corresponding to $\lambda_2(\zeta)$. \label{PL1} \end{lemma} \begin{proof} The field \eqref{54} can be integrated explicitly. It has the invariant foliation $\zeta = {\rm{const}}$, and each leaf contains a single integral curve $\eta=0$ with tangential direction $\partial_{\xi}$ and a one-parameter family of integral curves \begin{equation} \xi = \eta^m (c+\psi(\zeta)\ln |\eta|), \ \ c={\rm{const}}, \label{55} \end{equation} with the common tangential direction $\partial_{\eta}$ at the point $\xi=\eta=0$. All the curves \eqref{55} are $C^{m-1}$-smooth at $\xi=\eta=0$ if $\psi(\zeta) \neq 0$ and ${C^{\infty}}$-smooth at zero if $\psi(\zeta) = 0$. Given $\zeta \in W^c$, the existence of at least one $C^{m}$-smooth integral curve passing through the point $\xi=\eta=0$ (the intersection of $W^c$ with the corresponding invariant leaf $\zeta = {\rm{const}}$) with the tangential direction parallel to the eigenvector corresponding to $\lambda_2(\zeta)$ is therefore equivalent to the condition $\psi(\zeta)=0$. \end{proof} \begin{theorem}[\cite{GR, Rouss}] Suppose that the resonance $\lambda_1(\zeta) + \lambda_2(\zeta)=0$ holds at all singular points $\zeta \in W^c$ and $\operatorname{Re} \lambda_{1} \neq 0$, $\operatorname{Re} \lambda_{2} \neq 0$. 
Then, for any natural $k$, the germ $V$ is $C^k$-smoothly equivalent to \begin{equation} \begin{aligned} \dot \xi = \xi(\lambda_1(&\zeta) + \rho \Phi_1(\rho,\zeta)), \quad \dot \eta = \eta(\lambda_2(\zeta) + \rho \Phi_2(\rho,\zeta)), \\ &\dot \zeta_j = \rho \Psi_j(\rho,\zeta), \, \quad j=1,\ldots, n-2, \end{aligned} \label{4.15} \end{equation} where $\Phi_i(\rho,\zeta)$ and $\Psi_j(\rho,\zeta)$ are polynomials in $\rho = \xi \eta$ of degree $N(k)-1$. If $\Psi_j(0,0) \neq 0$ for at least one $j=1,\ldots, n-2$, then the germ $V$ is ${C^{\infty}}$-smoothly orbitally equivalent to \begin{equation} \dot \xi = \xi, \ \ \dot \eta = -\eta, \ \ \dot \zeta_j = \xi\eta, \, \quad j=1,\ldots, n-2. \label{4.16} \end{equation} \label{PT6} \end{theorem} Theorem \ref{PT6} is not valid in the ${C^{\om}}$ category. \section{Appendix B. Naturally parametrized geodesics} Naturally parametrized geodesics can be defined as extremals of the action functional \begin{equation*} J(\gamma) = \int\limits_{\gamma} \bigl(a{\dot x}^2 + 2b{\dot x}\dot y + c{\dot y}^2\bigr)\,dt, \quad \dot x = \frac{dx}{dt}, \ \ \dot y = \frac{dy}{dt}, \end{equation*} where $\gamma \subset S$ is a differentiable curve. The corresponding Euler-Lagrange equation reads \begin{equation} \left \{ \ \begin{aligned} & 2(a \ddot x + b \ddot y) = (c_x-2b_y) {\dot y}^2 - 2a_y {\dot x} {\dot y} - a_x {\dot x}^2, \\ & 2(b \ddot x + c \ddot y) = (a_y-2b_x) {\dot x}^2 - 2c_x {\dot x} {\dot y} - c_y {\dot y}^2. \\ \end{aligned} \right. \label{ELE} \end{equation} The definition of geodesics as auto-parallel curves in the Levi-Civita connection generated by the metric \eqref{1} leads to the same equation~\eqref{ELE}. Equation~\eqref{ELE} defines a direction field on the tangent bundle $TS$. The standard projectivization $TS \to PTS$ sends this direction field to the field parallel to \eqref{5}, see \cite{Rem15}. \medskip First, using equation~\eqref{ELE} for parametrized geodesics, we prove the statement omitted in the case $Z$, namely that the line $\Psi (\Pi) = \{y=p=0\}$ does not correspond to a geodesic. Recall that in the case $Z$ there exist local coordinates such that $$ ds^2 = (y\omega + \ldots) dx^2 + (0 + \ldots) dxdy - (\omega + \ldots)dy^2, \quad \omega(0,0)=-1, $$ where the dots denote terms belonging to the ideal ${\mathfrak M}^{\infty}(y)$. Using an appropriate change of variables $y \mapsto yu(x,y)$, where $u$ is a solution of the equation $cu_x + b/y = 0$ with the condition $u(0,0) \neq 0$, one can locally bring the metric to the diagonal form $ds^2 = a dx^2 + c dy^2$ with the coefficients $a = yu\omega + \ldots$ and $c = -\omega (u+yu_y)$. Substituting $y \equiv b \equiv 0$ into the second equation of \eqref{ELE}, we get $a_y(x,0) {\dot x}^2 = 0$. Since $a_y(0,0) \neq 0$, this yields $\dot x \equiv 0$, and the restriction of the system \eqref{ELE} to $y=0$ has only constant solutions, which are not geodesics. At first sight, this contradicts the fact established in \cite{GR}: $\mathscr {F}$ is an invariant surface of the field \eqref{5}, and consequently, any trajectory of \eqref{5} that lies entirely in $\mathscr {F}$ projects to the $(x,y)$-plane as a geodesic or a point. (Example: for the metric $ydx^2 - dy^2$ the isotropic surface $p^2=y$ is filled with a one-parameter family of integral curves intersecting $\Psi (\Pi)$ transversally. Projecting this family down, we get the isotropic geodesics $y = \frac{1}{4}(x-c)^2$.) 
In fact, there is no contradiction: the curve $\Psi (\Pi) \subset \mathscr {F}$ consists of singular points of the field \eqref{5}, and every such point is a trajectory of \eqref{5}. \medskip Consider the family $\Gam_0$ of geodesics emanating from a point $q \in \mathscr {D}$ with the isotropic direction $p_0$. Choose the natural parametrization so that the motion along geodesics proceeds toward $q$. In the paper \cite{Rem15}, it was proved that in the cases $C_1, C_3$ any geodesic $\gamma \in \Gam_0$ reaches the point $q$ in finite time with infinite velocity. The same statement is valid in the case $C_2$, since it deals with the root $p_0$ only. In the case $Z$, the same result follows from the asymptotic formula established in Theorem~\ref{TAPP} below. The cases $D_s, D_n$ can be treated similarly; the case $D_f$ is excluded from consideration, since every geodesic $\gamma \in \Gam_0$ intersects the discriminant curve $\mathscr {D}$ an infinite number of times in any neighborhood of $q$. \begin{theorem} \label{TAPP} The natural parametrization of geodesics \eqref{17} is given by the formula $x = t^{\frac{1}{3}} \bigl(1+X_{\alpha}(t^{\frac{1}{3}})\bigr)$, where $X_{\alpha}(\cdot)$ are smooth functions vanishing at zero. \end{theorem} \begin{proof} Choosing the local coordinates in Theorem~\ref{T3}, from the formula \eqref{17} we have $y=\frac{1}{4}x^2+O(x^4)$, $\dot y=(\frac{1}{2}x+O(x^3))\dot x$, and $\ddot y=(\frac{1}{2}x+O(x^3))\ddot x + (\frac{1}{2}+O(x^2)){\dot x}^2$. Substituting these expressions together with the coefficients $a,b,c$ from \eqref{11} into the first equation in \eqref{ELE}, after a straightforward transformation we obtain \begin{equation} \frac{\ddot x}{\dot x} = \Bigl( -\frac{2}{x} + f_{\alpha}(x) \Bigr) \dot x, \label{5.2} \end{equation} where $f_{\alpha}(x)$ are smooth functions. Equation \eqref{5.2} defines the natural parametrization uniquely up to non-degenerate affine transformations of the $t$-axis. Integrating it, we get $\ln |\dot x| = -2\ln |x| + F_{\alpha}(x) + C$, and $\dot x = K x^{-2} e^{F_{\alpha}(x)}$, where $F_{\alpha}$ is a primitive of $f_{\alpha}$. Without loss of generality, put $F_{\alpha}(0)=0$ and $K = \frac{1}{3}$ (this corresponds to the choice of the initial velocity of motion along the geodesic). Then we arrive at the differential equation $\frac{dt}{dx} = 3x^{2} e^{- F_{\alpha}(x)}$, whose general solution is $t = x^3 (1+T_{\alpha}(x)) + t_0$, where $T_{\alpha}(x)$ is a smooth function vanishing at zero. Setting $t_0=0$ and inverting, we get $x = t^{\frac{1}{3}} \bigl(1+X_{\alpha}(t^{\frac{1}{3}})\bigr)$. \end{proof} As an example, for the metric $ds^2 = dy^2 - y dx^2$ the system~\eqref{ELE} reads $y \ddot x = - \dot x \dot y$, $2 \ddot y = - {\dot x}^2$. Substituting here the isotropic geodesic $y = \frac{1}{4} x^2$, we get $x = k(t-t_0)^{\frac{1}{3}}$. Substituting $y = 0$ (the line $\Psi (\Pi)$), we get $\dot x = 0$, that is, $y=0$ is not a geodesic, as was stated above. \small
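As a brief check of this last example (a sketch only, using the same normalization as in the proof of Theorem~\ref{TAPP}): substituting $y=\frac{1}{4}x^2$, so that $2\ddot y = {\dot x}^2 + x\ddot x$, into the second equation $2\ddot y = -{\dot x}^2$ gives
\begin{equation*}
x\ddot x = -2{\dot x}^2, \qquad \text{i.e.} \qquad \frac{\ddot x}{\dot x} = -\frac{2}{x}\,\dot x,
\end{equation*}
which has the form of \eqref{5.2} with $f_{\alpha}\equiv 0$ (the first equation $y \ddot x = - \dot x \dot y$ is then satisfied identically). Hence $\dot x = K x^{-2}$, so $x^2\,dx = K\,dt$ and $x^3 = 3K(t-t_0)$, that is, $x = k(t-t_0)^{1/3}$ with $k=(3K)^{1/3}$.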
\section{INTRODUCTION} The dwarf planet Haumea \citep{2006ApJ...639.1238R}, its two moons \citep{2005ApJ...632L..45B,2006ApJ...639L..43B}, and its collisional family \citep{2007Nature..446..296} provide important constraints on the formation of the Kuiper belt and the outer solar system. This well-studied object is the fastest-rotating large body in the solar system \citep{2006ApJ...639.1238R}, with rotational variability in color \citep{2008AJ....135.1749L,2009AJ....137.3404L}, an unexpectedly high density \citep{2014EM&P..111..127L}, and a large albedo \citep{2010Natur.465..897E}. It has two moons on dynamically excited orbits \citep[][hereafter RB09]{2009AJ....137.4766R}, which have scaled mass ratios and distances similar to the Earth-Moon system. Dynamical, photometric, and spectroscopic observations of objects in the vicinity of Haumea clearly indicate a collisional family of icy fragments with similarly high albedos \citep{2007AJ....134.2160R,2008ApJ...684L.107S,2010A&A...511A..72S}. However, though the expected dispersion velocity of these fragments is of order several hundred meters per second, the observed dispersion is well constrained to within $\sim$150 m s$^{-1}$. The apparent lack of high velocity ejecta is confirmed by observational surveys and dynamical studies \citep[e.g.,][]{2012ApJ...749...33F,2012MNRAS.421.1331L,2012Icar..221..106V}, though it is possible that some high velocity ejecta would be unrecognizable dynamically \citep{2011ApJ...733...40M} and/or compositionally \citep{2012A&A...544A.137C,2012AJ....143..146B}. There is no simple high-probability formation scenario that naturally explains all of these observational constraints: Haumea's rapid near-breakup rotation rate, the two moons on distant and dynamically warm orbits, and a collisional family that is an order of magnitude smaller in velocity dispersion than expected. Though multiple explanations and variations have been proposed \citep[e.g.,][]{2008AJ....136.1079L,2009ApJ...700.1242S,2010ApJ...714.1789L,2011ApJ...733...40M,2012MNRAS.419.2315O,2013AJ....146...89C}, none have adequately and self-consistently explained all of the unique features of this interesting system and its family. Attempting to place the formation of the Haumea system in context with other similar systems in the Kuiper belt quickly leads to comparisons with Kuiper belt objects (KBOs) of similar sizes, particularly Eris, Pluto, and Makemake. Of these, Pluto is the best understood due to a wealth of observational data and the recent flyby of the New Horizons mission \citep{2015Sci...350.1815S}. Furthermore, there are similarities between some of the theories for the formation of Haumea's satellites \citep[e.g.][]{2010ApJ...714.1789L} and for the formation of Pluto's satellites \citep[e.g.][]{2011AJ....141...35C}: both suggest a relatively large impactor with a very low incoming velocity that undergoes a grazing collision to form a satellite system. With the discovery of a retinue of small satellites exterior to Charon's orbit -- now dubbed Styx, Nix, Kerberos, and Hydra -- there is renewed interest in observational constraints on the formation of the Pluto system \citep{2006Natur.439..943W,2011IAUC.9221....1S,2012IAUC.9253....1S,2015Natur.522...45S}. Standard explanations for the formation of Nix and Hydra were already problematic \citep{2006Sci...313.1107W,2008arXiv0802.2951L}, and the characteristics of Styx and Kerberos are even more puzzling \citep[][]{2012CeMDA.114..341P,2014AJ....147....8K,2014arXiv1407.1059C}. 
For example, in the current orbital configuration, the dynamical stability of Styx requires that Charon's eccentricity at its present semi-major axis was never above $\sim$0.035, using the circumbinary stability criterion of \citet[][see Equation 3]{1999AJ....117..621H}. Thus, the discovery of Styx combined with dynamical stability immediately precludes some of the more extreme proposed orbital histories of \citet{2014Icar..233..242C} if Styx formed concurrently with Charon \citep{2014arXiv1407.1059C}. Long-term dynamical stability can also place some of the best constraints on the masses of these small moons \citep{2012ApJ...755...17Y,2013AJ....146...89C,2015arXiv150505933P,2015Natur.522...45S}. The discovery of small moons around Pluto and their ability to add constraints to the understanding of this system suggests that all asteroid and KBO binaries and triples be searched for additional moons. We recommend the continuation of this standard practice, even when an initial companion is identified. For KBOs, satellite searches are observationally difficult for multiple reasons. First, acquiring data of sufficient depth and resolution to identify faint moons of faint KBOs usually requires a considerable amount of time at the best telescopes in the world, such as the \emph{Hubble Space Telescope} (HST) or 8-10 meter class telescopes with Laser Guide Star Adaptive Optics. The only KBOs with large amounts of continuous high-quality data are Pluto and Haumea. Second, the discovery of small moons can be frustrated by their \emph{a priori} unknown satellite orbital motion during long exposures. Faint, fast-moving moons can then evade detection even with the best data, using standard analysis methods. Therefore, an enhanced methodology to search for faint moving moons is required. In an attempt to better understand the formation of the Haumea system, we use a large set of consecutive HST observations to perform a search for very small moons around Haumea similar to those discovered around Pluto ($\S$2). To search for faint, fast-moving moons well below the single-exposure limit, we implemented the non-linear shift-and-stack method proposed by \citet[][hereafter PK10]{2010PASP..122..549P} for the discovery of KBOs ($\S$3). Adapted to the problem of finding additional satellites, this method was both efficient and effective. Though no additional satellites of Haumea were detected ($\S$4), with careful characterization of this null detection, we set strong limits on the size and location of possible undiscovered moons ($\S$5) and discuss the implications for understanding of Haumea's satellite system ($\S$6). \section{OBSERVATIONS} In determining the ideal set of observations for a deep satellite search, a balance must be struck between including the largest number of observations and considering the motion of putative satellites during the total observational baseline. The standard stacking method of adding images that have been co-registered to the position of the primary to enhance sensitivity to faint satellites is limited to observational arcs where the satellite's relative position remains within a region not much larger than $\sim$1 Point Spread Function (PSF) Full Width at Half Maximum (PSF FWHM). Our use of non-linear shift-and-stack can mitigate this problem significantly and allows us to perform a sensitive search on longer timescales. 
In particular, our HST Program 12243 observed with a wide filter for 10 consecutive orbits and is an excellent dataset for a deep satellite search; with the technique discussed below, we can search these observations even for close-in satellites which traverse a significant fraction of an orbit during the 15-hour baseline, corresponding to several PSF widths. These observations are our main focus as they are clearly the best for a deep satellite search ($\S$2.1); however, we also inspected other observations for additional satellites of Haumea ($\S$2.2). \subsection{HST 12243: 10 Orbits in July 2010} HST Program 12243 obtained 10 orbits of observations over the course of $\sim$15 hours in July 2010. This program used the Wide Field Camera 3 (WFC3) UVIS imager with the F350LP (long-pass) filter. The primary goal of these observations was the detection of a mutual event between the inner moon Namaka and Haumea (RB09). In order to produce high-cadence time-series photometry of the proposed mutual event (and to avoid saturation), exposures were limited to $\sim$45 seconds. To prevent a costly memory buffer download, only a 512x512 subarray of the full WFC3 camera was used, with a field of view of $\sim$20.5 arcseconds. HST tracked at Haumea's rate of motion (except for controlled dithering) to keep Haumea near the center of the field of view throughout the observations. The geocentric distance to Haumea at the time of the observations was 50.85 AU. At this distance, 1" corresponds to 36900 km, 1 WFC3 pixel (0.04 arcseconds) corresponds to 1475 km, and the entire subarray field of view corresponds to $\sim$750000 km. Parts of the last two orbits were affected by the South Atlantic Anomaly. This caused a portion of these orbits to lose data entirely, and another portion was severely affected by cosmic rays and loss of fine pointing precision. The worst-affected frames were discarded for the purpose of the satellite search, leaving 260 individual exposures. The center of Haumea was identified by eye in combination with a 2-d Gaussian fitting routine. With these well-determined preliminary Haumea locations, all images were co-registered to Haumea's position. In this Haumea-centric frame, cosmic rays and hot pixels were identified by significant changes in brightness at a particular position using robust median absolute deviation filters. A detailed and extensive image-by-image investigation of cosmic rays by eye confirmed that this method was very accurate at identifying cosmic rays and other anomalies. Furthermore, the automatic routine did not flag the known objects (Haumea and its two moons: Hi'iaka and Namaka) as cosmic rays, nor were any other specific localized regions identified for consistent masking (i.e., putative additional satellites were not removed). TinyTim software\footnote{\texttt{http://www.stsci.edu/hst/observatory/focus/TinyTim}} was used to generate local point-spread function (PSF) models. As in RB09, these PSF models were then fit to the three known objects using standard $\chi^2$ minimization techniques \citep{2009ASPC..411..251M}. This identified the best-fit locations and heights of the scaled PSFs, with bad pixels masked and thus not included in the $\chi^2$ calculation. The astrometric positions of the known satellites relative to Haumea seen by this method were in clear accord with their projected orbital motion from RB09. 
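To illustrate the kind of robust filtering just described, the following is a minimal sketch (Python/NumPy for illustration only; the actual pipeline used IDL, and the array, function, and threshold names here are ours, not the pipeline's). It flags samples that deviate strongly from the per-pixel temporal median of the co-registered frames; flagged samples can then be masked in subsequent fitting and stacking steps.
\begin{verbatim}
import numpy as np

def flag_outliers(cube, n_sigma=6.0):
    """Flag cosmic rays and hot pixels in a stack of co-registered frames.

    cube : array of shape (n_frames, ny, nx), already shifted to a common
           (here Haumea-centric) reference frame.
    Returns a boolean array of the same shape; True marks a deviant sample.
    """
    med = np.median(cube, axis=0)                  # per-pixel median over time
    mad = np.median(np.abs(cube - med), axis=0)    # median absolute deviation
    sigma = 1.4826 * mad                           # MAD -> Gaussian-equivalent sigma
    sigma = np.where(sigma > 0, sigma, np.inf)     # guard against zero MAD
    return np.abs(cube - med) > n_sigma * sigma

# Example usage with a hypothetical list of co-registered frames:
# cube = np.stack(coregistered_frames)
# bad = flag_outliers(cube)
# cleaned = np.where(bad, np.median(cube, axis=0), cube)
\end{verbatim}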
Despite Namaka's nearness to Haumea (which was purposely chosen, as the goal of the observations was to observe a mutual event), it is easily distinguishable in the first few orbits. The best-fit PSFs are visually inspected and found to be good fits for all images. These PSFs are then subtracted, removing a large portion of the three signals, but leaving non-negligible residuals; these residuals are caused by imperfect PSFs (note that Haumea may be marginally resolved in some images) and standard Poisson noise. Using the updated best-fit centers of Haumea, these PSF-subtracted images are re-coregistered to Haumea's best-fit location (though the additional shifts from the preliminary 2-d Gaussian centers are very small). As described below, these same PSF-subtracted images are used to perform the non-linear shift-and-stack. Moons with negligible motion relative to Haumea can be identified in a deep image created by stacking all the observations in the Haumea-centered reference frame (including fractional pixel shifts implemented by IDL's \texttt{fshift}). Throughout this investigation, we create stacks by performing a pixel-by-pixel median of the images, which is less sensitive to cosmic rays, bad pixels, and errors in the PSF subtraction of the known bodies. This results in a small decrease in sensitivity, as, assuming white noise, the noise level in the stack is a factor of $\sqrt{\frac{\pi}{2}}$ higher when using the median rather than the mean. This corresponds to only a $\sim 20\%$ difference in brightness sensitivity, or a $\sim 10\%$ difference in radius, and we find this acceptable so that the other effects mentioned above can be mitigated. A portion of this stacked image around Haumea is shown in Figure \ref{stationary}. Detailed investigation of this deep stack by eye by each co-author yielded no clear satellite candidates. The median stacks were also automatically searched using the IDL routine \texttt{find}, which uses a convolution filter with a FWHM of $1.6$ pixels to identify positive brightness anomalies. Detections with an SNR of 5 or greater were examined; none were found that were consistent with an additional satellite (e.g., having a PSF-like shape). Scaling from the SNR of Haumea and using a more conservative detection limit SNR of 10, this limits non-moving satellites to those fainter than about $V\simeq27.1$, corresponding to Haumea satellites with radii less than about $8$ km (see $\S$5). \begin{figure} \centering \plotone{f1.eps} \caption{A portion of the median stack of 260 images from 10 orbits of HST WFC3 data (Program 12243). Individual images are co-registered to be stationary in the Haumea-centric frame, with best-fit TinyTim PSFs for Haumea, Hi'iaka and Namaka subtracted. The brightness has been stretched significantly to highlight the residuals. These residuals limit sensitivity near Haumea, but the diffraction spikes and the majority of the PSFs have been removed. Above and to the left of Haumea lie the residuals from Hi'iaka, which is 1.23" away (45600 km projected distance) in this stack. Vertical darker columns are due to minor uncorrected pixel sensitivity variations. The image is aligned so that Astronomical North is up. } \label{stationary} \end{figure} \subsection{Other Observations} HST has observed Haumea during many programs for multiple reasons. Program 11518 was proposed to obtain astrometry of both moons and consists of 5 independent orbits of observations spread over 2 weeks (RB09). 
Although it would be interesting to investigate the possibility of combining these data in a long-baseline non-linear shift-and-stack, given the existence of other more sensitive datasets, we investigated only the single-orbit median stacked images. Motion during a single 45-minute HST orbit is small compared to the PSF width, even for the shortest satellite orbital periods. HST Program 11971 was 5 consecutive orbits and HST Program 12004 was 7 consecutive orbits, both attempts to observe the last satellite-satellite mutual events. The latter program was within a few weeks of the HST 4th Servicing Mission but was still executed. Unfortunately, for 6.5 of the 7 orbits, the STIS shutter was closed and no on-sky data were taken. For the 5-orbit Wide Field Planetary Camera 2 observation of Program 11971, we median-stacked images centered on Haumea and searched for additional sources by eye and using IDL's \texttt{find} as described above. We investigated stacks of individual orbits and of the entire 5-orbit sequence and found no sources consistent with additional satellites. Though the non-linear shift-and-stack method below could fruitfully be applied to these observations, the WFC3 observations are considerably deeper and we opted to focus on our best dataset. Finally, we obtained some long-duration ($\sim$5 hours) observations of Haumea using the Laser Guide Star Adaptive Optics system at the Keck Observatory. Co-registered stacks of this data also showed no clear additional satellites, though the known satellites were very easily detected. \section{METHODS} For the detection of faint bodies, with signal-to-noise ratio (SNR) per image of $\lesssim$5, a useful approach is the co-addition (``stacking'') of multiple images. With the 260 images in our dataset, this method can increase the SNR by $\sim$$\sqrt{\frac{2}{\pi}}\sqrt{260} \approx 13$, thereby searching for satellites with radii $\sim\sqrt{13}\approx 3.6$ times as small as could be detected in a single image. If the object does not remain apparently stationary (within $\lesssim$1 FWHM) over the course of the observation, the simple co-addition will result in insufficient overlap between images to yield the expected increase in SNR. If the motion of the object is known, images can be first shifted to compensate for this motion, and the images added with the object localized regaining nearly the full sensitivity: this is the meaning of ``shift-and-stack''. Linear searches with shift-and-stack have been used to discover satellites in the past \citep{2004Icar..169..474K,2004Natur.430..865H, 2013DPS....4520601S} although these searches did not need to use the non-linear shift-and-stack method we employ below. In a situation where the motion is unknown, such as a broad search for KBOs, a large set of possible paths on the sky can be considered, and each path independently used as the basis for a shift-and-stack, as described by PK10. A composite image results from each proposed orbital path (which we call a ``sky track''), and each stack can be searched for faint satellites which emerge from the noise due to shifting the image accurately enough to (mostly) compensate for its motion. To minimize statistical false-positives and to increase computational tractability, it is important to identify a near-minimal number of sky tracks that will faithfully reproduce all the possible motions without performing redundant searches. 
PK10 suggested an algorithm for identifying the most important non-redundant set of sky tracks, which we fruitfully employ: generate a large number of random sky tracks based on the full range of expected motion (within desired search parameters) and then remove tracks that are similar to one another. We have adapted this technique for our search. It is important to note a distinction between a general KBO search and a satellite search, one that is largely ignored in the method presented here. This distinction is that, in a broad KBO search, a sky track could be valid for any part of an image; that is, there is little correlation between position and motion. This is not the case for a satellite orbiting a given primary, in which a specific motion only applies to a small spatial region. The more highly curved a track is, the more specific to a particular region it is --- a curved orbital arc translated to the other side of Haumea would not make physical sense. The method described below involves shifting and stacking the entirety of each image, and searching the whole of the composite image, when in fact the track upon which the shift-and-stack is based applies to only a small subset of each image. In addition to the computational cost of shifting and searching larger images than is necessary, this overuse of the images could potentially result in an increase in statistical false-positives. However, neither of these effects manifests in a noticeable way --- neither computation time nor an abundance of false-positives limits our search method. This suggests that we are near the optimal minimum number of sky tracks searched, or have at least reached an acceptably small number. An overview of our search algorithm, discussed in greater detail below, is as follows: \begin{enumerate} \item Generate a large bank of physically reasonable putative sky tracks by randomly selecting from plausible Keplerian satellite orbital parameters. \item Fit each sky track with non-linear polynomials in time (shift rates). If the shift rates for two distinct sky tracks are similar enough (quantified below), discard one. \item Continue searching for sky tracks until a nearly-complete non-redundant set is identified. \item For each track, create a composite image. This is done by overlaying the dataset (in our case, 260 images) upon itself, with the images shifted by the appropriate shift rates such that an object on that track will appear in the same place in each image. Co-add the images into one composite. \item Search each composite image for satellite candidate sources. \end{enumerate} The use of non-linear polynomial fits allows the shift rates to more accurately capture curved orbits than simple linear fits. For the motion of even the fastest detectable Haumean satellites over the timescale of our observations, we find that quadratic fits to the x and y positions are always sufficient. Note that the polynomial fits are included for convenience in describing the sky tracks; the actual positions of a putative satellite could be used, but the difference between the actual positions and the best-fit quadratic approximation was negligible. Including non-linear rates is often expected to greatly expand the number of dissimilar shift rates to the point of computational impracticality, but we find that an appropriate criterion for similarity of shift rates easily permits the inclusion of quadratic rates. 
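As a concrete illustration of steps 4 and 5, the following minimal sketch median-stacks a set of co-registered frames along a single quadratic sky track. It is written in Python/NumPy purely for illustration (the analysis in this paper used IDL routines), and the function, array, and coefficient names are ours.
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def stack_along_track(cube, times, coeff_x, coeff_y):
    """Median-stack co-registered frames along one quadratic sky track.

    cube    : (n_frames, ny, nx) frames registered on the primary
    times   : (n_frames,) times relative to the middle of the sequence
    coeff_x : (c0, c1, c2) with dx(t) = c0 + c1*t + c2*t**2, in pixels
    coeff_y : same for dy(t)
    """
    shifted = []
    for frame, t in zip(cube, times):
        dx = np.polyval(coeff_x[::-1], t)   # np.polyval expects highest power first
        dy = np.polyval(coeff_y[::-1], t)
        # shift so that a source moving along the track stays at a fixed pixel
        shifted.append(subpixel_shift(frame, (-dy, -dx), order=1, mode="nearest"))
    return np.median(shifted, axis=0)       # median is robust to cosmic rays
\end{verbatim}
Each composite produced in this way is then searched for point-like sources in the same manner as the unshifted stack.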
\subsection{Generation of Sky Tracks} In a typical shift-and-stack search for KBOs, the putative sky tracks are selected from a grid of the six degrees of freedom needed to describe an object in a Keplerian orbit \citep[PK10, ][]{2004AJ....128.1364B}. For the purposes of a KBO satellite, particularly that of a primary with other known satellites, it is convenient to instead sample the space of Keplerian orbital elements relative to the primary: semi-major axis ($a$), eccentricity ($e$), inclination ($i$), longitude of the ascending node ($\Omega$), argument of periapse ($\omega$), and mean anomaly at epoch ($M$). Sampling in this space allows for direct control over the types of orbits that are searched, making it straightforward to exclude unphysical motions. In our case, we also benefit from the well-known mass of the primary; if this is not known, a variety of plausible values could be sampled for the generation of sky tracks. For this search, $a$ and $e$ were randomly sampled from orbits with semi-major axes between 5310 and 368000 km and eccentricities less than 0.5, while the orbital angles $i$, $\Omega$, $\omega$ and $M$ were allowed to assume any value. All parameters were chosen from the sample space uniformly, with the exception of $a$, which was sampled on a log scale to increase the likelihood of sampling an orbit in the regime of fast-moving satellites. The lower bound on $a$ is a constraint imposed by our detection sensitivity. This limit corresponds to 3.75 pixels (0.15 arcseconds) on the WFC3, at which distance from the center of Haumea the subtraction noise is considerable enough to make reliable detections difficult (see Figure \ref{stationary}). The upper bound on $a$ is set much larger than the semi-major axes needed to shift images (as opposed to investigation of the unshifted stack). At a distance where the satellite's maximum velocity would cause it to travel less than one PSF FWHM over the course of the 15 hour observational period (here $\sim$27 m s$^{-1}$), shift-and-stack is unnecessary, giving the upper limit of $a \simeq 150,000$ km in our search. This semi-major axis is $\sim$3 times the semi-major axis of Hi'iaka, whose motion in these frames is detectable, but $\lesssim$0.5 pixels. For the upper limit on $a$, we doubled this number to be conservative. \label{notmoving} Much of this orbital parameter space can be excluded on physical grounds, reducing the number of shift rates necessary to well-sample the space. Any putative orbit which crossed paths with the known satellites was rejected, as was any orbit with periapsis less than $3000$ km. These weak restrictions on orbital elements did not appreciably affect the selection of shift-and-stack parameters, and additional tests (described below) show that we are sensitive to objects on practically any orbit with semi-major axis $\gtrsim$10000 km. \subsection{Non-linear Fitting and Shift Rate Similarity} Having created a bank of physically plausible orbits, we then generate a set of shift rates with which to create composite images to search for satellites. Orbital parameters were converted into sky coordinates relative to Haumea, right ascension ($\Delta$RA) and declination ($\Delta$Dec), for each image, as described in RB09. We assumed an instantaneous Keplerian orbit for the position of the satellite as this is an excellent approximation over the course of our observations. 
In our case, orbital acceleration was quite important, as we desired to search orbital periods down to $\sim$40 hours, of which the 15-hour observational arc is a sizable fraction. Therefore, the sky positions $\Delta$RA and $\Delta$Dec were fit with quadratic polynomials in time, which we found to be sufficient to accurately describe the non-linear motion in every case. In order to minimize the number of sky tracks, we eliminated tracks which were similar to one another, as suggested by PK10. To determine if two tracks were similar, we focused on the final requirement that the shift rates localize the flux of a satellite so that it can be identified in the stacked image. If the flux of a satellite traveling along the second orbit would be well localized by the shift rates of the first, then there would be considerable overlap of the flux between images when they are shifted according to the rates of the first. This criterion can be quantified by calculating the overlap fraction between two shift-and-stack rates using the reasonable assumption that the WFC3 PSF is nearly Gaussian, with a FWHM of 0.067 arcseconds ($\approx$1.7 pixels). For a pair of shift rates, the overlap was defined for each image in the dataset as the integral of the product of two such Gaussians separated by the difference in the two rates ($\Delta$RA and $\Delta$Dec) at the time of that image. We call this the overlap between two orbits as it is calculated from the product of two overlapping PSFs, but it is distinct from the concept of the overlap in co-added images. If the median overlap (normalized to 1 for perfect coincidence) was greater than a pre-specified threshold, it was considered that a sufficient fraction of the flux of the proposed satellite would have been collected by the stack of an existing sky track, and the new track was rejected as unnecessary. The goal is to build up a bank of sky tracks known to be mutually distinct. After accepting the first track into the bank, each subsequent track was compared to the previously selected tracks in the bank using the above overlap criterion. We experimented with different overlap threshold criteria and found the overall results mostly insensitive to the specific value chosen. In general, we required a median overlap of less than 0.7 with each previously accepted shift-and-stack track to accept the proposed track as distinct enough to add to the bank. By drawing from a large set of orbits covering the desired search space, this method efficiently builds a bank of mutually distinct shift rates that are also the most relevant (PK10). However, unlike a grid search, random orbital draws can continue indefinitely. Thus, we also require a ``stopping criterion'' to decide when the bank is large enough for practical use. To determine the upper limit on the number of necessary shift rates, we noted that the sample space saturates quickly; that is, the rate of acceptance drops off drastically after 10-15 shift rates are chosen. Consequently, the number of orbits rejected between successive accepted rates grows very quickly. Our criterion for a dense sampling was that the number of rejected rates between successive accepted rates was at least equal to the total number of rates rejected so far. Put another way, the selection was stopped when the acceptance of each new shift rate required doubling the number of sampled orbits, which typically occurred after testing hundreds of thousands of random orbits. 
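The overlap criterion described above can be made concrete with a short sketch. For two unit Gaussians of width $\sigma$ separated by a distance $d$, the integral of their product, normalized to 1 at $d=0$, is $\exp[-d^2/(4\sigma^2)]$; a minimal Python illustration of the resulting bank-building loop follows (the function names and the candidate-track container are ours, and the loop is only a sketch of the selection logic, not the code actually used).
\begin{verbatim}
import numpy as np

FWHM = 0.067                                       # arcsec, WFC3 PSF width quoted above
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian sigma from the FWHM

def median_overlap(track_a, track_b):
    """Median over frames of the normalized overlap of two Gaussian PSFs
    displaced along two candidate sky tracks.

    track_a, track_b : (n_frames, 2) arrays of (dRA, dDec) offsets in arcsec.
    Returns 1.0 for identical tracks and tends to 0 as they diverge.
    """
    d2 = np.sum((np.asarray(track_a) - np.asarray(track_b)) ** 2, axis=1)
    return np.median(np.exp(-d2 / (4.0 * SIGMA ** 2)))

def build_bank(candidate_tracks, threshold=0.7):
    """Keep only mutually distinct tracks (median overlap below the threshold)."""
    bank = []
    for track in candidate_tracks:
        if all(median_overlap(track, kept) < threshold for kept in bank):
            bank.append(track)
    return bank
\end{verbatim}
In such a sketch, the stream of random orbital draws feeding the loop would be terminated according to the doubling criterion just described.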
This is an exponentially slow process, which suggests that once past this threshold, we have reached the limits of our rate selection method. Practically speaking, we found that this stopping criterion still generated enough shift rates to recover injected sources with a complete variety of orbits. Together with the above ranges for satellite orbital parameters, this method yielded only $\sim$35 sufficiently distinct orbits over which to search. Considering the size of the parameter space (linear and quadratic terms for shifts in both x and y directions), it might seem surprising that so small a set of potential orbits spans the space. However, a large portion of the space consists of short, almost entirely linear tracks, where the quadratic corrections are of limited importance. The only strongly quadratic orbits are very near to
Haumea, which also have large linear rates. In other words, there are strong correlations between the allowed linear and quadratic coefficients of physically-plausible tracks. The result is a relatively small number of non-linear shift rates that efficiently cover the desired search space (PK10); these tracks are illustrated in Figure \ref{tracks}. Contrasted with the orbital element motivated sampling presented here, the number of shift rates in the case of a quadratic sky motion grid search would have been much larger. \begin{figure} \centering \plotone{f2.eps} \caption{The non-linear shift-and-stack rates. Arcs show the displacement of images (relative to position at the middle image) over the course of the 15 hour observation. Each arc represents a different shift rate or ``sky track.'' Horizontal and vertical axes show differential right ascension ($\Delta$RA) and declination ($\Delta$Dec) in arcseconds and pixels (1 pixel = 0.04 arcseconds = 1475 km). The circle at bottom left has diameter of .067 arcseconds, the FWHM of WFC3's PSF. Following the method suggested by PK10, we generate random non-linear shift rates out of Keplerian orbital elements. We reject as duplicates rates for which the overlapping PSFs would catch at least 70\% of the flux if moving at the same rate as an existing orbit (see $\S$3.2). This method requires only $\sim$35 non-linear rates to cover the vast majority of parameter space. The ``sky tracks'' associated with these rates are mostly symmetric about the origin as seen above, with slight asymmetries arising from the projection of eccentric orbits into the skyplane, and the variation in orbital speed throughout an orbit. Almost all rates are substantially quadratic, which shows the importance of the non-linear approach. As can be seen, the use of quadratic shift rates allowed us to probe the region near Haumea where satellites would execute sizable fractions of an orbit during the 15-hour observation. Implantation of artificial sources on orbits randomly drawn from the same Keplerian elements showed an excellent recovery rate (see Figures \ref{avsb} and \ref{svsb}).} \label{tracks} \end{figure} \subsection{Creation of Composite Image} With our bank of non-degenerate sky tracks, we can now perform the non-linear shift-and-stack procedure. Each track corresponds to a specific set of $\Delta$RA and $\Delta$Dec values of a putative satellite relative to Haumea. We used \texttt{adxy}, a routine from the IDL Astro Library which uses astrometric data from the image headers, to convert these sky coordinates into on-image pixel positions, thus yielding the desired pixel shifts. The prepared images were shifted (including fractional pixel shifts implemented by IDL's \texttt{fshift}) and stacked using the pixel-by-pixel median of the images as described above. In preparation for the automated search, many images were investigated in detail by eye. \subsection{Sensitivity} To test the sensitivity of this method, artificial sources were implanted into the images with a range of random brightnesses. Their positions and rates of motion were determined by orbits randomly drawn from the same space mentioned above (but without restriction of non-crossing orbits with the known moons). The implants were generated by scaling from the actual PSF of Haumea (when brightest). This source was implanted into the images at the pixel positions corresponding to the randomly-chosen orbit. 
A subimage of 200 x 200 pixels was used for the search: the outer reaches of this subimage have objects that are practically not moving (see $\S$\ref{notmoving}) and any object beyond this region would have the same detectability threshold as stationary objects. This was done for a large number of orbits, with a new set of images created for each. Stacks were generated in the exact same manner as for the real images, with the same $\sim$35 shift rates making new median stacks for each new set of images. These stacks were inspected using the same automated search routine (IDL's \texttt{find}). To distinguish detections of implanted sources from the detection of the three known bodies, we examined the output of the search for the sets of stacks with no sources implanted. All detections here were due to known bodies, and the positions were used to establish a mask with three regions, one for each known body, to reject detections that were not due to implanted sources. In this way, detections could be automatically classified as a recovery of an implant or as a false positive due to the known bodies. These automated classifications were extensively verified with an investigation that included searching by eye and were found to be very robust. Due to the application of a threshold SNR by the \texttt{find} routine, objects in the vicinity of Haumea, while still far enough from Haumea and bright enough to be seen by eye, may be rejected by the routine itself (not our masks). The presence of the primary nearby leads to an artificially high computed background noise level, which reduces the computed SNR significantly, causing the object to appear below threshold. Any stacks with sources at risk of being left undetected due to this effect were searched by eye by multiple coauthors, and any that were detected in this process were considered to be recovered for the purposes of our results, shown below. The success rate of finding implanted objects places constraints on additional satellites of Haumea: any recovered implanted source represents a satellite that we can say with reasonable certainty is not present in the Haumea system. \section{RESULTS} The implantation and successful recovery of faint moving sources clearly indicated the effectiveness of our non-linear shift-and-stack method. Nevertheless, we did not detect any additional satellites around Haumea, and no candidate satellites were found that were worthy of additional investigation. A careful characterization of this null result places strong limits on the brightness and separation of undiscovered Haumean satellites. These limits are summarized by Figures \ref{avsb} and \ref{svsb}, which show the results of our search for each implanted source. The source is either recovered, rejected (for being too close to one of the three known objects, usually Haumea), or ``missed'' because it was too faint to be detected, or because it fell off the 200 x 200 subimage that was searched. Note that the ``rejected'' category is primarily composed of objects that were not clearly detected by the automated routine, but were detected in a blind search by eye by multiple coauthors; these consist entirely of objects that are $\lesssim$0.2" (5 pixels) from Haumea. Figure \ref{avsb} plots semi-major axis against brightness of the sources as a fraction of that of Haumea, while Figure \ref{svsb} shows the projected distance (in arcseconds) of the moving sources versus the brightness. 
Assuming the same albedo ($p \simeq 0.7$) as Haumea, the relative brightness corresponds to the radius of a spherical satellite, which is also shown. \begin{figure} \centering \plotone{f3.ps} \caption{(Color online) Results from the sensitivity survey. The figure shows implanted sources that were either recovered (blue stars), not recovered (red crosses), or recovered but rejected due to confusion with existing sources (green squares). The horizontal axis is the semi-major axis of implanted objects in thousands of kilometers. The left-hand vertical axis is the brightness relative to Haumea (when brightest). The right-hand vertical axis is the radius of a spherical satellite assuming an albedo ($\sim$0.7) similar to Haumea's. Diamonds represent the known satellites Namaka and Hi'iaka; the vertical lines are guides to the eye at their respective semi-major axes. Purple triangles represent the moons of Pluto --- Nix, Hydra and Kerberos --- according to brightness relative to the primary. (The smallest moon Styx, with brightness approximately $6\times 10^{-6}$ that of Pluto, is below the range of brightness represented on this figure.) Because of differences in geocentric distance and albedo, the approximate radius does not directly apply to these three points. Figure \ref{svsb} is similar but shows distance in projected separation instead of semi-major axis. Bounds on brightness and semi-major axis were chosen as described in $\S$3.1. The unrecovered implantations at semi-major axis $\gtrsim 200\times 10^3$ km are not found because their distance from Haumea often places them outside the subimages searched. This figure shows that satellites with radii as low as $\sim$8 km would be detectable in much of the space searched, and that our lower detection limit on semi-major axis is limited by the properties of the dataset, not by the sensitivity of the non-linear shift-and-stack technique. Nix- and Hydra-like objects would be detected around Haumea, while Styx- and Kerberos-like objects would still be too faint, mostly due to Haumea's greater distance (50 AU compared to Pluto's 30 AU). } \label{avsb} \end{figure} \begin{figure} \centering \plotone{f4.ps} \caption{(Color online) Results from the sensitivity survey. The lower horizontal axis is the sky-plane projected separation from Haumea in arcseconds, while the upper axis gives the approximate projected distance from Haumea in thousands of kilometers. The vertical axes are the brightness and radius of implanted sources, as described in the caption to Figure \ref{avsb}. Symbols connected by horizontal lines show the maximum and minimum apparent distance from Haumea of the implanted object during the 15-hour ``observation.'' As in Figure \ref{avsb}, implanted sources were either recovered (blue stars), not recovered (red crosses), or recovered but rejected due to confusion with existing sources (green squares). Note that implantations at separation greater than 4 arcseconds are unrecovered because they fall outside the region of the $\sim$4" subimages that were searched (see $\S$5.3). The vertical dashed line is a guide to the eye for this rough cutoff. No sources were implanted at separations larger than 10 arcseconds, corresponding to the upper limit on semi-major axis shown in Figure \ref{avsb}. 
Diamonds represent Hi'iaka and Namaka as they appear in the observation; Namaka's separation is only given for the first four orbits, where its position is measured reliably enough for precise astrometry; as these observations were designed to catch Namaka in a mutual event, its projected separation would approach very low values if all ten orbits were included.} \label{svsb} \end{figure} \section{DISCUSSION} The constraints on undiscovered Haumean satellites can be divided into three categories based on orbital semi-major axis: close-in satellites ($a$ $\lesssim$ 10000 km), intermediate satellites (10000 km $\lesssim$ $a$ $\lesssim$ 350000 km), and distant satellites ($a$ $\gtrsim$ 350000 km). \subsection{Limits on Close-in Satellites} At a semi-major axis of $\sim$10000 km, the maximum separation of a satellite from Haumea would be 6.9 WFC3 pixels. Within 7 pixels ($\lesssim 4$ PSF FWHM) of Haumea, it is very difficult to recover objects due to imperfect subtraction of Haumea's PSF. It is possible that an empirical PSF subtraction would perform better for recovering very close-in satellites, but we do not consider such an approach here. As can be seen in Figure \ref{svsb}, there is the expected anti-correlation between the brightness of an object that can be recovered and the separation from Haumea: close in, only brighter objects can be found. However, there are dynamical reasons to expect that this region is nearly devoid of satellites. Due to Haumea's highly triaxial shape, the orbital region near Haumea is strongly perturbed and long-term stable orbits are difficult to maintain. According to \citet[][]{1994Icar..110..225S}, orbits with periods less than about 10 times the spin period are unlikely to be stable due to resonances between the primary's spin and the satellite's orbit. In Haumea's case, this is exacerbated by the additional effects of tidal evolution and other dynamically excited satellites \citep[][]{1999AJ....117..603C,2013AJ....146...89C,2014arXiv1407.1059C}. An orbital period that is 10 times the spin period corresponds to a semi-major axis of about 5000 km (about 5 times the long-axis radius of Haumea). Although this is about twice the Roche radius, we consider it the inner limit for long-term dynamical stability. Even if satellites were originally present in such short orbits, it is possible that long-term tidal evolution would have moved them to a more detectable distance. A detailed analysis by \citet{2013AJ....146...89C} calls into question the originally proposed idea that the satellites tidally evolved outward from orbits near the Roche lobe. While extensive tidal evolution might not have taken place, it is worth noting that scaling the tidal evolution from the properties of the other satellites \citep{2005ApJ...632L..45B} indicates that, even for the smallest satellites we could have detected (which evolve the shortest distance due to tides), tidal evolution would have placed them near or beyond the $\sim$5000 km threshold. There remains a range of semi-major axes from 5000 to 10000 km that could potentially harbor very small undetected satellites, which would be somewhat protected from dynamical and tidal instability. Lying well within Haumea's PSF, such satellites would also generally evade detection. Furthermore, some satellites would not have been detected if they had an orbital phase placing them at undetectably small separations (although this is mitigated somewhat by observations at a variety of times).
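For reference, the $\sim$5000 km stability scale quoted above follows directly from Kepler's third law. Adopting, purely for illustration, a total system mass of $\sim\!4.0\times10^{21}$ kg and a spin period of $\sim$3.9 hr (approximate values that are not restated above),
\begin{equation*}
a \simeq \left[\frac{G M \,(10\,P_{\rm spin})^2}{4\pi^2}\right]^{1/3}
  \simeq \left[\frac{(6.67\times10^{-11}\ {\rm m^3\,kg^{-1}\,s^{-2}})\,(4.0\times10^{21}\ {\rm kg})\,(1.4\times10^{5}\ {\rm s})^2}{4\pi^2}\right]^{1/3}
  \approx 5\times10^{3}\ {\rm km},
\end{equation*}
consistent with the inner limit adopted here.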
Overall, it is difficult to hide stable inner ($a \lesssim$10000 km) Haumean satellites with radii $\gtrsim$30 km. \subsection{Intermediate Satellites} Semi-major axes between about 10000 and 350000 km span the region where the two known moons are found (25600 km for Namaka and 49900 km for Hi'iaka; RB09). At these distances, contamination from Haumea is negligible, and the main limitations to detecting satellites are insufficient SNR and falling beyond the edge of the image. By using the non-linear shift-and-stack we maximize the search depth, particularly closer to Haumea. The search depth can be reported as relative brightness (in magnitudes and flux) and as the radius of a spherical satellite assuming the same albedo as Haumea. As is usual for such deep searches, the recovery rate is a function of magnitude (Figures \ref{avsb} and \ref{svsb}). We reliably detect satellites at $-9.2$ magnitudes (0.0002 relative brightness, radius of 10 km), our recovery rate is roughly 50\% at $-10$ magnitudes (0.0001 relative brightness, radius of 8 km), and our best-case recovery is at $-10.4$ magnitudes (0.00007 relative brightness, radius of 6 km). Following typical practice, we summarize the recovery depth using the 50\% recovery rate. Note that it is possible that the albedo of the satellites is even higher; using Haumea family member 2002 TX300's measured albedo of 0.9 \citep{2010Natur.465..897E} instead of Haumea's presumed 0.7 albedo \citep{2014EM&P..111..127L} would imply a radius detection threshold of only $\sim$7 km (or $\sim$5 km in the best case). While close approaches to Hi'iaka and Namaka as projected on the sky would result in a missed detection for faint objects, this is generally unlikely (even for orbits coplanar with the known satellites, which are near edge-on; RB09). Close approaches to Hi'iaka and Namaka are very unlikely to happen at more than one epoch\footnote{Unlike the irregular satellites of the giant planets, Hi'iaka and Namaka are precluded by long-term tidal stability from being binaries themselves.}; thus, any missed detection of a moderately bright object would be mitigated by the non-detection of satellites in other datasets. We therefore expect that this region of the Haumean system does not contain undiscovered satellites larger than $\sim$8 km in radius. Our results compare favorably with the current state of knowledge regarding the small satellites of Pluto. From the New Horizons flyby, we now have detailed knowledge of the albedos (about 0.5) and sizes of the small satellites: $\sim$10 km for Styx and Kerberos and $\sim$40 km for Nix and Hydra \citep{2015Sci...350.1815S}. As Figure \ref{avsb} shows, we find that a satellite with an apparent magnitude relative to Haumea similar to that of Hydra or Nix around Pluto ($-8.7$ and $-9.2$ magnitudes, respectively) would fall above our detection limit. With the higher expected albedo (0.7) of Haumean satellites, we would have detected objects the size of Styx and Kerberos. We conclude that Haumea very likely does not contain small satellites similar to Pluto's.
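For reference, the detection depths quoted above follow from the standard conversions (a sketch; the effective radius $r_{\rm H}\approx 800$ km adopted here for Haumea is an approximate value not stated above). Writing $\Delta m$ for the magnitude difference between a satellite and Haumea (the relative magnitudes quoted above are $-\Delta m$),
\begin{equation*}
\frac{f_{\rm sat}}{f_{\rm H}} = 10^{-0.4\,\Delta m}, \qquad
r_{\rm sat} \simeq r_{\rm H}\,\sqrt{\frac{f_{\rm sat}}{f_{\rm H}}\,\frac{p_{\rm H}}{p_{\rm sat}}},
\end{equation*}
so that $\Delta m = 10$ gives $f_{\rm sat}/f_{\rm H} = 10^{-4}$ and, for equal albedos, $r_{\rm sat} \simeq 800\ {\rm km}/100 = 8$ km; adopting $p_{\rm sat}=0.9$ rather than 0.7 lowers this to $\simeq$7 km.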
\begin{deluxetable*}{llrrrrrc} \label{magnitudes} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Summary of Estimated Properties of Dwarf Planet Satellites} \tablehead{\colhead{Object} & \colhead{Satellite} & \colhead{Relative Brightness} & \colhead{$H_{sat}$\tablenotemark{a}} & \colhead{$V_{sat}$\tablenotemark{a}} & \colhead{Radius\tablenotemark{b}} & \colhead{$a$\tablenotemark{c}} & \colhead{Ref}\\ & & (magnitudes) & & & (km) & ($10^3$ km) &} \startdata Haumea & Hi'iaka & $-3.3$ & $3.4$ & $20.5$ & 200 & $50$ & 1 \\ Haumea & Namaka & $-4.6$ & $4.7$ & $21.8$ & 150 & $26$ & 1\\ Haumea & ``close'' upper limit & $-6.7$ & $6.8$ & $23.9$ & 30 & $\lesssim$10 & 2\\ Haumea & ``intermediate'' upper limit & $-10.0$ & $10.1$ & $27.6$ & 8 & 10-350 & 2\\ Haumea & ``distant'' upper limit & $-6.2$ & $6.3$ & $23.4$ & 40 & $\gtrsim$350 & 2\\ \hline Pluto & Charon & $-2.6$ & 1.9 & 16.6 & 350 & $20$ & 3 \\ Pluto & Hydra & $-8.7$ & 8.0 & 22.7 & 41 & $64$ & 3 \\ Pluto & Nix & $-9.2$ & 8.5 & 23.2 & 35 & $49$ & 3 \\ Pluto & Kerberos & $-12$ & 11 & 26 & 12 & $59$ & 4 \\ Pluto & Styx & $-13$ & 12 & 27 & 11 & $42$ & 5 \\ \hline Eris & Dysnomia & $-6.7$ & 5.5 & 25.4 & 60 & $37$ & 6 \\ Eris & ``close'' upper-limit & $-5.8$ & 4.6 & 24.5 & 80 & $\gtrsim$18 & 6 \\ Eris & ``distant'' upper-limit & $-8.2$ & 7.0 & 26.9 & 30 & $\gtrsim$37 & 6 \\ \hline Makemake & S/2015 (136472) 1 & $-7.8$ & 7.4 & 24.7 & 25 & $\sim$100 & 8\\ Makemake & upper-limit &$-10$ & 9.6 & 26.9 & 8 & $\gtrsim$30 & 7,8 \\ \enddata \tablecomments{Magnitudes and semi-major axes of bodies in KBO systems. The relative magnitude of the faintest detectable bodies in our search is $-10$, comparable to those of Hydra and Nix. For Eris and Makemake, values are more approximate and/or interpolated from published estimates. We do not list the large number of KBO binaries \citep[e.g.][]{2008ssbn.book..345N} or KBO triple 1999TC36 \citep{2010Icar..207..978B} since the formation of these systems appears to be distinct from processes associated with dwarf planets. In particular, these binaries tend to be of nearly equal brightness without known small additional companions.} \tablenotetext{a}{Approximate absolute magnitude ($H$) or approximate apparent magnitude in a typical optical filter ($V$) of the satellite. These are calculated combining the relative magnitude with the absolute and typical apparent magnitudes of the KBOs from JPL Horizons. These are meant mostly for illustration purposes and generally have significant uncertainties of $\lesssim$1 magnitude.} \tablenotetext{b}{Radius estimate in kilometers, listed for illustration purposes only. Quoted radii for the highly ellipsoidal small satellites of Pluto are volumetric means (S. Porter, pers. comm.). Note that these have albedos of 0.5, somewhat less than assumed for Haumea's moons. For simplicity and ease of inter-comparison, observed moons of Eris and Makemake are given an estimated albedo of 0.7 like the Haumea moons. The actual albedos and sizes of these moons are not well constrained.} \tablenotetext{c}{Approximate semi-major axis in units of thousands of kilometers. For upper-limits, this is the approximate range of semi-major axes where this limit applies.
The discovery of S/2015 (136472) 1 by \citet{2016arXiv160407461P} within the magnitude and distance ``upper-limit'' quoted by \citet{2008ssbn.book..335B} is easily attributed to the difficulty of detecting moons with small semi-major axes and/or edge-on orbits in single-epoch observations, when the actual on-the-sky separation is often small enough to render the moon indistinguishable from the primary \citep{2016arXiv160407461P}. The upper limits reported here should be understood with that caveat. } \tablerefs{ (1) RB09 \citep{2009AJ....137.4766R} \quad (2) $\S$4, this paper \quad (3) \citet{2006Natur.439..943W} \quad (4) \citet{2011IAUC.9221....1S} \quad (5) \citet{2012IAUC.9253....1S} \quad (6) \citet{2007Sci...316.1585B} \quad (7) \citet{2008ssbn.book..335B} \quad (8) \citet{2016arXiv160407461P} } \end{deluxetable*} \subsection{Distant Satellites} Satellites with semi-major axes beyond 350000 km may not have been detected in the Program 12243 WFC3 data due to the small field of view employed for the subarray observations. Other HST and Keck observations that were not as deep covered a larger area and were also searched for satellites. We estimate that satellites larger than about 40 km in radius (again assuming an albedo similar to Haumea's) would have been detected even several tens of arcseconds away by, e.g., the WFPC2 observations (with a field of view of 162"). Because the motion of satellites in this region is negligible over the relevant timescales, the shift-and-stack method is not necessary. Using half the size of Haumea's Hill sphere at perihelion as an estimate of the full region of stable satellites \citep{2008AJ....136.2453S}, the semi-major axis of the most distant stable satellites would be about 4.6 $\times$ $10^6$ km or 124". About half of this region has been covered down to a radius of 40 km. For comparison, the Program 12243 deep observations covered separations up to about 10" around Haumea, or 350000 km. Thus, the deep limit on very small intermediate-range satellites applies only to the inner $\sim$8\% of the stable region in radius (about 0.5\% of its projected area). \section{CONCLUSIONS} By efficient application of the PK10 method for non-linear shift-and-stack and the recovery of implanted sources, we have strongly limited the possibility of undetected satellites in orbit around Haumea. As Figure \ref{svsb} shows, we detect no satellites larger than $\sim$8 km in radius with separations between 10000 and 350000 km. This same region around Pluto contains Charon and four small satellites that, by size, would all have been detected in this search. Nearer to Haumea, diffraction limits make distinguishing small satellites difficult, but there are dynamical reasons to expect that this region is mostly unpopulated. Further from Haumea, other observations would have detected satellites larger than $\sim$40 km in radius within much of the region of possible stable satellites. Significant improvement in the detection limits for smaller satellites would require extensive observations that are unlikely in the foreseeable future, until perhaps deep observations with the James Webb Space Telescope. Though Pluto hosts multiple small moons and some formation theories \citep[e.g.,][]{2008arXiv0802.2951L} predict them in the Haumea system, we find no additional Haumean moons. Considering upper limits from other studies (summarized in Table 1), Nix/Hydra analogues would have been discovered if present around Makemake, and they would be near the detection threshold around Eris.
As the properties of the dwarf planet satellite systems differ significantly, it was not anticipated that Pluto's small satellites would necessarily have counterparts around Haumea, though it seems that Makemake may have a satellite of similar size \citep{2016arXiv160407461P}. Our null result affirms that, for the time being, Pluto is the only known KBO with a retinue of small satellites, though such moons could have been detected, or nearly detected, around all four dwarf planets. This implies that the satellite systems may result from somewhat different formation pathways, although all the dwarf planet satellites are probably connected with a collisional formation. Pluto's small satellite system may be connected with Charon since, from a dynamical perspective, the other dwarf planet satellites are more like small moons than the near-equal-sized Pluto-Charon binary. We demonstrate that the non-linear shift-and-stack is a valuable tool for satellite searches. Using the application techniques developed herein, this method captures the nonlinearity of the orbits of fast-moving satellites close to the primary. We have applied this technique to the search for sub-threshold satellites around Haumea, but it could also be used for other long-observation datasets (PK10). Besides the discovery of new moons, it holds promise for improving astrometric parameters for known faint moving satellites (e.g., precovery observations of Styx and Kerberos). The tractability of the non-linear shift-and-stack also makes it feasible to apply to the general search for KBOs, as originally proposed by PK10. Other applications for improving sensitivity are also possible, e.g., searching for moving exoplanets in direct-imaging campaigns \citep{2013ApJ...771...10M}. To facilitate further analyses, all data and source code used in this project are available upon request. The sensitivity and tractability of the method presented in this work suggest that, when appropriate, it should be applied to other satellite searches in the solar system. The non-detection of small satellites around Haumea increases our understanding of this intriguing object and of the formation and evolution of multiple KBO systems. \acknowledgements We thank Alex Parker, Danielle Hastings, and the anonymous referee for discussions and suggestions that improved the manuscript. DR acknowledges the support of a Harvard Institute for Theory and Computation Fellowship. This work is based on NASA/ESA Hubble Space Telescope Program 12243. Support was provided by NASA through grant HST-GO-12243 from the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \bibliographystyle{apj}
\section{Introduction} One standard implementation of the renormalization group philosophy \cite{Wil} uses block spin transformations. See \cite{KAD,BalLausane,GK,BalPalaiseau,Dim1}. Concretely, suppose we are to control a functional integral on a finite\footnote{Usually, the finite lattice is a ``volume cutoff'' infinite lattice and one wants to get bounds that are uniform in the size of the volume cutoff.} lattice $\cX_-$ of the form \begin{equation}\label{introbstrbasiscfi} \int \smprod_{x\in\cX_-} \sfrac{d\phi^*(x) d\phi(x)}{2\pi i}\, e^{A(\al_1,\cdots,\al_s;\phi^*,\phi)} \end{equation} with an action $A(\al_1,\cdots,\al_s;\phi_*,\phi)$ that is a function of external complex valued fields $\al_1$, $\cdots$, $\al_s$, and the two\footnote{ In the actions, we treat $\phi$ and its complex conjugate $\phi^*$ as independent variables.} complex fields $\phi_*,\phi$ on $\cX_-$. This scenario occurs in \cite{PAR1,PAR2}, where we use block spin renormalization group maps to exhibit the formation of a potential well, signalling the onset of symmetry breaking in a many particle system of weakly interacting Bosons in three space dimensions. (For an overview, see \cite{ParOv}.) For simplicity, we suppress the external fields in this paper. Under the renormalization group approach to controlling integrals like \eqref{introbstrbasiscfi} one successively ``integrates out'' lower and lower energy degrees of freedom. In the block spin formalism this is implemented by considering a decreasing sequence of sublattices of $\cX_-$. The formalism produces, for each such sublattice, a representation of the integral \eqref{introbstrbasiscfi} that is a functional integral whose integration variables are indexed by that sublattice. To pass from the representation associated with one sublattice $\cX\subset\cX_-$, with integration variables $\psi(x)$, $x\in\cX$, to the representation associated to the next coarser sublattice $\cX_+\subset\cX$, with integration variables $\th(y)$, $y\in\cX_+$, one \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item paves $\cX$ by rectangles centered at the points of $\cX_+$ (this is illustrated in the figure below --- the dots, both small and large, are the points of $\cX$ and the large dots are the points of $\cX_+$) and then, \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{unitCrsLatticeB} \caption{The lattices $\cX$ and $\cX_+$} \end{figure} \item for each $y\in\cX_+$ integrates out all values of $\psi$ whose ``average value'' over the rectangle centered at $y$ is equal to $\th(y)$. The precise ``average value'' used is determined by an averaging profile. One uses this profile to define an averaging operator $Q$ from the space $\cH$ of fields on $\cX$ to the space $\cH_+$ of fields on $\cX_+$. One then implements the ``integrating out'' by first, inserting, into the integrand, $1$ expressed as a constant times the Gaussian integral \begin{equation}\label{eqnBSgaussianOne} \int \smprod_{y\in\cX_+} \sfrac{d\th^*(y) d\th(y)}{2\pi i} e^{-b\< \th^*-Q\,\psi_*\,,\, \th-Q\,\psi \>} \end{equation} with some constant $b>0$, and then interchanging the order of the $\th$ and $\psi$ integrals. \end{itemize} For example, in \cite{ParOv,PAR1,PAR2} the model is initially formulated as a functional integral with integration variables indexed by a lattice\footnote{The volume cutoff is determined by $L_\tp$ and $L_\sp$.} $\big(\bbbz/L_\tp\bbbz\big)\times\big(\bbbz^3/L_\sp\bbbz^3\big)$.
After $n$ renormalization group steps this lattice is scaled down to $\cX_n= \big(\sfrac{1}{L^{2n}}\bbbz \big/ \sfrac{L_\tp}{L^{2n}}\bbbz\big) \times \big(\sfrac{1}{L^n}\bbbz^3 \big/\sfrac{L_\sp}{L^n}\bbbz^3 \big)$. The decreasing family of sublattices is $\cX_{j}^{(n-j)} = \big(\sfrac{1}{L^{2j}}\bbbz \big/ \sfrac{L_\tp}{L^{2n}}\bbbz\big) \times \big(\sfrac{1}{L^{j}}\bbbz^3 \big/ \sfrac{L_\sp}{L^n}\bbbz^3 \big)$, $j=n$, $n-1$, $\cdots$. The abstract lattices $\cX_-$, $\cX$, $\cX_+$ in the above framework correspond to $\cX_n$, $\cX_0^{(n)}$ and $\cX_{-1}^{(n+1)}$, respectively. Return to the abstract setting. The integral is often controlled using stationary phase/steepest descent. The contributions to the integral that come from integration variables close to their critical values are called ``small field'' contributions. At the end of every step, the small field contribution to the original integral \eqref{introbstrbasiscfi} is, up to a multiplicative normalization constant\footnote{See Remark \ref{remBSactionRecur} for the core of the recursion responsible for this form.}, of the form \begin{equation}\label{introbstrstepnfi} \int \smprod_{x\in\cX} \sfrac{d\psi^*(x) d\psi(x)}{2\pi i}\, e^{-\< \psi^*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> - \fA(\phi_*,\phi) +\cE(\psi^*,\psi) } \bigg|_{\atop{\phi_* = \phi_{*\rm bg}(\psi^*,\psi)} {\phi = \phi_{\rm bg}(\psi^*,\psi)}} \end{equation} where \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item $Q_-$ is an averaging operator that maps the space $\cH_-$ of fields on $\cX_-$ to the space $\cH$ of fields on $\cX$. It is the composition of the averaging operations for all previous steps. \item the exponent $\< \psi^*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> $ is a residue of the exponents in the Gaussian integrals \eqref{eqnBSgaussianOne} inserted in the previous steps. The operator\footnote{See Remark \ref{remBSactionRecur} for the recursion relation that builds $\fQ$.} $\fQ$ is bounded and boundedly invertible on $L^2(\cX)$. \item the ``background fields'' \begin{equation*} (\psi_*,\psi) \mapsto \phi_{*\rm bg}(\psi_*,\psi) \qquad (\psi_*,\psi) \mapsto \phi_{\rm bg}(\psi_*,\psi) \end{equation*} map sufficiently small fields $\psi_*$, $\psi$ on $\cX$ to fields on $\cX_-$. They are the concatenation\footnote{See Proposition \ref{propBSconcatbackgr}.c for the recursion relation that builds $\phi_{(*)\rm bg}$.} of ``steepest descent'' critical field maps for all previous steps. \item $\,\fA(\phi_*,\phi)\,$, the ``dominant part'' of the action, is an explicit function of $\phi_*,\phi\in\cH_-$ \item $\,\cE(\psi_*,\psi)\,$ is the contribution to the action that consists of ``perturbative corrections''. It is an analytic function of $\psi_*,\psi\in\cH$.
\end{itemize} The next block spin renormalization group step then consists of \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item rewriting \eqref{introbstrstepnfi}, by inserting $1$ expressed as a constant times \eqref{eqnBSgaussianOne}, as \begin{equation}\label{eqnBSintFunctInt} \begin{split} &\int \smprod_{y\in\cX_+} \sfrac{d\th^*(y) d\th(y)}{2\pi i} \int \smprod_{x\in\cX} \sfrac{d\psi^*(x) d\psi(x)}{2\pi i}\, e^{-b\< \th^*-Q\,\psi_*\,,\, \th-Q\,\psi \>} \\ \noalign{\vskip-0.2in} &\hskip2in e^{-\< \psi^*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> - \fA(\phi_*,\phi) +\cE(\psi^*,\psi) } \bigg|_{\atop{\phi_* = \phi_{*\rm bg}(\psi^*,\psi)} {\phi = \phi_{\rm bg}(\psi^*,\psi)}} \end{split} \end{equation} up to a multiplicative normalization constant, \item and performing a stationary phase argument, for the $\psi$ integral, around appropriate critical fields\footnote{$\psi_{*\rm cr}(\th_*,\th)$ and $\psi_{\rm cr}(\th_*,\th)$ need not be complex conjugates of each other} $\psi_{*\rm cr}(\th_*,\th)$, $\psi_{\rm cr}(\th_*,\th)$ that map sufficiently small fields $\theta_*$, $\theta$ on $\cX_+$ to fields on $\cX$. \end{itemize} \vskip .3cm In this paper, we discuss some purely algebraic aspects of the block spin renormalization group in an abstract setting. We derive some ``well known'' identities like, in Proposition \ref{propBSconcatbackgr}.c, the composition rule, and, in Proposition \ref{propBSconcatbackgr}.a, the relation between critical fields and background fields, and, in Lemma \ref{lemBSdeltaAalernew}, a formula for the dominant part of the action in the fluctuation integral. They are used in Proposition \propBGAomnibus.b, Proposition \propBGAomnibus.a, and Lemma \lemSTdeA.a of \cite{PAR1}, respectively. We use the following abstract environment: \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item Let $H_-$, $H$, $H_+$ be finite dimensional, real vector spaces with positive definite symmetric bilinear forms $\<\,\cdot\,,\,\cdot\,\>_-$, $\<\,\cdot\,,\,\cdot\,\>$, $\<\,\cdot\,,\,\cdot\,\>_+$. These bilinear forms extend to nondegenerate bilinear forms on their complexifications $\cH_-$, $\mathcal{H}$, $\cH_+$. Think of $\,H_-$, $H$ and $H_+$ as being the vector spaces of real valued functions on the finite lattices $\cX_-$, $\cX$ and $\cX_+$, respectively, and think of the complexifications $\cH_-$, $\mathcal{H}$, $\cH_+$ as being $L^2(\cX_-)$, $L^2(\cX)$ and $L^2(\cX_+)$ respectively. \item Let $d\mu_{\cH}(\phi^*,\phi)$ be the volume form on $\cH$ determined by its bilinear form. If $\cH=L^2(\cX)$, then $ d\mu_{\cH}(\phi^*,\phi)=\smprod_{x\in \cX} \sfrac{ d\phi(x)^\ast\wedge d\phi(x)}{2\pi \imath} $. \item Let \begin{equation*} Q_-: H_- \rightarrow H \qquad Q: H \rightarrow H_+ \end{equation*} be linear maps. They induce $\bbbc$ linear maps between $\cH_-$, $\mathcal{H}$, $\cH_+$ which are denoted by the same letter. We set \begin{equation*} {\check Q}_- = Q\circ Q_- \end{equation*} \item Fix $b>0$ and a strictly positive definite (real) symmetric linear operator, $\fQ$, on $H$. \item Let $\fA$ be a polynomial on $\cH_-\times\cH_-$. 
\end{itemize} Set, for $\phi_*,\phi \in \cH_-$, $\psi_*,\psi \in \mathcal{H}$ and $\th_*,\th\in \cH_+$ \begin{align*} \cA(\psi_*,\psi;\phi_*,\phi) &= \< \psi_*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> + \fA(\phi_*,\phi) \\ \cA_\eff(\th_*,\th;\psi_*,\psi;\phi_*,\phi) &= b \< \th_*-Q\psi_*\,,\, \th-Q\psi \>_+ + \cA(\psi_*,\psi;\phi_*,\phi) \\ \check\cA(\th_*,\th;\phi_*,\phi) &= \big< \th_*-{\check Q}_-\,\phi_*\,,\,\check \fQ \big( \th-{\check Q}_-\,\phi\big) \big>_+ + \fA(\phi_*,\phi) \end{align*} where \begin{equation}\label{eqnBSfQrecursion} \check \fQ= \big(\sfrac{1}{b}\bbbone_{\cH_+}+Q\,\fQ^{-1}Q^*\big)^{-1} \end{equation} \begin{remark}\label{remBSactionRecur} In this setting, the action of the functional integral \eqref{introbstrstepnfi} that appears at the beginning of the renormalization group step is \begin{equation*} -\< \psi^*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> - \fA(\phi_*,\phi) +\cE(\psi^*,\psi) =-\cA(\psi^*,\psi;\phi_*,\phi) +\cE(\psi^*,\psi) \end{equation*} and the action of the functional integral \eqref{eqnBSintFunctInt} that appears in the middle of the renormalization group step is \begin{align*} &-b\< \th^*-Q\,\psi_*\,,\, \th-Q\,\psi \>_+ -\< \psi^*-Q_-\,\phi_*\,,\, \fQ(\psi-Q_-\,\phi) \> - \fA(\phi_*,\phi) +\cE(\psi^*,\psi) \\ &\hskip2in = -\cA_\eff(\th^*,\th;\psi^*,\psi;\phi_*,\phi) +\cE(\psi^*,\psi) \end{align*} We show in Proposition \ref{propBSconcatbackgr}.b, below, that when one substitutes the critical $\psi$ into $\cA_\eff$ one gets $\check\cA$. Upon scaling (and renormalizing) $\check\cA$ becomes the $\cA$ for the beginning of the next renormalization group step. Equation \eqref{eqnBSfQrecursion} is the recursion relation that builds the operator $\fQ$ in $\cA(\psi_*,\psi;\phi_*,\phi) $. \end{remark} \begin{remark}\label{remotherrepcheckfQ} $\ \ \ \ \check\fQ =b \big[ \bbbone_{\cH_+} -bQ{\big(bQ^* Q+\fQ\big)}^{-1}Q^*\big]\,$ \end{remark} \begin{proof} Apply Lemma \ref{lembSabstrlinalg} with $V=\cH$, $W=\cH_+$, $q=Q$, $q_*=Q^*$, $f=\fQ$ and $g=b\bbbone_W$. \end{proof} \pagebreak[2] \begin{definition}\label{defBSbackfld} \ \begin{enumerate}[label=(\alph*), leftmargin=*] \item Let $\cN$ be a domain in $\cH$ which is invariant under complex conjugation. ``Background fields on $\cN$'' are maps $ \phi_{*\bg},\phi_\bg:\cN\times\cN\rightarrow \cH_- $ such that, for each $(\psi_*,\psi) \in \cN\times\cN\,$, the point $\,\big(\phi_{*\bg}(\psi_*,\psi),\,\phi_\bg(\psi_*,\psi)\big)$ is a critical point of the map \begin{equation*} (\phi_*,\phi) \mapsto \cA( \psi_*,\psi;\,\phi_*,\phi) \end{equation*} That is, it solves \begin{equation}\label{eqnBSbckgndequ} \begin{split} Q_-^* \fQ\,Q_- \phi_*+\nabla_{\phi}\fA (\phi_*,\phi) &= Q_-^* \fQ \psi_*\\ Q_-^* \fQ\,Q_- \phi+\nabla_{\phi_*}\fA(\phi_*,\phi) &= Q_-^* \fQ\psi \end{split} \end{equation} ``Formal background fields'' are formal power series $\phi_{*\bg}(\psi_*,\psi),\,\phi_\bg(\psi_*,\psi)$, in $(\psi_*,\psi)$ with vanishing constant terms, that solve \eqref{eqnBSbckgndequ}. \item Let $\,\cN_+\,$ and $\cN$ be domains in $\cH_+$ and $\cH$, respectively, which are invariant under complex conjugation. Let $\phi_{*\bg},\phi_\bg$ be background fields on $\cN$.
``Critical fields on $\cN_+$ with respect to $\phi_{*\bg},\phi_\bg$'' are maps $ \psi_{*\mathrm{cr}}, \psi_{\mathrm{cr}}:\cN_+\times\cN_+\rightarrow \cN $ such that, for each $ (\th_*,\th) \in \cN_+ \times \cN_+$, the point $\, \big(\psi_{*\mathrm{cr}}(\th_*,\th), \psi_{\mathrm{cr}}(\th_*,\th)\big)\,$ is a critical point for the map \begin{equation*} \ (\psi_*,\psi) \mapsto \cA_\eff(\th_*,\th;\psi_*,\psi; \phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)) \end{equation*} That is, it solves \begin{equation}\label{eqnBScritpointequ} \begin{split} (bQ^* Q+\fQ)\psi_* &= bQ^*\th_* +\fQ\,Q_-\,\phi_{*\rm bg}(\psi_*,\psi)\\ (bQ^* Q+\fQ)\psi &= bQ^*\th +\fQ\,Q_-\,\phi_{\rm bg}(\psi_*,\psi) \end{split} \end{equation} If $\phi_{*\bg},\phi_\bg$ are formal background fields, then ``formal critical fields with respect to $\phi_{*\bg},\phi_\bg$'' are formal power series $\psi_{*\mathrm{cr}}(\th_*,\th), \psi_{\mathrm{cr}}(\th_*,\th)$, in $(\th_*,\th)$ with vanishing constant terms, that solve \eqref{eqnBScritpointequ}. \item Let $\cN_+$ be a domain in $\cH_+$ which is invariant under complex conjugation. ``Next scale background fields on $\cN_+$'' are maps $ \check\phi_{*\bg},\check\phi_\bg:\cN_+\times\cN_+\rightarrow \cH_- $ such that, for each $(\th_*,\th) \in \cN_+\times\cN_+\,$, the point $\,\big(\check\phi_{*\bg}(\th_*,\th),\,\check\phi_\bg(\th_*,\th)\big)$ is a critical point of the map \begin{equation*} (\phi_*,\phi) \mapsto \check\cA( \th_*,\th;\,\phi_*,\phi) \end{equation*} That is, it solves \begin{equation}\label{eqnBSnsbckgndequ} \begin{split} {\check Q}_-^* \check \fQ\,{\check Q}_- \check\phi_* +\nabla_{\check\phi}\fA (\check\phi_*,\check\phi) &= {\check Q}_-^* \check \fQ\, \th_*\\ {\check Q}_-^* \check \fQ\,{\check Q}_-\check\phi +\nabla_{\check\phi_*}\fA(\check\phi_*,\check\phi) &= {\check Q}_-^* \check \fQ\,\th \end{split} \end{equation} Formal power series $\check\phi_{*\bg}(\th_*,\th)$, $\check\phi_\bg(\th_*,\th)$, in $(\th_*,\th)$ with vanishing constant terms, that solve \eqref{eqnBSnsbckgndequ} are called ``formal next scale background fields''. \end{enumerate} \end{definition} \begin{proposition}\label{propBSconcatbackgr} Let $\,\cN_+\,$ and $\cN$ be domains in $\cH_+$ and $\cH$, respectively, which are invariant under complex conjugation. Let $\phi_{*\bg},\phi_\bg$ be background fields on $\cN$ and $\psi_{*\mathrm{cr}},\psi_\mathrm{cr}$ be critical fields on $\cN_+$ with respect to $\phi_{*\bg},\phi_\bg$. 
Define the composition \begin{equation}\label{eqnBScheckphibgde} \begin{split} \check\phi_{*\mathrm{cp}}(\th_*,\th) &= \phi_{*\bg}\big(\psi_{*\mathrm{cr}}(\th_*,\th),\psi_{\mathrm{cr}}(\th_*,\th)\big) \\ \check\phi_\mathrm{cp}(\th_*,\th) &= \phi_\bg\big(\psi_{*\mathrm{cr}}(\th_*,\th),\psi_{\mathrm{cr}}(\th_*,\th)\big) \end{split} \end{equation} Then, for all $ (\th_*,\th) \in \cN_+ \times \cN_+$, \begin{enumerate}[label=(\alph*), leftmargin=*] \item $\, \big(\psi_{*\mathrm{cr}}(\th_*,\th), \psi_{\mathrm{cr}}(\th_*,\th)\big)\,$ fulfils the equations \begin{align*} \psi_{*\mathrm{cr}}(\th_*,\th) &={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th_* +\fQ\,Q_-\,\check\phi_{*\mathrm{cp}}(\th_*,\th)\big)\\ \psi_\mathrm{cr}(\th_*,\th) &={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th +\fQ\,Q_-\,\check\phi_\mathrm{cp}(\th_*,\th)\big) \end{align*} \item The effective action \begin{align*} &\cA_\eff\big(\th_*,\th;\psi_{*\mathrm{cr}}(\th_*,\th),\psi_{\mathrm{cr}}(\th_*,\th); \check\phi_{*\mathrm{cp}}(\th_*,\th), \check\phi_\mathrm{cp}(\th_*,\th)\big) \\ &\hskip2.5in=\check\cA(\th_*,\th; \check\phi_{*\mathrm{cp}}(\th_*,\th), \check\phi_\mathrm{cp}(\th_*,\th)) \end{align*} \item $\check\phi_{*\mathrm{cp}}(\th_*,\th)\,,\, \check\phi_\mathrm{cp}(\th_*,\th)$ are next scale background fields on $\cN_+$. \item For any continuous function $\,\cE(\psi_*,\psi)\,$ on $\cN\times\cN$ \begin{align*} &\int_{\cN\times\cN} \! d\mu_{\mathcal{H}}(\psi^*,\psi) \ e^{-\cA(\psi^*,\psi; \phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)) +\cE(\psi^*,\psi) }\\ &\hskip0.2in=b^{\dim\cH_+} \bigg\{\int_{\cN_+\times\cN_+}\hskip-15pt d\mu_{\cH_+}(\th^*,\th)\ e^{- \check\cA(\th^*,\th; \check\phi_{*\mathrm{cp}}(\th^*,\th), \check\phi_\mathrm{cp}(\th^*,\th)) }\ e^{\cE(\psi_{*\mathrm{cr}}(\th^*,\th),\psi_{\mathrm{cr}}(\th^*,\th))} \cF(\th^*,\th) \\ & \hskip0.3in +\int_{(\cH_+\times\cH_+)\setminus (\cN_+\times\cN_+)} \hskip -40pt d\mu_{\cH_+}(\th^*,\th) \int_{\cN\times\cN} \hskip -15pt d\mu_{\mathcal{H}}(\psi^*,\psi) \, e^{- \cA_\eff(\th^*,\th;\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi), \phi_\bg(\psi^*,\psi)) + \cE(\psi^*,\psi) }\bigg\} \end{align*} where the fluctuation integral \begin{align*} \cF(\th_*,\th)&=\int_{\cD(\th_*,\th)} d\mu_{\mathcal{H}}(\de\psi_*,\de\psi) \, e^{-\de \cA(\th_*,\th;\de\psi_*,\de\psi) } e^{\de\cE(\th_*,\th;\de\psi_*,\de\psi) } \end{align*} Here the functions $\de\cA$ and $\de\cE$ are given by \begin{align*} \de \cA(\th_*,\th;\de\psi_*,\de\psi) &=\cA_\eff\big(\th_*,\th;\psi_*,\psi; \phi_{*\bg}(\psi_{*},\psi), \phi_\bg(\psi_*,\psi)\big) \Big|^{\psi_*=\psi_{*\mathrm{cr}}+\de\psi_*,\ \psi=\psi_{\mathrm{cr}}+\de\psi} _{\psi_*=\psi_{*\mathrm{cr}},\ \psi=\psi_{\mathrm{cr}}}\\ \de \cE(\th_*,\th;\de\psi_*,\de\psi) &=\cE\big(\psi_*,\psi\big) \Big|^{\psi_*=\psi_{*\mathrm{cr}}+\de\psi_*,\ \psi=\psi_{\mathrm{cr}}+\de\psi} _{\psi_*=\psi_{*\mathrm{cr}},\ \psi=\psi_{\mathrm{cr}}} \end{align*} with $\psi_{*\mathrm{cr}}=\psi_{*\mathrm{cr}}(\th_*,\th)$, $\psi_\mathrm{cr}=\psi_\mathrm{cr}(\th_*,\th)$, and the domain \begin{align*} \cD(\th_*,\th)&=\set{(\de\psi_*,\de\psi)\in\mathcal{H}\times\mathcal{H}} {\psi_{*\mathrm{cr}}(\th_*,\th)+\de\psi_* =\big(\psi_{\mathrm{cr}}(\th_*,\th)+\de\psi\big)^*\in\cN} \end{align*} \end{enumerate} \end{proposition} \noindent The formal power series versions of parts (a), (b) and (c) of Proposition \ref{propBSconcatbackgr} are { \renewcommand{\thetheorem}{\ref{propBSconcatbackgr}'} \begin{proposition} Let $\phi_{*\bg},\phi_\bg$ be formal background fields and $\psi_{*\mathrm{cr}},\psi_\mathrm{cr}$ be formal critical fields with respect 
to $\phi_{*\bg},\phi_\bg$. Set\footnote{We routinely use the ``optional $*$'' notation $\al_{(*)}$ to denote ``$\al_*$ or $\al$''. The equation ``$\al_{(*)}=\be_{(*)}$'' means ``$\al_*=\be_*$ and $\al=\be$''. } \begin{equation} \check\phi_{(*)\mathrm{cp}}(\th_*,\th) = \phi_{(*)\bg}\big(\psi_{*\mathrm{cr}}(\th_*,\th),\psi_{\mathrm{cr}}(\th_*,\th)\big) \tag{\ref{eqnBScheckphibgde}'} \end{equation} \begin{enumerate}[label=(\alph*), leftmargin=*] \item $\, \big(\psi_{*\mathrm{cr}}(\th_*,\th), \psi_{\mathrm{cr}}(\th_*,\th)\big)\,$ fulfils the equations \begin{align*} \psi_{(*)\mathrm{cr}}(\th_*,\th) &={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th_{(*)} +\fQ\,Q_-\,\check\phi_{(*)\mathrm{cp}}(\th_*,\th)\big) \end{align*} \item The effective action \begin{align*} &\cA_\eff\big(\th_*,\th;\psi_{*\mathrm{cr}}(\th_*,\th),\psi_{\mathrm{cr}}(\th_*,\th); \check\phi_{*\mathrm{cp}}(\th_*,\th), \check\phi_\mathrm{cp}(\th_*,\th)\big) \\ &\hskip2.5in=\check\cA(\th_*,\th; \check\phi_{*\mathrm{cp}}(\th_*,\th), \check\phi_\mathrm{cp}(\th_*,\th)) \end{align*} \item $\check\phi_{*\mathrm{cp}}(\th_*,\th)\,,\, \check\phi_\mathrm{cp}(\th_*,\th)$ are formal next scale background fields. \end{enumerate} \end{proposition} \addtocounter{theorem}{-1} } \noindent The proof of these Propositions will be given after Lemma \ref{lemBSpreparation}. \begin{remark}\label{remBSremarkonbackgroundfields} \ \begin{enumerate}[label=(\alph*), leftmargin=*] \item Part (c) of the Proposition is often called the ``composition rule''. \item In applications, the domain $\cN_+$ is chosen so that the second integral on the right hand side of the formula in part (d) is small. In that integral either $\th$ or $\th_*$ is bounded away from the origin (``large fields''). \item As in Proposition \ref{propBSconcatbackgr}', let $\phi_{*\bg},\phi_\bg$ be formal background fields and $\psi_{*\mathrm{cr}},\psi_\mathrm{cr}$ be formal critical fields with respect to $\phi_{*\bg},\phi_\bg$. Assume, in addition, that the equations \eqref{eqnBSnsbckgndequ}, for the next scale background fields, have a unique formal power series solution, that we denote $\check\phi_{*\bg},\check\phi_\bg$. Then by part (c) of Proposition \ref{propBSconcatbackgr}', $ \check\phi_{(*)\bg}(\th_*,\th) = \check\phi_{(*)\mathrm{cp}}(\th_*,\th) $ and, by part (a) of Proposition \ref{propBSconcatbackgr}', \begin{align*} \psi_{(*)\mathrm{cr}}(\th_*,\th) &={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th_{(*)} +\fQ\,Q_-\,\check\phi_{(*)\bg}(\th_*,\th)\big) \end{align*} If, in addition, $\check\phi_{(*)\bg}(\th_*,\th)$ are analytic functions on some domain, then so are $\psi_{(*)\mathrm{cr}}(\th_*,\th)$. So to construct analytical critical fields, it suffices to have \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item uniqueness of formal power series solutions to the next scale background field equations \item existence of analytic solutions to the next scale background field equations \item formal background fields \item formal critical fields with respect to the formal background fields \end{itemize} Lemma \ref{lemBSuniquefps}, below, provides existence and uniqueness for formal power series solutions of the critical field equations. 
\end{enumerate} \end{remark} \begin{lemma}\label{lemBSuniquefps} Let $\phi_{*\bg},\phi_\bg$ be formal background fields of the form \begin{equation*} \phi_{(*)\bg}(\psi_*,\psi) = L_{(*)}\psi_{(*)} + \phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi) \end{equation*} with $\phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi)$ being of degree at least two\footnote{By this we mean that each nonzero monomial in $\phi_{(*)\bg}^{(\ge 2)}$ has degree at least two. } in $(\psi_*,\psi)$ and with the $L_{(*)}$'s being linear operators. If the linear operators $bQ^*Q+\fQ - \fQ Q_-L_{(*)}$ are invertible, then there exist unique formal critical fields with respect to $\phi_{*\bg},\phi_\bg$. \end{lemma} \begin{proof} Rewrite the equations \eqref{eqnBScritpointequ} in the form \begin{align*} (bQ^* Q+\fQ- \fQ Q_-L_*)\psi_* &= bQ^*\th_* +\fQ\,Q_-\,\phi_{*\rm bg}^{(\ge 2)}(\psi_*,\psi)\\ (bQ^* Q+\fQ- \fQ Q_-L)\psi &= bQ^*\th +\fQ\,Q_-\,\phi_{\rm bg}^{(\ge 2)}(\psi_*,\psi) \end{align*} As $\psi_*$ and $\psi$ are to have vanishing constant terms, this provides a ``lower triangular'' recursion relation for the coefficients of $(\psi_*,\psi)$. As $\cH$ and $\cH_+$ are finite dimensional, this recursion relation trivially generates a unique solution. \end{proof} The proof of Proposition \ref{propBSconcatbackgr} is based on \begin{lemma}\label{lemBSpreparation} For $ \phi_*,\phi \in \cH_-$ and $\th_*,\th \in \cH_+$ set \begin{equation*} \tilde\psi_{(*)}(\th_{(*)},\phi_{(*)})={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th_{(*)} +\fQ\,Q_-\,\phi_{(*)}\big) \end{equation*} Then $ \check\cA\big( \th_*,\th;\,\phi_*,\phi\big) =\cA_\eff\big(\th_*,\th; \tilde\psi_*(\th_*,\phi_*),\tilde\psi(\th,\phi);\,\phi_*,\phi\big) $ and \begin{equation}\label{eqnBSpreparationgrad} \begin{split} &(\nabla_{\phi_{(*)}} \check\cA)(\th_*,\th;\phi_*,\phi) \\ &\hskip0.5in=(\nabla_{\phi_{(*)}} \cA)\big(\tilde\psi_*(\th_*,\phi_*),\tilde\psi(\th,\phi);\phi_*,\phi\big) \\ &\hskip1in + Q_-^*\fQ\,(bQ^* Q+\fQ)^{-1} \big[ (\nabla_{\psi_{(*)}} \cA_\eff)\big(\th_*,\th; \tilde\psi_*(\th_*,\phi_*),\tilde\psi(\th,\phi);\,\phi_*,\phi\big)\big] \end{split} \end{equation} \end{lemma} \begin{proof} With the abbreviation $\,\tilde \psi_{(*)} = \tilde\psi_{(*)}(\th_{(*)},\phi_{(*)})$ \begin{align*} \th-Q\tilde\psi &=\th - Q{(bQ^* Q+\fQ)}^{-1}\big(bQ^*\th +\fQ\,Q_-\,\phi\big)\\ &= \big[\bbbone-bQ{(bQ^* Q+\fQ)}^{-1}Q^*\big]\th -{\check Q}_-\,\phi +QQ_-\,\phi-Q{(bQ^* Q+\fQ)}^{-1}\fQ\,Q_-\,\phi\\ &= \big[\bbbone-bQ{(bQ^* Q+\fQ)}^{-1}Q^*\big]\th -{\check Q}_-\,\phi\\ &\hskip1in +Q{(bQ^* Q+\fQ)}^{-1} \big[(bQ^* Q+\fQ)-\fQ\big]Q_-\,\phi\\ &= \big[\bbbone-bQ{(bQ^* Q+\fQ)}^{-1}Q^*\big] \big(\th -{\check Q}_-\,\phi\big)\\ \tilde\psi-Q_-\,\phi &= {(bQ^* Q+\fQ)}^{-1}\big(bQ^*\th +\fQ\,Q_-\,\phi\big) -Q_-\,\phi\\ &= {(bQ^* Q+\fQ)}^{-1}\big(bQ^*\th +\fQ\,Q_-\,\phi -bQ^* QQ_-\,\phi-\fQ\,Q_-\,\phi\big)\\ &= b{(bQ^* Q+\fQ)}^{-1}Q^*\big(\th -{\check Q}_-\,\phi\big) \end{align*} Therefore \begin{align*} &\check\cA\big( \th_*,\th;\,\phi_*,\phi\big) -\cA_\eff\big(\th_*,\th; \tilde\psi_*,\tilde\psi;\,\phi_*,\phi\big) \\ &\hskip 0.75cm = \big< \th_*-{\check Q}_-\,\phi_*\,,\,\check\fQ \big( \th-{\check Q}_-\,\phi\big) \big>_+ -b \big< \th_*-Q\tilde\psi_*\,,\, \th-Q\tilde\psi \big>_+\\ &\hskip 7cm - \big< \tilde\psi_*-Q_-\,\phi_*\,,\, \fQ(\tilde\psi-Q_-\,\phi) \big>\\ &\hskip 0.75cm = b\big< \th_*-{\check Q}_-\,\phi_*\,,\, \cO \big( \th-{\check Q}_-\,\phi\big) \big>_+ \end{align*} where, by Remark \ref{remotherrepcheckfQ}, \begin{align*} \cO&= \big[ \bbbone -bQ{\big(bQ^* Q+\fQ\big)}^{-1}Q^*\big] - \big[\bbbone-bQ{(bQ^* 
Q+\fQ)}^{-1}Q^*\big]^2\cr&\hskip0.5in -bQ{(bQ^* Q+\fQ)}^{-1}\fQ{(bQ^* Q+\fQ)}^{-1}Q^* \\ &= b\big[ \bbbone -bQ{\big(bQ^* Q+\fQ\big)}^{-1}Q^*\big] Q{(bQ^* Q+\fQ)}^{-1}Q^*\cr&\hskip0.5in -bQ{(bQ^* Q+\fQ)}^{-1}\fQ{(bQ^* Q+\fQ)}^{-1}Q^* \\ &= bQ\big[\bbbone -{\big(bQ^* Q+\fQ\big)}^{-1}bQ^* Q -{(bQ^* Q+\fQ)}^{-1}\fQ\big] {(bQ^* Q+\fQ)}^{-1}Q^*\\ &=0 \end{align*} This proves the first statement. The second follows by the chain rule and the observation that $\,\nabla_{\phi_{(*)}} \cA_\eff =\nabla_{\phi_{(*)}} \cA\,$. \end{proof} \begin{proof}[Proof of Propositions \ref{propBSconcatbackgr} and \ref{propBSconcatbackgr}'] The proof of Proposition \ref{propBSconcatbackgr}' is virtually identical to that of Proposition \ref{propBSconcatbackgr}.a,b,c, so we just give the proof of Proposition \ref{propBSconcatbackgr}. Part (a) follows immediately from \eqref{eqnBScritpointequ} and \eqref{eqnBScheckphibgde}. Now evaluate the conclusions of Lemma \ref{lemBSpreparation} at $\,\phi_{(*)}= \check\phi_{(*)\mathrm{cp}}(\th_*,\th)\,$. The formula for $\check\cA$ in Lemma \ref{lemBSpreparation} directly gives part (b). The right hand side of \eqref{eqnBSpreparationgrad} vanishes upon this evaluation by parts (a) and (b) of Definition \ref{defBSbackfld}. This shows that $\,\big(\check\phi_{*\mathrm{cp}}(\th_*,\th)\,,\, \check\phi_\mathrm{cp}(\th_*,\th)\big)\,$ is critical for the map $ \,(\phi_*,\phi) \mapsto \check\cA\big( \th_*,\th;\,\phi_*,\phi\big)\, $, which proves part (c). Now \begin{align*} &b^{-\dim\cH_+}\int_{\cN\times\cN} \! d\mu_{\mathcal{H}}(\psi^*,\psi) \,e^{ -\cA(\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi),\phi_\bg(\psi^*,\psi)) +\cE(\psi^*,\psi) }\\ &\hskip0.1in=\int\! d\mu_{\cH_+}(\th^*,\th) \!\int_{\cN\times\cN} \hskip -15pt d\mu_{\mathcal{H}}(\psi^*,\psi) \, e^{- b \< \th^*-Q\psi^*\,,\, \th-Q\psi \>_+ -\cA(\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi),\phi_\bg(\psi^*,\psi)) \,+\,\cE(\psi^*,\psi) }\\ &\hskip0.1in=\int d\mu_{\cH_+}(\th^*,\th)
\int_{\cN\times\cN} \hskip -12pt d\mu_{\mathcal{H}}(\psi^*,\psi) \, e^{- \cA_\eff(\th^*,\th;\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi), \phi_\bg(\psi^*,\psi)) + \cE(\psi^*,\psi) }\\ &\hskip0.1in= \int_{\cN_+\times\cN_+} d\mu_{\cH_+}(\th^*,\th) \int_{\cN\times\cN} \hskip -12pt d\mu_{\mathcal{H}}(\psi^*,\psi) \, e^{- \cA_\eff(\th^*,\th;\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi), \phi_\bg(\psi^*,\psi)) + \cE(\psi^*,\psi) } \\ &\hskip0.2in+\int_{\cH_+\times\cH_+\setminus \cN_+\times\cN_+} \hskip -30pt d\mu_{\cH_+}(\th^*,\th) \int_{\cN\times\cN} \hskip -10pt d\mu_{\mathcal{H}}(\psi^*,\psi) \, e^{- \cA_\eff(\th^*,\th;\psi^*,\psi; \phi_{*\bg}(\psi^*,\psi), \phi_\bg(\psi^*,\psi)) \,+\, \cE(\psi^*,\psi) } \end{align*} Making the change of variables $\psi^*=\psi_{*\mathrm{cr}}(\th^*,\th)+\de\psi_*$, $\psi=\psi_\mathrm{cr}(\th^*,\th)+\de\psi$ in the inner integral of the upper line and applying part (b) gives part (d). \end{proof} From now on we assume that the function $\fA(\phi_*,\phi)$ in the definitions of $\cA$ and $\check\cA$ is of the form \begin{equation}\label{eqnBSpolyAction} \fA(\phi_*,\phi)=\<\phi_*,D\phi\>_-+P(\phi_*,\phi) \end{equation} where \begin{itemize}[leftmargin=*, topsep=2pt, itemsep=0pt, parsep=0pt] \item $P$ is a polynomial whose nonzero monomials are each of degree at least two and \item $D$ a linear operator on $\cH_-$ such that both the operators $\,(D+Q_-^* \fQ\,Q_- )\,$ and $\,(D+{\check Q}_-^* \check \fQ\,{\check Q}_-)\,$ are invertible. We define the ``Green's functions'' \begin{equation}\label{eqnBSdefinitionScheckS} S=(D+Q_-^* \fQ\,Q_-)^{-1} \qquad \qquad {\check S}=(D+{\check Q}_-^* \check \fQ\,{\check Q}_-)^{-1} \end{equation} \end{itemize} We think of $D$ as a differential operator, possibly shifted by a chemical potential. \begin{remark}\label{remBSremarkonbackgroundfieldsB} In this setting, the background field equations \eqref{eqnBSbckgndequ} become \begin{equation} \phi_{(*)} = S^{(*)} Q_-^* \fQ \psi_{(*)} - S^{(*)}P'_{(*)} (\phi_*,\phi) \tag{\ref{eqnBSbckgndequ}'} \end{equation} where $P'_*(\phi_*,\phi)= \nabla_{\phi}P (\phi_*,\phi)$ and $P'(\phi_*,\phi)= \nabla_{\phi_*}P (\phi_*,\phi)$. Similarly, the next scale background field equations \eqref{eqnBSnsbckgndequ} become \begin{equation} \check\phi_{(*)} = \check S^{(*)}{\check Q}_-^* \check \fQ\, \th_{(*)} - \check S^{(*)}P'_{(*)} (\check\phi_*,\check\phi) \tag{\ref{eqnBSnsbckgndequ}'} \end{equation} \end{remark} We now continue with our study of the critical field, following the plan of Remark \ref{remBSremarkonbackgroundfields}.c. To describe the leading part of the critical field, we set \begin{equation}\label{eqnBSdefDe} \De = \fQ -\fQ\,Q_- S Q_-^* \fQ :\ \cH \longrightarrow \cH \end{equation} From now on we assume that $\,\De + bQ^* Q\,$ is invertible and define\footnote{We shall show, in Lemma \ref{lemBSdeltaAalernew}, below, that $C$ is the covariance for the fluctuation integral.} the ``covariance'' \begin{equation}\label{eqnBSdefCascovariance} C=(\De + bQ^* Q)^{-1}:\ \cH \longrightarrow \cH \end{equation} \begin{proposition}\label{propFormalFldSlns} Assume that in the setting \eqref{eqnBSpolyAction}, each nonzero monomial of $P$ is of degree at least three. Then there exist unique formal background fields $\phi_{(*)\bg}$ and unique formal next scale background fields $\check\phi_{(*)\bg}$. 
They are of the form \begin{align*} \phi_{(*)\bg}(\psi_*,\psi)& = S^{(*)} Q_-^* \fQ \psi_{(*)} + \phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi) \\ \check\phi_{(*)\bg}(\th_*,\th)& = \check S^{(*)} \check Q_-^* \check \fQ \th_{(*)} + \check\phi_{(*)\bg}^{(\ge 2)}(\th_*,\th) \end{align*} with $\phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi)$ and $\check\phi_{(*)\bg}^{(\ge 2)}(\th_*,\th)$ being of degree at least two. Furthermore, there are unique formal critical fields with respect to $\phi_{(*)\bg}$. They are of the form \begin{align*} \psi_{(*)\mathrm{cr}}(\th_*,\th) &={(bQ^* Q+\fQ)}^{-1} \big(bQ^*\th_{(*)} +\fQ\,Q_-\,\check\phi_{(*)\bg}(\th_*,\th)\big)\\ &= b C^{(*)} Q^*\,\th_{(*)} + \psi_{(*)\mathrm{cr}}^{(\ge 2)}(\th_*,\th) \end{align*} with $\psi_{(*)\mathrm{cr}}^{(\ge 2)}$ being of degree at least two. \end{proposition} \begin{proof} The existence, uniqueness and forms of the formal background and next scale background fields are proven as Lemma \ref{lemBSuniquefps} was proven. The existence and uniqueness of the formal critical field now follows from Lemma \ref{lemBSuniquefps}. The first representation of the critical fields follows from parts (a) and (c) of Proposition \ref{propBSconcatbackgr}'. For the second representation, rewrite the equations \eqref{eqnBScritpointequ} as \begin{align*} (bQ^* Q+\fQ)\psi_{(*)} &= bQ^*\th_{(*)} +\fQ\,Q_-\,S^{(*)} Q_-^* \fQ \psi_{(*)} + \fQ\,Q_-\,\phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi) \end{align*} or \begin{align*} \psi_{(*)} &= bC^{(*)}Q^*\th_{(*)} + C^{(*)}\fQ\,Q_-\,\phi_{(*)\bg}^{(\ge 2)}(\psi_*,\psi) \end{align*} \end{proof} The two representations of the critical field, $\psi_{\mathrm{cr}}$, given in Proposition \ref{propFormalFldSlns}, combined with the representation of $\check\phi_{\bg}$, suggest a formula for $bCQ^*$. In Remark \ref{remBSedA}, below, we give an algebraic proof of this formula, together with a number of representations for the Green's functions, $S$ and $\check S$, and covariance $C$. Then, in Lemma \ref{lemBSdeltaAalernew} below, we analyze the fluctuation integral of Proposition \ref{propBSconcatbackgr}.d in more detail. \begin{remark}\label{remBSedA} Assume that $D$ is invertible. \begin{enumerate}[label=(\alph*), leftmargin=*] \item $ \De = \big( \bbbone_{\mathcal{H}} +\fQ\,Q_- D^{-1} Q_-^* \big)^{-1}\fQ = \fQ\big( \bbbone_{\mathcal{H}} +Q_- D^{-1} Q_-^*\,\fQ \big)^{-1} $ \item Let $\,R: \cH_-\rightarrow \cH\,$ and $\,R_*: \cH\rightarrow \cH_-\,$ be linear maps such that $\,R\,D^{-1} R_* = Q_-D^{-1}Q_-^*\,$ and such that $\,D+R_*\fQ \,R\,$ is invertible.
Then \begin{equation*} [D+R_*\fQ \,R]^{-1} = D^{-1} -D^{-1} R_* \De\, R\, D^{-1} \end{equation*} In particular \begin{equation*} S=D^{-1} -D^{-1} Q_-^* \De\, Q_- D^{-1} \end{equation*} \item $ {\check S}= \big[ S^{-1} - Q_-^*\fQ (\fQ+bQ^*Q)^{-1} \fQ Q_-\big]^{-1} = S + S Q_-^*\, \fQ\, C\, \fQ\, Q_- S $ \item $ C = \big(bQ^* Q+\fQ\big)^{-1} +(bQ^* Q+\fQ)^{-1}\, \fQ Q_-\check S Q_-^*\fQ\,(bQ^*Q+\fQ)^{-1} $ \item $ bC^{(*)}Q^* = \big(bQ^* Q+\fQ\big)^{-1} \Big[bQ^* +\fQ Q_-\check S^{(*)} \check Q_-^* \check \fQ\Big] $ \end{enumerate} \end{remark} \begin{proof} (a) By Lemma \ref{lembSabstrlinalg}, with $V=\cH_-$, $W=\cH$, $q=Q_-$, $q_*=Q_-^*$, $f=D$ and $g=\fQ$ \begin{alignat*}{3} \big\{ \bbbone +\fQ\,Q_- D^{-1} Q_-^* \big\}^{-1}\fQ &= \big\{ \bbbone - \fQ\,Q_-( D+Q_-^* \fQ Q_-)^{-1}Q_-^*\big\} \fQ &\,=\De \\ \fQ\big\{ \bbbone +Q_- D^{-1} Q_-^*\,\fQ \big\}^{-1} &= \fQ\big\{ \bbbone - Q_-( D+Q_-^* \fQ Q_-)^{-1}Q_-^*\,\fQ\big\} &\,=\De \end{alignat*} \Item (b) By part (a) \begin{align*} \big[ D+ R_*\fQ \,R\big]\,\big[ D^{-1} -D^{-1} R_* \De\, R\, D^{-1} \big] &= \bbbone + R_* \big[ \fQ - (\bbbone +\fQ\, R\,D^{-1}R_* )\,\De\big]\,R\,D^{-1} \\ &= \bbbone + R_* \big[ \fQ - (\bbbone +\fQ\, Q_-D^{-1}Q_-^* )\,\De\big]\,R\,D^{-1} \\ &=\bbbone \end{align*} \Item (c) By Remark \ref{remotherrepcheckfQ} \begin{align*} Q^* \check \fQ\,Q &= b Q^*Q \big[ \bbbone -(bQ^* Q+\fQ)^{-1}bQ^*Q\big] \\ &= b Q^*Q \big[ (bQ^* Q+\fQ)^{-1} (bQ^* Q+\fQ) -(bQ^* Q+\fQ)^{-1}bQ^*Q\big] \\ &= (\fQ+ b Q^*Q -\fQ) (bQ^* Q+\fQ)^{-1}\fQ \\ &= \fQ - \fQ (\fQ+bQ^*Q)^{-1} \fQ \end{align*} Therefore \begin{align*} S^{-1}-\check S^{-1} &=Q_-^* \fQ\,Q_- - Q_-^* Q^*\check \fQ\,Q Q_- = Q_-^* \fQ (\fQ+bQ^*Q)^{-1} \fQ Q_- \end{align*} which gives the first representation of $\check S$. For the proof of the second representation, first observe that, by \eqref{eqnBSdefDe} and \eqref{eqnBSdefCascovariance}, \begin{align*} C^{-1}(\fQ+bQ^*Q)^{-1} &= ( \fQ +bQ^*Q -\fQ Q_-S Q_-^*\fQ)(\fQ+bQ^*Q)^{-1} \\ &= \bbbone - \fQ Q_- S Q_-^* \fQ (\fQ+bQ^*Q)^{-1} \end{align*} so that \begin{equation}\label{eqnBSRGCone} C = (\fQ+bQ^*Q)^{-1} \big\{ \bbbone - \fQ\, Q_- S Q_-^*\fQ (\fQ+bQ^*Q)^{-1}\big\}^{-1} \end{equation} Hence, by the first representation of $\check S$, \begin{align*} &\big[ S + S Q_-^*\, \fQ\, C\, \fQ\, Q_- S\big] \check S^{-1} -\bbbone \\ & \hskip 1cm = \big[ \bbbone + S Q_-^*\, \fQ\, C\, \fQ\, Q_- \big] \big[ \bbbone - S Q_-^*\fQ (\fQ+bQ^*Q)^{-1} \fQ Q_-\big] -\bbbone \\ & \hskip 1cm = S Q_-^*\, \fQ \Big[ C \big\{\bbbone - \fQ Q_- S Q_-^* \fQ (\fQ+bQ^*Q)^{-1} \big\} - (\fQ+bQ^*Q)^{-1} \Big] \fQ Q_- \\ & \hskip 1cm =0 \end{align*} which implies the second representation of $\check S$. \Item (d) By Lemma \ref{lembSabstrlinalg} with $q= \fQ Q_-$, $q_*= Q_-^*\fQ$, $f=S^{-1}$ and $g= -(\fQ+bQ^*Q)^{-1}$ \begin{equation}\label{eqnBSRGCtwo} \begin{split} &\big\{\bbbone - \fQ \,Q_- S Q_-^*\fQ (\fQ+bQ^*Q)^{-1}\big\}^{-1} \\ & \hskip 2cm= \bbbone + \fQ Q_- \big[ S^{-1} - Q_-^*\fQ (\fQ+bQ^*Q)^{-1} \fQ Q_-\big]^{-1}Q_-^*\fQ (\fQ+bQ^*Q)^{-1} \\ & \hskip 2cm= \bbbone + \fQ Q_- \check S Q_-^*\fQ (\fQ+bQ^*Q)^{-1} \end{split} \end{equation} The second equality follows by the first representation of $\check S$ in part (c). Substituting \eqref{eqnBSRGCtwo} into \eqref{eqnBSRGCone} gives the desired representation of $C$. 
\Item (e) By Remark \ref{remotherrepcheckfQ} \begin{align*} \check Q_-^* \check \fQ &= bQ_-^* Q^* \big[ \bbbone -bQ(bQ^*Q+\fQ)^{-1} Q^*\big] \\ &= bQ_-^*\big[ \bbbone -bQ^*Q(bQ^*Q+\fQ)^{-1}\big]Q^* \\ &= bQ_-^*\fQ(bQ^*Q+\fQ)^{-1}Q^* \end{align*} Therefore by part (d) \begin{align*} bC^{(*)}Q^* &= \big(bQ^* Q+\fQ\big)^{-1} \big[ bQ^* + b \fQ Q_-\check S^{(*)} Q_-^*\fQ\,(bQ^*Q+\fQ)^{-1} Q^* \big] \\ &= \big(bQ^* Q+\fQ\big)^{-1} \Big[bQ^* +\fQ Q_-\check S^{(*)} \check Q_-^* \check \fQ\Big] \end{align*} \end{proof} Define, in the setting of Proposition \propBSconcatbackgr, $\de\phi_{(*)\rm bg}\big(\psi_{*},\psi,\de\psi_*,\de\psi\big)$ by \refstepcounter{equation}\label{eqnBSdephibg} \begin{equation} \phi_{(*)\rm bg}\big(\psi_{*}+\de\psi_*,\psi+\de\psi\big) = \phi_{(*)\rm bg}\big(\psi_{*},\psi\big) + \de\phi_{(*)\rm bg}\big(\psi_{*},\psi,\de\psi_*,\de\psi\big) \tag{\ref{eqnBSdephibg}.a} \end{equation} and set \begin{equation} \de\check\phi_{(*)\rm bg}\big(\th_{*},\th,\de\psi_*,\de\psi\big) =\de\phi_{(*)\rm bg}\big(\psi_{*\mathrm{cr}}(\th_*,\th)\,,\, \psi_\mathrm{cr}(\th_*,\th)\,,\,\de\psi_*\,,\,\de\psi\big) \tag{\ref{eqnBSdephibg}.b} \end{equation} With the $\check\phi_{(*)\rm bg}(\th_*,\th)$ of Proposition \ref{propBSconcatbackgr} and \eqref{eqnBScheckphibgde}, \begin{equation}\label{eqnBSdephicheck} \phi_{(*)\rm bg}\big(\psi_{*\mathrm{cr}}(\th_*,\th)\!+\de\psi_*, \psi_{\mathrm{cr}}(\th_*,\th)\!+\de\psi\big) = \check\phi_{(*)\rm bg}(\th_*,\th) +\de\check\phi_{(*)\rm bg}\big(\th_*,\th;\de\psi_*,\de\psi\big) \end{equation} Also define $\de{\check\phi_{(*)}}^{(+)}\big(\th_*,\th;\de\psi_*,\de\psi\big)$ by \begin{equation}\label{eqnBSdephicheckplus} \de\check\phi_{(*)\rm bg}\big(\th_*,\th;\de\psi_*,\de\psi\big) = S^{(*)} Q_-^* \fQ\,\de\psi_{(*)} +\! \de{\check\phi_{(*)}}^{(+)}\big(\th_*,\th;\de\psi_*,\de\psi\big) \end{equation} \begin{remark}\label{remBSdephieqn} By Remark \ref{remBSremarkonbackgroundfieldsB}, the fields $\de\check\phi_{(*)\rm bg}\big(\th_*,\th,\de\psi_*,\de\psi\big)$ introduced in \eqref{eqnBSdephibg} obey \begin{align*} \de\check\phi_{(*)\bg} &= S^{(*)}Q_-^* \fQ\, \de\psi_{(*)} - S^{(*)}P'_{(*)} (\phi_*, \phi) \Big|^{\phi_{(*)}=\check\phi_{(*)\bg}(\th_*,\th)+\de\check\phi_{(*)\bg}} _{\phi_{(*)}=\check\phi_{(*)\bg}(\th_*,\th)} \end{align*} In particular, if $P=0$, then $\de\check\phi_{(*)\rm bg} ={S^{(*)}}Q_-^* \fQ\, \de\psi_{(*)}$. This is the motivation for the definition of $\de{\check\phi_{(*)}}^{(+)}$ in \eqref{eqnBSdephicheckplus}. \end{remark} \begin{lemma}\label{lemBSdeltaAalernew} The function $\de\cA$ appearing in the exponent of the fluctuation integral $\cF(\th_*,\th)$ of Proposition \ref{propBSconcatbackgr}.d is \begin{align*} \de \cA(\th_*,\th;\de\psi_*,\de\psi) &= \<\de\psi_*,C^{-1}\,\de\psi\> -\int_0^1 \!dt\ \big< \de\psi_*\,,\,\fQ\, Q_-\, \de{\check\phi}^{(+)}\big(\th_*,\th;t\,\de\psi_*,t\,\de\psi\big) \big> \\ & \hskip 3.5cm -\int_0^1 \!dt\ \big< \fQ\, Q_-\, \de{\check\phi_*}^{(+)}\big(\th_*,\th;t\,\de\psi_*,t\,\de\psi\big) \,,\, \de\psi \big> \end{align*} \end{lemma} \begin{proof} Set $ \cB(\psi_*,\psi) =\cA\big(\psi_*,\psi;\phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)\big) $. 
As \begin{align*} \big(\nabla_{\phi_*}\cA\big)\big(\psi_*,\psi;\phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)\big) =\big(\nabla_\phi\cA\big)\big(\psi_*,\psi;\phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)\big) =0 \end{align*} we have \begin{alignat*}{3} \big(\nabla_{\psi_*}\cB\big)\big(\psi_*,\psi\big) &=\big(\nabla_{\psi_*}\cA\big)\big(\psi_*,\psi;\phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)\big) &&=\fQ\big(\psi-Q_-\,\phi_\bg(\psi_*,\psi)\big)\\ \big(\nabla_\psi\cB\big)\big(\psi_*,\psi\big) &=\big(\nabla_\psi\cA\big)\big(\psi_*,\psi;\phi_{*\bg}(\psi_*,\psi),\phi_\bg(\psi_*,\psi)\big) &&=\fQ\big(\psi_*-Q_-\,\phi_{*\bg}(\psi_*,\psi)\big) \end{alignat*} Therefore \begin{align*} &\cB(\psi_*+\de\psi_*,\psi+\de\psi) -\cB(\psi_*,\psi) \\ & \hskip .2cm = \int_0^1 \hskip-5ptdt\,\Big[ \big< \de\psi_*\,,\, (\nabla_{\psi_*}\cB)(\psi_*+t\de\psi_*,\psi+t\de\psi)\big> + \big< (\nabla_{\psi}\cB)(\psi_*+t\de\psi_*,\psi+t\de\psi)\,,\,\de\psi\big> \Big]\\\noalign{\vskip0.05in} & \hskip .2cm = \int_0^1 dt \ \big< \de\psi_*\,,\, \fQ(\psi+t\de\psi) -\fQ\,Q_-\,\phi_\bg(\psi_*+t\de\psi_*,\psi+t\de\psi)\big> \cr \noalign{\vskip-0.05in}& \hskip 2cm+ \int_0^1 dt \ \big< \fQ(\psi_*+t\de\psi_*) -\fQ\,Q_-\,\phi_{* \bg}(\psi_*+t\de\psi_*,\psi+t\de\psi) \,,\,\de\psi\big> \\ \noalign{\vskip0.05in} & \hskip .4cm = \big< \de\psi_*, \fQ \,\de\psi \big> + \big< \de\psi_*, \fQ\, \psi \big> + \big< \psi_*, \fQ \,\de\psi \big> -I \end{align*} where \begin{align*} I&= \int_0^1 dt\ \big< \de\psi_*\,,\, \fQ\,Q_-\,\phi_\bg(\psi_{* \mathrm{cr}}+t\de\psi_*, \psi_\mathrm{cr}+t\de\psi)\big> \\ \noalign{\vskip-0.05in} & \hskip 1cm + \int_0^1 dt\ \big< \fQ\,Q_-\,\phi_{* \bg}(\psi_{* \mathrm{cr}}+t\de\psi_*,\psi_\mathrm{cr}+t\de\psi) \,,\,\de\psi\big> \end{align*} Since \begin{equation*} \cA_\eff\big(\th_*,\th;\psi_*,\psi; \phi_{*\bg}(\psi_{*},\psi), \phi_\bg(\psi_*,\psi)\big) =b\<\th_*-Q\psi_*,\th-Q\psi\>_+ +\cB(\psi_*,\psi) \end{equation*} we get, using Proposition \ref{propBSconcatbackgr}, \begin{align*} \de\cA &= b\<Q\,\de\psi_*\,,\,Q\,\de\psi\>_+ -b\< Q\,\de\psi_*\,,\,\th-Q\psi_\mathrm{cr}\>_+ -b\<\th_*-Q\psi_{* \mathrm{cr}}\,,\, Q\,\de\psi\>_+ \\ & \hskip 1cm + \big< \de\psi_*\,,\, \fQ \,\de\psi \big> + \big< \de\psi_*\,,\, \fQ\, \psi_\mathrm{cr} \big> + \big< \psi_{* \mathrm{cr}}\,,\, \fQ \,\de\psi \big> -I \displaybreak[0]\\ \noalign{\vskip0.05in} & = \<\de\psi_*\,,\,(bQ^* Q + \fQ)\,\de\psi\> + \< \de\psi_*\,,\,(bQ^* Q + \fQ)\psi_\mathrm{cr} -bQ^* \th\> \\ & \hskip 3.9cm + \< (bQ^* Q + \fQ)\psi_{* \mathrm{cr}} -bQ^* \th_*\,,\, \de\psi\> -I \displaybreak[0]\\ \noalign{\vskip0.05in} & = \<\de\psi_*\,,\,(bQ^* Q + \fQ)\,\de\psi\> + \< \de\psi_*\,,\,\fQ\, Q_- \check\phi_\bg \> + \< \fQ\, Q_-\check \phi_{* \bg} \,,\, \de\psi\> -I \displaybreak[0]\\ \noalign{\vskip0.05in} & = \<\de\psi_*\,,\,(bQ^* Q\!+\!\fQ)\,\de\psi\> -\int_0^1 \hskip-5pt dt\ \big< \de\psi_*\,,\,\fQ\, Q_- \big[\phi_\bg(\psi_{* \mathrm{cr}}+t\de\psi_*,\psi_\mathrm{cr}+t\de\psi) - \check\phi_\bg\big] \big> \\ & \hskip 3.6cm -\int_0^1 \!dt\ \big< \fQ\, Q_- \big[\phi_{*\bg}(\psi_{* \mathrm{cr}}+t\de\psi_*,\psi_\mathrm{cr}+t\de\psi) - \check\phi_{*\bg}\big] \,,\, \de\psi \big> \\ \noalign{\vskip0.05in} & = \<\de\psi_*,(bQ^* Q\!+\! 
\fQ\!-\!\fQ Q_-S Q_-^* \fQ)\,\de\psi\> -\!\int_0^1 \hskip-5pt dt\, \big< \de\psi_*,\fQ\, Q_-\, \de{\check\phi}^{(+)}\big(\th_*,\th;t\de\psi_*,t\de\psi\big) \big> \\ & \hskip 6cm -\int_0^1 \!dt\, \big< \fQ\, Q_-\, \de{\check\phi_*}^{(+)}\big(\th_*,\th;t\de\psi_*,t\de\psi\big) ,\, \de\psi \big> \end{align*} By the definition of $C$ in \eqref{eqnBSdefCascovariance}, this is the desired representation. \end{proof} \bigskip In the course of the arguments above the following simple algebraic observation was used several times. \begin{lemma}\label{lembSabstrlinalg} Let $V$ and $W$ be vector spaces and let $\,q:V\rightarrow W\,$, $\,q_*:W\rightarrow V\,$, $\,f:V\rightarrow V\,$ and $\,g:W\rightarrow W\,$ be linear maps. Assume that $f$ and $\,f+q_*g\,q\,$ are invertible. Then $\,\bbbone_W +gq\,f^{-1}q_*\,$ and $\,\bbbone_W +q\,f^{-1}q_*g\,$ are also invertible and \begin{align*} \big( \bbbone_W +gq\,f^{-1}q_* \big)^{-1} &= \bbbone_W -gq (f+q_*gq)^{-1} q_*\\ \big( \bbbone_W +q\,f^{-1}q_*g \big)^{-1} &= \bbbone_W -q (f+q_*gq)^{-1} q_*g \end{align*} \end{lemma} \begin{proof} Replacing $q$ by $gq$ for the first line and $q_*$ by $q_*g$ for the second, we may assume that $g=\bbbone_W$. Write $\bbbone_W=\bbbone$. Then \begin{align*} \big(\bbbone -q (f+q_*q)^{-1} q_*\big) \big( \bbbone +qf^{-1}q_* \big) &= \bbbone +q \big[\bbbone -(f+q_*q)^{-1}f - (f+q_*q)^{-1}q_*q\big] f^{-1} q_* \\ &= \bbbone \end{align*} and similarly $\, \big( \bbbone +qf^{-1}q_* \big) \big(\bbbone -q (f+q_*q)^{-1} q_*\big) =\bbbone \,$. \end{proof} \newpage \bibliographystyle{plain}
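As a minimal illustration of Lemma \ref{lembSabstrlinalg} (a sketch, not needed for the arguments above), take $V=W$ to be one-dimensional and let $q$, $q_*$ and $g$ all act as multiplication by $1$, while $f$ acts as multiplication by a nonzero scalar with $f+g\neq 0$. Both identities of the Lemma then reduce to
\begin{equation*}
\Big(1+\frac{g}{f}\Big)^{-1} \;=\; \frac{f}{f+g} \;=\; 1-\frac{g}{f+g},
\end{equation*}
which is immediate.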
\section{INTRODUCTION} Discovered in the 1960s, the Galactic globular cluster (GC) \ter is a fascinating object lying at a distance $d=5.9\pm0.5$~kpc \citep{Valenti07} and having a particularly high central stellar density as well as high metallicity. It also has the highest stellar interaction rate of all Galactic GCs \citep{Verbunt87}, which is probably linked to the large number of X-ray binaries found in this system. The latter may furthermore explain the fact that \ter hosts the largest number of millisecond pulsars ($N^{\rm rad}_{\rm vis}=37$ visible radio MSPs) of all Galactic GCs \citep{Cadelano18}, MSPs being the offspring of low-mass X-ray binaries \citep{Camilo05,Abdo10}. The discovery of two distinct stellar populations with different iron content and ages in this GC has been interpreted as an indication that \ter may not be a ``true'' GC in the usual sense: it may represent the merger of two stellar clusters, or it may be the remnant of a disrupted galaxy \citep{Ferraro09}. The high metallicity probably points to a very large number of supernova explosions (i.e., progenitors of neutron stars) occurring in \terc further explaining why this cluster harbors so many MSPs. Moreover, the latter leads to the expectation that even more MSPs may be found in \ter than the current 37 known ones\footnote{http://www.naic.edu} \citep{Lanzoni10, Freire17,Cadelano18}. As such, this GC has been an attractive source to model and observe, since MSPs are known not only to radiate pulsed emission in multiple wavebands, but to generate relativistic particles that may in turn upscatter ambient photons into the very-high-energy (VHE) domain \citep{BS07}, or interact with the cluster magnetic field to yield diffuse synchrotron radiation (SR; \citealt{Venter08}). The \textit{Fermi} Large Area Telescope \citep[LAT;][]{Atwood09} has detected a bright GeV source that is very plausibly associated with \ter \citep{Abdo10,2FGL}, bringing the total number of LAT sources that are associated with GCs up to about 20 \citep{3FGL,Zhang16}. \citet{deMenezes} recently performed a systematic study of 23 \textit{Fermi} GC candidates and detected \ter at more than 60$\sigma$. The High Energy Stereoscopic System (H.E.S.S.) has furthermore detected an extended source in the direction of \terc although its morphology is peculiar and offset from the GC center \citep{Abramowski11}. This makes \ter the only GC plausibly detected at VHEs \citep{Anderhub09,Aharonian09,McCutcheon09,Abramowski11_DM,Abramowski13}. (See \citealt{Tam16} for a recent review of $\gamma$-ray detections of GCs.) Diffuse X-rays have also been detected from this GC, peaking at the center and decreasing with cluster radius \citep{Eger10}. Radio observations have revealed several extended structures in the vicinity of this source \citep{Clapson2011}. In light of the available multi-wavelength data and the unique and unexpected source morphology in different energy bands, this source represents a prime subject for deeper investigation. Several models have attempted to explain the broadband emission properties of GCs. The first class of models invoke MSPs as sources of relativistic particles and cumulative high-energy emission. 
\citet{Chen1991} provided an early estimate of the cumulative $\gamma$-ray luminosity from a population of MSPs embedded in a GC, finding $L_{\rm \gamma,tot}\sim10^{36}n_{500}$~erg\,s$^{-1}$ for $n_{500}=N^\gamma_{\rm MSP}/500$ and $N^\gamma_{\rm MSP}$ the number of $\gamma$-ray-bright MSPs, when convolving the predicted $L_\gamma$ for each pulsar with an expected distribution of periods of GC MSPs (see also \citealt{Bhatia92}). This estimate turns out to agree to within a factor of a few with the measured GeV luminosities for some \textit{Fermi}-detected GCs \citep[e.g.,][]{Abdo10} if one sets $n_{500}\sim0.2$. \citet{Wei96} calculated such a cumulative $\gamma$-ray flux using an outer gap model. Comparing their expected flux level for unpulsed $\gamma$-rays to an upper limit by the Energetic Gamma-Ray Experiment Telescope (\textit{EGRET}), they constrained $n_{500}<0.8$ for 47~Tucanae. \citet{HUM05,Venter08,Venter09_GC} summed the individual pulsed curvature radiation (CR) spectra from their model for an ensemble of MSPs to estimate the GeV flux expected from a GC, and the predictions of \citet{Venter09_GC} provided a good match to the subsequent {\it Fermi} measurements of the high-energy (HE) spectrum of 47~Tucanae~\citep{Abdo09}. \citet{Cheng10} investigated an alternative scenario to produce GeV emission by attributing this to inverse Compton (IC) rather than CR emission, also predicting GCs to be extended sources in the GeV regime. Conversely, \citet{BS07} predicted that GCs may be point-like sources of GeV and TeV emission by considering MSPs that accelerate leptons either at the shocks that originate during collisions of the respective pulsar winds, or inside the pulsar magnetospheres. The leptons escape from these local acceleration sites, diffuse outward, and interact with the GC magnetic and soft-photon background fields. This leads to SR and IC scattering (see \citealt{Venter09_GC} and \citealt{Zajczyk13} for updated calculations). \citet{Kopp13} presented an improved model and found reasonable fits to the multi-band spectral energy distribution (SED) data of \terp \citet{Ndiyavala18} applied this model to the Galactic population of GCs, ranking them according to predicted VHE flux. There exist alternative models that invoke other astrophysical objects as sources of relativistic particles. \citet{Bednarek12} calculated the contribution of non-accreting white dwarfs in GCs to the $\gamma$-ray flux from such clusters and concluded that white dwarfs may produce $\gamma$-ray emission at a level which may be detectable by the Cherenkov Telescope Array (CTA) in some cases. See \citet{Bednarek11} for a review of such leptonic GC models. On the other hand, \citet{Domainko11} investigated a hadronic model, invoking a $\gamma$-ray burst remnant as a potential source of energetic leptons and hadrons. In this model, the hadrons interact with ambient target nuclei, leading to the formation of $\pi^0$ particles that decay into $\gamma$ rays. Recently, \citet{Brown18} concluded\footnote{These authors themselves note that their work does not rule out an MSP-only explanation for the GeV flux seen from 47~Tucanae. They used the average spectrum of \textit{Fermi}-detected MSPs, assuming this to be universal, and allowing only the spectral normalization to be free. 
Other effects such as the inclusion of MSPs that are below detection threshold as well as different MSP geometries (inclination and viewing angles) may change the low-energy spectral shape, hardening it to potentially bring it in better agreement with the data, without the need to invoke dark matter annihilation (the latter model has several more free parameters, and a combination of MSPs and dark matter may therefore naturally better account for the data).} that a combination of MSP pulsed curvature emission and dark matter annihilation (with an enhanced density around a putative intermediate-mass black hole) may explain the GeV emission detected by \textit{Fermi} LAT for 47~Tucanae. Even though models are making progress to explain the broadband emission of GCs, many questions remain. For example, \citet{Venter_HEASA15} noted that uncertainties in the model parameters may lead to a spread in the predicted GC flux of up to an order of magnitude. They attempted to mitigate this problem by considering an ensemble of observed GCs to constrain their models, using a H.E.S.S.\ upper limit \citep{Abramowski13} to the cumulative flux from 15 GCs (\citealt{Venter15,Venter_HEASA15}, Ndiyavala et al., in prep.), but some parameter degeneracies are expected to remain. The hard slope of the diffuse X-ray emission in the case of \ter poses another puzzle, since the existing models have not been able to fit this component \citep[e.g.,][]{Kopp13}. The energy-dependent morphology (which is non-spherical at high energies) further challenges the existing models. \citet{Bednarek14} considered a model where energetic particles escape from the GC and interact with the Galactic medium, creating a bow shock nebula around the GC. If the latter is immersed in the relatively dense medium close to the Galactic Plane, this should manifest as an intricate morphology at high energies. To further address this complex morphology, \citet{Bednarek16} extended their model to take into account the advection of leptons by a mixture of red giant stellar and pulsar winds, as well as considering the effect of having a non-central (offset) energetic MSP as a source of relativistic particles. Furthermore, in the case of \terc the source morphologies differ significantly in extent and position across the electromagnetic spectrum, raising the question whether all the spectral components arise due to the same underlying particle population (in the leptonic scenario) or not. Lastly, the operation of different emission mechanisms and relative contribution of MSPs vs.\ other astrophysical sources or dark matter to the SED remains an open question. Given the richness of the existing data set on \terc as well as the variety of models that exist to explain GC emission (and their many free parameters), we use this system as a case study to further probe the origin of multi-wavelength emission from GCs. Improved models will aid selection of promising GCs for future observations by the CTA, which may see tens of these sources in the next decade \citep{Ndiyavala18}. We therefore aimed to gather more data on \ter (Section~\ref{sec:data}) and model the updated SED in a leptonic scenario (Section~\ref{sec:leptonic}). We present our conclusions in Section~\ref{sec:concl}. 
\section{MULTI-WAVELENGTH DATA AND SPECTRAL UPPER LIMITS}\label{sec:data} \subsection{Previous Radio Observations} Individual MSP discoveries in \ter bring the total membership to 37 \citep[e.g.,][]{Lyne90,Lyne00,Ransom05,Hessels06,Freire16, Freire17,Cadelano18}, although hundreds of MSPs may be present following expectations from numerical simulations \citep{Ivanova08}. \citet{Fruchter00} obtained images of \ter at 6~cm, 20~cm, and 90~cm using the Very Large Array (VLA). These displayed strong, steep-spectrum emission that could not be associated with known pulsars at that time. Numerous point sources were also detected within $30^{\prime\prime}$ of the cluster center, with their density rising rapidly toward the core. There, an elongated region of emission was found. Based on the steep spectrum as well as on the flux distribution, \citet{Fruchter00} concluded that this most probably indicated the presence of many undetected pulsars in the cluster, making this the most pulsar-rich of all Galactic GCs (they estimated a total number of $60-200$ host pulsars, based on their assumed radio luminosity function). \ter was furthermore detected in the NRAO VLA Sky Survey (NVSS) at 21~cm as a single source with a flux of about 5~mJy \citep{Condon98}. \citet{Clapson2011} analyzed archival 11~cm and 21~cm Effelsberg data and detected several radio structures in the direction of \terp However, given the uncertainty in flux, no spectral index could be inferred. \citet{Clapson2011} speculated that one structure\footnote{Care should be taken when directly comparing the results of \citet{Condon98} with those of \citet{Clapson2011}. The NVSS was done with the VLA in the DnC configuration and the largest angular scale that can be detected in this most compact configuration is 970$^{\prime\prime}$ in full synthesis mode. In snapshot mode, this is 485$^{\prime\prime}$. We expect that only the brightest part of the emission would have been detected and that the angular size of the detected region was actually less than 485$^{\prime\prime}$. In fact, \citet{Condon98} give the size of the emission region as less than $58^{\prime\prime}\times47^{\prime\prime}$. Conversely, \citet{Clapson2011} used the Effelsberg single-dish telescope for which there is no limit to the largest detectable angular size of an extended structure. In the present context, the largest angular scale is of importance. \citet{Clapson2011} listed a size for Region 11 of $720^{\prime\prime}\times1080^{\prime\prime}$, even larger than the tidal radius of \ter of $R_{\rm t}\sim280^{\prime\prime}$, and also exceeding the largest angular size detectable at 21~cm with the VLA in the DnC configuration. It thus makes sense that the implied flux measured by \citet{Clapson2011} for Region 11 at 21~cm is $\sim4$~Jy vs.\ the relatively low flux of $\sim5$~mJy measured by \citet{Condon98}. However, the surface brightnesses of the two observations are quite similar in magnitude. It is unquestionable that the \citet{Condon98} observations suffered from the ``missing-flux effect'' of interferometric observations. We conclude that the \citet{Clapson2011} value of the total flux from Region 11 is the more reliable value that should be used in the model fitting.} in particular (labeled as ``Region~11''), extending from the GC center to the north-west (roughly perpendicular to the Galactic Plane), could be the result of SR by electrons escaping from the large population of MSPs in this GC. 
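For concreteness, the surface-brightness comparison made in the footnote above can be checked with a few lines of arithmetic; the following minimal sketch simply uses the sizes and fluxes quoted there:
\begin{verbatim}
# Rough surface-brightness comparison between the NVSS (Condon et al. 1998)
# and Effelsberg (Clapson et al. 2011) measurements quoted in the footnote.
nvss_flux_mJy, nvss_area = 5.0, 58.0 * 47.0        # mJy, arcsec^2 (upper size limit)
eff_flux_mJy,  eff_area  = 4000.0, 720.0 * 1080.0  # ~4 Jy over Region 11

print(nvss_flux_mJy / nvss_area)  # ~1.8e-3 mJy/arcsec^2
print(eff_flux_mJy  / eff_area)   # ~5.1e-3 mJy/arcsec^2, i.e. same order of magnitude
\end{verbatim}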
In what follows, we fit these radio data using a diffuse low-energy synchrotron radiation (LESR) component due to the interaction of relativistic electrons escaping from the MSP magnetospheres with the cluster $B$-field (see Figure~\ref{fig:SED}). The contribution of the population of unresolved MSPs to this diffuse radio flux is negligible and can be ignored\footnote{\citet{Ransom05} estimated the total flux at 1.95~GHz of 22 MSPs in the core of \ter to be $\sim1$~mJy (the scale is $R_{\rm c}\sim9^{\prime\prime}$) or up to a few mJy if one adds two more MSPs farther from the GC center, while the flux from the large Region~11 measured by \citet{Clapson2011} of $\sim4$~Jy dwarfs this value.}. \subsection{Optical Upper Limits: Comparison of Thermal and non-Thermal Flux Levels} \label{optical} Our predicted non-thermal unpulsed LESR spectral component invoked to model the radio data (Section~\ref{sec:model}) extends into the optical band, raising the question of its detectability. The LESR component's flux is relatively low in the optical band \citep{Kopp13}, and we now show that it may in principle be very difficult to observe directly, since there are $\sim 10^5$ stars (point sources) that contribute a high level of blackbody (BB) radiation that will swamp any diffuse non-thermal SR. To appraise the BB $\nu F_\nu$ flux from stars in different annuli centered on the cluster, and also the total flux expected from the full cluster, we use the surface-density profile of \ter obtained by \citet{Trager1995}, as converted by \citet{Cohn2002}. We estimate the area of an annulus as $ A_{\rm ann} = \pi(\theta_{2}^{2} - \theta_{1}^{2})$, with $\theta_{1}$ and $\theta_{2}$ the edges (angular radii) of a particular annulus. The average number of stars in such an annulus is found by interpolating the surface brightness $f$ and calculating $ N_{\rm ann} = fA_{\rm ann}$. We approximate the emitted spectrum of each star by a BB spectrum at a single average frequency $\langle\nu\rangle=2.7k_{\rm B}T/h=2.5\times10^{14}$~Hz, where $k_{\rm B}$ is the Boltzmann constant and $h$ the Planck constant, assuming a constant stellar surface temperature of $T = 4~500\,\rm K$. Upon multiplying the Planck spectrum $B_{\nu}$ by the stellar surface area $A_{\star}=4\pi R_*^2$, with $R_*$ the average stellar radius, and by the number of stars $N_{\rm ann}$, and dividing by the square of the distance to the cluster, we obtain the thermal $\nu F_{\nu}$ flux level: \begin{eqnarray} \frac{B_{\nu}\langle\nu\rangle A_{\star}N_{\rm ann}}{d^{2}} & = & \frac{8\pi R_*^{2}h\langle\nu\rangle^{4}N_{\rm ann}}{d^{2}c^{2}}\frac{1}{e^{h\langle\nu\rangle/k_{\rm B}T}-1} \nonumber \\ & \sim & 1.7\times10^{-14}R^{2}_{*,10}N_{\rm ann}\,\rm erg\,cm^{-2}s^{-1},\label{eq:BB_flux} \end{eqnarray} with $R_{*,10} = R_*/10^{10}\,\rm cm$, $c$ the speed of light, and assuming $d\,=\,5.9~{\rm kpc}$. When applying Eq.~(\ref{eq:BB_flux}) to the whole cluster (i.e., choosing $N_*=N_{\rm ann} = 7.7\times10^4$; \citealt{Lang1992}), we find that the predicted BB flux is $\sim 6.2\times10^{-8}$~erg\,cm$^{-2}$s$^{-1}$ for $R_{*,10}=7$, while the predicted $\nu F_\nu$ flux for the LESR at 1~eV is only $\sim5.0\times10^{-12}$~erg\,cm$^{-2}$s$^{-1}$ (Section~\ref{sec:model}), which is a factor $\sim10^4$ lower than the estimated thermal flux level. The only hope to detect the LESR component is if the stellar flux falls faster with radius than does the LESR flux. One might thus try to obtain a smaller ratio between thermal and non-thermal fluxes by focusing on different annuli. 
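For reference, Eq.~(\ref{eq:BB_flux}) and the whole-cluster estimate above can be reproduced with a short numerical sketch, using the constants and parameter values assumed in the text:
\begin{verbatim}
# Minimal numerical check of Eq. (eq:BB_flux): thermal nu*F_nu from the cluster
# stars vs. the predicted diffuse LESR level. Values follow the text; this is
# illustrative, not a fit.
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs
pc = 3.086e18                                # cm

T      = 4500.0          # K, assumed mean stellar surface temperature
R_star = 7e10            # cm, i.e. R_{*,10} = 7
d      = 5.9e3 * pc      # 5.9 kpc
N_star = 7.7e4           # stars in the whole cluster (Lang 1992)

nu = 2.7 * k_B * T / h   # ~2.5e14 Hz, single representative frequency
# per-star nu*F_nu of Eq. (eq:BB_flux), before multiplying by the number of stars
flux_per_star = (8*np.pi * R_star**2 * h * nu**4 / (d**2 * c**2)
                 / (np.exp(h*nu/(k_B*T)) - 1.0))

F_BB   = N_star * flux_per_star  # ~6e-8 erg/cm^2/s for the whole cluster
F_LESR = 5.0e-12                 # erg/cm^2/s, predicted LESR nu*F_nu at ~1 eV

print(f"nu = {nu:.2e} Hz")
print(f"BB flux (whole cluster) = {F_BB:.2e} erg/cm^2/s")
print(f"BB / LESR ratio ~ {F_BB/F_LESR:.1e}")   # ~1e4
\end{verbatim}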
We calculated this flux ratio for all annuli in \citet{Cohn2002} and found that it drops from $\sim10^5$ to $\sim10^3$ with increasing radius, out to $\sim0.35R_{\rm t}$, with $R_{\rm t}$ the tidal radius. Since the annular area increases as $r^2$ while the surface brightness falls as $r^{-2.1}$ for larger radii, the estimated BB flux remains nearly constant for the outer annuli (with $N_{\rm ann}\sim10^3$) out to $R_{\rm t}$. Thus, even in the outer annuli where the stellar density has dropped significantly, the BB flux still exceeds the LESR flux by a factor $10^3$. In some sense, the thermal emission thus provides a very unconstraining upper limit to the LESR emission under the assumption that the latter will not exceed the thermal flux. However, since the BB spectrum covers a much narrower energy range than the LESR, there may yet be hope of detecting the LESR flux outside of the optical range where the BB component dominates (e.g., in the millimeter or ultraviolet to low-X-ray range). \subsection{Diffuse X-ray Emission} To investigate the X-ray point source population of \terc the core of this GC was covered in a deep \textit{Chandra} observation with a field of view of $\sim 5^\prime\times 5^\prime$ \citep{Heinke06}. Later, by following up on the detection of extended TeV $\gamma$-ray emission from the direction of \ter by H.E.S.S.\ \citep{Abramowski11}, \citet{Eger10} discovered the presence of hard and diffuse X-ray emission using the same \textit{Chandra} data. The diffuse X-ray signal was shown to be extended well beyond the half-mass-radius ($R_{\rm hm}\sim30^{\prime\prime}$) of the GC up to $\sim180^{\prime\prime}$, featuring a very hard spectrum that may be fit by a power law with a photon index of $0.9\pm0.5$ (see Figure~\ref{fig:SED}). The contribution from unresolved point-like sources to this diffuse signal was estimated to be very small. These authors found that the surface brightness peaked near the cluster center and decreased smoothly outwards. Various non-thermal emission mechanisms for the origin of this diffuse signal were discussed, but with no single scenario being clearly preferred. A follow-up search for an X-ray signal on similar spatial scales from a number of other LAT-detected GCs covered by archival X-ray observations yielded no additional significant detections \citep[see][]{Eger12}. However, a new hard, diffuse X-ray signal was recently discovered from 47~Tucanae, yet on comparatively smaller spatial scales \citep{Wu14}. In contrast to \terc the X-ray signal here appears to be contained within the half-mass radius of the GC. The spectrum can be described as a combination of a hard power-law component with a photon index of $\sim$1.0, and a thermal plasma component with a temperature of $k_{\rm B}T = 0.2$\,keV. The non-thermal X-ray emission detected from both \ter and 47~Tucanae could be unpulsed SR from relativistic leptons that were accelerated in shocks, following the collision of stellar winds in the GC cores (i.e., a single spectral component explaining both the radio and X-ray data in the case of \terc although the diffuse X-ray emission appears on very different spatial scales in these two GCs; \citealt{BS07,Venter09_GC}). However, we cannot find a satisfactory fit to the spectral data of these clusters when invoking only a single SR spectral component. 
We therefore model the diffuse X-ray emission observed from \ter by invoking a new component that is due to the cumulative pulsed SR by pairs originating in the various host MSP magnetospheres (Section~\ref{sec:model}). \subsection{New \textit{Fermi} LAT Data Analysis}\label{sec:latp8} \ter was the second GC to be associated with a \textit{Fermi}-LAT source \citep{Kong10,Abdo10,2FGL}. Comparing the likelihood when modeling the spectrum with a simple power-law shape, \beq\label{eq:pl} \frac{dN}{dE}\ =\ N_{0}\ \left(\frac{E}{E_{0}}\right)^{-\Gamma} \eeq \noindent{}and an exponentially cutoff power-law shape: \beq\label{eq:ecpl} \frac{dN}{dE}\ =\ N_{0}\ \left(\frac{E}{E_{0}}\right)^{-\Gamma}\ \exp\left\{-\left(\frac{E}{E_{\rm C}} \right)^{b} \right\} \eeq \noindent{}the $\gamma$-ray point source associated with \ter was found to be significantly curved, consistent with the interpretation of the collective emission from a population of MSPs. In both Eq.~(\ref{eq:pl}) and (\ref{eq:ecpl}), $N_{0}$ is a normalization factor with units cm$^{-2}$\,s$^{-1}$\,MeV$^{-1}$, $E_{0}$ is a scale parameter, and $\Gamma$ is the photon index. In Eq.~(\ref{eq:ecpl}), $E_{\rm C}$ is the cutoff energy and $b$ is an exponential index that governs how quickly the spectrum rolls over. Low-altitude pulsar emission models predict a super-exponential cutoff with $b\ >\ 1$ \citep[e.g.,][]{Harding78}. For some of the brightest $\gamma$-ray pulsars, LAT observations require a sub-exponential cutoff with $b\ <\ 1$, plausibly explained as a blending of several simple exponentially-cutoff spectra as the line of sight crosses different regions of the magnetosphere \citep{2PC}. \citet{Kong10} analyzed approximately 1.4 years of Pass 6 (P6) LAT data from the region around \ter with energies ranging from 0.5 to 20 GeV, and found a significant point source (18$^\prime$) from the optical center of \terc with a pulsar-like spectrum having a photon index $\Gamma\ =\ 1.9\pm0.2$, a cutoff energy $E_{\rm C}\ =\ 3.8\pm 1.2$~GeV, and integrated photon and energy fluxes over their energy range of $(3.4\pm1.1)\times 10^{-8}$ cm$^{-2}$\,s$^{-1}$ and $(6.8\pm2.0)\times 10^{-11}$ erg\,cm$^{-2}$\,s$^{-1}$, respectively. \citet{Abdo10} analyzed the region around \ter using approximately 1.5 years of P6 LAT data, with energies $\geq$ 0.2 GeV, including the same time span of \citet{Kong10}. These authors also found a significant point source, located $2.4^\prime$ from the cluster center, with a pulsar-like spectrum. Their best-fit simple exponentially-cutoff power-law spectrum had $\Gamma\ =\ 1.4^{+0.2,+0.4}_{-0.2,-0.3}$, $E_{\rm C}\ =\ 2.6^{+0.7,+1.2}_{-0.5,-0.7}$ GeV, and integrated photon and energy fluxes over their energy range of $(7.6^{+1.7,+3.4}_{-1.5,-2.2})\times 10^{-8}$ cm$^{-2}$\,s$^{-1}$ and $(7.1^{+0.6,+1.0}_{-0.5,-0.5})\times 10^{-11}$ erg cm$^{-2}$\,s$^{-1}$, respectively. The first uncertainties are statistical while the second reflect estimates of systematic errors. Using an estimate of the average MSP spin-down power and $\gamma$-ray efficiency with the measured $\gamma$-ray flux, \citet{Abdo10} estimated the number of MSPs in \ter to be $N^\gamma_{\rm MSP}\ =\ 180^{+100}_{-90}$. Using four years of data, the third \emph{Fermi} LAT catalog \citep[3FGL,][]{3FGL} associates 3FGL J1748.0$-$2447 with \terp The source is offset from the cluster center\footnote{The optical and diffuse X-ray centers are more or less coincident. 
The center of the radio ``Region 11'' is offset from this position by $\sim14^\prime$, while that of the extended H.E.S.S.\ source is offset by $\sim4^\prime$ (compared to a tidal radius of $R_{\rm t}=4.6{^\prime}$).} by $0.66^\prime$, well within the 95\% confidence-level ellipse with semi-major and semi-minor axes of $1.69^\prime$ and $1.53^\prime$, respectively. The pivot energy for 3FGL J1748.0$-$2447, 1280.38 MeV, is used as the scale parameter $E_{0}$ in our subsequent analyses. We selected seven years of Pass~8 (P8) LAT data\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/}\\\url{documentation/Pass8_usage.html}} \citep{AtwoodP8} from the start of science operations on 2008 August 4, with \textit{evclass} = 128 and \textit{evtype} = 3, within 15\degr\ of the best-fit position of 3FGL J1748.0$-$2447, with energies from 0.1 to 300 GeV, and with a maximum zenith angle of 90\dgr. The \textit{Fermi} ScienceTool\footnote{Available for download at \url{https://fermi.gsfc.nasa}} (ST) \texttt{gtmktime} was used to select good time intervals when the spacecraft was in nominal science operations mode and the data were flagged as good. In preparation for a binned maximum likelihood analysis, we made a livetime cube using the ST \texttt{gtltcube} with \textit{zmax} = 90\dgr\ and an exposure cube with 35 bins in log$_{10}$ energy and spatial pixels 0\fdg1 on a side using the ST \texttt{gtexpcube2} and the P8R2\_SOURCE\_V6 LAT Instrument Response Functions. We constructed a model of our region of interest (ROI) including all 3FGL sources within 25\dgr\ of the ROI center; those sources known to be extended were modeled using the spatial templates from the catalog. The spectral parameters of sources $>$ 6\dgr\ from the ROI center were held fixed at the values from 3FGL. For sources within 6\dgr\ of the ROI center, the spectral parameters were allowed to vary if they were found to have an average significance $\geq$ 15$\sigma$ in 3FGL. However, for sources within 8\dgr\ of the ROI center that did not otherwise satisfy our requirements for free spectral parameters but were flagged as significantly variable in 3FGL, we did allow the normalization parameters to vary. The diffuse emission from the Milky Way was included using the \textit{gll\_iem\_v06.fits} model, while the isotropic diffuse emission and residual background of misclassified cosmic rays were modeled using the \textit{iso\_P8R2\_SOURCE\_V6\_v06.txt} template \citep{Acero16}. We allowed the intensity of the Galactic diffuse emission to be modified by a power-law spectrum. The spectrum of 3FGL J1748.0$-$2447 was found to have significant curvature and was thus modeled using a log-parabola function in the catalog. For our purposes, we modeled the spectrum of 3FGL J1748.0$-$2447 using both a simple power law (Eq.~[\ref{eq:pl}]) and an exponentially-cutoff power law (Eq.~[\ref{eq:ecpl}]). We performed three binned maximum likelihood analyses, with energy dispersion disabled, with the spectrum of 3FGL J1748.0$-$2447 modeled as a power law, as a simple exponentially-cutoff power law, and as an exponentially-cutoff power law with the $b$ parameter allowed to vary. Following \citet{2PC}, we compared the best-fit likelihood value from the fit using a power law ($\mathcal{L}_{\rm pl}$) to that when using a simple cutoff ($\mathcal{L}_{\rm co}$ when $b=1$) to calculate $TS_{\rm cut}$ = $2(\ln(\mathcal{L}_{\rm co})-\ln(\mathcal{L}_{\rm pl}))$ = 207, significantly favoring the cutoff model over the power law. 
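For reference, the spectral shapes of Eqs.~(\ref{eq:pl}) and (\ref{eq:ecpl}) and the likelihood-ratio statistic just described can be written compactly as follows. This is only an illustrative sketch (the fits themselves were performed with the binned likelihood tools described above); the example evaluation uses the best-fit values of Table~\ref{tab:spec} and the 3FGL pivot energy:
\begin{verbatim}
# Spectral forms of Eqs. (eq:pl) and (eq:ecpl) and the likelihood-ratio statistic.
import numpy as np

def dnde_pl(E, N0, E0, Gamma):
    """Power law, Eq. (eq:pl)."""
    return N0 * (E / E0) ** (-Gamma)

def dnde_ecpl(E, N0, E0, Gamma, Ec, b=1.0):
    """Exponentially cutoff power law, Eq. (eq:ecpl); b = 1 is the simple cutoff."""
    return N0 * (E / E0) ** (-Gamma) * np.exp(-(E / Ec) ** b)

def ts(lnL_alternative, lnL_null):
    """Likelihood-ratio statistic, e.g. TS_cut = 2(lnL_co - lnL_pl)."""
    return 2.0 * (lnL_alternative - lnL_null)

# example: dN/dE at 1 GeV (units: MeV, cm^-2 s^-1 MeV^-1), Table (tab:spec) values
E = 1000.0
print(dnde_pl(E, N0=1.04e-11, E0=1280.38, Gamma=1.71),
      dnde_ecpl(E, N0=1.04e-11, E0=1280.38, Gamma=1.71, Ec=4610.0))
\end{verbatim}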
Similarly, we found $TS_{\rm{b free}}$ = $2(\ln(\mathcal{L}_{\rm{b free}})-\ln(\mathcal{L}_{\rm co}))$ = 4, where $\mathcal{L}_{\rm{b free}}$ is the best-fit likelihood when modeling the spectrum of 3FGL J1748.0$-$2447 as an exponentially-cutoff power law with the $b$ parameter free. As such, there is no preference for the fit with $b$ free and we use the results from the simple exponentially-cutoff power law, which are reported in Table~\ref{tab:spec}. In addition to the fit parameters from Eq.~(\ref{eq:ecpl}), Table~\ref{tab:spec} also includes the integrated photon ($F$) and energy ($G$) fluxes, from 0.1 to 300 GeV, derived from the best-fit models. Our best-fit energy flux agrees well with that of \citet{deMenezes}, who reported a value of $G_{100}$ = $(7.44\pm0.27)\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$ (rounded so that there are two significant figures in the error), over the energy range 0.1 to 100 GeV, using nine years of P8 data, and assuming a log-parabola spectral shape. A residual TS map of the region around 3FGL J1748.0$-$2447, using our best-fit $b=1$ model, did not reveal the need for adding any new sources to our model, even though the data sets we analyzed covered three more years than the data set used for the 3FGL catalog. \begin{deluxetable}{lc} \tablewidth{0pt} \tablecaption{LAT Spectral Fit Results \label{tab:spec}} \tablecolumns{2} \tablehead{\colhead{Parameter} & \colhead{Value}} \startdata $N_{0}$ (10$^{-11}$ cm$^{-2}$\,s$^{-1}$\,MeV$^{-1}$) & 1.04$\pm$0.40\\ $\Gamma$ & 1.71$\pm$0.04\\ $E_{\rm C}$ (GeV) & 4.61$\pm$0.35\\ $F_{100}$ (10$^{-8}$ cm$^{-2}$\,s$^{-1}$) & 9.68$\pm$0.52\\ $G_{100}$ (10$^{-11}$ erg\,cm$^{-2}$\,s$^{-1}$) & 7.79$\pm$0.22\\ $TS_{\rm cut}$ & 207\\ $TS_{\rm b free}$ & 4\\ \enddata \end{deluxetable} While our likelihood analyses did successfully converge, the fits of the entire region were formally bad. In particular, there were large residuals starting at $\sim$10 GeV, growing to larger discrepancies out to 300 GeV. The preliminary 8-year LAT source catalog\footnote{Available at \url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/}} with an improved Galactic diffuse model, P8R3 data, and using weighted likelihood \citep{4FGL} finds much better residuals in the region around Terzan~5. This catalog used a different functional form for the spectrum of the source associated with Terzan~5 (4FGL J1748.0$-$2446) and fixed $b=2/3$, but we can still compare the flux values. The source 4FGL J1748.0$-$2446 has a reported integral photon flux, above 1 GeV, of (1.26$\pm$0.03)$\times10^{-8}$ cm$^{-2}$ s$^{-1}$, which is larger than the value of (1.05$\pm$0.03)$\times10^{-8}$ cm$^{-2}$ s$^{-1}$ found in our analysis. Our best-fit $\Gamma$ value agrees with both previous studies \citep{Kong10, Abdo10}, within their quoted uncertainties. The best-fit $E_{\rm C}$ of \citet{Kong10} agrees with our value, within uncertainty, but that of \citet{Abdo10} is significantly lower. Using our P8 results, we find photon and energy fluxes over the 0.5 to 20 GeV energy range of $(2.3\pm 0.1)\times 10^{-8}$ cm$^{-2}$\,s$^{-1}$ and $(5.3\pm 0.1)\times 10^{-11}$ erg cm$^{-2}$\,s$^{-1}$. While both values are lower than those reported by \citet{Kong10}, they agree within uncertainties. Integrating above 0.2~GeV, our model yields photon and energy fluxes of $(5.4\pm 0.2)\times 10^{-8}$ cm$^{-2}$\,s$^{-1}$ and $(6.8\pm 0.2)\times 10^{-11}$ erg cm$^{-2}$\,s$^{-1}$. 
These values are also lower than those reported by \citet{Abdo10} but agree within uncertainties, and we note that the energy flux values \citep[often more reliable as noted by][]{2PC} agrees well. We produced spectral points by dividing the 0.1 to 300 GeV interval into 12 bins, equally sized in log$_{10}$ energy, and performing binned likelihood fits assuming a power-law form for the spectrum of 3FGL J1748$-$2447 with $\Gamma = 2$ and only the normalization parameters of other sources left free. We report a flux value, with uncertainty, for those bins where 3FGL J1748$-$2447 was detected with a point-source TS $\geq$ 9 ($\sim3\sigma$) and at least 4 predicted counts, otherwise a 95\% confidence-level flux upper limit is reported. The flux upper limits were calculated using the Bayesian method for energy bins with point-source TS $\leq$ 0 or with $<$ 4 predicted counts from 3FGL J1748$-$2447. For plotting and to produce the $E^{2}dN/dE$ points, we used the logarithmic mean of each energy bin (cf.\ Figure~\ref{fig:SED}). In Section~\ref{sec:model}, we model the \textit{Fermi} LAT spectrum as originating due to the cumulative pulsed CR by the embedded MSPs. In order to search for $\gamma$-ray pulsations, we obtained timing solutions for 33 of the pulsars in \ter (namely, PSRs J1748$-$2446aa, ab, ac, ae, af, ag, ah, ai, aj, ak, C, D, E, F, G, H, I, J, K, L, M, N, O, Q, R, S, T, U, V, W, X, Y, and Z) that were valid from before the launch of \textit{Fermi} until 2012 July (S.\ Ransom, personal communication\footnote{\url{https://www.cv.nrao.edu/~sransom/Ter5_index.html}}). We used the ephemerides for pulsars aj and ak from \citet{Cadelano18}. Using the best-fit maximum likelihood model, in which the spectrum of 3FGL J1748.0$-$2447 was modeled as a simple exponentially-cutoff power law, we calculated spectral weights for events within 2\degr\ of the best-fit position. For each event, the weight reflects the probability that the event originated from 3FGL J1748.0$-$2447. Use of these weights has been shown to enhance the sensitivity of $\gamma$-ray pulsation searches \citep{Kerr11}. We then used the timing solutions mentioned above to search for modulations in $\gamma$-ray events at the spin and orbital periods from known pulsars in \terp We tested for modulation at the spin period using the H test \citep{deJager89,deJager10}, modified to include spectral weights \citep{Kerr11}. For those pulsars in binary systems we used both the H test and the $Z^{2}_{m}$ test with $m$ = 2 harmonics when testing for modulation at the orbital period. When performing the search for orbital modulation, we corrected for exposure variations as described in \citet{JohnsonJ1227}. We tested for spin pulsations using both the full data set and only events up to the end of each ephemeris' validity interval. No significant modulation was detected from any pulsar for which we had a timing solution, with a maximum signal of 2.2$\sigma$. \subsection{H.E.S.S.\ Data} \label{sec:hess} H.E.S.S.\ discovered a VHE $\gamma$-ray source in the direction of \ter \citep{Abramowski11}. The integral flux above 440~GeV of the source was measured as $(1.2 \pm 0.3) \times 10^{-12}$~cm$^{-2}$\,s$^{-1}$ and its spectrum was best described by a single power law with index of $2.5 \pm 0.3_\mathrm{stat} \pm 0.2_\mathrm{sys}$. 
The VHE source is offset from the center of the GC by $4.0^\prime\pm1.9^\prime$ (about 7~pc at a distance of 5.9~kpc), with its size being characterized by widths from a 2D Gaussian fit of $9.6^\prime\pm2.4^\prime$ and $1.8^\prime\pm1.2^\prime$ for the major and minor axes (compared to the GC tidal radius of $R_{\rm t}=4.6^\prime$). The source is oriented 92\dgr$\pm$6\dgr\ westwards from north\footnote{This means that the H.E.S.S.\ source is much closer to the GC core than the radio ``Region~11'', and it only slightly overlaps with its inner edge.}. A chance coincidence between \ter and an unrelated VHE $\gamma$-ray source is rather unlikely ($\sim 10^{-4}$). \citet{Ndiyavala18} reanalyzed \ter data and obtained a significance of $6\sigma$ for standard and loose cuts, and $7.1\sigma$ for hard cuts, which compares well with that of \citet{Abramowski11}, who obtained a significance of $5.3\sigma$. \section{MODELING THE BROADBAND SED}\label{sec:model} \subsection{Parameter Constraints from General Considerations} In this section, we derive general constraints on the spatial diffusion coefficient $\kappa$ (for simplicity, we assume that this coefficient is only a function of particle energy, not of space) and the cluster $B$-field. As a first approach, the Bohm value has been used in the past to model the particle diffusion \citep[][]{BS07,Venter08}: \beq \kappa_{\rm Bohm} = \frac{cE_{\rm e}}{3eB} = 3.3\times10^{25}E_{\rm TeV}B_{-6}^{-1}~{\rm cm}^2\,{\rm s}^{-1},\label{eq:Bohm} \eeq with $E_{\rm TeV}=E_{\rm e}/{\rm 1~TeV}$ the particle energy and $B_{-6}=B/1~\mu{\rm G}$. By invoking a containment argument, one may obtain a constraint on this coefficient: since we observe VHE $\gamma$-ray emission up to $E_\gamma\sim10$~TeV, one can write that the escape time, estimated here as the diffusion time, should exceed the typical timescale for IC emission: \beq \tau_{\rm esc} > \tau_{\rm IC}. \eeq This leads to \beq \frac{R^2}{6\langle\kappa\rangle} > \frac{E_{\rm e}}{\dot{E}_{\rm IC}}. \eeq Let us concentrate on the optical soft-photon background with photons at $T\sim4500$~K (i.e., having an average energy $\langle\epsilon\rangle\sim1$~eV). For very energetic leptons, we have to take Klein-Nishina effects into account when calculating the IC loss rate. We thus use the expression of \citet{Ruppel10} \beq \dot{E}_{\rm IC} \approx \frac{4\sigma_{\rm T}cu}{3}\frac{\gamma^2_{\rm e}\gamma^2_{\rm KN}}{\gamma^2_{\rm e}+\gamma^2_{\rm KN}}, \eeq with $\sigma_{\rm T}=6.65\times10^{-25}$~cm$^2$ the Thomson cross section, $u$ the average soft-photon energy density, and \beq \gamma_{\rm KN}\equiv\frac{3\sqrt{5}}{8\pi}\frac{m_{\rm e}c^2}{k_{\rm B}T} \eeq the critical Klein-Nishina Lorentz factor. If the particle Lorentz factor satisfies $\gamma^2_{\rm e}\gg\gamma^2_{\rm KN}$, the IC loss rate reduces to \beq \dot{E}_{\rm IC} \approx \frac{4\sigma_{\rm T}cu\gamma^2_{\rm KN}}{3}, \eeq yielding \begin{eqnarray} \tau_{\rm IC} & \approx & 6\times10^{12}\left(\frac{E_{\rm TeV}T_{4500}^2}{u_{50}}\right)~{\rm s}\nonumber \\ & \approx & 2\times10^5\left(\frac{E_{\rm TeV}T_{4500}^2}{u_{50}}\right)~{\rm yr}, \end{eqnarray} with $u_{50} \equiv u/(50~{\rm eV/cm^3})$ and $T_{4500}=T/4500~{\rm K}$. We use $u_{50}$ to scale our results since it reflects a spatially-averaged value for the energy density; see the corresponding energy-density figures of \citet{BS07} and \citet{Prinsloo13}. 
If we set $R\sim R_{\rm t}\sim10$~pc, we find \beq \langle\kappa\rangle < 2.6\times10^{25}\left(\frac{R^2_{10}u_{50}}{E_{\rm TeV}T_{4500}^2}\right)~{\rm cm^2 s^{-1}},\label{eq:1} \eeq with $R_{10}\equiv R/10~{\rm pc}$ and $R_{\rm t}$ the tidal radius. This upper limit is similar with the value of the Bohm coefficient at $E_{\rm e}=1$~TeV. \citet{Kopp13} inferred values for $\kappa$ that are slightly larger at 1~TeV than the Bohm value (for $B_{-6}\sim5$) when fitting the X-ray surface brightness profile, although they assumed an energy dependence $\kappa\propto E_{\rm e}^{0.6}$. They also noted that by assuming Bohm diffusion they could fit the X-ray surface-brightness data, and that the degeneracy in diffusion index and normalization may be broken by using spatial data in a different waveband as well as more spectral data. The caveat is that both the spatial and spectral fit should be reasonable. While \citet{Kopp13} could fit the X-ray surface brightness profile, their predicted SED did not match the data. We thus update their calculation so as to fit both these quantities (Section~\ref{sec:leptonic}). Additionally, one may argue that since we observe IC emission up to $E_\gamma\sim10$~TeV, we must have \beq \tau_{\rm SR} \gtrsim \tau_{\rm IC}. \eeq This implies (at those high energies) that \beq \dot{E}_{\rm SR} \lesssim\dot{E}_{\rm IC}, \eeq which yields a limit on the magnetic field \beq B_{-6} \lesssim 8\left(\frac{\sqrt{u_{50}}}{T_{4500}E_{\rm TeV}}\right). \eeq Therefore, from the simple arguments above, we find typical values of $B_{-6}\sim10$ and $\langle\kappa\rangle\sim5\times10^{25}~{\rm cm^2 s^{-1}}$ around $E_{\rm TeV}\sim1$, similar to what was found by \citet{Kopp13}. At these typical cluster $B$-fields, the LESR spectrum should peak around \beq E_\gamma = 0.29h\nu_{\rm crit} \approx 2\times10^{-5} B_{-6} E_{\rm TeV}^2~{\rm keV},\label{eq:SR_freq} \eeq and for $B_\perp\sim5~\mu$G and $E_{\rm e}\sim 10$~TeV, this component should peak around $\sim0.01~$keV, with $B_\perp=B\sin\alpha^\prime$ and $\alpha^\prime$ the pitch angle. This is consistent with our findings in the next section. \subsection{Leptonic Modeling of the Broadband SED of \ter}\label{sec:leptonic} \subsubsection{LESR and IC Components} We present new spectral fits\footnote{The main aim of this paper is to ascertain whether we can elucidate the broadband spectral emission properties of Terzan 5 as well as those of the underlying sources that inject particles into Terzan 5. However, we realize that the energy-dependent morphology of this cluster is quite complex, so much so that it challenges the idea of a single (collective) particle population injected by the MSPs being responsible for all spectral emission components originating from partially-overlapping spatial regions of different extents. Yet, to facilitate usable conclusions to be drawn from the current data, we do invoke a single population and study the source energetics, while deferring a study of spatial properties of Terzan 5 to future work.} to the SED of \ter using the model of \citet{Kopp13} as shown in Figure~\ref{fig:SED} (blue dashed lines). The model includes a spatial dimension, refined stellar soft-photon energy density profile and full particle transport, taking diffusion and radiation losses into account with the assumptions of spherical symmetry and a steady-state regime. 
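The order-of-magnitude constraints derived in the previous subsection are easily reproduced numerically; the following minimal sketch (cgs constants; parameter values as assumed above) evaluates $\gamma_{\rm KN}$, $\tau_{\rm IC}$, the limits on $\langle\kappa\rangle$ and $B$, and the LESR peak energy of Eq.~(\ref{eq:SR_freq}):
\begin{verbatim}
# Order-of-magnitude check of the constraints on kappa and B, and of the LESR peak.
import numpy as np

# cgs constants
c, h, k_B = 2.998e10, 6.626e-27, 1.381e-16
m_e, e, sigma_T = 9.109e-28, 4.803e-10, 6.652e-25
pc, eV = 3.086e18, 1.602e-12

T   = 4500.0       # K, soft-photon temperature
u   = 50.0 * eV    # erg/cm^3, average soft-photon energy density (u_50 = 1)
E_e = 1e12 * eV    # 1 TeV
R   = 10.0 * pc    # ~ tidal radius

gamma_e  = E_e / (m_e * c**2)
gamma_KN = (3*np.sqrt(5)/(8*np.pi)) * m_e*c**2 / (k_B*T)   # ~3.5e5
Edot_IC  = (4/3) * sigma_T * c * u * gamma_KN**2           # deep Klein-Nishina limit
tau_IC   = E_e / Edot_IC                                   # ~6e12 s
kappa_max = R**2 / (6.0 * tau_IC)                          # Eq. (eq:1), ~2.6e25 cm^2/s
B_max     = np.sqrt(8*np.pi*u) * gamma_KN / gamma_e        # from U_B*g_e^2 <= u*g_KN^2

# LESR peak energy, Eq. (eq:SR_freq), for B_perp = 5 uG and E_e = 10 TeV
gamma10 = 1e13 * eV / (m_e * c**2)
nu_crit = 3 * gamma10**2 * e * 5e-6 / (4*np.pi*m_e*c)
E_peak_keV = 0.29 * h * nu_crit / (1e3 * eV)               # ~0.01 keV

print(f"tau_IC ~ {tau_IC:.1e} s, kappa < {kappa_max:.1e} cm^2/s")
print(f"B < {B_max/1e-6:.1f} uG, LESR peak ~ {E_peak_keV:.3f} keV")
\end{verbatim}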
\begin{figure*} \includegraphics[width=0.85\textwidth]{CORRECTED_SED_TER5.eps} \centering \caption{Different spectral components for \ter predicted by the leptonic models of \citet{Kopp13} and \citet{Harding08,HK15}. Using the first model, we calculate the low-energy SR (LESR) and VHE IC components (integrated over all $r_{\rm s}$; dashed blue lines). We assumed $E_{\rm e,min} = 9\times10^{-3}$~TeV, $E_{\rm e,max} = 10$~TeV, $Q_{0} = 1.4\times10^{34}~{\rm erg}^{-1}~{\rm s}^{-1}$, $B = 4.0\,\mu \rm G$, $\Gamma = 1.5$, and $\kappa = 7\times10^{-5}\, \rm kpc^{2}\,Myr^{-1}\approx2\times10^{25}~{\rm cm}^2~{\rm s}^{-1}$. We used a distance of $d=5.9\,{\rm kpc}$, core radius $R_{\rm c}=0.15^\prime=0.26\,{\rm pc}$, half-mass radius $R_{\rm hm}=0.52^\prime=0.89\,{\rm pc}$, and tidal radius $R_{\rm t}=4.6^\prime=7.9\,{\rm pc}$. The HESR and CR components (red lines) are predictions using the model of \citet{Harding08,HK15} for $\langle\alpha\rangle = 45^\circ$, $\langle\zeta\rangle = 60^\circ$, $\langle P \rangle = 7.7\times 10^{-3}$~s, and $\langle B_{\rm s}\rangle=5.8\times10^9$~G. We also indicate \textit{Chandra} \citep{Eger10}, H.E.S.S. \cite{Abramowski11}, and radio data \citep[``Region~11'' as defined by][]{Clapson2011}. The uncertainties in our LAT points do not reflect possible systematic errors on the Galactic diffuse emission model.} \label{fig:SED} \end{figure*} In Figure~\ref{fig:SED} we indicate radio data (labeled ``Effelsberg'') associated with ``Region~11'' (a prominent, large-scale, asymmetric feature offset from the center) as defined by \citet{Clapson2011}. We fit these points with our predicted LESR component, in keeping with the suggestion by \citet{Clapson2011} that the flux from this region may be due to unpulsed SR from leptons that were injected by the MSPs into the GC and diffused throughout the cluster. Our predicted LESR component is much below the estimated BB flux level in the optical band (see Section~\ref{optical}). As mentioned in Eq.~(\ref{eq:SR_freq}), we expect this component to peak around 1~keV for particle energies $E_{\rm e}\sim100$~TeV and $B\sim10\,\mu$G. This led \citet{Kopp13} to fit the X-ray surface brightness profile measured by \textit{Chandra} in order to constrain the diffusion coefficient to $\kappa\sim 3.3\times10^{25}$~cm$^2$\,s$^{-1}$ at 1~TeV, similar to Eq.~(\ref{eq:Bohm}) and Eq.~(\ref{eq:1}). However, although their model prediction reproduced the flux level at a few keV, they could not fit the spectral slope of the observed data. This implies that the observed diffuse X-ray emission may be due to a different spectral component. We therefore now choose the maximum particle energy $E_{\rm e,max}=10$~TeV and a slightly lower $B$-field of $B=4\,\mu$G so that our new LESR component peaks around $E_\gamma\sim0.01$~keV (as we found in Section~\ref{sec:model}) and thus cuts off below the \textit{Chandra} data. Furthermore, the low-energy tail of our predicted IC component satisfies the new \textit{Fermi} data and upper limits and we also reproduce the H.E.S.S.\ data. \subsubsection{Primary CR and Pair High-energy SR Components} We use the model of \citet{HK15} to fit the GeV and keV data. Similar to previous studies \citep[e.g.,][]{HUM05,Venter09_GC,Zajczyk13}, we fit the \textit{Fermi} LAT data using the cumulative primary CR component of pulsed $\gamma$-ray emission originating in the MSP magnetospheres (GeV component indicated by a solid red line and labeled ``CR'' in Figure~\ref{fig:SED}). 
This has been a standard interpretation for the GeV spectrum measured by the LAT for several GCs \citep{Abdo10}. Following the idea of \citet{Kopp13}, we propose that the \textit{Chandra} data indicate the presence of a ``new'' high-energy SR (HESR) component that has not been modeled in detail in this context\footnote{As a practical measure, we attribute the \textit{Chandra} data solely to collective, pulsed, non-thermal, magnetospheric pulsar emission. There could be contributions by other sources, but we do not know \textit{a priori} the properties of unresolved stellar members hosted by Terzan 5.} before: the cumulative pulsed SR from pairs generated within the magnetospheres of the host MSPs, radiating at altitudes that are a substantial fraction of the light-cylinder radius ($R_{\rm LC}=c/\Omega$, with $\Omega$ the angular speed; this is the radius where the co-rotational speed equals $c$) through cyclotron resonant absorption of radio photons \citep[cf.][]{Harding08}. Given the much higher local $B$-field (e.g., the magnetospheric field at the MSPs' light cylinder may reach $B_{\rm HESR}\sim10^6$~G for the most energetic ones\footnote{The average $B$-field at the light cylinder may be closer to $\sim10^4$~G; however, the pair SR component is likely dominated by the MSPs with the highest spin-down luminosities and $B$-fields.} vs.\ the much lower GC field $B_{\rm LESR}\sim10^{-5}$~G) and the much smaller average pitch angle ($\alpha_{\rm HESR} \sim0.1$ vs.\ $\alpha_{\rm LESR}\sim\pi/2$ radians) as well as different particle energies, the cutoff energy of this new component is much higher than that of the LESR spectrum: \beqa \frac{E_{\rm HESR,cut}}{E_{\rm LESR,cut}} & \sim & \left(\frac{\gamma_{\rm HESR}}{\gamma_{\rm LESR}}\right)^2\frac{B_{\rm HESR}\sin\alpha_{\rm HESR}}{B_{\rm LESR}\sin\alpha_{\rm LESR}}\\ & \sim & \left(\frac{10^4}{10^7}\right)^2\frac{10^6~{\rm G}\times10^{-1}}{10^{-5}~{\rm G}}\sim 10^4. \eeqa This simple scaling predicts a cutoff $E_{\rm HESR,cut}\lesssim100$~keV, and thus provides us with a low-energy tail that might fit the X-ray data. This idea is also supported by observations of sources embedded in 47~Tucanae: \citet{Bogdanov08} noted that even though most of the observed MSPs exhibit soft thermal spectra, three of them manifest hard power-law components. These components may plausibly be attributed to binary shock emission or magnetospheric SR. It is furthermore supported by the detection of hard non-thermal X-ray emission from a number of field MSPs. As a proof of principle, we now calculate model spectra invoking a cumulative pulsed HESR component originating in the MSP magnetospheres to fit the \textit{Chandra} data (keV component indicated by a solid red line and labeled as ``HESR'' in Figure~\ref{fig:SED}). 
We use a force-free $B$-field in the inertial observer frame, choosing a slot gap width of 0.03$\Theta_{\rm PC}$, with $\Theta_{\rm PC}$ the polar cap angle (the inner and outer angular boundaries of the gap were set at open-volume coordinates $r_{\rm ovc}\in(0.90,0.93)$, where the $r_{\rm ovc}$ coordinate labels self-similar rings, $r_{\rm ovc}=0$ being the magnetic pole and $r_{\rm ovc}=1$ being the polar cap rim; see \citealt{Dyks04}), and a constant $E$-field from the MSP surface to $2R_{\rm LC}$, set by an inverse acceleration length scale of $R_{\rm acc}=d\gamma_{\rm e}/dl=2$ cm$^{-1}$ (i.e., $E_{||}=R_{\rm acc}m_{\rm e}c^2/e$, with $\gamma_{\rm e}$ the particle Lorentz factor and $dl$ the step length along the particle trajectory, $m_{\rm e}$ and $e$ the electron mass and charge). We divide the fraction of stellar surface covered by $B$-field line footpoints that are within the gap using 4 self-similar rings and 72 azimuthal divisions. We choose an average pulsar period of $\langle P\rangle = 7.7$~ms, and by fixing the average surface $B$-field to $\langle B_{\rm s}\rangle=5.8\times10^9$~G and moment of inertia $\langle I \rangle=1.56\times10^{45}$~g\,cm$^2$, we obtain $\langle \dot{P}\rangle \sim 7\times10^{-19}$\,s\,s$^{-1}$ and $\langle\dot{E}\rangle\sim9.08\times10^{34}$~erg\,s$^{-1}$. The latter value may include significant contributions from the more energetic MSPs. Nonetheless, we take these values as representative\footnote{Unfortunately, there is a large uncertainty in the MSP population's properties. While a full Monte Carlo investigation of the SED may be preferable from a first-principles point of view, this will introduce many more uncertainties and a large range for the SED components' shapes and levels, so this will probably not lead to any conclusive answers. We therefore deem the approach of studying the behaviour of an ``average MSP'' as the most practical, although we are cognisant of the fact that a particularly powerful MSP may skew the results.} of the pulsars in \terp We furthermore use a polar cap pair spectrum calculated for an offset-polar-cap $B$-field \citep{Harding11a,Harding11b,Barnard16} with an offset parameter of $\epsilon = 0.6$. We choose an average magnetic inclination angle of $\langle\alpha\rangle=45$\dgr\ and average observer angle of $\langle\zeta\rangle=60$\dgr\ (for both HESR and CR components). See \citet{HK15} for details. The number of visible $\gamma$-ray pulsars ($N^{\rm \gamma}_{\rm vis}$) is constrained by the primary CR flux level, for a given set of model parameters. Alternatively, if we fix $N^{\rm \gamma}_{\rm vis}=N^{\rm rad}_{\rm vis}=37$ to the number of visible radio pulsars (since nearly all $\gamma$-ray MSPs currently detected by \textit{Fermi} are radio-loud), we may constrain other parameters such as the gap width and average pulsar geometry $\alpha$ and $\zeta$, or $\langle P\rangle$ and $\langle\dot{P}\rangle$. Unfortunately, it is difficult to break this degeneracy using X-ray data, since one may expect that $N^{\rm X}_{\rm vis}\lesssim N^\gamma_{\rm vis}$ if their X-ray beams are slightly narrower than the $\gamma$-ray ones, and equality may not hold exactly. One may additionally write that $N^{\rm X}_{\rm tot} = N^{\rm X}_{\rm vis}+N^{\rm X}_{\rm invis}\geq N^{\rm rad}_{\rm vis}=37$ and $N^{\rm \gamma}_{\rm vis}\geq 37$. The product $M_\pm N^{\rm X}_{\rm vis}$ is set by the HESR flux level, so these two parameters are degenerate. 
Using the HESR (\textit{Chandra}) flux level, we constrain the product $N^{\rm X}_{\rm vis}\langle M_\pm\rangle\sim1.9\times10^4$, with $\langle M_\pm\rangle$ the average number of pairs produced per primary extracted from the polar cap, per pulsar (the average electron-positron pair multiplicity). If we take $N^{\rm X}_{\rm vis}\approx 35$, we obtain $\langle M_\pm\rangle\approx540$. However, this value depends crucially on the assumptions of the magnetospheric model: more optimistic assumptions about the electrodynamics (e.g., a higher $B$-field or current, which will influence the particle transport) may lead to a larger single-MSP spectrum, and yield a lower required value for the product $N^{\rm X}_{\rm vis}\langle M_\pm\rangle$, thus lowering the inferred $\langle M_\pm\rangle$. Previously, \citet{Kopp13} found an optimal source strength of $Q_0 \sim 6\times10^{33}$~erg$^{-1}$s$^{-1}$ when fitting the LESR and IC components. The value of $Q_0$ is usually constrained by assuming a parametric form for the particle injection spectrum \begin{equation} Q(E_{\rm e}) = Q_0E_{\rm e}^{-\Gamma} \end{equation} and using conservation of charge and energy per unit time (i.e., conservation of current and luminosity; \citealt{Buesching08,Venter15c}): \begin{eqnarray} \int_{E_{\rm e,min}}^{E_{\rm e,max}}Q(E_{\rm e})\,dE_{\rm e} & = & N_{\rm MSP,tot}\times\nonumber\\ & & \left(\langle M_\pm\rangle + 1\right)\langle\dot{n}_{\rm GJ}\rangle\\ \int_{E_{\rm e,min}}^{E_{\rm e,max}}E_{\rm e}Q(E_{\rm e})\,dE_{\rm e} & = & N_{\rm MSP,tot}\eta_{\rm p}\langle\dot{E}\rangle, \end{eqnarray} with $\langle\dot{n}_{\rm GJ}\rangle=4\pi^2B_{\rm s}R^3/(ceP^2)\propto \langle\dot{E}\rangle^{1/2}$ the average Goldreich-Julian rate of particles injected per second for a pulsar period $P$, surface magnetic field $B_{\rm s}$ and stellar radius $R$ \citep{GJ69}, and $\eta_{\rm p}$ the efficiency of converting the average spin-down luminosity to particle power. The ``+1'' in the first equation above represents the contribution from primary particles. The above system of equations may have up to 10 free parameters, implying a large parameter degeneracy. We found an optimal value of $Q_0 \sim 1.4\times10^{34}$~erg$^{-1}$s$^{-1}$ (Fig.~\ref{fig:SED}) by fitting the unpulsed spectral components (for particular choices of other free parameters, e.g., $\kappa$ and $B$, and using $\langle\dot{E}\rangle=9.08\times10^{34}$~erg\,s$^{-1}$ and $\eta_{\rm p}=3\%$ and $N_{\rm MSP,tot}\sim40$). This leads to a constraint on the average multiplicity: \begin{eqnarray} \langle M_\pm\rangle & = & \frac{Q_0\left(E_{\rm e,max}^{1-\Gamma}-E_{\rm e,min}^{1-\Gamma}\right)}{(1-\Gamma)N_{\rm MSP,tot}\langle\dot{n}_{\rm GJ}\rangle} - 1\nonumber\\ & \approx & 20\left(\frac{\eta_{\rm p}}{3\%}\right)\!\!\!\left(\frac{2.7\times10^{32}~{\rm s}^{-1}}{\langle\dot{n}_{\rm GJ}\rangle}\right)\left(\frac{\langle\dot{E}\rangle}{9\times10^{34}~\rm erg\,s^{-1}}\right)\\ & \propto & \langle\dot{E}\rangle^{1/2},\!\!\!\!\!\!\!\!\!\label{eq:M1} \nonumber \end{eqnarray} with the value of $\langle\dot{n}_{\rm GJ}\rangle$ reflecting the choice for the average $\langle P \rangle$ and $\langle B_{\rm s}\rangle$ of the MSPs as mentioned earlier, for consistency. This estimate of $\langle M_\pm\rangle\approx20$ is quite a bit lower than the previous one of $\langle M_\pm\rangle\approx540$ as inferred from the HESR component. There are ways to mitigate this difference, given the uncertainty and degeneracy in several model parameters. The estimate of $\langle M_\pm\rangle$ using the unpulsed spectral components may be raised to $\langle M_\pm\rangle\approx60$ by using $E_{\rm min}\sim40$ GeV, $E_{\rm max}\sim7$ TeV, and $\Gamma = 1.6$, without significantly changing the SED. 
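For reference, the baseline estimate of Eq.~(\ref{eq:M1}) can be reproduced with a few lines; this is a sketch using the parameter values quoted above and in the caption of Figure~\ref{fig:SED}:
\begin{verbatim}
# Numerical check of the injection normalization and Eq. (eq:M1).
erg_per_TeV = 1.602       # 1 TeV in erg
Q0      = 1.4e34          # erg^-1 s^-1
Gamma   = 1.5
E_min   = 9e-3 * erg_per_TeV   # 9 GeV, in erg
E_max   = 10.0 * erg_per_TeV   # 10 TeV, in erg
N_MSP   = 40
ndot_GJ = 2.7e32          # s^-1, for <P> = 7.7 ms and <B_s> = 5.8e9 G

# total injection rate: integral of Q0 * E^-Gamma dE between E_min and E_max
Ndot_tot = Q0 * (E_max**(1 - Gamma) - E_min**(1 - Gamma)) / (1 - Gamma)
M_pm = Ndot_tot / (N_MSP * ndot_GJ) - 1.0
print(f"Ndot_tot ~ {Ndot_tot:.2e} s^-1, <M_pm> ~ {M_pm:.0f}")   # ~20
\end{verbatim}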
Next, the discrepancy can be lowered to a factor $\sim4.5$ by increasing $\langle I \rangle$ by a factor of $\sim2$, since $\langle M_\pm\rangle \propto \dot{E}^{1/2}$ for the unpulsed case, while $\langle M_\pm\rangle \propto \langle\dot{E}\rangle^{-1/2}$ for the pulsed case. This, however, raises $Q_0 \propto \langle\dot{E}\rangle$ by a factor of $\sim2$ so that LESR overshoots the data slightly, but this effect can then be mitigated by choosing $B \approx1~\mu$G. Lastly, the remaining discrepancy of a factor of $\sim4.5$ can be ameliorated by assuming a larger value for $\epsilon \sim~0.7$ (implying more pairs) and a larger gap width (say, increasing the upper boundary to $r_{\rm ovc}\sim0.96$, implying a larger active area on the stellar surface and thus lowering the demand on $\langle M_\pm\rangle$). There are also uncertainties in the angles $\langle\alpha\rangle$ and $\langle\zeta\rangle$ that may have a significant effect on the HESR flux. Lastly, using average values $\langle B_{\rm s}\rangle$ and $\langle P\rangle$ leads to average values for $\langle \dot{n}_{\rm GJ}\rangle$ and $\langle\dot{E}\rangle$, and this introduces further uncertainty. It is thus possible to pick (non-unique combinations of) values for some model parameters that would make the two estimates of $\langle M_\pm\rangle$ (using the pulsed and unpulsed SED components) consistent with each other, without violating the observed SED. The actual value of $M_\pm$ for MSPs is quite uncertain. Polar cap pair cascades in a pure dipole field give very low values of $M_\pm$ for the bulk of MSPs, which prompted the suggestion that distortions of the B-field near the neutron star could increase $M_\pm$ \citep{Harding11b} but the magnitude and structure of such distortions are not known. Comparing results of particle-in-cell simulations \citep{Kala18} with $\gamma$-ray spectral cutoffs seen in \textit{Fermi} pulsars can give estimates of MSP $M_\pm$ needed to screen the global electric fields. This study indicates that the estimated MSP $M_\pm$ span a large range from $1 - 10^3$. \subsubsection{Balancing the Energetics of the MSP Population} Our model provides reasonable fits to the \textit{Chandra} and \textit{Fermi} data for typical model parameters. However, one also has to consider whether this scenario is plausible in terms of energetics and the sensitivity of \textit{Chandra}, i.e., would \textit{Chandra} have seen these ``unresolved MSPs'' postulated by the model to explain the diffuse X-ray flux seen by \citet{Eger10}, or can one indeed explain the observed SR flux by a reasonable number of visible and invisible (unresolved) MSPs? The answer to this question lies in the (uncertain) population properties and emission energetics of the MSPs. We investigate this question by taking two approaches below. From the \textit{Chandra} data analysis, we can obtain three constraints. \citet{Eger10} assume a point-source sensitivity of $\sim2\times10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$ in the 0.5 $-$ 7.0~keV band. This leads to the first constraint of the minimum detectable luminosity of (i) $L_{\rm X, Chandra}\sim 7\times10^{30}$~erg\,s$^{-1}$ for their assumed distance of $d=5.5$~kpc. This is similar to the value of $L_{\rm X,Chandra}\sim (1-3)\times10^{31}$~erg\,s$^{-1}$ for an assumed distance of $d=8.7$~kpc found by \citet{Heinke06}. Let us adopt the first value. 
Second, \citet{Eger10} note that the total observed unabsorbed diffuse excess luminosity\footnote{We note that the power-law fit to the data implies that the visible non-thermal luminosity is $L_{\rm X,vis}=8.52\times 10^{32}$~erg\,s$^{-1}$ \citep{Eger10}, using data from annuli lying between $55^{\prime\prime}$ and $174^{\prime\prime}$. By integrating our predicted $E_\gamma\,dN/dE_\gamma$ HESR spectrum in the 1 $-$ 7~keV band, we find $L_{\rm X,HESR}\sim5.5\times10^{32}$~erg\,s$^{-1}$. This is close to the latter value, with the discrepancy explained by the fact that the model does not perfectly match the data in terms of the spectral slope. However, this power-law luminosity is a factor $\sim2$ lower than the total observed luminosity as noted in the main text, which is also the number quoted by \citet{Eger10} in their interpretation section. We decided to use the higher value, following \citet{Eger10}, and note that if we use the lower value, the solutions in Table~\ref{tab:pop} have similar best-fit parameters but lower MSP numbers, reflecting the lower value of $L_{\rm X,vis}$ in this case.} is $L_{\rm X,tot}=2\times10^{33}$~erg\,s$^{-1}$, and estimate that the contribution of unresolved point sources\footnote{We use the label ``invisible'' in what follows to refer to those pulsars that have too low a spin-down luminosity to be detectable as single point sources by \textit{Chandra}, but that may contribute to the cumulative unresolved point-source luminosity as a population of less energetic pulsars. As before, we discard the contribution of other source classes to this unresolved luminosity.} in the $1^\prime - 3^\prime$ region is $7\times10^{31}$~erg\,s$^{-1}$. We thus set (ii) $L_{\rm X,vis}=2\times10^{33}$~erg\,s$^{-1}-7\times10^{31}$~erg\,s$^{-1}=1.93\times10^{33}$~erg\,s$^{-1}$ and (iii) $L_{\rm X,invis}=7\times10^{31}$~erg\,s$^{-1}$. In order to convert X-ray luminosities to pulsar spin-down values, one needs an efficiency factor $\eta_{\rm X}$: \begin{eqnarray} L_{\rm X,vis} & = & \eta^{\rm X}_{\rm vis}N^{\rm X}_{\rm vis}\langle \dot{E} \rangle_{\rm vis}\\ L_{\rm X,invis} & = & \eta^{\rm X}_{\rm invis}N^{\rm X}_{\rm invis}\langle \dot{E}\rangle_{\rm invis},\label{eq:balance} \end{eqnarray} with the total number of MSPs $N^{\rm X}_{\rm tot}=N^{\rm X}_{\rm vis} + N^{\rm X}_{\rm invis}\geq N^{\rm rad}_{\rm vis}=37$. This system of equations is highly underconstrained. To simplify this, one may assume that the whole population of MSPs may be characterized by a single $\eta_{\rm X}=\eta^{\rm X}_{\rm vis}=\eta^{\rm X}_{\rm invis}$. Dividing the former equation by the latter then yields the following constraint: \begin{equation} N^{\rm X}_{\rm vis}\langle \dot{E} \rangle_{\rm vis} = k_{\rm L}N^{\rm X}_{\rm invis}\langle \dot{E}\rangle_{\rm invis}, \end{equation} with $k_{\rm L} = L_{\rm X,vis}/L_{\rm X,invis}$ being a constant. While there is some degeneracy, this constraint may, e.g., be satisfied for the following choices: $\langle \dot{E} \rangle_{\rm vis}=9.08\times10^{34}$~erg\,s$^{-1}$, $\langle \dot{E} \rangle_{\rm invis}=8\times10^{33}$~erg\,s$^{-1}$, $N^{\rm X}_{\rm vis}=41$ and $N^{\rm X}_{\rm invis}=17$ (implying $\eta_{\rm X}=0.05\%$). 
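This balance is straightforward to verify numerically; the following Python sketch checks the example combination quoted above and the X-ray efficiency $\eta_{\rm X}$ that it implies:
\begin{verbatim}
# Check of N_vis <Edot>_vis = k_L N_invis <Edot>_invis for one example
# parameter choice, and the X-ray efficiency eta_X that it implies.
L_X_VIS, L_X_INVIS = 1.93e33, 7.0e31       # erg/s
k_L = L_X_VIS / L_X_INVIS                  # ~27.6

N_vis, Edot_vis = 41, 9.08e34              # erg/s
N_invis, Edot_invis = 17, 8.0e33           # erg/s

lhs = N_vis * Edot_vis
rhs = k_L * N_invis * Edot_invis
eta_X = L_X_VIS / lhs

print(f"lhs = {lhs:.2e}, rhs = {rhs:.2e} erg/s (agree to <1%)")
print(f"implied eta_X ~ {100 * eta_X:.2f}%")   # ~0.05%
\end{verbatim}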
If we adopt $\langle \dot{E} \rangle_{\rm vis}=1.8\times10^{34}$~erg\,s$^{-1}$ \citep[e.g.,][]{Abdo10}, the following values satisfy the constraint above: $\langle \dot{E} \rangle_{\rm invis}=10^{33}$~erg\,s$^{-1}$, $N^{\rm X}_{\rm vis}=23$ and $N^{\rm X}_{\rm invis}=15$ (implying $\eta_{\rm X}=0.5\%$); alternatively, we can set $\langle \dot{E} \rangle_{\rm vis}=10^{34}$~erg\,s$^{-1}$, obtaining $\langle \dot{E} \rangle_{\rm invis}=2.8\times10^{32}$~erg\,s$^{-1}$, $N^{\rm X}_{\rm vis}=19$ and $N^{\rm X}_{\rm invis}=25$ (implying $\eta_{\rm X}=1\%$). These numbers seem reasonable and adhere to the constraint that $N^{\rm X}_{\rm tot} = N^{\rm X}_{\rm vis} + N^{\rm X}_{\rm invis}\geq N^{\rm rad}_{\rm vis}=37$. Thus, we consider our assumption of attributing the X-ray emission to the cumulative pair SR, as used in the previous section, plausible. In an attempt to perform a more robust analysis, potentially obtain stronger constraints on the MSP population and break some parameter degeneracies, we consider a parametrized pulsar spin-down luminosity function $N_{\rm MSP}(>\!\!\dot{E})\propto \dot{E}^{-\gamma_{\rm L}}$ \citep{Johnston1996}. We will use this to balance the required X-ray energetics by assuming that $\dot{E}_{\rm vis}\propto L_{\rm X,vis}$ and $\dot{E}_{\rm invis}\propto L_{\rm X,invis}$. This implies $dN/d\dot{E}=N^\prime_0(\dot{E}/\dot{E}_0)^{-(\gamma_{\rm L} + 1)}$, with $N^\prime_0$ a normalization constant. \citet{Johnston1996} infer a typical GC value of $\gamma_{\rm L}\sim0.5$, while \citet{Heinke06} find $\gamma_{\rm L}\sim0.4 - 0.7$ for \terc depending on the energy band. By defining $\dot{E}_{\rm b} = L_{\rm X,Chandra}/\eta_{\rm X}$, one can next recover the following quantities: \begin{eqnarray} N^{\rm X}_{\rm tot} & = &\int_{\dot{E}_{\rm min}}^{\dot{E}_{\rm max}}\left(\frac{dN}{d\dot{E}}\right)\,d\dot{E}, \\ N^{\rm X}_{\rm vis} & = &\int_{\dot{E}_{\rm b}}^{\dot{E}_{\rm max}}\left(\frac{dN}{d\dot{E}}\right)\,d\dot{E}, \\ N^{\rm X}_{\rm invis} & = & \int_{\dot{E}_{\rm min}}^{\dot{E}_{\rm b}}\left(\frac{dN}{d\dot{E}}\right)\,d\dot{E}, \\ \langle\dot{E}\rangle_{\rm vis} & = & \frac{1}{N^{\rm X}_{\rm vis}}\int_{\dot{E}_{\rm b}}^{\dot{E}_{\rm max}}\dot{E}\left(\frac{dN}{d\dot{E}}\right)\,d\dot{E},\label{eq:Edotvis}\\ \langle\dot{E} \rangle_{\rm invis} & = & \frac{1}{N^{\rm X}_{\rm invis}}\int_{\dot{E}_{\rm min}}^{\dot{E}_{\rm b}}\dot{E}\left(\frac{dN}{d\dot{E}}\right)\,d\dot{E}. \end{eqnarray} We want to solve for four quantities: $\dot{E}_{\rm min}$, $\dot{E}_{\rm max}$, $N^\prime_0$ (or equivalently $N^{\rm X}_{\rm tot}$), and $\gamma_{\rm L}$; once these are fixed, we can infer the MSP population properties through the above equations. We note, however, that we are using this luminosity function to fit X-ray luminosities, which are integral quantities. We therefore expect to find degenerate solutions as different combinations might yield the same integral luminosities. Thus, we need four constraints or measurements. We can use the same three constraints as before. Crucially, one needs to specify a fourth parameter $\eta_{\rm X}$ to convert from spin-down luminosities to X-ray luminosities. By fixing $\eta_{\rm X}$, we implicitly fix the product $N^{\rm X}_{\rm vis}\langle\dot{E}\rangle_{\rm vis}$. As a first attempt, let us assume $\eta_{\rm X}=0.05\%$ (e.g., for $N^{\rm X}_{\rm vis}=41$ and $\langle\dot{E}\rangle_{\rm vis}=9.08\times10^{34}$~erg\,s$^{-1}$ to make the calculation consistent with the previous estimate). 
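For such a power-law luminosity function the integrals above have simple closed forms, so the implied population properties follow immediately once $\eta_{\rm X}$, $\gamma_{\rm L}$, $\dot{E}_{\rm min}$, $\dot{E}_{\rm max}$, and $N^{\rm X}_{\rm tot}$ are chosen. The Python sketch below illustrates this for one assumed, illustrative parameter combination (combinations of this kind are listed in Table~\ref{tab:pop}); the specific numbers used here are not fitted results.
\begin{verbatim}
# Population properties for a power-law spin-down luminosity function,
# dN/dEdot = A * Edot^-(gamma_L + 1); all parameter values below are
# assumed, illustrative choices (eta_X = 0.05%, gamma_L = -0.19, etc.).

def pli(a, b, index):
    """Integral of E**index dE from a to b (index != -1)."""
    return (b**(index + 1.0) - a**(index + 1.0)) / (index + 1.0)

eta_X   = 5.0e-4                       # assumed X-ray efficiency
gamma_L = -0.19                        # assumed luminosity-function slope
Edot_min, Edot_max = 1.0e31, 2.4e35    # erg/s, assumed bounds
N_tot   = 88                           # assumed total number of MSPs
Edot_b  = 7.0e30 / eta_X               # visibility threshold L_X,Chandra / eta_X

A = N_tot / pli(Edot_min, Edot_max, -(gamma_L + 1.0))   # normalization from N_tot

N_vis   = A * pli(Edot_b, Edot_max, -(gamma_L + 1.0))
N_invis = N_tot - N_vis
Edot_vis   = A * pli(Edot_b, Edot_max, -gamma_L) / N_vis
Edot_invis = A * pli(Edot_min, Edot_b, -gamma_L) / N_invis

print(f"N_vis ~ {N_vis:.0f}, N_invis ~ {N_invis:.0f}")            # ~43 and ~45
print(f"<Edot>_vis ~ {Edot_vis:.1e} erg/s")                        # ~9e34
print(f"<Edot>_invis ~ {Edot_invis:.1e} erg/s")                    # ~3e33
print(f"L_X,vis ~ {eta_X * N_vis * Edot_vis:.1e} erg/s")           # ~1.9e33
print(f"L_X,invis ~ {eta_X * N_invis * Edot_invis:.1e} erg/s")     # ~7e31
\end{verbatim}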
It is difficult to obtain the actual value for $\langle\dot{E}\rangle_{\rm vis}$, given the effect of the GC cluster potential on the $\dot{P}$ of each MSP \citep[e.g.,][]{Bogdanov08}. If we could, this would further constrain the system via Eq.~(\ref{eq:Edotvis}). \citet{Heinke06} note that while they did not detect an X-ray MSP explicitly, one X-ray source could plausibly be an MSP based on its proximity to a radio MSP position; they also note that more identifications of X-ray MSPs could be made as radio positions become available. We thus have additional constraints $N^{\rm X}_{\rm vis}\gtrsim 1$ and $N^{\rm X}_{\rm tot}\geq N^{\rm rad}_{\rm vis}$ (which may be used as checks on the consistency of the solutions we obtain). We do obtain a non-unique solution for each fixed value of $\eta_{\rm X}$. However, $\eta_{\rm X}$ is not known and the parameters that satisfy the other three constraints are quite degenerate, as expected. Table~\ref{tab:pop} indicates a number of parameter combinations that satisfy the observational constraints\footnote{A preliminary Markov-chain Monte Carlo investigation \citep{EMCEE} confirmed the degenerate nature of the free parameters (some are correlated) as well as their being quite unconstrained (reflected by asymmetrical and flat probability distributions, as well as elongated confidence contours). Best-fit values furthermore depend on the choice of priors / parameter bounds. The median values are, however, similar to those in Table~\ref{tab:pop}.}. It is clear that a different choice of $\eta_{\rm X}$ will favor a different solution that will imply a different value of $\langle\dot{E} \rangle_{\rm vis}$ (which also depends on the average moment of inertia $\langle I\rangle$, $\langle P\rangle$, and $\langle \dot{P}\rangle$). For example, a higher value of $\eta_{\rm X}$ will yield a lower value of $\langle\dot{E} \rangle_{\rm vis}$ or $\langle I\rangle$ for a given value of $N^{\rm X}_{\rm vis}$, keeping other parameters fixed. If we require $\langle\dot{E} \rangle_{\rm vis}$ to be the same as assumed in the model used to predict the pulsed emission, this may lead to unrealistic values for $\gamma_{\rm L}$, for a given $\eta_{\rm X}$. Relaxing this requirement (which may easily be done, given other parameter uncertainties) implies more suitable values for the other parameters. It is therefore clear that the system of equations is strongly coupled and the parameters are degenerate, given the lack of suitable constraints. One might try to constrain the solution space by requiring $N^{\rm X}_{\rm tot}=N^{\gamma}_{\rm tot}\approx N^{\gamma}_{\rm vis}=180^{+120}_{-100}$, the latter being the estimated total number of visible MSPs in \ter as inferred from the \textit{Fermi}-measured GeV energy flux \citep{Abdo10}. However, this estimate is quite uncertain and does not include the uncertainties in the distance (the square of which determines the $\gamma$-ray luminosity $L_\gamma$) and in the conversion efficiency of $\dot{E}_{\rm vis}$ to $L_\gamma$, so this does not seem to be a strong constraint. Likewise, we chose $N^{\gamma}_{\rm tot}=37$ when fitting the CR component, but this value is also subject to other model assumptions such as MSP geometry and gap width. 
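For completeness, we indicate schematically how an exploration of the kind mentioned in the footnote above could be set up with the emcee package. We stress that this is only an illustrative sketch and not the analysis actually performed: the Gaussian treatment of the two luminosity constraints with assumed 30\% fractional uncertainties, the flat prior ranges, the fixed $\eta_{\rm X}=0.05\%$, and the pivot of the luminosity function at $10^{34}$~erg\,s$^{-1}$ are all assumptions made purely for demonstration.
\begin{verbatim}
# Schematic (not the actual analysis) of an emcee exploration of the
# luminosity-function parameters; uncertainties, priors, the fixed eta_X
# and the pivot at 1e34 erg/s are assumptions made for illustration only.
import numpy as np
import emcee

ETA_X     = 5.0e-4                 # assumed X-ray efficiency (0.05%)
L_X_VIS   = 1.93e33                # erg/s
L_X_INVIS = 7.0e31                 # erg/s
EDOT_B    = 7.0e30 / ETA_X         # visibility threshold
PIVOT     = 1.0e34                 # erg/s, assumed pivot of dN/dEdot

def pli(a, b, index):
    return (b**(index + 1.0) - a**(index + 1.0)) / (index + 1.0)

def population(theta):
    log_emin, log_emax, gamma_L, log_n0 = theta
    emin, emax = 10.0**log_emin, 10.0**log_emax
    coeff = 10.0**log_n0 * PIVOT**(gamma_L + 1.0)   # dN/dEdot = coeff * Edot^-(gamma_L+1)
    n_vis   = coeff * pli(EDOT_B, emax, -(gamma_L + 1.0))
    n_invis = coeff * pli(emin, EDOT_B, -(gamma_L + 1.0))
    l_vis   = ETA_X * coeff * pli(EDOT_B, emax, -gamma_L)
    l_invis = ETA_X * coeff * pli(emin, EDOT_B, -gamma_L)
    return n_vis, n_invis, l_vis, l_invis

def log_prob(theta):
    log_emin, log_emax, gamma_L, log_n0 = theta
    if not (29.0 < log_emin < 32.5 and 34.5 < log_emax < 37.0
            and -0.9 < gamma_L < 0.9 and -40.0 < log_n0 < -25.0):
        return -np.inf                              # flat priors (assumed ranges)
    n_vis, n_invis, l_vis, l_invis = population(theta)
    if n_vis < 1.0 or (n_vis + n_invis) < 37.0:     # N_X,vis >~ 1 and N_X,tot >= 37
        return -np.inf
    chi2 = ((l_vis - L_X_VIS) / (0.3 * L_X_VIS))**2 \
         + ((l_invis - L_X_INVIS) / (0.3 * L_X_INVIS))**2
    return -0.5 * chi2

ndim, nwalkers = 4, 32
p0 = np.array([31.0, 36.0, 0.5, -33.5]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=1000, flat=True)
print("median (log Edot_min, log Edot_max, gamma_L, log N0):",
      np.median(samples, axis=0))
\end{verbatim}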
Finally, it seems that the last few entries in Table~\ref{tab:pop} might be the more plausible combinations in view of the independent constraints on $\gamma_{\rm L}\sim0.4 - 0.7$ and $N^{\rm X}_{\rm vis}\gtrsim 1$ (i.e., probably relatively small numbers of visible X-ray pulsars) gleaned from the analysis of \citet{Heinke06}. Thus, the uncertainty in several parameters, particularly $\eta_{\rm X}$, as well as parameter degeneracies preclude us from making definite statements about the MSP population properties. Yet, we see that there are several plausible solutions that characterize and constrain the MSP population's energetics, implying that the scenario of MSPs being responsible for the broadband SED may be justified and thus be plausible. \citet{Gonthier2018} derive a luminosity function for MSPs in 47~Tucanae through a population synthesis that fits the \textit{Fermi} spectrum, presumed to be the combined emission from all MSPs in the cluster. They find a $\gamma$-ray luminosity distribution that peaks at $\sim 10^{33}$\,erg\,s$^{-1}$, spin-down power distribution peaking at $\sim 2 \times10^{34}$\,erg\,s$^{-1}$, extending down to $\sim 10^{30}$\,erg\,s$^{-1}$, with a $\gamma$-ray efficiency around 0.1. Their peak $\dot{E}$ value is similar to the $\langle\dot{E}_{\rm vis}\rangle$ we have derived for \terp \begin{deluxetable*}{cccccccccccc} \tablecaption{Sample parameter combinations that lead to a balance of the X-ray-implied energetics.\label{tab:pop}} \tablecolumns{9} \tablehead{\colhead{$\eta_{\rm X}$} & \colhead{$\langle\dot{E}\rangle_{\rm invis}$} & \colhead{$\langle\dot{E}\rangle_{\rm vis}$} &\colhead{$\dot{E}_{\rm min}$} &\colhead{$\dot{E}_{\rm max}$} &\colhead{$\gamma_{\rm L}$} &\colhead{$N^{\rm X}_{\rm vis}$} &\colhead{$N^{\rm X}_{\rm invis}$} &\colhead{$N^{\rm X}_{\rm tot}$}} \startdata 0.05\% & $3.0\times 10^{33}$ & $9.2\times 10^{34}$ & $10^{31}$ & $2.4\times 10^{35}$ & $-0.19$ & 43 & 45 & 88\\ 0.05\% & $3.5\times 10^{32}$ & $1.8\times 10^{35}$ & $10^{29}$ & $10^{36}$ & $0.21$ & 22 & 399 & 421\\ 0.5\% & $1.2\times 10^{32}$ & $3.8\times 10^{34}$ & $10^{31}$ & $10^{36}$ & $0.50$ & 10 & 116 & 126\\ 0.5\% & $1.5\times 10^{32}$& $2.9\times 10^{34}$ & $10^{31}$ & $3.6\times 10^{35}$ & $0.40$ & 14 & 96 & 110\\ 1\% & $7.4\times 10^{31}$ & $2.5\times 10^{34}$ & $10^{31}$ & $2.0\times 10^{36}$ & $0.60$ & 8 & 95 & 103\\ 1\% & $1.3\times 10^{32}$ & $2.4\times 10^{34}$ & $3.0\times 10^{31}$ & $2.9\times 10^{36}$ & $0.64$ & 8 & 53 & 61\\ 1\% & $2.5\times 10^{32}$ & $2.0\times 10^{34}$ & $10^{32}$ & $3.0\times 10^{36}$ & $0.69$ & 10 & 27 & 37 \enddata \tablecomments{The units of the spin-down luminosities are erg\,s$^{-1}$.} \end{deluxetable*} The bulk of visible radio MSPs occur in the core of the cluster, and one expects the majority of pulsars here due to the deep potential well of the GC. However, one may expect to find a small number of MSPs farther out, depending on their birth and evolutionary history. The fact that the diffuse X-ray flux profile measured by \citet{Eger10} drops off slightly slower than the generalized King profile fit \citep[e.g.,][]{King62} to the detected X-ray point source distribution \citep{Heinke06} as well as the infrared surface brightness profile measured by \citet{Trager1995} may support this idea, and a (slowly) decreasing MSP density with radius may plausibly correlate with the observed decreasing X-ray flux profile. 
\citet{Eger10} detected non-thermal X-ray emission beyond the half-mass radius of \terp If we take the above energetics argument as plausible, this would imply possibly tens of unresolved MSPs and a handful of MSPs that are in principle visible in X-rays in this region. This would also imply that even more MSPs should be visible in X-rays in the core than in the outer reaches of the GC, but we do not have constraints on the diffuse X-ray emission at the GC centre at this stage, and source confusion in this dense region may complicate the matter. Future constraints on the central diffuse X-ray emission, the spatial distribution of the MSP population, and the average expected multiplicity and spin-down power will thus more deeply probe our hypothesis that the HESR component is due to magnetospheric, pulsed SR from pairs. \section{CONCLUSIONS} \label{sec:concl} The main focus of this paper has been twofold: to gather more data on \ter and to scrutinize ideas about the particle sources and emission processes responsible for the broadband emission spectrum we observe from this cluster. Our models postulated four spectral components (LESR, HESR, CR and IC) and attempted to constrain the MSP population's distribution of spin-down luminosity using the observed X-ray diffuse emission. We obtained new \textit{Fermi} data that we could fit using a model for the cumulative CR from a population of MSPs embedded within \terp These data also proved to be constraining for the low-energy tail of the unpulsed IC component, yielding a particle efficiency of $\eta_{\rm p}\sim3\%$, depending on the choice of several parameters, notably $\langle\dot{E}_{\rm vis}\rangle$ and $N_{\rm MSP,tot}$. We demonstrated that we could fit the radio spectral points by invoking an LESR component that might extend into the optical range. We furthermore argued that our predicted LESR flux is far below the expected thermal optical flux level. Thus, obtaining an upper limit on the non-thermal flux in the optical band would be very difficult, given the roughly $N_*\sim10^{5-6}$ point sources that have to be subtracted from an optical map of the GC. Even when performing and subtracting a King model fit to the surface brightness profile, the uncertainty on the remaining diffuse flux would be very large. However, since the BB spectrum occurs over a much narrower energy range than the LESR, there may yet be hope of detecting the latter outside of the optical range. \citet{Bednarek16} concur with our prior predictions \citep{Venter08,Venter09_GC,Kopp13} that GCs should typically have SR components that peak in the optical / ultraviolet range, but also point out the problem of the dominating radiation field produced by the large population of GC stars. They furthermore mention that quite atypical parameters (a combination of very large cluster $B$-fields and particle energies) would be needed in order to produce an observable level of X-ray flux. Lastly, it would be problematic to compare optical and X-ray brightness profiles, since the underlying source distributions have different spatial and emission characteristics. The respective telescope point spread functions also differ, compounding the problem. Also, the observed \textit{Chandra} spectrum is not well fit by a single LESR component. To solve these problems and still fit the observed data, we invoked a new component to explain the hard \textit{Chandra} spectrum: cumulative SR from pair plasma in MSP magnetospheres. 
The low-energy tail of this HESR component reproduces the spectral slope of the X-ray data quite well. We argued that the required energetics and numbers of the MSP source population needed to reproduce the detected diffuse X-ray emission are plausible, albeit not very well constrained (although X-ray efficiencies of $\eta_{\rm X}\sim1\%$ and thus $\gamma_{\rm L}\sim0.7$ and $N^{\rm X}_{\rm vis}\sim10$ may be preferable). The MSP scenario to explain the broadband SED of \ter should thus be further scrutinized by future constraints on the properties (e.g., number of visible X-ray pulsars and their average spin-down luminosity) of the MSPs embedded in this GC. For the VHE band, there were no new data available. The high-energy tail of our predicted unpulsed IC component produced a good fit to the current H.E.S.S.\ data. More data obtained by H.E.S.S.\ or new data from the CTA may better constrain the shape and cutoffs of the IC component, owing to the lower energy threshold as well as increased sensitivity of the latter. This may limit the particle minimum and maximum energies, source strength, average multiplicity, as well as the conversion efficiency of spin-down luminosity to particle power. We have modelled the pulsed SR and CR components using a magnetospheric pulsar model, while we have modelled the LESR and IC components using an independent transport and emission model. While we have attempted to apply both these codes simultaneously for consistent parameter choices, a unified approach may lead to even deeper constraints on the cluster environment and stellar members. The morphology of the structures associated with \ter differs significantly in extent and position between the different energy bands, challenging the idea that a single particle population is responsible for all spectral components. Higher-resolution images of the GC will aid in elucidating the spatial properties of the different emission structures, possibly constraining the diffusion coefficient and cluster $B$-field profile. Using \ter as a case study, we could constrain our leptonic model for broadband emission from GCs. CTA will probably detect many more VHE GCs \citep{Ndiyavala18}, while multi-wavelength data on these sources should also continue to improve in both quantity and quality. This will allow us to further scrutinize competing emission models, as well as to develop new, more complete and comprehensive ones that might explain the spatial \textit{and} spectral properties of Galactic GCs at an ever increasing level of detail. \acknowledgments We acknowledge fruitful discussions with Michael Backes, Carlo van Rensburg, Matthew Kerr, Hongjun An, Daniel Castro, Gudlaugur Johannesson, and Philippe Bruel. This work is based on the research supported wholly / in part by the National Research Foundation of South Africa (NRF; Grant Numbers 81671, 87613, 90822, 92860, 93278, and 99072). The Grantholder acknowledges that opinions, findings and conclusions or recommendations expressed in any publication generated by the NRF supported research is that of the author(s), and that the NRF accepts no liability whatsoever in this regard. A.K.H. acknowledges the support from the NASA Astrophysics Theory Program. C.V.\ and A.K.H.\ acknowledge support from the \textit{Fermi} Guest Investigator Program. The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. 
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. This work performed in part under DOE Contract DE-AC02-76SF00515. \vspace{5mm} \facilities{\textit{Fermi} Large Area Telescope, Effelsberg Radio Telescope, \textit{Chandra} X-ray Satellite, H.E.S.S.}
\section{Introduction} \label{intro} Let \((W,S)\) be a Coxeter system and denote by \(\leqslant_{\mathsf{L}}\) the left weak order on \(W\) (defined by \(y\leqslant_{\mathsf{L}} x\) if and only if \(l(xy^{-1})=l(x)-l(y)\), where \(l\) denotes length relative to~\(S\)). In~\cite{howvan:wgraphDetSets} an algorithm was given that takes as input a pair \((\mathscr{I},J)\), where \(\mathscr{I}\) is an ideal of \((W,\leqslant_{\mathsf{L}})\) and \(J\) is a subset of \(S\), and produces a graph with edges labelled by integers and vertices coloured with subsets of~\(S\). If \((\mathscr{I},J)\) is a \(W\!\)-graph ideal then the output is a \(W\!\)-graph. It was shown in~\cite{howvan:wgraphDetSets} that \(W\!\)-graphs for the Specht modules can be produced in this way. In~\cite{nguyen:wgideals2} it was shown, more generally, that \(W\!\)-graphs for the Kazhdan--Lusztig left cells that contain longest elements of standard parabolic subgroups can be obtained from \(W\!\)-graph ideals. Indeed, if \(J \subseteq S\) and \(W_J\) is the subgroup generated by~\(J\), then the left cell containing \(w_J\) (the longest element of \(W_J\)) is equal to \(\mathscr{I}w_J\), where \((\mathscr{I},J)\) is a \(W\!\)-graph ideal. Our first main result says that if \((W,S)\) is of type~\(A\) and \(\mathscr I\) is an ideal of \((W,\leqslant_{\mathsf{L}})\) then \(\mathscr{I}w_J\) is a union of Kazhdan--Lusztig left cells whenever \((\mathscr I,J)\) is a \(W\!\)-graph ideal. Note that, by \cite[Theorem 9.5]{nguyen:wgbiideals}, this is not true for types other than~\(A\), even when \((W,S)\) has rank~2. However, \cite[Theorem 5.2]{nguyen:wgbiideals} shows that, for all types, if \(\mathcal C\) is a set of left cells that is upward closed, in the sense that \(c\in\mathcal C\) and \(c'\geqslant c\) implies \(c'\in \mathcal C\), then \(\bigcup_{c\in\mathcal C}c\) is a \(W\!\)-graph ideal. Our second main result is the classification, when \((W,S)\) is of type \(A_{n-1}\), of the pairs \((w,J)\) such that \((\mathscr{I
}(w),J)\) is a \(W\!\)-graph ideal, where \(\mathscr{I}(w) = \{v\in W \mid v\leqslant_{\mathsf{L}} w\}\). These are exactly the pairs \((w,J)\) such that \(l(ws)>l(w)\) for all~\(s\in J\) and \(\mathscr{I}(w)w_J\) is a union of Kazhdan--Lusztig left cells. Furthermore, they are parametrized by the skew partitions of \(n\), and in each case the elements of \(\mathscr{I}(w)\) are parametrized by the standard tableaux associated with the corresponding basic skew diagram. Since the current work is a sequel to~\cite{nguyen:wgbiideals}, we shall freely use the notation and terminology of that paper. \section{Relationship between \textit{W}-graph ideals and Kazhdan--Lusztig left cells in type \textit{A}} \label{sec:3} A complete classification of \(W\!\)-graph ideals of finite Coxeter groups of rank \(2\) is given in Theorem~9.5 of \cite{nguyen:wgbiideals}. We shall make use of the following special case. \begin{lemma}\label{idealA2} Let \((W,S)\) be a Coxeter system of type \(A_2 = I_2(3)\), and let \(S=\{s,t\}\). Then \((\mathscr{I}\!,\,J)\) is a \(W\!\)-graph ideal if and only if one of the following alternatives is satisfied:\setitemindent{xx(viii)} \begin{itemize}[topsep=1 pt] \item[\textup{(i)}]\((\mathscr{I}\!,\,J)=(\{1\},S)\), \item[\textup{(ii)}]\((\mathscr{I}\!,\,J)=(\{1\},\emptyset)\), \item[\textup{(iii)}]\((\mathscr{I}\!,\,J)=(\{1,t,st\},\{s\})\), \item[\textup{(iv)}]\((\mathscr{I}\!,\,J)=(\{1,t\},\{s\})\), \item[\textup{(v)}]\((\mathscr{I}\!,\,J)=(\{1,s,ts\},\{t\})\), \item[\textup{(vi)}]\((\mathscr{I}\!,\,J)=(\{1,s\},\{t\})\), \item[\textup{(vii)}]\((\mathscr{I}\!,\,J)=(\{1,s,t,ts,st,tst\},\emptyset)\), \item[\textup{(viii)}]\((\mathscr{I}\!,\,J)=(\{1,s,t,ts,st\},\emptyset)\). \end{itemize} \end{lemma} \begin{remark}\label{idealA2rem} Let \((\mathscr{I}\!,
modes. To this end we consider dimensionless quantities (in geometric units) that are formed from the frequency $\omega_R$ and the damping time $\tau$. In the simplest case these dimensionless quantities are formed with the mass of the star $M$ or the radius of the star $R$. But they can also involve the radius of gyration $\hat{R}=\sqrt{I/M}$, or the reference frequencies $\omega_o= \sqrt{\frac{3Mc^2}{4{R}^3}}=\frac{c}{M}\sqrt{\frac{3}{4}C^3}$ with the compactness $C=M/R$, or $\hat \omega_o= \sqrt{\frac{3Mc^2}{4\hat{R}^3}}=\frac{c}{M}\sqrt{\frac{3}{4}\eta^3}$ with the generalized compactness $\eta= M/\hat{R} = \sqrt{M^3/I}$, etc. We then consider these quantities as functions of the compactness $C$, the generalized compactness $\eta$, etc. When the dimensionless quantities lie to good approximation on a single curve for the full set of equations of state, a universal relation is obtained, given by the best fit to this curve. We have studied a large set of combinations of dimensionless quantities and exhibit in the following a set of interesting examples for the universal relations for the quadrupole $\phi$-modes. {Here for the quadrupole $\phi$-modes, and also later for the dipole and radial $\phi$-modes, we use a fourth-order polynomial as the fit function.} \begin{figure}[h!] \centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_uni_rel_mass_omega.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_eta_uni_rel_radius_omega.eps}} \caption{Quadrupole $\phi$-mode universal relations: dimensionless frequency $M\omega_R/c$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless frequency $R\omega_R/c$ versus generalized compactness $\eta$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label2} \end{figure} We exhibit in Fig.~\ref{fig:my_label2} a simple set of universal relations for the frequency $\omega_R$ of the modes. The upper panel of the left figure exhibits the dimensionless frequency $M\omega_R/c$ versus the compactness $C=M/R$ of the star. The symbols identify the respective equation of state, while the colors green and black show the results for the massless scalar-tensor theory and general relativity, respectively. For both theories rather linear universal relations for the frequency are obtained; these lie far apart and are thus quite distinct. The lower panel of the figure shows the deviations from the best fit for all of the modes. As is already clear from the upper panels, these universal relations are very good, exhibiting a mean error of 0.1\% for general relativity and 0.5\% for the massless scalar-tensor theory. The right figure in Fig.~\ref{fig:my_label2} employs the radius $R$ instead of the mass for the scaling of the frequency; it thus shows in the upper panel the dimensionless frequency $R\omega_R/c$, but now versus the generalized compactness $\eta$, which yields slightly better universal relations (with mean errors 0.1\% and 0.4\% for general relativity and massless scalar-tensor theory, respectively) than the ordinary compactness $C$ (with respective mean errors 0.1\% and 0.5\%). Again, the relations are almost linear, well separated, and very good. \begin{figure}[h!] 
\centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_uni_rel_mass_tau.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_eta_uni_rel_radius_tau.eps}} \caption{Quadrupole $\phi$-mode universal relations: dimensionless inverse damping time $M/(c\tau )$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless inverse damping time $R/(c\tau )$ versus generalized compactness $\eta$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label3} \end{figure} Fig.~\ref{fig:my_label3} shows the corresponding results for the damping time. Thus the left figure exhibits the universal relations for the dimensionless inverse damping time $M/(c\tau )$ versus the compactness $C=M/R$, while the right figure presents the universal relations for the dimensionless inverse damping time $R/(c\tau )$ versus the generalized compactness $\eta$. We note that these universal relations for the damping time are not as good as the ones for the frequency, as seen in the lower panels, where again the deviations from the best fits are shown. Interestingly, they are now better for the massless scalar-tensor case than for general relativity. However, both theories lead to rather close relations at least in certain ranges of the (generalized) compactness, thus making these relations less useful to distinguish between the theories. \begin{figure}[h!] \centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_uni_rel_omega_scale.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_eta_uni_rel_tau_scale.eps}} \caption{Quadrupole $\phi$-mode universal relations: dimensionless frequency $\omega_R/\omega_o$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless damping time $\tau\omega_o$ versus generalized compactness $\eta$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label4} \end{figure} Among the numerous combinations of scaled frequencies and damping times tested, with mean errors displayed in the tables in the appendix, we here show another set of very good relations that we found. Fig.~\ref{fig:my_label4} exhibits on the left the universal relations for the dimensionless frequency $\omega_R/\omega_o$ versus the compactness $C=M/R$, with mean errors of 0.1\% and 0.6\% for general relativity and the massless scalar-tensor theory, respectively. Here both relations display a monotonic decrease with increasing $C$, and they differ by a factor of 2 to 3, thus leading to a good discernability of the theories. For the dimensionless scaled damping time $\tau\omega_o$ shown in the right figure versus the generalized compactness $\eta$, on the other hand, both relations are mostly very close for general relativity and the massless scalar-tensor theory. Thus, although very good (with mean errors of 0.5\% for both theories), they are not useful for distinguishing between the theories. {Up to this point, the relations involving only the dimensionless frequency discriminate between the theories better than those involving only the dimensionless damping time.} \begin{figure}[h!] 
\centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_uni_rel_mass_tau__mass_omega.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l2_eta_uni_rel_omega_tau.eps}} \caption{Quadrupole $\phi$-mode universal relations: dimensionless inverse damping time $M/(c\tau )$ versus dimensionless frequency $M\omega_R/c$ (left upper panel) and fit errors (left lower panel); dimensionless product $\omega_R \tau $ of frequency and damping time versus generalized compactness $\eta$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label5} \end{figure} The last set of universal relations selected here concerns relations involving both the frequency and the damping time. We thus show in Fig.~\ref{fig:my_label5} on the left the dimensionless inverse damping time $M/(c\tau )$ versus the dimensionless frequency $M\omega_R/c$ and on the right the dimensionless product $\omega_R \tau $ of the frequency and the damping time versus the generalized compactness $\eta$. As seen in the figures the universal relations for general relativity and massless scalar-tensor theory differ considerably, as desired, while their mean errors range from very good for the massless scalar-tensor theory (0.6\% mean error left figure and 0.3\% right figure) to average for general relativity (1.6\% left and 1.0\% right). \section{Dipole $\phi$-modes}\label{App_phi1} \subsection{Spectrum} \begin{figure}[h!] \centering \includegraphics[width=.32\textwidth, angle =-90]{plot_M_omegaR_l1_scalar_panel.eps} \includegraphics[width=.32\textwidth, angle =-90]{plot_M_omegaI_l1_scalar_panel.eps} \caption{ Frequency $\omega_R$ in kHz (left) and {damping time $\tau$ in milliseconds (right)} versus neutron star mass $M$ in $M_{\odot}$ for the dipole $\phi$-mode. {The six panels represent six equations of state, and the color red indicates the massless case with the general relativistic case in black.} } \label{fig:MR_MI_panel_l1} \end{figure} We now turn to the dipole $\phi$-modes of the neutron stars. We exhibit these $\phi$-modes for the chosen set of equations of state in Fig.~\ref{fig:MR_MI_panel_l1}, with frequency $\omega_R$ in kHz versus the neutron star mass in $M_\odot$ on the left and the damping time in milliseconds on the right. The black curves present the results for general relativity with a minimally coupled massless scalar field, and the red curves show the results for the massless scalar-tensor theory. The frequency of the dipole $\phi$-modes is always below 300 Hz, which is quite a bit lower than for the dipole F-modes obtained before \citep{Blazquez-Salcedo:2022pwc}. For the dipole $\phi$-modes general relativity leads to larger frequencies than the massless scalar-tensor theory. In general the frequency tends to increase for configurations close to the maximum mass. The damping time $\tau$ is typically less than 2 milliseconds for the general relativistic case, and the introduction of the massless scalar-tensor theory has the overall effect of increasing the damping time of the $\phi$-mode. In general the damping time tends to decrease for configurations close to the maximum mass. 
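Before turning to the universal relations for the dipole $\phi$-modes, we recall that all universal relations in this work are obtained in the same manner: the dimensionless quantities of all stellar models and equations of state of a given theory are fitted with a fourth-order polynomial in the chosen abscissa (typically the compactness or the generalized compactness), and the quality of the fit is quantified by the mean relative deviation defined in Appendix~1. A minimal Python sketch of this procedure is given below; the arrays C and y are hypothetical placeholders standing for the compactness and the dimensionless mode quantity (here filled with mock data purely so that the sketch runs stand-alone).
\begin{verbatim}
# Sketch of the universal-relation fitting procedure: a fourth-order
# polynomial fit in the compactness and the mean relative deviation used
# as the fit error.  C and y are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
C = np.linspace(0.10, 0.30, 200)                    # placeholder compactness values
y = 0.05 + 0.4 * C + 0.3 * C**2                     # placeholder dimensionless quantity
y = y * (1.0 + 0.005 * rng.standard_normal(C.size)) # small scatter mimicking EOS spread

coeffs = np.polyfit(C, y, deg=4)                    # fourth-order polynomial fit
y_fit = np.polyval(coeffs, C)

mean_error = np.mean(np.abs(1.0 - y / y_fit))       # average relative deviation
print("fit coefficients:", coeffs)
print(f"mean error: {100.0 * mean_error:.2f}%")
\end{verbatim}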
\subsection{Universal relations for the dipole $\phi$-modes}\label{App_phi2} We now address the universal relations for the dipole $\phi$-modes for the two considered theories, the massless scalar-tensor theory and general relativity with a minimally coupled scalar field. We proceed as for the quadrupole $\phi$-modes discussed in the previous section. \begin{figure}[h!] \centering \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_uni_rel_mass_omega.eps} \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_eta_uni_rel_mass_omega.eps} \caption{{Dipole $\phi$-mode universal relations: dimensionless frequency $M\omega_R/c$ (upper panels) and fit errors (lower panels) versus compactness $C=M/R$ (left panels); versus generalized compactness $\eta$ (right panels). The symbols indicate the respective equation of state, and the color green the massless case with the general relativistic case in black.} % } \label{fig:uni_rel_MOmegaR_scalar_l1_error} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_uni_rel_mass_tau.eps} \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_eta_uni_rel_mass_tau.eps} \caption{{Dipole $\phi$-mode universal relations: dimensionless inverse damping time $M/(c\tau)$ (upper panels) and fit errors (lower panels) versus compactness $C=M/R$ (left panels); versus generalized compactness $\eta$ (right panels). The symbols indicate the respective equation of state, and the color green the massless case with the general relativistic case in black.} % } \label{fig:uni_rel_MOmegaI_scalar_l1_error} \end{figure} In Figure \ref{fig:uni_rel_MOmegaR_scalar_l1_error} we show the dimensionless frequency $M\omega_R/c$ scaled with the neutron star mass $M$ versus the compactness $C$ on the left and $M\omega_R/c$ versus the generalized compactness $\eta$ on the right. The color green indicates the massless theory, and the results for general relativity are shown in black. The lower panels contain as before the associated fit errors. This simple scaling with the mass works very well for the dimensionless frequency $M\omega_R/c$--compactness $C$ relations, with low mean errors of {0.4\% and 0.3\% for general relativity and the massless theory, respectively}. {These relations are far better than the relations involving the generalized compactness with the respective mean errors of 1.8\% and 1.6\%.} Moreover, in the case of the generalized compactness the curves for the massless theory and general relativity are very close. Figure \ref{fig:uni_rel_MOmegaI_scalar_l1_error} shows the corresponding results for the damping time $\tau$, i.e., the dimensionless inverse damping time $M/(c\tau)$ is shown versus the compactness $C$ (left) and the generalized compactness $\eta$ (right). The fit reveals that the errors are larger for the damping time than for the frequency, with mean errors of 1.0\% and 1.1\% in the case of compactness and 1.8\% and 1.6\% in the case of generalized compactness. \begin{figure}[h!] \centering \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_uni_rel_omega_scale.eps} \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_uni_rel_radius_omega.eps} \caption{{Dipole $\phi$-mode universal relations: dimensionless frequencies {$\omega_R/\omega_o$} (left upper panel) and $R\omega_R/c$ (right upper panel), and their fit errors (lower panels) versus compactness $C=M/R$. 
% The symbols indicate the respective equation of state and the color green the massless case with the general relativistic case in black.} % } \label{fig:uni_rel_c_scalar_l1_error_no_log} \end{figure} Fig.~\ref{fig:uni_rel_c_scalar_l1_error_no_log} (left) shows the universal relations for the dimensionless frequency $\omega_R/\omega_o$ versus the compactness $C$. {These relations are very good and also distinct, showing mean errors of 0.4\% for GR and 0.3\% for the massless theory.} Another set of very good universal relations for the frequency $\omega_R$ is shown in Fig.~\ref{fig:uni_rel_c_scalar_l1_error_no_log} (right), where the dimensionless frequency $R\omega_R/c$ is considered as a function of the compactness $C$. {Again the mean errors are very small with 0.4\% and 0.3\% for general relativity and the massless scalar-tensor theory, respectively.} \begin{figure}[h!] \centering \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_uni_rel_tau_scale.eps} \includegraphics[width=.49\textwidth, angle =0]{Fig_l1_eta_uni_rel_radius_tau.eps} \caption{{ {Dipole $\phi$-mode universal relations: dimensionless damping time $\tau\omega_o$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); and dimensionless inverse damping time $R/(c\tau)$ versus generalized compactness $\eta$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, and the color green the massless case with the general relativistic case in black.}} % } \label{fig:uni_rel_tau_scale_l1_phi} \end{figure} When considering the damping time $\tau$, the quality of the universal relations decreases again. We exhibit in Fig.~\ref{fig:uni_rel_tau_scale_l1_phi} (left) the dimensionless damping time $\tau\omega_o$ versus the compactness $C$. Scaling with $\hat{\omega}_0$ results in a worse fit. {In Fig.~\ref{fig:uni_rel_tau_scale_l1_phi} (right) the dimensionless quantity $R/(c\tau)$ is shown versus the generalized compactness $\eta$. For this dimensionless quantity the generalized compactness yields a better relation than the compactness does. The relations shown for $\tau$ are all equally meaningful, with Fig.~\ref{fig:uni_rel_tau_scale_l1_phi} (right) showing a slightly better fit.} Further dimensionless quantities that have been tested are $R/(cC\tau)$, $R/(cC^3\tau)$, $R\omega_R/(cC)$, $R\omega_R/(cC^3)$, and $\omega_R\tau$, versus the compactness and the generalized compactness. Those versus the compactness always lead to an improvement. Relations for the dimensionless inverse damping time $M/(c\tau)$ versus the dimensionless frequency $M\omega_R/c$ have also been tested. In addition, we have also examined the relations for $\hat{R}\omega_R/c$ and $\hat{R}/(c\tau)$ with respect to the generalized compactness $\eta$ separately, but none of them provides further improvements in the errors of the relations or in the discernability between the theories. In fact, similar to what is observed in Fig.~\ref{fig:uni_rel_MOmegaI_scalar_l1_error}, when the proposed quantities are considered versus the generalized compactness $\eta$, the splitting of these relations with respect to the theories tends to diminish. \section{Radial $\phi$-modes} \subsection{Spectrum} \begin{figure}[h!] 
\centering \includegraphics[width=.32\textwidth, angle =-90]{plot_M_omegaR_l0_scalar_panel.eps} \includegraphics[width=.32\textwidth, angle =-90]{plot_M_omegaI_l0_scalar_panel.eps} \caption{ Frequency $\omega_R$ in kHz (left) and {damping time $\tau$ in milliseconds (right)} versus neutron star mass $M$ in $M_{\odot}$ for the radial $\phi$-mode. {The six panels represent six equations of state, and the color red indicates the massless case with the general relativistic case in black.} } \label{fig:MR_MI_panel_l0} \end{figure} We exhibit the sets of radial $\phi$-modes for general relativity and the massless scalar-tensor theory in Fig.~\ref{fig:MR_MI_panel_l0}. The frequency $\omega_R$ of the modes is shown in the left figure, and is located mostly in the range of 100 - 200 Hz. The damping time $\tau$ of the modes is on the order of 0.2 - 0.3 milliseconds, as seen in the right figure. Although these are not the only scalar-led modes in the spectrum of spherical perturbations, these are the modes with the highest damping time and best numerical precision in the shooting method we employ\footnote{In some cases the numerical calculations produce modes with lower frequencies (of the order of 50 Hz).}. For the less massive neutron stars the frequency and the damping time are very similar for both theories, but deviate sizeably towards the maximum mass of the stars, with the general relativistic frequency larger and the damping time smaller than their counterparts in the massless scalar-tensor theory. \begin{figure}[h!] \centering \includegraphics[width=.3\textwidth, angle =-90]{plot_compactness-RomegaR-STT.eps} \includegraphics[width=.3\textwidth, angle =-90]{plot_compactness-Rtau-STT.eps} \caption{Scaled radial $\phi$-modes for the massless scalar-tensor theory: dimensionless frequency $R\omega_R/c$ versus compactness $C=M/R$ (left); dimensionless inverse damping time $R/(c\tau )$ versus compactness $C$ (right). The colors indicate the different equations of state.} \label{fig:my_label12a} \end{figure} In Fig.~\ref{fig:my_label12a} we show the scaled frequency $R\omega_R/c$ (left) and the scaled inverse damping time $R/(c\tau )$ (right) versus the compactness $C$ for the massless scalar-tensor theory. The figure highlights that the scaled quantities are very close to each other for the different equations of state except in a region close to the respective maximum mass. At the maximum mass the instability found in the $l=0$ sector of the theory, in the fundamental fluid F-mode, sets in \citep{Blazquez-Salcedo:2020ibb}. This results in an increased sensitivity of the $l=0$ $\phi$-modes with respect to the properties of the equations of state and thus a splitting of the associated curves close to the maximum mass. Although the differences with respect to the mean values are small, this splitting becomes clearly recognizable on the scales of the figure. When evaluating the universal relations for these cases, this splitting together with the decreased density of points in this region leads to rather wiggly universal relations. To avoid giving this region too much weight we have therefore decided to fit the universal relations only in the interval where there is a good agreement between the curves, as well as a high density of points, i.e., before the splitting of the curves arises (around $M/R=0.24$), as highlighted in the figure. {Meanwhile, for the general relativistic case, we fitted the data over the entire range.} \subsection{Universal relations for the radial $\phi$-modes} \begin{figure}[h!] 
\centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_mass_omega.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_radius_omega.eps}} \caption{Radial $\phi$-mode universal relations: dimensionless frequency $M\omega_R/c$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless frequency $R\omega_R/c$ versus compactness $C$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label12} \end{figure} Analogously to the higher $l$-modes, we now address the universal relations for the radial $\phi$-modes. Fig.~\ref{fig:my_label12} shows on the left the universal relations for the dimensionless frequency $M\omega_R/c$ versus the compactness $C=M/R$ and on the right those for the dimensionless frequency $R\omega_R/c$ versus the compactness $C=M/R$. In both cases the universal relations for general relativity are excellent, yielding mean errors of only 0.04\%. The corresponding universal relations for the massless scalar-tensor theory are not nearly as good. Scaling with the mass yields a mean error of 0.9\%, {and scaling with the radius yields a mean error of 0.7\% when we fit over the entire range. A fit up to the compactness of $C=0.24$ yields a mean error of 0.03\%, which is comparable to the GR case.} \begin{figure}[h!] \centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_mass_tau.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_radius_tau.eps}} \caption{Radial $\phi$-mode universal relations: dimensionless inverse damping time $M/(c\tau )$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless inverse damping time $R/(c\tau )$ versus compactness $C$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label13} \end{figure} Similar universal relations for the damping time $\tau$ are shown in Fig.~\ref{fig:my_label13}. Again general relativity yields excellent relations with mean errors of 0.02\%. But here the relations of the massless scalar-tensor theory produce mean errors of 0.4\%. Here as well, a fit up to $M/R=0.24$ improves the universal relations of the massless case. The mean errors {become} 0.01\%, of the order of the errors in GR. \begin{figure}[h!] \centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_omega_scale.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_tau_scale.eps}} \caption{Radial $\phi$-mode universal relations: dimensionless frequency $\omega_R/\omega_o$ versus compactness $C=M/R$ (left upper panel) and fit errors (left lower panel); dimensionless damping time $\tau\omega_o$ versus compactness $C$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label14} \end{figure} \begin{figure}[h!] 
\centering \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_mass_tau__mass_omega.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{Fig_l0_uni_rel_omega_tau.eps}} \caption{Radial $\phi$-mode universal relations: dimensionless inverse damping time $M/(c\tau )$ versus dimensionless frequency $M\omega_R/c$ (left upper panel) and fit errors (left lower panel); dimensionless product $\omega_R \tau $ of frequency and damping time versus compactness $C$ (right upper panel) and fit errors (right lower panel). The symbols indicate the respective equation of state, the massless scalar-tensor case is shown in green and the general relativistic case in black.} \label{fig:my_label15} \end{figure} Fig.~\ref{fig:my_label14} exhibits the dimensionless frequency $\omega_R/\omega_o$ versus the compactness $C=M/R$ on the left, and the dimensionless damping time $\tau\omega_o$ versus the compactness $C$ on the right. As before, general relativity gives excellent universal relations. Similarly, a fit for the massless case in the {lower compactness} range provides excellent universal relations. But the discernability of the theories is not too good. When considering the dimensionless inverse damping time $M/(c\tau )$ versus the dimensionless frequency $M\omega_R/c$ the universal relations for both theories are almost identical except for the larger masses of the stars, as seen in Fig.~\ref{fig:my_label15} (left). This also holds for the dimensionless product $\omega_R \tau $ of the frequency and the damping time versus the compactness $C$, shown in Fig.~\ref{fig:my_label15} (right). As observed in all these cases, the universal relations for
general relativity are excellent, whereas the universal relations for the massless scalar-tensor theory are less good, in particular for the larger neutron star masses. {That is, the quality of the universal relations in the massless theory is comparable to the GR case only in the lower range of the compactness.} \section{Conclusions} Universal relations of neutron stars represent valuable tools to test the viability of alternative gravity theories, as well as, in the future, to put bounds on them with high-precision gravitational wave observatories. Here we have studied a particular set of such universal relations that arise in a Brans-Dicke-type massless scalar-tensor theory, and compared them with their counterparts in general relativity. This type of theory is obtained as a particular limit of an $f(R)$ theory, where general relativity provides the other limit, endowing the present study with theoretical interest. The presence of a scalar degree of freedom leads to a rich spectrum of neutron stars. The scalar field allows for the emission of monopole and dipole radiation of the stars, which would otherwise be prohibited. Moreover, not only the now propagating fluid monopole and dipole modes but also all higher multipole modes are supplemented with a new set of quasinormal modes that are dominated by the scalar field, dubbed $\phi$-modes. It is on these $\phi$-modes that we have focused the present study. In order to be able to extract universal relations for the modes, and thus demonstrate (almost) independence of the equation of state employed, we have considered a set of six realistic equations of state, covering different possible star compositions, namely plain nuclear matter, nucleon-hyperon fluids, and hybrid nuclear-quark matter. We have then tested a large variety of ways of scaling the frequency $\omega_R$ and the damping time $\tau$ to obtain dimensionless quantities (in geometric units) and of considering these as functions of other dimensionless variables like the compactness or the generalized compactness. A best fit to all the resulting points has then yielded the sought-after respective universal relations, provided the error is sufficiently small. We have presented sets of universal relations for the quadrupole $\phi$-modes, the dipole $\phi$-modes and the radial $\phi$-modes, both in the massless scalar-tensor theory and in general relativity with a minimally coupled scalar field. In all cases we have found very good universal relations with only small deviations from the best fits, but we have also obtained a number of rather unconvincing relations with large errors. Interestingly, the simple scaling with the mass works mostly quite well for these $\phi$-modes, when they are considered versus the compactness. For the potential use of such universal relations it is, however, also relevant, besides the required smallness of the errors, that the universal relations for different theories differ sufficiently to allow one to discern between them. Having now provided the $\phi$-modes and their universal relations for the limiting theories of general relativity and the massless scalar-tensor theory, we should as our next step calculate the $\phi$-modes for finite values of the scalar field mass and extract the corresponding universal relations, as previously done for the fluid modes \citep{Blazquez-Salcedo:2020ibb,Blazquez-Salcedo:2021exm,Blazquez-Salcedo:2022pwc,Blazquez-Salcedo:2022dxh}, and the current investigations could serve as a guide in this endeavour. 
Moreover, the study of the polar modes of neutron stars and their universal relations in alternative theories of gravity has just begun, and numerous interesting alternative gravities are waiting to be explored. \newpage \section*{Appendix 1: Tables for universal relations for quadrupole $\phi$-mode.}\label{Tables-quad} We here present tables for the average error of all the universal relations we tested for the quadrupole $\phi$-modes. The average error $\bar{\epsilon}$ is given by \begin{equation} \bar{\epsilon} = \frac{1}{N}\sum\limits_{k=1}^N \left|1-\frac{F_k}{F_{\mathrm{fit,}k}}\right|, \end{equation} where $N$ is the total number of points for each theory. \begin{table}[h!] \centering \caption{Average error $\bar{\epsilon}$ in \% for universal relations for quadrupole $\phi$-mode when plotting against compactness $C$ (left) and against generalized compactness $\eta$ (right).} \label{tab:average_error_l2} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $0.1$ & $0.5$\\ \hline $\boldsymbol{M/(c\tau)}$ & $1.5$ & $1.0$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.1$ & $0.6$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.3$ & $2.9$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $1.6$ & $1.0$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $2.9$ & $3.7$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.1$ & $0.5$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $0.1$ & $0.6$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $0.6$ & $1.0$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $2.4$ & $3.9$\\ \hline $\boldsymbol{R/(c\tau)}$ & $1.5$ & $1.0$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $1.5$ & $1.0$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $4.8$ & $5.4$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $1.8$ & $0.3$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $1.7$ & $0.3$\\ \hline $\boldsymbol{\omega_R\tau}$ & $1.6$ & $0.5$\\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $1.2$ & $1.7$\\ \hline $\boldsymbol{M/(c\tau)}$ & $2.1$ & $1.8$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.7$ & $0.5$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.1$ & $1.7$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $0.5$ & $0.5$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $2.1$ & $1.8$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.1$ & $0.4$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $2.7$ & $2.7$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $4.7$ & $5.2$\\ \hline $\boldsymbol{R/(c\tau)}$ & $0.9$ & $0.6$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $0.8$ & $1.1$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $2.2$ & $2.7$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $5.3$ & $6.2$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $0.8$ & $1.7$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $1.1$ & $0.4$\\ \hline $\boldsymbol{\omega_R\tau}$ & $1.0$ & $0.3$\\ \hline $\boldsymbol{\hat{R}\omega_R/c}$ & $1.2$ & $1.7$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta)}$ & $1.1$ & $1.7$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^2)}$ & $1.1$ & $1.7$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^3)}$ & $1.1$ & $1.6$\\ \hline $\boldsymbol{\hat{R}/(c\tau)}$ & $2.1$ & $1.8$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta)}$ & $2.0$ & $1.8$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^2)}$ & 
$2.0$ & $1.8$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^3)}$ & $1.8$ & $1.7$\\ \hline \end{tabular} \end{minipage} \end{table} \newpage \section*{Appendix 2: Tables for universal relations for dipole $\phi$-mode.}\label{Tables-dipole} {We here present tables for the average error of all the universal relations we tested for the dipole $\phi$-modes.} \begin{table}[h!] \centering \caption{Average error $\bar{\epsilon}$ in \% for universal relations for dipole $\phi$-mode when plotting against compactness $C$ (left) and against generalized compactness $\eta$ (right).} \label{tab:average_error_l1} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $0.4$ & $0.3$\\ \hline $\boldsymbol{M/(c\tau)}$ & $1.0$ & $1.1$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.4$ & $0.3$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.9$ & $1.7$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $1.0$ & $1.1$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $1.8$ & $1.5$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.4$ & $0.3$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $0.5$ & $0.3$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $0.9$ & $0.9$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $4.0$ & $4.5$\\ \hline $\boldsymbol{R/(c\tau)}$ & $1.0$ & $1.1$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $1.0$ & $1.1$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $1.6$ & $1.7$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $5.1$ & $5.5$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $1.5$ & $1.5$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $1.5$ & $1.5$\\ \hline $\boldsymbol{\omega_R\tau}$ & $1.2$ & $1.3$\\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{M/(c\tau)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.4$ & $0.5$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $1.2$ & $1.3$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.5$ & $0.4$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $1.1$ & $1.1$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $2.5$ & $2.6$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $4.3$ & $4.8$\\ \hline $\boldsymbol{R/(c\tau)}$ & $0.9$ & $0.9$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $1.8$ & $2.0$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $3.3$ & $3.5$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $5.3$ & $5.8$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $2.6$ & $2.6$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $1.4$ & $1.4$\\ \hline $\boldsymbol{\omega_R\tau}$ & $1.1$ & $1.1$\\ \hline $\boldsymbol{\hat{R}\omega_R/c}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^2)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^3)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}/(c\tau)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^2)}$ & $1.8$ & $1.6$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^3)}$ & $1.9$ & $1.7$\\ \hline \end{tabular} \end{minipage} \end{table} \newpage \section*{Appendix 3: Tables for universal relations for radial $\phi$-mode.}\label{Table-radial} We here present tables for the average error of all the universal relations we tested for 
the radial $\phi$-modes. \begin{table}[h!] \centering \caption{Average error $\bar{\epsilon}$ in \% for universal relations for radial $\phi$-mode when plotting against compactness $C$ (left) and against generalized compactness $\eta$ (right). {The values in brackets indicate the mean error for a fit up to $M/R=0.24$.}} \label{tab:average_error_l0} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $0.04$ & $0.9\,[0.03]$\\ \hline $\boldsymbol{M/(c\tau)}$ & $0.02$ & $0.4\,[0.01]$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.05$ & $0.7\,[0.03]$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.5$ & $1.2\,[0.7]$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $0.02$ & $0.4\,[0.01]$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $1.5$ & $1.7\,[0.7]$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.04$ & $0.7\,[0.03]$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $0.1$ & $0.7\,[0.03]$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $0.6$ & $1.4\,[0.1]$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $2.8$ & $4.9\,[0.4]$\\ \hline $\boldsymbol{R/(c\tau)}$ & $0.02$ & $0.4\,[0.01]$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $0.1$ & $0.3\,[0.02]$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $0.8$ & $1.2\,[0.1]$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $3.2$ & $5.6\,[0.5]$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $0.1$ & $2.5\,[0.07]$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $0.1$ & $1.9\,[0.07]$\\ \hline $\boldsymbol{\omega_R\tau}$ & $0.06$ & $1.1\,[0.04]$\\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{0.45\textwidth}\vspace{0pt} \begin{tabular}{|l||c|c|} \hline & \textbf{GR} & \textbf{massless}\\ \hline $\boldsymbol{M\omega_R/c}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{M/(c\tau)}$ & $1.2$ & $1.5$\\ \hline $\boldsymbol{\omega_R/\omega_\mathrm{0}}$ & $0.7$ & $1.2$\\ \hline $\boldsymbol{\omega_R/\hat{\omega}_\mathrm{0}}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{\tau\omega_\mathrm{0}}$ & $0.8$ & $0.7$\\ \hline $\boldsymbol{\tau\hat{\omega}_\mathrm{0}}$ & $1.2$ & $1.4$\\ \hline $\boldsymbol{R\omega_R/c}$ & $0.06$ & $0.7$\\ \hline $\boldsymbol{R\omega_R/(cC)}$ & $1.3$ & $1.8$\\ \hline $\boldsymbol{R\omega_R/(cC^2)}$ & $2.8$ & $3.4$\\ \hline $\boldsymbol{R\omega_R/(cC^3)}$ & $4.8$ & $6.3$\\ \hline $\boldsymbol{R/(c\tau)}$ & $0.2$ & $0.3$\\ \hline $\boldsymbol{R/(c\tau C)}$ & $1.5$ & $1.3$\\ \hline $\boldsymbol{R/(c\tau C^2)}$ & $3.1$ & $3.0$\\ \hline $\boldsymbol{R/(c\tau C^3)}$ & $5.2$ & $6.3$\\ \hline $\boldsymbol{M\tau\omega_R^2/c}$ & $1.6$ & $1.9$\\ \hline $\boldsymbol{R\tau\omega_R^2/c}$ & $0.3$ & $1.8$\\ \hline $\boldsymbol{\omega_R\tau}$ & $0.2$ & $1.0$\\ \hline $\boldsymbol{\hat{R}\omega_R/c}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta)}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^2)}$ & $1.4$ & $1.1$\\ \hline $\boldsymbol{\hat{R}\omega_R/(c\eta^3)}$ & $1.3$ & $1.2$\\ \hline $\boldsymbol{\hat{R}/(c\tau)}$ & $1.2$ & $1.4$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta)}$ & $1.2$ & $1.3$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^2)}$ & $1.2$ & $1.3$\\ \hline $\boldsymbol{\hat{R}/(c\tau\eta^3)}$ & $1.2$ & $1.1$\\ \hline \end{tabular} \end{minipage} \end{table} \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 
\section*{Author Contributions} All authors have contributed substantially to this paper and agree to be accountable for the content of the work. \section*{Acknowledgments} {We would like to gratefully acknowledge support by the DFG Research Training Group 1620 \textit{Models of Gravity}, DFG project Ku612/18-1, FCT project PTDC/FIS-AST/3041/2020, and the COST Actions CA15117 and CA16104. FSK thanks the Department of Theoretical Physics and IPARCOS of the Complutense University of Madrid for their hospitality. } \section*{Data Availability Statement} The datasets generated and analyzed can be obtained from the authors upon request. \bibliographystyle{plainnat}
\section{Introduction} All groups considered in the paper are finite, and all graphs considered are finite, undirected and simple. Let $\Gamma$ be a graph with vertex set $V$. For a vertex $v$ of $\Gamma$, denote by $\Gamma(v)$ the \emph{neighborhood} of $v$ in $\Gamma$, where two vertices are called \emph{neighbors} of each other if they are adjacent in the graph. A subset $C$ of $V$ is called a \emph{perfect code}~\cite{Kratochvil1986} in $\Gamma$ if every vertex of $\Gamma$ is at distance no more than one to exactly one vertex of $C$ (in particular, $C$ is an independent set of $\Gamma$). A subset $C$ of $V$ is said to be a \emph{total perfect code}~\cite{Zhou2016} in $\Gamma$ if every vertex of $\Gamma$ has exactly one neighbor in $C$ (in particular, $C$ induces a matching in $\Gamma$ and so $|C|$ is even). In the literature a perfect code is also called an \emph{efficient dominating set}~\cite{DS2003} or \emph{independent perfect dominating set}~\cite{Lee2001}, and a total perfect code is also called an \emph{efficient open dominating set}~\cite{HHS1998}. As a generalization of perfect and total perfect codes in a graph, the following notion was introduced in \cite{Cardoso2019} and further studied in \cite{BCGG2019,RCZ2018}. \begin{definition} \label{def:ab} Let $\Gamma$ be a graph with vertex set $V$, and let $a$ and $b$ be nonnegative integers. A nonempty proper subset $C$ of $V$ is called an $(a,b)$-\emph{regular set} in $\Gamma$ if $|\Gamma(v)\cap C|=a$ for each $v\in C$ and $|\Gamma(v)\cap C|=b$ for each $v\in V\setminus C$. An $(a,b)$-regular set is simply called a \emph{regular set} if the parameters $a$ and $b$ are not important in the context. \end{definition} In particular, a $(0,1)$-regular set in $\Gamma$ is exactly a perfect code in $\Gamma$, and a $(1,1)$-regular set in $\Gamma$ is exactly a total perfect code in $\Gamma$. It is not difficult to see that regular sets in a regular graph are precisely equitable partitions of the graph into two parts. In general, a partition $\mathcal{V} = \{V_1, V_2, \dots, V_r\}$ of the vertex set of a graph $\Gamma$ is said to be \emph{equitable} \cite[$\mathsection9.3$]{GR2001} if there exists an $r\times r$ matrix $M=(m_{ij})$ such that for any $i, j \in \{1, 2, \ldots, r\}$, every vertex in $V_i$ has exactly $m_{ij}$ neighbors in $V_j$. The matrix $M$ is called the \emph{quotient matrix}~\cite{BCGG2019} of the partition $\mathcal{V}$. If $\Gamma$ is a connected $k$-regular graph, then $M$ has all row sums equal to $k$, and so $k$ is a simple eigenvalue of $M$~\cite[Theorem 9.3.3]{GR2001}. The equitable partition $\mathcal{V}$ of $\Gamma$ is said to be \emph{$\mu$-equitable}~\cite{BCGG2019} if all eigenvalues of its quotient matrix $M$ other than $k$ are equal to $\mu$. It is shown in~\cite[Corrollary~2.3]{BCGG2019} that a non-trivial coarsening of a $\mu$-equitable partition is $\mu$-equitable. Thus it is especially important to study equitable partitions with exactly two parts, and for regular graphs such partitions are precisely regular sets in the graph. In fact, it can be verified (see also \cite{RCZ2018}) that for a connected $k$-regular graph $\Gamma$ with vertex set $V$, a nonempty proper subset $C$ of $V$ is an $(a,b)$-regular set in $\Gamma$ if and only if $\{C,V\setminus C\}$ is a $\mu$-equitable partition of $\Gamma$, where $a, b$ and $\mu$ are related by $$ a=((k-\mu)|C|+\mu|V|)/|V| $$ and $$ b=((k-\mu)|C|)/|V|. 
$$ (This can be proved using the fact that the quotient matrix of any $\mu$-equitable partition of $\Gamma$ has trace $k+\mu$ and that $|C|(k-a)=(|V|-|C|)b$ for any $(a,b)$-regular set $C$ in $\Gamma$.) In particular, any regular graph $\Gamma$ admitting an $(a,b)$-regular set must have $a-b$ as an eigenvalue (see also \cite{RCZ2018}), because all eigenvalues of the quotient matrix of any equitable partition of $\Gamma$ are also eigenvalues of $\Gamma$ (see \cite[Theorem 9.3.3]{GR2001}). This generalizes the well-known result that any regular graph admitting a perfect code should have $-1$ as an eigenvalue (see \cite[Lemma 9.3.4]{GR2001}). Perfect codes in Cayley graphs have attracted special attention \cite{Dinitz2006,Hajos1942,HXZ2018,Szab2006,SS2009} since they are generalizations of perfect codes under the Hamming and Lee metrics and are closely related to factorizations and tilings of groups. Denote by $e$ the identity element of the group under consideration. For a group $G$ and an inverse-closed subset $S$ of $G\setminus\{e\}$, the \emph{Cayley graph} $\Cay(G,S)$ of $G$ with \emph{connection set} $S$ is defined to be the graph with vertex set $G$ such that $x,y\in G$ are adjacent if and only if $yx^{-1}\in S$. It was observed in \cite{HXZ2018} that subgroups of a given group which are perfect codes in some Cayley graphs of the group are particularly interesting since they are an analogue of perfect linear codes in the classical setting of coding theory. In general, if a subset $C$ of $G$ is a (total) perfect code in some Cayley graph of $G$, then $C$ is called a \emph{(total) perfect code of $G$} \cite{HXZ2018}. Subgroups which are also perfect codes of the group were studied in \cite{HXZ2018}, and a characterization of those groups whose subgroups are all perfect codes of the group was given in \cite{MWWZ2019}. In~\cite[Theorem~2.2]{HXZ2018}, it was proved that a normal subgroup $H$ of $G$ is a perfect code of $G$ if and only if \begin{equation}\label{equ9} \text{for any $g\in G$ with $g^2\in H$, there exists $h\in H$ such that $(gh)^2=e$}, \end{equation} and that $H$ is a total perfect code of $G$ if and only if~\eqref{equ9} holds and $|H|$ is even. Generalizing the concept of perfect codes of a group, we call a subset $C$ of a group $G$ an \emph{$(a,b)$-regular set of $G$} if $C$ is an $(a,b)$-regular set in some Cayley graph of $G$. Thus a perfect code of $G$ is precisely a $(0,1)$-regular set of $G$, and a total perfect code of $G$ is precisely a $(1,1)$-regular set of $G$. In line with the study in \cite{HXZ2018}, it is nature to ask when a normal subgroup of a group is an $(a,b)$-regular set of the group. We answer this question in this paper by proving the following theorem which is the main result of the paper. \begin{theorem}\label{thm5} Let $G$ be a group and let $H$ be a non-trivial normal subgroup of $G$. Then the following statements are equivalent: \begin{enumerate} \item[{\rm(a1)}] $G$ and $H$ satisfy condition \eqref{equ9}; \item[{\rm(a2)}] $H$ is a perfect code of $G$; \item[{\rm(a3)}] $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a$ and $b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$ such that $\gcd(2,|H|-1)$ divides $a$. 
\end{enumerate} And the following statements are also equivalent: \begin{enumerate} \item[{\rm(b1)}] $G$ and $H$ satisfy condition \eqref{equ9}, and $|H|$ is even; \item[{\rm(b2)}] $H$ is a total perfect code of $G$; \item[{\rm(b3)}] $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a$ and $b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$. \end{enumerate} \end{theorem} The equivalence of (a1) and (a2) and that of (b1) and (b2) have been proved in \cite[Theorem~2.2]{HXZ2018}. So the essence of Theorem \ref{thm5} lies in that (a2) implies (a3) and (b2) implies (b3). Moreover, as will be seen in Construction \ref{con1}, based on an inverse-closed subset $S_0$ of $G\setminus\{e\}$ such that $\Cay(G,S_0)$ admits $H$ as a perfect code, we will give a construction of an inverse-closed subset $S$ of $G\setminus\{e\}$ such that $\Cay(G,S)$ admits $H$ as an $(a, b)$-regular set. A construction of an inverse-closed subset $S_0$ of $G\setminus\{e\}$ such that $\Cay(G,S_0)$ admits a given normal subgroup $H$ satisfying \eqref{equ9} as a perfect code was given in the proof of \cite[Theorem~2.2]{HXZ2018}. Combining this construction with Construction \ref{con1}, we can construct an inverse-closed subset $S$ of $G\setminus\{e\}$ such that $\Cay(G,S)$ admits $H$ as an $(a, b)$-regular set, for any non-trivial normal subgroup $H$ of $G$ satisfying \eqref{equ9} and every pair of integers $a$ and $b$ as in (a3). The condition that $H$ is normal in $G$ will be used in our proof of Theorem~\ref{thm5}. However, we do not know any example of a non-normal subgroup $H$ of a group $G$ such that the equivalence of (a1) and (a2) or that of (b1) and (b2) fails. This prompts us to ask the following question. \begin{question} \label{que1} Is it still true that (a1) and (a2) in Theorem \ref{thm5} are equivalent if the subgroup $H$ of $G$ is not normal? Is it still true that (b1) and (b2) in Theorem \ref{thm5} are equivalent if the subgroup $H$ of $G$ is not normal? \end{question} The rest of the paper is structured as follows. In the next section we will prove a lemma which will be used in the proof of Theorem \ref{thm5}. In Section~\ref{sec1}, we will establish some general results on subgroup perfect codes in Cayley graphs and prove Theorem~\ref{thm5} at the end of the section. We will conclude the paper with examples and remarks in Section~\ref{sec:remarks}. \section{A lemma} As usual, for a group $G$, denote by $\mathbb{Z}[G]$ the group ring of $G$ over $\mathbb{Z}$. For a subset $A$ of group $G$, denote \[ \overline{A}=\sum_{g\in G}\mu_A(g)g\in \mathbb{Z}[G], \] where \[ \mu_A(g)=\left\{\begin{aligned} 1,&\quad g\in A;\\ 0,&\quad g\in G\setminus A. \end{aligned} \right. \] In~\cite[Lemma~2.10]{HXZ2018}, a characterization of perfect codes and total perfect codes in Cayley graphs was given in the language of group rings. The following lemma extends this result to the general case of $(a, b)$-regular sets. \begin{lemma}\label{thm1} Let $G$ be a group, $C$ a subset of $G$, and $S$ an inverse-closed subset of $G\setminus\{e\}$. Let $a$ and $b$ be nonnegative integers. Then the following statements are equivalent: \begin{enumerate}[{\rm(a)}] \item $C$ is an $(a,b)$-regular set in $\Cay(G,S)$; \item $|Sx\cap C|=a$ for each $x\in C$ and $|Sx\cap C|=b$ for each $x\in G\setminus C$; \item $\overline{S}\cdot\overline{C}=a\,\overline{C}+b\,\overline{G\setminus C}$; \item $\overline{S}\cdot\overline{C}+(b-a)\,\overline{C}=b\,\overline{G}$. 
\end{enumerate} \end{lemma} \begin{proof} It is clear that (a) and (b) are equivalent and (c) and (d) are equivalent. Since $S$ is inverse-closed, we have \begin{align*} \overline{S}\cdot\overline{C} &=\sum_{s\in S}\sum_{c\in C}sc\\ &=\sum_{x\in G}\,\sum_{(s,c)\in S\times C, sc=x}x \\ &=\sum_{x\in G}\sum_{c\in C, xc^{-1}\in S}x\\ &=\sum_{x\in G}\sum_{c\in C, c\in S^{-1}x}x\\ &=\sum_{x\in G}|S^{-1}x\cap C|x\\ &=\sum_{x\in G}|Sx\cap C|x\\ &=\sum_{x\in C}|Sx\cap C|x+\sum_{x\in G\setminus C}|Sx\cap C|x. \end{align*} Note that (b) holds if and only if \[ \sum_{x\in C}|Sx\cap C|x=a\,\overline{C}\quad\text{and}\quad\sum_{x\in G\setminus C}|Sx\cap C|x=b\,\overline{G\setminus C}. \] It follows that (b) and (c) are equivalent. This completes the proof. \end{proof} Since a $(0,1)$-regular set is precisely a perfect code, in the special case when $(a, b)=(0,1)$, Lemma~\ref{thm1} gives rise to the following known result. \begin{corollary}\label{cor1} \emph{(\cite[Lemma~2.10]{HXZ2018})} Let $G$ be a group, $C$ a subset of $G$, and $S$ an inverse-closed subset of $G\setminus\{e\}$. Then $C$ is a perfect code in $\Cay(G,S)$ if and only if $\overline{S\cup\{e\}}\cdot\overline{C}=\overline{G}$. \end{corollary} \section{Subgroup regular sets} \label{sec1} We use the notation $\sqcup$ for the union of disjoint sets. For example, $A \sqcup B$ is the union of disjoint sets $A$ and $B$, and $\sqcup_{i=1}^n A_i$ is the union of pairwise disjoint sets $A_1, A_2, \ldots, A_n$. \begin{lemma}\label{lem1} Let $G$ be a group, $H$ a subgroup of $G$, and $S$ an inverse-closed subset of $G\setminus\{e\}$. Let $a$ and $b$ be nonnegative integers. Then $H$ is an $(a,b)$-regular set in $\Cay(G,S)$ if and only if $|S\cap H|=a$ and $\overline{S\setminus H}\cdot \overline{H}=b\,\overline{G\setminus H}$. \end{lemma} \begin{proof} According to Lemma~\ref{thm1}, $H$ is an $(a,b)$-regular set in $\Cay(G,S)$ if and only if $\overline{S}\cdot\overline{H}=a\,\overline{H}+b\,\overline{G\setminus H}$. Since $S=(S\cap H) \sqcup (S\setminus H)$ and $\overline{h}\cdot \overline{H}=\overline{H}$ for all $h\in H$, we have \begin{align*} \overline{S}\cdot\overline{H}&=(\overline{S\cap H}+\overline{S\setminus H})\cdot\overline{H}\\ &=\overline{S\cap H}\cdot\overline{H}+\overline{S\setminus H}\cdot\overline{H}=|S\cap H|\overline{H}+\overline{S\setminus H}\cdot\overline{H}. \end{align*} Thus the result follows. \end{proof} \begin{lemma}\label{lem4} Let $G$ be a group, $H$ a subgroup of $G$, and $S$ an inverse-closed subset of $G\setminus\{e\}$. Let $a$ and $b$ be nonnegative integers. Suppose that $H$ is a perfect code in some Cayley graph $\Cay(G,S_0)$ of $G$. Then $H$ is an $(a,b)$-regular set in $\Cay(G,S)$ if and only if $|S\cap H|=a$ and $\overline{S\setminus H}\cdot \overline{H}=b\,\overline{S_0}\cdot \overline{H}$. \end{lemma} \begin{proof} Since $H$ is a perfect code in $\Cay(G,S_0)$, we derive from Corollary \ref{cor1} that \[ \overline{G}=\overline{S_0\cup\{e\}}\cdot\overline{H}=\overline{S_0}\cdot\overline{H}+\overline{H}. \] Hence \[ \overline{G\setminus H}=\overline{G}-\overline{H}=\overline{S_0}\cdot\overline{H}. \] This together with Lemma~\ref{lem1} implies that $H$ is an $(a,b)$-regular set in $\Cay(G,S)$ if and only if $|S\cap H|=a$ and $\overline{S\setminus H}\cdot\overline{H}=b\,\overline{S_0}\cdot\overline{H}$. 
\end{proof} \begin{construction}\label{con1} Given a group $G$, a normal subgroup $H$ of $G$, an inverse-closed subset $K$ of $H\setminus\{e\}$, a nonnegative integer $b\leqslant|H|$ and an inverse-closed subset $S_0$ of $G\setminus\{e\}$ such that $H$ is a perfect code in $\Cay(G,S_0)$, construct a subset $S$ of $G$ as follows. Write $$ H=\{h_1,h_2,\ldots,h_d\} $$ and $$ S_0=\{s_1,s_2,\ldots,s_{2m-1},s_{2m},s_{2m+1},\ldots,s_n\}, $$ where $d=|H|$, $n=|S_0|$, $s_i^{-1}=s_{2m+1-i}$ for $i=1,2,\ldots,2m$ and $s^{-1}_j=s_j$ for $j=2m+1,2m+2,\ldots,n$. For $j=2m+1,2m+2,\ldots,n$, write \begin{equation*}\label{equ1} s_jH=\{u_{j,1},u^{-1}_{j,1},u_{j,2},{u^{-1}_{j,2}},\ldots,u_{j,\alpha_j},{u^{-1}_{j,\alpha_j}}, v_{j,1},v_{j,2},\ldots,v_{j,\beta_j}\} \end{equation*} with $|u_{j,k}|>2$ for $k=1,2,\ldots,\alpha_j$, $|v_{j,\ell}|=2$ for $\ell=1,2
,\ldots,\beta_j$ and $v_{j,1}=s_j$ (in particular, $2\alpha_j+\beta_j=d$). For $i=1,2,\ldots,b$, let \begin{equation}\label{equ3} S_i=\{s_1h_i,s_2h_i,\ldots,s_mh_i,(s_mh_i)^{-1},\ldots,(s_2h_i)^{-1},(s_1h_i)^{-1}\}. \end{equation} For $j=2m+1,\ldots,n$, let \[ T_j= \begin{cases} \big\{u_{j,1},u^{-1}_{j,1},u_{j,2},{u^{-1}_{j,2}},\ldots,u_{j,\alpha_j},{u^{-1}_{j,\alpha_j}},v_{j,1},v_{j,2},\ldots,v_{j,b-2\alpha_j}\big\} &\text{if $b>2\alpha_j$;}\\ \big\{u_{j,1},u^{-1}_{j,1},u_{j,2},{u^{-1}_{j,2}},\ldots,u_{j,\frac{b-1}{2}},u^{-1}_{j,\frac{b-1}{2}},v_{j,1}\big\} &\text{if $b\leqslant 2\alpha_j$ and $2\nmid b$;}\\ \big\{u_{j,1},u^{-1}_{j,1},u_{j,2},{u^{-1}_{j,2}},\ldots,u_{j,\frac{b}{2}},u^{-1}_{j,\frac{b}{2}}\big\} &\text{if $b\leqslant 2\alpha_j$ and $2\mid b$.} \end{cases} \] Let \begin{equation} \label{eq:SKT} S=K\cup\left(\bigcup_{i=1}^bS_i\right)\cup\left(\bigcup_{j=2m+1}^nT_j\right). \end{equation} \end{construction} \begin{theorem}\label{thm3} In the notation of Construction~$\ref{con1}$, the following hold: \begin{enumerate}[{\rm(a)}] \item $|S_i|=2m$ for $i=1,2,\ldots,b$; \item $|T_j|=b$ for $j=2m+1,2m+2\ldots,n$; \item $ S=K\sqcup\left(\bigsqcup_{i=1}^bS_i\right)\sqcup\left(\bigsqcup_{j=2m+1}^nT_j\right) $; \item $S$ is an inverse-closed subset of $G\setminus\{e\}$; \item $H$ is a $(|K|,b)$-regular set in $\Cay(G,S)$. \end{enumerate} \end{theorem} \begin{proof} It is clear from Construction~\ref{con1} that $|T_j|=b$ and $S_i$, $T_j$ and $K$ are all inverse-closed subsets of $G\setminus\{e\}$ for $i=1,2,\ldots,b$ and $j=2m+1,2m+2,\ldots,n$. Thus statement~(b) holds, and $S$ is an inverse-closed subset of $G\setminus\{e\}$, as statement~(d) asserts. Since $H$ is a perfect code in $\Cay(G,S_0)$, Corollary~\ref{cor1} implies that $S_0\cup\{e\}$ is an inverse-closed left transversal of $H$ in $G$. For $r=1,2,\dots,m$ and $i=1,2,\ldots,b$, we have \begin{equation}\label{equ8} (s_rh_i)^{-1}=h_i^{-1}s^{-1}_r\in Hs_r^{-1}=s_r^{-1}H=s_{2m+1-r}H. \end{equation} Hence the elements $s_1h_i,s_2h_i,\ldots,s_mh_i,(s_mh_i)^{-1},\ldots,(s_2h_i)^{-1},(s_1h_i)^{-1}$ are in pairwise distinct left cosets $s_1H,s_2H,\ldots,s_{2m-1}H,s_{2m}H$. As a consequence we obtain that \[ s_1h_i,s_2h_i,\ldots,s_mh_i,(s_mh_i)^{-1},\ldots,(s_2h_i)^{-1},(s_1h_i)^{-1} \] are pairwise distinct, which implies that $|S_i|=2m$, proving statement~(a). Moreover, for $i=1,2,\ldots,b$, we have \begin{equation}\label{equ2} S_i\subseteq\bigcup_{r=1}^{2m}s_rH \end{equation} and \begin{equation}\label{equ4} S_i\cap(s_rH)= \begin{cases} \big\{s_rh_i\big\}&\text{for $r=1,2,\ldots,m$;}\\ \big\{(s_{2m+1-r}h_i)^{-1}\big\}&\text{for $r=m+1,m+2,\ldots,2m$.} \end{cases} \end{equation} Note that $K\subseteq H$ and $T_j\subseteq s_jH$ for $j=2m+1,\ldots,n$. We derive from~\eqref{equ2} that \begin{equation}\label{equ5} S=K\sqcup\left(\bigcup_{i=1}^bS_i\right)\sqcup\left(\bigsqcup_{j=2m+1}^nT_j\right). 
\end{equation} Since $s_rh_i\neq s_rh_j$ for $r=1,2,\dots,n$ and distinct $i,j$ in $\{1,2,\ldots,d\}$, it follows from~\eqref{equ2} and~\eqref{equ4} that \begin{align*} S_i\cap S_j &=\left(S_i\cap\left(\bigcup_{r=1}^{2m}s_rH\right)\right)\cap\left(S_j\cap\left(\bigcup_{r=1}^{2m}s_rH\right)\right) \\ &=\bigcup_{r=1}^{2m}\Big((S_i\cap s_rH)\cap(S_j\cap s_rH)\Big) \\ &=\left(\bigcup_{r=1}^m\Big((S_i\cap s_rH)\cap(S_j\cap s_rH)\Big)\right) \cup \left(\bigcup_{r=m+1}^{2m}\Big((S_i\cap s_rH)\cap(S_j\cap s_rH)\Big)\right) \\ &=\left(\bigcup_{r=1}^m\Big(\{s_rh_i\}\cap\{s_rh_j\}\Big)\right) \cup \left(\bigcup_{r=m+1}^{2m}\Big(\{(s_{2m+1-r}h_i)^{-1}\}\cap\{(s_{2m+1-r}h_j)^{-1}\}\Big)\right)\\ &=\left(\bigcup_{r=1}^m\emptyset\right)\cup\left(\bigcup_{r=m+1}^{2m}\emptyset\right)\\ &=\emptyset. \end{align*} Thus $\bigcup_{i=1}^bS_i=\bigsqcup_{i=1}^bS_i$. This together with~\eqref{equ5} proves statement~(c). For $i=1,2,\ldots,b$, by \eqref{equ8} and the construction of $S_i$ we have \begin{align*} \overline{S_i}\cdot\overline{H}&=\left(\sum_{r=1}^ms_rh_i+\sum_{r=1}^m(s_rh_i)^{-1}\right)\cdot\overline{H}\\ &=\sum_{r=1}^m s_rh_i\overline{H}+\sum_{r=1}^m(s_rh_i)^{-1}\overline{H}\\ &=\sum_{r=1}^ms_r\overline{H}+\sum_{r=1}^ms_{2m+1-r}\overline{H}\\ &=\sum_{r=1}^{2m}s_r\overline{H}. \end{align*} Hence \begin{equation}\label{equ6} \sum_{i=1}^{b}\overline{S_i}\cdot\overline{H}=\sum_{i=1}^{b}\sum_{r=1}^{2m}s_r\overline{H}=b\sum_{r=1}^{2m}s_r\overline{H}. \end{equation} For $j=2m+1,\ldots,n$, we derive from the construction of $T_j$ that the elements of $T_j$ are all in $s_jH$, whence \[ \overline{T_j}\cdot\overline{H}=|T_j|s_j\overline{H}=bs_j\overline{H}. \] It follows that \begin{equation}\label{equ7} \sum_{j=2m+1}^n\overline{T_j}\cdot\overline{H}=\sum_{j=2m+1}^n\left(bs_j\overline{H}\right)=b\sum_{j=2m+1}^ns_j\overline{H}. \end{equation} Since $S\cap H=K$, we deduce from statement~(c) that \[ S\setminus H=\left(K\sqcup\left(\bigsqcup_{i=1}^bS_i\right)\sqcup\left(\bigsqcup_{j=2m+1}^nT_j\right)\right)\setminus H =\left(\bigsqcup_{i=1}^bS_i\right)\sqcup\left(\bigsqcup_{j=2m+1}^nT_j\right). \] This together with \eqref{equ6} and \eqref{equ7} shows that \begin{align*} \overline{S\setminus H}\cdot\overline{H}&=\left(\sum_{i=1}^b\overline{S_i}+\sum_{j=2m+1}^n\overline{T_j}\right)\overline{H}\\ &=\sum_{i=1}^b\overline{S_i}\cdot\overline{H}+\sum_{j=2m+1}^n\overline{T_j}\cdot\overline{H}\\ &=b\sum_{r=1}^{2m}s_r\overline{H}+b\sum_{j=2m+1}^{n}s_j\overline{H}\\ &=b\sum_{k=1}^{n}s_k\overline{H}\\ &=b\,\overline{S_0}\cdot\overline{H}. \end{align*} Thus, since $H$ is a perfect code in $\Cay(G,S_0)$ and $S\cap H=K$, Lemma~\ref{lem4} implies that $H$ is a $(|K|,b)$-regular set in $\Cay(G,S)$, as statement (e) asserts. This completes the proof. \end{proof} \begin{corollary}\label{thm2} Let $G$ be a group, and let $H$ be a normal subgroup of $G$. If $H$ is a perfect code of $G$, then $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a$ and $b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$ such that $\gcd(2,|H|-1)$ divides $a$. \end{corollary} \begin{proof} If $|H|$ is odd, then $H\setminus\{e\}$ is partitioned into pairs of elements that are inverses of each other, and so $H\setminus\{e\}$ has an inverse-closed subset of size $a$ for each even integer $0\leqslant a\leqslant|H|-1$. If $|H|$ is even, then there exists an involution in $H$, and so $H\setminus\{e\}$ has an inverse-closed subset of size $a$ for each integer $0\leqslant a\leqslant|H|-1$. 
Since by our assumption $H$ is a perfect code of $G$, we may take an inverse-closed subset $S_0$ of $G\setminus\{e\}$ such that $H$ is a perfect code in $\Cay(G,S_0)$. Let $a$ and $b$ be integers such that $0\leqslant a\leqslant|H|-1$, $0\leqslant b\leqslant |H|$ and $\gcd(2,|H|-1)$ divides $a$. Note that $a$ is even if $|H|$ is odd, as $\gcd(2,|H|-1)$ divides $a$. We conclude that there exists an inverse-closed subset $K$ of $H\setminus\{e\}$ with $|K|=a$. Now let $S$ be as in Construction~\ref{con1}. Then Theorem~\ref{thm3} ensures that $H$ is an $(a,b)$-regular set in $\Cay(G,S)$. This completes the proof. \end{proof} We are now in a position to prove Theorem~\ref{thm5}. \begin{proof}[Proof of Theorem~$\ref{thm5}$] The equivalence of (a1) and~(a2) has been proved in~\cite[Theorem~2.2]{HXZ2018}. Corollary~\ref{thm2} shows that (a2) implies (a3). On the other hand, (a3) implies~(a2) since perfect codes are $(0,1)$-regular sets. Thus statements (a1), (a2) and (a3) are equivalent. Again, the equivalence of (b1) and~(b2) has been proved in \cite[Theorem~2.2]{HXZ2018}. Suppose that~(b2) holds. Then (a1) holds and $|H|$ is even. By the equivalence of~(a1), (a2) and (a3) as shown above, we then infer that~(a3) holds. As $|H|$ is even, we have $\gcd(2,|H|-1)=1$. Thus~(a3) leads to~(b3). This shows that~(b2) implies~(b3). On the other hand, suppose that~(b3) holds. Then in particular $H$ is a $(1,1)$-regular set of $G$. That is, $H$ is a total perfect code of $G$. Hence~(b3) implies~(b2). So (b2) and (b3) are equivalent. Therefore, statements~(b1), (b2) and (b3) are all equivalent, completing the proof. \end{proof} \section{Examples and remarks} \label{sec:remarks} We illustrate Construction~\ref{con1} by the following example. \begin{example}\label{exm1} Let $G=\langle x,y\mid x^{10}=e,y^2=x^5,y^{-1}xy=x^{-1}\rangle$ be the generalized quaternion group of order $20$. Let $H=\langle x^2\rangle=\{e,x^2,x^{-2},x^4,x^{-4}\}$, $K=\{x^2,x^{-2}\}$ and $S_0=\{y,y^{-1},x^5\}$. Then $H$ is a normal subgroup of $G$, $K$ is an inverse-closed subset of $H \setminus \{e\}$, and by Corollary~\ref{cor1}, $H$ is a perfect code in $\Cay(G,S_0)$. Using Construction~\ref{con1}, we now construct an inverse-closed subset $S$ of $G \setminus \{e\}$ such that $H$ is a $(2, 3)$-regular set in $\Cay(G,S)$. Write $s_1=y$, $s_2=y^{-1}$ and $s_3=x^5$ so that $S_0=\{s_1,s_2,s_3\}$. By \eqref{equ3}, we have \[ S_1=\{s_1,s_1^{-1}\}=\{y,y^{-1}\}, \] \[ S_2=\{s_1x^2,(s_1x^2)^{-1}\}=\{x^8y,(x^8y)^{-1}\}, \] and \[ S_3=\{s_1x^4,(s_1x^4)^{-1}\}=\{x^6y,(x^6y)^{-1}\}. \] Since $s_3H=x^5H=xH$, we have $s_3H=\{x,x^{-1},x^3,x^{-3},x^5\}$ and $T_3=\{x,x^{-1},x^5\}$. So \eqref{eq:SKT} yields $$ S = K\cup(S_1\cup S_2\cup S_3)\cup T_3 = \{x^2,x^{-2},y,y^{-1},x^8y,(x^8y)^{-1},x^6y,(x^6y)^{-1},x,x^{-1},x^5\}. $$ Finally, by Theorem~\ref{thm3}, $S$ is an inverse-closed subset of $G \setminus \{e\}$ and $H$ is a $(2,3)$-regular set in $\Cay(G,S)$. A brute-force computational check of this example is given at the end of the paper. \qed \end{example} It may happen that a normal subgroup of a group is an $(a,b)$-regular set of the group for some (but not all) pairs of integers $a$ and $b$ as in (a3) (respectively, (b3)) of Theorem~\ref{thm5} but is not a perfect code (respectively, total perfect code) of the group. We illustrate this by the following two examples. \begin{example} Let $\Q_8=\{1,-1,i,-i,j,-j,k,-k\}$ be the quaternion group. By Lemma~\ref{lem1}, the normal subgroup $H=\langle i\rangle=\{1,-1,i,-i\}$ of $\Q_8$ is a $(1,2)$-regular set in $\Cay(\Q_8,\{-1,j,-j\})$ and a $(2,2)$-regular set in $\Cay(\Q_8,\{i,-i,j,-j\})$.
However, using \cite[Theorem~2.2]{HXZ2018}, one can verify that $H$ is not a perfect code of $\Q_8$. \end{example} \begin{example} Let $G=\langle x,y\mid x^8=e,y^2=x^4,y^{-1}xy=x^{-1}\rangle$ be the generalized quaternion group of order $16$. Then $H=\langle x^2\rangle$ is a normal subgroup of $G$. By Lemma~\ref{lem1}, we see that $H$ is a $(2,2)$-regular set in $\Cay(G,S)$, where $S = \{x^2,x^{-2},x,x^{-1},y,y^{-1},xy,(xy)^{-1}\}$. However, by~\cite[Theorem~2.2]{HXZ2018} we can show that $H$ is not a total perfect code of $G$. \end{example} \begin{remark} Let $G$ be a group and $H$ a subgroup of $G$. The following statements are immediate corollaries of Lemma~\ref{lem1}: \begin{itemize} \item[\rm (a)] For every integer $a$ with $0 \leqslant a\leqslant |H|-1$ such that $\gcd(2,|H|-1)$ divides $a$, $H$ is an $(a,|H|)$-regular set of $G$; \item[\rm (b)] if $H$ is an $(a,b)$-regular set of $G$ for some pair of integers $a$ and $b$, then it is also an $(a,|H|-b)$-regular set of $G$. \end{itemize} \end{remark} In fact, we obtain (a) by taking $S$ in Lemma~\ref{lem1} to be the union of $G \setminus H$ and any inverse-closed subset of size $a$ of $H\setminus\{e\}$. Similarly, we obtain (b) by replacing $S$ with $(S\cap H)\cup(G\setminus(S\cup H))$ in Lemma~\ref{lem1}. \smallskip \noindent\textsc{Acknowledgements.} The first author gratefully acknowledges the financial support from China Scholarship Council (No.~201806010040). The third author was supported by the National Natural Science Foundation of China (No.~61771019) and the Research Grant Support Scheme of The University of Melbourne. The third author is grateful to Peter Cameron for introducing the concept of regular sets to him and bringing \cite{BCGG2019} to his attention.
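\smallskip \noindent\textsc{A computational check of Example~\ref{exm1}.} The claim in Example~\ref{exm1} can also be confirmed by brute force. The short Python sketch below is only an illustrative aid and makes some presentational assumptions of its own: elements of $G$ are encoded as pairs $(i,j)$ standing for $x^iy^j$ with $0\leqslant i<10$ and $j\in\{0,1\}$, and all function names are ours. It checks that $S$ is inverse-closed, avoids $e$, and that every vertex of $\Cay(G,S)$ has exactly two neighbours in $H$ if it lies in $H$ and exactly three otherwise.
\begin{verbatim}
# Brute-force check of the example above: H = <x^2> is a (2,3)-regular
# set in Cay(G,S) for the generalized quaternion group of order 20.
# Elements are encoded as pairs (i, j) representing x^i y^j.

def mul(a, b):
    i1, j1 = a
    i2, j2 = b
    if j1 == 0:                      # x^i1 * x^i2 y^j2
        return ((i1 + i2) % 10, j2)
    if j2 == 0:                      # x^i1 y * x^i2 = x^(i1-i2) y
        return ((i1 - i2) % 10, 1)
    return ((i1 - i2 + 5) % 10, 0)   # uses y^2 = x^5

e = (0, 0)
G = [(i, j) for i in range(10) for j in range(2)]
H = [(i, 0) for i in range(0, 10, 2)]          # the subgroup <x^2>

def inv(a):
    return next(b for b in G if mul(a, b) == e)

S = [(2, 0), (8, 0),                           # K   = {x^2, x^-2}
     (0, 1), inv((0, 1)),                      # S_1 = {y, y^-1}
     (8, 1), inv((8, 1)),                      # S_2 = {x^8 y, (x^8 y)^-1}
     (6, 1), inv((6, 1)),                      # S_3 = {x^6 y, (x^6 y)^-1}
     (1, 0), (9, 0), (5, 0)]                   # T_3 = {x, x^-1, x^5}

assert e not in S and all(inv(s) in S for s in S)

for v in G:                                    # neighbours of v are S v
    assert sum(mul(s, v) in H for s in S) == (2 if v in H else 3)

print("H is a (2,3)-regular set in Cay(G,S)")
\end{verbatim}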
\section{Introduction} For decades, debugging has been the subject of much research work. Researchers have investigated debugging in terms of what makes it hard \cite{eisenstadt1993tales, Layman-debugging-revisited, CokerQualitative2019}, the kinds of questions developers ask \cite{Ko2007InformationNeeds, LaToza2010ReachabilityQuestions}, categories of strategies developers follow \cite{BohmeFSE-DebuggingHypotheses, katz1987debugging}, and types of debugging tools developers use \cite{Murphy2006, petrillo2019swarm}. These studies have revealed many important aspects of debugging, opening new opportunities to improve the way we teach \cite{mccauley2008debugging} and design tools for debugging. However, prior studies of debugging are limited in several important ways. Most studies of debugging have been conducted either in the lab or through the analysis of log data from instrumented development environments. Lab settings allow researchers to observe developers while debugging after receiving a defect report. However, lab settings are inherently artificial, reflecting an unfamiliar context in which developers work on unfamiliar code. While log data enables understanding debugging in naturalistic contexts, it is inherently limited in its ability to reveal developer intent. While log data can effectively answer questions about how frequently various tools are used, it is much more difficult to answer questions about developers' intent when using each tool. In this paper, we conducted the first \textit{exploratory study} of debugging episodes in a naturalistic setting. A debugging episode begins when a developer first encounters a defect while programming or receives a defect report. Throughout the episode, developers perform activities, in which they edit, browse, test, and inspect code and consult online resources. As they do so, they navigate between files, edit files, and use tools to inspect program state. A debugging episode ends when the developer fixes the defect or decides to stop. \input{tables/litTable} To understand the nature of debugging episodes, we observed developers debugging software projects of varying sizes, programming languages, practices, and domains. To curate such a diverse dataset, we used a recently available source of data: live-streamed programming videos \cite{AlaboudiVLHCC2019}. In these videos, developers work and think aloud to explain how and why they are working as they do. Researchers have found that these videos show professional developers working in real-time on open source projects \cite{AlaboudiVLHCC2019, Alaboudi2019CHASE}, broadcasting hours of development and debugging work on projects used in production. Using this data source, we curated 15 sessions (each corresponding to a separate video) in which 11 professional developers worked for 30 hours. Our analysis of this dataset yielded 89 distinct debugging episodes spanning more than 15 hours of debugging time as well as 13 hours of programming time. We analyzed the developers' activities during debugging episodes and programming work, resulting in 2137 and 1407 activities for debugging and programming, respectively. We focused on answering the following research questions: \begin{itemize} \item[\textbf{RQ1}] What is the frequency and duration of debugging episodes? \item[\textbf{RQ2}] What are the characteristics of debugging activities? \item[\textbf{RQ3}] What characteristics differentiate long from short debugging episodes?
\item[\textbf{RQ4}] How does the time spent on activities vary between debugging and programming? \end{itemize} We found that debugging episodes varied widely in duration, ranging from a few seconds to more than a hundred minutes, with a skewed distribution. Most debugging time was spent in the longest 25\% of debugging episodes. Debugging occurred frequently during programming work, on average after every eight minutes of programming. There was no single activity that consumed the majority of debugging time. Long debugging episodes involved a wide range of activities, including editing and browsing files, inspecting the program state, and consulting online resources. Short debugging episodes consisted primarily of editing and testing edits. Finally, we found that programming and debugging were remarkably similar in terms of the activities developers did, particularly in editing and browsing code. The largest differences were in how developers tested and inspected code. To support researchers in using live-streamed programming videos to study developers in real-world settings, we built the observe-dev.online platform. It includes a dataset of more than 100 hours of programming work by 33 professional developers. We discuss our platform in Section \ref{sec:Observe.dev}. \section{Related Work} Many studies have investigated debugging behavior, beginning at least as early as 1974 \cite{Gould1974} (Table \ref{tab:lit}). These studies encompass both field studies examining behavior in a naturalistic context and experiments investigating debugging in a controlled setting. Many studies have directly observed developers, while others have indirectly observed debugging through log data or self-reports made by developers. However, only five studies have directly observed developers in a field setting. These studies have examined debugging from a wide range of perspectives, including the strategies developers use, the use of the debugger, challenges developers face, and the time developers spend debugging. Developers use a variety of debugging strategies. Developers often follow a simple hypothesis testing strategy \cite{Perscheid2017, zayour2016qualitative}, speculating about the cause of a defect based on their experience with similar defects \cite{eisenstadt1993tales, BohmeFSE-DebuggingHypotheses} and clues they gather about the program behavior, state, or output \cite{gould1975some, Gugerty86}. Developers then test their hypotheses by editing their code \cite{Zeller2005, Layman-debugging-revisited} or inspecting the program's behavior and state \cite{gould1975some, Perscheid2017}. Developers use both forwards and backwards reasoning strategies. In forwards reasoning, developers follow the execution of the failing test \cite{BohmeFSE-DebuggingHypotheses}, building a mental representation of the program \cite{katz1987debugging} and inspecting program execution and state through breakpoints \cite{romero2007debugging}. In backwards reasoning \cite{gould1975some, BohmeFSE-DebuggingHypotheses,lukey1980understanding}, developers start from the incorrect output and work backwards in the execution to the defect location. Information foraging theory (IFT) models how developers navigate code \cite{Lawrance2013IFT, piorkowski2015fix, piorkowski2013whats}, including in debugging tasks. According to IFT, developers navigate between patches (e.g., methods) based on their scent (e.g., method identifiers), which offers hints directing developers to their prey (e.g., the fault location).
Developers more frequently switch between subgoals when debugging than when programming~\cite{Chattopadhyay2019}. Experienced developers are able to make use of their knowledge to comprehend code at a higher level of abstraction than novices~\cite{Vessey1985, vans1999program}. An important source of data for studying debugging is log data, collected by instrumenting programming and debugging tools to record what developers do in real-world programming work. Log data enables studying debugging at scale, enabling individual studies to examine as much as 18,000 hours of developer activity~\cite{beller2018dichotomy}. Log data has enabled extensive study of the use of debuggers in practice. Debuggers have been found to be among the most used features in modern IDEs \cite{Murphy2006, Amann2016}, which developers use at the beginning of a debugging episode \cite{Afzal2018}. Developers often avoid complex debugger features such as breakpoints \cite{Damevski2017} and prefer simpler debugging techniques such as ``printf debugging'' \cite{beller2018dichotomy}. However, log data has important limitations \cite{myers2016programmers}. In recording only developer actions without additional context, it can be difficult or impossible to reconstruct the developers' intent and determine, for example, if developer actions relate to a debugging or programming task. A number of studies have enumerated specific \textit{challenges} developers face that can make debugging difficult. An analysis of debugging ``war stories'' found the two most common causes of difficulty were inapplicable debugging tools and ``large temporal or spatial chasms between the root cause and the symptom''~\cite{eisenstadt1993tales}. Developers face difficulties reproducing defects and determining the root cause of failures~\cite{Ko2007InformationNeeds}. Modern systems' multithreaded and distributed nature can make instrumentation and testing debugging hypotheses challenging~\cite{Layman-debugging-revisited}. An API's degree of abstraction imposes unique challenges for debugging~\cite{CokerQualitative2019}. Other studies have identified specific questions developers report to be hard-to-answer or that are associated with particularly time-consuming debugging episodes~\cite{LaToza2010ReachabilityQuestions, LaToza2010Hard-to-answerCode}. Studies have also been conducted to quantify the time developers spend debugging, yielding widely varying results. Beller et al. \cite{beller2018dichotomy} observed that developers spent only 14\% of their active IDE time in the debugger. Minelli et al. \cite{Minelli2016} and Meyer et al. \cite{meyer2014software} also reported a low percentage (0.87\% and 3.9\%, respectively) of total time using the debugger. However, developers reported that they spent between 20\% and 60\% of their working time debugging, which researchers have argued to be an over-estimation~\cite{beller2018dichotomy}. Other studies have focused on evaluating the impact of various debugging aids on the productivity of developers in debugging tasks. For example, automatic debugging tools model the debugging task as a search problem to identify a defect location, shrinking the search space developers need to inspect \cite{MarkWeiser1984ProgramSlicing, DeMillo1996, Zhang2006, XiangyuZhang2003, jones2002visualization}. Studies measuring the impact of these tools on developers' debugging performance have found mixed benefits, suggesting the need for support beyond localizing the defect \cite{Parnin2011AreAutomatedDebugging, Wang2015EvaluatingIR, Alaboudi2020}.
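As a concrete illustration of this search-based view of automated debugging, the sketch below ranks statements by a Tarantula-style suspiciousness score computed from per-test coverage. It is a minimal, self-contained illustration with made-up coverage data, not a reimplementation of any of the tools evaluated in the studies above.
\begin{verbatim}
# Toy example of spectrum-based fault localization: rank statements by how
# strongly their coverage is associated with failing tests. The coverage
# data below is invented purely for illustration.
coverage = {            # statement id -> set of tests that execute it
    "s1": {"t1", "t2", "t3"},
    "s2": {"t1", "t3"},
    "s3": {"t3"},
}
failing = {"t3"}
passing = {"t1", "t2"}

def suspiciousness(stmt):
    fail_ratio = len(coverage[stmt] & failing) / len(failing)
    pass_ratio = len(coverage[stmt] & passing) / len(passing)
    return fail_ratio / (fail_ratio + pass_ratio)   # Tarantula-style score

ranking = sorted(coverage, key=suspiciousness, reverse=True)
print(ranking)   # ['s3', 's2', 's1']: s3 is executed only by the failing test
\end{verbatim}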
\input{tables/datasetTable} This paper offers the first study of debugging episodes conducted in a naturalistic setting. Our work offers new findings on the duration, frequency, and activities of debugging and the differences in activity between programming and debugging. \section{Method} To investigate debugging episodes, we observed professional developers through live-streamed programming videos. For brevity, we refer to each live-streamed programming video as a \textit{session}. We collected sessions reflecting a diverse cross-section of software projects of varying sizes, programming languages, and domains. We then developed a coding scheme to identify debugging episodes and activities within the sessions. Finally, we coded the sessions using the coding scheme. \subsection{Live-Streamed Programming Videos} Using platforms such as YouTube and Twitch, some developers have recently begun the practice of live-streaming their work, broadcasting and recording their real-time work contributing to open source software projects \cite{Faas2018}. Researchers have found that these sessions are not rehearsed and illustrate developers' moment-to-moment work contributing to real software projects using their preferred development environment \cite{AlaboudiVLHCC2019} (Figure \ref{fig:examples}). Moreover, as they explain their work to watchers, the sessions document both developers' actions and a running commentary, similar to think-aloud, describing why they are working as they do. Observational studies in software engineering have traditionally been limited by the difficulty of gaining access to software developers at work on real projects and the impossibility of sharing datasets due to confidentiality. Much as widespread access by researchers to the repositories of open source projects or questions and answers on Stack Overflow has led to the proliferation of empirical software engineering \cite{lakhani2004, Mamykina2011, singer2014Twitter, MacLeod2015, chatterjee2019exploratory}, we believe these sessions offer a similar opportunity, complementing these datasets by offering the ability to answer new questions where direct observation is required. To enable this opportunity, we have constructed an open source platform for researchers working with live-streamed programming videos, which we describe in Section \ref{sec:Observe.dev}. Sessions share a common session structure \cite{AlaboudiVLHCC2019}. Developers start the live-stream by stating a goal for the session. They then work towards this goal, reading documentation and writing, debugging, and running code. Most videos depict work on open source projects, which developers link to in their session descriptions. Developers are occasionally interrupted, either by developers watching live or by others in their physical space, mirroring the typical interruptions developers face in their day-to-day work \cite{meyer2014software, abad2018task}. After completing the session, developers may archive the video on platforms such as YouTube and Twitch, with most licensed under Creative Commons licenses. \subsection{Data Collection} To select sessions, we formulated a strict data collection methodology.
To find relevant sessions, we used YouTube search functionality with keywords such as ``open source contribution'' and ``live-stream programming'', and searched live-streamed programming communities on Reddit\footnote{https://www.reddit.com/r/WatchPeopleCode/} and GitHub\footnote{https://github.com/bnb/awesome-developer-streams} for links to sessions. In selecting videos to include, we used three criteria: \begin{itemize} \item \textit{The archived session is available}: Many developers use Twitch to host their sessions. However, Twitch only archives videos for 14 days. As we needed the ability to conduct analyses over an extended period, we excluded sessions that were not archived on a long-term basis. \item \textit{The project is open source and ready to be used by other projects or users}: We first checked that the developer's project was hosted in a public code repository. After locating the project repository, we skimmed the documentation and issue tracker, looking for evidence that it had or would have a public release for general usage. For example, we looked for dates describing a future release or a link to an executable version of the project. \item \textit{The video shows significant development work}: To ensure the video contained a meaningful length of development work, we briefly skimmed each session. We excluded videos where developers primarily spent the session time communicating with other developers through chat. When a session was shorter than two hours, we chose another session from the same developer working on the same project. \end{itemize} We sought to create a diverse sample of sessions, encompassing developers using a variety of programming languages and working in a variety of application domains. Recent field studies of debugging have observed between eight and ten developers employed by one to four different companies \cite{Perscheid2017, Chattopadhyay2019}. Based on this, we chose to observe eleven developers working on eleven distinct open-source projects in sessions of at least two hours. Our dataset includes desktop applications, command-line programs, mobile apps, web apps, games, and operating systems. Table \ref{tab:dataset} lists the 15 sessions. As a conservative estimate of developers' experience, we examined each developer's GitHub profile page and identified their first commit to an open source project. All eleven developers actively worked on open source projects for a period of 7 to 31\footnote{GitHub allows developers to migrate their open source contributions from other legacy version control systems, resulting in commits that predate GitHub.} years (median = 9 years). Some shared their current or past employer in their GitHub profiles, including Google, Microsoft, Lyft, PayPal, and Mozilla. The total duration of these sessions is 30 hours. \begin{figure} \centering \includegraphics[scale=.5]{figures/session.png} \caption{Our analysis focused on identifying debugging episodes and their activities within a session.} \label{fig:session} \end{figure} \input{tables/codingTable} \subsection{Data Analysis} To identify debugging episodes and their activities within a session (Figure \ref{fig:session}), we collected different definitions of debugging \cite{johnson1982software,ko2011state,Parnin2011AreAutomatedDebugging} to guide our identification of a debugging episode. The first author watched ten sessions iteratively and built an initial codebook that contained the definition of a debugging episode and six different activities.
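To make the unit of analysis concrete, the sketch below shows one possible representation of a coded session and how episode durations and per-activity time fractions can be derived from it. The field names and numbers are hypothetical placeholders; the authoritative definitions are those in the codebook and replication package described next.
\begin{verbatim}
# Hypothetical representation of a coded session: a list of segments
# (debugging episodes or programming work), each with timed activities.
coded_session = [
    {"type": "programming", "start": 0, "end": 480,
     "activities": [("editing", 0, 300), ("testing", 300, 480)]},
    {"type": "debugging", "start": 480, "end": 840,
     "activities": [("testing", 480, 540), ("browsing", 540, 660),
                    ("editing", 660, 780), ("testing", 780, 840)]},
]

def episode_durations(session):
    """Duration in minutes of each debugging episode."""
    return [(seg["end"] - seg["start"]) / 60
            for seg in session if seg["type"] == "debugging"]

def activity_fractions(segment):
    """Fraction of a segment's time spent in each activity."""
    total = segment["end"] - segment["start"]
    fractions = {}
    for name, start, end in segment["activities"]:
        fractions[name] = fractions.get(name, 0) + (end - start) / total
    return fractions

print(episode_durations(coded_session))      # [6.0]
print(activity_fractions(coded_session[1]))  # testing/browsing/editing ~1/3 each
\end{verbatim}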
The complete codebook is available in the replication package\footnote{https://figshare.com/s/6c4026a941db49ea3de3}. Due to space limitations, we briefly summarize our codebook here. A debugging episode starts when one of three events happens during a session. First, an error message appears in the console or program UI, or a test fails. Second, developers state verbally that the output is not what they expect. Third, developers start reproducing a defect reported in the issue tracker. We marked such an event as the start of a debugging episode only if developers went on to address the defect. A debugging episode ends when one of two events occurs. First, the error disappears or the program produces the expected output. Second, developers verbally state that they have stopped debugging. We considered the rest of the session that does not include debugging episodes as programming work. To avoid coding irrelevant work such as breaks or social chatting, we excluded these segments of the session and marked them as irrelevant. We coded activities for both programming work and debugging episodes. These activities captured the types of work a developer did during a segment of time. These included five code-focused activities as well as an ``other'' activity containing non-code-focused work, such as interacting with the development environment, checking the version control system, and writing notes. Table \ref{tab:codingBook} summarizes the activities and the characteristics we coded for each. After constructing the initial codebook, the two authors iteratively coded sessions, discussing disagreements after each iteration and revising the codebook. Instead of coding a single session, we chose several representative sessions with differing codebases, programming languages, and development tools. Through these iterations, the two authors coded 20 distinct episodes and 166 activities. The last iteration yielded a Cohen's Kappa inter-rater agreement \cite{Richard1977} of 75\% for episodes and 84\% for activities, reflecting substantial and almost perfect agreement, respectively. Using the final codebook, the first author then coded the entire dataset using observe-dev.online (Section \ref{sec:Observe.dev}). The entire dataset, including the videos and codes, is publicly available\footnote{https://bit.ly/3kkbL2W}. \section{Results} We coded 1407 activities in 13 hours of programming work and 2137 activities in 15 hours of debugging work, including 89 distinct debugging episodes. The average number of activities per debugging episode was 24 ($\pm$37)\footnote{$\pm$ is used to report standard deviation.}. Figure \ref{fig:debuggingAndprogramming} shows debugging episodes and programming work in the 15 sessions. Our exploratory analysis revealed observations regarding the frequency and duration of debugging episodes (RQ1), activities of debugging episodes (RQ2), differences in activities between long and short episodes (RQ3), and differences in activities between debugging episodes and programming work (RQ4). \begin{figure} \centering \includegraphics[scale=.28,trim= 7.5cm 3cm 6.5cm 6cm,clip]{figures/debuggingAnnotations.pdf} \caption{ Debugging episodes and programming in the 15 sessions. } \label{fig:debuggingAndprogramming} \end{figure} \begin{figure} \centering \includegraphics[scale = .12, trim= 0cm 5cm 0cm 15cm,clip]{figures/episodesTime.png} \caption{Debugging episodes were sorted from shortest to longest and then grouped in tens.
The majority of the total time came from a small number of episodes. } \label{fig:episodesTime} \end{figure} \subsection{RQ1: Frequency and Duration of Debugging Episodes} Overall, debugging episodes constituted 13\%-95\% (avg = 48\%, sd = $\pm$24\%) of session time. This is considerably higher than estimates derived from log analysis of debugger usage (14\%). However, it is closer to developers' self-reported estimates (20\%-60\%) \cite{beller2018dichotomy}. We grouped debugging episodes into two categories based on the defect source. The first category contains episodes related to reported defects. Ten debugging episodes started because five developers (D1, D2, D3, D5, and D8) worked towards debugging reported defects in the issue tracker. These episodes consumed most of the session time (79$\pm$16\%). This may not be a surprising observation since developers dedicated the entire session to these reported defects. The second category concerns episodes related to inserted defects. In the remaining 79 debugging episodes, developers (D3, D4, D6-D11) started debugging because they inserted defects while implementing new features. Although developers' goal was to add new features, they spent on average 40\% ($\pm$12\%) of the session time on debugging. Developers spent an average of 51 ($\pm$38) minutes debugging each reported defect. In contrast, developers spent 6 ($\pm$8) minutes debugging defects they had just inserted themselves. One may conclude that due to differences in length, debugging episodes concerning reported defects were more problematic than those concerning inserted defects. Although debugging episodes concerning inserted defects were generally short, they were also frequent. While programming, developers constantly debugged new defects, on average after 8 ($\pm$10) minutes of programming. These debugging episodes also were not always short; 20\% lasted for tens of minutes (18$\pm$9). Not all debugging episodes concluded with a successful fix, suggesting that some might have been longer if the developers had continued. Half of the debugging episodes triggered by reported defects did not end with a successful fix, either because more information was needed to reproduce the defect or the developer deferred debugging until later. For new defects created while programming, 14\% did not conclude with a successful fix, either because the developer had higher priority tasks (e.g., shipping the feature even if it is not completely correct) or deferred the work until later. Overall, the distribution of debugging episode durations was skewed, with 80\% of the total episode time stemming from 25 episodes (Figure \ref{fig:episodesTime}). \input{tables/overallTable} \input{tables/charaTable} \begin{figure}[ht] \begin{subfigure}[b]{1\linewidth} \centering \includegraphics[scale=.29, trim = 2cm 1cm 0cm 0cm]{figures/debuggingActivites_2.png} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \centering \includegraphics[scale=.21, trim = 3cm 1cm 0cm 0cm]{figures/legends.png} \end{subfigure}
\caption{The fraction of time spent in each activity per debugging episode.} \label{fig:episodesActiviries} \end{figure} \subsection{RQ2: Activities in Debugging Episodes} We found that debugging episodes were diverse in their activities: there was no single activity that dominated all debugging episodes. Instead, the time developers spent on each activity varied widely between debugging episodes (Figure \ref{fig:episodesActiviries}). Table \ref{tab:summeryActivities} summarizes the occupancy, occurrence, and other key characteristics of each activity. The \textit{browsing a file of code} activity occurred, on average, 7 ($\pm$14) times per episode, occupying 12\% ($\pm$12\%) of the episode time. Developers browsed 3 ($\pm$5) distinct files per episode (max=32 files). 67\% ($\pm$37\%) of the files that developers browsed during a debugging episode were later edited during the same episode. After finishing browsing a file of code, developers were most likely to visit another file to browse (40\%) or edit (24\%). The \textit{editing a file of code} activity occurred an average of 7 ($\pm$10) times during debugging episodes, for an average of 40\% ($\pm$21\%) of episode time. Developers edited 2 ($\pm$2) files of code in each episode (max=13). When developers edited only a single file, they usually did not complete their edit in one time segment. Developers instead switched back and forth an average of three times between editing that file and other debugging activities. Developers were most likely to next test their edit (58\%) or inspect program state (12\%). Developers engaged in an average of 6 ($\pm$6) \textit{testing program} activities per debugging episode. They spent 34\% ($\pm$22\%) of episode time on this activity. Developers most often ran and observed the program output manually (84\%) rather than through automated tests (16\%). Developers were likely to next either edit a file of code (56\%) or browse a file of code (33\%). \textit{Inspecting program state} occurred, on average, twice ($\pm$14) per debugging episode, for 8\% ($\pm$14\%) of debugging episode time. To inspect the program state, developers used a combination of log statements (70\%) and breakpoints (30\%). Developers spent 53 ($\pm$105) seconds each time they inspected program state, the longest instance duration of any activity. Developers were likely to edit (65\%) or browse a file of code (24\%) after inspecting the program state. Developers engaged in an average of one ($\pm$2) \textit{consulting external resources} activity per debugging episode. This activity was the least common, occurring in only 21\% of debugging episodes. However, 91\% of developers consulted external resources at least once. When consulting external resources, developers primarily searched for an explanation of an API (92\%) rather than an explanation of defect behavior (e.g., an error message) (8\%). The most common information source was API documentation (83\%). Developers also sought existing code examples (9\%) and relevant posts in Q\&A communities (19\%). After consulting external resources, developers often edited (39\%) or browsed (35\%) a file of code. The \textit{other} activity occupied on average 4\% ($\pm$8\%) of the debugging episode time. Within this activity, the mechanics of interacting with the development environment to search for keywords inside files and to navigate between files and folders constituted 74\% of what developers did.
Developers also browsed issue trackers (15\%) and took notes (3\%) as part of the other activity. We also examined the order of debugging activities within debugging episodes. We found that activities were not confined to a single point within debugging episodes. For example, developers browsed files of code anywhere from the start to the end of debugging episodes. However, there were \textit{peak} occurrences for most debugging activities when they were most common~(Table \ref{tab:summeryActivities}). Testing was most common at the beginning and end of debugging. One explanation is that developers test when beginning to debug to understand the defective behavior and at the end of debugging to confirm that the program produces the output they expect. Browsing a file of code was more common during the first half of debugging episodes. This might be because developers needed to collect relevant code or localize the defect before engaging in other activities. Editing source code was widely distributed across the debugging episode, peaking in the middle of episodes. Consulting external resources exhibited a strong peak in the middle of debugging. This may correspond to the point when developers first try to fix a defect but get stuck and decide to seek additional information elsewhere. Another explanation is that developers may start implementing a fix and later consult help to understand how to implement the fix. Inspecting program state and other activities were widely spread across debugging episodes. \begin{figure*}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=.5, trim= 0cm 1.5cm 0cm .5cm,clip]{figures/FirstInstanceOcc.pdf} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=.5, trim= 0cm 1.5cm 1.5cm .5cm,clip]{figures/InstancesActivitiesDuration.pdf} \end{subfigure} \caption{The distribution of the time within an episode when an activity first occurs, in log scale (left). The distribution of the duration of activities, in log scale (right). The thick black lines indicate the mean.} \label{fig:longVsShort} \end{figure*} \subsection{RQ3: Activities in Long and Short Debugging Episodes} To investigate the differences between long and short debugging episodes, we calculated the 75th and 25th percentiles of debugging episode durations. We marked any debugging episode that lasted 12.3 minutes or more (\textgreater= 75th percentile) as \textit{long} and any episode with a duration of less than a minute (\textless= 25th percentile) as \textit{short}. This resulted in 23 episodes in each group. The 23 short debugging episodes (31$\pm$17 seconds) were from eight developers (D3-D9, D11) and constituted only 1\% of overall debugging time. In contrast, the 23 long debugging episodes (32$\pm$23 minutes) were from ten developers (D1-D5, D7-D11) and constituted 80\% of all debugging time. The threshold we defined for long debugging episodes is close to the 15-minute threshold at which professional developers report they begin to perceive debugging as difficult \cite{BohmeFSE-DebuggingHypotheses}. As long debugging episodes were longer in duration, they involved many times more activities (62$\pm$56) than short episodes (4$\pm$2). Therefore, we focus on examining the fraction of each episode's time spent in each activity.
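As a side note, the percentile-based grouping described above is straightforward to reproduce; the following minimal sketch (with hypothetical durations, not our analysis code) illustrates the computation:
\begin{verbatim}
# Minimal sketch of the 25th/75th-percentile grouping described above.
# The durations below are hypothetical and for illustration only.
import numpy as np

durations_min = np.array([0.4, 0.8, 2.5, 5.0, 9.1, 12.3, 18.0, 35.2])

q25, q75 = np.percentile(durations_min, [25, 75])
short_episodes = durations_min[durations_min <= q25]  # <= 25th percentile
long_episodes = durations_min[durations_min >= q75]   # >= 75th percentile

print(f"25th percentile: {q25:.1f} min, 75th percentile: {q75:.1f} min")
print("short:", short_episodes, "long:", long_episodes)
\end{verbatim}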
\begin{figure}[ht] \begin{subfigure}[b]{1\linewidth} \centering \includegraphics[scale=.13,trim=9cm 4cm 10cm 3cm,clip]{figures/episodes.pdf} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \centering \includegraphics[scale=.21, trim = 3cm 1cm 0cm 0cm]{figures/legends.png} \end{subfigure} \caption{Developer activity in short and long debugging episodes. The circle widths signify the fraction of time spent on each activity. The lines (with the color of the corresponding activity) describe the fraction of transitions to the next activity (indicated by line width). Transition percentages are labeled for the two most common transitions.} \label{fig:shortVsLongActivity} \end{figure} We observed two key differences that distinguish long debugging episodes from short episodes. First, long debugging episodes involved a diverse set of activities, with developers switching between different types of activities (Figure \ref{fig:shortVsLongActivity}). Second, developers spent more time on each activity instance in long debugging episodes. They were also slower in making their first edit to the source code (Figure \ref{fig:longVsShort}). Developers engaged in a more diverse set of activities in long debugging episodes. Short debugging episodes mostly focused on editing and testing, which occupied 88\% of their time. Developers edited an average of 1 ($\pm$0.2) files very early in the episode (after 6 ($\pm$8) seconds) and spent only 11 ($\pm$11) seconds on each edit to a file. Developers tested their program an average of 2 ($\pm$1) times for an average of 8 ($\pm$8) seconds each time. Editing a file of code and testing program activities constituted only 54\% of the total debugging time in the long debugging episodes. However, developers edited more files than in the short debugging episodes. On average, developers edited three files ($\pm$3, max = 13) and spent on average 41 ($\pm$75) seconds each time they opened a file to edit it. Developers were also slower to make their first edit in long debugging episodes. Developers spent 3.2 ($\pm$6, max = 30) minutes before introducing any changes to the source code. We found that developers tested their program more in longer debugging episodes. On average, developers tested their program 12 ($\pm$8) times for an average of 26 ($\pm$31) seconds each time. Developers spent more time (17\%) browsing a file of code in long debugging episodes (Figure \ref{fig:shortVsLongActivity}). This may be related to two factors. First, developers in long debugging episodes browsed more files (7$\pm$7, max = 32) than in the short debugging episodes (0.2$\pm$0.8, max = 3). Second, developers spent more time each time they browsed a file. In long debugging episodes, developers spent an average of 17 ($\pm$24) seconds each time they browsed a file compared to an average of 8 ($\pm$4) seconds in short debugging episodes. The amount of time spent inspecting program state showed the largest change, increasing from 1.3\% in short episodes to 18\% in long debugging episodes. In short debugging episodes, the inspecting program state activity was not followed by another activity (i.e., there was no arrow from inspecting program state to any other activity). However, in long debugging episodes, switches between editing a file of code and inspecting program state were the second most common transition. In long debugging episodes, developers also spent more time per instance on inspecting program state than on any other activity (1.2$\pm$2 mins, max=3 mins).
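The transition fractions reported above and visualized in Figure \ref{fig:shortVsLongActivity} can be derived from the coded activity sequences. The following minimal sketch (with hypothetical activity labels; not our exact analysis pipeline) shows one way to compute them:
\begin{verbatim}
# Minimal sketch: fraction of transitions from each activity to the next,
# computed from a coded activity sequence (hypothetical labels).
from collections import Counter, defaultdict

sequence = ["edit", "test", "edit", "inspect", "edit", "test", "browse", "edit"]

pair_counts = Counter(zip(sequence, sequence[1:]))
totals = defaultdict(int)
for (src, _dst), n in pair_counts.items():
    totals[src] += n

for (src, dst), n in sorted(pair_counts.items()):
    print(f"{src} -> {dst}: {n / totals[src]:.0%}")
\end{verbatim}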
Consulting external resources and other activities were very rare in short debugging episodes, with only one instance in our dataset. However, consulting external resources and other activities were common in long debugging episodes (48\% and 83\%, respectively) and constituted 3\% and 7\% of the time. Overall, we found that the differences between short and long debugging episodes were due to developers switching between different activities and spending more time each time they did. No activity emerged as a singular bottleneck that accounted for the majority of the time in long debugging episodes. \begin{figure*}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=.5, trim= 0cm 1.5cm 0cm .5cm,clip]{figures/InstancesActivitiesDurationProVsDebg.pdf} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=.43, trim= 0cm .5cm 0cm 0cm,clip]{figures/Activities_occupancy.pdf} \end{subfigure} \caption{The distribution of activity duration for programming and debugging work, in log scale (left). The thick black line indicates the mean. The percentage of debugging and programming time by activity (right). } \label{fig:programVsDebug} \end{figure*} \subsection{RQ4: Activities in Debugging Episodes and Programming Work} To investigate how debugging episodes vary from programming work, we compared the 1407 activities in the 13 hours of programming to the 2137 activities in the 15 hours of debugging. We found that debugging and programming were broadly similar in the fraction of time developers spent browsing and editing code files and consulting external resources. There were more pronounced differences in the time developers spent testing and inspecting code. Figure \ref{fig:programVsDebug} plots the overall differences and similarities between programming and debugging activities. Three activities were most similar between debugging and programming work. Developers spent 16\% of their debugging time browsing a file of code, compared to 13\% of their programming time. Developers spent 14 ($\pm$21) seconds browsing a file of code while debugging compared to 15 ($\pm$22) seconds while programming. The editing a file of code activity constituted 38\% of debugging time and 36\% of programming time. Developers spent 33 ($\pm$63) seconds and 44 ($\pm$78) seconds each time they edited a file of code in debugging and programming work, respectively. Developers spent similar fractions of their debugging (4\%) and programming (5\%) work consulting external resources, with similar durations for debugging (35$\pm$42 seconds) and programming (39$\pm$52 seconds). Programming and debugging work were less similar in the time spent testing and inspecting program state, mostly because developers did these more often when debugging. When debugging, developers switched to testing twice as often and to inspecting the program state six times more often than when programming. Developers were more likely to test their changes (58\%) or inspect the program state (12\%) after finishing editing a file of code while debugging. However, developers' switching behavior was different in programming work. After editing a file of code, developers often sought to edit another file of code (25\%) or test their changes (36\%). Besides the differences in switching behavior, the time developers spent on each instance of the inspecting program state activity was longer in debugging work (53s$\pm$105s) than in programming (28s$\pm$32s).
However, the duration of testing activities in debugging (22s$\pm$26s) and programming (21s$\pm$22s) was similar. In programming and debugging work, developers spent 61\% and 74\% of the other activity time, respectively, interacting with their development environment: navigating file systems, installing libraries, setting up their IDE, and opening other tools and software. Browsing the issue tracker was the second most common type of other activity. In programming work, developers used the issue tracker three times more often than in debugging. The differences suggest that debugging is a more code-focused activity, while programming work involves more work related to visiting the issue tracker, writing notes, and setting up the development environment. \section{Limitations and threats to validity} Our study has several important limitations. \textbf{Construct Validity.} Defining exactly when debugging episodes and activities start and end is challenging and potentially susceptible to human error. To minimize the risk of incorrect codes, we collected past definitions of debugging \cite{johnson1982software,ko2011state,Parnin2011AreAutomatedDebugging} and used these to create initial definitions. As we built the initial coding scheme, we refined the definitions until the two authors were able to independently and consistently annotate the start and end of video segments within one to two seconds. \textbf{Internal Validity.} It has long been known that developers are frequently interrupted in programming work \cite{meyer2014software, abad2018task}, increasing the time needed to finish tasks. To ensure our measures of debugging activities did not include any irrelevant work caused by interruptions, we coded any interruption that lasted more than five seconds and excluded it from the debugging and programming work. We defined the five-second threshold after we observed that interruptions that lasted less than that did not cause the developers to pause and switch context. \textbf{External Validity.} Researchers have found that live-streamed programming is a source of data that shows developers working on open source projects. However, these videos may also contain a non-trivial number of interruptions caused by other developers watching, which may not represent how developers typically work. To mitigate this issue, we used strict inclusion criteria where a video has to show significant work and few interactions with the developers watching. We also used other criteria to ensure that these videos showed developers working on nontrivial projects that have been used in production. Another potential threat to external validity is how representative the sample of developers we observed is. We sampled developers with a wide range of expertise levels, ranging from seven to 31 years of committing to open source code. However, this measure is only a proxy of developers' overall years of experience, which is itself a proxy of expertise. \section{Observe-Dev.online platform} \label{sec:Observe.dev} Live-streamed programming offers an important opportunity for researchers to observe professional developers in a natural setting. Conducting studies with similar settings would otherwise require researchers to conduct field studies and record developers' screens and voices during the work. Live-streamed programming is an alternative that requires no such effort and provides public access to both the source code and the recording.
As we used this rich source of information to explore debugging, we believe that researchers can use these videos to explore debugging further or to observe developers while investigating other research areas related to programming behavior. For instance, researchers may investigate the use of online resources during debugging and programming work, the challenges developers face working in specific programming languages, or how developers make design decisions. We built a platform that supports software engineering researchers in organizing videos, collaboratively analyzing and annotating videos, and sharing datasets both publicly and privately. Figure \ref{fig:observe.dev} depicts the platform interface for annotating a live-streamed programming video. Observe-Dev.online offers four key features for supporting the use of live-streamed programming videos in software engineering research. First, the platform offers \textit{a dataset of programming sessions}. Identifying live-streamed programming videos can be time-consuming, particularly when identifying videos with specific characteristics (e.g., working on a data analysis script in Python). Therefore, Observe-Dev.online includes a default dataset of more than 100 hours of programming sessions that are publicly available for use\footnote{https://bit.ly/3qWdMVA}. Each session is labeled with metadata describing the programming languages, projects, and development environments used. Researchers can use this to filter sessions to match inclusion criteria. Second, Observe-Dev.online offers the \textit{ability to annotate video segments}. Researchers can create new codes and annotate specific video segments with these codes. Segments may vary in duration from one second to the entirety of the video time. After applying codes to segments, the tool offers a mini-timeline visualization of the codes, enabling researchers to see where codes are located at a glance and to quickly navigate to specific code locations. For example, Figure \ref{fig:observe.dev} shows three codes for programming, debugging, and irrelevant episodes, each shown in a unique color. Third, researchers can \textit{share annotated datasets}. Observe-Dev.online is a web-based platform, enabling datasets to be publicly or privately shared for viewing or editing through a URL and optional authentication. Finally, the platform supports \textit{exporting annotations} to a standard JSON format that can be imported into other tools for further analysis. \begin{figure} \centering \includegraphics[scale=.33, trim= 0cm 1cm 10cm 0cm]{figures/observe_dev.pdf} \caption{Observe-Dev.online supports collaborative qualitative analysis of live-streamed programming videos. Programming videos (A) are shown with an interactive timeline view (B) of video segment annotations that supports navigating between segments. New codes can be edited (C) or created based on the current position in the video. } \label{fig:observe.dev} \end{figure} \section{Discussion} Our observation of debugging episodes offered insight into the nature of debugging in a naturalistic context. Developers spent about half of their programming time debugging, suggesting that debugging remains a core and time-consuming part of programming. Moreover, we found that debugging episodes are surprisingly frequent, with developers debugging, on average, after every eight minutes of programming work. Analyzing the activities that occurred within debugging episodes revealed that episodes were diverse in their activities.
Developers may browse files of code more in specific episodes and consult external resources more in others. Also, activities in longer debugging episodes were more diverse than in the shortest debugging episodes. Developers were mostly testing and editing their code in the shortest debugging episodes, while developers in longer debugging episodes had to browse and edit many files, inspect program state, and consult external resources. Another surprising result emerged after investigating differences between debugging and programming activities: there were similarities between how developers browsed and navigated code in debugging episodes and programming work. There were also differences in how developers inspected and tested program behavior. We found that developers' behavior in debugging is surprisingly varied. Further research is needed to understand the reasons for these differences. Investigating these differences in developer behavior might lead to new theories explaining what causes developers to spend more of their time browsing files of code, consulting external resources, or testing while debugging. The surprising similarities between debugging and programming behavior merit further investigation. Traditional debugging studies view debugging mostly as a fault localization activity \cite{wong2016survey}. However, our results suggest that developers browse and edit files when both programming and debugging. We also found that the differences stem from how they test and inspect the program. Future studies investigating debugging and programming behavior may help understand what debugging is and how it differs from regular programming work. Our results indicate that developers engaged in a variety of activities in longer debugging episodes. Debugging tools should aim to reduce the potential overhead of switching between many activities. We propose two types of debugging tools. The first type of tool is \textit{a specialized debugging environment} in which the environment helps developers easily switch between different types of activities. One example of such an environment is the live programming environment. In a live programming environment, developers do not need to switch to another window or program to edit, inspect, or test their program while debugging. The program is continuously evaluated while the developer changes the source code, revealing the program state and the resulting output for each source code change. The second type of tool is \textit{an aggregating and summarizing debugging tool}. Instead of developers switching between many activities, these tools predict the files developers may need, the resources they may consult, and the program state they may inspect, and present them to developers. These tools differ from existing tools that capture and maintain task context \cite{kersten2005mylar} in that they aim to predict and aggregate relevant information both within the IDE and on the internet.
\section{Introduction} In this paper, we consider the system of two heat equations with coupled nonlinear Neumann boundary conditions, namely \begin{equation}\label{c1}\left. \begin{array}{lll} u_t= \Delta u,& v_t=\Delta v,& (x,t)\in B_R \times (0,T),\\ \frac{\partial u}{\partial \eta} =e^{v^p}, & \frac{\partial v}{\partial \eta}=e^{u^q},& (x,t)\in \partial B_R \times (0,T),\\ u(x,0)=u_0(x),& v(x,0)=v_0(x),& x \in {B}_R, \end{array} \right\} \end{equation} where $p,q>1,$ $B_R$ is a ball in $R^n,$ $\eta$ is the outward normal, and $u_0,v_0$ are smooth, radially symmetric, nonzero, nonnegative functions satisfying the condition \begin{equation}\label{Mo}\Delta u_0,\Delta v_0\ge 0,\quad u_{0r}(|x|),v_{0r}(|x|)\ge 0,\quad x \in \overline{B}_R. \end{equation} The problem of a system of two heat equations with nonlinear Neumann boundary conditions defined in a ball, \begin{equation}\label{3}\left. \begin{array}{lll} u_t= \Delta u,& v_t=\Delta v,& (x,t)\in B_R \times (0,T),\\ \frac{\partial u}{\partial \eta} =f(v), & \frac{\partial v}{\partial \eta}=g(u),& (x,t)\in \partial B_R \times (0,T),\\ u(x,0)=u_0(x),& v(x,0)=v_0(x),& x \in {B}_R, \end{array} \right\} \end{equation} was introduced in \cite{19,7,20,18}. For instance, the blow-up of solutions to the system (\ref{3}) was studied in \cite{19}, where \begin{equation}\label{c11} f(v) =v^p, \quad g(u) =u^q, \quad p,q>1. \end{equation} It was proved that, for any nonzero, nonnegative initial data $(u_0,v_0),$ finite time blow-up can only occur on the boundary. Moreover, it was shown in \cite{20} that the blow-up rate estimates take the following form: $$c\le \max_{x\in\overline{\Omega}}u(x,t)(T-t)^{\frac{p+1}{2(pq-1)} }\le C,\quad t\in(0,T),$$ $$c\le\max_{x\in\overline{\Omega}}v(x,t)(T-t)^{\frac{q+1}{2(pq-1)} }\le C,\quad t\in(0,T).$$ In \cite{7,18}, the solutions of the system (\ref{3}) were considered with exponential Neumann boundary conditions, namely \begin{equation}\label{c12} f(v) =e^{pv}, \quad g(u) =e^{qu},\quad p,q>0. \end{equation} It was proved that, for any nonzero, nonnegative initial data $(u_0,v_0),$ the solution blows up in finite time and the blow-up occurs only on the boundary; moreover, the blow-up rate estimates take the following form: \begin{equation*} C_1\le e^{qu(R,t)}(T-t)^{1/2} \le C_2, \quad C_3\le e^{pv(R,t)}(T-t)^{1/2} \le C_4. \end{equation*} In this paper, we prove that the upper blow-up rate estimates for problem (\ref{c1}) take the following form: \begin{align*} \max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0<t<T, \\ \max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0<t<T, \end{align*} where $\alpha=\frac{p+1}{pq-1},\beta=\frac{q+1}{pq-1}.$ Moreover, the blow-up occurs only on the boundary. \section{Preliminaries} The local existence and uniqueness of classical solutions to problem (\ref{c1}) is well known \cite{47}. On the other hand, every nontrivial solution blows up simultaneously in finite time, due to the known blow-up results for problem (\ref{3}) with (\ref{c11}) and the comparison principle \cite{47}. In the following lemma we collect some properties of the classical solutions of problem (\ref{c1}). For simplicity we write $u(r,t)=u(x,t).$ \begin{lemma}\label{c} Let $(u,v)$ be the unique classical solution of (\ref{c1}). Then \begin{enumerate}[\rm(i)] \item $u, v$ are positive and radial.
Moreover, $u_r ,v_r \ge 0$ in $[0,R]\times (0,T).$ \item $u_t,v_t>0$ in $\overline B_R\times (0,T).$ \end{enumerate} \end{lemma} \section{Rate Estimates} In order to study the upper blow-up rate estimates for problem (\ref{c1}), we need to recall some results from \cite{23,20}. \begin{lemma}{\bf \cite{20}}\label{d} Let $A(t)$ and $B(t)$ be positive $C^1$ functions on $[0,T)$ satisfying $$A^{'} (t)\ge c \frac{B^p(t)}{\sqrt{T-t}},\quad B^{'} (t)\ge c \frac{A^q(t)}{\sqrt{T-t}} \quad \mbox{for} \quad t \in [0,T),$$ $$A(t) \longrightarrow + \infty \quad \mbox{or} \quad B(t) \longrightarrow + \infty \quad \mbox{as} \quad t \longrightarrow T^{-},$$ where $p,q>0,c>0 $ and $pq>1.$ Then there exists $C>0$ such that $$A(t) \le C(T-t)^{-\alpha /2}, \quad B(t) \le C(T-t)^{-\beta /2}, \quad t \in[0,T),$$ where $\alpha=\frac{p+1}{pq-1},\beta=\frac{q+1}{pq-1}.$ \end{lemma} \begin{lemma}\label{pk}{\bf \cite{23}} Let $x \in \overline{B}_R.$ If $0\le a <n-1$, then there exists $C>0$ such that $$\int_{S_R} \frac{ds_y}{|x-y|^a }\le C.$$ \end{lemma} \begin{theorem}\label{Jump}{\bf (Jump relation, \cite{23})} Let $\Gamma(x,t)$ be the fundamental solution of the heat equation, namely \begin{equation}\label{op}\Gamma(x,t)=\frac{1}{(4\pi t)^{(n/2)}}\exp[-\frac {|x|^2}{4t}] .\end{equation} Let $\varphi$ be a continuous function on $S_R \times[0,T].$ Then for any $x \in {B}_R,x^0 \in S_R,0<t_1<t_2\le T,$ for some $T>0,$ the function $$U(x,t)=\int_{t_1}^{t_2} \int_{S_R} \Gamma (x-y,t-\tau)\varphi(y,\tau)ds_yd\tau $$ satisfies the jump relation $$\frac{\partial}{\partial \eta}U(x,t) \rightarrow -\frac{1}{2}\varphi(x^0,t)+\frac{\partial}{\partial \eta}U(x^0,t), \quad \mbox{as}\quad x \rightarrow x^0.$$ \end{theorem} \begin{theorem}\label{theorem d} Let $(u,v)$ be a solution of (\ref{c1}) which blows up in finite time $T$. Then there exist positive constants $C_1,C_2$ such that \begin{align*} \max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0< t<T, \\ \max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0< t<T.
\end{align*} \end{theorem} \begin{proof} We follow the idea of \cite{20} and define the functions $M$ and $M_b$ as follows: $$M(t)=\max_{\overline{B}_R}u(x,t), \quad \mbox{and}\quad M_b(t)=\max_{S_R}u(x,t).$$ Similarly, $$N(t)=\max_{ \overline{B}_R}v(x,t), \quad \mbox{and}\quad N_b(t)=\max_{S_R}v(x,t).$$ By Lemma \ref{c}, both $M$ and $M_b$ are monotone increasing functions, and since $u$ is a solution of the heat equation, it cannot attain an interior maximum without being constant; therefore, $$M(t)=M_b(t),\quad \mbox{and similarly}\quad N(t)=N_b(t).$$ Moreover, since $u,v$ blow up simultaneously, we have \begin{equation}\label{Nobal} M(t)\longrightarrow +\infty, \quad N(t)\longrightarrow +\infty \quad \mbox{as} \quad t\longrightarrow T^{-}.\end{equation} As in \cite{22,20}, for $0<z_1<t<T $ and $ x\in B_R,$ using the second Green's identity with the Green function $$G(x,y;z_1,t)=\Gamma(x-y,t-z_1),$$ where $\Gamma$ is defined in (\ref{op}), the integral equation of problem (\ref{c1}) with respect to $u$ can be written as follows: \begin{eqnarray*}u(x,t)&=&\int_{B_R} \Gamma (x-y,t-z_1)u(y,z_1)dy+\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)} \Gamma (x-y,t-\tau)ds_y d\tau\\ &&-\int_{z_1}^t \int_{S_R} u(y,\tau ) \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )ds_y d \tau.\end{eqnarray*} As in \cite{22}, letting $x\rightarrow S_R$ and using the jump relation (Theorem \ref{Jump}) for the third term on the right hand side of the last equation, it follows that \begin{eqnarray*}\frac{1}{2}u(x,t)&=&\int_{B_R} \Gamma (x-y,t-{z_1})u(y,z_1)dy+\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)} \Gamma (x-y,t-\tau)ds_y d\tau\\ &&-\int_{z_1}^t \int_{S_R} u(y,\tau ) \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )ds_y d \tau,\end{eqnarray*} for $x\in S_R, 0<z_1<t<T.$ By Lemma \ref{c}, $u,v$ are positive and radial. Thus
\begin{eqnarray*}&&\int_{B_R} \Gamma (x-y,t-z_1)u(y,z_1)dy >0,\\ &&\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)} \Gamma (x-y,t-\tau)ds_y d\tau=\int_{z_1}^t e^{v^p(R,\tau)}[\int_{S_R}\Gamma (x-y,t-\tau)ds_y]d\tau. \end{eqnarray*} This leads to \begin{eqnarray*}\frac{1}{2}M(t) &\ge& \int_{z_1}^t e^{N^p(\tau)}[\int_{S_R}\Gamma (x-y,t-\tau)ds_y]d\tau\\ &&-\int_{z_1}^t M(\tau)[\int_{S_R} | \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )| ds_y] d \tau, \quad x\in S_R, 0<z_1<t<T.\end{eqnarray*} It is known (see \cite{23}) that there exists $C_0>0$ such that $\Gamma$ satisfies $$|\frac{\partial \Gamma}{\partial \eta_{y}}(x-y,t-\tau)|\le \frac{C_0}{(t-\tau )^\mu}\cdot \frac{1}{|x-y|^{(n+1-2\mu- \sigma ) }},\quad x,y \in S_R, ~\sigma\in(0,1).$$ Choose $1-\frac{\sigma}{2} < \mu <1$; then from Lemma \ref{pk} there exists $C^*>0$ such that $$\int_{S_R}\frac{ds_y}{|x-y|^{(n+1-2\mu- \sigma ) }} <C^*.$$ Moreover, for $0<t_1<t_2$ with $t_1$ close to $t_2,$ there exists $c>0$ such that $$\int_{S_R} \Gamma (x-y,t_2-t_1)ds_y \ge \frac{c}{\sqrt{t_2-t_1}}.$$ Thus $$\frac{1}{2} M(t) \ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{t-\tau}}d\tau-C\int_{z_1}^t \frac{M(\tau)}{|t-\tau|^{\mu}}d \tau.$$ Since $M(t_0) \le M(t)$ for $0<z_1< t_0< t <T,$ the last inequality becomes \begin{equation*}\label{d8} \frac{1}{2}M(t)\ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{T-\tau}}d\tau-C^*_1 M(t)|T-z_1|^{1-\mu}. \end{equation*} Similarly, for $0<z_2< t <T,$ we have \begin{equation*} \frac{1}{2}N(t)\ge c \int_{z_2}^t \frac{e^{M^q(\tau)}}{\sqrt{T-\tau}}d\tau-C^*_2 N(t)|T-z_2|^{1-\mu}. \end{equation*} Taking $z_1,z_2$ so that $$C^*_1|T-z_1|^{1-\mu}\le 1/2,\quad C^*_2|T-z_2|^{1-\mu}\le 1/2,$$ it follows that \begin{equation}\label{mol} M(t)\ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{T-\tau}}d\tau, \quad N(t)\ge c \int_{z_2}^t \frac{e^{M^q(\tau)}}{\sqrt{T-\tau}}d\tau. \end{equation} Since both $M$ and $N$ are increasing functions, by (\ref{Nobal}) we can find $T^*$ in $(0,T)$ such that $$M(t)\ge q^{\frac{1}{(q-1)}},\quad N(t)\ge p^{\frac{1}{(p-1)}}, \quad \mbox{for}\quad T^*\le t<T.$$ Thus $$e^{M^q(t)}\ge e^{qM(t)}, \quad e^{N^p(t)}\ge e^{pN(t)},\quad T^*\le t<T.$$ Therefore, if we choose $z_1,z_2$ in $(T^*,T),$ then (\ref{mol}) becomes $$e^{M(t)}\ge c \int_{z_1}^t \frac{e^{pN(\tau)}}{\sqrt{T-\tau}}d\tau\equiv I_1(t), \quad e^{N(t)}\ge c \int_{z_2}^t \frac{e^{qM(\tau)}}{\sqrt{T-\tau}}d\tau\equiv I_2(t).$$ Clearly, $$I_1^{'}(t)=c\frac{e^{pN(t)}}{\sqrt{T-t}}\ge \frac{cI_2^p}{\sqrt{T-t}},\quad I_2^{'}(t)=c\frac{e^{qM(t)}}{\sqrt{T-t}}\ge \frac{cI_1^q}{\sqrt{T-t}}.$$ By Lemma \ref{d}, it follows that \begin{equation}\label{ef} I_1(t)\le \frac{C}{(T-t)^{\frac{\alpha}{2}}},\quad I_2(t)\le \frac{C}{(T-t)^{\frac{\beta}{2}}},\quad t \in (\max\{z_1,z_2\},T).\end{equation} On the other hand, for $t^*=2t-T$ (assuming that $t$ is close to $T$), we have
$$I_1(t)\ge c\int_{t^*}^t \frac{e^{pN(\tau)}}{\sqrt{T-\tau}}d\tau\ge c e^{pN(t^*)} \int_{2t-T}^t \frac{1}{\sqrt{T-\tau}}d\tau=2c(\sqrt{2}-1)\sqrt{T-t} e^{pN(t^*)}.$$ Combining the last inequality with (\ref{ef}) yields $$e^{N(t^*)} \le \frac{C}{2c(\sqrt{2}-1)(T-t)^{\frac{p+1}{2p(pq-1)}+\frac{1}{2p} } }= \frac{2^{\frac{q+1}{2(pq-1)}}C}{2c(\sqrt{2}-1)(T-t^*)^{\frac{q+1}{2(pq-1)} } },$$ where we used $\frac{p+1}{2p(pq-1)}+\frac{1}{2p}=\frac{p+1+pq-1}{2p(pq-1)}=\frac{q+1}{2(pq-1)}$ and $T-t=\frac{T-t^*}{2}.$ Thus, there exists a constant $c_1>0$ such that \begin{equation*} e^{N(t^*)} (T-t^*)^{\frac{q+1}{2(pq-1)}} \le c_1.\end{equation*} In the same way we can show that \begin{equation*}e^{M(t^*)} (T-t^*)^{\frac{p+1}{2(pq-1)}}\le c_2.\end{equation*} This implies that there exist $C_1,C_2>0$ such that \begin{align}\label{yas} \max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0< t<T, \\ \label{sad}\max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0< t<T.\end{align} \end{proof} \section{Blow-up Set} In order to show that the blow-up for problem (\ref{c1}) occurs only on the boundary, we need to recall the following lemma from \cite{18}. \begin{lemma}\label{power} Let $w$ be a continuous function on the domain $\overline{B}_R\times[0,T)$ satisfying \begin{equation*} \left.\begin{array}{ll} w_t=\Delta w,&\quad (x,t) \in B_R \times (0,T),\\ w(x,t) \le \frac{C}{(T-t)^m},& \quad (x,t) \in S_R \times(0,T),\quad m>0. \end{array} \right\} \end{equation*} Then for any $0<a<R$, $$\sup\{ w(x,t): 0 \le|x|\le a,~0\le t < T \} < \infty.$$ \end{lemma} \begin{proof} Set $$h(x)=(R^2-r^2)^2 ,~ r=|x|,$$ $$z(x,t)=\frac{C_1}{[h(x)+C_2(T-t)]^m}.$$ We can show that \begin{eqnarray*}\Delta h-\frac{(m+1)|\nabla h|^2}{h}&=& 8r^2-4n(R^2-r^2)-(m+1)16r^2\\ &\ge&-4nR^2-16R^2(m+1),\\ z_t-\Delta z&=&\frac{C_1m}{[h(x)+C_2(T-t)]^{m+1}}(C_2+\Delta h-\frac{(m+1)|\nabla h|^2}{h+C_2(T-t)})\\ &\ge& \frac{C_1m}{[h(x)+C_2(T-t)]^{m+1}}(C_2-4nR^2-16R^2(m+1)).\end{eqnarray*} Let $$C_2=4nR^2+16R^2(m+1)+1$$ and take $C_1$ large enough that $$z(x,0)\ge w(x,0),\quad x\in B_R.$$ Let also $C_1\ge C(C_2)^m,$ which implies that $$z(x,t)\ge w(x,t) \quad \mbox{on}\quad S_R \times [0,T).$$ Then from the maximum principle \cite{21}, it follows that $$z(x,t)\ge w(x,t),\quad (x,t) \in \overline{B}_R \times (0,T),$$ and hence $$\sup\{w(x,t): 0\le|x|\le a,0\le t <T\} \le C_1(R^2-a^2)^{-2m} < \infty, \quad 0\le a <R.$$ \end{proof} \begin{theorem} Let the assumptions of Theorem \ref{theorem d} be in force. Then $(u,v)$ blows up only on the boundary. \end{theorem} \begin{proof} Using (\ref{yas}) and (\ref{sad}) together with the elementary inequality $\log s\le s$ for $s>0$, we have $$u(R,t) \le \frac{c_1}{(T-t)^{\frac{\alpha}{2}}}, \quad v(R,t) \le \frac{c_2}{(T-t)^{\frac{\beta}{2}}}, \quad t \in (0,T).$$ From Lemma \ref{power}, it follows that $$\sup\{u(x,t): (x,t)\in B_a\times [0,T)\}\le C_1(R^2-a^2)^{-\alpha} <\infty,$$ $$\sup\{v(x,t): (x,t)\in B_a\times [0,T)\}\le C_1(R^2-a^2)^{-\beta} <\infty,$$ for $a<R.$ Therefore, $u,v$ blow up simultaneously and the blow-up occurs only on the boundary. \end{proof}
\section{Introduction} Transverse momentum dependent parton distributions (TMDs), as an important extension to the usual Feynman parton distributions, have attracted much attention in hadronic physics from both experiment and theory sides. Various hadronic processes have been used and proposed to study these distributions~\cite{ Cahn:1978se,Collins:1984kg,Sivers:1989cc,Efremov:1992pe,Collins:1992kk,Collins:1993kq,Kotzinian:1994dv,Mulders:1995dh,Boer:1997nt,Boer:1997mf,Boer:1999mm,Brodsky:2002cx,Collins:2002kn,Belitsky:2002sm,Bacchetta:2004zf,Cherednikov:2007tw,D'Alesio:2007jt,Barone:2001sp,Goeke:2005hb,Bacchetta:2006tn,Vogelsang:2005cs}. Together with the generalized parton distributions (GPDs) (for reviews, see~\cite{Goeke:2001tz,Diehl:2003ny,Ji:2004gf,Belitsky:2005qn,Boffi:2007yc}), TMDs shall lead us to a comprehensive picture of parton distributions inside the nucleon, in particular, in a three-dimension fashion. Phenomenologically, in order to extract these distribution functions from experiments, we have to ensure that the QCD factorization applies in the associated processes. These issues have been extensively discussed in the last few years, and the relevant factorization theorem has been built up for a number of semi-inclusive processes, such as semi-inclusive hadron production in deep inelastic scattering and low transverse momentum Drell-Yan lepton pair production in hadronic collisions~\cite{Collins:1981uk,Ji:2004wu,Collins:2004nx}. In the last few years, there has also been a remarkable experimental progress on experimental measurements (see Ref.~\cite{D'Alesio:2007jt} and references therein). More importantly, the proposed future experiments shall provide more constraints on these distribution functions. Meanwhile, reasonable model calculations of these transverse momentum dependent parton distributions have been proposed~\cite{Jakob:1997wg,Efremov:2002qh,Efremov:2003eq,Yuan:2003wk,Pobylitsa:2003ty,Efremov:2004qs,Efremov:2004tp,Burkardt:2003uw,Collins:2005ie,Collins:2005rq,Efremov:2006qm,Kotzinian:2006dw,Brodsky:2006hj,Pasquini:2006iv,Meissner:2007rx,Anselmino:2007fs,Anselmino:2008jk,Gamberg:2007wm,Gamberg:2003ey,Bacchetta:2007wc,Avakian:2007mv,Avakian:2007xa,Pasquini:2008ax,Bacchetta:2008af,Courtoy:2008vi,Courtoy:2008dn,Courtoy:2009pc,Avakian:2008dz,Anselmino:2008sga,Arnold:2008ap,Efremov:2009ze,She:2009jq,Meissner:2009ww,Gamberg:2009uk,Bianconi:2006yq}. These calculations promoted our understanding of the nucleon structure, and have been playing very important role as a first step to describe the experimental observations of the associated phenomena. In particular, these models provide us an intuitive way to connect the physical observables and the key input for the nucleon structure model, such as the quark spin and orbital angular momentum contributions to the proton spin. Transverse momentum dependent quark distributions are defined through the following quark-density matrix \begin{equation} {\cal M}(x,\boldsymbol k_\perp)=\int\frac{d\xi^-d^2\boldsymbol \xi_\perp}{(2\pi)^3}e^{-ik\cdot \xi} \langle PS|\bar\psi(\xi){\cal L}^\dagger_\xi{\cal L}_0\psi(0)|PS\rangle \ , \end{equation} where $x$ and $\boldsymbol k_\perp$ are the longitudinal momentum fraction and transverse momentum carried by the quark, respectively. Nucleon's momentum $P$ is dominated by the plus component $P^+=(P^0+P^z)/\sqrt{2}$, and $S$ represents the polarization vector. 
In the above equation, the gauge link ${\cal L}$ is very important: it retains gauge invariance and leads to nonzero naive-time-reversal-odd (T-odd) quark distributions. Among the eight leading-order quark TMDs, six are naive-time-reversal even (T-even), whereas the remaining two are T-odd distributions. One is the so-called quark Sivers function, which describes the quark transverse momentum distribution correlated with the transverse polarization vector of the nucleon. The other is the so-called Boer-Mulders function, usually interpreted as the correlation between the quark transverse momentum and the quark transverse polarization. Both quark distributions contribute to azimuthal asymmetries in hadronic reaction processes. In Ref.~\cite{Pasquini:2008ax}, we have calculated the T-even quark distributions in a light-cone quark model, extending previous works on the parton distribution functions (PDFs)~\cite{Pasquini:2006iv}, the GPDs~\cite{Boffi:2002yy,Boffi:2003yj,Pasquini:2005dk,Pasquini:2007xz}, nucleon form factors~\cite{Pasquini:2007iz} and distribution amplitudes~\cite{Pasquini:2009ki}. Such a model, based on the light-cone wave-function (LCWF) overlap representation, is able to capture the relevant information on the three-quark contribution to different observables. These calculations are well suited to illustrate the relevance of the different orbital angular momentum components of the nucleon wave function, and provide an intuitive picture for the physical meaning of the quark TMDs. Moreover, they can be regarded as initial input for phenomenological studies of the semi-inclusive processes where quark TMDs play a very important role~\cite{Boffi:2009sh}. In this paper, we extend these works to the T-odd quark distributions. The unique feature of the latter distributions is the final/initial-state interaction effects. Without these effects, the T-odd parton distributions would vanish. In the model calculation, these interactions are calculated by taking into account the one-gluon exchange mechanism between the struck quark and the nucleon spectators described by (real) LCWFs. This approach is complementary to a recent work~\cite{Brodsky:2010vs} where the rescattering effects are incorporated in augmented LCWFs containing an imaginary phase which depends on the choice of advanced or retarded boundary condition for the gauge potential in the light-cone gauge. Recently, there have also been interesting studies going beyond the one-gluon exchange approximation by resumming contributions to all orders~\cite{Gamberg:2009uk,Burkardt:2003uw}. The rest of the paper is organized as follows. In Sec.~II, we briefly introduce the light-cone quark model, explaining its physical content and giving results for the light-cone wave-function amplitudes describing the different orbital angular momentum components of the nucleon state. In Sec.~III, we derive the quark Sivers function. We present a general formalism in terms of overlaps of light-cone wave-function amplitudes, and then apply it to a specific light-cone quark model wave function. The corresponding formalism for the Boer-Mulders function is described in Sec.~IV. The model results for the T-odd distributions are presented in Sec.~V and compared to different phenomenological parametrizations. Finally, we conclude with a section summarizing our findings.
\section{Light-Cone amplitudes in a constituent quark model} \label{sect:lcwf} The wave-function amplitudes in light-cone quantization for the three-quark Fock state of the nucleon have been studied extensively in the literature~\cite{Brodsky:1997de}. According to the total quark orbital angular momentum projection, these wave-function amplitudes are classified into $l_z=0$, $l_z=1$, $l_z=2$, $l_z=-1$ components for total spin $+1/2$ of the nucleon, i.e., \begin{eqnarray} |P\uparrow\rangle_{uud}=|P\uparrow\rangle^{l_z=0}_{uud} +|P\uparrow\rangle^{l_z=1}_{uud} +|P\uparrow\rangle^{l_z=-1}_{uud} +|P\uparrow\rangle^{l_z=2}_{uud}.\label{eq:1} \end{eqnarray} For completeness, we list the parametrization of these wave-function amplitudes following Refs.~\cite{Ji:2002xn,Burkardt:2002uc,Ji:2003yj}: \begin{eqnarray} |P\uparrow\rangle_{uud}^{l_z=0} &=& \int d[1]d[2]d[3]\left( \psi_{uud}^{(1)}(1,2,3) + i\epsilon^{\alpha\beta}k_{1\alpha}k_{2\beta} \psi_{uud}^{(2)}(1,2,3)\right) \nonumber \\ && \times \frac{\epsilon^{ijk}}{\sqrt{6}} b^{\dagger\, u}_{i\uparrow}(1) \left(b^{\dagger\, u}_{j\downarrow}(2)b^{\dagger\, d}_{k\uparrow}(3) -b^{\dagger\, d}_{j\downarrow}(2)b^{\dagger\, u}_{k\uparrow}(3)\right) |0\rangle \ , \label{lca1}\\ |P\uparrow\rangle_{uud}^{l_z=1} &=& \int d[1]d[2]d[3]\left(k_{1\perp}^+ \psi_{uud}^{(3)}(1,2,3) + k_{2\perp}^+ \psi_{uud}^{(4)}(1,2,3)\right) \nonumber \\ && \times \frac{\epsilon^{ijk}}{\sqrt{6}} \left( b^{\dagger\, u}_{i\uparrow}(1) b^{\dagger\, u}_{j\downarrow}(2)b^{\dagger\, d}_{k\downarrow}(3) -b^{\dagger\, d}_{i\uparrow}(1)b^{\dagger\, u}_{j\downarrow}(2) b^{\dagger\, u}_{k\downarrow}(3)\right) |0\rangle \ ,\label{lca2} \end{eqnarray} \begin{eqnarray} |P\uparrow\rangle_{uud}^{l_z=-1} &=& \int d[1]d[2]d[3]~k_{2\perp}^- \psi_{uud}^{(5)}(1,2,3) \nonumber \\ && \times \frac{\epsilon^{ijk}}{\sqrt{6}} b^{\dagger\, u}_{i\uparrow}(1) \left( b^{\dagger\, u}_{j\uparrow}(2)b^{\dagger\,d}_{k\uparrow}(3) -b^{\dagger\, d}_{j\uparrow}(2)b^{\dagger\, u}_{k\uparrow}(3) \right) |0\rangle \ ,\label{lca3}\\ |P\uparrow\rangle_{uud}^{l_z=2} &=& \int d[1]d[2]d[3]~k_{1\perp}^+k_{3\perp}^+ \psi_{uud}^{(6)}(1,2,3) \nonumber \\ && \times \frac{\epsilon^{ijk}}{\sqrt{6}} b^{\dagger\, u}_{i\downarrow}(1) \left(b^{\dagger\, d}_{j\downarrow}(2)b^{\dagger\, u}_{k\downarrow}(3) -b^{\dagger\, u}_{j\downarrow}(2)b^{\dagger\, d}_{k\downarrow}(3) \right) |0\rangle \ ,\label{lca4} \end{eqnarray} where $\alpha,\beta=1,2$ are transverse indices and $k^\pm_{i\perp}=k^x_i\pm i k^y_i$. Each factor of $k^+_{i\perp}$ ($k^-_{i\perp}$) carries one unit of orbital angular momentum $l_z=+1$ ($l_z=-1$), so that in each term the quark helicities and the orbital angular momentum add up to the nucleon helicity $+1/2$. In Eqs.~(\ref{lca1})-(\ref{lca4}) the integration measures are defined as \begin{equation} \label{eq:7} d[1]d[2]d[3]= \frac{dx_1dx_2dx_3}{\sqrt{x_1x_2x_3}}\delta\left(1-\sum_{i=1}^3 x_i\right) \frac{d^2 \boldsymbol{k}_{1\perp}d^2\boldsymbol{k}_{2\perp} d^2\boldsymbol{k}_{3\perp}}{[2(2\pi)^3]^2} \delta\left(\sum_{i=1}^3 \boldsymbol{k}_{i\perp}\right), \end{equation} with $x_i$ the fraction of the longitudinal nucleon momentum carried by quark $i$, and $\boldsymbol{k}_{i\perp}$ their transverse momenta. Furthermore, $b^{\dagger\, q}_{i,\,\lambda}$ and $b^q_{i,\,\lambda}$ are creation and annihilation operators of a quark with flavour $q$, helicity $\lambda$ and color $i$, respectively. In the following, we will describe the above light-cone wave-function amplitudes in a light-cone constituent quark model (CQM) following Ref.~\cite{Pasquini:2008ax}.
Working in the so-called ``uds'' basis~\cite{Franklin:68,Capstick:1986bm} the proton state is given in terms of a completely symmetrized wave function of the form \begin{equation} |P\uparrow\rangle=|P\uparrow\rangle_{uud}+|P\uparrow\rangle_{udu}+ |P\uparrow\rangle_{duu} \,. \label{eq:2} \end{equation} In this symmetrization, the state $|P\uparrow\rangle_{udu}$ is obtained from $|P\uparrow\rangle_{uud}$ by interchanging the second and third spin and space coordinates as well as the indicated quark type, with a similar interchange of the first and third coordinates for $|P\uparrow\rangle_{duu}$. Following the derivation outlined in Ref.~\cite{Boffi:2002yy}, we find that the $uud$ component of the light-cone state of the proton can be written as \be\label{eq:12} \ket{P,\Lambda}_{uud} = \sum_{\lambda_i,c_i} \int d[1]d[2]d[3] \Psi^{\Lambda,[f]}_{uud}(\{x_i,\boldsymbol{ k}_{i\perp };\lambda_i\}) \frac{\epsilon^{ijk}}{\sqrt{6}} b^{\dagger\,u}_{i,\,\lambda_1}(1) b^{\dagger\,u}_{j,\,\lambda_2}(2) b^{\dagger\,d}_{k,\,\lambda_3}(3) |0\rangle\, . \ee In Eq.~(\ref{eq:12}), assuming SU(6) spin-flavor symmetry, we can factorize the LCWF $\Psi^{\Lambda,[f]}_{uud}(\{x_i,\boldsymbol{ k}_{i\perp };\lambda_i\})$ in a momentum-dependent wave function and a spin-dependent part, i.e., \begin{eqnarray} \label{eq:13} \Psi^{\Lambda,[f]}_{uud}(\{x_i,\boldsymbol{ k}_{i\perp }; \lambda_i\}) &=& \tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \frac{1}{\sqrt{3}}\tilde\Phi_{\Lambda}(\lambda_1,\lambda_2,\lambda_3). \end{eqnarray} In the above equation the momentum-dependent function is given by \begin{eqnarray}\label{eq:14} \tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\})= 2(2\pi)^3\bigg[\frac{1}{M_0}\frac{\omega_1\omega_2\omega_3}{x_1x_2x_3}\bigg]^{1/2}\psi(\{x_i,\boldsymbol{ k}_{i\perp }\}), \end{eqnarray} where $\psi(\{x_i,\boldsymbol{ k}_{i\perp }\})$ is symmetric under exchange of the momenta of any quark pairs and is spherically symmetric, $\omega_i$ is the free-quark energy, and $M_0=\sum_i\omega_i$ is the mass of the non-interacting three-quark system. The spin-dependent part in Eq.~(\ref{eq:13}) is given by \begin{eqnarray} \tilde\Phi_{\Lambda}(\lambda_1,\lambda_2,\lambda_3) &=&\sum_{\mu_1\mu_2\mu_3} \langle 1/2,\mu_1; 1/2, \mu_2|1, \mu_1+\mu_2 \rangle \langle 1, \mu_1+\mu_2;1/2, \mu_3| 1/2, \Lambda\rangle\nn\\ &&\times D_{\mu_1\lambda_1}^{1/2*}(R_{cf}(x_1,\boldsymbol{ k}_{1\perp })) D_{\mu_2\lambda_2}^{1/2*}(R_{cf}(x_2,\boldsymbol{ k}_{2\perp })) D_{\mu_3\lambda_3}^{1/2*}(R_{cf}(x_3,\boldsymbol{ k}_{3\perp })). \label{eq:15} \end{eqnarray} In Eq.~(\ref{eq:15}), $D_{\lambda\mu}^{1/2}(R_{cf}(x,\boldsymbol{ k}_\perp))$ is the matrix element of the Melosh rotation $R_{cf}$~\cite{Melosh:74} \begin{eqnarray} D_{\lambda\mu}^{1/2}(R_{cf}(x,\boldsymbol{ k}_\perp)) &=& \langle\lambda|R_{cf}(x,\boldsymbol{k}_\perp)|\mu\rangle\nonumber\\ &=& \langle\lambda|\frac{m + xM_0 - i\boldsymbol{\sigma}\cdot(\hat{\boldsymbol{z}}\times\boldsymbol{k}_\perp)}{\sqrt{(m + xM_0)^2 + \boldsymbol{k}^{\, 2}_\perp}}|\mu\rangle. \label{eq:16} \end{eqnarray} The Melosh rotation corresponds to the unitary transformation which converts the instant-form spin eigenstates (the Pauli spinors) to light-front helicity eigenstates. In particular, the light-cone spin wave function of Eq.~(\ref{eq:15}) is obtained from the transformation of the canonical-spin wave function with zero orbital angular momentum component. 
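For orientation, writing Eq.~(\ref{eq:16}) explicitly as a matrix in the light-front helicity basis $\{+\tfrac{1}{2},-\tfrac{1}{2}\}$ (this is simply the operator above written out, not an additional model assumption), one has
\[
D^{1/2}(R_{cf}(x,\boldsymbol{k}_\perp))=\frac{1}{\sqrt{(m + xM_0)^2 + \boldsymbol{k}^{\, 2}_\perp}}
\left(\begin{array}{cc}
m+xM_0 & -(k^x-ik^y)\\
k^x+ik^y & m+xM_0
\end{array}\right).
\]
The off-diagonal entries, proportional to the quark transverse momentum, are the spin-flip terms responsible for the components of the LCWF with nonzero orbital angular momentum discussed below.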
The effects of the Melosh transformation are immediately evident in the presence of the spin-flip term $i\boldsymbol{\sigma}\cdot(\hat{\boldsymbol{z}}\times\boldsymbol{k}_\perp)$ in Eq.~(\ref{eq:16}). Such a term generates non-zero orbital angular momentum, even if the original (instant-form) wave function only contained S-wave components. Therefore, as a consequence of total angular momentum conservation, the LCWF has components with total quark helicity different from the nucleon helicity. Making explicit the dependence on the quark helicities, the light-cone spin wave function of Eq.~(\ref{eq:15}) takes the following values: \begin{eqnarray} \label{eq:17} \tilde \Phi_\uparrow\left(\uparrow,\uparrow,\downarrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{k}_{i\perp })}} \frac{1}{\sqrt{6}}(2a_1a_2a_3+a_1 k_2^-k_3^+ +a_2k_1^-k_3^+), \\\label{eq:18} \tilde \Phi_\uparrow\left(\uparrow,\downarrow,\uparrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(-a_1a_2a_3+a_3 k_1^- k_2^+-2a_1k_2^+k_3^-), \\\label{eq:19} \tilde \Phi_\uparrow\left(\downarrow,\uparrow,\uparrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(-a_1a_2a_3+a_3 k_1^+k_2^--2a_2k_1^+k_3^-), \\\label{eq:20} \tilde \Phi_\uparrow\left(\uparrow,\downarrow,\downarrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(a_1a_2k_3^+- k_1^- k_2^+k_3^+-2a_1 a_3k_2^+), \\\label{eq:21} \tilde \Phi_\uparrow\left(\downarrow,\uparrow,\downarrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(-k_1^+k_2^-k_3^++a_1a_2 k_3^+ -2a_2 a_3k_1^+), \\\label{eq:21a} \tilde \Phi_\uparrow\left(\downarrow, \downarrow, \uparrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(a_2 a_3 k_1^+ +a_1 a_3k_2^+ +2k_1^+k_2^+k_3^-), \\\label{eq:22} \tilde \Phi_\uparrow\left(\uparrow,\uparrow,\uparrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(-a_1a_3k_2^- - a_2a_3 k_1^-+2a_1a_2k_3^-), \\\label{eq:23} \tilde \Phi_\uparrow\left(\downarrow,\downarrow,\downarrow\right) &=&\prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{6}}(-a_2k_1^+k_3^+ - a_1 k_2^+k_3^++2a_3k_1^+k_2^+), \end{eqnarray} where $a_i=(m+x_i M_0)$, and $N(x_i,\boldsymbol{ k}_{i\perp })= [(m+x_i M_0)^2+ \boldsymbol{ k}^2_{i\perp}]$. Taking into account the quark-helicity dependence in Eqs.~(\ref{eq:17})-(\ref{eq:23}), the nucleon state can be mapped out into the different angular momentum components. 
After straightforward algebra, one finds the following representation for the nucleon wave-function amplitudes in the light-cone CQM \begin{eqnarray} \psi^{(1)}(1,2,3)&=&\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\})\nn\\ &&\times \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{3}}( -a_1 a_2 a_3 +a_3 \boldsymbol{ k}_{1\perp}\cdot \boldsymbol{ k}_{2\perp} +2a_1 \boldsymbol{ k}_{1\perp}\cdot \boldsymbol{ k}_{2\perp} +2a_1 \boldsymbol{ k}_{2\perp}^2),\nn\\ &&\label{eq:24}\\ \psi^{(2)}(1,2,3)&=&\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}}\frac{1}{\sqrt{3}} (a_3 + 2 a_1), \label{eq:25} \end{eqnarray} \begin{eqnarray} \psi^{(3)}(1,2,3)&=&-\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{3}}(a_1 a_2 + \boldsymbol{ k}_{2\perp}^2), \label{eq:26}\\ \psi^{(4)}(1,2,3)&=&-\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{3}}(a_1 a_2 + 2a_3 a_1-\boldsymbol{ k}_{1\perp}^2 -2 \boldsymbol{ k}_{1\perp}\cdot \boldsymbol{ k}_{2\perp}), \label{eq:27}\\ \psi^{(5)}(1,2,3)&=&\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{3}}(a_1 a_3), \label{eq:28}\\ \psi^{(6)}(1,2,3)&=&\tilde \psi(\{x_i,\boldsymbol{ k}_{i\perp }\}) \prod_i\frac{1}{\sqrt{N(x_i,\boldsymbol{ k}_{i\perp })}} \frac{1}{\sqrt{3}} a_2. \label{eq:29} \end{eqnarray} Notice that the results in Eqs.~(\ref{eq:24})-(\ref{eq:29}) follow from the spin and orbital angular momentum structure generated from the Melosh rotations, and are independent on the functional form of the momentum-dependent wave function. \section{Sivers function} \label{sec:sivers} The quark Sivers function can be calculated from the following definition \begin{equation} \label{sivers} f_{1T}^\perp(x,\boldsymbol k^{\, 2}_\perp)=-i(k^x+ik^y)\frac{M}{2\boldsymbol k^{\, 2}_\perp} \int\frac{d\xi^-d^2\boldsymbol \xi_\perp}{(2\pi)^3} e^{-i(\xi^- k^+-\boldsymbol\xi_\perp\cdot\boldsymbol k_\perp)} \langle P\uparrow|\bar\psi(\xi^-,\boldsymbol\xi_\perp){\cal L}^\dagger_\xi\gamma^+{\cal L}_0\psi(0)|P \downarrow\rangle \ . \end{equation} As we discussed in the Introduction, the gauge link is crucial to obtain a non-zero Sivers function. In the covariant gauge, the gauge link can be reduced to the light-cone gauge link\footnote{An off-light-cone gauge link has to be used to regulate the light-cone singularities for higher-order calculations. In this paper, we will not encounter this singularity. Therefore, we will simply adopt the gauge link along the light-cone direction in covariant gauge and the transverse gauge link at spatial infinity in light-cone gauge.}. According to the light-cone wave function model, in the following calculations we choose the light-cone gauge $A^+=0$, where the gauge link reduces to a transverse gauge link at $\xi^-=\infty$, i.e., \begin{equation} {\cal L}_\xi|_{A^+=0}={\cal P}\exp\left(-ig\int_{\boldsymbol \xi_\perp}^\infty {\rm d}^2\boldsymbol \zeta_\perp\cdot \boldsymbol A_\perp(\xi^-=\infty,\boldsymbol\zeta_\perp)\right) \ . \end{equation} In the Sivers function of Eq.~(\ref{sivers}), we will expand the above gauge link to take into account the contribution from the one-gluon exchange diagram. 
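Explicitly, to the order relevant for the one-gluon exchange contribution considered here, expanding the exponential of the transverse gauge link amounts to keeping (a sketch of the ${\cal O}(g)$ term only; path ordering plays no role at this order)
\[
{\cal L}_\xi\big|_{A^+=0}\simeq 1- ig\int_{\boldsymbol \xi_\perp}^{\infty}{\rm d}^2\boldsymbol \zeta_\perp\cdot
\boldsymbol A_\perp(\xi^-=\infty,\boldsymbol\zeta_\perp)\,,
\]
and similarly for ${\cal L}_0$ with $\boldsymbol \xi_\perp$ replaced by $\boldsymbol 0_\perp$.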
Furthermore, in the light-cone gauge the gluon propagator takes the following form \begin{equation} d^{\mu\nu}(q)=-g^{\mu\nu}+\frac{n^\mu q^\nu+n^\nu q^\mu}{[n\cdot q]} \ , \end{equation} where $n$ is a light-like vector with $n^2=0$ and $n\cdot q=q^+$. The gluon propagator has a light-cone singularity, as can be seen from the above equation. We will adopt the principal-value prescription to regulate this singularity. We have also checked that the final results do not depend on the prescription.\footnote{For example, if we choose the so-called advanced boundary condition for the gauge potential, the transverse gauge link becomes unity, whereas the above gluon propagator generates phases which allow one to recover the previous results with the principal-value prescription.} Under this prescription, there is no phase contribution from the above propagator. However, the transverse gauge link expansion, when combined with the $n^-\boldsymbol q_\perp/n\cdot q$ factor of the above equation, leads to the following expression \begin{equation} \frac{e^{iq^+\infty}}{q^+}=i\pi\delta(q^+)\ , \end{equation} since, with the principal-value prescription, ${\rm P}\,\frac{\cos(q^+\xi^-)}{q^+}$ oscillates to zero while $\frac{\sin(q^+\xi^-)}{q^+}\to\pi\delta(q^+)$ as $\xi^-\to\infty$. This contribution provides the phase needed to generate a non-zero Sivers function. The dominance in the gluon propagator of the $n^-\boldsymbol q_\perp/n\cdot q$ term, with the $\perp$ index coming from the contraction with the transverse gauge link, also simplifies the interactions between the quark fields, since the quark scattering conserves the helicity. Finally, we obtain the following expression for the quark Sivers function \begin{eqnarray} &&f_{1T}^{\perp\,q}(x,\boldsymbol k^{\, 2}_\perp) =-g^2\frac{k^x+ik^y}{\boldsymbol k^{\, 2}_\perp}\frac{M}{2} \frac{1}{(2\pi)^{11}}\frac{1}{\sqrt{(2k^+)(2k^+_1)}} \int\frac{{\rm d}k_3^+{\rm d}^2\,\boldsymbol k_{3\perp}}{\sqrt{(2k_3^+)(2k^+_4)}} \int{\rm d}^2\, \boldsymbol q_\perp \nonumber\\ &&\times \Big\{\frac{1}{\boldsymbol{q}^{\,2}_\perp}\sum_{\lambda_1,\lambda_3} \sum_f\sum_{i,j}\sum_{k,l} T^a_{ij}T^b_{kl}\delta_{ab} \langle P\uparrow|b^{\dagger\, q}_{i\lambda_1}(k_1)b^q_{j\lambda_1}(k) b^{\dagger\, f}_{k\lambda_3}(k_3)b^f_{l\lambda_3}(k_4) |P\downarrow\rangle\Big\}, \label{eq:sivers1} \end{eqnarray} where the quark momenta are defined as $k_1=k-q$, $k_4=k_3-q$, $T^a$ are the $SU_c(3)$ Gell-Mann matrices and $g$ is the gluon coupling with the quark field. Equation~(\ref{eq:sivers1}) corresponds to the diagrams in Fig.~\ref{fig1} with $\lambda=\lambda_1$ and $\lambda_4=\lambda_3$ for the helicities of the interacting and spectator quarks, respectively, and $\Lambda=-\Lambda'$ for the helicities of the nucleon in the initial and final states. \begin{figure} \centerline{ \epsfig{file=diagram_sivers2.eps, width=0.6\columnwidth}} \caption{\label{fig1} The leading contribution from the one-gluon exchange mechanism to the T-odd distribution functions.} \end{figure} A few comments are in order to explain the above derivation. First, we have approximated the interaction vertex between the gauge field from the gauge link and the quark fields in the proton wave function by its covariant form. In principle, one should use light-cone time-ordered perturbation theory to describe this interaction. However, we expect the corresponding modification to be beyond the accuracy of the approximations already made in modelling the light-cone wave function itself. Nevertheless, it will be interesting to check how large these effects would be. Second, we used perturbation theory to calculate the final-state interaction effects.
For the numerical estimates, we choose a reasonable value for the strong coupling constant (see Sec.~\ref{sect:results}). We note, however, that it may not be appropriate to use a perturbative coupling for such a non-perturbative calculation. We regard this as an important theoretical uncertainty, which affects all model calculations of the Sivers function. As we discussed, in Eq.~(\ref{eq:sivers1}) the quark helicity is conserved. On the other hand, the nucleon helicity flips from the initial to the final state. As a consequence, non-zero results for the Sivers function can be obtained only with a transfer of one unit of orbital angular momentum between the initial and the final nucleon states. \newline \noindent Inserting in Eq.~(\ref{eq:sivers1}) the light-cone wave-function amplitude decomposition of the nucleon state introduced in Sec.~\ref{sect:lcwf}, one finds the following result in terms of the amplitudes $\psi^{(i)}$ \begin{eqnarray} f_{1T}^{\perp\,q}(x,\boldsymbol k_\perp^{\, 2})&=&-\frac{2}{3}g^2M\frac{k^x+ik^y}{\boldsymbol k^{\, 2}_\perp} \int \frac{{\rm d^2}\, \boldsymbol q_\perp}{(2\pi)^2}\frac{1}{\boldsymbol{q}^{\,2}_\perp} \int{\rm d}x'\int{\rm d}^2\boldsymbol t '_\perp \int d[1]d[2]d[3] \sqrt{x_1 x_2 x_3}\,{\cal F}^{\perp\,q}.\nonumber\\ &&\label{eq:sivers-overlap} \end{eqnarray} The function ${\cal F}^{\perp\,q}$ for the $u$ quark is given by \begin{eqnarray} {\cal F}^{\perp\, u}= A^{(1,2)}\phi^{(3,4)}(1,2,3)-A^{(3,4)}\phi^{(1,2)}(1,2,3)- A^{(5)}\phi^{(6)}(1,2,3)+A^{(6)}\phi^{(5)}(1,2,3), \label{eq:F-up} \end{eqnarray} where \begin{eqnarray} \phi^{(1,2)}(1,2,3)&=&\psi^{(1)}(1,2,3)-i(k_1^xk_2^y-k_1^yk_2^x)\psi^{(2)}(1,2,3),\nonumber\\ \phi^{(3,4)}(1,2,3)&=&k_1^-\psi^{(3)}(1,2,3) +k_2^- \psi^{(4)}(1,2,3) ,\nonumber\\ \phi^{(5)}(1,2,3)&=&k_2^+ \psi^{(5)}(1,2,3) -k_3^+ \psi^{(5)}(1,3,2) ,\nonumber\\ \phi^{(6)}(1,2,3)&=&k_1^-k_3^- \psi^{(6)}(1,2,3) -k_1^-k_2^- \psi^{(6)}(1,3,2) .
\end{eqnarray} The functions $A$ in Eq.~(\ref{eq:F-up}) are defined through \begin{eqnarray} A^{(1,2)}&=&\delta^3(k-k_1)\Big[\delta^3(t'-k_2)\phi^{(1,2)*}(\hat 2,1'',3) +\delta^3(t'-k_3)\phi^{(1,2)*}(2,1'',\hat 3)\Big]\nonumber\\ &&+ \delta^3(k-k_2)\Big[\delta^3(t'-k_1)\Big(2\phi^{(1,2)*}(2'',\hat 1,3) +\phi^{(1,2)*}(3,\hat 1,2'')\Big)\nonumber\\ && +\delta^3(t'-k_3)\Big(2\phi^{(1,2)*}(2'',1,\hat 3) +\phi^{(1,2)*}(\hat 3,1,2'')\Big)\Big]\nonumber\\ &&+\delta^3(k-k_3)\Big[\delta^3(t'-k_1)\Big(\phi^{(1,2)*}(2,\hat 1,3'') +\phi^{(1,2)*}(3'',\hat 1,2)\Big)\nonumber\\ &&+\delta^3(t'-k_2)\Big(\phi^{(1,2)*}(\hat 2,1,3'') +\phi^{(1,2)*}(3'',1,\hat 2)\Big)\Big] , \nonumber \end{eqnarray} \begin{eqnarray} A^{(3,4)}&=&\delta^3(k-k_2)\Big[\delta^3(t'-k_1)\phi^{(3,4)*}(2'',\hat 1,3)+ \delta^3(t'-k_3)\phi^{(3,4)*}(2'', 1,\hat 3)\Big]\nonumber\\ &&+ \delta^3(k-k_1)\Big[ \delta^3(t'-k_2)\Big(2\phi^{(3,4)*}(\hat 2,1'',3)+\phi^{(3,4)*}(\hat 2,3,1'')\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(2\phi^{(3,4)*}(2,1'',\hat 3)+\phi^{(3,4)*}(2,\hat 3,1'')\Big)]\Big]\nonumber\\ &&+\delta^3(k-k_3)\Big[\delta^3(t'-k_1)\Big(\phi^{(3,4)*}(2,\hat 1,3'')+ \phi^{(3,4)*}(2,3'',\hat 1)\Big)\nonumber\\ &&+\delta^3(t'-k_2)\Big(\phi^{(3,4)*}(\hat 2,1,3'')+ \phi^{(3,4)*}(\hat 2,3'',1)\Big)\Big],\nonumber \end{eqnarray} \begin{eqnarray} A^{(5)}&=&\delta^3(k-k_1)\Big[\delta^3(t'-k_2)\Big(\phi^{(5)*}(1'',\hat 2,3)+ \phi^{(5)*}(\hat 2,1'',3)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(5)*}(1'',2,\hat 3)+ \phi^{(5)*}(2,1'',\hat 3)\Big)\Big]\nonumber\\ &&+\delta^3(k-k_2)\Big[\delta^3(t'-k_1)\Big(\phi^{(5)*}(\hat 1,2'',3)+ \phi^{(5)*}(2'',\hat 1,3)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(5)*}(1,2'',\hat 3)+ \phi^{(5)*}(2'',1,\hat 3)\Big)\Big],\nonumber \end{eqnarray} \begin{eqnarray} A^{(6)}&=&\delta^3(k-k_1)\Big[\delta^3(t'-k_2)\Big(\phi^{(6)*}(1'',\hat 2,3)+ \phi^{(6)*}(\hat 2,1'',3)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(6)*}(1'', 2,\hat 3)+ \phi^{(6)*}( 2,1'',\hat 3)\Big)\Big]\nonumber\\ &&+\delta^3(k-k_2)\Big[\delta^3(t'-k_1)\Big(\phi^{(6)*}(\hat 1,2'',3)+ \phi^{(6)*}(2'',\hat 1,3)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(6)*}(1,2'',\hat 3)+ \phi^{(6)*}(2'',1,\hat 3)\Big)\Big], \label{eq:A} \end{eqnarray} where the quark coordinates are $\imath''=(x,\boldsymbol k_\perp -\boldsymbol q_\perp)$, and $\hat{\imath}=(x',\boldsymbol{t}\,'_{\perp}+\boldsymbol{q}_\perp)$, $\delta^3(k-k_i)=\delta(x-x_i)\delta^2(\boldsymbol k_\perp-\boldsymbol k_{i\perp})$ and we used the notation $\delta^3(t'-k_i)=\delta(x'-x_i)\delta^2(\boldsymbol t\, '_\perp-\boldsymbol k_{i\perp})$. In the above equations, the complex conjugate only acts on the wave function $ {\psi}^{(i)}$. In Eq.~(\ref{eq:sivers-overlap}), the contributions from the functions $A^{(1,2)}$ and $A^{(3,4)}$ describe the interference between $S$ and $P$ waves, while the terms with $A^{(5)}$ and $A^{(6)}$ correspond to the contribution from $P-D$ wave interference. Similarly for the d-quark, one has \begin{eqnarray} {\cal F}^{\perp\, d}= B^{(1,2)}\phi^{(3,4)}(1,2,3)-B^{(3,4)}\phi^{(1,2)}(1,2,3)- B^{(5)}\phi^{(6)}(1,2,3)+B^{(6)}\phi^{(5)}(1,2,3) , \label{eq:F-down} \end{eqnarray} where the terms with $B^{(1,2)}$ and $B^{(3,4)}$ describe the interference between $S$ and $P$ waves, while the terms with $B^{(5)}$ and $B^{(6)}$ correspond to the contribution from $P-D$ wave interference. 
The explicit expression for these functions is \begin{eqnarray} B^{(1,2)}&=&\delta^3(k-k_3)\Big[\delta^3(t'-k_2)\phi^{(1,2)*}(\hat 2,1,3'') +\delta^3(t'-k_1)\phi^{(1,2)*}(2,\hat 1,3'')\Big]\nonumber\\ &&+\delta^3(k-k_1) \Big[\delta^3(t'-k_2) \Big(\phi^{(1,2)*}(\hat 2,1'',3)+\phi^{(1,2)*}(3,1'',\hat 2)\Big)\nonumber\\ &&+\delta^3(t'-k_3) \Big(\phi^{(1,2)*}(2,1'',\hat 3)+\phi^{(1,2)*}(\hat 3,1'',2)\Big)\Big], \nonumber \end{eqnarray} \begin{eqnarray} B^{(3,4)}&=&\delta^3(k-k_3)\Big[\delta^3(t'-k_1) \phi^{(3,4)*}(2,\hat 1,3'')+\delta^3(t'-k_2) \phi^{(3,4)*}(\hat 2,1,3'')\Big]\nonumber\\ && +\delta^3(k-k_2)\Big[\delta^3(t'-k_1)\Big(\phi^{(3,4)*}(2'',\hat 1,3) +\phi^{(3,4)*}(2'',3,\hat 1)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(3,4)*}(2'',1,\hat 3) +\phi^{(3,4)*}(2'',\hat 3,1)\Big)\Big],\nonumber\\ &&\nonumber\\ B^{(5)}&=&\delta^3(k-k_3)\Big[\delta(t'-k_2)\Big(\phi^{(5)*}(1,\hat 2,3'')+ \phi^{(5)*}(\hat 2,1,3'')\Big)\nonumber\\ &&+\delta(t'-k_1)\Big(\phi^{(5)*}(\hat 1,2,3'')+ \phi^{(5)*}(2,\hat 1,3'')\Big)\Big] ,\nonumber\\ &&\nonumber\\ B^{(6)}&=&\delta^3(k-k_3)\Big[\delta^3(t'-k_1)\Big(\phi^{(6)*}(\hat 1,2,3'')+ \phi^{(6)*}(2,\hat 1,3'')\Big)\nonumber\\ &&+\delta^3(t'-k_2)\Big(\phi^{(6)*}(1,\hat 2,3'')+ \phi^{(6)*}(\hat 2,1,3'')\Big)\Big]. \label{eq:B} \end{eqnarray} In the above equations, the complex conjugate only acts on the wave function $ {\psi}^{(i)}$. Using the CQM expressions for the three-quark light cone amplitudes given in Sec.~\ref{sect:lcwf}, we obtain the following results for the Sivers function \begin{eqnarray} &&f_{1T}^{\perp\,q}(x,\boldsymbol k^{\, 2}_\perp)=-\frac{2}{3}g^2M \frac{k^x+ik^y}{\boldsymbol k^{\,2}_\perp} \int \frac{{\rm d}^2\boldsymbol q_\perp}{(2\pi)^2}\frac{1}{\boldsymbol q^{\; 2}_\perp} \int{\rm d}x'\int{\rm d}^2\boldsymbol t'_\perp \int d[1]d[2]d[3] \sqrt{x_1 x_2 x_3}\nonumber\\ &&\times \delta(x-x_3) \delta^2(\boldsymbol k_\perp-\boldsymbol k_{3\perp})\delta(x'-x_1) \delta^2(\boldsymbol t\, '_\perp-\boldsymbol k_{1\perp}) \,\psi^*(\{x'_i\},\{\boldsymbol k\,'_{i\perp}\}) \,\psi(\{x_i\},\{\boldsymbol k_{i\perp}\})\nonumber\\ &&\times3\delta_{\tau_3\tau_q}\ \left\{ \delta_{\tau_q 1/2}X^{00}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) +\frac{1}{3}[\delta_{\tau_q 1/2}+2\delta_{\tau_q -1/2}] X^{11}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\})\right\},\label{eq:sivers-lcwf} \end{eqnarray} where the quark momenta in the final state are $(x'_3=x,\boldsymbol k\, '_{3\perp}=\boldsymbol k_{3\perp}-\boldsymbol q_\perp)$, $(x'_1=x',\boldsymbol k\,'_1=\boldsymbol t\, '_\perp+\boldsymbol q_\perp)$, $(x'_2=x_2,\boldsymbol k\, '_{2\perp}=\boldsymbol k_{2\perp})$. 
In Eq.~(\ref{eq:sivers-lcwf}), the functions $X^{00}$ and $X^{11}$ are given by \begin{eqnarray} X^{00}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) &=& \prod_{i=1}^3 N^{-1}(\boldsymbol{k}\,'_i) N^{-1}(\boldsymbol{k}_i) (i\,B_{3x}+B_{3y}) (A_1A_2 + \boldsymbol{B}_1\cdot\boldsymbol{B}_2),\label{eq:x00_flip}\\ X^{11}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) &=& \prod_{i=1}^3 N^{-1}(\boldsymbol{k}\,'_i) N^{-1}(\boldsymbol{k}_i)\nonumber\\ &&\hspace{-0.2 truecm}\times\frac{1}{3}\Big\{ -(A_1A_2 +\boldsymbol{B}_1\cdot\boldsymbol{B}_2)(iB_{3x}+B_{3y}) \nonumber \\ & & \ {}\quad + 2\boldsymbol B_1\cdot\boldsymbol B_3(iB_{2x}+B_{2y}) +2\boldsymbol B_2\cdot\boldsymbol B_3(iB_{1x}+B_{1y}) \nonumber \\ & & \ {}\quad +2i\Big [A_3A_1(iB_{2x}+B_{2y})+A_3A_2(iB_{1x}+B_{1y}) \Big] \Big\},\label{eq:x11_flip} \end{eqnarray} where \begin{eqnarray} A_i &=& (m+ x'_iM'_0)(m+ x_i M_0) + k'^y_i k^y_i + k'^x_i k^x_i, \nn\\ B_{i,x} &=& - (m+ x'_iM'_0) k^y_i + (m+ x_i M_0) k'^y_i, \nn\\ B_{i,y}& =& (m+ x'_iM'_0) k^x_i - (m+ x_i M_0) k'^x_i, \nn\\ B_{i,z} &=& k'^x_i k^y_i - k'^y_i k^x_i . \label{eq:def-ab} \end{eqnarray} \section{Boer-Mulders function} The calculation of Sec.~\ref{sec:sivers} can be repeated for the Boer-Mulders function, defined from the following quark correlation function \begin{equation} \label{bm} h_{1}^\perp(x,\boldsymbol k^{\, 2}_\perp)=\epsilon^{ij} k^j_\perp\frac{M}{2\boldsymbol k_\perp^{\, 2}} \int\frac{d\xi^-d^2\boldsymbol \xi_\perp}{(2\pi)^3}e^{-i(\xi^- k^+-\boldsymbol\xi_\perp\cdot\boldsymbol k_\perp)} \frac{1}{2}\sum_\Lambda \langle P\Lambda|\bar\psi(\xi^-,\boldsymbol \xi_\perp){\cal L}^\dagger_\xi i\sigma^{i+}\gamma_5{\cal L}_0\psi(0)|P \Lambda\rangle \ . \end{equation} Also in this case we expand the gauge link up to the next-to leading order, and following the same method we used in the calculation of the Sivers function, we find for the Boer-Mulders function \begin{eqnarray} &&h_{1}^{\perp\,q}(x,\boldsymbol k_\perp^{\, 2})=-g^2 \frac{k^x-i k^y}{\boldsymbol k^{\,2}_\perp} \frac{M}{2} \frac{1}{(2\pi)^{11}}\frac{1}{\sqrt{(2k^+)(2k^+_1)}} \int\frac{{\rm d}k_3^+{\rm d}^2\boldsymbol k_{3\perp}}{\sqrt{(2k_3^+)(2k^+_4)}} \int{\rm d}^2\boldsymbol q_\perp \nonumber\\ &&\times \Big\{\frac{1}{\boldsymbol{q}^{\,2}_\perp}\sum_{\Lambda,\lambda_3} \sum_f\sum_{i.j}\sum_{k,l} T^a_{ij}T^b_{kl}\delta_{ab} \langle P\Lambda|b^{\dagger\, q}_{i,\,\uparrow}(k_1)b^q_{j,\,\downarrow}(k) b^{\dagger\, f}_{k,\, \lambda_3}(k_3)b^f_{l,\, \lambda_3}(k_4) |P\Lambda\rangle\Big\}, \label{eq:bm1} \end{eqnarray} where the quark momenta are defined as $k_1=k-q$, $k_4=k_3-q$. The above equation corresponds to the diagram of Fig.~\ref{fig1} with $\lambda=-\lambda_1$ and $\lambda_4=\lambda_3$ for the helicity of the interacting and spectator quarks, respectively, and $\Lambda=\Lambda'$ for the helicity of the nucleon in the initial and final states, i.e. the helicity is conserved at the quark-gluon vertex, while the helicity of the struck quark flips from the initial to the final state. Since the nucleon state has the same helicity in the initial and final state, the quark helicity flip must be compensated by a transfer of one unit of orbital angular momentum. 
Inserting in Eq.~(\ref{eq:bm1}) the light-cone wave-function amplitude decomposition of the nucleon state introduced in Sec.~\ref{sect:lcwf}, one finds the following results in terms of the amplitudes $\psi^{(i)}$ \begin{eqnarray} h_{1}^{\perp\,q}(x,\boldsymbol k^{\, 2}_\perp)&=&\frac{2}{3}g^2M \frac{k^x-ik^y}{{\boldsymbol k}^{\, 2}_\perp} \int \frac{{\rm d^2}\boldsymbol q_\perp}{(2\pi)^2}\frac{1}{\boldsymbol q^{\, 2}_\perp} \int{\rm d}x'\int{\rm d}^2\boldsymbol t\,'_\perp \int d[1]d[2]d[3] \sqrt{x_1 x_2 x_3}\,{\cal H}^{\perp\,q},\nonumber\\ \label{eq:bm-overlap} \end{eqnarray} where the function ${\cal H}^{\perp\, q}$ for the up quark is \begin{eqnarray} {\cal H}^{\perp\, u}&=& -C^{(1,2)}\tilde \phi^{(3,4)}(1,2,3)+\widetilde C^{(3,4)}\phi^{(1,2)}(1,2,3) -C^{(3,4)}\tilde\phi^{(6)}(1,2,3)\nonumber\\ &&+\widetilde C^{(6)}\phi^{(3,4)}(1,2,3)+ \widetilde C^{(1,2)}\phi^{(5)}(1,2,3)-C^{(5)}\tilde\phi^{(1,2)}(1,2,3), \label{eq:H-up} \end{eqnarray} with \begin{eqnarray} \tilde \phi^{(1,2)}(1,2,3)&=&\psi^{(1)}(1,2,3)+ i(k_1^xk_2^y-k_1^yk_2^x) \psi^{(2)}(1,2,3) ,\nonumber\\ \tilde \phi^{(3,4)}(1,2,3)&=&k_1^+\psi^{(3)}(1,2,3) +k_2^+\psi^{(4)}(1,2,3),\nonumber\\ \tilde \phi^{(6)}(1,2,3)&=&k_1^+k_3^+\psi^{(6)}(1,2,3) -k_1^+k_2^+\psi^{(6)}(1,3,2) . \end{eqnarray} In Eq.~(\ref{eq:H-up}), the terms containing $C^{(1,2)}$ and $C^{(3,4)}$ describe the contribution from $S$ and $P$ wave interference, while $C^{(5)}$ and $C^{(6)}$ are associated with the $P-D$ wave interference terms. The explicit expression for these functions is \begin{eqnarray} C^{(1,2)}&=&\delta^3(k-k_2)\Big[\delta^3(t'-k_1)\Big(\phi^{(1,2)*}(\hat 1,3,2'') +2\phi^{(1,2)*}(2'',3,\hat 1)\Big)\nonumber\\ && +\delta^3(t'-k_3)\Big(\phi^{(1,2)*}(1,\hat 3,2'') +2\phi^{(1,2)*}(2'',\hat 3,1)\Big) \Big]\nonumber\\ &&+\delta^3(k-k_3)\Big[\delta^3(t'-k_1)\phi^{(1,2)*}(3'',2,\hat 1) +\delta^3(t'-k_2)\phi^{(1,2)*}(3'',\hat 2,1)\Big],\nonumber \end{eqnarray} \begin{eqnarray} \tilde C^{(3,4)}&=&\delta^3(k-k_1)\left[\delta^3(t'-k_2)\left( \tilde \phi^{(3,4)*}(3,\hat 2,1'')+2\tilde\phi^{(3,4)*}(3,1'',\hat 2)\right)\right.\nonumber\\ &&+\left.\delta^3(t'-k_3)\left( \tilde \phi^{(3,4)*}(\hat 3,2,1'')+2\tilde\phi^{(3,4)*}(\hat 3,1'',2)\right)\right] \nonumber\\ &&+\delta^3(k-k_3)\left[\delta^3(t'-k_1)\tilde \phi^{(3,4)*}(\hat 1,3'',2) +\delta^3(t'-k_2)\tilde\phi^{(3,4)*}(\hat 1,3'',2) \right],\nonumber \end{eqnarray} \begin{eqnarray} C^{(3,4)}&=&\delta^3(k-k_1)\left[\delta^3(t'-k_2)\phi^{(3,4)*}(1'',\hat 2,3) +\delta^3(t'-k_3)\phi^{(3,4)*}(1'',2,\hat 3)\right]\nonumber\\ &&+\delta^3(k-k_2)\left[\delta^3(t'-k_1)\phi^{(3,4)*}(2'',\hat 1, 3) +\delta^3(t'-k_3)\phi^{(3,4)*}(2'',1,\hat 3)\right],\nonumber\\ &&\nonumber\\ \tilde C^{(6)}&=&\delta^3(k-k_1) \left[\delta^3(t'-k_2)\left(\tilde\phi^{(6)*}(1'',\hat 2,3) +\tilde\phi^{(6)*}(\hat 2,1'',3)\right)\right.\nonumber\\ &&+\left. \delta^3(t'-k_3)\left(\tilde\phi^{(6)*}(1'',2,\hat 3) +\tilde\phi^{(6)*}(2,1'',\hat 3)\right)\right],\nonumber \end{eqnarray} \begin{eqnarray} \tilde C^{(1,2)}&=&\delta^3(k-k_1) \left[\delta^3(t'-k_2)\tilde\phi^{(1,2)*}(\hat 2,1'',3)+ \delta^3(t'-k_3)\tilde\phi^{(1,2)*}(2,1'',\hat 3)\right]\nonumber\\ &&+\delta^3(k-k_2) \left[\delta^3(t'-k_1)\tilde\phi^{(1,2)*}(\hat 1,2'',3)+ \delta^3(t'-k_3)\tilde\phi^{(1,2)*}(1,2'',\hat 3)\right],\nonumber\\ &&\nonumber\\ C^{(5)}&=&\delta^3(k-k_2) \Big[\delta^3(t'-k_1)\Big(\phi^{(5)*}(\hat 1,2'',3) +\phi^{(5)*}(2'',\hat 1,3)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(5)*}(1,2'',\hat 3) +\phi^{(5)*}(2'',1,\hat 3)\Big)\Big]. 
\label{eq:C} \end{eqnarray} In the above equations, the complex conjugate only acts on the wave function ${\psi}^{(i)}$. Analogously, the function ${\cal H}^{\perp}$ for the down quark is \begin{eqnarray} {\cal H}^{\perp\, d}&=& D^{(1,2)}\tilde \phi^{(3,4)}(1,2,3)-\tilde D^{(3,4)}\phi^{(1,2)}(1,2,3) +D^{(3,4)}\tilde\phi^{(6)}(1,2,3)\nonumber\\ &&-\tilde D^{(6)}\phi^{(3,4)}(1,2,3)+ \tilde D^{(1,2)}\phi^{(5)}(1,2,3)-D^{(5)}\tilde\phi^{(1,2)}(1,2,3), \label{eq:H-down} \end{eqnarray} where the $S-P$ wave interference contribution comes from the terms proportional to $D^{(1,2)}$ and $D^{(3,4)}$, while the remaining two terms give the contribution from the interference of $P$ and $D$ waves. The function $D$ in Eq.~(\ref{eq:H-down}) are defined as \begin{eqnarray} D^{(1,2)}&=&\delta^3(k-k_3)\left[\delta^3(t'-k_1)\phi^{(1,2)*}(\hat 1,2,3'') +\delta^3(t'-k_2)\phi^{(1,2)*}(1,\hat 2,3'')\right],\nonumber\\ &&\nonumber\\ \tilde D^{(3,4)}&=&\delta^3(k-k_3)\left[\delta^3(t'-k_1) \tilde \phi^{(3,4)*}(\hat 1,2,3'') +\delta^3(t'-k_2)\tilde\phi^{(3,4)*}(1,\hat 2,3'')\right],\nonumber\\ &&\nonumber\\D^{(3,4)}&=&\delta^3(k-k_3)\left[\delta^3(t'-k_1)\left( \phi^{(3,4)*}(3'',2,\hat 1)+\phi^{(3,4)*}(3'',\hat 1,2)\right) \right.\nonumber\\ &&+\left.\delta^3(t'-k_2)\left(\phi^{(3,4)*}(3'',\hat 2,1)+ \phi^{(3,4)*}(3'',1,\hat 2)\right)\right],\nonumber\\ &&\nonumber\\ \tilde D^{(6)}&=&\delta^3(k-k_1) \left[\delta^3(t'-k_2)\left(\tilde\phi^{(6)*}(3,\hat 2,1'')+\tilde\phi^{(6)*}(\hat 2,3,1'') \right)\right.\nonumber\\ &&+\left. \delta^3(t'-k_3)\left(\tilde\phi^{(6)*}(\hat 3,2,1'')+\tilde\phi^{(6)*}(2,\hat 3,1'') \right)\right],\nonumber\\ &&\nonumber\\ \tilde D^{(1,2)}&=&\delta^3(k-k_2) \left[\delta^3(t'-k_1)\left(\tilde\phi^{(1,2)*}(\hat 1,2'',3) +\tilde\phi^{(1,2)*}(3,2'',\hat 1)\right)\right.\nonumber\\ &&+\left. \delta^3(t'-k_3)\left(\tilde\phi^{(1,2)*}(1,2'',\hat 3) +\tilde\phi^{(1,2)*}(\hat 3,2'',1)\right)\right],\nonumber\\ &&\nonumber\\ D^{(5)}&=&\delta^3(k-k_2) \Big[\delta^3(t'-k_1)\Big(\phi^{(5)*}(\hat 1,2'',3) +\phi^{(5)*}(3,2'',\hat1)\Big)\nonumber\\ &&+\delta^3(t'-k_3)\Big(\phi^{(5)*}(1,2'',\hat 3) +\phi^{(5)*}(\hat 3,2'',1)\Big)\Big]. \label{eq:D} \end{eqnarray} In the above equations, the complex conjugate only acts on the wave function ${\psi}^{(i)}$. In the model for the three-quark light cone amplitudes introduced in Sec.~\ref{sect:lcwf}, we find the following explicit results \begin{eqnarray} &&h_{1}^{\perp\,q}(x,\boldsymbol k^{\, 2}_\perp)=\frac{2}{3}g^2M \frac{k^x-ik^y}{{\boldsymbol k}^2_\perp} \int \frac{{\rm d}^2\boldsymbol q_\perp}{(2\pi)^2}\frac{1}{\boldsymbol q^{\, 2}_\perp} \int{\rm d}x'\int{\rm d}^2\boldsymbol t\,'_\perp \int d[1]d[2]d[3] \sqrt{x_1 x_2 x_3}\nonumber\\ &&\times \delta(x-x_3) \delta^2(\boldsymbol k_\perp-\boldsymbol k_{3\perp})\delta(x'-x_1) \delta^2(\boldsymbol t\, '_\perp-\boldsymbol k\, '_{1\perp}) \,\psi^*(\{x'_i\},\{\boldsymbol k'_{i\perp}\}) \,\psi(\{x_i\},\{\boldsymbol k_{i\perp}\})\nonumber\\ &&\times3\delta_{\tau_3\tau_q}\ \left\{ \delta_{\tau_q 1/2}\tilde X^{00}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) +\frac{1}{3}[\delta_{\tau_q 1/2}+2\delta_{\tau_q -1/2}] \tilde X^{11}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\})\right\},\label{eq:boer-lcwf} \end{eqnarray} where the quark momenta in the final state are $(x'_3=x,\boldsymbol k\, '_{3\perp}=\boldsymbol k_{3\perp}-\boldsymbol q_\perp)$, $(x'_1=x',\boldsymbol k\,'_1=\boldsymbol t\, '_\perp+\boldsymbol q_\perp)$, $(x'_2=x_2,\boldsymbol k\, '_{2\perp}=\boldsymbol k_{2\perp})$. 
In Eq.~(\ref{eq:boer-lcwf}), the functions $\tilde X^{00}$ and $\tilde X^{11}$ are given by \begin{eqnarray} \tilde X^{00}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) & = & \prod_{i=1}^3 N^{-1}(\boldsymbol{k}\,'_i) N^{-1}(\boldsymbol{k}_i) \Big[ (A_1A_2 + \boldsymbol{B}_1\cdot\boldsymbol{B}_2)\tilde A_{3}\Big] , \label{eq:x00_tilde_noflip}\\ \tilde X^{11}(\{\boldsymbol k\,'_i\},\{\boldsymbol k_i\}) & = & \prod_{i=1}^3 N^{-1}(\boldsymbol{k}\,'_i) N^{-1}(\boldsymbol{k}_i)\nonumber\\ &&\hspace{-0.2 truecm}\times\frac{1}{3} \Big[ (3 A_1A_2 -\boldsymbol{B}_1\cdot\boldsymbol{B}_2 )\tilde A_3 + 2 (A_1 B_{2,x}+A_2 B_{1,x})\tilde B_{3,x}\nonumber\\ & & \quad + 2(A_1 B_{2,y}+A_2 B_{1,y})\tilde B_{3,y} +2(A_1 B_{2,z}+A_2 B_{1,z})\tilde B_{3,z}\Big], \label{eq:x11_tilde_noflip} \end{eqnarray} where the functions $A_i$ and $\boldsymbol{B}_i$ are defined in Eq.~(\ref{eq:def-ab}), and \begin{eqnarray} \tilde A_3&=& (m+ x_3 M_0)(k'^x_3 +i k'^y_3)- (m+ x'_3M'_0)(k^x_3+i k^y_3),\nn \\ \tilde B_{3,x} &=& -i(m+ x'_3M'_0) (m+ x_3 M_0)+i(k'^x_3+ik'^y_3)(k^x_3+ ik^y_3),\nn \\ \tilde B_{3,y} &=& (m+ x'_3M'_0) (m+ x_3 M_0)+(k'^x_3+ik'^y_3)(k^x_3+ ik^y_3),\nn \\ \tilde B_{3,z} &=& i(m+ x'_3M'_0)(k^x_3+ik^y_3) +i(m+ x_3 M_0)(k'^x_3+ik'^y_3). \end{eqnarray} \section{Results and discussion} \label{sect:results} The formalism described in the previous sections is applied in the following to a specific CQM, adopting a power-law form for the momentum-dependent part of the light-cone wave function, i.e.\, \bea \psi(\{x_i,\boldsymbol{ k}_{i\perp }\})= \frac{N'}{(M_0^2+\beta^2)^\gamma}, \label{eq:30} \eea with $N'$ a normalization factor. In Eq.~(\ref{eq:30}), the scale $\beta$, the parameter $\gamma$ for the power-law behaviour, and the quark mass $m$ are
taken from Ref.~\cite{Schlumpf:94a}, i.e., $\beta=0.607$ GeV, $\gamma=3.4$ and $m=0.267$ GeV. According to the analysis of Ref.~\cite{Schlumpf:94b}, these values lead to a very good description of many baryonic properties. The same parametrization of the momentum-dependent part of the LCWF in Eq.~(\ref{eq:30}) has also been successfully applied in recent works to the calculation of the electroweak properties of the nucleon~\cite{Pasquini:2007iz}, GPDs~\cite{Boffi:2002yy,Boffi:2003yj,Pasquini:2005dk,Pasquini:2006iv,Boffi:2007yc} and T-even TMDs~\cite{Pasquini:2008ax,Boffi:2009sh}. In order to fix the coupling constant appearing in Eqs.~(\ref{eq:sivers-overlap}) and (\ref{eq:bm-overlap}), we need to determine the hadronic scale of the model. This is achieved in a model-independent way, following the prescription of Ref.~\cite{Pasquini:2004gc}, by matching the value of the momentum fraction carried by the valence quarks, as computed in the model, with that obtained by evolving backward the value experimentally determined at large $Q^2$. The strong coupling constant $\alpha_s(Q^2)$ entering the evolution code at NLO is computed by numerically solving the NLO transcendental equation \be \ln {Q^2\over\Lambda_{\rm NLO}^2}-{4\,\pi\over\beta_0\,\alpha_s} + {\beta_1\over\beta_0^2}\,\ln\left[ {4\,\pi\over\beta_0\,\alpha_s} + {\beta_1\over\beta_0^2}\right] = 0\,, \label{0:10} \ee as obtained from the renormalization group analysis~\cite{Pasquini:2004gc,Weigl:1995hx}. It differs from the more familiar expression \begin{equation} {\alpha_s(Q^2) \over 4\pi}={1 \over \beta_0\ln(Q^2/\Lambda_{\rm NLO}^2)} \left(1-{\beta_1 \over \beta_0^2}\,{\ln\ln(Q^2/\Lambda_{\rm NLO}^2)\over \ln(Q^2/\Lambda_{\rm NLO}^2)}\right), \label{1:10} \end{equation} valid only in the limit $Q^2\gg\Lambda_{\rm NLO}^2$, where $\Lambda_{\rm NLO}$ is the so-called QCD scale parameter. The hadronic scale $\mu_0^2$ consistent with the presence of valence degrees of freedom only is $\mu_0^2 = 0.094$ GeV$^2$, with $\Lambda_{\rm NLO}= 0.248$ GeV. This corresponds to a value of the strong coupling constant in Eq.~(\ref{0:10}) of $\alpha_s(\mu^2_0)/(4\pi)=g^2/(4\pi)^2=0.14$, and is consistent with the analysis of Refs.~\cite{Courtoy:2009pc,Courtoy:2008vi,Courtoy:2008dn}, where a similar procedure was adopted. The first transverse-momentum moments of the Sivers and Boer-Mulders functions are shown in Figs.~\ref{fig2} and \ref{fig3}, using the definition \begin{eqnarray} j^{(1)}(x)= \int{\rm d}^2 \boldsymbol k_\perp\frac{\boldsymbol k^{\, 2}_\perp}{2M^2} j(x,\boldsymbol k^{\, 2}_\perp), \end{eqnarray} with $j=f_{1T}^{\perp\,q}$ and $j=h_1^{\perp}$, respectively. In the figures, the dashed curves correspond to the results at the hadronic scale of the model $\mu_0^2$, while the solid curves are obtained by applying NLO evolution to $Q^2=2.5$ GeV$^2$, assuming for the first transverse-momentum moment of the Sivers function the same anomalous dimension as the unpolarized parton distribution and for the first transverse-momentum moment of the Boer-Mulders function the evolution pattern of the chiral-odd transversity distribution.
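As a purely illustrative aside (not part of the original analysis), the numerical solution of Eq.~(\ref{0:10}) at the hadronic scale quoted above can be sketched as follows, assuming $n_f=3$ active flavours in $\beta_0$ and $\beta_1$:
\begin{verbatim}
# Illustrative sketch: solve the NLO equation (0:10) for alpha_s at the
# hadronic scale mu_0^2 = 0.094 GeV^2 with Lambda_NLO = 0.248 GeV.
# Assumption: n_f = 3 active flavours in beta_0 and beta_1.
import math
from scipy.optimize import brentq

nf = 3
beta0 = 11.0 - 2.0 * nf / 3.0      # = 9
beta1 = 102.0 - 38.0 * nf / 3.0    # = 64
Q2, Lam2 = 0.094, 0.248 ** 2       # GeV^2

def f(alpha_s):
    t = 4.0 * math.pi / (beta0 * alpha_s)
    return math.log(Q2 / Lam2) - t \
        + (beta1 / beta0 ** 2) * math.log(t + beta1 / beta0 ** 2)

alpha_s = brentq(f, 0.5, 5.0)      # bracket chosen by inspection
print(alpha_s / (4.0 * math.pi))   # ~0.14, the value quoted in the text
\end{verbatim}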
Although these are not the exact evolution patterns, this is the standard procedure adopted so far in model calculations~\cite{Courtoy:2009pc,Courtoy:2008vi,Courtoy:2008dn,Bacchetta:2008af} and parametrizations~\cite{Anselmino:2008sga,Collins:2005ie,Efremov:2004tp} of the T-odd TMDs, since the exact evolution equations are still under study~\cite{Ceccopieri:2005zz,Cherednikov:2007tw,Kang:2008ey,Zhou:2008mz,Vogelsang:2009pj,Braun:2009vc} and evolution codes for these distributions are not yet available. \begin{figure}[t] \begin{center} \epsfig{file=first_moment_sivers_ev_nlo_bis.eps, width=\columnwidth} \end{center} \caption{Results for the first transverse-momentum moment of the Sivers function, for up (left) and down (right) quark, as a function of $x$. The dashed curves show the results at the hadronic scale of the model $\mu_0^2=0.094$ GeV$^2$, and the solid curves correspond to the results after NLO evolution to $Q^2=2.5$ GeV$^2$, using the evolution pattern of the unpolarized parton distribution. The lighter and darker shaded areas are the uncertainty bands due to the statistical error of the parametrizations of Ref.~\cite{Anselmino:2008sga} and Refs.~\cite{Collins:2005ie,Efremov:2004tp}, respectively. Both parametrizations refer to an average scale of $Q^2=2.5$ GeV$^2$. } \label{fig2} \end{figure} For the Sivers function in Fig.~\ref{fig2} we also show the results from recent parametrizations, valid at an average scale of $Q^2= 2.5$ GeV$^2$, obtained from a fit to available experimental data on transverse single-spin asymmetries for pion and kaon production in semi-inclusive deep inelastic scattering. In particular, the darker shaded area represents the uncertainty due to the statistical errors in the parametrization of Ref.~\cite{Anselmino:2008sga}, while the lighter shaded area corresponds to the same for Refs.~\cite{Collins:2005ie,Efremov:2004tp}. The model predictions for the contribution of $u$ and $d$ quarks are of the same order of magnitude and opposite sign, and after evolution are well compatible with the phenomenological parametrizations. The effects of the evolution are crucial to reproduce the position of the peak at $x\approx 0.2$ for both the $u$ and $d$ quark distributions, and to rescale the magnitude of the distributions within the range of the parametrizations. A non-trivial constraint on model calculations of the Sivers function is given by the Burkardt sum rule~\cite{Burkardt:2004ur} \begin{eqnarray} \sum_{q=u,\, d,\, s,\, g,\cdots}\ \int{\rm d}x \,f_{1T}^{\perp \, (1)\,q}(x)=0\ , \label{burk-sr} \end{eqnarray} which corresponds to requiring that the net (summed over all partons) transverse momentum due to final-state interactions is zero~\cite{Burkardt:2004vm}. Restricting the sum in Eq.~(\ref{burk-sr}) to the up- and down-quark contributions, our model calculation of the Sivers function satisfies the sum rule exactly. In Fig.~\ref{fig3} we compare the model results for the absolute value of the Boer-Mulders function with phenomenological parametrizations obtained from recent fits to available experimental data.
In particular, the dashed-dotted curve corresponds to the analysis of Refs.~\cite{Barone:2008tn,Barone:2009hw} at the average scale of $Q^2=2.4$ GeV$^2$ of the HERMES~\cite{Giordano:2009hi} and COMPASS~\cite{Kafer:2008ud,Bressan:2009eu} measurements of the $\cos2\phi$ asymmetry in SIDIS, while the short-dashed curve shows the results of Refs.~\cite{Zhang:2008ez,Lu:2009ip} valid at $Q^2\approx 1$ GeV$^2$, obtained from a fit to $pd$~\cite{Zhu:2006gx} and $pp$~\cite{Zhu:2008sj} Drell-Yan data measured by the E866/NuSea Collaboration, with the shaded area describing the variation ranges allowed by positivity bounds. \begin{figure}[t] \begin{center} \epsfig{file=first_moment_bm_ev_nlo_bis.eps, width=\columnwidth} \end{center} \caption{Results for the first transverse-momentum moment of the Boer-Mulders function, for up (left) and down (right) quark, as a function of $x$. The dashed curves show the results at the hadronic scale of the model $\mu_0^2=0.094$ GeV$^2$ and the solid curves correspond to the results after NLO evolution to $Q^2=2.4$ GeV$^2$, using the evolution pattern of the transversity distribution. The dashed-dotted curves are the results of the phenomenological parametrization of Refs.~\cite{Barone:2008tn,Barone:2009hw} at the average scale of $Q^2=2.4$ GeV$^2$, and the short-dashed curves correspond to the results of Refs.~\cite{Zhang:2008ez,Lu:2009ip} valid at $Q^2\approx 1$ GeV$^2$, with the shaded area describing the variation ranges allowed by positivity bounds. } \label{fig3} \end{figure} We note that the available data do not yet allow a full fit of $h_1^\perp$ in both its $x$ and $\boldsymbol k^{\, 2}_\perp$ dependence, and these phenomenological parametrizations are only first attempts to extract information on this distribution. Upcoming SIDIS data, also from JLab, and planned Drell-Yan experiments at GSI will play a crucial role in better constraining these analyses. Our model predictions after the ``approximate'' evolution to $Q^2=2.4$ GeV$^2$ are compatible with the phenomenological analysis of SIDIS data, reproducing both the peak position and the behaviour in $x$, while they are at variance with the analysis of the Drell-Yan data. In particular, we confirm the findings of Ref.~\cite{Barone:2008tn} and the expectations from various theoretical analyses~\cite{Burkardt:2005hp,Burkardt:2007xm,Courtoy:2009pc,Gockeler:2006zu,Pobylitsa:2003ty,Gamberg:2007wm,Bacchetta:2008af}, predicting the same sign for both the up and down contributions, with the $u$ component of $h_1^\perp$ larger in magnitude than the corresponding component of $f_{1T}^\perp$, and the $d$ components of $h_1^\perp$ and $f_{1T}^\perp$ of approximately the same magnitude and opposite sign. \begin{figure}[t!] \epsfig{file=first_moment_sivers_bm_ang_mom.eps, width=0.7\columnwidth} \caption{ Angular momentum decomposition of the first $\boldsymbol k_\perp$ moment of the Sivers function for the up (left panel) and down (right panel) quark. The dashed curves show the contribution from the interference of $S$ and $P$ waves, and the dotted curves correspond to the contribution from the interference of $P$ and $D$ waves. The solid curves are the total results, sum of all the partial-wave contributions.} \label{fig4} \end{figure} In Fig.~\ref{fig4} we show the decomposition of the Sivers and Boer-Mulders functions into the contributions from the different partial-wave amplitudes of the nucleon LCWF.
The dashed curves correspond to the results from the interference of $S$ and $P$ waves, the dotted curves show the contribution from $P-D$ wave interference, and the solid curves are the total results, i.e., the sum of all the partial-wave contributions. \begin{figure} \epsfig{file=tmd_sivers_up.eps, width=0.45\columnwidth} \hspace{0.3 truecm} \epsfig{file=tmd_sivers_down.eps, width=0.45\columnwidth} \caption{ The Sivers function $f_{1T}^\perp$ as a function of $x$ and $\boldsymbol k^2_\perp$ for up (left panel) and down quark (right panel).} \label{fig5} \end{figure} \begin{figure} \epsfig{file=tmd_bm_up.eps, width=0.45\columnwidth} \hspace{0.3 truecm} \epsfig{file=tmd_bm_down.eps, width=0.45\columnwidth} \caption{ The Boer-Mulders function $h_1^\perp$ as a function of $x$ and $\boldsymbol k^2_\perp$ for up (left panel) and down quark (right panel).} \label{fig6} \end{figure} The $S-P$ wave interference terms give the dominant contribution to the Sivers function of both $u$ and $d$ quarks, while the $P-D$ wave interference terms contribute at most 20$\%$ of the total result. On the other hand, the relative weight of the $P-D$ wave interference terms increases in the case of the Boer-Mulders function: it amounts to $30\%$ of the total result for the up-quark distribution and becomes the dominant contribution in the case of the down quark, reaching up to $60\%$ of the total result. We also note that, contrary to the case of T-even TMDs~\cite{Pasquini:2008ax}, the assumption of SU(6) symmetry in the model does not imply any proportionality between the T-odd distributions of up and down quarks. As outlined in Ref.~\cite{Courtoy:2009pc}, this is due to the fact that in the case of the T-odd functions one is using a two-body operator associated with FSI, while for the T-even TMDs the proportionality results from the calculation with a one-body operator. In comparison with other model calculations, our light-cone model predictions are similar in shape to, but significantly different in magnitude from, those of the non-relativistic CQM of Refs.~\cite{Courtoy:2009pc,Courtoy:2008vi,Courtoy:2008dn}. The main differences with respect to this calculation can be traced back to the use of covariant quantization and non-relativistic wave functions. Furthermore, the quark-gluon interaction vertex is treated non-relativistically. Analogous discrepancies are evident in the comparison of our predictions with the results of the bag model~\cite{Yuan:2003wk,Courtoy:2008dn,Courtoy:2009pc}, although in this case the calculation is fully relativistic. Since the bag model uses covariant quantization, the spin structure is worked out in terms of canonical spin instead of light-cone helicity, and at the quark-gluon vertex one has contributions from both spin-flip and spin-conserving terms. However, these terms reduce to helicity-conserving terms ($\lambda_3=\lambda_4$ in the diagram of Fig.~\ref{fig1}) when written in terms of light-cone helicity, in agreement with our model calculation. For a more detailed discussion on the relation between the structure of TMDs in terms of canonical spin and light-cone helicity we refer to Ref.~\cite{cedric}. \newline \noindent Finally, with respect to the diquark models of Refs.~\cite{Gamberg:2007wm,Bacchetta:2008af} we find different magnitudes and shapes for both the Sivers and Boer-Mulders functions. The different magnitude might be due to the choice of different values for the quark-gluon coupling constant as well as to the absence of D-wave components in these calculations.
Note, however, that our results are also at variance with the diquark-model calculations for the relative magnitude of the up- and down-quark contributions. The dependence on $x$ and $\boldsymbol k^{\, 2}_\perp$ of the Sivers and Boer-Mulders functions is shown in Figs.~\ref{fig5} and~\ref{fig6}, respectively, for the separate up (left) and down (right) quark contributions. The behaviour in $\boldsymbol k^2_\perp$ is very similar for the two distributions, and does not depend on the quark flavour. The $\boldsymbol k^{\, 2}_\perp$-dependence shown in Figs.~\ref{fig5} and \ref{fig6} is definitely not of Gaussian form. However, following the exercise performed in Ref.~\cite{Boffi:2009sh} for the T-even distributions, it is interesting to compare the model predictions for the mean square transverse momenta with the results of the Gaussian model. We define the mean transverse momenta ($n=1$) and the mean square transverse momenta ($n=2$) for the TMD $j(x,\boldsymbol k^{\, 2}_\perp)$ as follows \be\label{Eq:define-mean-pT} \langle k_{\perp,j}^n\rangle = \frac{\int{\rm d} x\int{\rm d}^2 \boldsymbol k_\perp \;k_\perp^n\,j(x,\boldsymbol k^{\, 2}_\perp)} {\int{\rm d} x\int{\rm d}^2\boldsymbol k_\perp \;j(x,\boldsymbol k^{\, 2}_\perp)} \;, \ee where $k_\perp=|\boldsymbol k_\perp|$. The corresponding results for the T-odd distributions are shown in Table~I. In the Gaussian model the following relation holds \be\label{Eq:Gauss-relation-pT-pT2} \la k_\perp^2\ra \stackrel{\rm Gauss}{=} \frac{4}{\pi}\,\la k_\perp\ra^2\;, \ee which implies that the ratio shown in the last column of Table~I should be equal to one. The model results deviate from unity by about 10$\%$. We also note that the mean transverse momenta in Table~I are quite small, much smaller than expected from phenomenological studies. This is due to the low scale of the model, and Sudakov effects are expected to broaden the $\boldsymbol k^{\, 2}_\perp$ distributions when evolving to larger, experimentally relevant scales. \begin{table}[t] \begin{tabular}{c|cc|cc|cc} \ \hspace{2cm} \ & \ \hspace{2cm} \ & \ \hspace{2cm} \ & \ \hspace{2cm} \ & \ \hspace{2cm} \ & \ \hspace{2cm} \ & \ \hspace{2cm} \ \cr TMD $j$ & \multicolumn{2}{c|}{$\begin{array}{l}\langle k_\perp\rangle \, {\rm in \; GeV} \end{array}$} & \multicolumn{2}{c|}{$\begin{array}{l}\la k_\perp^2\ra \, {\rm in \; GeV}^2 \end{array}$} & \multicolumn{2}{c} { $\displaystyle\frac{4\la k_\perp\ra^2}{\pi\la k_\perp^2\ra}$ }\cr & & & & & &\cr \hline \hline & up & down &up &down &up &down\\ $f_{1T}^\perp$&0.22&0.24&0.071&0.084&0.90&0.90\\ $h_{1}^\perp$&0.23&0.24&0.077&0.080&0.90&0.91 \end{tabular} \label{exper} \caption{ The mean transverse momenta and the mean square transverse momenta of T-odd TMDs, as defined in Eq.~(\ref{Eq:define-mean-pT}), from the light-cone CQM.
If the transverse momenta in the TMDs were Gaussian, then the ratio in the last column would be equal to unity, see text.} \end{table} In Fig.~\ref{fig7}, we show the spin density in the transverse-momentum space of unpolarized up (left panel) and down (right panel) quarks in a transversely polarized nucleon, defined as \begin{eqnarray} \rho^q_{f_{1T}^\perp}(\boldsymbol k_\perp)= \int {\rm d} x \frac{1}{2}\left[f^q_1(x,\boldsymbol k^{\, 2}_\perp) +S^i_\perp\epsilon^{ij}k^j\frac{1}{M}f_{1T}^{\perp\,q}(x,\boldsymbol k^{\,2}_\perp) \right] \end{eqnarray} with $\boldsymbol S_\perp$ the nucleon transverse-polarization vector, and $f^q_1(x,\boldsymbol k_\perp^{\, 2})$ the monopole distribution corresponding to spin densities for unpolarized quarks in an unpolarized target. When $\boldsymbol S_\perp$ points in the $\hat x$ direction, the dipole contribution related to the Sivers function introduces a large distortion of the monopole term, perpendicular to both the spin and the momentum of the proton, with opposite sign for up and down quarks. The corresponding average transverse-momentum shift is defined as \begin{eqnarray} \langle k^y\rangle^q_{f_{1T}^\perp} =\frac{\int{\rm d}^2\boldsymbol k_\perp k^y \rho^q_{f_{1T}^\perp}(\boldsymbol k_\perp)}{\int{\rm d}^2\boldsymbol k_\perp \rho^q_{f_{1T}^\perp}(\boldsymbol k_\perp)} \end{eqnarray} and is given by \begin{eqnarray} \langle k^y\rangle^u_{f_{1T}^\perp}=\frac{M}{2} \int{\rm d}x f_{1T}^{(1)\perp\, u}(x)=-70.31\, \mbox{MeV}, \quad\langle k^y\rangle^d_{f_{1T}^\perp}=M \int{\rm d}x f_{1T}^{(1)\perp\, d}(x)=140.62 \, \mbox{MeV}.\nonumber\\ \end{eqnarray} The fact that the absolute value of the average transverse momentum induced by the Sivers function is twice as large for the $d$ quark as for the $u$ quark is just a consequence of the Burkardt sum rule in Eq.~(\ref{burk-sr}). This intrinsic $\boldsymbol k_\perp$ shift is the analogue of the dipole deformation related to the GPD $E$ in impact-parameter space~\cite{Meissner:2009ww,Diehl:2005jf}. Although it is not possible to establish a model-independent relation between the Sivers function and the GPD $E$~\cite{Meissner:2009ww}, we note that the LCWF overlap representation of $E$, for vanishing longitudinal momentum transfer, is given in terms of the same combinations of light-cone amplitudes parametrizing the Sivers function in the one-gluon-exchange approximation, but evaluated for different values of the quark variables~\cite{Ji:2002xn,Boffi:2002yy}. The average shifts in impact-parameter space within the present light-cone quark model were found to be $\langle b^y\rangle^u=\kappa^u/(2M)=0.20\, \mbox{fm}$ and $\langle b^y\rangle^d=\kappa^d/(M)=-0.33 \, \mbox{fm}$~\cite{Pasquini:2007xz}, where $\kappa^q$ is the quark contribution to the proton anomalous magnetic moment. \begin{figure}[t] \begin{center} \epsfig{file=density_f1_sivers_up_bw.eps, width=0.45\columnwidth} \hspace{0.3 truecm} \epsfig{file=density_f1_sivers_down_bw.eps, width=0.45\columnwidth} \vspace{-2 truecm} \end{center} \caption{ Spin density in the transverse-momentum plane for unpolarized quarks in a transversely polarized nucleon.
The left panel is for the up quark, and the right panel for the down quark.} \label{fig7} \end{figure} Analogously, the spin density of transversely polarized quarks in an unpolarized nucleon is related to the Boer-Mulders effect by \begin{eqnarray} \rho^q_{h_{1}^\perp}(\boldsymbol k_\perp,\boldsymbol s_\perp)= \int {\rm d} x \frac{1}{2}\left[f_1^q(x,\boldsymbol k^{\, 2}_\perp) +s^i\epsilon^{ij}k^j\frac{1}{M}h_1^{\perp\,q}(x,\boldsymbol k_\perp^{\, 2})\right], \end{eqnarray} where $\boldsymbol s_\perp$ is the quark transverse-polarization vector. \begin{figure}[t] \begin{center} \epsfig{file=density_f1_bm_up_bw.eps, width=0.45\columnwidth} \hspace{0.3 truecm} \epsfig{file=density_f1_bm_down_bw.eps, width=0.45\columnwidth} \vspace{-2 truecm} \end{center} \caption{ Spin density in the transverse-momentum plane for transversely polarized quarks in an unpolarized nucleon. The left panel is for the up quark, and the right panel for the down quark.} \label{fig8} \end{figure} In Fig.~\ref{fig8} we show the spin density for quark polarization in the $\hat x$ direction. Since the Boer-Mulders function is negative for both up and down quarks, the sideways shift is always in the positive $\hat y$ direction. The corresponding average dipole distortion is \begin{eqnarray} \langle k^y\rangle^u_{h_{1}^\perp} =\frac{M}{2}\int{\rm d}x h_{1}^{(1)\perp\, u}(x)=-159.40 \, \mbox{MeV},\quad \langle k^y\rangle^d_{h_{1}^\perp} =M \int{\rm d}x h_{1}^{(1)\perp\, d}(x)=-215.73 \, \mbox{MeV}. \nonumber\\ \end{eqnarray} Although the Boer-Mulders function is smaller in magnitude for the down quark than for the up quark, the average sideways distortion for the down quark is stronger. This is because the monopole distribution related to $f_1^q$ is twice as large for up quarks as for down quarks; therefore, adding the dipole contribution results in a more pronounced distortion for down quarks than for up quarks. The corresponding dipole distribution in impact-parameter space is described by the chiral-odd GPDs $E_T+2\tilde H_T$. As found in Ref.~\cite{Pasquini:2005dk}, these GPDs for zero longitudinal momentum transfer are given by the same combination of LCWFs which enters the calculation of $h_{1}^\perp$ in the one-gluon-exchange approximation, but at different kinematics. The corresponding average distortion in impact-parameter space is proportional to the tensor anomalous magnetic moment $\kappa^q_T$, and in the present light-cone quark model is given by $\kappa^u_T/(2M)=0.42 \,\mbox{fm}$ and $\kappa^d_T/(M)=0.55 \,\mbox{fm}$ for up and down quarks, respectively~\cite{Pasquini:2007xz}. \section{Conclusions} In this paper we have investigated the naive-time-reversal-odd quark distributions, the quark Sivers and Boer-Mulders functions, in a light-cone quark model. The final-state interaction effects are calculated by approximating the gauge-link operator with a one-gluon-exchange interaction. In this framework, we have derived the general formalism for the T-odd quark distributions in terms of overlaps of light-cone wave-function amplitudes describing the different orbital angular momentum components of the nucleon state. These model-independent expressions are particularly suitable for emphasizing the correlations between the quark transverse momentum and the transverse polarizations of the nucleon and of the quark. For the numerical estimates, the nucleon light-cone wave function has been constructed by assuming a light-cone constituent quark model with SU(6) spin-flavor symmetry and a spherically symmetric momentum-dependent part.
Under this assumption, the orbital angular momentum content of the wave function is fully generated by the Melosh rotations which boost the rest-frame spin to the light cone. As a result, we found explicit expressions for the light-cone amplitudes which match the analytic structure expected from model-independent arguments~\cite{Ji:2002xn,Burkardt:2002uc,Ji:2003yj}. The model dependence enters through the choice of the momentum-dependent part of the light-cone wave function. In this work, we adopted a phenomenological description, assuming a specific functional form with parameters fitted to hadronic structure constants. The same wave function was used to predict many other hadronic properties, providing a good description of the available experimental data and capturing the main features of the quark contribution to the hadronic structure, such as parton distributions~\cite{Pasquini:2006iv}, generalized parton distributions~\cite{Boffi:2002yy,Boffi:2003yj,Pasquini:2005dk,Pasquini:2007xz}, nucleon form factors~\cite{Pasquini:2007iz}, distribution amplitudes~\cite{Pasquini:2009ki}, and T-even transverse-momentum-dependent quark distributions~\cite{Pasquini:2008ax,Boffi:2009sh}. \newline \noindent The corresponding results for the Sivers and Boer-Mulders functions have been presented in this paper by showing the decomposition into the contributions from different orbital angular momentum components. Both functions require a transfer of one unit of orbital angular momentum between the initial and final states. In particular, the Sivers function for both up and down quarks is dominated by the interference of $S$- and $P$-wave components, while the $P-D$ wave interference terms contribute at most $20\%$ of the total result. On the other hand, the relative weight of the $P-D$ wave interference terms increases in the case of the Boer-Mulders function, in particular for the down-quark component. Furthermore, the model results for the Sivers function satisfy exactly the so-called Burkardt sum rule, which is a non-trivial constraint for model calculations and parametrizations. In order to compare with phenomenological parametrizations obtained from a fit to available experimental data for semi-inclusive deep inelastic scattering and Drell-Yan processes, we evolved the model results to the experimental scale. Since the exact evolution equations for the T-odd quark distributions are still under study, we used those evolution equations which seem best suited to simulate the correct evolution. We evolved the first transverse-momentum moment of the Sivers function by means of the evolution pattern of the unpolarized parton distribution, while for the first transverse-momentum moment of the Boer-Mulders function we used the evolution pattern of the transversity distribution. After evolution, the model results are consistent with the available parametrizations, especially for the Sivers function. There is agreement in the signs of the various flavor components, and also in the magnitude and the position of the maxima in $x$. These findings encourage further phenomenological applications of the model to describe azimuthal asymmetries in hadronic reactions. We also found that the $x$ and $\boldsymbol k_\perp^2$ dependence is similar for the Sivers and Boer-Mulders functions, and approximately independent of the quark flavor. In particular, the $\boldsymbol k_\perp^2$ dependence is not of Gaussian form.
However, it is worthwhile to evaluate the degree of approximation introduced by the Gaussian Ansatz within the model in the calculation of observables. This task is left for future applications of the model. Finally, we discussed the spin densities in the transverse-momentum space related to the Sivers and Boer-Mulders effects, showing that they are consistent with the model results for the corresponding spin densities in the impact-parameter space described by generalized parton distributions. \acknowledgments B.P. is grateful to A. Bacchetta, F. Conti, A. Courtoy and M. Radici for discussions, and to the Nuclear Science Division of Lawrence Berkeley National Laboratory, where this work was initiated, for hospitality. This work was supported in part by the Research Infrastructure Integrating Activity ``Study of Strongly Interacting Matter'' (acronym HadronPhysics2, Grant Agreement n. 227431) under the Seventh Framework Programme of the European Community, by the Italian MIUR through the PRIN 2008EKLACK ``Structure of the nucleon: transverse momentum, transverse spin and orbital angular momentum'', and by the U.S. Department of Energy under contracts DE-AC02-05CH11231 and DE-AC02-76SF00515. We are grateful to RIKEN, Brookhaven National Laboratory and the U.S. Department of Energy (contract number DE-AC02-98CH10886) for providing the facilities essential for the completion of this work. \clearpage
\section{Abstract} Based on waveform data from a profile of aftershocks following the north-south trace of the June 28, 1992 Landers rupture across the Mojave desert, we construct a new velocity model for the Mojave region which features a thin, slow crust. Using this model, we obtain source parameters, including depth and duration, for each of the aftershocks in the profile, and in addition, any significant ($M>3.7$) Joshua Tree--Landers aftershock between April, 1992 and October, 1994 for which coherent TERRAscope data were available. In all, we determine source parameters and stress-drops for 45 significant ($M_w > 4$) earthquakes associated with the Joshua Tree and Landers sequences, using a waveform grid--search algorithm. Stress drops for these earthquakes appear to vary systematically with location, with respect to previous seismic activity, proximity to previous rupture (i.e., with respect to the Landers rupture), and with tectonic province. In general, for areas north of the Pinto Mountain fault, stress-drops of aftershocks located off the faults involved with the Landers rupture are higher than those located on the fault, with the exception of aftershocks on the newly recognized Kickapoo (Landers) fault. Stress drops are moderate south of the Pinto Mountain fault, where there is a history of seismic swarms but no single through-going fault. In contrast to aftershocks in the eastern Transverse ranges, and related to the 1992 Big Bear, California, sequence,
Landers events show no clear relationship between stress--drop and depth. Instead, higher stress--drop aftershocks appear to correlate with activity on nascent faults, or those which experienced relatively small slip during mainshock rupture. \section{Introduction} The stress-drop, style, depth, and timing of aftershock activity relative to the mainshock rupture plane or fault trace yield clues about how the regional `stress budget' is settled following a large earthquake. Aftershock stress-drops vary with source area and tectonic environment [Lindley and Archuleta, 1992], reflecting regional differences in the source properties of small earthquakes. The $M_w7.3$ Landers earthquake of 11:58 GMT, June 28, 1992, was preceded by the April 23, 1992, Joshua Tree mainshock ($M_w6.1$), which is now considered a precursory event [Stein et al., 1994] with its own substantial fore- and aftershock sequence. The Landers event was followed by tens of thousands of aftershocks [Kanamori et al., 1992; Hauksson et al., 1993; Sieh et al., 1993], many in areas with no surface rupture [e.g., the Big Bear region, see Figure 1]. Stress-drops and source parameters of Joshua Tree--Landers aftershocks provide information critical to understanding fault kinematics in the Eastern California Shear Zone (ECSZ), which encompasses the Landers rupture area and may
methods and understand where the proof of expressiveness of Transformers breaks down when full attention is relaxed to form the proposed attention pattern. This understanding helped us develop \textsc{BigBird}\xspace, which is theoretically as expressive and also empirically useful. In particular, our \textsc{BigBird}\xspace consists of three main parts: \begin{itemize}[leftmargin=6mm, itemsep=0mm, partopsep=0pt,parsep=0pt] \item A set of $g$ global tokens attending to all parts of the sequence. \item All tokens attending to a set of $w$ local neighboring tokens. \item All tokens attending to a set of $r$ random tokens. \end{itemize} This leads to a high-performing attention mechanism that scales to much longer sequence lengths (8x). To summarize, our main \textbf{contributions} are: \begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt] \item \textsc{BigBird}\xspace satisfies all the known theoretical properties of the full transformer (\Cref{sec:theory}). In particular, we show that adding extra tokens allows one to express all continuous sequence-to-sequence functions with only $O(n)$ inner products. Furthermore, we show that, under standard assumptions regarding precision, \textsc{BigBird}\xspace is Turing complete. \item Empirically, we show that the extended context modelled by \textsc{BigBird}\xspace benefits a variety of NLP tasks. We achieve \emph{state-of-the-art} results for question answering and document summarization on a number of different datasets. A summary of these results is presented in \Cref{sec:expt-nlp}. \item Lastly, we introduce a novel application of attention-based models where long contexts are beneficial: extracting contextual representations of genomic sequences like DNA. With longer masked LM pretraining, \textsc{BigBird}\xspace improves performance on downstream tasks such as promoter-region and chromatin-profile prediction (\Cref{sec:expt-bio}). \end{enumerate} \subsection{Related Work} There have been a number of interesting attempts aimed at alleviating the quadratic dependency of Transformers, which can be broadly categorized into two directions. The first line of work embraces the length limitation and develops methods around it. The simplest methods in this category just employ a sliding window~\citep{wang2019multi}, but in general most work fits the following paradigm: use some other mechanism to select a smaller subset of relevant contexts to feed into the transformer and optionally iterate, i.e., call the transformer block multiple times with different contexts each time. Most prominently, SpanBERT~\citep{joshi2020spanbert}, ORQA~\citep{lee2019latent}, REALM~\citep{guu2020realm}, and RAG~\citep{lewis2020retrieval} have achieved strong performance on different tasks. However, it is worth noting that these methods often require significant engineering effort (like backpropagation through large-scale nearest-neighbor search) and are hard to train. The second line of work questions whether full attention is essential and tries to come up with approaches that do not require full attention, thereby reducing the memory and computation requirements. Prominently,~\citet{dai2019transformer, sukhbaatar2019adaptive, rae2019compressive} have proposed auto-regressive models that work well for left-to-right language modeling but suffer in tasks which require bidirectional context.
\citet{child2019generating} proposed a sparse model that reduces the complexity to $O(n\sqrt{n})$, and \citet{kitaev2019reformer} further reduced the complexity to $O(n\log(n))$ by using LSH to compute nearest neighbors. \citet{ye2019bp} proposed binary partitions of the data, whereas \citet{qiu2019blockwise} reduced complexity by using block sparsity. Recently, Longformer~\cite{beltagy2020longformer} introduced a localized sliding-window-based mask with a few globally attending tokens to reduce computation and extended BERT to longer-sequence tasks. Finally, our work is closely related to and builds on the work of Extended Transformers Construction~\citep{ainslie2020etc}, which was designed to encode structure in text for transformers. The idea of global tokens was used extensively by them to achieve their goals. Our theoretical work can be seen as providing a justification for the success of these models as well. It is important to note that most of the aforementioned methods are heuristic-based and empirically are not as versatile and robust as the original transformer, i.e., the same architecture does not attain SoTA on multiple standard benchmarks. (The one exception is Longformer, which we include in all our comparisons; see~\Cref{sec:app-related-work} for a more detailed comparison.) Moreover, these approximations do not come with theoretical guarantees. \section{\textsc{BigBird}\xspace Architecture} \label{sec:arch} In this section, we describe the \textsc{BigBird}\xspace model using the \emph{generalized attention mechanism} that is used in each layer of a transformer operating on an input sequence ${\bm{X}} = ({\bm{x}}_1, ..., {\bm{x}}_n) \in \mathbb{R}^{n\times d}$. The \emph{generalized attention mechanism} is described by a directed graph $D$ whose vertex set is $[n] = \set{1,\dots,n}$. The set of arcs (directed edges) represents the set of inner products that the attention mechanism will consider. Let $N(i)$ denote the set of out-neighbors of node $i$ in $D$; then the $i^\text{th}$ output vector of the generalized attention mechanism is defined as \useshortskip \vspace{-1mm} \begin{equation}\vspace{-2mm} \small \ensuremath{\textsc{Attn}\xspace}_D({\bm{X}})_i = {\bm{x}}_i + \sum_{h=1}^H \sigma \left(Q_h({\bm{x}}_i) K_h({\bm{X}}_{N(i)})^T \right) \cdot V_h({\bm{X}}_{N(i)}) \label{AT} \tag{AT} \end{equation} where $Q_h, K_h:\mathbb{R}^d \to \mathbb{R}^m$ are query and key functions, respectively, $V_h:\mathbb{R}^d \to \mathbb{R}^d$ is a value function, $\sigma$ is a scoring function (e.g., softmax or hardmax), and $H$ denotes the number of heads. Also note that ${\bm{X}}_{N(i)}$ corresponds to the matrix formed by stacking only $\{{\bm{x}}_j : j\in N(i) \}$ and not all the inputs. If $D$ is the complete digraph, we recover the full quadratic attention mechanism of \citet{vaswani2017attention}. To simplify our exposition, we will operate on the adjacency matrix $A$ of the graph $D$ even though the underlying graph may be sparse. To elaborate, $A \in [0, 1]^{n \times n}$ with $A(i,j)=1$ if query $i$ attends to key $j$ and is zero otherwise. For example, when $A$ is the ones matrix (as in BERT), it leads to quadratic complexity, since all tokens attend to every other token. This view of self-attention as a fully connected graph allows us to exploit existing graph theory to help reduce its complexity. The problem of reducing the quadratic complexity of self-attention can now be seen as a \emph{graph sparsification problem}.
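As a concrete illustration of Eq.~(\ref{AT}) for a single head with softmax scoring, the following NumPy sketch applies the adjacency matrix $A$ as a mask on the score matrix. It is illustrative only: it materializes the full $n\times n$ score matrix, so it does not realize the memory savings of the actual blocked implementation, and the $1/\sqrt{m}$ scaling is the usual convention rather than part of Eq.~(\ref{AT}).
\begin{verbatim}
# Minimal single-head sketch of Eq. (AT) with a 0/1 adjacency matrix A.
import numpy as np

def generalized_attention(X, A, Wq, Wk, Wv):
    # X: (n, d) inputs; A: (n, n), A[i, j] = 1 if query i attends to key j
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])   # conventional scaling
    scores = np.where(A > 0, scores, -1e9)      # keep only the edges of D
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)       # softmax over N(i)
    return X + w @ V                            # residual term as in Eq. (AT)

rng = np.random.default_rng(0)
n, d, m = 8, 16, 16
X = rng.normal(size=(n, d))
A = np.eye(n, dtype=int)                        # toy mask: self-attention only
out = generalized_attention(X, A,
                            rng.normal(size=(d, m)),
                            rng.normal(size=(d, m)),
                            rng.normal(size=(d, d)))
\end{verbatim}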
It is well-known that random graphs are expanders and can approximate complete graphs in a number of different contexts, including in their spectral properties~\citep{spielman2011spectral,hoory2006expander}. We believe that a sparse random graph for an attention mechanism should have two desiderata: small average path length between nodes and a notion of locality, each of which we discuss below.
Let us consider the simplest random graph construction, known as the Erd\H{o}s-R\'enyi model, where each edge is independently chosen with a fixed probability. In such a random graph with just $\tilde{\Theta}(n)$ edges, the shortest path between any two nodes is logarithmic in the number of nodes~\citep{chung2002average, katzav2018distribution}. As a consequence, such a random graph approximates the complete graph spectrally and its second eigenvalue (of the adjacency matrix) is quite far from the first eigenvalue~\citep{benaych2019largest,benaych2020spectral,alt2019extremal}. This property leads to a rapid mixing time for random walks in the graph, which informally suggests that information can flow fast between any pair of nodes. Thus, we propose a sparse attention where each query attends over $r$ randomly chosen keys, i.e. $A(i, \cdot ) = 1$ for $r$ randomly chosen keys (see~\Cref{fig:rnd_atn}).
The second viewpoint which inspired the creation of \textsc{BigBird}\xspace is that most contexts within NLP and computational biology have data which displays a great deal of \emph{locality of reference}: a great deal of information about a token can be derived from its neighboring tokens. Most pertinently, \citet{clark2019does} investigated self-attention models in NLP tasks and concluded that neighboring inner-products are extremely important. The concept of locality, proximity of tokens in linguistic structure, also forms the basis of various linguistic theories such as transformational-generative grammar. In the terminology of graph theory, the clustering coefficient is a measure of locality of connectivity, and is high when the graph contains many cliques or near-cliques (subgraphs that are almost fully interconnected). Simple Erd\H{o}s-R\'enyi random graphs do not have a high clustering coefficient~\citep{sussman2017clusteringcoeff}, but a class of random graphs, known as small world graphs, exhibit a high clustering coefficient~\citep{watts1998collective}. A particular model introduced by~\citet{watts1998collective} is of high relevance to us as it achieves a good balance between average shortest path and the notion of locality. The generative process of their model is as follows: construct a regular ring lattice, a graph with $n$ nodes each connected to $w$ neighbors, $w/2$ on each side.
\begin{wraptable}{r}{64mm}
\vspace{-3mm}
\centering
\small
\begin{tabular}{@{}lrrr@{}}
\toprule
Model & MLM & SQuAD & MNLI \\
\midrule
BERT-base & 64.2 & 88.5 & 83.4 \\
Random (R) & 60.1 & 83.0 & 80.2 \\
Window (W) & 58.3 & 76.4 & 73.1 \\
R + W & 62.7 & 85.1 & 80.5 \\
\bottomrule
\end{tabular}
\caption{Building block comparison @512}
\label{tab:init}
\vspace{-2mm}
\end{wraptable}
In other words, we begin with a sliding window on the nodes. Then a random subset ($k\%$) of all connections is replaced with random connections, while the other $(100-k)\%$ local connections are retained. However, deleting local connections in this way might be inefficient on modern hardware, so we retain all of them and add the random connections on top, which does not affect the desired properties.
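As a small self-contained check of the path-length intuition above (our sketch, using a hypothetical toy size and plain NumPy), the following snippet builds the random attention pattern $A(i,\cdot)=1$ for $r$ randomly chosen keys and measures the average number of hops between positions via breadth-first search:
\begin{verbatim}
import numpy as np
from collections import deque

def random_attention_mask(n, r, seed=0):
    # A(i, j) = 1 for r randomly chosen keys j per query i (plus self).
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, rng.choice(n, size=r, replace=False)] = 1
        A[i, i] = 1
    return A

def average_hops(A):
    # Average BFS distance over all reachable ordered pairs of nodes.
    n = A.shape[0]
    total = pairs = 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / max(pairs, 1)

A = random_attention_mask(n=512, r=3)
print(average_hops(A))   # typically a handful of hops, roughly log(n)
\end{verbatim}
With $r$ as small as 3, a few hops typically suffice to connect any pair of positions, consistent with the logarithmic path lengths of Erd\H{o}s-R\'enyi graphs.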
In summary, to capture these local structures in the context, in \textsc{BigBird}\xspace we define a sliding window attention of width $w$, so that during self attention the query at location $i$ attends to keys from $i-w/2$ to $i+w/2$. In our notation, $A(i, i-w/2:i+w/2) = 1$ (see \Cref{fig:wnd:atn}).
As an initial sanity check, we performed basic experiments to test whether these intuitions are sufficient to get performance close to BERT-like models, while keeping attention linear in the number of tokens. We found that random blocks and local windows were insufficient to capture all the context necessary to compete with the performance of BERT (\Cref{tab:init}).
The final piece of \textsc{BigBird}\xspace is inspired by our theoretical analysis (\Cref{sec:theory}) and is critical for empirical performance. More specifically, our theory utilizes the importance of ``global tokens'': tokens that attend to all tokens in the sequence and to which all tokens attend (see \Cref{fig:gbl_atn}). These global tokens can be defined in two ways:
\begin{itemize}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt]
\item \textsc{BigBird}\xspace-\textsc{itc}: In internal transformer construction (\textsc{itc}), we make some existing tokens ``global'', which attend over the entire sequence. Concretely, we choose a subset $G$ of indices (with $g:=|G|$), such that $A(i, :) = 1$ and $A(:,i) =1$ for all $i \in G$.
\item \textsc{BigBird}\xspace-\textsc{etc}: In extended transformer construction (\textsc{etc}), we include additional ``global'' tokens such as CLS. Concretely, we add $g$ global tokens that attend to all existing tokens. In our notation, this corresponds to creating a new matrix $B \in [0, 1]^{(N+g)\times (N+g)}$ by adding $g$ rows to matrix $A$, such that $B(i,:) = 1$ and $B(:, i) =1$ for all $i \in \{1,2, \ldots g\}$, and $B(g+i, g+j) = A(i, j)$ for all $i,j \in \{1, \ldots, N \}$. This adds extra locations to store context and, as we will see in the experiments, improves performance.
\end{itemize}
The final attention mechanism for \textsc{BigBird}\xspace (\Cref{fig:bigb_atn}) has all three of these properties: each query attends to $r$ random keys, each query attends to $w/2$ tokens to the left of its location and $w/2$ tokens to the right of its location, and the model contains $g$ global tokens (the global tokens can be existing tokens or extra added tokens). We provide implementation details in~\Cref{sec:apndx-impl}.
\section{Theoretical Results about Sparse Attention Mechanism}
\label{sec:theory}
In this section, we will show that sparse attention mechanisms are as powerful and expressive as full-attention mechanisms in two respects. First, we show that when sparse attention mechanisms are used in a standalone encoder (such as BERT), they are Universal Approximators of sequence to sequence functions in the style of \citet{Yun19}. We note that this property was also explored theoretically in contemporary work~\citep{yun2020on}. Second, unlike~\citep{yun2020on}, we further show that sparse encoder-decoder transformers are Turing Complete (assuming the same conditions defined in~\citep{Perez19}). Complementing the above positive results, we also show that moving to a sparse-attention mechanism incurs a cost, i.e.~there is no free lunch. In~\Cref{sec:limit}, we show lower bounds by exhibiting a natural task where any sufficiently sparse mechanism will require polynomially more layers.
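Before turning to the theoretical analysis, the following token-level sketch (our illustration with hypothetical toy sizes; the actual implementation is blocked for efficiency, see \Cref{sec:apndx-impl}) shows how the three components of \Cref{sec:arch} combine into a single \textsc{BigBird}\xspace-\textsc{itc} adjacency matrix:
\begin{verbatim}
import numpy as np

def bigbird_itc_mask(n, w, r, global_idx, seed=0):
    # Token-level BigBird-ITC pattern: sliding window of width w,
    # r random keys per query, and global tokens that attend to and
    # are attended by every position.
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        A[i, lo:hi] = 1                                 # local window
        A[i, rng.choice(n, size=r, replace=False)] = 1  # random keys
    for g in global_idx:                                # global tokens
        A[g, :] = 1
        A[:, g] = 1
    return A

A = bigbird_itc_mask(n=64, w=5, r=3, global_idx=[0, 1])
print(A.sum(), "inner products instead of", A.size)
\end{verbatim}
The number of nonzero entries, and hence the number of inner products in \Cref{AT}, grows as $O(n(w+r+g))$ rather than $O(n^2)$.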
\subsection{Notation}
The complete Transformer {\em encoder} stack is nothing but the repeated application of a single-layer encoder (with independent parameters). We denote the class of such Transformer encoder stacks, defined using the generalized encoder (\Cref{sec:arch}), by $\mathcal{T}_{D}^{H,m,q}$, which consists of $H$ heads with head size $m$, where $q$ is the hidden layer size of the output network, and the attention layer is defined by the directed graph $D$.
The key difference between our proposed attention mechanism and that of \citet{vaswani2017attention,Yun19} is that we add a special token at the beginning of each sequence and assign it a special vector. We will refer to this as ${\bm{x}}_0$. Therefore our graph $D$ will have vertex set $\set{0} \cup [n] = \set{0,1,2,\dots,n}$. We will assume that this extra node and its respective vector will be dropped at the final output layer of the transformer. To avoid cumbersome notation, we will still treat the transformer as mapping sequences ${\bm{X}} \in \mathbb{R}^{n \times d}$ to $\mathbb{R}^{n \times d}$. We will also allow the transformer to append position embeddings $E \in \mathbb{R}^{d\times n}$ to matrix $X$ in the input layer.
Finally, we need to define the function class and distance measure for proving the universal approximation property. Let $\mathcal{F}_{CD}$ denote the set of functions $f: [0,1]^{n \times d} \to \mathbb{R}^{n \times d} $ which are continuous with respect to the topology defined by the $\ell_p$ norm. Recall that for any $p \geq 1$, the $\ell_p$ distance is $d_p(f_1,f_2) = \left(\int \norm{f_1(X) - f_2(X)}_p^p dX \right)^{1/p}$.
\subsection{Universal Approximators}
\begin{definition}
The star-graph $S$ centered at $0$ is the graph defined on $\set{0,\dots, n}$. The neighborhood of all vertices $i$ is $N(i) = \{0,i\} $ for $i \in \{1\dots n\}$ and $N(0) = \{1,\dots n\}$.
\end{definition}
Our main theorem is that the sparse attention mechanism defined by any graph containing $S$ is a universal approximator:
\begin{theorem}
\label{thm:universal}
Given $1 < p < \infty$ and $\epsilon > 0 $, for any $f \in \mathcal{F}_{CD}$, there exists a transformer with sparse-attention, $g \in \mathcal{T}_D^{H,m,q}$, such that $d_p(f,g)\leq \epsilon$ where $D$ is any graph containing the star graph $S$.
\end{theorem}
To prove the theorem, we will follow the standard proof structure outlined in~\citep{Yun19}.
\textbf{Step 1: Approximate $\mathcal{F}_{CD}$ by piece-wise constant functions.} Since $f$ is a continuous function with bounded domain $[0,1)^{n \times d}$, we will approximate it with a suitable piece-wise constant function. This is accomplished by a suitable partition of the region $[0,1)$ into a grid of granularity $\delta$ to get a discrete set $\ensuremath{\mathbb{G}_{\delta}}$. Therefore, we can assume that we are dealing with a function $\bar{f}: \ensuremath{\mathbb{G}_{\delta}} \to \mathbb{R}^{n \times d}$, where $d_p(f,\bar{f}) \leq \frac{\epsilon}{3}$.
\textbf{Step 2: Approximate piece-wise constant functions by modified transformers.} This is the key step of the proof, in which the self-attention mechanism is used to generate a \emph{contextual mapping} of the input. Informally, a contextual mapping is a unique code for each pair $({\bm{X}},{\bm{x}}_{i})$ consisting of a matrix and one of its columns. Its uniqueness allows the feed-forward layers to use each code to map it to a unique output column.
The main technical challenge is computing the contextual mapping using only a sparse attention mechanism.
This was done in \cite{Yun19} using a ``selective'' shift operator which shifts up entries that lie in a specific interval. Key to their proof was the fact that the amount of the shift was exactly the range from the largest entry to the smallest entry.
Creating a contextual mapping with a sparse attention mechanism is quite a challenge. In particular, because each query only attends to a few keys, it is not at all clear that sufficient information can be corralled to make a contextual embedding of the entire matrix. To get around this, we develop a sparse shift operator which shifts the entries of the matrices if they lie in a certain range. The exact amount of the shift is controlled by the directed sparse attention graph $D$. The second key ingredient is the use of an additional global token. By carefully applying the operator to a set of chosen ranges, we will show that each column will contain a unique encoding of the entire matrix. Therefore, we can compensate for the loss of inner-products in the self attention mechanism by using multiple layers and an auxiliary global token.
\textbf{Step 3: Approximate modified transformers by original Transformers}: The final step is to approximate the modified transformers by the original transformer which uses ReLU and softmax.
We provide the full details in~\Cref{sec:apndx-universal}.
\subsection{Turing Completeness}
Transformers are a very general class. In the original paper of \citet{vaswani2017attention}, they were used in both an encoder and a decoder. While the previous section outlined how powerful the encoder alone is, another natural question is: what additional power does a decoder provide along with an encoder? \citet{Perez19} showed that the full transformer based on a quadratic attention mechanism is Turing Complete. This result makes one unrealistic assumption, namely that the model works with arbitrary precision. Of course, this is necessary, as otherwise Transformers are bounded finite state machines and cannot be Turing Complete.
It is natural to ask if the full attention mechanism is necessary, or whether a sparse attention mechanism can also be used to simulate any Turing Machine. We show that this is indeed the case: we can use a sparse encoder and sparse decoder to simulate any Turing Machine.
To use the sparse attention mechanism in the transformer architecture, we need to define a suitable modification where each token only reacts to previous tokens. Unlike the case for BERT, where the entire attention mechanism is applied once, in full transformers the sparse attention mechanism at the decoder side is used token by token. Secondly, the work of \citet{Perez19} uses each token as a representation of the tape history and uses the full attention to move and retrieve the correct tape symbol. Most of the construction of \citet{Perez19} goes through for sparse attention, except for their addressing scheme to point back in history (Lemma B.4 in \citep{Perez19}). We show how to simulate this using a sparse attention mechanism and defer the details to~\Cref{sec:apndx-turing}.
\subsection{Limitations}
\label{sec:limit}
We demonstrate a natural task which can be solved by the full attention mechanism in $O(1)$ layers. However, under standard complexity theoretic assumptions, this problem requires $\tilde{\Omega}(n)$ layers for any sparse attention mechanism with $\tilde{O}(n)$ edges (not just \textsc{BigBird}\xspace). (Here $\tilde{O}$ hides poly-logarithmic factors.)
Consider the simple problem of finding, for each vector in a given sequence of length $n$, the corresponding furthest vector. Formally,
\textbf{Task 1.} \ Given $n$ unit vectors $\{u_1,\dots,u_n\}$, find $f(u_1,\dots,u_n) \to (u_{1^*}, \dots, u_{n^*})$ where for a fixed $j \in [n]$, we define $ j^* = \argmax_{k} \|u_k - u_j\|_2^2$.
Finding the vectors that are furthest apart boils down to minimum inner product search in the case of unit vectors. For a full-attention mechanism with appropriate queries and keys, this task is very easy, as we can evaluate all pair-wise inner products.
The impossibility for sparse-attention follows from hardness results stemming from the Orthogonal Vectors Conjecture (OVC)~\citep{abboud2014consequences,abboud2015tight,backurs2015edit,williams2005new}. The OVC is a widely used assumption in fine-grained complexity. Informally, it states that one cannot determine if the minimum inner product among $n$ boolean vectors is $0$ in subquadratic time. In~\Cref{sec:apndx-limit}, we show a reduction using OVC to prove that if a transformer $g \in \mathcal{T}_D^{H=1,m=2d,q=0}$ for any sparse directed graph $D$ can evaluate Task $1$, then it can solve the orthogonal vector problem.
\begin{proposition}
There exists a single layer full self-attention $g\in\mathcal{T}^{H=1,m=2d,q=0}$ that can evaluate Task 1, i.e. $g(u_1,...,u_n) = [u_{1^*},\dots, u_{n^*}]$, but any sparse-attention graph $D$ with $\tilde{O}(n)$ edges (i.e.~inner product evaluations) would require $\tilde{\Omega}(n^{1-o(1)})$ layers.
\end{proposition}
\vspace{-3mm}
We give a formal proof of this fact in~\Cref{sec:apndx-limit}.
\section{Experiments: Natural Language Processing}
\label{sec:expt-nlp}
In this section our goal is to showcase the benefits of modeling longer input sequences for NLP tasks, for which we select three representative tasks. We begin with basic masked language modeling (MLM; \citealt{devlin2018bert}) to check if better contextual representations can be learnt by utilizing longer contiguous sequences. Next, we consider QA with supporting evidence, for which the capability to handle longer sequences allows us to retrieve more evidence using crude systems like TF-IDF/BM25. Finally, we tackle long document classification, where the discriminating information may not be located in the first 512 tokens. Below we summarize the results for \textsc{BigBird}\xspace using sequence length 4096\footnote{code available at \url{http://goo.gle/bigbird-transformer}}, while we defer all other setup details, including computational resources, batch size, and step size, to~\Cref{sec:apndx-expt-nlp}.
\paragraph{Pretraining and MLM} We follow \citep{devlin2018bert, liu2019roberta} to create base and large versions of \textsc{BigBird}\xspace and pretrain them using the MLM objective. This task involves predicting a random subset of tokens which have been masked out. We use four standard data-sets for pretraining (listed in \Cref{sec:app-expt-nlp:mlm}, \Cref{tab:mlm_data}), warm-starting from the public RoBERTa checkpoint\footnote{\url{https://github.com/pytorch/fairseq/tree/master/examples/roberta}}. We compare performance in predicting the masked out tokens in terms of bits per character, following~\citep{beltagy2020longformer}. As seen in~\Cref{sec:app-expt-nlp:mlm}, \Cref{tab:mlm_bpc}, both \textsc{BigBird}\xspace and Longformer perform better than limited-length RoBERTa, with \textsc{BigBird}\xspace-\textsc{etc} performing the best. We note that we trained our models on hardware with a reasonable $16GB$ of memory per chip, with a batch size of 32-64.
Our memory efficiency is due to efficient blocking and the sparsity structure of the sparse attention mechanism described in~\Cref{sec:arch}.
\begin{table}
\centering
\small
\begin{tabular}{@{}l c c c ccc c cc c c@{}}
\toprule
\multirow{2}{*}{Model} & & \multicolumn{3}{c}{HotpotQA} & & \multicolumn{2}{c}{NaturalQ} & & TriviaQA & & WikiHop\\
\cmidrule{3-5} \cmidrule{7-8} \cmidrule{10-10} \cmidrule{12-12}
&& Ans & Sup & Joint && LA & SA && Full && MCQ \\
\midrule
RoBERTa && 73.5 & 83.4 & 63.5 && - & - && 74.3 && 72.4 \\
Longformer && 74.3 & 84.4 & 64.4 && - & - && 75.2 && 75.0 \\
\textsc{BigBird}\xspace-\textsc{itc} && \textbf{75.7} & 86.8 & 67.7 && 70.8 & 53.3 && \textbf{79.5} && \textbf{75.9} \\
\textsc{BigBird}\xspace-\textsc{etc} && 75.5 & \textbf{87.1} & \textbf{67.8} && \textbf{73.9} & \textbf{54.9} && 78.7 && \textbf{75.9} \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{QA Dev results using Base size models. We report accuracy for WikiHop and F1 for HotpotQA, Natural Questions, and TriviaQA.}
\label{tab:QADev}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{@{}l c c ccc c cc c cc@{}}
\toprule
\multirow{2}{*}{Model} & \multicolumn{3}{c}{HotpotQA} & & \multicolumn{2}{c}{NaturalQ} & & \multicolumn{2}{c}{TriviaQA} & & WikiHop\\
\cmidrule{2-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-12}
& Ans & Sup & Joint && LA & SA && Full & Verified && MCQ \\
\midrule
HGN \citep{fang2019hierarchical} & \textbf{82.2} & 88.5 & \textbf{74.2} && - & - && - & - && - \\
GSAN & 81.6 & 88.7 & 73.9 && - & - && - & - && - \\
ReflectionNet \citep{gong2020reflection} & - & - & - && 77.1 & \textbf{64.1} && - & - && - \\
RikiNet-v2 \citep{liu2020rikinet} & - & - & - && 76.1 & 61.3 && - & - && - \\
Fusion-in-Decoder \citep{izacard2020fid} & - & - & - && - & - && 84.4 & 90.3 && - \\
SpanBERT \citep{joshi2020spanbert} & - & - & - && - & - && 79.1 & 86.6 && - \\
MRC-GCN \citep{tang2020multi} & - & - & - && - & - && - & - && 78.3 \\
MultiHop \citep{chen2019multi} & - & - & - && - & - && - & - && 76.5 \\
Longformer \citep{beltagy2020longformer} & 81.2 & 88.3 & 73.2 && - & - && 77.3 & 85.3 && 81.9 \\
\midrule
\textsc{BigBird}\xspace-\textsc{etc} & 81.2 & \textbf{89.1} & 73.6 && \textbf{77.8} & 57.9 && \textbf{84.5} & \textbf{92.4} && \textbf{82.3} \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Fine-tuning results on \textbf{Test} set for QA tasks. The Test results (F1 for HotpotQA, Natural Questions, TriviaQA, and Accuracy for WikiHop) have been picked from their respective leaderboards. For each task the top-3 leaders were picked, not including \textsc{BigBird}\xspace-etc. \textbf{For Natural Questions Long Answer (LA), TriviaQA, and WikiHop, \textsc{BigBird}\xspace-ETC is the new state-of-the-art}. On HotpotQA we are third in the leaderboard by F1 and second by Exact Match (EM).}
\label{tab:QATest}
\end{table}
\paragraph{Question Answering (QA)} We considered the following four challenging datasets:
\vspace{-1mm}
\begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt]
\item Natural Questions~\citep{kwiatkowski2019natural}: For the given question, find a short span of answer (SA) from the given evidences, as well as highlight the paragraph from the given evidences containing information about the correct answer (LA).
\item HotpotQA-distractor~\citep{yang2018hotpotqa}: Similar to Natural Questions, it requires finding the answer (Ans) as well as the supporting facts (Sup) over different documents needed for multi-hop reasoning from the given evidences.
\item TriviaQA-wiki~\citep{JoshiTriviaQA2017}: We need to provide an answer for the given question using the provided Wikipedia evidence; however, the answer might not be present in the given evidence. On a smaller \emph{verified} subset of questions, the given evidence is guaranteed to contain the answer. Nevertheless, we model the answer as a span selection problem in this case as well.
\item WikiHop~\citep{welbl2018constructing}: Choose the correct option for multiple-choice questions (MCQ), by aggregating information spread across multiple documents given in the evidences.
\end{enumerate}
As these tasks are very competitive, multiple highly engineered systems have been designed specifically for each dataset, conforming to its respective output format. For a fair comparison, we had to use some additional regularization for training \textsc{BigBird}\xspace, details of which are provided in~\Cref{sec:app-expt-nlp:qa} along with the exact architecture description. We experiment using the base sized model and select the best configuration on the development set for each dataset (as reported in~\Cref{tab:QADev}). We can see that \textsc{BigBird}\xspace-\textsc{etc}, with expanded global tokens, consistently outperforms all other models. Thus, we chose this configuration to train a large sized model to be used for evaluation on the hidden test set.
In~\Cref{tab:QATest}, we compare the \textsc{BigBird}\xspace-\textsc{etc} model to the top-3 entries from the leaderboard excluding \textsc{BigBird}\xspace. One can clearly see the importance of using longer context, as both Longformer and \textsc{BigBird}\xspace outperform models with smaller contexts. Also, it is worth noting that the \textsc{BigBird}\xspace submission is a single model, whereas the other top-3 entries for Natural Questions are ensembles, which might explain the slightly lower accuracy in exact answer phrase selection.
\paragraph{Classification} We experiment on datasets of different lengths and contents, specifically various document classification and GLUE tasks. Following BERT, we used one layer with cross entropy loss on top of the first [CLS] token, as sketched below. We see that the gains of using \textsc{BigBird}\xspace are more significant when we have longer documents and fewer training examples. For instance, using the base sized model, \textsc{BigBird}\xspace improves the state-of-the-art for the Arxiv dataset by about $\bm{5\%}$ \textbf{points}. On the Patents dataset, there is improvement over using simple BERT/RoBERTa, but given the large size of the training data, the improvement over SoTA (which is not BERT based) is not significant. Note that this performance gain is not seen for the much smaller IMDb dataset. Along with experimental setup details, we present detailed results in~\Cref{sec:app-expt-nlp:cls}, which show competitive performance.
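For concreteness, the classification head just described admits the following minimal sketch (our illustration with hypothetical dimensions, not the exact training code), which places a single linear layer with cross-entropy loss on the final-layer representation of the first [CLS] token:
\begin{verbatim}
import numpy as np

def cls_head_loss(encoder_states, W, b, label):
    # encoder_states: (seq_len, d) final-layer representations.
    # The classifier uses only the first ([CLS]) position, as in BERT.
    logits = encoder_states[0] @ W + b          # one linear layer
    logits = logits - logits.max()              # numerically stable softmax
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]                    # cross-entropy loss

# Toy usage with hypothetical sizes (hidden size 8, 4 classes).
rng = np.random.default_rng(0)
states = rng.normal(size=(4096, 8))             # one long document
W, b = rng.normal(size=(8, 4)), np.zeros(4)
print(cls_head_loss(states, W, b, label=2))
\end{verbatim}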
\begin{table}[b]
\centering
\small
\begin{tabular}{@{}p{1mm}l @{}p{3mm}@{} rrr @{}p{5mm}@{} rrr @{}p{5mm}@{} rrr@{}}
\toprule
\multicolumn{2}{l}{\multirow[b]{2}{*}{\hspace{-2mm}\normalsize{Model}}} & & \multicolumn{3}{c}{Arxiv} & & \multicolumn{3}{c}{PubMed} & & \multicolumn{3}{c}{BigPatent}\\
\cmidrule{4-6} \cmidrule{8-10} \cmidrule{12-14}
& & & R-1 & R-2 & R-L & & R-1 & R-2 & R-L & & R-1 & R-2 & R-L \\
\midrule
\multirow{10}{*}{\rotatebox[origin=c]{90}{Prior Art}} & SumBasic~\citep{nenkova2005impact} & & 29.47 & 6.95 & 26.30 & & 37.15 & 11.36 & 33.43 & & 27.44 & 7.08 & 23.66\\
& LexRank~\citep{erkan2004lexrank} & & 33.85 & 10.73 & 28.99 & & 39.19 & 13.89 & 34.59 & & 35.57 & 10.47 & 29.03 \\
& LSA~\citep{wiseman2017challenges} & & 29.91 & 7.42 & 25.67 & & 33.89 & 9.93 & 29.70 & & - & - & - \\
& Attn-Seq2Seq~\citep{sutskever2014sequence} & & 29.30 & 6.00 & 25.56 & & 31.55 & 8.52 & 27.38 & & 28.74 & 7.87 & 24.66 \\
& Pntr-Gen-Seq2Seq~\citep{see2017get} & & 32.06 & 9.04 & 25.16 & & 35.86 & 10.22 & 29.69 & & 33.14 & 11.63 & 28.55 \\
& Long-Doc-Seq2Seq~\citep{cohan2018discourse} & & 35.80 & 11.05 & 31.80 & & 38.93 & 15.37 & 35.21 & & - & - & - \\
& Sent-CLF~\citep{subramanian2019extractive} & & 34.01 & 8.71 & 30.41 & & 45.01 & 19.91 & 41.16 & & 36.20 & 10.99 & 31.83 \\
& Sent-PTR~\citep{subramanian2019extractive} & & 42.32 & 15.63 & 38.06 & & 43.30 & 17.92 & 39.47 & & 34.21 & 10.78 & 30.07 \\
& Extr-Abst-TLM~\citep{subramanian2019extractive} & & 41.62 & 14.69 & 38.03 & & 42.13 & 16.27 & 39.21 & & 38.65 & 12.31 & 34.09 \\
& Dancer~\citep{gidiotis2020divide} & & 42.70 & 16.54 & 38.44 & & 44.09 & 17.69 & 40.27 & & - & - & - \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{Base}} & Transformer & & 28.52 & 6.70 & 25.58 & & 31.71 & 8.32 & 29.42 & & 39.66 & 20.94 & 31.20 \\
& \; + RoBERTa~\citep{rothe2019leveraging} & & 31.98 & 8.13 & 29.53 & & 35.77 & 13.85 & 33.32 & & 41.11 & 22.10 & 32.58 \\
& \; + Pegasus~\citep{zhang2019pegasus} & & 34.81 & 10.16 & 30.14 & & 39.98 & 15.15 & 35.89 & & 43.55 & 20.43 & 31.80 \\
& \textsc{BigBird}\xspace-RoBERTa & & \underline{41.22} & \underline{16.43} & \underline{36.96} & & \underline{43.70} & \underline{19.32} & \underline{39.99} & & \underline{55.69} & \underline{37.27} & \underline{45.56} \\
\midrule
\multirow{3}{*}{\rotatebox[origin=c]{90}{Large}} & Pegasus (Reported)~\citep{zhang2019pegasus} & & 44.21 & 16.95 & 38.83 & & 45.97 & 20.15 & 41.34 & & 52.29 & 33.08 & 41.75 \\
& Pegasus (Re-eval) & & 43.85 & 16.83 & 39.17 & & 44.53 & 19.30 & 40.70 & & 52.25 & 33.04 & 41.80 \\
& \textsc{BigBird}\xspace-Pegasus & & \textbf{46.63} & \textbf{19.02} & \textbf{41.77} & & \textbf{46.32} & \textbf{20.65} & \textbf{42.33} & & \textbf{60.64} & \textbf{42.46} & \textbf{50.01} \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Summarization ROUGE score for long documents.}
\label{tab:long_sum_res}
\end{table}
\subsection{Encoder-Decoder Tasks}
\label{sec:seq2seq}
For an encoder-decoder setup, one can easily see that both the encoder and the decoder suffer from quadratic complexity due to full self-attention. We focus on introducing the sparse attention mechanism of \textsc{BigBird}\xspace only at the encoder side. This is because, in practical generative applications, the length of the output sequence is typically small compared to the input. For example, for text summarization, we see in realistic scenarios (c.f.~\Cref{sec:appn_summarization}~\Cref{tab:long_sum_data}) that the median output sequence length is $\sim 200$, whereas the input sequence's median length is $>3000$.
For such applications, it is more efficient to use a sparse attention mechanism for the encoder and full self-attention for the decoder.
\paragraph{Summarization} Document summarization is the task of creating a short and accurate summary of a text document. We used three long document datasets for testing our model, details of which are mentioned in~\Cref{tab:long_sum_data}. In this paper we focus on abstractive summarization of long documents, where using a longer contextual encoder should improve performance. The reasons are two fold: First, the salient content can be evenly distributed in the long document, not just in the first 512 tokens, and this is by design in the BigPatent dataset~\citep{sharma2019bigpatent}. Second, longer documents exhibit a richer discourse structure and summaries are considerably more abstractive, so observing more context helps. Since it has been pointed out recently that pretraining helps in generative tasks~\citep{rothe2019leveraging,zhang2019pegasus}, we warm-start from our general purpose MLM pretraining on base-sized models, and we utilize the state-of-the-art summarization-specific pretraining from Pegasus~\citep{zhang2019pegasus} on large-sized models. The results of training the \textsc{BigBird}\xspace sparse encoder along with a full decoder on these long document datasets are presented in~\Cref{tab:long_sum_res}. We can clearly see that modeling longer context brings significant improvement. Along with hyperparameters, we also present results on shorter but more widespread datasets in~\Cref{sec:appn_summarization}, which show that using sparse attention does not hamper performance either.
\section{Experiments: Genomics}
\label{sec:expt-bio}
There has been a recent upsurge in using deep learning for genomics data \citep{tampuu2019viraminer, zhang2019ncnet, busia2019deep}, which has resulted in improved performance on several biologically-significant tasks such as promoter site prediction \citep{oubounyt2019deepromoter}, methylation analysis \citep{levy2020methylnet}, and predicting the functional effects of non-coding variants \citep{zhou2015predicting}. These approaches consume DNA sequence fragments as inputs, and therefore we believe the longer input sequence handling capability of \textsc{BigBird}\xspace would be beneficial, as many functional effects in DNA are highly non-local \citep{buldyrev1995long}. Furthermore, taking inspiration from NLP, we learn powerful contextual representations for DNA fragments utilizing abundant unlabeled data (e.g. human reference genome, Saccharomyces Genome Database) via MLM pretraining. Next, we showcase that our long-input \textsc{BigBird}\xspace, along with the proposed pretraining, significantly improves performance on two downstream tasks. The detailed experimental setup for the two tasks is provided in~\Cref{sec:apndx-expt-bio}.
\begin{wraptable}{r}{39mm}
\vspace{-4mm}
\centering
\small
\begin{tabular}{@{}lr@{}}
\toprule
Model & BPC \\
\midrule
SRILM \cite{liang2012segmenting} & 1.57 \\
BERT (sqln. 512) & 1.23 \\
\midrule
\textsc{BigBird}\xspace (sqln. 4096) & \textbf{1.12} \\
\bottomrule
\end{tabular}
\caption{MLM BPC}
\label{tab:gml}
\vspace{-3mm}
\end{wraptable}
\paragraph{Pre-training and MLM} As explored in \citet{liang2012segmenting}, instead of operating on base pairs, we propose to first segment DNA into tokens so as to further increase the context length (\Cref{sec:apndx-expt-bio}, \Cref{fig:apndx_mlm_data}).
In particular, we build a byte-pair encoding~\citep{kudo2018sentencepiece} table of size 32K for the DNA sequences, with each token representing 8.78 base pairs on average. We learn contextual representations of these tokens on the human reference genome (GRCh37)\footnote{\url{https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.13/}} using the MLM objective. We then report the bits per character (BPC) on a held-out set in \Cref{tab:gml}. We find that attention based contextual representation of DNA does improve BPC, which is further improved by using a longer context.
\begin{wraptable}{r}{35mm}
\vspace{-4mm}
\centering
\small
\begin{tabular}{@{}lr@{}}
\toprule
Model & F1 \\
\midrule
CNNProm~\citep{umarov2017recognition} & 69.7 \\
DeePromoter~\citep{oubounyt2019deepromoter} & 95.6 \\
\midrule
\textsc{BigBird}\xspace & \textbf{99.9} \\
\bottomrule
\end{tabular}
\caption{Promoter region prediction (F1).}
\label{tab:gpp}
\vspace{-3mm}
\end{wraptable}
\paragraph{Promoter Region Prediction} A promoter is a DNA region typically located upstream of the gene, which is the site of transcription initiation. Multiple methods have been proposed to identify the promoter regions in a given DNA sequence~\citep{yang2017exploiting, lin2017identifying, bharanikumar2018promoterpredict, xiao2019ipsw, oubounyt2019deepromoter}, as it is an important first step in understanding gene regulation. The corresponding machine learning task is to classify a given DNA fragment as a promoter or non-promoter sequence. We use the dataset compiled by \citet{oubounyt2019deepromoter}, which was built from the Eukaryotic Promoter Database (EPDnew) \citep{dreos2013epd}\footnote{\url{https://epd.epfl.ch/human/human_database.php?db=human}}. We finetuned the pretrained \textsc{BigBird}\xspace model from above using the training data and report F1 on the test dataset. We compare our results to the previously reported best method in \Cref{tab:gpp}. We see that \textsc{BigBird}\xspace achieves nearly perfect accuracy with a $5\%$ jump over the previous best reported accuracy.
\begin{wraptable}{r}{57mm}
\centering
\small
\begin{tabular}{@{}lrrr@{}}
\toprule
Model & TF & HM & DHS \\
\midrule
gkm-SVM~\citep{ghandi2014enhanced} & 89.6 & - & - \\
DeepSea~\citep{zhou2015predicting} & 95.8 & 85.6 & \textbf{92.3} \\
\midrule
\textsc{BigBird}\xspace & \textbf{96.1} & \textbf{88.7} & 92.1 \\
\bottomrule
\end{tabular}
\caption{Chromatin-Profile Prediction}
\label{tab:gnve}
\vspace{-3mm}
\end{wraptable}
\paragraph{Chromatin-Profile Prediction} Non-coding regions of DNA do not code for proteins, yet the majority of disease and other trait associated single-nucleotide polymorphisms are correlated with non-coding genomic variations \citep{zhou2015predicting, khurana2016role}. Thus, understanding the functional effects of non-coding regions of DNA is a very important task. An important step in this process, as defined by \citet{zhou2015predicting}, is to predict large-scale chromatin profiles from non-coding genomic sequence. To this effect, DeepSea \citep{zhou2015predicting} compiled 919 chromatin profiles of 2.4M non-coding variants from the Encyclopedia of DNA Elements (ENCODE)\footnote{\url{https://www.encodeproject.org/}} and Roadmap Epigenomics projects\footnote{\url{http://www.roadmapepigenomics.org/}}. The corresponding ML task is to predict, for a given non-coding region of DNA, these 919 chromatin profiles, including $690$ transcription factor (TF) binding profiles for $160$ different TFs, $125$ DNase I sensitivity (DHS) profiles and $104$ histone-mark (HM) profiles.
We jointly learn 919 binary classifiers to predict these functional effects from sequences of DNA fragments. On held-out chromosomes, we compare AUC with the baselines in~\Cref{tab:gnve} and see that we significantly improve performance on the harder HM task, which is known to have longer-range correlations~\citep{gates2017histone} than the others.
\section{Conclusion}
We propose \textsc{BigBird}\xspace: a sparse attention mechanism that is linear in the number of tokens. \textsc{BigBird}\xspace satisfies a number of theoretical results: it is a universal approximator of sequence to sequence functions and is also Turing complete. Theoretically, we use the power of extra global tokens to preserve the expressive power of the model. We complement these results by showing that moving to a sparse attention mechanism does incur a cost. Empirically, \textsc{BigBird}\xspace gives \emph{state-of-the-art} performance on a number of NLP tasks such as question answering and long document classification. We further introduce an attention based contextual language model for DNA and fine-tune it for downstream tasks such as promoter region prediction and predicting the effects of non-coding variants.
\section{Universal Approximators}
\label{sec:apndx-universal}
\subsection{Notation}
\label{sec:apndx-enc-notation}
We begin by setting up some notation following \citet{Perez19} to formally describe the complete architecture of Transformers. A single layer of the Transformer encoder is a parametric function $\operatorname{Enc}$ receiving a sequence ${\bm{X}} = ({\bm{x}}_1, ..., {\bm{x}}_n)$ of vectors in $\mathbb{R}^d$ and returning a sequence ${\bm{Z}} = ({\bm{z}}_1, ..., {\bm{z}}_n)$ of the same length. Each ${\bm{z}}_i$ is a $d$ dimensional vector as well. We interchangeably treat the sequence ${\bm{X}}$ as a matrix in $\mathbb{R}^{n\times d}$. $\operatorname{Enc}$ has two components:
\begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt]
\item An attention mechanism $\ensuremath{\textsc{Attn}\xspace}$ that takes in the sequence ${\bm{X}}$ and returns a sequence $({\bm{a}}_1, ..., {\bm{a}}_n)$ of the same length and dimensionality; and
\item A two layer fully connected network $O$ that takes in a vector in $\mathbb{R}^d$ and returns a vector in $\mathbb{R}^d$.
\end{enumerate}
Then the $i$-th output vector of $\operatorname{Enc}({\bm{X}})$ is computed as follows:
\begin{align}
{\bm{z}}_i = O({\bm{a}}_i) + {\bm{a}}_i \qquad\text{where}\qquad {\bm{a}}_i = \ensuremath{\textsc{Attn}\xspace}({\bm{X}})_i + {\bm{x}}_i
\end{align}
Now it remains to define $\ensuremath{\textsc{Attn}\xspace}$ and $O$, which we do next. As described in~\Cref{sec:arch}, an attention mechanism is parameterized by three functions: $Q,K: \mathbb{R}^{d} \to \mathbb{R}^{m}$ and $V:\mathbb{R}^d \to \mathbb{R}^d$. In this paper, we assume that they are simply matrix products: $Q({\bm{x}}) = {\bm{x}} W_Q $, $K({\bm{x}}) = {\bm{x}} W_K $, and $V({\bm{x}}) = {\bm{x}} W_V $, where $W_Q, W_K \in \mathbb{R}^{d \times m}$ and $W_V\in \mathbb{R}^{d \times d}$. In reality a multi-headed attention is used, i.e. we have not only one, but $H$ sets of Query/Key/Value weight matrices, $W_Q^h, W_K^h, W_V^h \text{ for }h=1,...,H$.
Thus, for a directed graph $D$ over $[n]$, the $i^\text{th}$ output vector of the generalized attention mechanism would be
\begin{align}
\ensuremath{\textsc{Attn}\xspace}_D({\bm{X}})_i &= \sum_{h=1}^H \sigma \left(({\bm{x}}_i W_Q^h) ({\bm{X}}_{N(i)} W_K^h)^T \right) \cdot ({\bm{X}}_{N(i)} W_V^h ) \label{AT_app} \tag{AT}
\end{align}
where $N(i)$ denotes the out-neighbor set of node $i$ in $D$. In other words, the set of arcs (directed edges) in $D$ represents the set of inner products that our attention mechanism will consider. Also recall that $\sigma$ is a scoring function such as softmax or hardmax.
Lastly, we define the output fully connected network as follows:
\begin{align*}
O({\bm{a}}_i) &= \operatorname{ReLU} \left({\bm{a}}_i W_1 + b_1 \right) W_2 + b_2 \label{FF} \tag{FF}
\end{align*}
Here $W_1 \in \mathbb{R}^{ d\times q }$, $W_2 \in \mathbb{R}^{q \times d}$, $b_1\in\mathbb{R}^q$, and $b_2\in\mathbb{R}^d$ are the parameters of the output network $O$.
\textbf{Additional Notation} We introduce a few pieces of additional notation that will be useful. Let $[a,b)_{\delta} = \{ a, a+\delta, \dots, a + \lfloor \frac{b-a}{\delta} \rfloor \cdot \delta \} $. Therefore, $[0,1)_{\delta} = \{ 0, \delta, 2\delta, \dots, (1-\delta)\}$. We use $\mathbf{1} [ \mathcal{E}]$ to denote the indicator variable; it is $1$ if the event $\mathcal{E}$ occurs and $0$ otherwise.
\subsection{Proof}
In this section, we will present the full proof of~\cref{thm:universal}. The proof contains three parts. The first and the third part largely follow standard techniques. The main innovation lies in the second part.
\subsubsection{Approximate \texorpdfstring{$\mathcal{F}_{CD}$}{Fcd} by piece-wise constant functions}
First, we consider a suitable partition of the region $[0,1)$ into a grid of granularity $\delta$, which we denote by $\ensuremath{\mathbb{G}_{\delta}}$. We do this using Lemma~8 from \citet{Yun19}, which we restate for completeness:
\begin{lemma}[Lemma 8~\citep{Yun19}]
\label{lem:piecewise}
For any given $f \in \mathcal{F}_{CD}$ and $1\leq p \leq \infty$, there exists a $\delta >0$ such that there exists a piece-wise constant function $\bar{f}$ with $d_{p}(f,\bar{f}) \leq \frac{\epsilon}{3}$. Concretely, $\bar{f}$ is defined as
\[ \bar{f}(X) = \sum_{P \in \ensuremath{\mathbb{G}_{\delta}}} f(P) \cdot \mathbf{1} \left[ \norm{\operatorname{ReLU}(X-P)}_{\infty} \leq \delta \right] \]
\end{lemma}
Since transformers can learn a positional embedding $E$, without any loss of generality, we can consider the translated function. In particular, define
\[
E = \begin{bmatrix}
0 & 0 & 0 & \dots & 0 \\
\delta^{-d} & \delta^{-d} & \delta^{-d} & \dots & \delta^{-d} \\
\delta^{-2d} & \delta^{-2d} & \delta^{-2d} & \dots & \delta^{-2d} \\
\vdots \\
\delta^{-(n-1)d} & \delta^{-(n-1)d} & \delta^{-(n-1)d} & \dots & \delta^{-(n-1)d} \\
\end{bmatrix}
\]
We will try to approximate $g(X) = f(X - E)$, where $g$ is defined on the domain $[0,1]^d \times [\delta^{-d}, \delta^{-d}+1]^d \times \dots \times [\delta^{-(n-1)d}, \delta^{-(n-1)d}+1]^d $. To do so, we will apply a suitable modification of~\Cref{lem:piecewise}, which will consider the discretized grid
\[
\ensuremath{\mathbf{G}^{E}_{\delta}} := [0,1]_{\delta}^d \times [\delta^{-d}, \delta^{-d}+1]_{\delta}^d \times \dots \times [\delta^{-(n-1)d}, \delta^{-(n-1)d}+1]_{\delta}^d.
\]
Therefore, it suffices to approximate a function $\bar{f}: \ensuremath{\mathbf{G}^{E}_{\delta}} \to \mathbb{R}^{n \times d}$ defined as
\[ \bar{f}(X) = \sum_{P \in \ensuremath{\mathbf{G}^{E}_{\delta}}} f(P-E) \cdot \mathbf{1} \left[ \norm{\operatorname{ReLU}(X-P)}_{\infty} \leq \delta \right]. \]
\subsubsection{Contextual Mappings and Sparse Attention Mechanisms}
Throughout this section, we will assume that we are given a function that has an extra global token at index $0$ and that all vectors have an extra dimension appended to them. The latter assumption is without loss of generality, as we can use the feed-forward network to append sparse dimensions. In particular, we will consider $X \in \mathbb{R}^{(n+1) \times (d+1)}$, where we write $X = (x_0,x_1,\dots,x_n)$. Although our function is only defined for $\ensuremath{\mathbf{G}^{E}_{\delta}} \subset \mathbb{R}^{n \times d}$, we can amend the function in a natural way by making it ignore the first column. To avoid excessive clutter, we will assume that the function value is evaluated on the last $n$ columns.
The main idea in this section is the use of a contextual mapping to enable Transformers to compute any discretized function. A contextual mapping is a unique encoding of each tuple $(X,x_i) $ where $X \in \ensuremath{\mathbf{G}^{E}_{\delta}}$, and each column $x_i \in [\delta^{-(i-1)d}, \delta^{-(i-1)d}+1)^d_{\delta}$ for all $i \in [n]$. We restate the definition adapted to our setting below.
\begin{definition}[Defn 3.1~\cite{Yun19}] (Contextual Mapping)
\label{defn:contextual-mapping}
A map $q: \ensuremath{\mathbf{G}^{E}_{\delta}} \to \mathbb{R}^{n}$ is a contextual mapping if it satisfies the following:
\begin{enumerate}
\item For any $P \in \ensuremath{\mathbf{G}^{E}_{\delta}}$, $q(P)$ contains distinct entries.
\item For any two $P, P' \in \ensuremath{\mathbf{G}^{E}_{\delta}}$ with $P \neq P'$, all entries of $q(P)$ and $q(P')$ are distinct.
\end{enumerate}
\end{definition}
The key technical novelty of the proof is computing a contextual mapping using only the sparse attention mechanism. We create a ``selective shift'' operator which only shifts entries of a vector that lie in a certain range. We will use this shift operator strategically to ensure that we attain a contextual mapping at the end of the process.
The lemma below, which is based on parts of the proof of Lemma 6 of \cite{Yun19}, states that we can implement a suitable ``selective'' shift operator using a sparse attention mechanism.
\begin{lemma}
Given a function $\psi : \mathbb{R}^{(n+1)\times (d+1)} \times \mathbb{R}^2 \to \mathbb{R}^{(n+1) \times 1}$, a vector $u \in \mathbb{R}^{d+1}$, and a sparse attention mechanism based on the directed graph $D$, we can implement a selective shift operator that receives as input a matrix $X \in \mathbb{R}^{(n+1) \times (d+1) }$ and outputs $X + \rho \cdot \psi_u(X,b_1,b_2)$ where
\[
\psi_u(Z; b_1, b_2)_{i} = \begin{cases} (\max_{j \in N(i)} u^T Z_{j} - \min_{j \in N(i)} u^T Z_{j})e_1 & \text{ if } b_1\leq u^T Z_{i} \leq b_2\\
0 & \text{ else. } \end{cases}
\]
Note that $e_1 \in \mathbb{R}^{d+1}$ denotes $(1,0,\dots,0)$.
\end{lemma}
\begin{proof}
Consider the following function, which can be implemented by a sparse attention mechanism:
\[
\tilde{\psi}(X,b)_i = \sigma_H \Big[ (u^T \cdot X_i)^T \cdot (u^T X_{N(i)} - b1_{N(i)}^T) e^{(1)} (u^T X_{N(i)}) \Big]
\]
This is because the Key, Query and Value functions are simply affine transformations of $X$.
Given any graph $D$, the above function will evaluate to the following:
\[
\tilde{\psi}(Z; b)_{i} = \begin{cases} (\max_{j \in N(i)} u^T Z_{j}) e_1 & \text{ if } u^T Z_{i} > b\\
(\min_{j \in N(i)} u^T Z_{j}) e_1 & \text{ if } u^T Z_{i} < b \\
\end{cases}
\]
Therefore the difference $\tilde{\psi}(Z; b_{1}) - \tilde{\psi}(Z; b_{2})$ satisfies
\[
\psi(Z; b_1, b_2)_{i} = \begin{cases} (\max_{j \in N(i)} u^T Z_{j} - \min_{j \in N(i)} u^T Z_{j}) e_1 & \text{ if } b_1\leq u^T Z_{i} \leq b_2\\
0 & \text{ else } \end{cases}
\]
\end{proof}
The following lemma, which is the heart of the proof, uses the above selective shift operators to construct contextual mappings.
\begin{lemma}
\label{lem:contextual}
There exists a function $g_c: \mathbb{R}^{(n+1) \times (d+1)} \to \mathbb{R}^{(n+1)} $ and a vector $u$, such that for all $P \in \ensuremath{\mathbf{G}^{E}_{\delta}}$, $g_c(P) := \bkt{u}{g(P)}$ satisfies the property that $g_c$ is a contextual mapping of $P$. Furthermore, $g_c \in \mathcal{T}_D^{2,1,1}$ using a composition of sparse attention layers as long as $D$ contains the star graph.
\end{lemma}
\begin{proof}
Define $u = [1,\delta^{-1},\delta^{-2},\dots, \delta^{-d+1},\delta^{-nd}] \in \mathbb{R}^{d+1}$ and let $X_{0} = (0,\dots,0,1)$. We will assume that $\bkt{x_i}{x_0}=0$, by assuming that all the columns $x_1,\dots, x_n$ are appended by $0$.
To successfully encode the entire context in each token, we will interleave the shift operator to target the original columns $1,\dots,n$ and to target the global column $0$. After a column $i$ is targeted, its inner product with $u$ will encode the entire context of the first $i$ columns. Next, we will shift the global token to take this context into account. This can be subsequently used by the remaining columns.
For $i \in \{0,1,\dots,n\} $, we will use $l_i$ to denote the inner product $\bkt{u}{x_i}$ at the beginning. We will write $f_i$ for $\bkt{u}{x_i}$ after the $i^{th}$ column has been shifted, for $i \in \{1,\dots,n\}$, and we will use $\ensuremath{\tilde{f}}_0^k $ to denote $\bkt{u}{x_0}$ after the $k^{th}$ phase. We need to distinguish the global token further, as its inner product will change in each phase.
Initially, given $X \in \ensuremath{\mathbf{G}^{E}_{\delta}}$, the following are true:
\begin{align*}
\delta^{-(i-1)d} &\leq \bkt{u}{X_{i}} \leq \delta^{-id}-\delta \qquad \text{ for all } i \in [n] \\
\delta^{-(n+1)d} &= \bkt{u}{X_{0}}
\end{align*}
Note that all the $l_i$ are ordered in distinct buckets: $l_1 < l_2 < \dots < l_n <l_0$.
We proceed in phases indexed by $i \in \{1,\dots, n\}$. Each phase consists of two distinct parts:
\newline
\textbf{The low shift operation:} These operations will be of the form
\[ X \leftarrow X + \delta^{-d}\psi \left(X,v -\delta/2, v + \delta/2 \right) \]
for values $v \in [\delta^{-(i-1)d}, \delta^{-id})_{\delta}$. The range is chosen so that only $l_i$ will be in the range and no other $l_j$, $j\neq i$, is in the range. This will shift exactly the $i^{th}$ column $x_i$ so that the new inner product $f_i = \bkt{u}{x_i} $ is substantially larger than $l_i$. Furthermore, no other column of $X$ will be affected.
\newline
\textbf{The high shift operation:} These operations will be of the form
\[ X \leftarrow X + \delta^{-nd} \cdot \psi \left(X,v -\delta/2, v + \delta/2 \right)\]
for values $v \in [S_i, T_i)_{\delta}$. The range $[S_{i}, T_i)_{\delta}$ is chosen to only affect the column $x_0$ (corresponding to the global token) and no other column. In particular, this will shift the global token by a further $\delta^{-nd}$.
Let $\ensuremath{\tilde{f}}_0^i = \bkt{u}{x_0}$ denote the value at the end of the $i^{th}$ high shift operation. Each phase interleaves a shift operation applied to column $i$ with an update of the global token. After each phase, the updated $i^{th}$ column $f_i = \bkt{u}{x_i} $ will contain a unique token encoding the values of all of $l_1,\dots,l_i$. After the high update, $\ensuremath{\tilde{f}}_{0}^i = \bkt{u}{x_0}$ will contain information about the first $i$ tokens. Finally, we define the following constants for all $k \in \{0,1,\dots,n\}$.
\begin{align}
T_k &= (\delta^{-(n+1)d} + 1)^{k} \cdot \delta^{-nd} - \sum_{t=2}^k (\delta^{-(n+1)d} + 1)^{k-t}( 2\delta^{-nd-d} + \delta^{-nd} +1) \delta^{-td} \notag \\
& \qquad - (\delta^{-(n+1)d} + 1)^{k-1}(\delta^{-nd-d} + \delta^{-nd} )\delta^{-d} - \delta^{-(k+1)d} \label{eqn:upper-bound} \tag{UP}
\end{align}
\begin{align}
S_k &= (\delta^{-(n+1)d} + 1)^{k} \cdot \delta^{-nd} - \sum_{t=2}^k (\delta^{-(n+1)d} + 1)^{k-t}( 2\delta^{-nd-d} + \delta^{-nd} +1) \delta^{-(t-1)d} \notag \\
& \qquad - (\delta^{-(n+1)d} + 1)^{k-1}(\delta^{-nd-d} + \delta^{-nd} ) - \delta^{-kd} \label{eqn:lower-bound} \tag{LP}
\end{align}
After each phase $k$, we will maintain the following invariants:
\begin{enumerate}
\item $S_k < \ensuremath{\tilde{f}}^k_0 < T_k$ for all $k \in \{ 0, 1, \dots, n\}$.
\item $T_{k-1} \leq f_k < S_k $
\item The order of the inner products after the $k^{th}$ phase is
\[l_{k+1} < l_{k+2} \dots < l_n < f_1 < f_2< \dots < f_{k} < \ensuremath{\tilde{f}}_0^k .\]
\end{enumerate}
\paragraph{Base case} The case $k=0$ is trivial, as we simply set $S_0 = \delta^{-(n+1)d}$, $T_0 = \delta^{-(n+1)\cdot d}+\delta$. The first nontrivial case is $k=1$.
\paragraph{Inductive Step} First, the low shift operation is performed in the range $[\delta^{-(k-1)d}, \delta^{-kd})_{\delta}$. Due to the invariant, we know that there exists only one column $x_k$ that is affected by this shift. In particular, for column $k$, we will have $\max_{j \in N(k)} \bkt{u}{x_j} = \bkt{u}{x_{0}} = \ensuremath{\tilde{f}}^{k-1}_0$. The minimum is $l_k$. Thus the update will be $f_k = \delta^{-d} ( \ensuremath{\tilde{f}}_0^{k-1} - l_k ) + l_k $. Observe that for small enough $\delta$, $f_k \geq \ensuremath{\tilde{f}}_0^{k-1}$. Hence the total ordering after this operation is
%
\begin{align}
l_{k+1} < l_{k+2} \dots < l_n < f_1 < f_2< \dots < \ensuremath{\tilde{f}}_0^{k-1} < f_{k} \label{eqn:intermediate}
\end{align}
%
Next, we apply the higher selective shift operator in the range $[S_{k-1},T_{k-1})_{\delta}$. Since only the global token's inner product $\ensuremath{\tilde{f}}_0^{k-1}$ is in this range, it will be the only column affected by the shift operator. As the global token attends over the entire sequence, we know from~\Cref{eqn:intermediate} that $f_k = \max_{i \in [n]} \bkt{u}{x_i}$ and $l_{k+1} = \min_{i \in [n]} \bkt{u}{x_i}$. The new value is $\ensuremath{\tilde{f}}_0^k = \delta^{-nd} \cdot (f_k - l_{k+1} ) + \ensuremath{\tilde{f}}_0^{k-1}$.
Expanding and simplifying, we get
\begin{align*}
\ensuremath{\tilde{f}}_0^k &= \delta^{-nd} \cdot (f_k - l_{k+1} ) + \ensuremath{\tilde{f}}_0^{k-1} \\
&= \delta^{-nd} \cdot ( \delta^{-d} ( \ensuremath{\tilde{f}}_0^{k-1} - l_k ) + l_k - l_{k+1} ) + \ensuremath{\tilde{f}}_0^{k-1} \\
&= \delta^{-(n+1)d} \cdot ( \ensuremath{\tilde{f}}_0^{k-1} - l_k ) + \delta^{-nd}(l_k - l_{k+1}) + \ensuremath{\tilde{f}}_0^{k-1} \\
&= (\delta^{-(n+1)d} + 1) \ensuremath{\tilde{f}}_0^{k-1} - (\delta^{-nd-d} + \delta^{-nd}) l_k - l_{k+1} \\
\intertext{Expanding the above recursively, we get}
&= (\delta^{-(n+1)d} + 1)^{k} \cdot \ensuremath{\tilde{f}}_0^{0} - \sum_{t=2}^k (\delta^{-(n+1)d} + 1)^{k-t}( 2\delta^{-nd-d} + \delta^{-nd} +1) l_t \\
& \qquad - (\delta^{-(n+1)d} + 1)^{k-1}(\delta^{-nd-d} + \delta^{-nd} )l_1 - l_{k+1}
\end{align*}
Since we know that $\ensuremath{\tilde{f}}_0^0 = \delta^{-nd}$ and each $l_i < \delta^{-id}$, we can substitute this to get~\Cref{eqn:upper-bound}, and we can get a lower bound,~\Cref{eqn:lower-bound}, by using $l_i \geq \delta^{-(i-1)d} $. By construction, we know that $S_k \leq \ensuremath{\tilde{f}}_0^k < T_k$.
For sufficiently small $\delta$, observe that $S_k$, $\ensuremath{\tilde{f}}_0^k$, and $T_k$ are all essentially equal to the dominant term $ \approx O(\delta^{-n(k+1)d - kd})$, and all the lower order terms do not matter. As a result, it is immediate to see that $f_k > \delta^{-d} (\ensuremath{\tilde{f}}_0^{k-1} - l_k) > T_{k-1}$, and hence invariant 2 is also satisfied. Since only column $k$ and the global token are affected, we can see that invariant 3 is also satisfied.
After $n$ iterations, $\ensuremath{\tilde{f}}^n_0$ contains a unique encoding for any $P \in \ensuremath{\mathbf{G}^{E}_{\delta}}$. To ensure that all tokens are distinct, we will add an additional layer $X = X + \delta^{-n^2d} \psi(X,v -\delta/2, v+\delta/2)$ for all $v \in [S_1, T_n)_{\delta}$. This ensures that for all $P, P' \in \ensuremath{\mathbf{G}^{E}_{\delta}}$, all entries of $q(P) $ and $q(P')$ are distinct.
\end{proof}
The previous lemma shows that we can compute a contextual mapping using only sparse attention. We now use the following lemma to show that we can use a contextual mapping and feed-forward layers to accurately map to the desired output of the function $\bar{f}$.
\begin{lemma}[Lemma 7~\cite{Yun19}]
Let $g_c$ be the function in~\Cref{lem:contextual}; then we can construct a function $g_v: \mathbb{R}^{(n+1) \times (d+1)} \to \mathbb{R}^{(n+1) \times d} $ composed of $O(n \delta^{-nd})$ feed-forward layers (with hidden dimension $q=1$) with activations in $\Phi$ such that $g_v$ is defined as $g_v(Z) = [g_v^{tkn}(Z_{1}), \dots, g^{tkn}_v(Z_{n})]$, where for all $j \in \{1,\dots, n\}$,
\[ g_v^{tkn}(g_c(L)_{j}) = f(L)_{j} \]
\end{lemma}
\subsubsection{Approximating modified Transformers by Transformers}
The previous section assumed that we used Transformers with the hardmax operator $\sigma_H$ and activation functions belonging to the set $\Phi$. This is without loss of generality, as the following lemma shows.
\begin{lemma}[Lemma 9 \cite{Yun19}]
For each $\bar{g} \in \bar{\mathcal{T}}^{2,1,1}$ and $1 \leq p \leq \infty$, there exists $g \in \mathcal{T}^{2,1,4}$ such that $d_p(\bar{g},g) \leq \epsilon/3$.
\end{lemma}
Combining the above lemma with \Cref{lem:contextual}, we get our main result:
\begin{theorem}
Let $1\leq p \leq \infty$ and $\epsilon > 0$; then for any $f \in \mathcal{F}_{CD}$ there exists a transformer network $g \in \mathcal{T}_D^{2,1,4}$ which achieves $d_{p}(f,g) \leq \epsilon$, where $D$ is any sparse graph containing the star graph $S$.
\end{theorem}
Since the sparsity graph associated with \textsc{BigBird}\xspace contains the star graph, we know that it can express any continuous function from a compact domain.
\paragraph{Contemporary work on Universal Approximability of Sparse Transformers}
We would like to note that contemporary work by \citet{yun2020on} also independently explored the ability of sparse transformers with linear connections to capture sequence-to-sequence functions on a compact domain.
\section{Turing Completeness}
\label{sec:apndx-turing}
In this section, we will extend our results to the setting of \citet{Perez19}. Our exposition will largely use their proof structure, but we will make a few changes. We repeat some of the lemmas with the amendments to make the exposition self-contained.
\subsection{Notation}
\paragraph{Transformer Decoder} We need both an encoder and a decoder in the transformer for simulating a Turing machine. We utilize the same notation used in~\Cref{sec:apndx-enc-notation} for encoders. The decoder is similar to an encoder but with additional attention to an external pair of key-value vectors $({\bm{K}}^{\textbf{e}}\in\mathbb{R}^{n\times m},{\bm{V}}^{\textbf{e}}\in\mathbb{R}^{n\times d})$, which usually come from the encoder stack.
A single layer of the Transformer decoder is a parametric function $\operatorname{Dec}$ receiving a sequence ${\bm{Y}}_j=({\bm{y}}_1,\ldots, {\bm{y}}_j)$ of vectors in $\mathbb{R}^d$ plus the external $({\bm{K}}^{\textbf{e}}, {\bm{V}}^{\textbf{e}})$ and returning a sequence of vectors ${\bm{Z}}_j=({\bm{z}}_1,\ldots,{\bm{z}}_j)$ of the same length. Each ${\bm{z}}_i$ is a $d$ dimensional vector as well. $\operatorname{Dec}$ has three components, one more than $\operatorname{Enc}$:
\begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt]
\item An attention mechanism $\ensuremath{\textsc{Attn}\xspace}$ that takes in the sequence ${\bm{Y}}_j$ and returns a sequence $({\bm{p}}_1, ..., {\bm{p}}_j)$ of the same length and dimensionality;
\item A cross-attention mechanism $\ensuremath{\textsc{CrossAttn}\xspace}$ that takes in the sequence $({\bm{p}}_1, ..., {\bm{p}}_j)$ plus the external $({\bm{K}}^{\textbf{e}}, {\bm{V}}^{\textbf{e}})$ and returns a sequence $({\bm{a}}_1, ..., {\bm{a}}_j)$, with each ${\bm{a}}_i\in\mathbb{R}^d$; and
\item A two layer fully connected network $O$ that takes in a vector in $\mathbb{R}^d$ and returns a vector in $\mathbb{R}^d$.
\end{enumerate}
\vspace{-2mm}
Then the $i$-th output vector of $\operatorname{Dec}({\bm{Y}}_j; {\bm{K}}^{\textbf{e}}, {\bm{V}}^{\textbf{e}})$ is computed as follows:
\begin{flalign}
&& {\bm{z}}_i &= O({\bm{a}}_i) + {\bm{a}}_i & \label{eq:dec-ff} \\
&\text{where} & {\bm{a}}_i &= \ensuremath{\textsc{CrossAttn}\xspace}({\bm{p}}_i, {\bm{K}}^{\textbf{e}}, {\bm{V}}^{\textbf{e}}) + {\bm{p}}_i & \label{eq:dec-ext} \\
&\text{and} & {\bm{p}}_i &= \ensuremath{\textsc{Attn}\xspace}_D({\bm{Y}}_j)_i + {\bm{y}}_i \label{eq:dec-self}&
\end{flalign}
$\ensuremath{\textsc{Attn}\xspace}_D$ and $O$ are as defined in~\Cref{sec:apndx-enc-notation}, and it remains to define $\ensuremath{\textsc{CrossAttn}\xspace}$. The $i^\textrm{th}$ output vector of the multi-head cross-attention is given by
\begin{align}
\ensuremath{\textsc{CrossAttn}\xspace}({\bm{p}}_i, {\bm{K}}^{\textbf{e}}, {\bm{V}}^{\textbf{e}}) &= \sum_{h=1}^H \sigma \left(({\bm{p}}_i W_Q^h) ({\bm{K}}^{\textbf{e}} W_K^h)^T \right) \cdot ({\bm{V}}^{\textbf{e}} W_V^h )
\end{align}
where $W_Q^h, W_K^h \in \mathbb{R}^{d \times m}$ and $W_V^h\in \mathbb{R}^{d \times d}$, for all $h = 1, \ldots, H$ heads.
\paragraph{Turing Machine} We will use the same Turing machine setup that was used by \citet{Perez19} (see Section B.4). Given a Turing machine $M = (Q,\Sigma, \delta, q_{init},F)$, we use the following notation:
\begin{align*}
q^{(j)} &: \text{ state of Turing machine } M \text{ at time }j. \\
s^{(j)} &: \text{ symbol under the head of } M \text{ at time }j. \\
v^{(j)} &: \text{ symbol written by } M \text{ at time }j. \\
m^{(j)} &: \text{ head direction in the transition of } M \text{ at time }j.
\end{align*}
\paragraph{Vector representations} For a symbol $s\in \Sigma$, $\oh{s}$ denotes its one-hot vector representation in $\mathbb{Q}^{|\Sigma|}$. All the transformer intermediate vectors used in our simulations have dimension $d=2|Q|+4|\Sigma|+16$. Note that we use five extra dimensions as compared to \citet{Perez19}. We follow the convention used in \citet{Perez19} and write a vector ${\bm{v}}\in\mathbb{Q}^d$ arranged in four groups of values as follows:
\[
\begin{array}{rcllr}
{\bm{v}} & = & [ & {\bm{q}}_1,{\bm{s}}_1,x_1, \\
&&& {\bm{q}}_2,{\bm{s}}_2,x_2,x_3,x_4,x_5,x_6, \\
&&& {\bm{s}}_3,x_7,{\bm{s}}_4, \\
&&& x_8,x_9,x_{10},x_{11},x_{12},x_{13},x_{14},x_{15},x_{16} & ]
\end{array}
\]
where ${\bm{q}}_i\in \mathbb{Q}^{|Q|}$, ${\bm{s}}_i\in \mathbb{Q}^{|\Sigma|}$, and $x_i\in\mathbb{Q}$.
\subsection{Details of the Simulation}
\label{sec:details}
In this section, we give more details on the architecture of the encoder and decoder needed to implement our simulation strategy.
\begin{figure}[b]
\vspace{-5mm}
\centering
\includegraphics[width=\linewidth]{figures/Turing.pdf}
\vspace{-10mm}
\caption{Mapping between transformer step and original Turing machine step.}
\label{fig:apndx_turing}
\end{figure}
\paragraph{High Level Overview:} Given the Turing machine $M$, we will show that a transformer with an appropriate encoder and decoder $\mathcal{T}_D$ can simulate each step of $M$'s execution. Our simulation strategy will mostly follow \citet{Perez19}, except that we will use a sparse attention mechanism. The main idea is to maintain the current Turing machine state $q^{(j)}$ and the symbol under the head $s^{(j)}$ as part of the decoder sequence ${\bm{Y}}$ for every time step $j$, so that we can always simulate the corresponding Turing machine transition $\delta(q^{(j)}, s^{(j)}) = (q^{(j+1)}, v^{(j)}, m^{(j)})$.
The key difference arises in Lemma~B.4 of \citet{Perez19}, where full attention is used to select the appropriate symbol from the tape history in one step. To accomplish the same task with sparse attention, we will exploit the associativity of the max operation and break the symbol selection down over multiple steps. Thus, unlike in \citet{Perez19}, one decoding step of our sparse transformer $\mathcal{T}_D$ does not correspond to one step of the Turing machine $M$. In particular, we will have two types of steps: compute steps, corresponding to updates of $M$'s state, and intermediate steps, corresponding to aggregating the max (which in turn is used for symbol selection). Let $i$ denote the step of $\mathcal{T}_D$ and $g(i)$ denote the step of $M$ being simulated at step $i$ of the decoder. At each decoding step we want to maintain the current Turing machine state $q^{g(i)}$ and the symbol under the head $s^{g(i)}$ in ${\bm{y}}_i$. For roughly $O(\sqrt{i})$ intermediate steps the state will remain the same, while we aggregate information about relevant past output symbols through sparse attention. To maintain the same state for intermediate steps, we introduce an extra switching layer (\Cref{sec:appndx-new-layer}). Finally, at the next compute step we make the transition to the new state $q^{g(i)+1}$, the new head movement $m^{g(i)}$, and the new output symbol $v^{g(i)}$ to be written. Thereby we are able to completely simulate the given Turing machine $M$. As a result, we can prove the following main theorem: \begin{theorem} There exists a sparse attention mechanism using $O(n)$ inner products such that the resulting class of Transformer Networks using this sparse attention mechanism is Turing Complete. \end{theorem} \subsubsection*{Encoder} As in \citet{Perez19}, we use the same trivial single-layer encoder, where the resulting ${\bm{K}}^{(e)}$ contains the position embeddings and ${\bm{V}}^{(e)}$ contains the one-hot symbol representations. \subsubsection*{Decoder} \paragraph{Sparse Self-Attention mechanism for Decoder} In this section, we will consider a particular instance of the sparse graph $D$ at the decoder. We define its edges to be given by the following relations: $\forall j\in\mathbb{N}_{+}, 1\leq k\leq j+1$, \begin{align*} &\left(\frac{j(j+1)}{2} + k, \frac{k(k+1)}{2} \right) \text{ and }\\ &\left(\frac{j(j+1)}{2} + k, \frac{j(j+1)}{2} + k \right) \text{ if } k>1 \text{ else } \left(\frac{j(j+1)}{2} + 1, \frac{j(j+1)}{2} \right). \end{align*} This graph can be seen as a special case of \textsc{BigBird}\xspace, where the first type of edges are realizations of random attention and the second type of edges corresponds to locality. Also note that this graph satisfies the left-to-right constraint of the decoder, i.e.~no node attends to a node in the future. \paragraph{Embeddings and positional encodings} Our construction needs a different positional encoding $\operatorname{pos}_{\operatorname{Dec}} :\mathbb{N}\to\mathbb{Q}^{d}$ for the decoder: \begin{equation*} \begin{array}{rcllr} \operatorname{pos}_{\operatorname{Dec}}(i) & = & [ & 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& 1,g(i)+1,\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},h(i),0,0,0,0 & ] \end{array} \end{equation*} where $g(i) = \left\lfloor \frac{-1+\sqrt{1+8i}}{2} \right\rfloor$ and $h(i) = g(i+1) - g(i)$. Note that $h(i)$ reduces to a binary indicator variable $ \mathbf{1}\left\lbrace\frac{-1+\sqrt{1+8i}}{2}=\left\lfloor \frac{-1+\sqrt{1+8i}}{2} \right\rfloor\right\rbrace$.
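As a quick sanity check of this bookkeeping, the following short Python sketch (purely illustrative, not part of the construction; the helper names and the recovery of $(j,k)$ from $i$ are our own) computes $g(i)$, $h(i)$, and the neighborhoods of the sparse decoder graph $D$ exactly as defined above.
\begin{verbatim}
import math

def g(i):
    # Turing machine step simulated at decoder step i.
    return math.floor((-1 + math.sqrt(1 + 8 * i)) / 2)

def h(i):
    # h(i) = g(i+1) - g(i): equals 1 exactly when the simulated Turing
    # machine step advances between decoder steps i and i+1, else 0.
    return g(i + 1) - g(i)

def neighbors(i):
    # Sparse decoder graph D: write i = j(j+1)/2 + k with j >= 1 and
    # 1 <= k <= j+1, then apply the two edge relations defined above.
    j = g(i - 1)
    k = i - j * (j + 1) // 2
    first = k * (k + 1) // 2          # edge back to node k(k+1)/2
    second = i if k > 1 else i - 1    # locality edge (self for k > 1)
    return sorted({first, second})

for i in range(2, 12):
    print(f"i={i:2d}  g(i)={g(i)}  h(i)={h(i)}  attends to {neighbors(i)}")
\end{verbatim}
Every printed neighbor index is at most $i$, which is the left-to-right constraint mentioned above.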
\subsubsection*{Induction Setup} We next show how to construct the decoder layers to produce the sequence of outputs ${\bm{y}}_1,{\bm{y}}_2,\ldots$, where ${\bm{y}}_i$ is given by: \begin{equation*} \begin{array}{rcllr} {{\bm{y}}}_i & = & [ & \oh{q^{g(i)}},\oh{s^{g(i)}},c^{g(i)}, \\ &&& 0,\ldots,0, \\ &&& {\bm{0}}_s,0,\oh{w^{(i)}}, \\ &&& 0,0,0,0,0,u_1^{(i)},u_2^{(i)},u_3^{(i)},u_4^{(i)} & ] \end{array} \end{equation*} That is, at step $i$ of our sparse decoder, ${\bm{y}}_i$ contains the state of the Turing machine $M$ at time $g(i)$, the symbol under the head of $M$ at time $g(i)$, and the location of the head of $M$ at time $g(i)$. We also have a placeholder symbol $w$ and placeholder scalars $u_1,u_2,u_3,u_4$, whose roles will become clear from our construction. As the starting vector for the decoder, we consider \begin{equation*} \begin{array}{rcllr} {{\bm{y}}}_1 & = & [ & \oh{q_{\text{init}}},\oh{\#},0, \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0 & ] \end{array} \end{equation*} We assume that the head starts at $c^{(0)}=0$, the initial state is $q^{(0)}=q_{\text{init}}$, and $s^{(0)}=\#$, as we initialize from a clean tape. We show the correctness of our construction by an inductive argument: we describe the architecture piece by piece and, at the same time, show that for every $r\geq 1$ our architecture constructs ${\bm{y}}_{r+1}$ from the previous vectors $({\bm{y}}_1,\ldots,{\bm{y}}_r)$. Thus, assume that ${\bm{y}}_1,\ldots,{\bm{y}}_r$ satisfy the properties stated above. Since we are using positional encodings, the actual input for the first layer of the decoder is the sequence \[ {\bm{y}}_1+\operatorname{pos}_{\operatorname{Dec}}(1),\ {\bm{y}}_2+\operatorname{pos}_{\operatorname{Dec}}(2),\ \ldots,\ {\bm{y}}_{r}+\operatorname{pos}_{\operatorname{Dec}}(r). \] We denote by $\overline{{\bm{y}}}_i$ the vector ${\bm{y}}_i$ plus its positional encoding. Thus we have $\forall \ 1 \leq i \leq r$ that \begin{equation*} \begin{array}{rcllr} \overline{{\bm{y}}}_i & = & [ & \oh{q^{g(i)}},\oh{s^{g(i)}},c^{g(i)}, \\ &&& 0,\ldots,0, \\ &&& {\bm{0}}_s,0,\oh{w^{(i)}}, \\ &&& 1,g(i)+1,\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},h(i),u_1^{(i)},u_2^{(i)},u_3^{(i)},u_4^{(i)} &] \end{array} \end{equation*} \subsubsection{Layer 1: Simulate Transition Function} In this layer, we use the cross-attention between the encoder and decoder to access the input string, and a feed-forward network to simulate the transition function of $M$. The self-attention in~\Cref{eq:dec-self} is not used in this layer and we just produce the identity. This identity is achieved by setting all queries, keys, and values to 0, so that only the residual connection remains. Thus, we have ${{\bm{p}}}^1_i=\overline{{\bm{y}}}_i$. Since ${\bm{p}}^1_i$ is of the form $[\underline{\phantom{A}},\ldots,\underline{\phantom{A}},1,g(i)+1,\underline{\phantom{A}},\ldots,\underline{\phantom{A}}]$, we know by Lemma B.1 of \citet{Perez19} that if we use ${\bm{p}}^1_i$ to attend over the encoder we obtain \begin{equation*}\label{eq:first-layer2} \begin{array}{rcllr} \ensuremath{\textsc{CrossAttn}\xspace}({\bm{p}}^1_i,{\bm{K}}^{\textbf{e}},{\bm{V}}^{\textbf{e}}) & = & [ & 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& \oh{\alpha^{g(i)+1}},\beta^{g(i)+1},{\bm{0}}_s, \\ &&& 0,\ldots,0 & ] \end{array} \end{equation*} where $\alpha$ and $\beta$ are as defined in Eq. (21) of \citet{Perez19}.
Thus in~\Cref{eq:dec-ext} we finally produce the vector ${\bm{a}}^1_i$ given by \begin{equation}\label{eq:ai_1} \begin{array}{rcllr} {\bm{a}}^1_{i} & = && \ensuremath{\textsc{CrossAttn}\xspace}({\bm{p}}^1_i,{\bm{K}}^{\textbf{e}},{\bm{V}}^{\textbf{e}}) + {\bm{p}}^1_i \\ & = & [ & \oh{q^{g(i)}},\oh{s^{g(i)}},c^{g(i)}, \\ &&& 0,\ldots,0, \\ &&& \oh{\alpha^{g(i)+1}},\beta^{g(i)+1},\oh{w^{(i)}}, \\ &&& 1,g(i)+1,\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},h(i),u_1^{(i)},u_2^{(i)},u_3^{(i)},u_4^{(i)} & ] \end{array} \end{equation} As the final piece of the first decoder layer we use a function $O_1(\cdot)$ (\Cref{eq:dec-ff}) that satisfies the following lemma. \begin{lemma}[Lemma B.2 \citep{Perez19}]\label{lem:M} There exists a two-layer feed-forward network $O_1:\mathbb{Q}^d\to\mathbb{Q}^d$ such that, given the input vector ${\bm{a}}^1_{i}$ (\Cref{eq:ai_1}), it produces as output \begin{equation*} \begin{array}{rcllr} O_1({\bm{a}}^1_i) & = & [ & 0,\ldots,0, \\ &&& \oh{q^{g(i)+1}},\oh{v^{g(i)}},m^{g(i)},0,0,0,0 \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0 & ] \end{array} \end{equation*} \end{lemma} That is, the function $O_1(\cdot)$ simulates the transition $\delta(q^{g(i)},s^{g(i)})$ to construct $\oh{q^{g(i)+1}}$, $\oh{v^{g(i)}}$, and $m^{g(i)}$, besides some other linear transformations. Thus, the output of the first decoder layer is \begin{equation*} \begin{array}{rcllr} {\bm{z}}^1_i = O_1({\bm{a}}^1_i) + {\bm{a}}^1_i & = & [ & \oh{q^{g(i)}},\oh{s^{g(i)}},c^{g(i)}, \\ &&& \oh{q^{g(i)+1}},\oh{v^{g(i)}},m^{g(i)},0,0,0,0, \\ &&& \oh{\alpha^{g(i)+1}},\beta^{g(i)+1},\oh{w^{(i)}}, \\ &&& 1,g(i)+1,\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},h(i),u_1^{(i)},u_2^{(i)},u_3^{(i)},u_4^{(i)} & ] \end{array} \end{equation*} \subsubsection{Layer 2: Finding Head Node} In this layer, we only use the feed-forward network to evaluate the next location of the head. The self-attention and cross-attention are set to be the identity function, so ${\bm{a}}_i^2={\bm{p}}_i^2={\bm{z}}_i^1$. Recall that $c^{g(i)}$ is the cell to which $M$ is pointing at time $g(i)$, and that it satisfies the recursion $c^{g(i)+1}=c^{g(i)}+m^{g(i)}$, which can be expanded to see that $c^{g(i)+1}=m^{(0)}+m^{(1)}+\cdots + m^{g(i)}$. It is not difficult to see that a two-layer network with a non-linearity can compute $c^{g(i)+1}/(g(i)+1)$ and $c^{g(i)}/(g(i)+1)$ from $c^{g(i)}$, $m^{g(i)}$, and $1/(g(i)+1)$ using the relation $c^{g(i)+1}=c^{g(i)}+m^{g(i)}$. At the end of layer 2, we obtain \begin{equation*} \begin{array}{rcllr} {\bm{z}}^2_i \ = \ O_2({\bm{a}}^2_i) + {\bm{a}}^2_i & = & [ & \oh{q^{g(i)}},\oh{s^{g(i)}},c^{g(i)}, \\ &&& \oh{q^{g(i)+1}},\oh{v^{g(i)}},c^{g(i)+1},\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},\frac{c^{g(i)+1}}{g(i)+1},\frac{c^{g(i)}}{g(i)+1}, \\ &&& \oh{\alpha^{g(i)+1}},\beta^{g(i)+1},\oh{w^{(i)}}, \\ &&& 1,g(i)+1,\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},h(i),u_1^{(i)},u_2^{(i)},u_3^{(i)},u_4^{(i)} & ] \end{array} \end{equation*} \subsubsection{Layer 3: Distinguishing Node Type} \label{sec:appndx-new-layer} This is an additional layer (not present in the work of \citet{Perez19}), where we propagate computations in our sparse graph. In particular, we will use this layer to ``compute'' or accumulate state in intermediate nodes. We make this clear below. The self-attention and cross-attention are both set to be the identity function, so ${\bm{a}}_i^3={\bm{p}}_i^3={\bm{z}}_i^2$. In this layer, we only use the feed-forward network to either select the newly computed states or to carry over the previous states.
Using an idea similar to Lemma B.6 of \citet{Perez19}, we can construct a feed-forward network such that \[ O([{\bm{x}},{\bm{y}},{\bm{z}},b]) = \begin{cases} [{\bm{0}},{\bm{0}},{\bm{0}},0] & \text{if }b=1, \\ [{\bm{0}},{\bm{z}}-{\bm{y}},-{\bm{z}},0] & \text{if }b=0. \end{cases} \] The negative entries are generated to offset the contribution of the skip connection. We utilize such a network to switch the Turing machine state and position embedding of intermediate steps to the values received from the previous time step, and to do nothing for compute nodes. We use $h(i)$ as the flipping bit $b$. Thus, at the end of layer 3, we obtain \begin{equation*} \begin{array}{rcllr} {\bm{z}}^3_i \ = \ O_3({\bm{a}}^3_i) + {\bm{a}}^3_i & = & [ & 0,\ldots,0, \\ &&& \oh{\hat{q}^{(i)}},\oh{\hat{v}^{(i)}},\hat{c}^{(i)},\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},\frac{c^{g(i)+1}}{g(i)+1},\hat{u}_4^{(i)}, \\ &&& \oh{\hat{\alpha}^{(i)}},\hat{\beta}^{(i)},{\bm{0}}_s, \\ &&& 1,\hat{u}_1^{(i)},\hat{u}_2^{(i)},\hat{u}_3^{(i)},h(i),0,0,0,0 & ] \end{array} \end{equation*} where we used $h(i)$ for selecting between the old and new states. In particular, \vspace{-3mm} \begin{itemize}[leftmargin=6mm, itemsep=2mm, partopsep=0mm,parsep=0mm] \item We copy the input state and head position as is for intermediate nodes. We do not need to transition to the next Turing machine state in these nodes. \end{itemize} \begin{table}[h] \small \vspace{-3mm} \hspace*{12mm} \begin{tabular}{lclcl} $ \hat{q}^{(i)} = \begin{cases} q^{g(i)+1} & \text{if }h(i)=1 \\ q^{g(i)} & \text{if }h(i)=0 \end{cases} $ , & $ \hat{v}^{(i)} = \begin{cases} v^{g(i)} & \text{if }h(i)=1 \\ w^{(i)} & \text{if }h(i)=0 \end{cases} $ , & $ \hat{c}^{(i)} = \begin{cases} c^{g(i)+1} & \text{if }h(i)=1 \\ c^{g(i)} & \text{if }h(i)=0 \end{cases} $ . \end{tabular} \vspace{-3mm} \end{table} \begin{itemize}[leftmargin=6mm, itemsep=2mm, partopsep=0mm,parsep=0mm] \item To preserve the symbol under the head for intermediate nodes, we copy the previous symbol to the $\alpha$ location and set $\beta=g(i)+1$, as the symbol at the $\alpha$ location will be copied as the symbol under the head for the next transformer step by the final transformation layer whenever $\beta=g(i)+1$. Thus, we correctly preserve the previous symbol under the head, since the Turing machine does not transition at these nodes. For compute nodes, everything proceeds as usual. \end{itemize} \begin{table}[h] \small \vspace{-3mm} \hspace*{12mm} \begin{tabular}{lclcl} $ \hat{\alpha}^{(i)} = \begin{cases} \alpha^{g(i)+1} & \text{if }h(i)=1 \\ s^{g(i)} & \text{if }h(i)=0 \end{cases} $ , && $ \hat{\beta}^{(i)} = \begin{cases} \beta^{g(i)+1} & \text{if }h(i)=1 \\ g(i)+1 & \text{if }h(i)=0 \end{cases} $ . \end{tabular} \vspace{-3mm} \end{table} \begin{itemize}[leftmargin=6mm, itemsep=3mm, partopsep=0mm,parsep=0mm] \item Finally, for the intermediate nodes, we copy the position embedding corresponding to the current best symbol $w$, which is stored in $u_1,u_2,u_3$. For compute nodes, we let the position embedding correspond to the current Turing machine step. \end{itemize} \vspace{-3mm} \begin{table}[ht] \small \hspace*{12mm} \begin{tabular}{lclcl} $ \hat{u}_1^{(i)} = \begin{cases} g(i)+1 & \text{if }h(i)=1 \\ u_1^{(i)} & \text{if }h(i)=0 \end{cases} $ , && $ \hat{u}_2^{(i)} = \begin{cases} \frac{1}{(g(i)+1)} & \text{if }h(i)=1 \\ u_2^{(i)} & \text{if }h(i)=0 \end{cases} $ , \\\\ $ \hat{u}_3^{(i)} = \begin{cases} \frac{1}{(g(i)+1)^2} & \text{if }h(i)=1 \\ u_3^{(i)} & \text{if }h(i)=0 \end{cases} $ , && $ \hat{u}_4^{(i)} = \begin{cases} \frac{c^{g(i)}}{g(i)+1} & \text{if }h(i)=1 \\ u_4^{(i)} & \text{if }h(i)=0 \end{cases} $ .
\end{tabular} \end{table} For further simplification, note that $g(i+1) = g(i)$ if $h(i) = 0$ and $g(i+1) = g(i) + 1$ if $h(i) = 1$. With this fact, we can conclude that $\hat{q}^{(i)}=q^{g(i+1)}$ and $\hat{c}^{(i)}=c^{g(i+1)}$. Thus, we can write \begin{equation*} \begin{array}{rcllr} {\bm{z}}^3_i \ & = & [ & 0,\ldots,0, \\ &&& \oh{{q}^{g(i+1)}},\oh{\hat{v}^{(i)}},{c}^{g(i+1)},\frac{1}{g(i)+1},\frac{1}{(g(i)+1)^2},\frac{c^{g(i)+1}}{g(i)+1},\hat{u}_4^{(i)}, \\ &&& \oh{\hat{\alpha}^{(i)}},\hat{\beta}^{(i)},{\bm{0}}_s, \\ &&& 1,\hat{u}_1^{(i)},\hat{u}_2^{(i)},\hat{u}_3^{(i)},h(i),0,0,0,0 & ] \end{array} \end{equation*} \subsubsection{Layer 4: Finding next symbol on tape} To find the symbol on the tape under the next head position $c^{g(i)+1}$, we need to find what was last written at location $c^{g(i)+1}$. To facilitate this, following \citet{Perez19}, we define $\ell(j)$ to be the last time (previous to $j$) at which $M$ was pointing to position $c^{(j)}$, or $j-1$ if this is the first time that $M$ points to $c^{(j)}$. Recall that $j$ is the Turing machine step counter, which is different from the sparse transformer step $i$. \citet{Perez19} could utilize the full attention mechanism to find $v^{\ell(j+1)}$ in one go, but we have to do it over multiple steps owing to our sparse attention mechanism. We use query, key, and value functions similar to those used for full attention by \citet{Perez19}, $\forall i$: \begin{equation*} \begin{array}{rcllr} Q_4({\bm{z}}^3_i) & = & [ & 0,\ldots,0 \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& 0, \frac{c^{g(i)+1}}{g(i)+1}, \frac{1}{g(i)+1}, \frac{1}{3(g(i)+1)^2}, 0,0,0,0,0 & ] \\ \end{array} \end{equation*} \begin{equation*} \begin{array}{rcllr} K_4({\bm{z}}^3_i) & = & [ & 0,\ldots,0 \\ &&& 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& 0, \hat{u}_2^{(i)}, \hat{u}_4^{(i)}, \hat{u}_3^{(i)},0,0,0,0,0 & ] \\ V_4({\bm{z}}^3_i) & = & [ & 0,\ldots,0, \\ &&& 0,\ldots,0, \\ &&& {\bm{0}}_s,0,\oh{\hat{v}^{(i)}}, \\ &&& 0,0,0,0,0, \hat{u}_1^{(i)}, \hat{u}_2^{(i)}, \hat{u}_3^{(i)} , \hat{u}_4^{(i)} & ] \end{array} \end{equation*} It is clear that the three functions are linear transformations and thus they can be defined by feed-forward networks. Notice that the query vector is always formed using the current time step's position embedding, whereas the key and value vectors are formed using the copied-over entries for intermediate nodes and the current entries only for compute nodes. \citet{Perez19} find the desired $v^{\ell(j+1)}$ as $v^{m(j)}$ using full attention, where \begin{equation*} m(j) = \argmin_{m\in\{0,\ldots,j\}} \chi^j_m, \qquad \chi^j_m = \left| \langle Q_4({\bm{z}}_j^3), K_4({\bm{z}}_m^3) \rangle \right|. \end{equation*} Note that the minimization is only over Turing machine steps, i.e.~over compute nodes in our case. We show below that we can evaluate $m(j)$ by parts using the sparse attention mechanism. The main idea is to notice that the minimization problem $\min_{m\in\{0,\ldots,j\}} \chi^j_m$ can be expressed as $\min\{ \cdots \min\{ \min\{\chi^j_0, \chi^j_1\}, \chi^j_2 \}, \ldots, \chi^j_j \}$ by associativity of the min operation. By the definition of our graph $D$, at every intermediate node $i$ of the form $j(j+1)/2+k$, i.e.~where $k>0$, $g(i)=j$, and $h(i)=0$, we attend over node $k(k+1)/2$ and over the best-so-far value copied from $i-1$. The node $k(k+1)/2$ is never an intermediate node, as $h(k(k+1)/2)=1$ for all $k$, and it in fact corresponds to Turing machine step $k$. This lets us select the key and value corresponding to the minimum between node $k(k+1)/2$ and $i-1$.
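The following minimal Python sketch (ours, purely illustrative) abstracts away the attention weights and shows only this by-parts minimization: folding one score per intermediate step recovers the same $m(j)$ as a single global minimization would.
\begin{verbatim}
def fold_min(best, candidate):
    # One intermediate step: keep the smaller of the running best
    # (score, index) pair and the score read from one compute node.
    return best if best[0] <= candidate[0] else candidate

def argmin_by_parts(chi):
    # chi[m] plays the role of chi^j_m for m = 0, ..., j.
    best = (chi[0], 0)
    for m in range(1, len(chi)):      # one fold per intermediate step
        best = fold_min(best, (chi[m], m))
    return best[1]                    # equals m(j) at the next compute step

chi = [0.9, 0.2, 0.7, 0.05, 0.4]
assert argmin_by_parts(chi) == min(range(len(chi)), key=lambda m: chi[m])
\end{verbatim}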
Concretely, at node $i$ of the form $j(j+1)/2+k$ we will have evaluated $m(k)$ and selected the corresponding value: \begin{equation*} w^{(j(j+1)/2+k+1)} = \hat{v}^{m(k-1)} \end{equation*} and similarly for the $u$'s. So after going through all the intermediate nodes, at the next compute node, i.e.~when $k=j+1$, we obtain the minimum value over all of $0,1,\ldots,j$. This implies that at a compute node we are able to recover $\ell(g(i)+1)$ and its corresponding value, as shown in Lemma B.4 of \citet{Perez19}. Then we have that ${\bm{p}}^4_i$ is given by \begin{equation} \begin{array}{rcllr} {\bm{p}}^4_i & = && \ensuremath{\textsc{Attn}\xspace}_D ({\bm{Z}}_i^3) + {\bm{z}}^3_i \\ & = & [ & 0,\ldots,0, \\ &&& \oh{q^{g(i+1)}},\oh{\hat{v}^{(i)}},c^{g(i+1)},0,\frac{c^{g(i)+1}}{g(i)+1},\hat{u}_4^{(i)}, \\ &&& \oh{\hat{\alpha}^{(i)}},\hat{\beta}^{(i)},\oh{w^{(i+1)}}, \\ &&& 1,\hat{u}_1^{(i)},\hat{u}_2^{(i)},\hat{u}_3^{(i)},h(i),{u}_1^{(i+1)},{u}_2^{(i+1)},{u}_3^{(i+1)},{u}_4^{(i+1)} & ] \end{array} \end{equation} The cross-attention and feed-forward network are set to be the identity, so ${\bm{z}}_i^4={\bm{a}}_i^4={\bm{p}}_i^4$. \subsubsection{Final transformation} We finish our construction by using the final transformation function $F(\cdot)$ from the corresponding lemma of~\citet{Perez19}, with a slight modification. \begin{lemma}[Lemma B.5 \citep{Perez19}] \label{lem:yr+1} There exists a function $F:\mathbb{Q}^d\to \mathbb{Q}^d$ defined by a feed-forward network such that \begin{equation*} \begin{array}{rcllr} F({\bm{z}}^4_r) & = & [ & \oh{q^{g(r+1)}},\oh{s^{g(r+1)}},c^{g(r+1)}, \\ &&& 0,\ldots,0, \\ &&& {\bm{0}}_s,0,\oh{w^{(r+1)}}, \\ &&& 0,0,0,0,0,{u}_1^{(r+1)}, {u}_2^{(r+1)}, {u}_3^{(r+1)}, {u}_4^{(r+1)} ] \\ & = & & {\bm{y}}_{r+1} \end{array} \end{equation*} \end{lemma} The modification is to let $w, u_1, u_2, u_3, u_4$ pass through. This yields the desired input to the transformer at the next time step for both intermediate and compute nodes, thereby concluding our induction. \section{Limitations} \label{sec:apndx-limit} Finally, we show that sparse attention mechanisms cannot universally replace dense attention mechanisms, i.e.~there is no free lunch. We demonstrate a natural task that can be solved by the full attention mechanism in $O(1)$ layers. However, under standard complexity-theoretic assumptions, we show that this problem requires $\tilde{\Omega}(n)$ layers for any sparse attention mechanism with $\tilde{O}(n)$ edges (not just \textsc{BigBird}\xspace). (We use the standard notation $\tilde{\Omega}(\cdot)$ to hide the dependence on poly-logarithmic factors.) We consider the simple problem of finding the furthest vector for each vector in a given sequence of length $n$ and dimension $d \in \Omega(\log^2 n)$. The assumption on the dimension is mild, as in many situations the dimension (e.g.~$d=768$) is comparable to the sequence length $n$. \begin{task} Given $n$ unit vectors $\{u_1,\dots,u_n\}$, each in $\mathbb{R}^d$ where $d=\Theta(\log^2n)$, compute $f(u_1,\dots,u_n) \to (u_{1^*}, \dots, u_{n^*})$ where for a fixed $j \in [n]$, we define $ j^* = \argmax_{k} \|u_k - u_j\|_2^2$. \end{task} For unit vectors, finding the furthest vector boils down to a minimum inner product search. For a full attention mechanism with appropriate queries and keys, this task is easy, as we can evaluate all pairwise inner products.
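For concreteness, the following NumPy sketch (illustrative only; not part of the proof) mirrors the single matrix of pairwise inner products that a full-attention layer can compute: for unit vectors, $\|u_k-u_j\|_2^2 = 2 - 2\langle u_j, u_k\rangle$, so the furthest vector is exactly the one with the smallest inner product.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 16
U = rng.normal(size=(n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # unit vectors u_1, ..., u_n

inner = U @ U.T                       # all pairwise inner products, O(n^2 d)
furthest = inner.argmin(axis=1)       # j* for every j (min inner product)

# Same answer via the distance definition of Task 1.
dists = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
assert (furthest == dists.argmax(axis=1)).all()
\end{verbatim}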
The impossibility result for sparse attention follows from hardness results stemming from the Orthogonal Vectors Conjecture (OVC) \citep{abboud2015tight,abboud2014consequences,williams2005new,backurs2015edit}, a widely used assumption in fine-grained complexity. Informally, it states that one cannot determine if the minimum inner product among $n$ Boolean vectors is $0$ in subquadratic time. \begin{conjecture}[Orthogonal Vectors Conjecture] \label{conj:ovc} For every $\epsilon>0$, there is a $c \geq 1$ such that, given $n$ Boolean vectors in $d$ dimensions, one cannot determine if there is a pair of orthogonal vectors in $O(n^{2-\epsilon})$ time on instances with $d \geq c \log n$. \end{conjecture} Using~\cref{conj:ovc}, we give a reduction showing that any transformer $g \in \mathcal{T}_D^{H=O(d),m=O(d),q=O(d)}$, for any sparse directed graph $D$, that solves Task $1$ must have a superlinear number of layers. \begin{proposition} There exists a single-layer full-attention network $g\in\mathcal{T}^{H=1,m=2d,q=0}$ that can solve Task 1, i.e. $g(u_1,...,u_n) = [u_{1^*},\dots, u_{n^*}]$, whereas any sparse-attention network in $\mathcal{T}_D^{H=O(d),m=O(d),q=O(d)}$ with graph $D$ having $\tilde{O}(n)$ edges (i.e.~inner product evaluations) would require $\tilde{\Omega}(n^{1-o(1)})$ layers. \end{proposition} \begin{proof} We will break this proof into two parts: \paragraph{Part 1: The full attention mechanism can solve the problem in $O(1)$ layers} We begin by providing an explicit construction of a single-layer full self-attention network that solves Task 1. \textbf{Step 1} We embed each $u_i$ in the input into $\mathbb{R}^{2d}$ as follows: \begin{equation} x_i := E(u_i) = [u_i; 0] \end{equation} \textbf{Step 2} Construct query, key, value functions as follows: \begin{equation} \begin{aligned} Q([a; b]) &= -a \\ K([a; b]) &= a \\ V([a; b]) &= [0; a] \\ \end{aligned} \end{equation} Then $\mathrm{Attn}(Q(x_i), K(X), V(X)) = [0; u_{\argmax_{j} \langle -u_i, u_j \rangle}] $, so that \begin{equation} a_i = \mathrm{Attn}(Q(x_i), K(X), V(X)) + x_i = [u_i; u_{\argmax_{j} \langle -u_i, u_j \rangle}] = [u_i; u_{i^*}] \end{equation} \textbf{Step 3} Let $O(a_i) = 0$; then the output is $z_i = [u_i; u_{i^*}]$, as desired. To complete the argument, observe that it now takes only $O(n)$ additional inner products to check if there is a pair of orthogonal vectors, as we only need to examine $\bkt{u_i}{u_{i^*}}$ for each $i$. \paragraph{Part 2: Every Sparse Attention Mechanism will need $\tilde{\Omega}(n^{1-\epsilon})$ layers} We prove by contradiction that Task 1 cannot be solved with a sublinear number of layers by any $g\in\mathcal{T}_D^{H=O(d),m=O(d),q=O(d)}$ whose sparse-attention graph $D$ has $\tilde{O}(n)$ edges. Suppose we can solve Task 1 using a network $g\in\mathcal{T}_D^{H=O(d),m=O(d),q=O(d)}$ that has $l$ layers. Recall that all the computation we do in one layer is: \begin{equation} \begin{aligned} a_i &= \ensuremath{\textsc{Attn}\xspace}_D(Q(x_i), K(X_{N(i)}), V(X_{N(i)})) + x_i \\ x_i &= O(a_i) + a_i \end{aligned} \end{equation} where $\mathrm{Attn}_D$ is defined in \cref{AT_app}. Thus, the total computation per layer is $\tilde{O}(nd^3)$, and consequently $\tilde{O}(nld^3)$ for the whole network consisting of $l$ layers. We can then use the result of Task 1 to solve the orthogonal vectors (OV) problem (defined in~\Cref{conj:ovc}) in linear time. So in total, we will be able to solve any instance of OV in $\tilde{O}(nld^3)$ time.
Now if $l=O(n^{1-\epsilon})$ for any $\epsilon > 0$ and $d=\Theta(\log^2 n)$, then we would be able to solve OV in $\tilde{O}(n^{2-\epsilon})$ time, which contradicts~\Cref{conj:ovc}. Therefore, we need at least $\tilde{\Omega}(n^{1-o(1)})$ layers. \end{proof} \section{Implementation details} \label{sec:apndx-impl} We optimize the code for modern hardware. Hardware accelerators like GPUs and TPUs truly shine on coalesced memory operations, which load blocks of contiguous bytes at once. Thus, it is not very efficient to have small, sporadic look-ups caused by a sliding window or random element queries. We alleviate this by ``blockifying'' the lookups. \begin{figure}[b] \centering \begin{subfigure}{.24\textwidth} \includegraphics[width=\linewidth]{figures/BloackRandAttn.pdf} \caption{Random Attention \label{fig:apndx_rnd_atn}} \end{subfigure}\hfill \begin{subfigure}{.24\textwidth} \includegraphics[width=\linewidth]{figures/BlockWindAttn.pdf} \caption{Window Attention \label{fig:apndx_wnd:atn}} \end{subfigure}\hfill \begin{subfigure}{.24\textwidth} \includegraphics[width=\linewidth]{figures/GlobalAttention.pdf} \caption{Global Attention \label{fig:apndx_gbl_atn}} \end{subfigure}\hfill \begin{subfigure}{.24\textwidth} \includegraphics[width=\linewidth]{figures/BlockBigBird.pdf} \caption{\textsc{BigBird}\xspace \label{fig:apndx_bigb_atn}} \end{subfigure} \hfill \caption{Building blocks of the \emph{block-attention} mechanism used in \textsc{BigBird}\xspace with block size = $2$. This implies the attention matrix is split into blocks of size $2 \times 2$. All the previous \textsc{BigBird}\xspace parameters work on each block as a unit. White color indicates absence of attention. (a) random attention with $r=1$, (b) sliding window attention with $w=3$, (c) global attention with $g=1$, (d) the combined \textsc{BigBird}\xspace model.} \label{fig:apndx_my_label} \end{figure} \paragraph{GPU/TPU and Sparsity} Ideally, if the adjacency matrix $A$ described in~\Cref{sec:arch} is sparse, one would hope this would be sufficient to speed up the implementation. Unfortunately, it is well known~\citep{gray2017gpu,yao2019balanced} that such sparse multiplications cannot be implemented efficiently on GPUs. GPUs have thousands of cores performing operations in parallel. Thus, we cannot efficiently perform the sparse matrix multiplication mentioned in~\Cref{sec:arch}. As a result, we propose to first blockify the attention pattern, i.e.~we pack sets of queries and keys together and then define attention on these blocks. It is easier to explain this process using the example shown in \Cref{fig:apndx_my_label}. Suppose there are $12$ query and $12$ key vectors to attend to. Using a block size of $2$, we split the query matrix into $12/2 = 6$ blocks and similarly the key matrix into $12/2 = 6$ blocks. Then the three building components of \textsc{BigBird}\xspace are defined on the block matrix. In particular, the three components are: \begin{enumerate} \item Random attention: Each query block attends to $r$ random key blocks. In~\Cref{fig:apndx_rnd_atn}, $r=1$ with block size $2$. This implies that each query block of size $2$ randomly attends to a key block of size $2$. \item Window local attention: While creating the blocks, we ensure that the number of query blocks and the number of key blocks are the same. This helps us define the block window attention. Every query block with index $j$ attends to the key blocks with indices $j - (w-1)/2$ through $j + (w-1)/2$, including key block $j$.
In~\Cref{fig:apndx_wnd:atn}, $w=3$ with block size $2$. This means that each query block $j$ (containing $2$ queries) attends to key blocks $j-1$, $j$, and $j+1$. \item Global attention: Global attention remains the same as defined in~\Cref{sec:arch}, but we compute it in terms of blocks. In \Cref{fig:apndx_gbl_atn}, $g=1$ with block size $2$. For \textsc{BigBird}\xspace-\textsc{itc} this implies that one query block and one key block attend to everything. \end{enumerate} The resulting overall attention matrix is shown in~\Cref{fig:apndx_bigb_atn}. Unfortunately, simply computing these attention scores by multiplying arbitrary pairs of query and key vectors would require the use of a gather operation, which is inefficient. Upon closer examination of the window and global attention, we observe that we can compute these attention scores without using a gather operation. \begin{figure} \vspace{-15mm} \centering \begin{subfigure}[b]{0.84\textwidth} \centering \includegraphics[trim=10mm 15mm 10mm 50mm, clip,width=\textwidth,page=1]{figures/BigBird-Figures.pdf} \caption{Full all-pair attention can be obtained by direct matrix multiplication between the query and key matrix. Groupings are shown only for guidance.} \label{fig:apndx_full_attn_score} \end{subfigure} \hfill \begin{subfigure}[b]{0.84\textwidth} \centering \includegraphics[trim=10mm 36mm 10mm 50mm, clip,width=\textwidth,page=2]{figures/BigBird-Figures.pdf} \caption{Block diagonal attention can be computed by ``blockifying'' the query and key matrices.} \label{fig:apndx_block_diag_attn_score} \end{subfigure} \hfill \begin{subfigure}[b]{0.84\textwidth} \centering \includegraphics[trim=10mm 10mm 10mm 10mm, clip,width=\textwidth,page=3]{figures/BigBird-Figures.pdf} \caption{Window local attention obtained by ``blockifying'' the query/key matrices, copying the key matrix, and rolling the resulting key tensor (obtaining the rolled key-block tensor is illustrated in detail in~\Cref{fig:apndx_copy-roll}). This ensures that every query attends to at least one block and at most two blocks of keys of size $b$ on each side.} \label{fig:apndx_wind_diag_attn_score} \end{subfigure} \begin{subfigure}[b]{0.84\textwidth} \centering \includegraphics[trim=10mm 10mm 10mm 10mm, clip,width=\textwidth,page=4]{figures/BigBird-Figures.pdf} \caption{Window + random attention obtained by following the procedure above along with gathering some random key blocks.} \label{fig:apndx_rand_att} \end{subfigure} \caption{Idea behind fast sparse attention computation in \textsc{BigBird}\xspace.} \label{fig:apndx_bigb_calc_idea} \vspace{-2mm} \end{figure} Recall that full dense attention scores can be calculated by a simple matrix product of the query and key matrices at a cost of $O(n^2d)$, as illustrated in~\Cref{fig:apndx_full_attn_score}. Now note that if we blockify the query and key matrices and multiply, then with only $O(nbd)$ cost we obtain the block diagonal portion of the attention scores, as depicted in~\Cref{fig:apndx_block_diag_attn_score}. To elaborate, let us assume that $Q, K \in \mathbb{R}^{n \times d}$ are the query and key matrices corresponding to the $n$ tokens, such that $Q_{i.} = x_i W_Q$ and $K_{i.} = x_i W_K$. We reshape the $n \times d$ query matrix $Q$ and key matrix $K$ along the sequence length to obtain $\lceil n/b \rceil \times b \times d$ tensors $Q'$ and $K'$, respectively.
Now we multiply the two tensors as \begin{equation} A_{jst} = \sum_{u} Q'_{jsu} K'_{jtu}, \qquad j=1,\ldots,\lceil n/b \rceil \end{equation} The resulting tensor $A$ of size $\lceil n/b \rceil \times b \times b $ can be reshaped to correspond to the block diagonal portion of the full attention pattern. Now to extend the attention from the block diagonal to a window, i.e.~where the query block with index $j$ attends to the key blocks with indices $j - (w-1)/2$ through $j + (w-1)/2$, we make $w$ copies of the reshaped key tensor $K'$. We ``roll'' each copy of the key-block tensor incrementally along the first axis of length $\lceil n/b \rceil$, as illustrated in~\Cref{fig:apndx_copy-roll}. Multiplying these $w$ rolled key-block tensors with the query-block tensor yields the desired window attention scores (\Cref{fig:apndx_wind_diag_attn_score}). Likewise, for the global component, we can always include the first $g$ blocks of the key tensor, corresponding to the global tokens. Finally, for the random attention, which is very small ($r=3$ for all of our experiments), we resort to using gather ops (\Cref{fig:apndx_rand_att}). Also note that, by design, each query block attends to exactly $r$ random blocks. Thus, the result of all three components is a compact dense tensor $K''$ of size $\lceil n/b \rceil \times (g+w+r)b \times d$, as shown in~\Cref{fig:apndx_blk_cmp}. Computing the final attention scores then boils down to a dense tensor multiplication, at which TPUs/GPUs are very efficient. Specifically, we need to multiply $Q'$ (size: $\lceil n/b \rceil \times b \times d$) and $K''$ (size: $\lceil n/b \rceil \times (g+w+r)b \times d$) with a cost of $O(n(g+w+r)bd)$ to yield the desired attention score tensor of size $\lceil n/b \rceil \times b \times (g+w+r)b $, which can be reshaped to obtain all the attention scores according to the \textsc{BigBird}\xspace pattern. \begin{figure} \vspace{-7mm} \centering \includegraphics[trim=10mm 15mm 10mm 10mm, clip,width=0.7\textwidth,page=5]{figures/BigBird-Figures.pdf} \caption{Construction of the rolled key-block tensor. Make $w$ copies of the key matrix. Index the copies as $-(w-1)/2 \leq j \leq (w-1)/2$. Roll the $j^\text{th}$ copy by $j$ blocks: a positive roll circularly shifts entries to the left, and a negative roll shifts them to the right. Finally, reshape by grouping the blocks along a new axis to obtain the key-block tensor. For illustration purposes, $w=3$ is chosen.} \label{fig:apndx_copy-roll} \vspace{-2mm} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.8\linewidth]{figures/EfficientMultiplication.pdf} \caption{Overview of the \textsc{BigBird}\xspace attention computation. Structured block sparsity helps in compactly packing the operations of sparse attention, thereby making our method efficient on GPUs/TPUs. On the left, we depict the transformed dense query and key tensors. The query tensor is obtained by simply blocking and reshaping, while the final key tensor is obtained by concatenating three transformations: the first (green) columns, corresponding to global attention, are fixed. The middle (blue) columns correspond to window local attention and can be obtained by appropriately rolling, as illustrated in~\Cref{fig:apndx_copy-roll}. For the final (orange) columns, corresponding to random attention, we need to use the computationally inefficient gather operation.
Dense multiplication between the query and key tensors efficiently calculates the sparse attention pattern (except the first row-block, which is computed by direct multiplication), using the ideas illustrated in~\Cref{fig:apndx_bigb_calc_idea}. The resultant matrix on the right is same as that shown in~\Cref{fig:apndx_bigb_atn}. } \label{fig:apndx_blk_cmp} \vspace{-3mm} \end{figure} \section{NLP experiments details} \label{sec:apndx-expt-nlp} \subsection{MLM Pretraining} \label{sec:app-expt-nlp:mlm} We use four publicly available datasets Books \citep{zhu2015aligning}, CC-News \citep{guu2020realm}, Stories \citep{trinh2018simple} and Wikipedia to pretrain \textsc{BigBird}\xspace. We borrow the sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). We split any document longer than $4096$ into multiple documents and we join documents that were much smaller than $4096$. Following the original BERT training, we mask $15\%$ of tokens in these four datasets, and train to predict the mask. We warm start from RoBERTa's checkpoint. We train two different models: \textsc{BigBird}\xspace-\textsc{itc}-base and \textsc{BigBird}\xspace-\textsc{etc}-base. The hyper-parameters for these two models are given in~\Cref{tab:app_mlm_param}. In all experiments we use a learning rate warmup over the first 10,000 steps, and linear decay of the learning rate. Similar to the norm, we trained a large version of model as well, which has 24 layers with 16 heads and hidden dimension of 1024. Following the observation from RoBERTa, we pretrain on a larger batch size of 2048 for this size. For \textsc{BigBird}\xspace-\textsc{itc} the block length was kept same as base size, but for \textsc{BigBird}\xspace-\textsc{etc} the block length was almost doubled to 169. All the remaining parameters were the same. \begin{table}[bht] \small \centering \begin{tabular}{@{}l c r c r @{}} \toprule Parameter & & \textsc{BigBird}\xspace-\textsc{itc} & & \textsc{BigBird}\xspace-\textsc{etc} \\ \midrule Block length, $b$ & & $64$ & & 84\\ $\#$ of global token, $g$ & & $2\times b$ & & $256$ \\ Window length, $w$ & & $3\times b$ & & $3\times b$ \\ $\#$ of random token, $r$ & & $3\times b$ & & $0$ \\ Max. sequence length & & $4096$ & & $4096$\\ $\#$ of heads & & $12$ & & $12$\\ $\#$ of hidden layers & & $12$ & & $12$ \\ Hidden layer size & & $768$ & & $768$ \\ Batch size & & $256$ & & $256$ \\ Loss & & MLM & & MLM \\ Activation layer & & gelu & & gelu \\ Dropout prob & & $0.1$ & & $0.1$ \\ Attention dropout prob & & $0.1$ & & $0.1$ \\ Optimizer & & Adam & & Adam\\ Learning rate & & $10^{-4}$ & & $10^{-4}$\\ Compute resources & & $8 \times 8$ TPUv3 & & $8 \times 8$ TPUv3\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Hyperparameters for the two \textsc{BigBird}\xspace base models for MLM.} \label{tab:app_mlm_param} \end{table} \begin{table}[b] \centering \small \parbox{.45\linewidth}{ \centering \begin{tabular}{@{}lrr@{}} \toprule Dataset & $\#$ tokens & Avg. doc len. 
\\ \midrule Books \citep{zhu2015aligning} & $1.0$B & $37$K \\ CC-News \citep{guu2020realm} & $7.4$B & $561$ \\ Stories \citep{trinh2018simple} & $7.7$B & $8.2$K \\ Wikipedia & $3.1$B & $592$ \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Dataset used for pre training.} \label{tab:mlm_data} } \quad \quad \parbox{.45\linewidth}{ \centering \begin{tabular}{@{}lcc@{}} \toprule Model & Base & Large \\ \midrule RoBERTa (sqln: 512) & 1.846 & 1.496 \\ Longformer (sqln: 4096) & 1.705 & 1.358 \\ \textsc{BigBird}\xspace-\textsc{itc} (sqln: 4096) & 1.678 & 1.456 \\ \textsc{BigBird}\xspace-\textsc{etc} (sqln: 4096) & \textbf{1.611} & \textbf{1.274} \\ \bottomrule \end{tabular} \vspace{2mm} \caption{MLM performance on held-out set.} \label{tab:mlm_bpc} } \end{table} \begin{table} \small \centering \begin{tabular}{@{}lrrrrrr@{}} \toprule & & \multicolumn{2}{c}{Instances} & & \multicolumn{2}{c}{Instance Length} \\ \cmidrule{3-4} \cmidrule{6-7} Dataset & & Training & Dev & & Median & Max \\ \midrule HotpotQA-distractor \citep{yang2018hotpotqa} & & $90447$ & $7405$ & & $1227$ & $3560$ \\ Natural Questions \citep{kwiatkowski2019natural} & & $307373$ & $7830$ & & $3258$ & $77962$ \\ TriviaQA \citep{JoshiTriviaQA2017} & & $61888$ & $7993$ & & 4900 & 32755 \\ WikiHop \cite{welbl2018constructing} & & $43738$ & $5129$ & & $1541$ & $20337$ \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Question Answering Datasets} \label{tab:qa_data} \end{table} \begin{table}[t] \small \centering \begin{tabular}{@{}l c rr c rr c rr c rr@{}} \toprule Parameter & & \multicolumn{2}{c}{HotpotQA} & & \multicolumn{2}{c}{NaturalQ} & & \multicolumn{2}{@{}c@{}}{TriviaQA} & & \multicolumn{2}{@{}c@{}}{WikiHop}\\ \cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13} Global token location & & \textsc{itc} &\textsc{etc} & & \textsc{itc} & \textsc{etc} & &\textsc{itc} & \textsc{etc}& &\textsc{itc} & \textsc{etc} \\ \midrule $\#$ of global token, $g$ & & $128$ & $256$ & & $128$ & $230$ & & $128$ & $320$ & & $128$ & $430$ \\ Window length, $w$ & & $192$ & $252$ & & $192$ & $252$ & & $192$ & $252$ & & $192$ & $252$ \\ $\#$ of random token, $r$ & & $192$ & $0$ & & $192$ & $0$ & & $192$ & $0$ & & $192$ & $0$ \\ Max. 
sequence length & & $4096$ & $4096$ & & $4096$ & $4096$ & & $4096$ & $4096$ & & $4096$ & $4096$\\ $\#$ of heads & & $12$ & $12$ & & $12$ & $12$ & & $12$ & $12$ & & $12$ & $12$\\ $\#$ of hidden layers & & $12$ & $12$ & & $12$ & $12$ & & $12$ & $12$ & & $12$ & $12$ \\ Hidden layer size & & $768$ & $768$ & & $768$ & $768$ & & $768$ & $768$ & & $768$ & $768$ \\ Batch size & & $32$ & $32$ & & $128$ & $128$ & & $32$ & $32$ & & $64$ & $64$ \\ \multirow{2}{*}{Loss} & & \multicolumn{2}{c}{cross-entropy} & & \multicolumn{2}{c}{cross-entropy} & & \multicolumn{2}{@{}c@{}}{cross-entropy} & & \multicolumn{2}{@{}c@{}}{cross-entropy} \\ & & \multicolumn{2}{c}{golden spans} & & \multicolumn{2}{c}{golden spans} & & \multicolumn{2}{@{}c@{}}{noisy spans~\citep{clark2017simple}} & & \multicolumn{2}{@{}c@{}}{ans choices} \\ Compute resources & & \multicolumn{2}{c}{$4 \times 2$ TPUv3} & & \multicolumn{2}{c}{$4 \times 8$ TPUv3} & & \multicolumn{2}{@{}c@{}}{$4 \times 2$ TPUv3} & & \multicolumn{2}{@{}c@{}}{$4 \times 4$ TPUv3}\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Hyperparameters of base \textsc{BigBird}\xspace model used for Question Answering i.e.~the numbers reported in \Cref{tab:QADev}} \label{tab:app_qa_dev} \end{table} \begin{table}[t] \small \centering \begin{tabular}{@{}l c r c r c r c r@{}} \toprule Parameter & & HotpotQA & & NaturalQ & & TriviaQA & & WikiHop \\ \midrule Global token location & & \textsc{etc} & & \textsc{etc} & & \textsc{etc} & &\textsc{etc} \\ $\#$ of global token, $g$ & & $256$ & & $230$ & & $320$ & & $430$ \\ Window length, $w$ & & $507$ & & $507$ & & $507$ & & $507 $ \\ $\#$ of random token, $r$ & & $0$ & & $0$ & & $0$ & & $0$ \\ Max. sequence length & & $4096$ & & $4096$ & & $4096$ & & $4096$\\ $\#$ of heads & & $16$ & & $16$ & & $16$ & & $16$\\ $\#$ of hidden layers & & $24$ & & $24$ & & $24$ & & $24$ \\ Hidden layer size & & $1024$ & & $1024$ & & $1024$ & & $1024$ \\ Batch size & & $32$ & & $64$ & & $32$ & & $64$ \\ Loss & & cross-entropy & & cross-entropy & & cross-entropy & & cross-entropy \\ Num epochs & & $\{5, 9\}$ & & $\{3,5\}$ & & $\{3,5\}$ & & $\{5, 10\}$ \\ Optimizer & & Adam & & Adam & & Adam & & LAMB\\ Learning rate & & $3\times 10^{-5}$ & & $\{5, 10\}\times 10^{-5}$ & & $\{3, 5\}\times 10^{-5}$ & & $\{2,5 \}\times 10^{-5}$ \\ Compute resources & & $4 \times 4$ TPUv3 & & $4 \times 8$ TPUv3 & & $4 \times 4$ TPUv3 & & $4 \times 8$ TPUv3\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Hyperparameters of large \textsc{BigBird}\xspace model for Question Answering submitted for test i.e.~the numbers reported in~\Cref{tab:QATest}} \label{tab:app_qa} \end{table} \subsection{Question Answering} \label{sec:app-expt-nlp:qa} The detailed statistics of the four datasets used are given in~\Cref{tab:qa_data}. All the hyperparameters for \textsc{BigBird}\xspace, used for creating~\Cref{tab:QADev} are shown in~\Cref{tab:app_qa_dev} and those submitted to get~\Cref{tab:QATest} are shown in \Cref{tab:app_qa}. We use two types of regularization in training: \begin{itemize}[leftmargin=12mm, itemsep=0mm, partopsep=0pt,parsep=0pt] \item We used a variant of contrastive predictive coding~\citep{oord2018representation} as a dual encoder model. \item We use position embedding for \textsc{itc} and relative position encoding~\citep{shaw2018self} for \textsc{etc}. \end{itemize} Next, we will mention the dataset/task specific part of the model. \paragraph{HotpotQA} The data consists of each question with multiple evidence paragraphs. 
We filtered out 16 QA examples where the answer was not in the given evidence. For \textsc{BigBird}\xspace-\textsc{itc}, we use the first $128$ tokens as global tokens. For \textsc{BigBird}\xspace-\textsc{etc}, we have one global token for each question token, one for each evidence paragraph, and one for each sentence within the paragraph, for a maximum of $256$ global tokens. We use a dense layer on the output corresponding to the global token of each evidence paragraph to predict whether it is a supporting fact, with a threshold over the output logits. The answer type (yes/no/span) is predicted with a single dense layer from the global CLS token. For span-based answers, the spans are predicted with dense layers on the sequence, with the distance between start and end positions constrained to be no more than 30 words. The spans are ranked by the sum of the start and end logits. \paragraph{Natural Questions} Here the data also consists of questions with supporting evidence, but in the form of a single, potentially long document rather than multiple paragraphs. We largely follow the setup of \citet{alberti2019bert}. For documents longer than 4096 tokens, a sliding window approach with a stride of 2048 is used. We use a CLS token at the beginning, followed by the question, a separator token, and the document as input. For \textsc{BigBird}\xspace-\textsc{itc}, we make the first $128$ tokens global. For \textsc{BigBird}\xspace-\textsc{etc}, we have global tokens for the CLS token and the question, and one global token for each of the paragraphs. We train four predictors at the final layer to predict long answer start, long answer end, short answer start, and short answer end, respectively. Instead of independently predicting the start and end of an answer, we first predict the start and then predict the best end location beyond the start. For short answers, we limit the distance between the start and end positions to no more than 38 words. The answer type (null, yes, no, short, long) is predicted from the CLS token's output embedding. When the logit for a yes/no answer is higher than the logits for the short, long, or null answer, we replace the short answer with the corresponding yes/no text. \paragraph{TriviaQA} The data consists of question-answer pairs with Wikipedia articles as the ``noisy'' supporting evidence. We call them noisy because the given Wikipedia articles may or may not contain the answer. Moreover, the answer entities are not annotated with their exact spans in the article; rather, all occurrences found using fuzzy string matching are listed. We use a CLS token at the beginning, followed by the question, a separator token, and the document as input. For \textsc{BigBird}\xspace-\textsc{itc}, we make the first $128$ tokens global. For \textsc{BigBird}\xspace-\textsc{etc}, we have global tokens for the CLS token and the question, and one global token for each sentence, up to a maximum of 320 global tokens. Given the noisy nature of the answer spans, we follow~\citet{clark2017simple} for training. We use a dense layer on the sequence to predict the answer span for each article independently, with the distance between start and end positions constrained to be no more than 16 words. For each article, the span with the maximum start logit + end logit is chosen. We then normalize over all the documents associated with that question. \paragraph{WikiHop} For each question in WikiHop, we are given up to $79$ candidates and $63$ supporting paragraphs.
In our \textsc{BigBird}\xspace-\textsc{itc} model, following~\citet{beltagy2020longformer}, we concatenate the answers and the question with special tokens, \texttt{[q] Question [/q] [ans] Ans1 [/ans] $\ldots$ [ans] AnsN [/ans]}, along with the context. As the start of the text always contains the question followed by the answers, we make the first $128$ tokens attend globally. In the \textsc{BigBird}\xspace-\textsc{etc} model, we do not need to insert the special \texttt{[ans]}, \texttt{[/ans]}, etc., as we design the global tokens appropriately. Along with global tokens for the question, we have one per candidate answer, up to a maximum of 430. Further, we linked answer tokens to their mentions using relative position labels. Lastly, we use a dense layer that takes in the output vector corresponding to a candidate answer and predicts a score for that candidate being the correct answer. We apply this dense layer to each candidate independently, and the candidate with the best score is picked as our final answer. It is worthwhile to note that although the explicitly designed attention connections in \textsc{etc} work slightly better, the random-connection-based \textsc{itc} is quite competitive. \subsection{Relationship to Contemporary Work} \label{sec:app-related-work} \paragraph{Longformer} \citet{child2019generating} introduced a localized sliding window to reduce computation. A more recent version, which includes localized sliding windows and global tokens, was introduced independently by Longformer~\citep{beltagy2020longformer}.
Although \textsc{BigBird}\xspace contains additional random tokens, there are also differences in the way global and local tokens are realized. In particular, even when there are no random tokens, as in the configuration used to get SoTA in question answering, there are two key differences between Longformer and \textsc{BigBird}\xspace-\textsc{etc} (see \citep{ainslie2020etc}): \begin{enumerate} \item We use global-local attention with relative position encodings, which enables it to better handle structured inputs. \item Unlike Longformer, we train the global tokens using a CPC loss and learn their use during finetuning. \end{enumerate} \subsection{Classification} \label{sec:app-expt-nlp:cls} We try two types of classification tasks. \begin{table}[bht] \small \centering \begin{tabular}{@{}l c r r r r r @{}} \toprule Parameter & & IMDb & Arxiv & Patents & Hyperpartisan & Yelp-5 \\ \midrule Batch size & & 64 & 64 & 64 & 32 & 32 \\ Learning rate & & $1\times 10^{-5}$ & $3\times 10^{-5}$ & $5\times 10^{-5}$ & $5\times 10^{-6}$ & $2\times 10^{-5}$\\ Num epochs & & 40 & 10 & 3 & 15 & 2 \\ TPUv3 slice & & $4 \times 4$ & $4 \times 4$ & $4 \times 4$ & $4 \times 2$ & $4 \times 8$ \\ $\#$ of heads & & \multicolumn{4}{c}{12} & 16\\ $\#$ of hidden layers & & \multicolumn{4}{c}{12} & 24 \\ Hidden layer size & & \multicolumn{4}{c}{768} & $1024$ \\ Block length, $b$ & & \multicolumn{5}{c}{64} \\ Global token location & & \multicolumn{5}{c}{\textsc{itc}} \\ $\#$ of global token, $g$ & & \multicolumn{5}{c}{$2\times b$} \\ Window length, $w$ & & \multicolumn{5}{c}{$3\times b$ } \\ $\#$ of random token, $r$ & & \multicolumn{5}{c}{$3\times b$} \\ Max. sequence length & & \multicolumn{5}{c}{4096}\\ Vocab size & & \multicolumn{5}{c}{$50358$}\\ Activation layer & & \multicolumn{5}{c}{gelu} \\ Dropout prob & & \multicolumn{5}{c}{0.1} \\ Attention dropout prob & & \multicolumn{5}{c}{0.1} \\ Loss & & \multicolumn{5}{c}{cross-entropy} \\ Optimizer & & \multicolumn{5}{c}{Adam}\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Hyperparameters for document classification.} \label{tab:app_dc} \end{table} \begin{table} \centering \small \begin{tabular}{@{}lrrrrr@{}} \toprule Model & IMDb~\citep{maas2011learning} & Yelp-5~\citep{zhang2015character} & Arxiv~\citep{he2019long} & Patents~\citep{lee2020patent} & Hyperpartisan~\citep{kiesel2019semeval}\\ \midrule \# Examples & 25000 & 650000 & 30043 & 1890093 & 645 \\ \# Classes & 2 & 5 & 11 & 663 & 2 \\ Excess fraction & 0.14 & 0.04 & 1.00 & 0.90 & 0.53 \\ \midrule SoTA & \citep{thongtan2019sentiment} 97.4 & \citep{abreu2019hierarchical} 73.28 & \citep{olson2019adapting} 87.96 & \citep{olson2019adapting} 69.01 & \citep{jiang2019team} 90.6 \\ RoBERTa & $95.0 \pm 0.2$ & 71.75 & 87.42 & 67.07 & $87.8 \pm 0.8$ \\ \textsc{BigBird}\xspace & $95.2\pm0.2$ & 72.16 & \textbf{92.31} & 69.30 & $\mathbf{92.2 \pm 1.7}$ \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Classification results. We report the F1 micro-averaged score for all datasets.
Experiments on the smaller IMDb and Hyperpartisan datasets are repeated 5 times, and the average performance is presented along with the standard deviation.} \label{tab:cls} \end{table} \begin{table}[bht] \small \centering \begin{tabular}{@{}lcccccccc@{}} \toprule System & MNLI-(m/mm) & QQP & QNLI & SST-2 & CoLA & STS-B & MRPC & RTE \\ & 392k & 363k & 108k & 67k & 8.5k & 5.7k & 3.5k & 2.5k \\ \midrule BERT & 84.6/83.4 & 71.2 & 90.5 & 93.5 & 52.1 & 85.8 & 88.9 & 66.4 \\ XLNet & 86.8/- & 91.4 & 91.7 & 94.7 & 60.2 & 89.5 & 88.2 & 74.0 \\ RoBERTa & 87.6/- & 91.9 & 92.8 & 94.8 & 63.6 & 91.2 & 90.2 & 78.7 \\ \textsc{BigBird}\xspace & 87.5/87.3 & 88.6 & 92.2 & 94.6 & 58.5 & 87.8 & 91.5 & 75.0 \\ \bottomrule \end{tabular} \vspace{2mm} \caption{GLUE Dev results on base sized models. Number of training examples is reported below each task. MCC score is reported for CoLA, F1 score is reported for MRPC, Spearman correlation is reported for STS-B, and accuracy scores are reported for the other tasks.} \label{tab:glue_dev} \end{table} \paragraph{Document classification} We experiment on datasets of different lengths and contents, as listed in~\Cref{tab:cls}. In particular, we look at sentiment analysis (IMDb~\citep{maas2011learning} and Yelp-5~\citep{zhang2015character}) and topic assignment (Arxiv~\citep{he2019long}, Patents~\citep{lee2020patent}, and Hyperpartisan~\citep{kiesel2019semeval}) tasks. Following BERT, we used a single layer with a cross-entropy loss on top of the first [CLS] token from the \textsc{BigBird}\xspace encoder consuming 4096 tokens (a minimal sketch of this setup is given at the end of this subsection). We report the results of the document classification experiments in~\Cref{tab:cls}. We compare against state-of-the-art (SoTA) methods for each dataset and a plain RoBERTa model with 512-token truncation. In all experiments we use a learning rate warmup over the first 10\% of steps and a linear decay of the learning rate; a detailed list of the remaining hyperparameters is provided in~\Cref{tab:app_dc}. For better quantitative evaluation, we compute the fraction of each dataset that exceeds 512 tokens, i.e.~the length at which documents are often truncated. We see that the gains of using \textsc{BigBird}\xspace are more significant when we have longer documents and fewer training examples. For instance, using the base sized model, \textsc{BigBird}\xspace improves the state-of-the-art for the Arxiv dataset by about $\bm{5}$ \textbf{percentage points}. On the Patents dataset, there is an improvement over using plain BERT/RoBERTa, but given the large size of the training data, the improvement over the SoTA (which is not BERT-based) is not significant. Note that this performance gain is not seen for the much smaller IMDb dataset. \paragraph{GLUE} The General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue} tests language models on 8 different natural language understanding tasks. We used the same training parameters as mentioned in \url{https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.glue.md}. Our model parameters are $b=64, g=2\times b, w = 3 \times b, r = 3 \times b$ (we used the \textsc{BigBird}\xspace-\textsc{itc} base model pretrained on the MLM task). We compare the performance of \textsc{BigBird}\xspace to BERT, XLNet \citep{yang2019xlnet}, and RoBERTa in \Cref{tab:glue_dev}. We find that even on tasks that have a much smaller context, our performance is competitive with full attention models.
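As referenced above, the following PyTorch-style sketch (ours; the encoder call and all names are placeholders standing in for the actual \textsc{BigBird}\xspace implementation, which may differ in detail) illustrates the single classification layer with a cross-entropy loss on the first \texttt{[CLS]} token used for these classification tasks.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class ClsHead(nn.Module):
    # Single classification layer on the first ([CLS]) token of the
    # encoder output, trained with a cross-entropy loss.
    def __init__(self, encoder, hidden_size=768, num_classes=2):
        super().__init__()
        self.encoder = encoder          # any module returning per-token states
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, token_ids, labels=None):
        hidden = self.encoder(token_ids)        # (batch, seq_len <= 4096, hidden)
        logits = self.classifier(hidden[:, 0])  # take the [CLS] position
        if labels is None:
            return logits
        return logits, F.cross_entropy(logits, labels)
\end{verbatim}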
\subsection{Summarization\label{sec:appn_summarization}} As discussed in~\Cref{sec:seq2seq}, given the small length of output sequence, we used sparse \textsc{BigBird}\xspace attention only for encoder, while keeping the full attention for decoder. The number of hidden layers, number of heads, and hidden dimension is same for encoder and decoder. The hyperparameters are detailed in \Cref{tab:app_sum}. We summarize our result in~\Cref{tab:app_sum_num}. In all experiments, we use a learning rate warmup over the first 10,000 steps, and square root decay of the learning rate. \begin{table}[bht] \small \centering \begin{tabular}{@{}l r r c r @{}} \toprule Parameter & \multicolumn{2}{r}{Base: \textsc{BigBird}\xspace-RoBERTa} & & Large: \textsc{BigBird}\xspace-Pegasus \\ \midrule Block length, $b$ & & $64$ & & $64$\\ Global token location & & \textsc{itc} & & \textsc{itc} \\ $\#$ of global token, $g$ & & $2\times b$ & & $2\times b$ \\ Window length, $w$ & & $3\times b$ & & $3\times b$ \\ $\#$ of random token, $r$ & & $3\times b$ & & $3\times b$ \\ \multirow{3}{*}{Max. encoder sequence length} & & BBC-XSUM: \qquad \phantom{.}1024 & & 1024 \\ & & CNN/DM: \qquad \phantom{.}2048 & & 2048 \\ & & Others: \qquad \phantom{.}3072 & & 3072 \\ \multirow{3}{*}{Max. decoder sequence length} & & BBC-XSUM: \qquad \phantom{10.}64 & & 64 \\ & & CNN/DM: \qquad \phantom{1.}128 & & 128 \\ & & Others: \qquad \phantom{1.}256 & & 256 \\ Beam size & & 5 & & 5 \\ \multirow{2}{*}{Length penalty} & & BBC-XSUM: \qquad \phantom{10}0.7 & & 0.7 \\ & & Others: \qquad \phantom{10}0.8 & & 0.8 \\ $\#$ of heads & & $12$ & & $16$\\ $\#$ of hidden layers & & $12$ & & $16$ \\ Hidden layer size & & $768$ & & $1024$ \\ Batch size & & $128$ & & $128$ \\ \multirow{2}{*}{Loss} & & teacher forced & & teacher forced \\ & & cross-entropy & & cross-entropy \\ Activation layer & & gelu & & gelu \\ Dropout prob & & $0.1$ & & $0.1$ \\ Attention dropout prob & & $0.1$ & & $0.1$ \\ Optimizer & & Adam & & Adafactor\\ Learning rate & & $1\times10^{-5}$ & & $1\times10^{-4}$\\ Compute resources & & $4 \times 4$ TPUv3 & & $4 \times 8$ TPUv3\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Encoder hyperparameters for Summarization. We use full attention in decoder} \label{tab:app_sum} \end{table} \begin{table}[bht] \centering \small \begin{tabular}{@{}lrrrrrccrcc@{}} \toprule & & \multicolumn{3}{c}{Instances} & & \multicolumn{2}{c}{Input Length}& & \multicolumn{2}{c}{Output Length} \\ \cmidrule{3-5} \cmidrule{7-8} \cmidrule{10-11} Dataset & & Training & Dev & Test & & Median & 90\%-ile & & Median & 90\%-ile \\ \midrule Arxiv \citep{cohan2018discourse} & & 203037 & 6436 & 6440 & & 6151 & 14405 & & 171 & 352 \\ PubMed \citep{cohan2018discourse} & & 119924 & 6633 & 6658 & & 2715 & 6101 & & 212 & 318 \\ BigPatent \citep{sharma2019bigpatent} & & 1207222 & 67068 & 67072 & & 3082 & 7693 & & 123 & 197 \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Statistics of datasets used for summarization.} \label{tab:long_sum_data} \end{table} Following success of several recent works~\citep{rothe2019leveraging,liu2019roberta}, we warm start our encoder-decoder \textsc{BigBird}\xspace transformer model with pretrained weights and the weights between encoder and decoder are shared. In particular, the query/key/value matrix of self-attention and all the feedforward layers are shared between encoder and decoder. The only variable that is initialized randomly is the encoder-decoder attention. 
For base sized model, we utilize our MLM pretrained model on 4096 sequence length from~\Cref{sec:app-expt-nlp:mlm}, which is in turn initialized using the public RoBERTa checkpoint. For the large size model, we lift weight from the state-of-the-art Pegasus model~\citep{zhang2019pegasus}, which is pretrained using an objective designed for summarization task. To check if sparse attention causes significant degradation as compared to full attention, we further experiment on two shorter but popular datasets, where full attention can be used without significantly truncating the document. The statistics of these two datasets are in~\Cref{tab:sum_data}. We see that our performance is competitive, which shows that sparse attention can achieve similar performance to a full attention models. \begin{table} \small \centering \begin{tabular}{@{}lrrrrrrrrrr@{}} \toprule & & \multicolumn{3}{c}{Instances} & & \multicolumn{2}{c}{Input Length}& & \multicolumn{2}{c}{Output Length} \\ \cmidrule{3-5} \cmidrule{7-8} \cmidrule{10-11} Dataset & & Training & Dev & Test & & Median & 90\%-ile & & Median & 90\%-ile \\ \midrule BBC XSum \citep{narayan2018don} & & 204044 & 11332 & 11334 & & 359 & 920 & & 25 & 32 \\ CNN/DailyMail \citep{hermann2015teaching} & & 287113 & 13368 & 11490 & & 777 & 1439 & & 59 & 93 \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Shorter summarization dataset statistics.} \label{tab:sum_data} \end{table} \begin{table} \centering \small \begin{tabular}{@{}llrrrrrrrr@{}} \toprule \multicolumn{2}{l}{\multirow[b]{2}{*}{\hspace{-2mm}\normalsize{Model}}} & & \multicolumn{3}{c}{BBC XSum} & & \multicolumn{3}{c}{CNN/DailyMail}\\ \cmidrule{4-6} \cmidrule{8-10} & & & R-1 & R-2 & R-L & & R1 & R2 & R-L \\ \midrule \multirow{9}{*}{\rotatebox[origin=c]{90}{Prior Art}} & Lead & & $16.30$ & $1.61$ & $11.95$ & & $39.60$ & $ 17.70$ & $ 36.20$ \\ & PtGen~\citep{see2017get} & & $29.70$ & $ 9.21$ & $23.24$ & & $39.53$ & $17.28$ & $36.38$ \\ & ConvS2S~\citep{gehring2017convolutional} & & $31.89$ & $11.54$ & $25.75$ & & $-$ & $-$ & $-$ \\ & MMN~\citep{kim2018abstractive} & & $32.00$ & $12.10$ & $26.00$ & & $-$ & $-$ & $-$ \\ & Bottom-Up~\citep{gehrmann2018bottom} & & $-$ & $-$ & $-$ & & $41.22$ & $18.68$ & $38.34$ \\ & TransLM~\citep{khandelwal2019sample} & & $-$ & $-$ & $-$ & & $39.65$ & $17.74$ & $ 36.85$ \\ & UniLM~\citep{dong2019unified} & & $-$ & $-$ & $-$ & & $43.47$ & $20.30$ & $40.63$ \\ & Extr-Abst-BERT~\citep{liu2019text} & & 38.81 & 16.50 & 31.27 & & 42.13 & 19.60 & 39.18 \\ & BART~\citep{lewis2019bart} & & 45.14 & 22.27 & 37.25 & & 44.16 & 21.28 & 40.90 \\ \midrule \multirow{4}{*}{\rotatebox[origin=c]{90}{Base}} & Transformer~\citep{vaswani2017attention} & & 29.61 & 9.47 & 23.17 & & 34.89 & 13.13 & 32.12 \\ & \; + RoBERTa~\citep{rothe2019leveraging} & & \underline{39.92} & \underline{17.33} & \underline{32.63} & & 39.44 & 18.69 & 36.80 \\ & \; + Pegasus~\citep{zhang2019pegasus} & & 39.79 & 16.58 & 31.70 & & \underline{41.79} & \underline{18.81} & \underline{38.93} \\ & \textsc{BigBird}\xspace-RoBERTa & & 39.52 & 17.22 & 32.30 & & 39.25 & 18.46 & 36.61 \\ \midrule \multirow{3}{*}{\rotatebox[origin=c]{90}{Large}} & Pegasus (Reported)~\citep{zhang2019pegasus} & & 47.60 & 24.83 & 39.64 & & 44.16 & 21.56 & 41.30 \\ & Pegasus (Re-eval) & & \textbf{47.37} & \textbf{24.31} & \textbf{39.23} & & \textbf{44.15} & \textbf{21.56} & \textbf{41.05} \\ & \textsc{BigBird}\xspace-Pegasus & & 47.12 & 24.05 & 38.80 & & 43.84 & 21.11 & 40.74 \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Summarization ROUGE score for 
shorter documents.} \label{tab:app_sum_num} \end{table} \section{Genomics experiments details} \label{sec:apndx-expt-bio} In this section we provide details of the experimental setup for \textsc{BigBird}\xspace on genomics data. \subsection{Pretraining} \label{sec:apndx-expt-bio:mlm} We try to keep the experimental setup as close as possible to a typical NLP pipeline. In this regard, we take the human reference genome GRCh37\footnote{\url{https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.39}} and convert it into documents $\mathcal{D}$. Each document $d\in \mathcal{D}$ is a sequence of sentences, where each sentence is a sequence of fragments of DNA. We construct the documents as follows: \begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt] \item Start with an empty document set $D = \emptyset$. \item For each chromosome $C$, repeat the following procedure 10 times. \begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt] \item Pick uniformly at random a starting point $q$ between base pairs 0 and 5000 from the 5' end. \item Repeat until $q > |C|$ \begin{enumerate}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt] \vspace{1mm} \item Pick uniformly at random a number $s$ between 50 and 100 to denote the number of sentences per document. \item Construct a document $d$ containing $s$ sentences using consecutive base pairs (bps). The length of each sentence is chosen uniformly at random between 500 and 1000 bps. Thus the resulting document has $25,000$--$100,000$ bps. \item $D = D \bigcup d$ \item $q = q + |d|$ \end{enumerate} \end{enumerate} \end{enumerate} By this procedure we end up with approximately $450K$ documents. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Unsupervised_genomics_data.pdf} \caption{Visual description of how the masked language modeling data was generated from the raw DNA dataset. The raw DNA sequences of GRCh37 were split at random positions to create documents with 50-100 sentences, where each sentence was 500-1000 base pairs (bps) long. Thus each document had a continuous strand of 25,000-100,000 bps of DNA. This process was repeated 10 times to create 10 sets of documents for each chromosome of GRCh37. The resulting set of documents was then passed through SentencePiece, which created tokens of 8 bps on average. For pretraining we used the masked language modeling objective, masking $10\%$ of the tokens and training on predicting the masked tokens.} \label{fig:apndx_mlm_data} \end{figure} Next we run sentencepiece~\citep{kudo2018sentencepiece} tokenization on the resulting documents. In particular, using 5 characters as the building blocks (four for the bases A, T, C, G and one for the missing symbol N), we construct a byte pair encoding table of size 32k, with each token representing 8.78 base pairs on average. Using the above constructed documents, we construct a dataset for two pretraining tasks following \citet{devlin2018bert}: \begin{itemize}[leftmargin=6mm, itemsep=2mm, partopsep=0pt,parsep=0pt] \item \textbf{Masked Language Model (MLM):} In order to train a deep bidirectional representation, BERT training introduces the MLM task, where we simply mask out 15\% of the input tokens at random, and then predict those masked tokens. We could simply replace such masked-out tokens with a [MASK] placeholder, but this leads to a distribution mismatch for downstream tasks, which will not have such placeholders. To mitigate this issue, out of the 15\% of the tokens selected for masking: \begin{itemize} \item 80\% of the tokens are actually replaced with the token [MASK].
\item 10\% of the time tokens are replaced with a random token. \item 10\% of the time tokens are left unchanged, but are still predicted at the output. \end{itemize} We run the entire sequence through the \textsc{BigBird}\xspace transformer encoder and then predict the tokens corresponding to the masked positions, based on the context provided by the other non-masked tokens in the sequence. \item \textbf{Next Sentence Prediction (NSP):} In order to understand the relationship between two sequences, BERT training introduces the NSP task, where we predict if a given pair of sequences are contiguous or not. During training the model gets as input pairs of sequences separated by a [SEP] token, along with a [CLS] token at the start. Overall the input pattern is: [CLS] sequence A [SEP] sequence B [SEP]. For 50\% of the time, the second sequence is the true sequence that follows the first one; for the remaining 50\% of the time, it is a random sequence from the full dataset. The model is then required to predict this relationship using the output corresponding to the [CLS] token, which is fed into a simple binary classification layer. \end{itemize} \begin{wrapfigure}{r}{0.32\textwidth} \vspace{-2mm} \centering \includegraphics[width=0.35\textwidth]{figures/dna_mlm.pdf} \vspace{-5mm} \caption{\textsc{BigBird}\xspace accuracy with context length.} \label{fig:apndx_dna_mlm} \end{wrapfigure} The sequence of steps is visually elaborated in \Cref{fig:apndx_mlm_data}. The model is trained with both MLM and NSP together. Training hyperparameters are provided in the second column of \Cref{tab:app_bio}. In all experiments we use a learning rate warmup over the first 10,000 steps, and linear decay of the learning rate. We additionally performed a simple ablation study to validate the hypothesis that, similar to NLP, having a larger context improves performance. We use the MLM task described above to test how \textsc{BigBird}\xspace performs with sequences of different lengths. Accuracy on the MLM task with increasing sequence length is shown in \Cref{fig:apndx_dna_mlm}. Not only does a longer context improve final accuracy, it also leads to faster learning, as we now have more opportunities for masking. \begin{figure} \vspace{-5mm} \centering \includegraphics[width=\linewidth]{figures/FunctionalEffects.pdf} \caption{Visual description of the DNA segment from which we predict the chromatin profile for a given non-coding region of the raw DNA sequences of GRCh37. We take 8000 bps of DNA before and after the given non-coding region as context. The complete fragment of DNA, including the context on both sides, is then tokenized to form our input sequence of tokens. The task is to predict a 919-dimensional chromatin profile, including $690$ transcription factor (TF) binding profiles for $160$ different TFs, $125$ DNase I sensitivity (DHS) profiles and $104$ histone-mark (HM) profiles.} \label{fig:apndx_fep_data} \end{figure} \subsection{Promoter Region Prediction} The promoter region plays an important role in transcription initiation and thus its recognition is an important area of interest in the field of bioinformatics. Following \citet{oubounyt2019deepromoter}, we use datasets from the Eukaryotic Promoter Database (EPDnew) \citep{dreos2013epd}, which contains 29,597 promoter regions in the human genome. Around the transcription start site (TSS), we extract a sequence of 8000 bp (-5000~+3000 bp) from the human reference genome GRCh37. Since EPDnew uses the newer GRCh38 assembly, we convert to GRCh37 coordinates using LiftOver~\citep{kent2002human}.
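To make the data preparation concrete, the short sketch below extracts the $(-5000, +3000)$ bp window around a TSS from an in-memory genome represented as a plain dictionary of chromosome sequences; the coordinate lift-over itself is assumed to have been applied beforehand and is not shown.
\begin{verbatim}
def tss_window(genome, chrom, tss, upstream=5000, downstream=3000):
    """Return the 8000-bp window around a transcription start site.

    `genome` is assumed to be a dict mapping chromosome name -> DNA string
    in GRCh37 coordinates (after lift-over); `tss` is a 0-based position.
    """
    seq = genome[chrom]
    start = max(0, tss - upstream)
    end = min(len(seq), tss + downstream)
    return seq[start:end]

# Hypothetical usage:
# window = tss_window(genome, "chr1", 1_000_000)
# assert len(window) == 8000
\end{verbatim}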
Following \citet{oubounyt2019deepromoter}, for each promoter region example, a negative example (non-promoter sequence) of the same size as the positive one is constructed as follows: the positive sequence is divided into 20 subsequences. Then, 12 subsequences are picked at random and substituted randomly, while the remaining 8 subsequences are conserved. This process is illustrated in Figure 1 of \citep{oubounyt2019deepromoter}. Applying this process to the positive set results in new non-promoter sequences with conserved parts from promoter sequences (the unchanged subsequences, 8 subsequences out of 20). These parameters enable generating a negative set that has 32 and 40\% of its sequences containing conserved portions of promoter sequences. We prefix and append each example with a [CLS] and [SEP] token, respectively. The output corresponding to the [CLS] token from the \textsc{BigBird}\xspace transformer encoder is fed to a simple binary classification layer. We fine-tune the pretrained \textsc{BigBird}\xspace from~\Cref{sec:apndx-expt-bio:mlm} using the hyper-parameters described in~\Cref{tab:app_bio}. We note that the high performance is not surprising due to the overlap in the nature of negative example generation and MLM pretraining. \subsection{Chromatin-Profile Prediction} The first step of a sequence-based algorithmic framework for predicting non-coding effects is to build a model to predict large-scale chromatin profiles \citep{zhou2015predicting}. \begin{table}[b] \small \centering \begin{tabular}{@{} l r r c r c r @{}} \toprule Parameter & & Pretraining & & Promoter Region & & Chromatin-Profile \\ \midrule Block length, $b$ & & $64$ & & $64$& & $64$\\ Global token location & & \textsc{itc} & & \textsc{itc}& & \textsc{itc} \\ $\#$ of global token, $g$ & & $2\times b$ & & $2\times b$& & $2\times b$ \\ Window length, $w$ & & $3\times b$ & & $3\times b$& & $3\times b$ \\ $\#$ of random token, $r$ & & $3\times b$ & & $3\times b$& & $3\times b$ \\ Max. Sequence Length & & $4096$ & & $4096$& & $4096$\\ $\#$ of heads & & $12$ & & $12$& & $12$\\ $\#$ of hidden layers & & $12$ & & $12$& & $12$ \\ Hidden layer size & & $768$ & & $768$ & & $768$ \\ Batch Size & & $256$ & & $256$& & $256$ \\ Vocab Size & & $32000$ & & $32000$& & $32000$\\ \multirow{2}{*}{Loss} & & \multirow{2}{*}{MLM+NSP} & & \multirow{2}{*}{BCE} & & 919 x +ve upweighted \\ & & & & & & BCE \\ Dropout prob & & $0.1$ & & $0.1$ & & $0.1$ \\ Optimizer & & Adam & & Adam & & Adam\\ Learning rate & & $0.0001$ & & $0.0001$ & & $0.0001$\\ $\#$ of steps & & $1000000$ & & 711 && 500000 \\ Compute Resources & & $8 \times 8$ TPUv3 & & $8 \times 8$ TPUv3 & & $8 \times 8$ TPUv3\\ \bottomrule \end{tabular} \vspace{2mm} \caption{Hyperparameters for the computational biology experiments.} \label{tab:app_bio} \end{table} In this paper, we use the dataset provided by \citet{zhou2015predicting}\footnote{ \url{http://deepsea.princeton.edu/media/code/deepsea_train_bundle.v0.9.tar.gz}} to train \textsc{BigBird}\xspace to predict the chromatin profile. Each training sample consists of an 8,000-bp sequence from the human GRCh37 reference genome centered on each 200-bp bin and is paired with a label vector for 919 chromatin features. As before, we prefix and append each example with a [CLS] and [SEP] token, respectively. The output corresponding to the [CLS] token from the \textsc{BigBird}\xspace transformer encoder is fed to a linear layer with 919 heads. Thus we jointly solve 919 independent binary classification problems.
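A minimal sketch of this joint prediction head is given below: a single linear layer with 919 outputs applied to the [CLS] representation, trained with an independent binary cross-entropy loss per chromatin feature (the positive-class upweighting used in practice is described next). The encoder object and shapes are placeholders, not the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ChromatinProfileHead(nn.Module):
    """Linear layer with 919 outputs on the [CLS] token; one BCE loss per feature."""
    def __init__(self, encoder, hidden_size=768, num_features=919, pos_weight=8.0):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, num_features)
        # Upweight positive examples to counter class imbalance (factor discussed below).
        self.loss_fn = nn.BCEWithLogitsLoss(
            pos_weight=torch.full((num_features,), pos_weight))

    def forward(self, input_ids, labels=None):
        hidden = self.encoder(input_ids)      # (batch, seq_len, hidden)
        logits = self.head(hidden[:, 0])      # [CLS] is the first token
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return loss, torch.sigmoid(logits)
\end{verbatim}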
We fine-tune the pretrained \textsc{BigBird}\xspace from~\Cref{sec:apndx-expt-bio:mlm} using the hyper-parameters described in~\Cref{tab:app_bio}. As the data is highly imbalanced (with far more negative than positive examples), we upweighted the loss for positive examples by a factor of 8. We used the training and testing split provided by \citet{zhou2015predicting}, which is split by chromosome and strictly non-overlapping. Chromosomes 8 and 9 were excluded from training and used to test chromatin feature prediction performance, and the rest of the autosomes were used for training and validation. 4,000 samples on chromosome 7, spanning the genomic coordinates 30,508,751–35,296,850, were used as the validation set. As the predicted probability for each sequence in DeepSEA~\citep{zhou2015predicting} was computed as the ensemble average of the probability predictions for the forward and complementary sequence pairs, we also predict using an ensemble of two \textsc{BigBird}\xspace models trained independently.
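This ensembling step can be written compactly as below; each \texttt{model} is a placeholder callable that maps a DNA string to a vector of 919 probabilities (tokenization plus a forward pass), and predictions are averaged over the forward/reverse-complement pair and over the two independently trained models.
\begin{verbatim}
COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def ensemble_predict(models, seq):
    """Average per-feature probabilities over forward/reverse-complement inputs
    and over independently trained models (placeholder interface)."""
    preds = []
    for model in models:
        preds.append(model(seq))
        preds.append(model(reverse_complement(seq)))
    return [sum(values) / len(preds) for values in zip(*preds)]
\end{verbatim}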
\section{Introduction} Instance tracking is an important task in video applications, such as autonomous driving, sports analytics, video editing, and video surveillance. In single-object tracking, the position of the target instance is given in the first frame of a video sequence; tracking algorithms need to predict the position of the same instance in each of the following frames. Most state-of-the-art tracking methods use a convolutional neural network to extract features from the target object and features from the scene~\cite{SiamRPN,DaSiam,SiamMask}. These methods use an approach of ``tracking-by-one-shot-detection"~\cite{SiamRPN}: a network is trained to match the appearance between an image of the target object and an image of the same object in the current frame. Although this ``tracking-by-one-shot-detection" approach has achieved impressive performance, it is prone to errors due to distractors and object appearance changes. First, if there are similar-looking objects in the video (``distractors"), tracking-by-one-shot-detection methods often switch to a distractor object (see Figure~\ref{fig:all}b); common examples include different objects of the same category or objects of similar color or texture. Likewise, if the object changes its appearance due to object deformations, image blur (from large camera or object motion), lighting changes, or other variations, tracking-by-one-shot-detection methods often lose track of the target object (see Figures~\ref{fig:all}a,~\ref{fig:all}c). \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{all.pdf} \end{center} \caption{Three example errors that our method fixes (a): Failure case of baseline due to large camera motion; (b): Failure case of baseline due to distractors; (c): Failure case of baseline due to large motion of instance being tracked. In this figure, white boxes represent ground-truth boxes, \textcolor{red}{red} boxes represent predictions by SiamMask~\cite{SiamMask}, \textcolor{green}{green} boxes show the results from our method.} \label{fig:all} \end{figure*} When there are distractor objects or large appearance changes, matching the object appearance alone will likely be insufficient for robust tracking. Instead, the tracker should make use of the tracked object's position. By tracking the position of the object throughout the video, the tracker can determine which object is the target and which are the distractors. Past trackers have typically incorporated relatively weak position information to try to resolve these issues. For example, many methods~\cite{SiamMask,SiamRPN,DaSiam, Fully_Convolutional_Siamese} use a position penalty that gives a lower score to bounding boxes that are farther away from the location of the detected object in the previous frame. However, a position penalty that is too strong will lose track of fast moving objects or objects under large camera motion; a position penalty that is too weak will not achieve the desired effect of ignoring distractors or handling large object appearance changes. To address these issues, we explore how trackers can incorporate object position information in a more robust manner. Specifically, we use dense optical flow correspondences to track the position of the target object from one frame to the next. Dense optical flow methods jointly track the motion of the target object as well as the distractors or other nearby objects, and can thus be used to more robustly ignore distractor objects. 
Optical flow can also track objects over large appearance changes by reasoning about the position of the target object relative to the rest of the scene. However, methods for dense optical flow can also make mistakes; incorporating such erroneous information can cause the tracking performance to degrade. Our main insight is that, to avoid such situations, we should use an estimate of optical flow uncertainty~\cite{FlowNetH} to reason about our confidence in the optical flow estimate. We develop a novel probabilistic framework that estimates a tracking score for each bounding box based on the optical flow estimates and their uncertainties. These flow scores are combined with object appearance scores to estimate the new position of the tracked object. We demonstrate that our method significantly improves tracking performance when evaluated on the VOT 2016, 2018, and 2019 datasets, compared to the performance of the base tracker that our method builds upon. Our method is general in that our flow uncertainty tracking scores can be incorporated into any base tracker. We show that our method improves tracking robustness under distractors and large object appearance changes. Our contributions include \begin{itemize} \item A novel end-to-end differentiable method for combining segmentation and flow uncertainty for tracking \item Experimental demonstrations that our method outperforms current state-of-the-art trackers \item Ablations showing the importance of each component of our method, especially flow uncertainty, for optimal tracking performance \end{itemize} \section{Related Work} \subsection{2D Instance Tracking} Since the work of Bolme \textit{et al.}~\cite{correlation_filter}, correlation filters have been a popular approach for instance tracking. These methods train a filter online and track the target by correlating the filter over a search window. Significant effort has been devoted to improving performance, such as by learning a multi-channel filter~\cite{multi-channel, KCF}, integrating multi-resolution deep feature maps~\cite{Cont_DCF, HierarchicalConv} and mitigating boundary effects~\cite{LimitedBoundary, SpatiallyRegularizedCF}. Recently, instead of learning a discriminative filter online, offline learning methods, especially siamese networks~\cite{SiamRPN, DaSiam, SiamMask, Fully_Convolutional_Siamese, GOTURN, DetectToTrack}, have considerably improved performance on 2D instance tracking by using a one-shot detection framework. In order to track objects temporally, most trackers incorporate a position penalty to prevent large changes in position from one frame to the next. This can be achieved using a cosine window penalty~\cite{SiamMask, SiamRPN, DaSiam, Fully_Convolutional_Siamese} or a Gaussian penalty~\cite{RemoveCosine}. Another approach to incorporate position information more implicitly is to input a search region cropped around the location of the tracked object in the previous frame~\cite{GOTURN, SiamMask, SiamRPN, DaSiam, Fully_Convolutional_Siamese} or to restrict feature correlation to a local neighborhood~\cite{DetectToTrack}. Many previous works take a Bayesian approach to instance tracking, using a Kalman filter~\cite{KalmanFilter, AdaptiveKalmanFilter, 2DHumanBodyTrack} or particle filter~\cite{Condensation, VisualTrackPF, AdaptivePF} to smooth the tracker output over time. To make the method more robust to distractors, DaSiamRPN~\cite{DaSiam} proposes a distractor-aware module to perform incremental learning at inference time.
We show that our approach to avoiding distractors significantly outperforms these approaches. Our tracker makes use of a segmentation mask of the tracked object from the previous frame. To obtain this mask, we use SiamMask~\cite{SiamMask}, which achieves state-of-the-art tracking performance. We combine this mask with uncertainty-aware optical flow to improve tracking performance in the face of distractors and large appearance changes. \subsection{Optical Flow} Optical flow has been widely used for video analysis and processing. Traditional methods for optical flow estimation include variational approaches~\cite{variational_flow}, possibly combined with combinatorial matching~\cite{EpicFlow}. Recently, deep-learning-based methods~\cite{flownet,flownet2} have obtained state-of-the-art performance for optical flow estimation. Optical flow has been used to guide feature warping to improve the performance of class-level object detection in videos~\cite{FGFA}. Other work~\cite{seg_by_flow} uses optical flow to identify temporal connections throughout videos, and jointly updates object segmentation with flow models. In contrast to these applications, we use optical flow to improve tracking performance by estimating how the target object, as well as the other objects in the scene, move over time. \subsubsection{Tracking with Optical Flow} Recently, some trackers \cite{END_TO_END_FLOW_TRACK, DeepMotionFeature, SINT} use optical flow estimation to improve performance on instance tracking. FlowTrack~\cite{END_TO_END_FLOW_TRACK} uses flow to warp features from previous frames to improve the feature representation and tracking accuracy. The warped feature maps are weighted by a spatial-temporal attention module; these feature maps are then input into subsequent correlation filter layers along with feature maps of the current frame. Other work~\cite{DeepMotionFeature} uses optical flow to obtain deep motion features, and then fuses appearance information with deep motion features for visual tracking. For hand-crafted features, deep image features, and deep motion features, the method separately learns a filter by minimizing the SRDCF~\cite{SRDCF} objective and then averages the filter responses to get final confidence scores. SINT+~\cite{SINT} uses flow to remove motion-inconsistent candidates. Specifically, it uses the estimated optical flow to map the locations of the pixels covered by the predicted box in the previous frame to the current frame, and removes the candidate boxes that contain less than 25\% of those pixels. However, none of these methods use a segmentation mask or flow uncertainty for tracking; our experiments demonstrate that both of these components are crucial for optimal tracking performance. We develop a probabilistic framework to use flow uncertainty for tracking. \subsubsection{Tracking with Uncertainty} There have been several recent works on estimating confidence in optical flow~\cite{BootstrapFlow, AdaptiveFlow, FlowNetH}. FlowNetH~\cite{FlowNetH} has been shown to generate effective uncertainty estimates without the need for sampling or ensembles. As far as we know, these methods have not been used to improve the performance of instance tracking. We propose a new framework which combines flow uncertainty estimates with appearance scores from a one-shot-detection method; we show that our method can significantly improve tracking robustness and obtains state-of-the-art tracking results.
\section{Background} \subsection{Siamese Networks for Tracking} \label{sec:Siamese Networks for Tracking} Our method is based on the SiamMask~\cite{SiamMask} framework, which is the state-of-the-art method on the VOT tracking benchmarks~\cite{VOT2016,VOT2017,VOT2018,VOT2019}. It consists of a siamese subnetwork for feature extraction and a region proposal subnetwork for bounding box proposal generation. The framework scores proposals based on an appearance matching score $d$, a size change penalty $p_s$, and a position change penalty $p_c$. The size change penalty $p_s$ penalizes changes to the size of the bounding box of size $w$ by $h$ from one frame to the next; it is defined as \begin{align} p_s &= e^{(1-\max(\frac{r}{r'}, \frac{r'}{r}) \cdot \max(\frac{s}{s'}, \frac{s'}{s}))\cdot k_p} \end{align} where $s$ and $s'$ are the padded sizes of the proposal box and of the bounding box in the previous frame, respectively, computed with padding $p$ as \begin{align} s^2 &= (w+p) \times (h+p) \end{align} and $r$ and $r'$ are the aspect ratios of the proposal box and the bounding box in the previous frame, respectively. The score $f$ for each proposal is calculated as \begin{align} f &= (1-k_c) \cdot p_s \cdot d + k_c \cdot p_c \label{eq:total_score} \end{align} where $k_c$ and $k_p$ are hyperparameters. The position penalty $p_c$ is obtained by penalizing the position of the center of the bounding box according to one period of a cosine function, centered at the position of the previous bounding box; the period of this cosine penalty is determined based on the size of the previous bounding box. The appearance matching score $d$ is given by the output of the one-shot detection network, which matches a template image of the target object to the scene. SiamMask~\cite{SiamMask} first chooses a bounding box based on the proposal with the highest score $f$. For the highest scoring proposal box, it then predicts a mask $F_{t}$, thresholds it into a binary mask $\hat{F}_{t}$ using a threshold $t_{\text{seg}}$, and outputs the minimum bounding rectangle (MBR) of the binary mask as the final prediction of the location of the tracked object. \section{Method} \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{flowchart.pdf} \end{center} \caption{Overall pipeline of our method: on top of SiamMask~\cite{SiamMask} (shown in \textcolor{orange}{orange}), we add a module (shown in \textcolor{green}{green}) that computes a flow score for each proposal using flow uncertainty estimates and the segmentation output from the previous frame; our method then combines flow scores with appearance scores to choose a bounding box proposal.} \label{fig:flowchart} \end{figure*} We introduce a new method which improves tracking robustness under distractors and large appearance changes. We visualize our pipeline in Figure~\ref{fig:flowchart}. Our method uses optical flow to estimate, for every pixel in a frame, the probability that it is part of the foreground; we call this probability map a ``FlowMask.'' Based on this probability mask, we assign a flow score to each proposal, which we combine with an appearance score to obtain the final tracking output. The rest of this section explains how our method works in detail.
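Before detailing the flow components, the short sketch below restates the baseline proposal scoring of Eqn.~\ref{eq:total_score} in code; it is an illustration only (in particular, the padding convention $p=(w+h)/2$ used for $s$ is an assumption), not the SiamMask implementation.
\begin{verbatim}
import math

def padded_size(w, h):
    # s^2 = (w + p)(h + p); the padding p = (w + h) / 2 is an assumed convention.
    p = (w + h) / 2.0
    return math.sqrt((w + p) * (h + p))

def size_penalty(w, h, w_prev, h_prev, k_p):
    r, r_prev = w / h, w_prev / h_prev
    s, s_prev = padded_size(w, h), padded_size(w_prev, h_prev)
    ratio = max(r / r_prev, r_prev / r) * max(s / s_prev, s_prev / s)
    return math.exp((1.0 - ratio) * k_p)

def proposal_score(d, p_s, p_c, k_c):
    # Total score: appearance score d, size penalty p_s, cosine-window penalty p_c.
    return (1.0 - k_c) * p_s * d + k_c * p_c
\end{verbatim}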
\subsection{Flow Mask} \label{sec:Flow Mask} Our method makes use of previous work on uncertainty-aware dense optical flow estimation~\cite{FlowNetH} that computes the probability that each pixel $i$ in frame $t$ corresponds to a given pixel location $j$ in frame $t-1$: $p(c(I_{i,t}) = I_{j,t-1})$, where $c$ maps pixel $I_{i,t}$ to a pixel in frame $t-1$. Given images $I_t$ and $I_{t-1}$, this method models the probability of each pixel correspondence as a Laplace distribution, parametrized by the flow mean $\mu$ and scale $b$: \begin{align} p(c(I_{i,t}) = I_{j,t-1}) &= \mathcal{L}(I_{j,t-1}-I_{i, t}|\mu, b) \label{eq:prod_laplacian} \end{align} where the Laplace distribution is defined in the standard manner as \begin{align} \mathcal{L}(u|\mu, b) = \frac{1}{2b}\exp\Big(\frac{-|u - \mu|}{b}\Big) \label{eq:laplacian} \end{align} For notational convenience, we omit the conditioning of these probabilities on the images $I_t$ and $I_{t-1}$. As we will show, flow uncertainty is crucial for robust tracking. Using Eqn.~\ref{eq:prod_laplacian}, we can compute the probability that pixel $I_{i,t}$ corresponds to pixel $I_{j,t-1}$. We also need the probability that pixel $I_{j,t-1}$ belongs to the target object. To obtain it, we use a segmentation-based tracking method~\cite{SiamMask} to obtain a ``segmentation mask'', which gives the probability that each pixel $I_{j,t-1}$ belongs to the foreground of the previous bounding box (i.e. the tracked object): $p(I_{j,t-1} \in F_{t-1})$, where $F_{t-1}$ is the set of foreground pixels, i.e. the set of pixels in frame $t-1$ that belong to the tracked object. We combine these flow probabilities with the segmentation mask probabilities to estimate the probability that a pixel $I_{i,t}$ in frame $t$ belongs to the tracked object: \begin{align} p(I_{i,t} \in F_{t}) = \sum_j p(c(I_{i,t}) = I_{j,t-1}) \, p(I_{j,t-1} \in F_{t-1}) \label{eq:0} \end{align} We compute the foreground probability for every pixel $I_{i,t}$ in frame $t$; we refer to the resulting set of probabilities as the ``FlowMask'' at frame $t$. This computation is implemented in a differentiable manner, and could be used in end-to-end trainable pipelines. This idea is further illustrated in Figure~\ref{fig:fig1}. \\ \noindent\begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{flowmask.pdf} \captionsetup{width=\textwidth} \captionof{figure}{Illustration of how the flow mask is computed. The \textcolor{green}{green} box is the ground-truth box; the \textcolor{orange}{orange} boxes are proposals; $\alpha_u$ and $\alpha_v$ are the predicted flow means from frame $t$ to frame $t-1$ in the $u,v$ directions; the colormap in frame $t-1$ visualizes a Laplace distribution parametrized by the predicted flow mean and variance. The blue dot represents a point $I_{j,t-1}$ that belongs to the foreground in frame $t-1$.} \label{fig:fig1} \end{minipage}% \begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{flow_nouncertainty.pdf} \captionsetup{width=0.9\textwidth} \captionof{figure}{Illustration of the importance of uncertainty: an example in which the flow mask with uncertainty successfully tracks the target object but the flow mask without uncertainty fails.
In this figure, white boxes represent ground-truth boxes, \textcolor{red}{red} boxes represent the prediction using the flow mask without uncertainty, and \textcolor{green}{green} boxes show the prediction using the flow mask with uncertainty.} \label{fig:flow_nouncertainty} \end{minipage} \subsection{Flow Score} After we compute the flow mask at frame $t$, we can compute a flow score for each proposal $box_{(i,t)}$, denoted $f_s(box_{(i,t)})$, by averaging the foreground probabilities over the pixels in the box: \[f_s(box_{(i,t)}) = \frac{1}{N_{box(i,t)}}\sum_{I_{i,t}\in box_{(i,t)} }p(I_{i,t} \in F_{t})\] where $N_{box(i,t)}$ represents the total number of pixels in $box_{(i,t)}$. However, one issue with this score is that, even though $box_{(i,t)}$ is a rectangle, the object being tracked may not be shaped as a rectangle, which could cause $f_s(box_{(i,t)})$ to be much less than 1. This variability in $f_s(box_{(i,t)})$ makes it difficult to combine the flow score with the appearance score, as described in Section~\ref{sec:Combining Flow Score with Appearance Score} below. To deal with this issue, we first compute the number of pixels that are in the thresholded segmentation mask $\hat{F}_{t-1}$ as $N_{\hat{F}_{t-1}}$. Then we compute $t_{\text{flow},t}$ by dividing $N_{\hat{F
}_{t-1}}$ by the number of pixels in previous frame's axis-aligned detection box $box^*_{t-1}$. \begin{align} t_{\text{flow},t} =\frac{N_{\hat{F}_{t-1}}}{N_{box^*_{t-1}}} \end{align} This results in a flow score defined as \begin{align} f_s'(box_{(i,t)}) = \min\Big(\frac{f_s(box_{(i,t)})}{t_{\text{flow},t}},1\Big) \label{eq:new_flow_score} \end{align} \subsection{Bounding Box Selection} \label{sec:Combining Flow Score with Appearance Score} Lastly we combine our flow score with the appearance score from the one-shot detection framework to obtain a motion score for a given proposal box: \begin{align} (1-k_f) \cdot p_c + k_f \times f_s' \label{eq:our_motion_score} \end{align} where $k_f$ is a hyperparameter, $p_c$ was described in Section~\ref{sec:Siamese Networks for Tracking}, and $f_s'$ is obtained from Eqn.~\ref{eq:new_flow_score}. We combine our position penalty $p_c$ from Eqn.~\ref{eq:our_motion_score} with the size penalty $p_s$ and appearance matching score $d$ in Eqn.~\ref{eq:total_score} to obtain the total score for each proposal box. Our entire pipeline is end-to-end differentiable, so we could backprop through our network and learn the value for the different hyperparameters. However, since there are only three hyperparameters, we proceed as in SiamMask~\cite{SiamMask} to do a hyperparameter searches and find the proposal box with the highest score to obtain the tracking output and estimate a segmentation mask for this box. \section{Experiments} \subsection{Implementation Details} Our method uses the pretrained SiamMask~\cite{SiamMask} network to obtain the appearance matching score $d$ and to compute the segmentation mask. Since SiamMask~\cite{SiamMask} reports its tracking performace on visual object tracking datasets (VOT 2016, 2018 and 2019), we also report our performance on these three datasets. In SiamMask~\cite{SiamMask}, they perform hyperparameter searches on $k_c \in [0.40,0.43]$ and $k_p \in [0,1]$, and we similarly search in these ranges; we also perform a random hyperparameter search for $k_f \in [0,1]$ for Eqn.~\ref{eq:our_motion_score}. \subsection{Quantitative Results} \begin{table*}[h!] \fontsize{8.5}{12}\selectfont \centering \begin{tabular}{||c| c| c| c| c | c | c | c | c | c ||} \hline &\multicolumn{3}{|c|}{VOT2016}&\multicolumn{3}{|c|}{VOT2018}&\multicolumn{3}{|c||}{VOT2019}\\ \hline & EAO $\uparrow$ & R $\downarrow$ & A $\uparrow$ & EAO $\uparrow$ & R $\downarrow$ & A $\uparrow$ & EAO $\uparrow$ & R $\downarrow$ & A $\uparrow$ \\ [0.5ex] \hline ATP~\cite{Kristan2019a} & - & - & -& - &- & - & \textcolor{red}{0.394} & \textcolor{red}{0.291} & \textcolor{red}{0.650} \\ Our Method & \textcolor{red}{0.47} & \textcolor{red}{0.196} & \textcolor{red}{0.647}& \textcolor{red}{0.41} & \textcolor{red}{0.234} & \textcolor{green}{0.605} & \textcolor{green}{0.306} & \textcolor{green}{0.426} & \textcolor{green}{0.599} \\ SiamMask~\cite{SiamMask} & \textcolor{green}{0.433} & \textcolor{green}{0.214} & \textcolor{green}{0.639} & \textcolor{green}{0.38} & \textcolor{green}{0.276} & \textcolor{red}{0.609} & 0.283 & 0.467 & 0.596 \\ UInet~\cite{Kristan2019a} & - & - & -& - &- & - & 0.254 & 0.468 & 0.561\\ SiamMsST~\cite{Kristan2019a} & - & - & -& - &- & - & 0.252 & 0.552 & 0.575\\ MemDTC~\cite{Memdtc} & 0.297 & 1.310 & 0.5297& 0.2651 &1.5287 & 0.4909& 0.252 & 0.552 & 0.575\\ CSRDCF~\cite{csrdcf} & 0.338 & 0.85& 0.51 & - & - & - & 0.201 & 0.632 & 0.496\\ \hline \end{tabular} \caption{Results on VOT 2016, VOT2018, and VOT2019. R represents robustness and A represents accuracy. 
The top two performing trackers are colored \textcolor{red}{red} and \textcolor{green}{green}, respectively.} \label{table:results} \end{table*} \subsubsection{Evaluation for VOT} This section includes results on VOT2016~\cite{VOT2016}, VOT2018~\cite{VOT2018}, and VOT2019~\cite{VOT2019}. VOT2016 consists of 60 video sequences. The VOT2017~\cite{VOT2017} challenge replaced the 10 least challenging sequences with new ones. VOT2018 contains the same 60 video sequences as in VOT2017. VOT2019 replaces 20\% of the videos in VOT2018 with new ones. Performance is evaluated in terms of accuracy (average overlap while tracking successfully), robustness (number of failures), and Expected Average Overlap (EAO), which takes into account both accuracy and robustness, as is common for the VOT challenges. The results are shown in Table~\ref{table:results}. We compare our method with the other state-of-the-art trackers that predict both a bounding box for tracking and a segmentation mask for each frame. As can be seen, our method significantly improves over most state-of-the-art baselines in all categories across VOT2016, 2018, and 2019. In particular, our method builds upon~\cite{SiamMask}, so the improvement should be judged relative to this method. However, our method for incorporating flow uncertainty into tracking is modular and can be combined with other state-of-the-art tracking methods as well. In terms of speed, our method operates at 5.62 frames per second, or 178ms per frame, including 18ms for SiamMask~\cite{SiamMask} and 60ms for the optical flow computation~\cite{FlowNetH}. \subsubsection{Ablations} Our method builds upon SiamMask~\cite{SiamMask} and incorporates flow uncertainty based on the previous frame's predicted segmentation mask to improve performance. To further investigate the importance of different components of our method, as well as the effectiveness of different approximations that we make, we conduct the following ablations: \paragraph*{Importance of Optical Flow} We first investigate the importance of using optical flow for tracking, rather than the approach taken by several recent papers of using a cosine~\cite{SiamMask, SiamRPN, DaSiam, Fully_Convolutional_Siamese} or Gaussian penalty~\cite{RemoveCosine} to penalize large motions from the previous frame. To analyze this, we note that our method for optical flow uses a Laplacian distribution, as shown in Equations~\ref{eq:prod_laplacian} and~\ref{eq:laplacian}. Thus, to evaluate the importance of optical flow, we replace the estimated flow distribution with a constant Laplacian, with zero mean $\mu= (0,0)$ and a fixed scale parameter $b$. This ablation is referred to as ``Ours minus Flow'' (Ours - Flow) in Table~\ref{table:ablation_analysis}. As can be seen, using a constant Laplacian distribution (rather than optical flow) leads to no improvement over the baseline SiamMask~\cite{SiamMask}. \paragraph*{Importance of Optical Flow with Uncertainty} For the next ablation, we probe the importance of utilizing uncertainty estimates for optical flow in tracking. To evaluate this, we fix the Laplacian scale parameter $b$ to a constant. The result is shown in Table~\ref{table:ablation_analysis} as ``Ours - Uncertainty.'' Although the result is better than our baseline, there is still a large performance gap relative to our full method.
As a qualitative analysis, Figure~\ref{fig:flow_nouncertainty} shows one example where our method successfully tracks the target object but Ours-Uncertainty fails due to errors in the flow estimate under large object motion and perspective changes. This shows how the uncertainty increases tracking robustness. \paragraph*{Importance of Segmentation Mask} In this ablation, we investigate the importance of using a segmentation mask for the computation of the flow mask. In our method, we use SiamMask~\cite{SiamMask} to predict a segmentation mask for the previous frame. We then combine the probability of being in the foreground with the flow probability, as shown in Equation~\ref{eq:0}. We probe the importance of having a segmentation mask by replacing it with a bounding box. We analyze using both an axis-aligned bounding box (ALB) and a minimum bounding rectangle (MBR) mask (see SiamMask~\cite{SiamMask} for details). The results are shown in Table~\ref{table:ablation_analysis} as ``Ours - SegMask (ALB)'' and ``Ours - SegMask (MBR)''. As we can see, using the minimum bounding rectangle instead of a segmentation mask results in no improvement over the baseline. On the other hand, using the mask obtained from the axis-aligned box improves upon the baseline but is still not as effective as our method. \subsubsection{SiamMask+Flow Rejection} Lastly, we compare to an additional baseline that also uses optical flow to improve tracking. Following SINT+~\cite{SINT}, we evaluate using optical flow to filter out motion-inconsistent candidates, and try to use this ``flow rejection'' method to improve SiamMask~\cite{SiamMask}. Specifically, we use flow to warp the pixels covered by the predicted box in the previous frame. We then remove all proposals in the current frame that contain less than 25\% of the warped pixels (this is similar to the procedure from SINT+~\cite{SINT}). We refer to this experiment as ``SiamMask plus flow rejection'' (SiamMask + FlowRej) in Table~\ref{table:ablation_analysis}. As we can see, using flow rejection does not improve performance compared to the baseline SiamMask~\cite{SiamMask}. This degradation in performance, especially in robustness, is likely due to occasional errors in the flow estimation. This supports our claim about the importance of flow uncertainty estimation for robust tracking. \begin{table}[h!] \fontsize{8.5}{12}\selectfont \centering \begin{tabular}{||c| c| c| c||} \hline &\multicolumn{3}{|c||}{VOT2018}\\ \hline & EAO$\uparrow$ & Robustness$\downarrow$ & Accuracy$\uparrow$ \\ [0.5ex] \hline Ours & \textbf{0.41} & \textbf{0.234} & 0.605\\ Ours - Flow & 0.38 & 0.276 & 0.609\\ Ours - Uncertainty & 0.383 & 0.262 & 0.610\\ Ours - SegMask (ALB) & 0.388 & 0.267 & \textbf{0.614}\\ Ours - SegMask (MBR) & 0.372 & 0.253 & 0.593\\ SiamMask~\cite{SiamMask} + FlowRej & 0.361 & 0.290 & 0.613\\ SiamMask~\cite{SiamMask} & 0.38 & 0.276 & 0.609 \\ \hline \end{tabular} \caption{Ablation Analysis. Ours - Flow uses identity flow; Ours - Uncertainty uses a fixed variance; Ours - SegMask (ALB) replaces the segmentation mask with an axis-aligned bounding box; Ours - SegMask (MBR) replaces the segmentation mask with a minimum bounding rectangle.} \label{table:ablation_analysis} \end{table} \subsection{Qualitative Analysis} Our method effectively improves tracking robustness under distractors and large object appearance changes. To better illustrate the effect of our method, we analyze our results qualitatively.
In Figure~\ref{fig:all}, we visualize three cases where the state-of-the-art tracker SiamMask~\cite{SiamMask} fails but our method is able to successfully keep track of the target objects. For each case, we visualize the position penalty that SiamMask~\cite{SiamMask} uses, the appearance matching score (i.e., appearance\_score) produced by the one-shot-detection network, and the flow mask introduced in Section~\ref{sec:Flow Mask} of this work. Figure~\ref{fig:all} shows two categories of challenging tracking scenarios: \paragraph*{Distractors} One common type of failure case occurs when there are distractor objects in the background that are similar in appearance or category to the object being tracked. An example is shown in Figure~\ref{fig:all}(b), in which the target object runs across another person in the background. In this case, the appearance matching score (from the one-shot-detection network of SiamMask) is high for both people. The position penalty is also not useful in this case due to the fast motion of the target object. Thus, if we only rely on the appearance matching score and the position penalty, we would track the distractor instead of the target object, as illustrated by the red detection box (output by SiamMask). Nevertheless, the flow mask successfully tracks the target object. Since the flow mask is a probabilistic estimate based on the predicted segmentation mask of the previous frame, it is able to focus precisely on the target object; additionally, because we incorporate flow uncertainty, our method is also robust to small errors in the estimated flow. \paragraph*{Large Appearance Changes} Another challenging problem in tracking is large appearance changes. In Figure~\ref{fig:all}(a), the image of the target object becomes blurry under large camera motion. In this scenario, the appearance matching network predicts similar confidence for many areas in the image. The position penalty also fails because of the large change in the object's position in the image due to the fast camera motion. Similarly, in Figure~\ref{fig:all}(c), the position penalty is also not effective due to the fast motion of the target object. In this case, the deformation of the target object (a bird) also causes the appearance matching score to be uncertain, leading to a failure from SiamMask. However, in both cases, our proposed flow mask is still able to track the target object. These examples illustrate that our method is robust to large appearance changes, such as blurry images and deformations, as well as to fast-moving objects. \subsection{Detailed Analysis on VOT2018} \begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth]{attributes_comparison.pdf} \end{center} \caption{Results breakdown on VOT 2018 for different visual attributes. We compare the EAO of our method with that of the baseline SiamMask~\cite{SiamMask} under these five attributes. } \label{fig:attributes} \end{figure*} To better understand the effect of using our method, we perform an in-depth analysis on the VOT 2018 dataset. In the VOT 2018 dataset, each frame is manually labeled with five visual attributes that reflect a particular challenge: (i) camera motion, (ii) motion change, (iii) size change, (iv) illumination change, (v) occlusion. If a frame does not correspond to any of those five attributes, it is labeled as ``non-degraded''. These labels enable us to analyze the benefits of our method while focusing only on the frames that contain a given attribute.
The results are shown in Figure~\ref{fig:attributes}, in which we compare our method to the SiamMask~\cite{SiamMask} baseline that we build on top of. As can be seen, our method significantly improves the tracker's performance under camera motion, and we also see modest improvements under size change and illumination change. Our method performs slightly worse than SiamMask~\cite{SiamMask} under motion change and occlusion. In Figure~\ref{fig:fish2_failure}, we visualize one example of a failure case due to occlusion. In this case, the target object gets occluded by a similar-looking distractor. We find that, when an occlusion occurs, the predicted segmentation mask tends to also mask the distractor; this misleads the calculation of the FlowMask in the following frames. Eventually both the FlowMask and the segmentation mask have high confidence for the distractor, and the tracker drifts and tracks the distractor instead. The SiamMask~\cite{SiamMask} baseline fails similarly in this case; however, since the locations of the two objects do not change much, the baseline eventually recovers with the help of the position penalty. In our case, the use of the flow mask prevents us from recovering from this failure. \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{fish2_failure.pdf} \end{center} \caption{Illustration of a failure case due to occlusion: when there is an occlusion, the predicted segmentation mask and flow mask tend to drift to the distractor. In this figure, white boxes represent the ground truth, and \textcolor{green}{green} boxes show the prediction from our method.} \label{fig:fish2_failure} \end{figure*} \section{Conclusions} In this paper, we introduce a novel probabilistic framework that combines appearance and flow uncertainty for tracking. We show that our method, when evaluated on the Visual Object Tracking (VOT) datasets, significantly improves the performance of a state-of-the-art tracker. Ablation experiments show the importance of each component of our framework, such as the use of flow uncertainty and the warping of a segmentation mask. We hope that our work can inform future research on robust tracking under distractor objects and large object appearance changes. \newpage \bibliographystyle{splncs}
\section{Introduction} Since the discovery of the first transiting exoplanet HD 209458b \citep{charbonneau00,henry00} it was realized that transit photometric observations are necessary to obtain a number of essential quantities that cannot otherwise be derived from other methods of detection. Precise knowledge of the radius, surface gravity, orbital distance, etc., helps us comprehend the formation and evolution of these planets. Proper characterization of the already discovered exoplanets calls for repeated follow-up observations during both in-transit and out-of-transit epochs. In this regard, ground-based telescopes are appropriate, especially in the case of a transit by a close-in (hence short-period) gas giant planet around a main-sequence star. However, in order to observe a transient event like a transit, a coordinated set of observations around the globe proves highly effective by ensuring coverage of a transit event regardless of its ephemeris of occurrence. In this regard, the astronomical facilities of India, such as the Indian Astronomical Observatory (IAO, $78^\circ$ $57'$ E, $32^\circ$ $46'$ N) and the Vainu Bappu Observatory (VBO, $78^\circ$ $50'$ E, $12^\circ$ $34'$ N), can fill in the missing longitudinal coverage. We therefore observed the transit events of a few hot Jupiters using the 2m Himalayan Chandra Telescope (HCT) at IAO and the 1.3m Jagadish Chandra Bhattacharyya Telescope (JCBT) at VBO to demonstrate the capability of these telescopes and their back-end instruments for precise transit photometry. The HCT has already been a part of the detection of the planet TRAPPIST-1 b \citep{gillon16}. This motivated us to continue search and follow-up observations using the HCT and JCBT. In this paper we present the transit light curves of the hot gas giants WASP-33 b, WASP-50 b, WASP-12 b, HATS-18 b and HAT-P-36 b. Apart from the photometric noise, noise due to other sources, such as stellar activity and variations in sky transparency, contributes substantially to the fluctuations in the transit light curves, which in turn limits the precision with which the transit parameters can be determined. Different types of noise require different treatments. We have performed differential photometry to reduce the patterns that affect all the stars in a frame equally, such as the gradual variation in airmass over the span of 3-4 hours of observation and large-scale transparency fluctuations that change the apparent brightness of all the stars in the frame almost equally. Also, prior to the modeling of the transit light curves, we have preprocessed the light curves using a wavelet-based denoising technique. This decorrelation method suppresses the patterns of variability that are common to all the stars in the field but uncorrelated in time. These patterns can be caused by medium-scale transparency or seeing fluctuations that affect the stars in a frame slightly unequally but are temporally uncorrelated. This method also removes outliers caused by cosmic-ray hits. Wavelet-based light curve noise analysis and filtering can be found in \cite{cubillos17,waldman14}. In the case of transit photometry, \cite{ser18} has shown that this denoising technique does not alter the underlying astrophysical signals but improves the results obtained after modeling the transit light curves. Such preprocessing of the data is a part of our self-developed pipeline used for reduction, analysis and modeling.
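As an illustration of the kind of wavelet-based denoising referred to above (a generic sketch, not our actual pipeline), the snippet below soft-thresholds the detail coefficients of a one-dimensional light curve using the PyWavelets package; the choice of wavelet, decomposition level and threshold are placeholders.
\begin{verbatim}
import numpy as np
import pywt

def wavelet_denoise(flux, wavelet="sym8", level=4):
    """Soft-threshold wavelet denoising of a 1-D light curve (illustrative)."""
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(flux)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(flux)]
\end{verbatim}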
For comparison, we have also analysed the data without using the wavelet denoising process. In addition to these patterns, confusing signals caused by stellar activity or pulsation are unique to each star, are temporally correlated, and cannot be suppressed or removed by decorrelation or de-noising. They can, however, be modelled alongside the signal of interest. Such activity or pulsations have already been reported for some of the host stars \citep{vonessen14,mancini15}. We have addressed such situations by modeling the covariance structure of the confusing signal using GP regression \citep{johnson15,barclay15}, in order to ensure that their effects are accounted for properly in the posterior uncertainty estimates for the fitted parameters of the transit light curves. Photon noise, propagated through the differential photometry, is uncorrelated and contributes to the diagonal elements of the covariance matrix. As a result of this processing and modeling, we could determine the transit parameters of the planets more accurately than the previously published results. We have adopted the stellar parameters, such as the mass, radius and effective temperature of the host stars, and the semi-amplitude of the oscillation of the radial velocity of the host stars due to the planets ($K_{RV}$) from existing data available in the literature \citep{cameron10,gillon11, bakos12,lehmann15,penev16,collins17}. The paper is organized in the following way: In section-\ref{sec:dodal} we discuss the details of the observations and the additional data adopted from the literature. In section-\ref{sec:rodam} we outline our newly developed pipeline used for data reduction and analysis of the raw images, which produces the transit light curves that are fitted with transit models. In section-\ref{sec:ton} we elaborate on the preprocessing method employed to reduce the fluctuations contributed by various sources of noise to the transit light curves. In Section-\ref{sec:obc} we present the results for the observations of stars without planetary transits and discuss their utility in characterizing the stability of the baselines of the transit light curves. In Section-\ref{sec:rd} we discuss the results, and the work is summarized and concluded in section-\ref{sec:c}. \section{Details of Observations and Data Adopted from Literature} \label{sec:dodal} We observed the transit events by using the 2-meter Himalayan Chandra Telescope (HCT) at the Indian Astronomical Observatory (IAO), Hanle, and the 1.3-meter Jagadish Chandra Bhattacharyya Telescope (JCBT) at the Vainu Bappu Observatory (VBO), Kavalur. For the HCT we used the back-end instrument Hanle Faint Object Spectroscopic Camera (HFOSC), which has a ${\rm 2k\times2k}$ optical CCD as the imager with a field of view of $10^\prime\times10^\prime$ on-sky. In the case of the JCBT we used the ${\rm 2k\times4k}$ UKATC optical CCD as the imager with a field of view of $10^\prime\times20^\prime$ on-sky. Bessel V, R and I filters were used for the observations. Both imagers have a plate-scale of $3\arcsec$/pixel and both are liquid-nitrogen cooled to make the dark noise negligible. In order to obtain multiple transit light curves, each target has been observed repeatedly. Some of the observed frames had to be discarded as they were affected either by passing clouds or by condensation of water on the CCD.
We have used the orbital period and the semi-amplitude of the radial velocity of the host star from the literature as mentioned in the next subsection (see Table~\ref{tab:lit}). The rest of the planetary parameters are deduced by the detail analysis and modeling of the observational data. \subsection{WASP-33b} WASP 33 b is a hot jupiter that orbits around the host star HD 15082. We observed this object for 5 transit events - one from HCT in V filter on 09 Dec 2017, two from JCBT in I filter on 05 Jan 2018, 27 Jan 2018 and the other two from JCBT in V filter on 26 Dec 2018 and 06 Jan 2019. The host star is an A5 type star \citep{grenier99}. It has a mass of $1.495\pm0.031$ $M_\odot$ and a radius of 1.444$\pm$0.034 $R_\odot$ \citep{cameron10}. It is a $\delta$ Sct variable star with a V mag of 8.3 \citep{herrero11}. So, the transit light curves of WASP-33b are contaminated with the pulsations as reported by \cite{vonessen14,johnson15}. The effect of these pulsations on the estimation of the transit parameters is subtracted by adopting a denoising technique as explained in Sec-\ref{sec:ton}. The orbital period of the planet is taken as $1.21987\pm0.000001$ days \citep{cameron10,vonessen14,johnson15}. In order to determine the mass and hence the mean density of the planet, we have considered the semi-amplitude of the radial velocity of the star due to the planet as $K_{RV}$ = $304.0 \pm 20.0 {\rm ms^{-1}}$ \citep{lehmann15}. The effective temperature of the host star is taken to be $T_{eff} = 7308 \pm 71 $ K \citep{cameron10}. From this the equilibrium temperature of the planet is determined (see section \ref{sec:rd}). \subsection{WASP-50b} We observed a total of 5 transit events of this hot jupiter by using JCBT in I filter on 26 Jan 2018, 28 Jan 2018 and 30 Jan 2018 and in R filter on 07 Jan 2019 and 11 Jan 2019. The host star has a V mag of 11.44, a mass of $0.892^{+0.080}_{-0.074}$ $M_\odot$ and a radius of $0.843\pm 0.031$ $R_\odot$ \citep{gillon11}. The $T_{eff}$ and the semi-amplitude of the radial velocity of the star due to the planet ($K_{RV}$) are respectively 5400$\pm$100 K and 256.6$\pm$4.4 m/s \citep{gillon11}. The orbital period of the planet is taken as $1.955100\pm0.000005$ days \citep{gillon11}. \subsection{WASP-12b} By using JCBT, we observed a total of 5 transit events for this hot Jupiter - 3 in R-band on 03 Feb 2018, 04 Feb 2018 and 14 Feb 2018, one in I-band on 15 Feb 2018 and the other one in V-band on 04 Jan 2019. The host star has a mass, radius and $T_{eff}$ of 1.434$\pm$0.11 $M_\odot$, 1.657 $\pm 0.046$ $ R_\odot$ and 6300$\pm$150 K respectively \citep{collins17}. The semi-amplitude of the radial velocity of the star due to the planet is $K_{RV}$ = 226$\pm4.0 {\rm ms^{-1}}$ \citep{collins17}. The orbital period of the planet is $1.09142 \pm 0.00000014$ days \citep{collins17}. \subsection{HATS-18b} HATS-18 b is a hot Jupiter that orbits around a G type star which is very similar to the Sun in terms of mass, radius and $T_{eff}$. We report the observations of four transit events of HATS-18 b, all by using JCBT. The observations were taken in I-band on 27 Jan 2018, 18 Feb 2018 and 06 Apr 2018 and in R-band on 08 Jan 2019. The host star has a mass, radius and $T_{eff}$ of 1.037$\pm$0.047 $M_\odot$, $1.020^{+0.057}_{-0.031} R_\odot$ and 5600$\pm$120 K respectively \citep{penev16}. The semi-amplitude of radial velocity of the star is $K_{RV}=415.2\pm10.0 {\rm ms^{-1}}$ \citep{penev16}. The orbital period of the planet is $0.8378\pm0.00000047$ days \citep{penev16}. 
\subsection{HAT-P-36b} We observed HAT-P-36 b during 4 transit events: on 15 Feb 2018 using the I filter, on 08 Apr 2018 and 06 May 2018 using the V filter with JCBT, and on 20 Jun 2018 using the V filter with HCT. The host star is a G5V star with mass, radius and $T_{eff}$ of $1.03\pm0.03$ $M_\odot$, $1.041\pm0.013$ $R_\odot$ and $5620\pm40$ K respectively \citep{bakos12}. This star is also very similar to the Sun in terms of mass, radius and $T_{eff}$. The semi-amplitude of the radial velocity of the star due to the planet is $K_{RV} = 334.7\pm14.5~{\rm m\,s^{-1}}$ \citep{bakos12,mancini15}. The orbital period of the planet is taken as $1.32734683\pm0.00000048$ days \citep{bakos12,mancini15}. \clearpage \begin{deluxetable}{lccccc}[!ht] \tablecaption{Stellar and orbital parameters adopted from literature \label{tab:lit}} \tabletypesize{\scriptsize} \tablehead{\\ Parameters & WASP-33 b & WASP-50 b & WASP-12 b & HATS-18 b & HAT-P-36 b } \startdata Host star mass, & $1.495\pm0.031$ & $0.892^{+0.08}_{-0.074}$ & $1.434\pm0.11$ & $1.037\pm0.047$ & $1.03\pm0.03$ \\ $M_*$ ($M_\odot$) & & & & & \\ Host star radius, & $1.444\pm0.034$ & $0.843\pm0.031$ & $1.657\pm0.046$ & $1.02^{+0.057}_{-0.031}$ & $1.041\pm0.013$ \\ $R_*$ ($R_\odot$) & & & & & \\ Host star $T_{eff}$ & $7430\pm100$ (a) & $5400\pm100$ & $6360\pm140$ & $5600\pm120$ & $5620\pm40$ \\ (K) & & & & & \\ Orbital Period, & $1.21987\pm0.000001$ & $1.955100\pm0.000005$ & $1.09142\pm1.4432\times 10^{-7}$ & $0.83784\pm4.7\times 10^{-7}$ & $1.32734683\pm0.00000048$ \\ P (days) & & & & & \\ RV amplitude, & $304\pm20$ & $256.6\pm4.4$ & $226.4\pm4.1$ & $415.2\pm10.0$ & $334.7\pm14.5$ \\ $K_{RV}$ (m/s) & & & & & \\ Sources & \cite{cameron10} & \cite{gillon11} & \cite{collins17} & \cite{penev16} & \cite{bakos12} \\ & \cite{lehmann15} & & & & \cite{mancini15} \\ \enddata \tablecomments{The value of each parameter is shown along with 1-$\sigma$ error margin.} \end{deluxetable} \clearpage \begin{deluxetable}{lcccccccc}[!ht] \tablecaption{Observation details and the night-dependent model parameters \label{tab:obs}} \tabletypesize{\scriptsize} \tablehead{\\ Planet & Date of & Telescope & Filter & Photometric & Mid-transit ephemerides, & Cycle & $A$ & $\tau$ \\ & Observation & & (Bessel) & S/N (median) & $t_{cen}$ (BJD-TDB) & no.\tablenotemark{a} & & } \startdata & 09 Dec 2017 & HCT & V & 191.90 & $2458097.30431\pm0.00000786$ & 4191 & $0.0060\pm0.0001$ & $20.0\pm0.1$ \\ & 05 Jan 2018 & JCBT & I & 1253.47 & $2458124.14196\pm0.00000769$ & 4213 & $0.0017\pm0.0001$ & $20.0\pm0.1$ \\ WASP-33 b & 27 Jan 2018 & JCBT & I & 300.29 & $2458146.09962\pm0.00000765$ & 4231 & $0.0029\pm0.0001$ & $19.99\pm0.1$ \\ & 26 Dec 2018 & JCBT & V & 505.03 & $2458479.12739\pm0.00000698$ & 4504 & $0.0044\pm0.0001$ & $19.99\pm0.1$ \\ & 06 Jan 2019 & JCBT & V & 466.48 & $2458490.10444\pm0.00000696$ & 4513 & $0.0039\pm0.0001$ & $18.99\pm0.1$ \\ \hline & 26 Jan 2018 & JCBT & I & 361.93 & $2458145.20327\pm0.00001057$ & 1323 & $0.00141\pm0.0001$ & $12.0\pm0.1$ \\ & 28 Jan 2018 & JCBT & I & 252.39 & $2458147.15848\pm0.00001135$ & 1324 & $0.00232\pm0.0001$ & $13.0\pm0.1$ \\ WASP-50 b & 30 Jan 2018 & JCBT & I & 1012.89 & $2458149.11405\pm0.00000886$ & 1325 & $0.00096\pm0.0001$ & $10.0\pm0.1$ \\ & 07 Jan 2019 & JCBT & R & 1139.06 & $2458491.25582\pm0.00000783$ & 1500 & $0.00240\pm0.00011$ & $12.0\pm0.1$ \\ & 11 Jan 2019 & JCBT & R & 1086.67 & $2458495.16601\pm0.00000811$ & 1502 & $0.00240\pm0.00011$ & $12.0\pm0.1$ \\ \hline & 03 Feb 2018 & JCBT & R & 1318.76 & $2458153.22835\pm0.00001885$ & 2754 &
$0.00037^{+0.0005}_{-0.0001}$ & $12.0\pm0.1$ \\ & 04 Feb 2018 & JCBT & R & 1180.83 & $2458154.31975\pm0.00000770$ & 2755 & $0.00302\pm0.0002$ & $10.0\pm0.1$ \\ WASP-12 b & 14 Feb 2018 & JCBT & R & 1261.32 & $2458164.14255\pm0.00002867$ & 2764 & $0.00100^{+0.00025}_{-0.00014}$ & $10.0\pm0.1$ \\ & 15 Feb 2018 & JCBT & I & 500.50 & $2458165.23478\pm0.00001119$ & 2765 & $0.00293^{+0.0001}_{-0.0004}$ & $13.0\pm0.1$ \\ & 04 Jan 2019 & JCBT & V & 1085.84 & $2458488.29489\pm0.00001192$ & 3646 & $0.00300^{+0.0007}_{-0.0001}$ & $12.0\pm0.1$ \\ \hline & 27 Jan 2018 & JCBT & I & 443.07 & $2458146.42651\pm0.00000928$ & 1261 & $0.00099\pm0.0001$ & $12.0\pm0.1$ \\ HATS-18 b & 18 Feb 2018 & JCBT & I & 239.80 & $2458168.21044\pm0.00000952$ & 1287 & $0.004\pm0.0001$ & $10.0\pm0.1$ \\ & 06 Apr 2018 & JCBT & I & 307.09 & $2458215.12967\pm0.00001042$ & 1343 & $0.00599\pm0.0001$ & $7.0\pm0.1$ \\ & 08 Jan 2019 & JCBT & R & 425.84 & $2458492.45583\pm0.00001028$ & 1674 & $0.00091\pm0.0001$ & $7.0\pm0.1$ \\ \hline & 15 Feb 2018 & JCBT & I & 329.09
& $2458165.45507\pm0.00000686$ & 1959 & $0.00301\pm0.0001$ & $13.0\pm0.1$ \\ HAT-P-36 b & 08 Apr 2018 & JCBT & V & 461.99 & $2458217.22160\pm0.00000755$ & 1998 & $0.00175\pm0.0001$ & $12.99\pm0.1$ \\ & 06 May 2018 & JCBT & V & 589.10 & $2458245.09464\pm0.00000737$ & 2019 & $0.00099\pm0.0001$ & $13.0\pm0.1$ \\ & 20 Jun 2018 & HCT & V & 1106.84 & $2458290.22569\pm0.00000709$ & 2053 & $0.00088\pm0.0001$ & $13.0\pm0.1$ \\ \enddata \tablenotetext{\text{a}}{The mid-transit ephemerides (BJD-TDB) at cycle 0 for WASP-33 b, WASP-50 b, WASP-12 b, HATS-18 b, HAT-P-36 b are considered to be at 2452984.82964 \citep{turner16}, 2455558.61197 \citep{gillon11}, 2455147.4582 \citep{turner16}, 2457089.90598 \citep{penev16} and 2455565.18167 \citep{mancini15} respectively.} \tablecomments{The values of $t_{cen}$, A and $\tau$ are shown along with 1-$\sigma$ error margin.} \end{deluxetable} \clearpage \section{Data Reduction, Analysis and Modeling} \label{sec:rodam} We have developed an automated pipeline based on Python and PyRAF to reduce, analyse and model the observed data. This pipeline performs the necessary bias and flat corrections on the raw images and then aligns the corrected images. Leveraging the moderately large on-sky field of view of the imagers, we could image the target stars along with a few field stars that serve as reference stars for those targets. Subsequently, using the DAOPHOT package of PyRAF, the pipeline automatically performs differential aperture photometry on the target stars and generates normalized light curves for the transit epochs. Henceforth, by a single light curve we mean the light curve of a host star obtained on one particular night, which may or may not contain a transit signal. Before modeling, the light curves were denoised using a wavelet denoising technique in order to suppress the patterns of variability that are common to all the stars in the frame but uncorrelated in time \citep{ser18}. Also, to take care of the noise pattern that is unique to each host star and correlated in time, caused by stellar activity, pulsation, etc., we have adopted a Gaussian-process correlated-noise model whose covariance structure is taken into account when calculating the likelihood of the data given the model \citep{johnson15,barclay15}. The various sources of noise that cause the fluctuations in the light curves, and the corresponding decorrelation or modeling techniques, are discussed in detail in the next section. After denoising, we modeled the transit light curves by using the formalism described in \cite{mandel-agol02}. We have used the Markov Chain Monte Carlo (MCMC) technique employing the Metropolis-Hastings algorithm \citep{cameron10} to fit the models to the observed light curves and thus determined the various physical parameters from the best fit. An essential parameter of the model is the orbital period, which we have kept fixed at the values given in previously published results \citep{cameron10,gillon11, bakos12,penev16,collins17}. For all the transit events we have assumed circular orbits for the planets. The free parameters of each transit model are the mid-transit ephemeris ($t_{cen}$), the impact parameter $(b)$, the scaled radius of the star ($R_*/a$), the ratio between the planetary and the stellar radius ($R_p/R_*$), the pre-ingress or post-egress baseline level ($f_{star}$) and the limb darkening coefficients ($C_i$). We have modeled all the observed transit light curves of a particular planet simultaneously; a simplified sketch of one such fit is given below.
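As an illustration of this fitting step, the following minimal sketch runs a Metropolis-Hastings sampler on a simulated light curve. It is not the pipeline code: a simple trapezoidal transit shape replaces the \cite{mandel-agol02} model, only a reduced parameter set (mid-transit time, depth, durations and baseline) is fitted, and the priors, step sizes and data are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def trapezoid_transit(t, t_cen, depth, t_total, t_flat, baseline):
    """Illustrative trapezoidal transit shape (stand-in for a Mandel-Agol model)."""
    x = np.abs(t - t_cen)
    ingress = 0.5 * (t_total - t_flat)
    flux = np.full_like(t, baseline, dtype=float)
    in_flat = x <= 0.5 * t_flat
    in_slope = (x > 0.5 * t_flat) & (x < 0.5 * t_total)
    flux[in_flat] -= depth
    flux[in_slope] -= depth * (0.5 * t_total - x[in_slope]) / ingress
    return flux

def log_likelihood(params, t, f, sigma):
    t_cen, depth, t_total, t_flat, baseline = params
    if not (0 < t_flat < t_total and 0 < depth < 0.1):
        return -np.inf                      # crude uniform priors
    model = trapezoid_transit(t, t_cen, depth, t_total, t_flat, baseline)
    return -0.5 * np.sum(((f - model) / sigma) ** 2)

# Synthetic light curve (placeholder data).
t = np.linspace(-0.1, 0.1, 400)             # days from the predicted mid-transit
true = (0.0, 0.013, 0.12, 0.08, 1.0)
sigma = 1e-3
f = trapezoid_transit(t, *true) + rng.normal(0, sigma, t.size)

# Metropolis-Hastings with Gaussian proposals.
theta = np.array([0.005, 0.010, 0.10, 0.07, 1.0])
step = np.array([1e-3, 5e-4, 5e-3, 5e-3, 1e-4])
logL = log_likelihood(theta, t, f, sigma)
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=theta.size)
    logL_prop = log_likelihood(prop, t, f, sigma)
    if np.log(rng.uniform()) < logL_prop - logL:   # accept/reject
        theta, logL = prop, logL_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                     # discard burn-in
print("t_cen = %.5f +/- %.5f d" % (chain[:, 0].mean(), chain[:, 0].std()))
\end{verbatim}
In the actual analysis the same accept/reject logic is applied to the full \cite{mandel-agol02} model with the free parameters and priors described above.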
By modeling the light curves simultaneously, we deduced a single set of values for $b$, $R_*/a$ and $R_p/R_*$ for each planet. These parameters are properties of the planet-star system and hence independent of the observing conditions. We deduced different sets of values for the limb darkening coefficients $C_i$ for each host star in different filters. Also, for different transit events, we deduced different sets of values for $t_{cen}$ and $f_{star}$ from our model fit (Table-\ref{tab:obs}), as these parameters depend on the night of observation. For all the free parameters other than the limb darkening coefficients we have set uniform prior functions \citep{gillon11}. We adopted the quadratic limb darkening law, which can be expressed as: \begin{equation} I/I(\mu=1) = 1 - C_1(1-\mu) - C_2(1-\mu)^2, \end{equation} where $I/I(\mu=1)$ denotes the intensity at any point on the disc normalized to that at the center. The initial values required to derive the limb darkening coefficients from the MCMC fit are taken from \cite{claret11}, and Gaussian priors were set on them \citep{johnson15}. The MCMC generates a sample of best-fit values for the model parameters, whose size depends on the number of walkers and iterations, by maximizing the likelihood of the model fits to the light-curve data. A Gaussian fit to this sample then gives the quoted value of each parameter with its 1-$\sigma$ error margin. \section{Treatment of Noise} \label{sec:ton} The images captured from ground-based telescopes are susceptible to noise generated by various sources. These noise contributions are either common to all the objects in a frame and uncorrelated in time, such as those caused by fluctuations in the transparency, seeing, airmass, etc., or unique to each object and correlated in time, such as those caused by the activity or pulsations of the host stars. In order to decorrelate the former type of noise from the light curves, preprocessing the light curves before modeling is essential for achieving high precision in the transit parameters estimated from the modeling. However, smoothing techniques such as moving-average or Gaussian smoothing cannot be used to suppress this noise, as the smoothing process can distort the original light curves by removing the high-frequency components of the transit signal itself, thereby compromising the reliability of the properties derived therefrom. On the other hand, for a non-stationary, non-sinusoidal signal like a noisy transit signal, wavelet denoising is much more efficient than a frequency-based filtering technique in terms of signal reconstruction and denoised S/N \citep{barsanti11,lagha13}. Wavelets have already been used extensively in light-curve noise analysis and filtering \citep{cubillos17,waldman14}. In the case of transit photometry, wavelet denoising can efficiently remove the outliers, yield better MCMC posterior distributions and reduce the bias in the fitted transit parameters and their uncertainties \citep{ser18}. We used the pywt package \citep{pywt} and followed the same procedure as described in \cite{ser18}. Also, we simulated a transit light curve assuming a set of values for the transit parameters along with uncertainties in each parameter. The uncertainties in the parameters are then reflected in the errorbars of the simulated transit light curve. The wavelet denoising process is expected not to affect a light curve whose errorbars are limited by the uncertainties in the transit parameters.
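For reference, the following minimal sketch applies soft-threshold wavelet denoising with the pywt package to a simulated noisy light curve. The wavelet family, the decomposition level and the universal threshold used here are illustrative choices and are not necessarily those of \cite{ser18}.
\begin{verbatim}
import numpy as np
import pywt

def wavelet_denoise(flux, wavelet="sym4", level=None, mode="soft"):
    """Soft-threshold wavelet denoising of a 1-D light curve.

    The wavelet family, decomposition level and universal threshold are
    illustrative choices, not necessarily those of the pipeline.
    """
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    # Noise level from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    uthresh = sigma * np.sqrt(2.0 * np.log(len(flux)))
    denoised = [coeffs[0]] + [pywt.threshold(c, uthresh, mode=mode)
                              for c in coeffs[1:]]
    rec = pywt.waverec(denoised, wavelet)
    return rec[: len(flux)]          # waverec may pad by one sample

# Example: denoise a simulated (box-shaped) noisy transit light curve.
rng = np.random.default_rng(0)
t = np.linspace(-0.1, 0.1, 512)
flux = 1.0 - 0.013 * (np.abs(t) < 0.05) + rng.normal(0, 1e-3, t.size)
clean = wavelet_denoise(flux)
print("scatter before: %.2e, after: %.2e" % (flux.std(), clean.std()))
\end{verbatim}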
Indeed, we found that our simulated transit light curve was left almost unchanged by the denoising process. This ensures that the light curves are not over-smoothed and that the errorbars are not under-estimated by the denoising process. The transit light curves with and without wavelet-denoising are shown in Figure~\ref{fig:wasp33bwd}, Figure~\ref{fig:wasp50bwd}, Figure~\ref{fig:wasp12bwd}, Figure~\ref{fig:hats18bwd} and in Figure~\ref{fig:hatp36bwd}. The values of the planetary physical parameters deduced by modeling the transit light curves preprocessed with wavelet denoising are presented in Table~\ref{tab:par}. The same quantities obtained without the wavelet denoising process are provided in Table~\ref{tab:parnw}. A comparison of the results presented in the two tables shows that the wavelet denoising process improves the precision of the deduced parameters significantly. However, the wavelet denoising process can only efficiently remove the outliers and reduce the temporally uncorrelated noise. The temporally correlated noise unique to the host stars that remains in the denoised transit light curves cannot be directly decorrelated from the light curves. Instead, while modeling the light curves using MCMC, we also modeled the covariance structure of the correlated noise using Gaussian process (GP) regression \citep{johnson15,barclay15}, with the transit model function \citep{mandel-agol02} itself as the mean function. This ensures that the effect of the correlated noise on the estimation of the posterior uncertainties of the fitted parameters is minimized. The diagonal elements of the covariance matrix are contributed by the errorbars of the transit light curves (photon noise plus readout noise). We have formed the correlation matrix using a Matern 3/2 kernel, given by \citep{johnson15}: \begin{equation} \kappa_{ij} = A^2\left(1+\frac{\sqrt{3}\Delta t_{ij}}{\tau}\right)\exp\left(-\frac{\sqrt{3}\Delta t_{ij}}{\tau}\right) + \delta_{ij}\sigma^2_i, \end{equation} where $\Delta t_{ij} = |t_i-t_j|$, with $t_i$ and $t_j$ two epochs of observation, $\sigma_i$ is the uncertainty (error) in the flux value at time $t_i$ and $\delta_{ij}$ is the Kronecker delta function. $A$ and $\tau$ are the amplitude and the time scale of the correlated-noise fluctuations of a light curve and are fitted in the MCMC together with the transit parameters. We have kept $A$ and $\tau$ separate for each light curve. The prior functions of $A$ and $\tau$ are also chosen to be uniform. The prior range for $A$ is estimated from the amplitude of the fluctuations at the pre-ingress and post-egress epochs, and the prior range for $\tau$ is estimated from the high-frequency peaks of the Lomb-Scargle periodogram of each light curve. In addition, while modeling, we have multiplied the transit-plus-noise model by a baseline function to represent the systematics due to other astrophysical noise or noise introduced at the detector stage. By minimizing the Bayesian Information Criterion (BIC), we have chosen a first-order baseline function of time for this purpose \citep{gillon16}. \clearpage \begin{deluxetable}{lccccc}[!ht] \tablecaption{Physical parameters directly obtained and deduced from our differential transit photometry followed by preprocessing with WD technique and modeling.
\label{tab:par}} \tablehead{\\ Parameters & WASP-33 b & WASP-50 b & WASP-12 b & HATS-18 b & HAT-P-36 b } \startdata \textbf{Transit model parameters} \\ Impact Parameter, b & $0.21\pm0.002$ & $0.669^{+0.018}_{-0.007}$ & $0.339\pm0.0017$ & $0.3\pm0.001$ & $0.25\pm0.007$ \\ Scaled Stellar radius, $R_*/a$ & $0.28\pm0.0008$ & $0.133\pm0.003$ & $0.333^{+0.0002}_{-0.0017}$ & $0.273\pm0.0006$ & $0.21^{+0.003}_{-0.0002}$ \\ Planet/Star Radius Ratio, $R_p/R_*$ & $0.1118\pm0.0002$ & $0.139\pm0.0006$ & $0.117\pm0.0002$ & $0.132\pm0.0004$ & $0.1199\pm0.0002$ \\ \hline \textbf{Limb darkening coefficients} \\ Linear Term for V filter, $C1_V$ & $0.5\pm0.01$ & $-$ & $0.42\pm0.01$ & $0.5\pm0.01$ & $0.53\pm0.01$ \\ Quadratic Term for V filter, $C2_V$ & $0.2\pm0.01$ & $-$ & $0.31\pm0.01$ & $0.2\pm0.01$ & $0.23\pm0.01$ \\ Linear Term for R filter, $C1_R$ & $-$ & $0.4\pm0.01$ & $0.3\pm0.01$ & $0.41\pm0.01$ & $-$ \\ Quadratic Term for R filter, $C2_R$ & $-$ & $0.2\pm0.01$ & $0.3\pm0.01$ & $0.18\pm0.01$ & $-$ \\ Linear Term for I filter, $C1_I$ & $0.31\pm0.01$ & $0.3\pm0.01$ & $0.29\pm0.01$ & $0.31\pm0.01$ & $0.32\pm0.01$ \\ Quadratic Term for I filter, $C2_I$ & $0.18\pm0.01$ & $0.2\pm0.01$ & $0.31\pm0.01$ & $0.21\pm0.01$ & $0.19\pm0.01$ \\ \hline \textbf{Deduced parameters}\\ Transit Duration, $T_{14}$ (days) & $0.1189\pm0.0005$ & $0.0764\pm0.0011$ & $0.1267^{+0.00009}_{-0.0005}$ & $0.081\pm0.0001$ & $0.093^{+0.0016}_{-0.00007}$ \\ Planet Radius, $R_p$ ($R_J$) & $1.593\pm0.074$ & $1.166\pm0.043$ & $1.937\pm0.056$ & $1.329\pm0.075$ & $1.277\pm0.02$ \\ Scale Parameter, $a/R_*$ & $3.571\pm0.01$ & $7.51\pm0.10$ & $3.0^{+0.016}_{-0.0019}$ & $3.658\pm0.008$ & $4.95\pm0.042$ \\ Orbital Separation, $a$ (AU) & $0.0239\pm0.00063$ & $0.0293\pm0.0013$ & $0.0232\pm0.00064$ & $0.0174\pm0.00098$ & $0.0241\pm0.00047$ \\ Orbital Inclination, $i$ (degrees) & $86.63\pm0.03$ & $84.88\pm0.27$ & $83.52\pm0.03$ & $85.29\pm0.013$ & $87.13^{+0.004}_{-0.13}$ \\ Planet Mass, $M_p$ ($M_J$) & $2.093\pm0.139$ & $1.4688\pm0.092$ & $1.465\pm0.079$ & $1.9795\pm0.076$ & $1.8482\pm0.087$ \\ Planet Mean Density, $\rho_p$ (g$cm^{-3}$) & $0.689\pm0.074$ & $1.325\pm0.214$ & $0.267\pm0.0288$ & $1.1169\pm0.216$ & $1.175\pm0.078$ \\ Surface Gravity, $\log g_p$ (cgs) & $3.275\pm0.04$ & $3.469\pm0.029$ & $2.998\pm0.01$ & $3.45\pm0.013$ & $3.476\pm0.027$ \\ Equilibrium Temp.\tablenotemark{a}, $T_{eq}$ (K) & $2781.70\pm41.1$ & $1394.84\pm32.7$ & $2592.6\pm57.2$ & $2069.48\pm45.0$ & $1780.97\pm18.8$ \\ \enddata \tablenotetext{\text{a}}{Assuming zero Bond albedo and full re-distribution of the incident stellar flux.} \tablecomments{The value of each parameter is shown along with 1-$\sigma$ error margin. Also, some of the limb darkening coefficients are shown as $-$, which implies that no transit has been observed for that particular planet in that filter.} \end{deluxetable} \clearpage \begin{deluxetable}{lccccc}[!ht] \tablecaption{Physical parameters directly obtained and deduced from our differential transit photometry followed by modeling and no preprocessing (without WD). 
\label{tab:parnw}} \tablehead{\\ Parameters & WASP-33 b & WASP-50 b & WASP-12 b & HATS-18 b & HAT-P-36 b } \startdata \textbf{Transit model parameters} \\ Impact Parameter, b & $0.21\pm0.003$ & $0.65^{+0.068}_{-0.005}$ & $0.339\pm0.007$ & $0.299\pm0.019$ & $0.247\pm0.02$ \\ Scaled Stellar radius, $R_*/a$ & $0.28\pm0.003$ & $0.133^{+0.01}_{-0.002}$ & $0.332\pm0.002$ & $0.26\pm0.005$ & $0.202\pm0.004$ \\ Planet/Star Radius Ratio, $R_p/R_*$ & $0.1119\pm0.003$ & $0.135\pm0.001$ & $0.117^{+0.002}_{-0.0002}$ & $0.131^{+0.003}_{-0.0002}$ & $0.1199\pm0.003$ \\ \hline \textbf{Limb darkening coefficients} \\ Linear Term for V filter, $C1_V$ & $0.5\pm0.03$ & $-$ & $0.4\pm0.04$ & $0.48\pm0.04$ & $0.5\pm0.05$ \\ Quadratic Term for V filter, $C2_V$ & $0.2\pm0.03$ & $-$ & $0.3\pm0.04$ & $0.2\pm0.05$ & $0.2\pm0.04$ \\ Linear Term for R filter, $C1_R$ & $-$ & $0.39\pm0.05$ & $0.3\pm0.05$ & $0.4\pm0.06$ & $-$ \\ Quadratic Term for R filter, $C2_R$ & $-$ & $0.21\pm0.05$ & $0.3\pm0.05$ & $0.21\pm0.04$ & $-$ \\ Linear Term for I filter, $C1_I$ & $0.3\pm0.04$ & $0.3\pm0.06$ & $0.29\pm0.03$ & $0.31\pm0.05$ & $0.3\pm0.06$ \\ Quadratic Term for I filter, $C2_I$ & $0.2\pm0.04$ & $0.2\pm0.06$ & $0.3\pm0.03$ & $0.2\pm0.05$ & $0.2\pm0.06$ \\ \hline \textbf{Deduced parameters}\\ Transit Duration, $T_{14}$ (days) & $0.1188\pm0.0012$ & $0.078\pm0.003$ & $0.1267\pm0.0006$ & $0.079\pm0.0014$ & $0.095\pm0.0018$ \\ Planet Radius, $R_p$ ($R_J$) & $1.601\pm0.057$ & $1.144\pm0.057$ & $1.939\pm0.058$ & $1.341\pm0.079$ & $1.30\pm0.03$ \\ Scale Parameter, $a/R_*$ & $3.571\pm0.04$ & $7.485^{+0.1}_{-0.63}$ & $3.0\pm0.019$ & $3.724\pm0.067$ & $4.937\pm0.1$ \\ Orbital Separation, $a$ (AU) & $0.0239\pm0.00071$ & $0.0289\pm0.002$ & $0.0231\pm0.00068$ & $0.0176\pm0.001$ & $0.0239\pm0.00058$ \\ Orbital Inclination, $i$ (degrees) & $86.6
equation} We can thus define the truncated KLE by \begin{equation} \label{kle_truncated_3} Y^k_{N}(\boldsymbol{x})=\displaystyle \sum_{i=1}^{N}\sqrt{\lambda_i}\theta_i^k \varphi_i(\boldsymbol{x}). \end{equation} \subsection{Convergence Assessment of MCMCs} \label{mpsrf_3} We consider a problem that consists of sampling the permeability field conditioned on pressure measurements. We use a Bayesian statistical approach (discussed in Section \ref{Bayes_3}) along with a preconditioned MCMC method \cite{efendiev2006,christenfox05} to characterize the permeability field of our domain of interest. Two critical issues, namely where to begin (burn-in) and when to terminate (convergence), need to be addressed when MCMC methods are used. We now discuss the convergence diagnostics that we use in our investigation for the MCMC methods employed to construct one of the rock properties (the permeability field). A number of convergence criteria \cite{brooks1998mcmc,polson1996,rosenthal1995} for MCMCs have been developed with a solid theoretical foundation. Several review papers on MCMC convergence diagnostics are available in the literature~\cite{Roy2020,cowles1996,mengersen1999,Brooksgelman1998}. Note that in \cite{cowles1996} the authors discussed thirteen MCMC convergence diagnostics. The convergence diagnostics described in~\cite{Brooksgelman1998} are now widely used. In this work we use two popular diagnostic tools, namely the Potential Scale Reduction Factor (PSRF) and the multivariate PSRF (MPSRF), to diagnose the convergence of the MCMC algorithms. Between these two, the MPSRF takes all the parameters into account for assessing convergence of the MCMC methods. Thus, the MPSRF is more restrictive than the PSRF. The PSRF and MPSRF measures rely on multiple chains. Thus we are required to run $m>1$ independent chains in parallel with different initial points drawn from an overdispersed distribution. The effect of starting at different initial points is made minimal by discarding the first few iterations as burn-in. Let us denote by $\boldsymbol\theta$ an $N$-dimensional parameter vector, and let $l$ denote the number of posterior draws for each of the $m$ chains. Furthermore, let $\boldsymbol {\theta}_j^{c}$ denote the value of the parameter vector $\boldsymbol {\theta}$ generated at iteration $c$ of the $j$th chain of the MCMC algorithm. The posterior variance-covariance matrix is then estimated as \begin{equation} \mathbf{\widehat{V}} = \frac{l-1}{l}\mathbf{W} + \left( 1+ \frac{1}{m}\right)\frac{\mathbf{B}}{l}. \end{equation} The within- and between-sequence (chain) covariance matrices $\mathbf{W}$ and $\mathbf{B}$ are calculated as \begin{equation} \mathbf{W} = \frac{1}{m(l-1)} \sum\limits_{j=1}^m \sum\limits_{c=1}^l \left(\boldsymbol {\theta}_j^{c} - \boldsymbol {\bar \theta}_{j.}\right) \left(\boldsymbol {\theta}_j^{c} - \boldsymbol {\bar \theta}_{j.}\right) ^{T}, \end{equation} and \begin{equation} \mathbf{B} = \frac{l}{m-1} \sum\limits_{j=1}^m \left(\boldsymbol {\bar \theta}_{j.}-\boldsymbol {\bar \theta}_{..}\right) \left(\boldsymbol {\bar \theta}_{j.}-\boldsymbol {\bar \theta}_{..}\right)^{T}, \end{equation} respectively. Here $\boldsymbol {\bar \theta}_{j.}$ denotes the within-chain mean of the $j$th chain, $\boldsymbol {\bar \theta}_{..}$ denotes the overall mean over the $m$ combined chains, and $T$ denotes matrix transposition.
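For concreteness, the sketch below evaluates $\mathbf{W}$, $\mathbf{B}$ and $\mathbf{\widehat{V}}$, together with the PSRF and MPSRF diagnostics defined formally in the next paragraph, under the assumption that the $m$ chains are stored as a NumPy array of shape $(m, l, N)$.
\begin{verbatim}
import numpy as np

def psrf_mpsrf(chains):
    """Compute W, B, V-hat, the per-parameter PSRFs and the MPSRF.

    `chains` has shape (m, l, N): m chains, l draws, N parameters.
    """
    m, l, N = chains.shape
    chain_means = chains.mean(axis=1)              # (m, N)
    grand_mean = chain_means.mean(axis=0)          # (N,)

    # Within-chain covariance W.
    dev = chains - chain_means[:, None, :]
    W = np.einsum("mli,mlj->ij", dev, dev) / (m * (l - 1))

    # Between-chain covariance B (scaled by l, as in Brooks & Gelman).
    dmean = chain_means - grand_mean
    B = l * np.einsum("mi,mj->ij", dmean, dmean) / (m - 1)

    V_hat = (l - 1) / l * W + (1 + 1 / m) * B / l

    psrf = np.sqrt(np.diag(V_hat) / np.diag(W))
    # MPSRF via the largest eigenvalue of W^{-1} B / l.
    lam = np.max(np.real(np.linalg.eigvals(np.linalg.solve(W, B / l))))
    mpsrf = np.sqrt((l - 1) / l + (m + 1) / m * lam)
    return psrf, mpsrf

# Example with m = 4 synthetic chains of length l = 500 and N = 3 parameters.
rng = np.random.default_rng(1)
chains = rng.normal(size=(4, 500, 3))
psrf, mpsrf = psrf_mpsrf(chains)
print("max PSRF = %.3f, MPSRF = %.3f" % (psrf.max(), mpsrf))
\end{verbatim}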
The PSRFs are calculated using the two estimators $\mathbf{\widehat{V}}$ and $\mathbf{W}$ defined by \begin{equation} \begin{aligned} \text{PSRF}_i =\sqrt{\frac {\text{diag}(\mathbf{\widehat{V}})_i}{\text{diag}(\mathbf{W})_i}}, ~~~~\text{where} \, \,\, i=1,2,...,N. \end{aligned} \end{equation} A large PSRF$_i$ suggests that further samples would either decrease the estimate of the between-chain variance or increase the within-chain variance; it indicates that the simulated sequences have not yet traversed the parameter space completely. On the other hand, if the maximum of the PSRF values is close to $1$, we can conclude that each of the $m$ chains of $l$ simulated samples is close to the target distribution. The MPSRF is estimated by using the maximum root statistic. As in \cite{Brooksgelman1998}, it is defined by \begin{equation} \begin{aligned} \text{MPSRF} &= \sqrt{\max_{\mathbf a} \frac{\mathbf a^T\mathbf{\widehat{V}} \mathbf a}{\mathbf a^T\mathbf{W}\mathbf a}}\\ &=\sqrt{\max_{\mathbf a} \frac{\mathbf a^T\left[ \frac{l-1}{l}\mathbf{W} + \left( 1+ \frac{1}{m}\right)\frac{\mathbf{B}}{l} \right] \mathbf a }{\mathbf a^T \mathbf{W} \mathbf a}} \\ &=\sqrt{\frac{l-1}{l} + \left(\frac{m+1}{m}\right) \max_{\mathbf a} \frac{\mathbf a^T\frac{\mathbf{B}}{l}\mathbf a}{\mathbf a^T\mathbf{W} \mathbf a}}\\ &=\sqrt{\frac{l-1}{l} + \left(\frac{m+1}{m}\right) \lambda}, \end{aligned} \end{equation} \noindent where $\mathbf a \in \mathbb{R}^N$ is an arbitrary nonzero vector and $\lambda$ is the largest eigenvalue of the matrix $\mathbf{W}^{-1} \mathbf{B}/l$. If the between-chain means are equal, the between-chain covariance matrix $\mathbf{B}$ vanishes. In this case the chains mix well and $\lambda\rightarrow 0$. Thus, an MPSRF approaching $1$ indicates convergence for a sufficiently large sample size. \section{Multiscale Sampling} \label{multiscale} \subsection{The Multiscale Prior Distribution}\label{prior} We begin with the description of a decomposition of the domain $\Omega$. Our multiscale sampling strategy is based on two non-overlapping partitions of the domain $\Omega$: the first is a uniform fine Cartesian mesh $\Omega^f$ on which the values of the absolute permeability field are piecewise constant. This is also the mesh used for the numerical solution of the system \eqref{weakp1}-\eqref{weakp2}. The second is a coarse Cartesian mesh $\Omega^c$ constructed as sets of elements of $\Omega^f$ (see Figure \ref{fig1}), on which a KLE will be applied for local dimensional reduction. The proposed method is based on partitions $\Omega^\gamma$ into rectangles $\{\Omega_i^\gamma,\ i=1,\dots,M_\gamma\}$ (see Figure \ref{fig1}), such that \[ \bar{\Omega^\gamma}=\bigcup^{M_\gamma}_{i=1}\bar{\Omega_i^\gamma}; \quad \Omega_i^\gamma \cap \Omega_k^\gamma = \varnothing, \quad i \neq k, \quad \gamma = c,f. \] Define $\Gamma=\partial\Omega$ and, for $i=1,\dots,M_\gamma$: \[ \quad\Gamma_{ik}^\gamma=\Gamma_{ki}^\gamma= \partial\Omega_i^\gamma\cap\partial\Omega_k^\gamma, \quad \gamma = c,f. \] For each element of the coarse partition $\{\Omega_i^c,\ i=1,\dots,M_c\}$, we define the set \[ \mathcal{S}_i = \{j: \Omega_j^f \subset \Omega_i^c \}. \] As indicated in Figure \ref{fig1}, we refer to two length scales in the description of the new multiscale procedure: $H$, the mesh size of the coarse partition, and $h$, the mesh size of the underlying fine grid.
\begin{figure}[ht] \centering { \includegraphics[scale=1.0]{figures/fig1v2.eps} } \caption{The fine $\Omega^f$ (dashed lines) and coarse $\Omega^c$ (solid lines) partitions of $\Omega$ along with the three spatial scales used in the definition of the new multiscale procedure.} \label{fig1} \end{figure} We will consider a blocking strategy \cite{Brooksgelman1998} for Gibbs sampling within a Metropolis-Hastings algorithm. In order to define it, we decompose the $\boldsymbol\theta$ vector in Eq. \eqref{kle_truncated_3} into orthogonal subspaces corresponding to blocks with the same number of components, denoted by $\boldsymbol\theta^i$, for $i=1,\dots,M_c$. Each block of thetas is used to generate a local Gaussian field within its corresponding subdomain, as illustrated in Fig. \ref{fig_map}. The update of each $\boldsymbol\theta^i$ block is based on the random walk sampler (RWS) of~\cite{Cotter_2013}. It is given, for $i=1,\dots,M_c$, by \begin{equation} \label{RW_sampler} {\boldsymbol\theta_p}^i = \sqrt{1-\beta^2}\, {\boldsymbol\theta}^i + \beta\,{\boldsymbol\epsilon}^i, \end{equation} where ${\boldsymbol\theta_p}^i$ denotes the proposed sample and $\boldsymbol\theta^i$ the previously accepted sample. The algorithmic parameter $\beta$ is used for tuning the sampler and ${\boldsymbol\epsilon}^i$ represents a $\mathcal N(0,1)$-random vector. Not all components of ${\boldsymbol\theta_p}^i$ are updated simultaneously. Blocking is used again so that only a subset of the components is modified in one MCMC iteration. We view ${\boldsymbol\theta_p}^i$ as a column vector and refer to the number of contiguous components of this column vector that are updated simultaneously as the {\it local blocking number}. For the purpose of sampling, we consider the ${\boldsymbol\theta_p}^i$ components ordered by their corresponding eigenvalues in the local KLE, from the largest towards the smallest one. The local sampling strategy discussed above produces Gaussian samples that show discontinuities across $\Gamma_{ik}^c$. Motivated by downscaling strategies developed for multiscale methods, which aim at removing flux discontinuities at subdomain boundaries \cite{guiraldello2020}, we complete the construction of one sample from our multiscale prior distribution with an averaging step that conditions each sample on the available data at the nearest-neighbor subdomains. The averaging procedure is illustrated in Figure \ref{fig_map}. \begin{figure}[H] \centering { \includegraphics[scale=0.5]{figures/decomposition.pdf} } \caption{The mapping between blocks of theta variables and elements of the coarse $\Omega^c$ (solid lines) partition of $\Omega$. Each block of thetas is used to generate a local Gaussian field within its corresponding subdomain.} \label{fig_map} \end{figure} A length scale $\overline H$ is set (a fraction of the correlation length that enters the construction of the prior distribution) and, for each $i$, all cells of $\mathcal{S}_i$ within a distance $\overline H$ of $\Gamma_{ik}^c$ have their current values replaced by local averages (chosen so that, for uncorrelated cells, both the mean value and the variance are preserved). Figure \ref{fig_jump} illustrates a sample before and after this averaging procedure. In this figure, $H = 0.25$ and the averaging is applied on the boundary of the top right subdomain. One possible realization of this averaging step is sketched below.
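The paper does not prescribe the exact averaging operator, so the following sketch should be read as one plausible realization under stated assumptions: cells within a distance $\overline H$ of the interface are replaced by the average over a disk of radius $\overline H$, rescaled so that, for an uncorrelated zero-mean Gaussian field, both the mean and the variance are preserved.
\begin{verbatim}
import numpy as np

def interface_average(field, cell_xy, interface_dist, Hbar, mu=0.0):
    """Replace cells near a subdomain interface by rescaled local averages.

    Illustrative sketch only: the disk-shaped averaging window and the
    sqrt(k) rescaling (which preserves the mean and variance of an
    uncorrelated zero-mean field) are assumptions, not the paper's operator.
    `field`          : (n_cells,) Gaussian sample on the fine grid,
    `cell_xy`        : (n_cells, 2) fine-cell centers,
    `interface_dist` : (n_cells,) distance of each cell to the interface.
    """
    out = field.copy()
    near = np.where(interface_dist <= Hbar)[0]
    for j in near:
        d = np.linalg.norm(cell_xy - cell_xy[j], axis=1)
        nbhd = d <= Hbar                       # disk of radius Hbar around cell j
        k = nbhd.sum()
        avg = field[nbhd].mean()
        out[j] = mu + np.sqrt(k) * (avg - mu)  # rescale to preserve the variance
    return out

# Toy usage: a 16x16 fine grid with a vertical interface at x = 0.5.
n = 16
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs)
cells = np.column_stack([X.ravel(), Y.ravel()])
sample = np.random.default_rng(3).normal(size=n * n)
dist = np.abs(cells[:, 0] - 0.5)
smoothed = interface_average(sample, cells, dist, Hbar=0.1)
\end{verbatim}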
Note that if the correlation lengths are not equal, the circle for the averaging in Figure \ref{fig_map} should be replaced by an ellipse. In conclusion, the multiscale prior distribution requires three user-specified parameters: \begin{itemize} \item The value of $H$: the subdomain size; \item The value of $\overline H$: the length scale over which local averages are taken; \item The local blocking number. \end{itemize} \begin{figure}[H] \centering { \includegraphics[scale = 0.45]{figures/perm_not_smooth} \includegraphics[scale = 0.45]{figures/perm_smooth} } \caption{Sample of a permeability field before (left) and after (right) a local averaging is applied at the top right subdomain of a domain decomposition with $H = 0.25$.} \label{fig_jump} \end{figure} We remark that the number of blocks for the localized Gibbs sampling in each subdomain is given by $N_{\text{local}} = N/(M_c \, N_{lb})$, where $N_{lb}$ denotes the local blocking number. We refer to these blocks as $B_k$, $k=1,\dots, N_{\text{local}}$, and we also define the local stochastic dimension to be $N_c=N/M_c$. We conjecture that improved convergence of the MCMCs should be observed for $H>L$, where $L$ denotes the largest correlation length in the definition of the prior distribution. Further studies are needed to check the validity of this conjecture. \subsection{The Multiscale Sampling Method}\label{msm_section} We now provide a detailed algorithm for the new Multiscale Sampling Method (MSM), which consists of a preconditioned MCMC with a multiscale prior distribution. If the number of subdomains is $M_c = 1$, then the proposed algorithm reduces to the classical preconditioned MCMC~\cite{efendiev2006,christenfox05} with Gibbs sampling associated with the local blocking number. We first discuss the algorithm of the preconditioned MCMC method. The filtering step of this method is based on a coarse-scale model approximation of the governing system \eqref{weakp1}-\eqref{weakp2}. The coarse-scale discretization is similar to the fine-scale discretization, and the permeability field $\pmb{\eta}(\pmb{\theta})$ is projected onto the coarse scale. An upscaling procedure~\cite{durlofsky1991} is used to set an effective permeability field that provides an average response similar to that of the underlying fine-scale problem. The numerical simulator is run on the coarse-scale model and produces the coarse-grid pressure field $R_c$. The coarse-scale and fine-scale acceptance probabilities are estimated as \begin{equation} \begin{aligned} {\alpha}_c(\pmb{\eta}, \pmb{\eta}_p) &= \text{min}\left(1,\dfrac{I(\pmb{\eta}|\pmb{\eta}_p)P_c(\pmb{\eta}_p|R_p)}{I(\pmb{\eta}_p|\pmb{\eta})P_c(\pmb{\eta}|R_p)}\right),\text{and}\\ {\alpha}_f(\pmb{\eta},\pmb{\eta}_p) &= \text{min}\left(1,\dfrac{P_f(\pmb{\eta}_p|R_p)P_c(\pmb{\eta}|R_p)}{P_f(\pmb{\eta}|R_p)P_c(\pmb{\eta}_p|R_p)}\right), \end{aligned} \end{equation} where $I(\cdot\,|\,\cdot)$ denotes the proposal distribution and $P_c$ and $P_f$ are the posterior probabilities calculated at the coarse and fine scales, respectively. In MSM we first construct a local permeability field $\pmb\eta(\pmb\theta^i)$ for each subdomain $\Omega^c_i$, $i=1,\dots, M_c$, using Eq. \eqref{kle_truncated_3}. To do so, we generate local KLE data in the subdomains $\Omega^c_i$, $i=1,\dots, M_c$, with size $\dfrac{1}{2^n}$, $n=1,2,\dots$. Then we construct the global permeability field by taking local averages. A condensed sketch of one blocked update, combining the proposal of Eq. \eqref{RW_sampler} with this two-stage acceptance test, is given below; the full MSM algorithm is presented in Algorithm \ref{alg_multiscale_a}.
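The sketch is illustrative only: the coarse- and fine-scale log-likelihood callables are placeholders standing in for the upscaled and fine-grid flow simulations, and the block layout is hard-coded; it is not the implementation used to produce the results reported below. Because the proposal of Eq. \eqref{RW_sampler} preserves the Gaussian prior, the prior and proposal densities cancel in the acceptance ratios, so only likelihood ratios appear.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def pcn_proposal(block_values, beta):
    """Blocked random-walk proposal of Eq. (RW_sampler)."""
    return (np.sqrt(1.0 - beta**2) * block_values
            + beta * rng.normal(size=block_values.shape))

def two_stage_step(theta, beta, log_like_coarse, log_like_fine, block):
    """One two-stage (preconditioned) Metropolis step for one block of theta.

    `log_like_coarse` / `log_like_fine` are user-supplied callables returning
    the log-likelihood of the pressure data under the upscaled (coarse) and
    fine forward models; their names are placeholders, not the paper's code.
    """
    theta_p = theta.copy()
    theta_p[block] = pcn_proposal(theta[block], beta)

    # First stage: screen the proposal with the inexpensive coarse model.
    if np.log(rng.uniform()) >= log_like_coarse(theta_p) - log_like_coarse(theta):
        return theta                                   # rejected at the coarse stage

    # Second stage: correct with the fine-scale model.
    log_alpha_f = (log_like_fine(theta_p) - log_like_fine(theta)
                   + log_like_coarse(theta) - log_like_coarse(theta_p))
    if np.log(rng.uniform()) < log_alpha_f:
        return theta_p                                 # accepted on the fine scale
    return theta

# Toy usage: Gaussian pseudo-likelihoods stand in for the flow simulators.
N, beta = 20, 0.5
coarse = lambda th: -0.55 * np.sum(th**2)
fine = lambda th: -0.50 * np.sum(th**2)
theta = rng.normal(size=N)
for it in range(2000):
    block = slice((it % 4) * 5, (it % 4) * 5 + 5)      # blocks of 5 contiguous components
    theta = two_stage_step(theta, beta, coarse, fine, block)
print("final |theta| =", round(float(np.linalg.norm(theta)), 3))
\end{verbatim}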
\begin{algorithm}[H] \caption{The Multiscale Sampling Method (MSM)} \label{alg_multiscale_a} \begin{algorithmic}[1] \STATE For a given covariance function $R$ solve Eq. \eqref{kle_efun_3} to get the KLE in Eq. \eqref{kle_truncated_3}, which is used in all the subdomains $\Omega_i^c$, $i=1,\dots,M_c$. \FOR{$j=1$ to $M_{\text{mcmc}}$} \FOR{$i=1$ to $M_c$} \FOR{$k=1$ to $N_{\text{local}}$} \STATE Generate i.i.d. $\mathcal N(0,1)$ Gaussian variables to construct $\pmb{\theta}_p$ using Eq. \eqref{RW_sampler} for block $B_k$ in $\Omega_i^c$. \STATE Construct a local Gaussian sample (in each subdomain) using the KLE to set a preliminary value for the Gaussian sample at the $\mathcal{S}_i$ cells. \STATE Run the local averaging algorithm to remove discontinuities. \STATE Compute the upscaled permeability on the coarse scale using $\pmb{\eta}_p$. \STATE Solve the forward problem on the coarse scale to get $R_c$. \STATE Compute the coarse-scale acceptance probability ${\alpha}_c(\pmb{\eta}, \pmb{\eta}_p)$. \IF{ $\pmb{\eta}_p$ is accepted} \STATE Use $\pmb{\eta}_p$ in the fine-scale simulation to get $R_f$. \STATE Compute the fine-scale acceptance probability ${\alpha}_f(\pmb{\eta},\pmb{\eta}_p)$. \IF{ $\pmb{\eta}_p$ is accepted} $\pmb{\eta} = \pmb{\eta}_p$. \ENDIF \ENDIF \STATE $j=j+1.$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \section{Kriging and Conditioning} \label{kriging_condi} In this section, we combine the multiscale sampling method with conditioning by projection. The conditioning-by-projection method has been discussed in detail in \cite{ali2021conditioning}. This method consists of two steps. In the first step, given permeability values at sparse locations in the domain, a kriged field is generated for the whole domain. In the second step, the i.i.d. $\mathcal N(0,1)$ random vector is projected onto the nullspace of a data matrix defined in terms of the KLE before computing the linear combination in Eq. \eqref{kle_truncated_3}. The final permeability field is obtained by adding the fields produced in the two steps (and then taking the exponential). In the following subsections we briefly discuss the kriging interpolation and the projection method for conditioning. \subsection{Kriging Interpolation} Kriging is an interpolation method derived from regionalized variable theory \cite{kriging_1978,mining_1976}. It employs a limited set of sampled data points to compute the value of a variable throughout a continuous spatial field. Kriging uses the spatial correlation between sampled points to interpolate the values in the spatial field. This interpolation reproduces the exact values of the field at the known locations. \subsection{The Projection Method for Conditioning} Following the discussion in \cite{ali2021conditioning}, we need to extract a data matrix that is defined in terms of the KLE. We assume that the Gaussian field $Y(\boldsymbol{x})$ defined in Eq. \eqref{kle_truncated_3} is a Gaussian perturbation on top of a kriged field $\widehat{Y}(\boldsymbol{x})$; thus we can write \begin{equation} \begin{aligned} \label{condi_perm1} Y(\boldsymbol{x}) - \widehat{Y}(\boldsymbol{x}) = \sum_{i=1}^{N}\sqrt{\lambda_i}\varphi_i(\boldsymbol{x})\theta_i =\pmb{\phi}^T(\boldsymbol{x})\sqrt{D}\pmb{\theta}, \end{aligned} \end{equation} where, for each $\boldsymbol{x}$, $\pmb{\phi}(\boldsymbol{x})\in \mathbb{R}^N$, and $D$ is a diagonal matrix containing the $N$ dominant eigenvalues.
If we have $M$ measured data values at sparse locations, then we can define the following homogeneous linear system of equations \begin{equation*} \begin{aligned} \label{system} A\pmb{\theta}= \boldsymbol{0}, \end{aligned} \end{equation*} where $A=\pmb{\phi}^T(\boldsymbol{\hat{x}})\sqrt{D}\in \mathbb{R}^{M\times N}$ is the desired data matrix. Finally, we project the vector $\pmb{\theta}$ onto the nullspace of the matrix $A$ to obtain the vector closest to $\pmb{\theta}$ in the nullspace of the data matrix $A$, i.e., \begin{equation*} \label{projection} \pmb{\hat{\theta}} = P\pmb{\theta}, \end{equation*} where $P$ is a projection matrix \cite{LA_strang_2019}. Therefore, we can write \begin{equation*} \begin{aligned} \label{condi_perm3} Y(\boldsymbol{x}) = \widehat{Y}(\boldsymbol{x})+ \sum_{i=1}^{N}\sqrt{\lambda_i} \varphi_i(\boldsymbol{x})\hat{\theta_i}, \quad \pmb{\hat{\theta}} = (\hat{\theta}_1, \dots , \hat{\theta}_N). \end{aligned} \end{equation*} \section{Numerical Results} \label{results_3} In this section, we describe the simulation study for the problem of interest. We test the proposed multiscale sampling method in four examples. In each example, we numerically solve the system of Eqs. \eqref{weakp1}-\eqref{weakp2} on the domain $\Omega =[0,1]\times [0,1]$. In the first three examples we present a comparative study of the preconditioned MCMC method with and without multiscale sampling. In the last example, we analyze MSM with and without conditioning for a problem with a higher-dimensional stochastic space. In MSM, we apply the KLE to construct a permeability field for each subdomain, and then construct the global permeability field. In the KLE, we use the following covariance function: \begin{equation} \label{kle_3} \begin{aligned} R(\boldsymbol{x}_1,\boldsymbol{x}_2) = \sigma_Y^2\, \text{exp}\left(-\frac{|x_1 - x_2|^2}{2L_x^2} - \frac{|y_1 - y_2|^2}{2 L_y^2}\right), \end{aligned} \end{equation} where $L_x$ and $L_y$ are the correlation lengths and $\sigma_Y^2= \text{Var}[Y^k]$. We take $\sigma_Y^2=1$ in all four examples. Moreover, we set the source term $f=0$ and impose Dirichlet boundary conditions, $p=1$ and $p=0$, on the left and right boundaries, respectively. We also impose a no-flow (Neumann-type) boundary condition on the other two boundaries. We run four MCMC chains for each method. In order to remove the discontinuities between subdomains in our numerical studies, we set the length scale $\overline H$ to be $\overline H = \min\{\frac{L_x}{2},\frac{L_y}{2}\}$. Below we discuss the numerical results. \subsection{Example 1} In the first example, we consider $L_x=L_y=0.2$ in Eq. \eqref{kle_3}. We then generate KLEs for the global and MSM $2\times 2$ samplings. In the MSM $2\times 2$ sampling, we use $H=0.5$. Figure \ref{eigen_16x16_ch3} illustrates the decay of the eigenvalues (in log scale) for both samplings. Note that the relationship between the eigenvalues in the global sampling and the eigenvalues in the multiscale sampling can be obtained directly by a change of variables in Eq. \eqref{kle_efun_3}. We take the first $20$ eigenvalues, which preserve more than $97\%$ of the total energy, for the global sampling. Five eigenvalues are used for each subdomain in the multiscale sampling. We generate a reference synthetic permeability field on a computational fine mesh of size $16\times 16$, and then run our numerical simulator to generate the corresponding reference pressure field. We run the MCMC algorithms conditioned on this pressure field.
Figure \ref{ref_perm_1} shows these reference fields. Furthermore, we use a coarse mesh of size $8\times 8$ for the filtering step in the preconditioned MCMC. We let the local blocking number $N_{lb}=1$ and set $\beta = 0.5$ in Eq. \eqref{RW_sampler}. \begin{figure}[H] \centering \includegraphics[scale = 0.7]{figures/errors_eign/eigenvalues_16x16.pdf} \caption{Decay of eigenvalues for the global and multiscale samplings in the first example.} \label{eigen_16x16_ch3} \end{figure} \begin{figure}[H] \centering \includegraphics[width= 2.2in]{figures/ex_1_perms/ref_perm_16x16_crop.pdf} \hspace{5mm} \includegraphics[width= 2.2in]{figures/ex_1_perms/ref_pressure_16x16_crop.pdf} \caption{Reference log permeability field (left) and the corresponding reference pressure field (right) for the first example.} \label{ref_perm_1} \end{figure} As discussed in subsection \ref{mpsrf_3}, we analyze the convergence of the MCMCs using the PSRFs and the MPSRF. An MCMC method converges to the stationary distribution if both the MPSRF and the maximum of the PSRFs approach 1. In \cite{Brian2007} the author considered a value of $1.2$ for these parameters to confirm the convergence of the chains. In line with that, we stop the simulation once these parameters reach $1.2$. Figure \ref{MPSRF_16x16_ch3} shows that the preconditioned MCMC methods with and without multiscale sampling converge. However, the plots at the bottom of Figure \ref{MPSRF_16x16_ch3} show that the preconditioned MCMC method with multiscale sampling converges to the stationary distribution earlier than the method without multiscale sampling. Table \ref{Lx_Ly_1} shows the acceptance rates for both methods as well as the precision parameters for the coarse- and fine-grid simulations. The acceptance rate increases slightly when we use MSM. The errors between the reference and simulated pressure data, which enter the likelihood function, are shown in Figure \ref{error_16x16_ch3} for both methods. Both methods produce similar error curves. \begin{figure}[H] \centering \includegraphics[scale = 0.55]{figures/mpsrf_psrf/psrf_16x16-jcp.pdf} \includegraphics[scale= 0.55]{figures/mpsrf_psrf/mpsrf_16x16-jcp.pdf}\\ \includegraphics[scale = 0.55]{figures/mpsrf_psrf/psrf_16x16-jcp-magnified.pdf} \includegraphics[scale= 0.55]{figures/mpsrf_psrf/mpsrf_16x16-jcp-magnified.pdf} \caption{Top: The maximum of PSRFs and MPSRF for the MCMC method with and without multiscale sampling in the first example. Bottom: Tails of the maximum of PSRFs and MPSRF curves.} \label{MPSRF_16
x16_ch3} \end{figure} \begin{table}[H] \caption{A comparison of acceptance rates for the MCMC with and without MSM for the first example.} \center \begin{tabular}{|cccc|} \hline & \quad MCMC with global sampling & \quad MCMC with multiscale sampling& \\ \hline $\sigma_F^2$ & $10^{-3}$ & $10^{-3}$ & \\ $\sigma_C^2$ & $5\times10^{-3}$ & $5\times10^{-3}$& \\ acc. rate & $53\%$ &$55\%$& \\ \hline \end{tabular} \label{Lx_Ly_1} \end{table} \begin{figure}[H] \centering \includegraphics[scale = 0.9]{figures/errors_eign/error_16x16.pdf} \caption{Error curves of the preconditioned MCMC with and without multiscale sampling for the first example.} \label{error_16x16_ch3} \end{figure} After the convergence of both MCMC methods, we take 10000 log permeability values from each chain and draw the posterior histograms for three cells with high, medium and low permeability values in the computational domain. See Figure \ref{ref_perm_1} for the locations of these three cells. Figure \ref{posterior} shows the posterior histograms with the true (red vertical line) and mean (green vertical line) values of the log permeability for the cells. \begin{figure} \centering \begin{tabular}[b]{c} \includegraphics[scale=.4]{figures/hist_location4-crop}\\ Cell 1 \end{tabular} \begin{tabular}[b]{c} \includegraphics[scale=.4]{figures/hist_location1-crop}\\ Cell 2 \end{tabular} \begin{tabular}[b]{c} \includegraphics[scale=.4]{figures/hist_location2-crop}\\ Cell 3 \end{tabular} \vspace*{-0.2cm} \caption{Posterior histograms for three selected cells.} \label{posterior} \end{figure} Table \ref{cells} shows these values and the corresponding standard deviations. We observe in Figure \ref{posterior} that, when we use the multiscale sampling method, the mean values are within one standard deviation (green horizontal line) for all three cells. Also, in MSM, the mean of the posterior histogram is almost the same as the true value for cell 3. We do not observe a similar behavior in the posterior histograms of the global sampling method. \begin{center} \begin{table}[H] \caption{True and mean values of log permeability with the corresponding standard deviation for three cells.}\vspace{0.1cm} \setlength{\tabcolsep}{17pt} \centering \scalebox{0.85}{ \begin{tabular}{|l c c c c c c |} \hline & \multicolumn{2}{c}{Cell 1} & \multicolumn{2}{c}{Cell 2} & \multicolumn{2}{c|}{{Cell 3}} \\ & Global & MSM & Global & MSM & Global & MSM \\ \hline \\\\[-3.95\medskipamount ] True & $-1.54$ & $ - $ & $-0.3$ & $ - $ & $1.3$ &$ - $ \\ Mean & $-1.08$ & $-1.16$ & $-0.69$ & $-0.07$ & $0.78$ & $1.35$ \\ SD & $ 0.36$ & $0.34 $ & $0.39$ & $0.27$ & $0.42$ & $0.26$ \\ \hline \end{tabular}} \label{cells} \end{table} \end{center} We now compare the reference field with some of the simulated permeability fields from two selected chains; see Figures \ref{perm_1_1} and \ref{perm_1_2}. The other chains show a similar behavior. Although both MCMC methods converged, we observe that at iteration 60000 the permeability field obtained with the multiscale sampling is closer to the reference permeability field than the field obtained without the multiscale sampling.
\begin{figure}[H] \centering \includegraphics[width= 1.5in]{figures/ex_1_perms/ref_perm_16x16_crop}\\ \includegraphics[width=1.5in]{figures/ex_1_perms/perm_20_20k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_20_40k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_20_60k_crop.pdf} \\ \includegraphics[width=1.5in]{figures/ex_1_perms/perm_25_20k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_25_40k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_25_60k_crop.pdf} \\ \caption{First row: Reference log permeability field. Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. From left to right, log permeability fields at 20000, 40000 and 60000 iterations, respectively, from chain 1 in the first example.} \label{perm_1_1} \end{figure} \begin{figure}[H] \centering \includegraphics[width= 1.5in]{figures/ex_1_perms/ref_perm_16x16_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_1_perms/perm_21_20k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_21_40k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_21_60k_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_1_perms/perm_26_20k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_26_40k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_1_perms/perm_26_60k_crop.pdf} \\ \caption{First row: Reference log permeability field. Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. From left to right, log permeability fields at 20000, 40000 and 60000 iterations, respectively, from chain 2 in the first example.} \label{perm_1_2} \end{figure} \subsection{Example 2} In the second example we consider the case where the correlation lengths are not equal, i.e., $L_x= 0.2$ and $L_y=0.06$ in Eq. \eqref{kle_3}. Figure \ref{eigen_32x32_ch3} shows the decay of the eigenvalues (in log scale) for the global sampling, MSM $2\times 2$, and MSM $4\times 4$ methods. \begin{figure}[H] \centering \includegraphics[scale = 0.7]{figures/errors_eign/eigenvalues_32x32.pdf} \caption{Decay of eigenvalues for the global and multiscale sampling in the second example.} \label{eigen_32x32_ch3} \end{figure} In the KLE, we consider the first $64$ eigenvalues, which preserve $97.87\%$ of the total energy, in the global sampling method. For MSM $2\times 2$ and MSM $4\times 4$, we take $16$ and $4$ eigenvalues, respectively. We generate a reference synthetic permeability field on a computational grid of size $32\times 32$ and then run the numerical simulator to generate the corresponding reference pressure field. See Figure \ref{ref_perm_2} for the reference fields. \begin{figure}[H] \centering \includegraphics[width=2.2in]{figures/ex_2_perms/ref_32x32_crop.pdf} \hspace{5mm} \includegraphics[width=2.2in]{figures/ex_2_perms/Ref_ex_2_pressure_crop.pdf} \caption{Reference log permeability field (left) and the corresponding reference pressure field (right) for the second example.} \label{ref_perm_2} \end{figure} We use the same coarse mesh of size $8\times 8$ as in the first example. We also use the same local blocking number $N_{lb}=1$. We set the tuning parameter $\beta= 0.75$ in Eq. \eqref{RW_sampler}. Let us now consider the convergence analysis of these methods. We take $170000$ proposals from each chain to compute the MPSRF and the maximum of the PSRFs. Figure \ref{MPSRF_32x32_ch3} shows the maximum of PSRFs and MPSRF curves.
At the tails of the maximum of PSRFs and MPSRF curves, we have the values $1.2$ and $1.4$, respectively, for MSM $4\times4$. These values are slightly higher for MSM $2\times2$. Thus, we can conclude that MSM $4\times 4$ converges to the stationary distribution faster than MSM $2\times 2$. On the other hand, the PSRF and MPSRF curves of the global sampling method do not show any sign of convergence within the same number of iterations. Moreover, the acceptance rates are better for the multiscale sampling methods; see Table \ref{Lx_Ly_2}. The error curves for this study are shown in Figure \ref{error_32x32_ch3}; they are very comparable. Figures \ref{perm_2_chain1} and \ref{perm_2_chain2} present simulated permeability fields from two selected chains. From these figures, we observe that both MSM $2\times 2$ and MSM $4\times 4$ recover the permeability fields better than the global sampling method. \begin{figure}[H] \centering \includegraphics[scale = 0.55]{figures/mpsrf_psrf/psrf_32x32_ellipse.pdf} \includegraphics[scale= 0.55]{figures/mpsrf_psrf/mpsrf_32x32_ellipse.pdf} \caption{The maximum of PSRFs and MPSRF for the MCMC method with and without multiscale sampling for the second example.} \label{MPSRF_32x32_ch3} \end{figure} \begin{table}[H] \caption{A comparison of acceptance rates for the MCMC with and without MSM in the second example.} \center \begin{tabular}{|cccc|} \hline & \quad MCMC global & \quad MCMC with MSM $2\times 2$ & \quad MCMC with MSM $4\times 4$ \\ \hline $\sigma_F^2$ & $10^{-3}$ & $10^{-3}$ & $10^{-3}$ \\ $\sigma_C^2$ & $5\times10^{-3}$ & $5\times10^{-3}$& $5\times10^{-3}$ \\ acc. rate & $50\%$ & $54\%$& $55\%$ \\ \hline \end{tabular} \label{Lx_Ly_2} \end{table} \begin{figure}[H] \centering \includegraphics[scale = 0.9]{figures/errors_eign/error_32x32_ellipse.pdf} \caption{Error curves of the preconditioned MCMC with and without multiscale sampling for the second example.} \label{error_32x32_ch3} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1.5in]{figures/ex_2_perms/ref_32x32_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/perm_30_50k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/perm_30_100k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/perm_30_150k_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/20k_2x2_1_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/50k_2x2_1_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/100k_2x2_1_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/20k_4x4_1_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/50k_4x4_1_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/100k_4x4_1_crop.pdf} \caption{First row: Reference log permeability field. Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. Fourth row: Accepted permeability fields in MSM $4\times 4$.
From left to right, log permeability fields at 20000, 50000 and 100000 iterations, respectively, from chain 1 in the second example.} \label{perm_2_chain1} \end{figure} \begin{figure} \centering \includegraphics[width=1.5in]{figures/ex_2_perms/ref_32x32_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/perm_31_50k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/perm_31_100k_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/perm_31_150k_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/20k_2x2_2_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/50k_2x2_2_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/100k_2x2_2_crop.pdf}\\ \includegraphics[width=1.5in]{figures/ex_2_perms/20k_4x4_2_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/50k_4x4_2_crop.pdf} \includegraphics[width=1.5in]{figures/ex_2_perms/100k_4x4_2_crop.pdf} \caption{First row: Reference log permeability field. Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. Fourth row: Accepted permeability fields in MSM $4\times 4$. From left to right, log permeability fields at 20000, 50000 and 100000 iterations, respectively, from chain 2 in the second example.} \label{perm_2_chain2} \end{figure} \subsection{Example 3} In this example, we test the proposed method on a larger grid of size $64\times 64$ with correlation lengths $L_x= L_y = 0.1$ in Eq. \eqref{kle_3}. Figure \ref{eigen_64x64_ch3} shows the decay of the eigenvalues for the global sampling, MSM $2\times 2$, and MSM $4\times 4$ methods. We consider $64$ eigenvalues, which preserve $95.8\%$ of the total energy in the KLE, in the global sampling. The numbers of eigenvalues for MSM $2\times 2$ and MSM $4\times 4$ are 16 and 4, respectively. The synthetic reference permeability field is generated on a computational mesh of size $64\times 64$. Then, the numerical simulator is used to generate the corresponding reference pressure field. Figure \ref{ref_perm_3_1} shows the reference permeability field and the corresponding pressure distribution on the grid. We use a coarse mesh of size $16\times 16$ for the filtering step in the preconditioned MCMC. We let the local blocking number $N_{lb}=2$ and set $\beta = 0.2$ in Eq. \eqref{RW_sampler}. \begin{figure}[H] \centering \includegraphics[scale = 0.7]{figures/errors_eign/eigenvalues_64x64.pdf} \caption{Decay of eigenvalues for the global and multiscale sampling for the third example.} \label{eigen_64x64_ch3} \end{figure} \begin{figure}[H] \centering \includegraphics[width= 2.2 in] {figures/ex_3_perms/Ref_ex_3_ref.pdf} \hspace{5mm} \includegraphics[width= 2.2 in] {figures/ex_3_perms/Ref_ex_3_pressure_crop.pdf} \caption{Reference log permeability field (left) and the corresponding reference pressure field (right) for the third example.} \label{ref_perm_3_1} \end{figure} Let us now consider the PSRF and MPSRF curves for these methods. We take $240000$ proposals from each chain to construct the PSRF and MPSRF curves. We show the maximum of PSRFs and MPSRF curves in Figure \ref{MPSRF_64x64_ch3}. For MSM $4\times 4$, the values at the tails of the PSRF and MPSRF curves are $1.2$ and $1.6$, respectively. These values indicate that the curves of MSM $4\times 4$ are closer to convergence. However, the curves of MSM $2\times 2$ and of the global sampling method are still far from convergence.
Table \ref{Lx_Ly_3} shows that the acceptance rates for the multiscale sampling methods are also slightly better than those of the global sampling method. The error curves are comparable for these methods; see Figure \ref{error_64x64_ch3}. Figures \ref{perm_3_chain1} and \ref{perm_3_chain2} compare the accepted permeability fields for two selected chains in the MCMC simulation. Both MSM $2\times 2$ and MSM $4\times 4$ recover the fields better than the global sampling method. \begin{figure}[H] \centering \includegraphics[scale = 0.55]{figures/mpsrf_psrf/psrf_64x64.pdf} \includegraphics[scale= 0.55]{figures/mpsrf_psrf/mpsrf_64x64.pdf} \caption{The maximum of PSRFs and MPSRF for the MCMC method with and without multiscale sampling for the third example.} \label{MPSRF_64x64_ch3} \end{figure} \begin{table}[H] \caption{A comparison of acceptance rates for the MCMC with and without MSM for the third example.} \center \begin{tabular}{|cccc|} \hline & \quad MCMC global & \quad MCMC with MSM $2\times 2$ & \quad MCMC with MSM $4\times 4$ \\ \hline $\sigma_F^2$ & $10^{-3}$ & $10^{-3}$ & $10^{-3}$ \\ $\sigma_C^2$ & $5\times10^{-3}$ & $5\times10^{-3}$& $5\times10^{-3}$ \\ acc. rate & $41\%$ & $43\%$& $43\%$ \\ \hline \end{tabular} \label{Lx_Ly_3} \end{table} \begin{figure}[H] \centering \includegraphics[scale = 0.9]{figures/errors_eign/error_64x64.pdf} \caption{Error curves of the preconditioned MCMC with and without multiscale sampling for the third example.} \label{error_64x64_ch3} \end{figure} \begin{figure}[H] \centering \includegraphics[width= 1.5 in] {figures/ex_3_perms/Ref_ex_3_ref.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_100_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_100_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_100_300k_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_110_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_110_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_110_300k_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_90_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_90_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_90_300k_crop.pdf} \caption{First row: Reference log permeability field. Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. Fourth row: Accepted permeability fields in MSM $4\times 4$. From left to right, log permeability fields at 80000, 160000 and 240000 iterations, respectively, from chain 1 in the third example.} \label{perm_3_chain1} \end{figure} \begin{figure}[H] \centering \includegraphics[width= 1.5 in] {figures/ex_3_perms/Ref_ex_3_ref.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_101_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_101_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_101_300k_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_111_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_111_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_111_300k_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_91_100k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_91_200k_crop.pdf} \includegraphics[width= 1.5 in] {figures/ex_3_perms/perm_91_300k_crop.pdf} \caption{First row: Reference log permeability field.
Second row: Accepted permeability fields in the global sampling method. Third row: Accepted permeability fields in MSM $2\times 2$. Fourth row: Accepted permeability fields in MSM $4\times 4$. From left to right, log permeability fields at 80000, 160000 and 240000 iterations, respectively, from chain 2 in the third example.} \label{perm_3_chain2} \end{figure} \subsection{Example 4} In this example we compare MSM $4\times4$ with and without conditioning for the problem of Example 3. In the conditioning approach, we incorporate the permeability measurements at eight sparse locations in the field. Four MCMCs are simulated in each case. We compute the maximum of the PSRFs and the MPSRF by taking $100,000$ samples from each chain (total $400,000$ samples). In Figure \ref{conver_cond} we present the maximum of the PSRFs and the MPSRF curves. The values in the tails of the maximum of the PSRFs and the MPSRF are $1.09$ and $1.17$, respectively, for the method with conditioning. For the method without conditioning, these values are $2.5$ and $3.5$, respectively. Therefore, we can say that MSM $4\times4$ with conditioning reaches convergence earlier than MSM without conditioning. \begin{figure}[H] \centering \includegraphics[scale = 0.55]{figures/mpsrf_psrf/psrf_64x64_condi.pdf} \includegraphics[scale= 0.55]{figures/mpsrf_psrf/mpsrf_64x64_condi.pdf} \caption{The maximum of PSRFs and MPSRF for the multiscale sampling MCMC method with and without conditioning for the fourth example.} \label{conver_cond} \end{figure} Fig. \ref{perm_cond_64} shows the accepted fields in the method with conditioning for two chains. The permeability fields were not fully recovered; however, we see a considerable improvement in the fields in comparison to the fields in Figures \ref{perm_3_chain1} and \ref{perm_3_chain2}, which were obtained using MSM $4\times4$ without conditioning. We thus conclude that the conditioning speeds up the convergence and improves the characterization in this example. \begin{figure}[H] \centering \includegraphics[width= 1.5in] {figures/Ref_conditioning_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/perm_135_40000_crop.pdf} \includegraphics[width= 1.5 in] {figures/perm_135_60000_crop.pdf} \includegraphics[width= 1.5 in] {figures/perm_135_80000_crop.pdf}\\ \includegraphics[width= 1.5 in] {figures/perm_137_40000_crop.pdf} \includegraphics[width= 1.5 in] {figures/perm_137_60000_crop.pdf} \includegraphics[width= 1.5 in] {figures/perm_137_80000_crop.pdf} \caption{ First row: Reference log permeability field. Second row: Accepted permeability fields from chain 1. Third row: Accepted permeability fields from chain 2. From left to right, log permeability fields at 40000, 60000 and 80000 iterations, respectively, in MSM $4\times 4$ with conditioning. } \label{perm_cond_64} \end{figure} \section{Conclusions} \label{concl_3} We have presented a novel multiscale sampling method aimed at subsurface characterization. The proposed method is based on a non-overlapping partition of the domain of the governing partial differential equation that leads to the localization of the search in the underlying stochastic space. The novel method is implemented in the framework of a preconditioned Markov Chain Monte Carlo algorithm. Through several multi-chain MCMC examples, motivated by subsurface flow problems, we compare the usual preconditioned Markov Chain Monte Carlo algorithm with the proposed procedure.
Our results show that the new multiscale sampling method considerably improves the convergence rate of the preconditioned Markov Chain Monte Carlo algorithm. We also incorporated sparse measurements of the permeability field in the multiscale sampling method, and showed that conditioning on these data further improves the convergence of the proposed method. The authors and their collaborators are currently applying the method introduced here to solve the inverse problems associated with single and multiphase flows in porous media. In these studies the Multiscale Perturbation Method \cite{mpm_2020} will be used to speed up the numerical solution of elliptic equations in the forward solution of the governing system of equations. Multiscale sampling procedures based on overlapping domain decompositions are also being considered. \section*{Acknowledgments} The research of A. Rahunanthan was supported by NIFA/USDA through the Central State University Evans-Allen Research Program. \bibliographystyle{model1-num-names}
\section{Introduction} Fidelity, as a measure of the distinguishability between quantum states\cite{Fuchs1994,Jozsa1994,Fuchs1996}, plays an important role in many areas of quantum information science. For example, it is related to the precision limit in quantum metrology~\cite{BRAU94}, serves as a measure of entanglement preservation through noisy quantum channels~\cite{Schumacher1996} and in quantum memory~\cite{Surmacz2006}, has been used to characterize quantum phase transitions~\cite{Gu2010}, and provides a criterion for successful transmission in formulating quantum channel capacities~\cite{Barnum1998}. Unlike the fidelity of quantum states, which is defined directly on quantum states, the most commonly used measures for the distinguishability of quantum channels are defined indirectly through the effects of the channels on states. For example, the diamond norm, which is defined as $\|K_1-K_2\|_{\diamond}=\max_{\rho_{SA}}\|K_1\otimes I_A(\rho_{SA})-K_2\otimes I_A(\rho_{SA})\|_1$\cite{Kitaev1997,Kitaev2002,watrous2009} (here $\|X\|_1=Tr\sqrt{X^\dagger X}$, $\rho_{SA}$ denotes a state on system+ancilla, and $I_A$ denotes the identity operator on the ancillary system), is induced by the trace distance on quantum states $\|\rho_1-\rho_2\|_1$; another measure on quantum channels, which is defined as $\arccos F_{\min}(K_1,K_2)=\arccos \min_{\rho_{SA}} F_S[K_1\otimes I_A(\rho_{SA}), K_2\otimes I_A(\rho_{SA})]$\cite{Gilchrist,Belavkin}, is induced by the fidelity on quantum states $F_S(\rho_1,\rho_2)=Tr\sqrt{\rho_1^{\frac{1}{2}}\rho_2\rho_1^{\frac{1}{2}}}$. These measures, induced through quantum states, lack a direct connection to the properties of quantum channels, which severely restricts the insights that can be gained from them. A direct measure on quantum channels is expected to provide more insights and is thus highly desired. In this paper we provide a fidelity function defined directly on quantum channels, and show that this fidelity function, together with the classical fidelity on probability distributions and the fidelity on quantum states, forms a hierarchy of fidelity functions in terms of optimization. This fidelity function on quantum channels also leads to various distance measures defined directly on quantum channels; in particular, we show that the Bures angle and the Bures distance can be extended to quantum channels. We then show that the distance between quantum channels leads naturally to a new Fisher information on quantum channels which quantifies the ultimate precision limit in quantum metrology. We also show that this fidelity function provides a unified framework for perfect quantum channel discrimination and quantum metrology; in particular, we show that the minimum number of uses needed for perfect channel discrimination is exactly the counterpart of the precision limit in quantum metrology, and that various useful lower bounds for this minimum number of uses can be obtained via this connection. \section{Fidelity function on quantum channels} We start by defining the fidelity function on unitary channels and then extend it to noisy channels. For an $m\times m$ unitary matrix $U$, we denote by $e^{-i\theta_j}$ the eigenvalues of $U$, where $\theta_j\in(-\pi,\pi]$ for $1\leq j\leq m$, and we call $\theta_j$ the eigen-angles of $U$.
We define(see also\cite{Chau2011,Fung1,Fung2}) $\parallel U\parallel_{\max}=\max_{1\leq j \leq m}\mid\theta_j\mid,$ and $\parallel U\parallel_g$ as the minimum of $\parallel e^{i\gamma}U\parallel_{\max}$ over equivalent unitary operators with different global phases, i.e., $\parallel U\parallel_g=\min_{\gamma\in \mathbb{R}} \parallel e^{i\gamma} U\parallel_{\max}$. We then define \begin{eqnarray} C(U)=\left\{\begin{array}{cc} \parallel U\parallel_g, & if \parallel U\parallel_g\leq \frac{\pi}{2}, \\ \frac{\pi}{2}, & if \parallel U\parallel_g> \frac{\pi}{2}.\\ \end{array}\right. \end{eqnarray} Quantitatively $C(U)$ is equal to the maximal angle that $U$ can rotate a state away from itself\cite{Acin01,Mauro2001,Fung2}, i.e., $\cos[C(U)]=\min_{|\psi\rangle}|\langle \psi |U |\psi \rangle|.$ For mixed states it can be written as $\cos[C(U)]=\min_{\rho}F_S(\rho, U\rho U^\dagger).$ If $\theta_{\max}=\theta_1\geq \theta_2\geq \cdots \geq \theta_m=\theta_{\min}$ are arranged in decreasing order, then $C(U)=\frac{\theta_{\max}-\theta_{\min}}{2}$ when $\theta_{\max}-\theta_{\min}\leq \pi$\cite{Fung2}. We then define $\Theta_{QC}(U_1,U_2)=C(U_1^\dagger U_2)$, here $U_1$ and $U_2$ are unitary operators on the same Hilbert space(we can expand the space if they are not the same). It is easy to see that \begin{eqnarray} \aligned \cos[\Theta_{QC}(U_1,U_2)]&=\cos[C(U_1^\dagger U_2)]\\ &=\min_{\rho}F_S(U_1\rho U_1^\dagger, U_2\rho U_2^\dagger), \endaligned \end{eqnarray} $\Theta_{QC}(U_1,U_2)$ thus corresponds to the maximal angle between the output states of $U_1$ and $U_2$(however we note that the definition of $\Theta_{QC}(U_1,U_2)$ is independent of the states). We then denote $F_{QC}(U_1,U_2)=\cos[\Theta_{QC}(U_1, U_2)]$ as the fidelity between $U_1$ and $U_2$. For unitary channels this is equivalent to the fidelity function proposed previously in \cite{Acin01}. We now generalize this to noisy quantum channels. A general quantum channel $K$, which maps from $m_1$- to $m_2$-dimensional Hilbert space, can be represented by Kraus operators, $K(\rho_S)=\sum_{j=1}^q F_j\rho_S F^\dagger_j$ where $\sum_{j=1}^q F^\dagger_jF_j=I$. Equivalently it can also be written as $K(\rho_S)=Tr_E(U_{ES}(|0_E\rangle\langle0_E|\otimes \rho_S) U^\dagger_{ES}),$ where $|0_E\rangle$ denotes some standard state of the environment, and $U_{ES}$ is a unitary operator acting on both system and environment, which we call as the unitary extension of $K$. We define $\Theta_{QC}(K_1, K_2)=\min_{\{U_{ES1},U_{ES2}\}}\Theta_{QC}(U_{ES1}, U_{ES2})$ and $F_{QC}(K_1, K_2)=\cos \Theta_{QC}(K_1, K_2),$ where $U_{ESi}$ are unitary extensions of $K_i$, $i\in\{1,2\}$. In Appendix~\ref{app-compute}, we show that the optimization can be taken by fixing one unitary extension and just optimizing over the other unitary extension, i.e., \begin{eqnarray} \aligned \Theta_{QC}(K_1, K_2)&=\min_{U_{ES1}}\Theta_{QC}(U_{ES1}, U_{ES2})\\ &=\min_{U_{ES2}}\Theta_{QC}(U_{ES1}, U_{ES2}). \endaligned \end{eqnarray} In terms of $F_{QC}(K_1,K_2)$ it can be written as \begin{eqnarray} \aligned F_{QC}(K_1, K_2)&=\max_{U_{ES1}}F_{QC}(U_{ES1}, U_{ES2})\\ &=\max_{U_{ES2}}F_{QC}(U_{ES1}, U_{ES2}). \endaligned \end{eqnarray} This can be seen as the counterpart of Uhlmann's purification theorem on quantum states~\cite{Uhlmann1976}(however the proof does not use Uhlmann's purification theorem~\cite{Yuan2017npj}). 
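As an illustrative numerical sketch (ours, not part of the original text), $C(U)$ and $\Theta_{QC}(U_1,U_2)$ for unitary channels can be evaluated directly from the eigen-angles. The snippet below assumes NumPy; it uses the smallest arc of the unit circle containing all eigen-angles, which reproduces $C(U)=\frac{\theta_{\max}-\theta_{\min}}{2}$ whenever $\theta_{\max}-\theta_{\min}\leq\pi$ and caps the result at $\pi/2$ otherwise.
\begin{verbatim}
import numpy as np

def eigen_angles(U):
    # eigenvalues of U are written as e^{-i theta_j}
    return -np.angle(np.linalg.eigvals(U))

def C(U):
    theta = np.sort(eigen_angles(U))
    # smallest arc of the unit circle containing all eigen-angles
    gaps = np.diff(np.append(theta, theta[0] + 2 * np.pi))
    arc = 2 * np.pi - gaps.max()
    return min(arc / 2, np.pi / 2)   # ||U||_g, capped at pi/2

def theta_QC_unitary(U1, U2):
    # Theta_QC(U1, U2) = C(U1^dagger U2); the fidelity is cos of this angle
    return C(U1.conj().T @ U2)
\end{verbatim}
For instance, for single-qubit rotations $U_j=e^{-i\theta_j\sigma_z/2}$ with $|\theta_1-\theta_2|\leq\pi$, this returns $|\theta_1-\theta_2|/2$.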
In Appendix~\ref{app-metric}, we show that $\Theta_{QC}(K_1, K_2)$ is a metric and can be computed directly from the Kraus operators of $K_1$ and $K_2$ as~\cite{Yuan2017npj} \begin{equation} \label{eq:dk} \Theta_{QC}(K_1, K_2)=\arccos \max_{\|W\|\leq 1}\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W), \end{equation} where $\lambda_{\min}(K_W+K^\dagger_W)$ denotes the minimum eigenvalue of $K_W+K^\dagger_W$ with $K_W=\sum_{ij}w_{ij}F_{1i}^\dagger F_{2j}$; here $F_{1i}$ and $F_{2j}$ denote the Kraus operators of $K_1$ and $K_2$ respectively, $w_{ij}$ denotes the $ij$-th entry of a $q\times q$ matrix $W$ with $\|W\|\leq 1$, where $\|\cdot\|$ is the operator norm corresponding to the maximum singular value, and $W$ arises from the non-uniqueness of the Kraus representations. Thus \begin{equation} \label{eq:F_QC1} F_{QC}(K_1,K_2)=\max_{\|W\|\leq 1}\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W). \end{equation} We emphasize that $F_{QC}$ is defined directly on quantum channels without referring to the states; such a direct definition, in contrast to the induced measures, is crucial when applying the fidelity to channel discrimination and quantum metrology, as we will show later. Furthermore, the fidelity can be formulated as a semi-definite program and computed efficiently as $\max_{\|W\|\leq 1}\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W)=$ \begin{eqnarray} \label{eq:sdp} \aligned &max \qquad \frac{1}{2}t \\ s.t.\qquad &\left(\begin{array}{cc} I & W^\dagger \\ W & I \\ \end{array}\right)\succeq 0,\\ & K_W+K^\dagger_W-tI \succeq 0. \endaligned \end{eqnarray} Analogous to the Bures distance on quantum states $B_S(\rho_1,\rho_2)=\sqrt{2-2F_S(\rho_1,\rho_2)}$, we can similarly define a Bures distance on quantum channels as $B_{QC}(K_1,K_2)=\sqrt{2-2F_{QC}(K_1,K_2)}$. In Appendix~\ref{app-compute}, we prove an intriguing and useful connection between $B_{QC}(K_1,K_2)$ and the minimum distances between the Kraus operators of $K_1$ and $K_2$ as \begin{eqnarray} \aligned \nonumber B_{QC}^2(K_1,K_2) =\min_{\{\tilde{F}_{1i}\},\{\tilde{F}_{2i}\}} \| \sum_{i} (\tilde{F}_{1i}-\tilde{F}_{2i})^\dagger (\tilde{F}_{1i}-\tilde{F}_{2i})\| \\ \endaligned \end{eqnarray} where $\{\tilde{F}_{1i}\},\{\tilde{F}_{2i}\}$ are the sets of all equivalent Kraus representations of $K_1$ and $K_2$ respectively. This connection is particularly useful in studying the scaling of the distance between quantum channels, as we will show later. In what sense is $F_{QC}(K_1,K_2)$ a fidelity function? It turns out that $F_{QC}(K_1,K_2)=\min_{\rho_{SA}}F_S[K_1\otimes I_A (\rho_{SA}), K_2\otimes I_A(\rho_{SA})]$. To see this, it is proved in the supplemental material of Ref.~\cite{Yuan2017npj} that \begin{eqnarray} \min_{\rho_{SA}}F_S[K_1\otimes I_A (\rho_{SA}), K_2\otimes I_A(\rho_{SA})] =\max_{\|W\|\leq 1 }\frac{1}{2}\lambda_{\min}(K_{W}+K^\dagger_{W}), \end{eqnarray} which coincides with Eq.~\eqref{eq:F_QC1}. From this relationship it is also immediately clear that $F_{QC}(K_1,K_2)$ is stable, i.e., $F_{QC}(K_1\otimes I,K_2\otimes I)=F_{QC}(K_1,K_2)$. This result gives an operational meaning to $F_{QC}(K_1,K_2)$. We emphasize that although we made connections between $F_{QC}(K_1,K_2)$ and the minimum fidelity of the output states, $F_{QC}(K_1,K_2)$ is defined directly on quantum channels and does not depend on the states.
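As a concrete illustration (ours; the computations reported later in this paper use the CVX package in Matlab), the semi-definite program in Eq.~\eqref{eq:sdp} can be prototyped with the Python package cvxpy. The sketch below assumes cvxpy with a solver that handles complex semi-definite constraints, and it pads the shorter Kraus list with zero operators so that both channels have $q$ Kraus operators.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def fidelity_qc(kraus1, kraus2):
    """F_QC(K1,K2) = max_{||W||<=1} 0.5*lambda_min(K_W + K_W^dagger)."""
    q = max(len(kraus1), len(kraus2))
    d = kraus1[0].shape[1]     # dimension the operators K_W act on
    F1 = list(kraus1) + [np.zeros_like(kraus1[0])] * (q - len(kraus1))
    F2 = list(kraus2) + [np.zeros_like(kraus2[0])] * (q - len(kraus2))
    W = cp.Variable((q, q), complex=True)
    t = cp.Variable()
    K_W = sum(W[i, j] * (F1[i].conj().T @ F2[j])
              for i in range(q) for j in range(q))
    constraints = [
        cp.bmat([[np.eye(q), W.H], [W, np.eye(q)]]) >> 0,  # ||W|| <= 1
        K_W + K_W.H - t * np.eye(d) >> 0,                  # lambda_min >= t
    ]
    cp.Problem(cp.Maximize(t / 2), constraints).solve()
    return t.value / 2
\end{verbatim}
The Bures angle on channels is then obtained as $\Theta_{QC}=\arccos F_{QC}$, matching Eq.~\eqref{eq:dk}.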
The definition and the operational meaning of $F_{QC}(K_1,K_2)$ play distinct roles in applications, the operational meaning provides a physical picture while the direct definition brings insights which enable or ease the proofs and computations, which will be demonstrated in the applications. This is in analogy to how fidelity of quantum states is connected to the classical fidelity $F_S(\rho_1,\rho_2)=\min_{\{E_i\}}F_C(p_1,p_2)$, here $F_C(p_1,p_2)=\sum_{i}\sqrt{p_{1i}}\sqrt{p_{2i}}$ denotes the classical fidelity with $p_{1i}=Tr(\rho_1E_i)$ and $p_{2i}=Tr(\rho_2E_i)$, $\{E_i\}$ denotes a set of Positive Operator Valued Measurements(POVM)\cite{Fuchs1996}, here similarly the fidelity between quantum states has the operational meaning as the minimum classical fidelity, however the fidelity between quantum states is defined directly on quantum states which is independent of the measurements and such direct definition has provided numerous insights which would be hindered with just the classical fidelity. It is known that the trace distance and the fidelity between quantum states have the following relationships\cite{Fuchs1999} \begin{equation} 1-F_S(\rho_1,\rho_2)\leq \frac{1}{2}\|\rho_1-\rho_2\|_1\leq\sqrt{1-F_S^2(\rho_1,\rho_2)}, \end{equation} from which it is straightforward to get the relationships between the diamond norm and the fidelity of quantum channels. This can be obtained by substituting $\rho_1=K_1\otimes I_A(\rho_{SA})$ and $\rho_2=K_2\otimes I_A(\rho_{SA})$, then optimizing over $\rho_{SA}$ \begin{eqnarray} \aligned \max_{\rho_{SA}}1-F_S[K_1\otimes I_A(\rho_{SA}),K_2\otimes I_A(\rho_{SA})]&\leq \max_{\rho_{SA}}\frac{1}{2}\|K_1\otimes I_A(\rho_{SA})-K_2\otimes I_A(\rho_{SA})\|_1\\ &\leq \max_{\rho_{SA}}\sqrt{1-F_S^2[K_1\otimes I_A(\rho_{SA}),K_2\otimes I_A(\rho_{SA})]}, \endaligned \end{eqnarray} which gives \begin{equation} 1-F_{QC}(K_1,K_2)\leq \frac{1}{2}\|K_1-K_2\|_{\diamond}\leq \sqrt{1-F_{QC}^2(K_1,K_2)}. \end{equation} Since $F_{QC}(K_1,K_2)$ can be computed directly from the Kraus operators, this also provides a way to bound the diamond norm using the Kraus operators. In \cite{Raginsky} the Choi matrices of the quantum channels are used to compute the fidelity between the channels, which corresponds to the fidelity between the output states of two quantum channels when the input state is taken as the maximal entangled state. As the maximal entangled state is in general not the optimal input state, the fidelity thus defined does not have operational meaning as the minimum fidelity of the output states, thus can not be related to the ultimate precision limit in quantum metrology etc(instead related to the precision limit when the probe state is taken as the maximally entangled state). \section{A unified framework for quantum metrology and perfect channel discrimination} Next we demonstrate the applications in quantum information science, in particular we show how the fidelity provides a unified platform for the ultimate precision in quantum metrology and the minimum number of uses needed for perfect channel discrimination. The task of quantum metrology, or quantum parameter estimation in general, is to estimate a parameter $x$ encoded in some channel $K_x$, this can be achieved by preparing a quantum state $\rho_{SA}$ and let it go through the extended channel $K_x\otimes I_A$ with the output state $\rho_x=K_x\otimes I_A(\rho_{SA})$. By performing POVM, $\{E_y\}$, on $\rho_x$ one gets the measurement result $y$ with probability $p(y|x)=Tr(E_y\rho_x)$. 
According to the Cram\'{e}r-Rao bound\cite{HELS67,HOLE82,CRAM46,Rao}, the standard deviation for any unbiased estimator of $x$ is bounded below by $\delta \hat{x}\geq \frac{1}{\sqrt{nJ_C[p(y|x)]}},$ where $\delta \hat{x}$ is the standard deviation of the estimation of $x$, $J_C[p(y|x)]$ is the classical Fisher information and $n$ is the number of times that the procedure is repeated. The classical Fisher information can be further optimized over all POVMs, which gives \begin{equation} \label{eq:J} \delta\hat{x}\geq\frac{1}{\sqrt{n\max_{\{E_y\}}J_C[p(y|x)]}}=\frac{1}{\sqrt{nJ_S(\rho_x)}}, \end{equation} where the optimized value $J_S(\rho_x)$ is usually called the quantum Fisher information\cite{HELS67, HOLE82,BRAU94,BRAU96}; to distinguish it, here we will call it the quantum state Fisher information. We first recall established connections between the fidelity functions and the Fisher information. Given $\rho_x$ and its infinitesimal state $\rho_{x+dx}$, for a given POVM $\{E_y\}$, the classical fidelity between $p(y|x)=Tr(E_y\rho_x)$ and $p(y|x+dx)=Tr(E_y\rho_{x+dx})$ is given by $F_C[p(y|x),p(y|x+dx)]=\sum_{y_i} \sqrt{p(y_i|x)}\sqrt{p(y_i|x+dx)}$, which defines an angle as $\cos \Theta_C[p(y|x),p(y|x+dx)]=F_C[p(y|x),p(y|x+dx)]$. The classical Fisher information is related to the classical fidelity as $\frac{1}{4}J_C[p(y|x)]dx^2=2-2F_C[p(y|x),p(y|x+dx)]$ up to second order in $dx$\cite{BRAU94}; this can also be written as \begin{equation} \label{eq:prob} J_C[p(y|x)]=\lim_{dx\rightarrow 0}\frac{4\Theta_C^2[p(y|x),p(y|x+dx)]}{dx^2}. \end{equation} If we optimize over $\{E_y\}$, the classical fidelity leads to the fidelity between quantum states as \cite{BRAU94} \begin{equation}\min_{\{E_y\}} F_C[Tr(E_y\rho_x), Tr(E_y\rho_{x+dx})]=F_S(\rho_x,\rho_{x+dx}),\end{equation} and the classical Fisher information leads to the quantum state Fisher information $J_S(\rho_x)=\max_{\{E_y\}}J_C[p(y|x)]$, with, up to second order in $dx$\cite{BRAU94,BRAU96}, \begin{eqnarray} \label{eq:qsf} \aligned \frac{1}{4}J_S(\rho_x)dx^2=2-2F_S(\rho_x,\rho_{x+dx}). \endaligned \end{eqnarray} If we denote $\cos \Theta_S(\rho_x,\rho_{x+dx})= F_S(\rho_x,\rho_{x+dx})$, then \begin{eqnarray} \label{eq:BJ} \aligned J_S(\rho_x)&=\lim_{dx\rightarrow 0}\frac{8[1-\cos \Theta_S(\rho_x,\rho_{x+dx})]}{dx^2}\\ &=\lim_{dx\rightarrow 0}\frac{4\Theta_S^2(\rho_x,\rho_{x+dx})}{dx^2}. \endaligned \end{eqnarray} The precision can be further improved by optimizing over the probe states, which leads to the ultimate local precision limit of estimating $x$ from $K_x$. Intuitively, this ultimate precision limit should be quantified by the distance between $K_x$ and its infinitesimal neighboring channel $K_{x+dx}$, in a way analogous to how the Bures distance of quantum states quantifies the precision limit of estimating $x$ from the state $\rho_x$\cite{BRAU94}. However, although much progress has been made on calculating the ultimate precision limit\cite{Fujiwara2008,Escher2011,Tsang2013,Rafal2012,durkin,Knysh2014,Jan2013,Rafal2014,Alipour2014}, such a clear physical picture had still not been established more than two decades after Braunstein and Caves's seminal paper\cite{BRAU94}, mainly due to the lack of proper tools on quantum channels. Here we show that the fidelity between quantum channels can be used to establish such a physical picture, which also leads naturally to a new Fisher information on quantum channels. Further optimizing over the probe states gives \begin{eqnarray} \aligned \max_{\rho_{SA}} \frac{1}{4}J_S(\rho_x)dx^2&=2-2\min_{\rho_{SA}}F_S(\rho_x,\rho_{x+dx})\\ &=2-2F_{QC}(K_x, K_{x+dx})\\ &=B_{QC}^2(K_x,K_{x+dx}), \endaligned \end{eqnarray} which leads naturally to a quantum channel Fisher information $J_{QC}(K_x)=\max_{\rho_{SA}}J_S(\rho_x)$ that is similarly related to the distance on quantum channels as \begin{eqnarray} \aligned \label{eq:qcf} J_{QC}(K_x)=&\lim_{dx\rightarrow 0}\frac{4B_{QC}^2(K_x,K_{x+dx})}{dx^2}\\ =&\lim_{dx\rightarrow 0}\frac{8[1-\cos \Theta_{QC}(K_x,K_{x+dx})]}{dx^2}\\ =&\lim_{dx\rightarrow 0}\frac{4\Theta_{QC}^2(K_x,K_{x+dx})}{dx^2}. \endaligned \end{eqnarray} The quantum channel Fisher information quantifies the ultimate precision limit upon optimization over the measurements and probe states, \begin{equation} \label{eq:channelpre} \delta \hat{x} \geq \frac{1}{\sqrt{nJ_{QC}(K_x)}}=\frac{1}{\sqrt{n}\lim_{dx\rightarrow 0}\frac{2\Theta_{QC}(K_x,K_{x+dx})}{|dx|}}. \end{equation} This connects the precision limit directly to the distance between quantum channels, which provides a clear physical picture for the ultimate precision limit. The scaling of the ultimate precision limit can now be seen as a manifestation of the scaling of the distances between quantum channels, as we now show. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{scheme.pdf} \caption{ (a) Parallel Scheme. (b) Sequential Scheme.} \label{fig:scheme} \end{figure} Two schemes for multiple uses of quantum channels are usually considered in quantum parameter estimation: the parallel scheme and the sequential scheme, as shown in Fig.~\ref{fig:scheme}. We will show that for both schemes the scaling of the distance between two quantum channels is at most linear, which underlies the scaling of the Heisenberg limit.
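As an illustrative numerical check (ours, not part of the original derivation), Eq.~\eqref{eq:qcf} suggests approximating the quantum channel Fisher information by a finite difference of the channel fidelity. The sketch below reuses the fidelity_qc routine from the earlier SDP sketch and assumes a user-supplied function kraus(x) returning the Kraus operators of $K_x$.
\begin{verbatim}
import numpy as np

def channel_fisher_information(kraus, x, dx=1e-4):
    """Finite-difference estimate of J_QC(K_x) based on Eq. (eq:qcf)."""
    F = fidelity_qc(kraus(x), kraus(x + dx))  # F_QC(K_x, K_{x+dx}) via the SDP
    theta = np.arccos(min(F, 1.0))            # Bures angle between the channels
    return 4.0 * theta ** 2 / dx ** 2         # J_QC ~ 4 Theta_QC^2 / dx^2
\end{verbatim}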
\begin{figure} \centering \begin{minipage}{.35\textwidth} \centering \includegraphics[width=.9\linewidth]{parallel.pdf} \caption{Parallel scheme with multiple uses of the channel.} \label{fig:parallel} \end{minipage}% \qquad \begin{minipage}{.34\textwidth} \centering \includegraphics[width=.9\linewidth]{parallelx.pdf} \caption{A unitary extension of the parallel scheme.} \label{fig:parallelu} \end{minipage} \end{figure} For parallel scheme with $N$ uses of a channel $K$ as shown in Fig.\ref{fig:parallel}, the total dynamics can be described by $K^{\otimes N}\otimes I_A$. If we denote $U_{ES}$ as one unitary extension of $K$, then $U_{ES}^{\otimes N}$ is a unitary extension of $K^{\otimes N}$ as shown in Fig.\ref{fig:parallelu}. Given two channels $K_1$ and $K_2$, we choose $U_{ES1}$ and $U_{ES2}$ as the unitary extension for $K_1$ and $K_2$ respectively which satisfies $\Theta_{QC}(K_1,K_2)=\Theta_{QC}(U_{ES1},U_{ES2})$. Now as $U_{ES1}^{\otimes N}$ and $U_{ES2}^{\otimes N}$ are unitary extensions of $K_1^{\otimes N}$ and $K_2^{\otimes N}$ respectively, we then have \begin{eqnarray} \label{eq:bound} \aligned \Theta_{QC}(K_1^{\otimes N},K_2^{\otimes N})&\leq \Theta_{QC}(U_{ES1}^{\otimes N}, U_{ES2}^{\otimes N}) \\ &=C[(U_{ES1}^\dagger U_{ES2})^{\otimes N}]\\ &\leq NC(U_{ES1}^\dagger U_{ES2})\\ &=N\Theta_{QC}(K_1,K_2). \endaligned \end{eqnarray} For the sequential scheme, we consider the general case that controls can be inserted between sequential uses of the channels. Any measurements that are used in the control can be substituted by controlled unitaries with ancillary systems, the controls interspersed between the channels can thus be taken as unitaries, which is shown in Fig.\ref{fig:seqsupp}. Parallel scheme can be seen as a special case of the sequential scheme by choosing the controls as SWAP gates on the system and different ancillary systems\cite{Rafal2014}. We show that with $N$ uses of the channel, the distance is still bounded above by $N\Theta_{QC}(K_1,K_2)$. \begin{figure} \centering \begin{minipage}{.7\textwidth} \centering \includegraphics[width=.9\linewidth]{sequential.pdf} \caption{Sequential scheme with multiple uses of the channel.} \label{fig:seqsupp} \end{minipage}% \qquad \begin{minipage}{.92\textwidth} \centering \includegraphics[width=.9\linewidth]{extensionSe.pdf} \caption{A unitary extension of the sequential scheme.} \label{fig:sequsupp} \end{minipage} \end{figure} We present the proof for the case of $N=2$, same line of argument works for general $N$. For $N=2$, one unitary extension of $U_2K_1U_1K_1$ is $U_2U_{E_2S1}U_1U_{E_1S1}$, similarly $U_2U_{E_2S2}U_1U_{E_1S2}$ is a unitary extension of $U_2K_2U_1K_2$, here $U_{E_jSi}$ denote a unitary extension of $K_i$, $i=1,2$, with $E_j$ as the environment. We can choose $U_{E_jSi}$ such that $\Theta_{QC}(K_1,K_2)=\Theta_{QC}(U_{E_jS1},U_{E_jS2})$, here all operators are understood as defined on the whole space so the multiplication makes sense, for example the control $U_1$, which only acts on the system and ancillaries, is understood as $U_1\otimes I_E$, an operator on the whole space including the environment. 
We then have \begin{eqnarray} \aligned &\Theta_{QC}(U_2K_1U_1K_1,U_2K_2U_1K_2)\\ \leq& \Theta_{QC}(U_2U_{E_2S1}U_1U_{E_1S1}, U_2U_{E_2S2}U_1U_{E_1S2}) \\ =&C[ U_{E_1S1}^\dagger U_1^\dagger U_{E_2S1}^\dagger U_2^\dagger U_2U_{E_2S2}U_1U_{E_1S2}]\\ =&C[U_{E_1S1}^\dagger U_1^\dagger U_{E_2S1}^\dagger U_{E_2S2}U_1U_{E_1S2}]\\ =&C[(U_{E_1S1}^\dagger U_1^\dagger) (U_{E_2S1}^\dagger U_{E_2S2}) (U_1U_{E_1S1}) (U_{E_1S1}^\dagger U_{E_1S2})]\\ \leq&C[U_{E_2S1}^\dagger U_{E_2S2}]+C[U_{E_1S1}^\dagger U_{E_1S2}]\\ =& 2\Theta_{QC}(K_1,K_2),\\ \endaligned \end{eqnarray} i.e., with two uses of the channel, the distance is bounded above by $2\Theta_{QC}(K_1,K_2)$. With the same line of argument it is easy to show that with $N$ uses of the channel the distance is bounded above by $N\Theta_{QC}(K_1,K_2)$. Substituting $K_1$ with $K_x$ and $K_2$ with $K_{x+dx}$, we have $\Theta_{QC}(NK_x,NK_{x+dx})\leq N\Theta_{QC}(K_x,K_{x+dx})$ for both schemes. From Eq.(\ref{eq:channelpre}) the ultimate precision limit is then bounded by \begin{eqnarray} \aligned \label{eq:Heisenberg} \delta \hat{x} &\geq \frac{1}{\lim_{dx\rightarrow 0}\frac{2\Theta_{QC}(K_x,K_{x+dx})}{\mid dx\mid}N\sqrt{n}} , \endaligned \end{eqnarray} where the scaling $1/N$ is called the Heisenberg scaling; as we have shown, it is just a manifestation of the fact that the distance between quantum channels can grow at most linearly with the number of channels. For $N$ uses of the channels under the parallel scheme we can also obtain a tighter bound as \begin{eqnarray} \label{eq:Nparallel} \aligned &2-2\cos \Theta_{QC}(K_1^{\otimes N},K_2^{\otimes N})\\ &\leq N\|2I-K_W-K_W^\dagger\|+N(N-1)\|I-K_W\|^2, \endaligned \end{eqnarray} where $K_{W}=\sum_{i=1}^q\sum_{j=1}^qw_{ij}F_{1i}^\dagger F_{2j}$ is as previously defined, and the inequality holds for any $W$ with $\|W\|\leq 1$ (see Appendix~\ref{app-upper-bound-parallel}). In the asymptotic limit, $N(N-1)\|I-K_W\|^2$ is the dominating term; in that case we would like to choose a $W$ minimizing $\|I-K_W\|$ to get a tighter bound. This can be formulated as a semi-definite program with $\min_{\|W\|\leq 1} \|I-K_W\|=$ \begin{eqnarray} \label{eq:sdpN} \aligned &\min \qquad t \\ s.t.\qquad &\left(\begin{array}{cc} I & W^\dagger \\ W & I \\ \end{array}\right)\succeq 0,\\ &\left(\begin{array}{cc} tI & (I-K_W)^\dagger \\ I-K_W & tI \\ \end{array}\right)\succeq 0. \endaligned \end{eqnarray} If we let $K_1=K_x$ and $K_2=K_{x+dx}$, then Eq.(\ref{eq:Nparallel}) provides bounds on the scalings in quantum parameter estimation, which are consistent with the studies in quantum metrology\cite{Fujiwara2008,Escher2011,Rafal2012,Jan2013,Rafal2014} but apply in a more general context here (see also Ref.~\cite{Yuan2017npj}). Next we show how these tools unify quantum parameter estimation and perfect quantum channel discrimination\cite{Acin01,Duan2007,Duan2008,Cheng2012, ChiribellaDP08,DuanFY09,Harrow2010}. Given two quantum channels $K_1$ and $K_2$, they can be perfectly discriminated with one use of the channels if and only if there exists a $\rho_{SA}$ such that $K_1\otimes I_A(\rho_{SA})$ and $K_2\otimes I_A(\rho_{SA})$ are orthogonal, i.e., $\min_{\rho_{SA}}F_S[K_1\otimes I_A(\rho_{SA}),K_2\otimes I_A(\rho_{SA})]=0$, which is the same as $\Theta_{QC}(K_1,K_2)=\frac{\pi}{2}$. When $K_1$ and $K_2$ cannot be perfectly discriminated with one use of the channel, a finite number of uses may be able to achieve the task\cite{DuanFY09}.
This is in contrast to the perfect discrimination of non-orthogonal states, which always requires an infinite number of copies. The minimum number of uses needed for perfect channel discrimination should satisfy $\Theta_{QC}(NK_1,NK_2)=\frac{\pi}{2}$. Perfect channel discrimination is thus determined by the distances between quantum channels, and the scalings of $\Theta_{QC}(NK_1,NK_2)$ obtained before can be used to determine the minimum $N$. For example, from $\Theta_{QC}(NK_1,NK_2)\leq N\Theta_{QC}(K_1,K_2)$ we can obtain a lower bound on $N$ as \begin{equation} \label{eq:lowerN} N\geq \lceil\frac{\pi}{2\Theta_{QC}(K_1,K_2)}\rceil, \end{equation} where $\lceil x\rceil$ is the smallest integer not less than $x$. This bound is tighter than existing bounds for noisy channels\cite{Cheng2012}, and for unitary channels it reduces to the formula which is known to be tight\cite{Acin01}. For noisy channels under the parallel scheme we can also substitute $\Theta_{QC}(K_1^{\otimes N},K_2^{\otimes N})=\frac{\pi}{2}$ into the inequality (\ref{eq:Nparallel}) to get a tighter bound. The lower bound on the minimum $N$ can also be obtained via a connection to quantum metrology. Given two channels $K_1$ and $K_2$, let $K_x$, $x\in [a,b]$, be a path connecting $K_1$ and $K_2$. With $N$ uses of the channel under the parallel strategy we have $\sqrt{J_{QC}(K_x^{\otimes N})}=\lim_{dx\rightarrow 0}\frac{2\Theta_{QC}(K_x^{\otimes N},K_{x+dx}^{\otimes N})}{\mid dx\mid}$. From the triangle inequality, \begin{eqnarray} \label{eq:connect} \aligned \Theta_{QC}(K_1^{\otimes N},K_2^{\otimes N})&\leq \int_a^b \lim_{dx\rightarrow 0}\frac{\Theta_{QC}(K_x^{\otimes N},K_{x+dx}^{\otimes N})}{dx} dx\\ &=\frac{1}{2}\int_a^b \sqrt{J_{QC}(K_x^{\otimes N})} dx. \endaligned \end{eqnarray} This connects perfect channel discrimination to the ultimate precision limit. By choosing different paths, various useful lower bounds on the minimum number of uses for perfect channel discrimination can be obtained. For example, consider $K_0(\rho)=e^{i\theta\sigma_1}\rho e^{-i\theta\sigma_1}$ and $K_1(\rho)=\frac{1+\eta}{2}\rho+ \frac{1-\eta}{2}\sigma_3 \rho\sigma_3$, where $\sigma_1, \sigma_2$ and $\sigma_3$ are Pauli matrices, with $\theta=0.3$ and $\eta=0.5$. For the parallel strategy the lower bound given by Eq.(\ref{eq:lowerN}) is $N\geq \lceil\frac{\pi}{2\Theta_{QC}(K_0,K_1)}\rceil=3$. If we choose a simple path $K_x=(1-x)K_0+xK_1$, $x\in[0,1]$, which is a line segment connecting $K_0$ to $K_1$, then with the connection provided by Eq.(\ref{eq:connect}) we obtain $N\geq 4$. Other paths may be explored to further improve the bound. By using the inequality (\ref{eq:Nparallel}) with the $W$ obtained from the semi-definite program that minimizes $\|I-K_W\|$, we get $N\geq 5$. For any $N$ we can also choose the $W$ to minimize $N\|2I-K_W-K_W^\dagger\|+N(N-1)\|I-K_W\|^2$; it turns out that the minimum $N$ such that $\min_{\|W\|\leq 1} N\|2I-K_W-K_W^\dagger\|+N(N-1)\|I-K_W\|^2\geq 2$ is $6$, thus $N\geq 6$. For comparison we also explicitly computed the actual distance $\Theta_{QC}(K_0^{\otimes N},K_1^{\otimes N})$ as $N$ increases; it turns out that the minimum $N$ such that $\Theta_{QC}(K_0^{\otimes N},K_1^{\otimes N})=\frac{\pi}{2}$ is actually $6$. All computations here are done with the CVX package in Matlab\cite{CVX}. \section{Summary} A fidelity function defined directly on quantum channels is provided, which leads to various distance measures defined directly on quantum channels, as well as a new Fisher information on quantum channels.
This forms another hierarchy for fidelity functions and Fisher information, as shown in the table: \begin{displaymath} \xymatrix{ F_C(p_1,p_2) \ar[d] \ar[r] & F_S(\rho_1,\rho_2) \ar[d] \ar[r] & F_{QC}(K_1,K_2) \ar[d] \\ \Theta_C(p_1,p_2) \ar[d] \ar[r] & \Theta_S(\rho_1,\rho_2) \ar[d] \ar[r] & \Theta_{QC}(K_1,K_2) \ar[d] \\ J_C[p(y|x)] \ar[r] & J_S(\rho_x) \ar[r] & J_{QC}(K_x) } \end{displaymath} where $\cos \Theta_i=F_i$ and $J_i=\lim_{dx\rightarrow 0}\frac{4\Theta_i^2}{dx^2}$, $i\in \{C,S,QC\}$. In this table, the functions on quantum states are equal to the optimized value, over all measurements, of the corresponding functions on probability distributions, and the functions on quantum channels are equal to the optimized value, over all probe states, of the corresponding functions on quantum states. This framework quantitatively connects the ultimate precision limit and the distance between quantum channels, which provides a clear physical picture for the ultimate precision limit in quantum metrology. It also provides a unified framework for the continuous case in quantum parameter estimation and the discrete case in perfect quantum channel discrimination; with this framework, progress in one field can be readily used to stimulate progress in the other. We expect these tools will find wide applications in many other fields of quantum information science.
\section{Introduction} Generative adversarial networks (GANs) \cite{goodfellow2014generative} are a subclass of generative models that enable anyone to generate photo-realistic images with a single click of a button \cite{karras2019style}. Some have heralded this innovation as the end of design, whereby machine intelligence will replace human creation \cite{slate}. However, a larger contingent considers these new generative technologies as yet another tool in an artist's toolkit, which offers new expression with its novel affordances \cite{hertzmann2018can}. Since the application of generative models does not require formal artistic training or technical expertise, these models can serve as scaffolding to generate large possibility spaces that can be embedded in casual creator systems \cite{compton2015casual}. Recent platforms such as RunwayML, GANBreeder, GANPaint, and DeepAngel have already started to use the new medium of GANs for casual creation. The key challenge for GAN-based casual creators is designing systems that ``supports a state of creative flow'' - whereby users and the generative models can co-create new artifacts in a collaborative, coordinated and organic dialogue, towards the idea of \textit{mixed-initiative co-creativity}.~\cite{yannakakis2014mixed,acharya2019building}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{chomgrid2.png} \caption{Schematic for interpolation. The yellow images in the four corners represent the praying mantis, a boston terrier, a pufferfish and poodle categories respectively, which we call generation zero ($G_0$) ganimals, the images in the four outer mid-points in blue are hybrids of two $G_0$ ganimals which we call $G_1$ ganimals, and the center image is a $G_2$ ganimal, which is a combination of all four $G_0$ ganimals.} \label{fig:my_label} \end{figure} We introduce one such casual creator system, Meet the Ganimals, that allows users to selectively create new artificial hybrid species by interpolating between categories modeled by BigGAN \cite{brock2018large}. Trained on images with 1000 categorical labels, BigGAN embeds each category in a high-dimensional latent space. This space can be smoothly traversed such that images of mixed categories can be synthesized via interpolating the categories. Figure~\ref{fig:my_label} presents examples of images generated from single and mixed categories. The original BigGAN model was trained on 1000 categories, and we restrict BigGAN to the 396 animal categories. The goal of constraining the model to animal categories is to focus the experience on discovering and breeding hybrid animals –- what we call ganimals. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{system-map.png} \caption{System map of the Meet the Ganimals Platform } \label{fig:map} \end{figure*} Unlike professional creative systems which provide users with many precise tools to craft artifacts directly, Meet the Ganimals is a simple interface designed to promote the exploration of a vast possibility space. The possibility space includes three generations of ganimals; 396 $G_0$ ganimals that correspond directly to the ImageNet animal categories, 78,210 $G_1$ ganimals from hybrid pairs of $G_0$ ganimals, and 3,058,362,945 $G_2$ ganimals come from hybrid quadruples of $G_0$ ganimals. The possibility space is even larger when accounting for variations that come from truncation inputs and random seeds. With such a large possibility space, it is difficult to find the near-superlative ganimals e.g. 
the cutest, creepiest, most memorable ganimals. Meet the Ganimals confronts this seemingly intractable search problem with two innovations. First, Meet the Ganimals simplifies exploration into a two-part creation interface: Users generate a large number of $G_1$ ganimals to find the ones they like best, and then they breed the chosen $G_1$ ganimals into $G_2$ ganimals. Second, instead of randomly combining categories to generate $G_1$ ganimals in this first stage, the system instead balances exploring new permutations, and exploiting previously popular permutations, as indicated from crowd signals from other users. These innovations build on earlier interfaces where user-generated landmarks serve as navigation elements in a large parametrically-defined possibility space \cite{talton2008collaborative,harding2018biomorpher}. \section{Related Work} Several recently developed platforms explore how GANs serve as scaffolding for autotelic creativity. RunwayML provides a simple interface such that non-technical artists can use state-of-the-art neural network models (e.g. style transfer and super resolution) in their work. GANPaint is a scene drawing tool that allows users to add or remove trees, grass, and other natural features with a simple click \cite{bau2018gan}. DeepAngel is an online tool that removes objects from images users upload via masking and generative inpainting \cite{groh2019human}. While such functionality exists in Photoshop (e.g. content-aware fill), DeepAngel provides a one-click interface, sidestepping the technical skills required for photo-editing. Other related platforms have explored creating collaborative media via crowd signals and genetic algorithms. The R/Place experiment on Reddit allowed users to collectively recolor pixels of a dynamically changing image \cite{rappaz2018latent}. Electric Sheep generates procedural animations using crowd signals in an evolutionary algorithm \cite{draves2005electric}. PicBreeder demonstrated how pictures could be evolved collaboratively to rapidly explore a possibility space and proliferate fascinating discoveries while promoting individual exploration \cite{secretan2008picbreeder,secretan2011picbreeder}. ArtBreeder is an example of a casual creator built on interpolating GANs \cite{simon2019ganbreeder}. From the perspective of a user, ArtBreeder is a platform for creating new artworks by blending existing images. Meet the Ganimals builds on these projects to combine GAN scaffolding with collective feedback while focusing on the domain of hybrid animals. \section{System Overview} Meet the Ganimals is designed with modern UI/UX paradigms built on BigGAN to serve as a mixed-initiative co-creation tool whereby users are both creators and consumers of the possibility space. From April 20th to May 20th, 51,110 ganimals were generated, and 10,587 ganimals were bred by 4,392 users. The system map for the platform is shown in Figure~\ref{fig:map}. \subsection{Random Stimulus for Exploring Possibility Space} In \textit{The Book of Imaginary Beings}, Jorge Luis Borges wrote about his compilation of created creatures, saying that ``the book $\cdots$ is not meant to be read straight through; rather, we should like the reader to dip into these pages at random, just as one plays with the shifting patterns of a kaleidoscope'' \cite{borges2002book}. Echoing Borges, this system leverages the idea of the random stimulus principle of lateral thinking to offer a stochastic exploration of the possibility space \cite{beaney2005imagination}. 
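To make the blending operation concrete, the sketch below shows how a single $G_1$ ganimal could be synthesized by interpolating two ImageNet class vectors. It is our illustration using the open-source pytorch-pretrained-biggan package, not the platform's production code, and the category names are placeholders that must match that package's ImageNet vocabulary.
\begin{verbatim}
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                        truncated_noise_sample)

model = BigGAN.from_pretrained('biggan-deep-256')

# Two G0 "parents": one-hot class vectors for two ImageNet animal categories.
parents = one_hot_from_names(['Boston bull', 'puffer'], batch_size=2)
hybrid = 0.5 * parents[0] + 0.5 * parents[1]   # G1 hybrid: blend the labels

truncation = 0.4
noise = truncated_noise_sample(truncation=truncation, batch_size=1)

with torch.no_grad():
    image = model(torch.from_numpy(noise),
                  torch.from_numpy(hybrid[None, :]),
                  truncation)                  # (1, 3, 256, 256), values in [-1, 1]
\end{verbatim}
A $G_2$ ganimal can then be produced in the same way by blending two such hybrid class vectors.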
In the ``Discover 'Em'' page, users are shown ganimals randomly generated using a bandit algorithm that balances exploration of an unseen possibility space with the popularity of the discovered space. In particular, $G_1$ ganimals are generated and presented to users according to one of four selection procedures: (1) 30\% of the time, the ganimal is generated using a carefully feature-engineered stochastic process that samples pairs that we as designers found to be compelling (``recipe-based exploration''); (2) 30\% of the time, by uniformly sampling two animal categories at random to breed (``uniform exploration''); (3) 30\% of the time, by randomly sampling two animal categories to breed, stratified by species (``stratified exploration''); and (4) 10\% of the time, by sampling a ganimal from the top-rated ganimals, proportional to its position in the leaderboard for one randomly chosen characteristic among cute, creepy, realistic, or memorable (``leaderboard exploitation''). For the recipe-based exploration, we carefully curated an
ad-hoc generative process that we found created high-quality ganimals. In particular, we defined five cores, i.e., sets of conceptually similar ImageNet categories that are well-suited for blending, covering aquatic, canine, bird, megafauna, and wildcard categories. We then randomly blend these cores in order to create diversity in the resulting ganimals. For stratified exploration, we uniformly sample a pair of animal species and sample an ImageNet category that corresponds to each species. For the majority of categories, there is a one-to-one correspondence between ImageNet categories and species. However, there are 118 categories of dogs (Canis lupus familiaris). The stratified exploration downsamples the frequency of dogs relative to the frequency with which dogs appear in ImageNet categories, to promote diversity in the kinds of ganimals created. Users can curate $G_1$ ganimals and blend $G_1$ ganimals to create their own $G_2$ ganimal, which can be named and given its own unique hyperlink. This process supports the creative flow that allows users to efficiently explore the system's possibility space, view a diverse array of combinations, and add their own creativity to that of the system to create their own artifacts. Users feel a sense of pride and ownership over the ganimals they create, which they have shown by sharing discoveries on social media (see Figure~\ref{fig:tweets}). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{tweets.png} \caption{Selected screenshots of tweets from (anonymized) users sharing the ganimals they have discovered and named.} \label{fig:tweets} \end{figure} \subsection{Towards a Citizen Science of GAN Subjectivity} From identifying exoplanets \cite{zink2019catalog} and Christmas birds \cite{birds} to detecting changes in climate \cite{scistarter} and coral reef coverage \cite{raoult2016gopros}, citizen science projects have been a core part of engaging the public in global scientific operations. Ultimately, participants engage in these projects because it is of individual interest to participate in a public (and therefore social) network \cite{lukyanenko2019citizen}, and because the projects are aesthetically pleasing and easy to use \cite{bonney2009citizen}. Meet the Ganimals has no metric for success, efficiency, or productivity, and there is no way for a user to demonstrate technical artistic or design skills. Instead, users are motivated by naming privileges for unseen ganimals and by general curiosity. As such, Meet the Ganimals is well suited for casual citizen science. In the ``Catalogue 'Em'' page, users have the opportunity to take on the role of citizen scientists (i.e. ``casual'' scientists) to help answer scientific questions about the ganimals: Do ganimals with canine morphological features look cuter than the rest \cite{kaminski2019evolution}? Does curation decrease as the underlying animal categories diverge in evolutionary time \cite{miralles2019empathy}? Do descendants of charismatic megafauna emerge as the most popular \cite{bennett2017conservation}? To explore such questions, users can recount their subjective and emotional perspectives of the ganimals, as well as annotate their morphological features. For morphology features, users can annotate whether or not the ganimals have a head, eyes, a mouth, a nose, legs, hair, scales, feathers, live underwater, or are bigger than a house cat.
For subjective perspectives, users can annotate how much compassion and empathy they feel towards the ganimals, as well as how cute, memorable, realistic and creepy they are. This process allows for a deeper understanding of how animal morphologies relate to subjective perception. In particular, we can correlate subjective evaluations of ganimals with their other characteristics, such as crowd-annotated morphology features or the number of ``dog'' categories present within that ganimal. As a preliminary analysis, we find that ganimals that contain at least one dog are statistically significantly cuter than those that do not, and that ganimals that contain at least one insect are statistically significantly less cute than those that do not (see Figure~\ref{fig:ploz}). Such knowledge can inform the design of future generative algorithms that use crowdsourced labels to surface maximally compelling artifacts. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Rplot04.pdf} \caption{Perceived cuteness rating of ganimals with or without specific categories. Images are exemplary ganimals from that set.} \label{fig:ploz} \end{figure} When users take on the role of an explorer in the realm of ganimals, they not only become a creator, but also take on a role as a participant in the scheme of a broader investigation. By departing from the traditional role of a data-driven ``scientist'' who seeks a conclusion to a hypothesis, Meet the Ganimals empowers the exploration-driven creator: casual citizen scientists do not participate in the context of accomplishing a specific task or aim, and are largely not driven by a particular hypothesis. This unique dynamic can thereby leverage discovery to collect in-the-field results in a host of different ecosystems. \section{Random World Assignment to Explore Local Ecologies} \par In line with the Music Lab experiment \cite{salganik2006experimental}, a final ingredient of the Meet the Ganimals system is the random assignment of each participant to a ``world,'' with its own local ecology that evolves independently from those of other worlds. Each world is initialized with a fixed ``seed set'' of 100 ganimals, randomly selected using the random stimulus approach discussed above. Users assigned to a particular world then interact with this seed set plus the ganimals discovered and bred by users assigned to that world. In the ``Feed Em'' page, users can feed the ganimals they like the best. Well-fed ganimals are promoted and remain in this view, while unfed ganimals disappear. In addition, the fourth selection procedure from the bandit algorithm (``leaderboard exploitation'') only pulls ganimals from the world corresponding to that user. The design of the ``Feed Em'' page can differ across worlds, allowing cross-world comparisons to serve as an A/B test for how different UX/UI patterns affect emergent ecologies. For example, one might ask how the layout of the ``Feed Em'' page (e.g. a linear feed-like view versus a more spatial ecological view) changes the resulting diversity of the ganimals in that world.\footnote{A deep dive into the experimental design and measurement approach of such a research question is beyond the scope of this paper, which focuses on the overview of the casual creator itself. However, interested readers can learn more by reading the pre-analysis plan here: https://aspredicted.org/65nv7.pdf}
The random assignment of users to different worlds that evolve independently provides a virtual laboratory to compare behavior and curation across worlds, and causally assess the impact of design interventions. \section{Discussion} GAN architectures force two computational agents, the generator and discriminator, to compete against each other with the goal of creating a statistical model resembling the training data. Casual creators built on GAN architectures introduce a third agent, a casual human collaborator, into the loop to explore the most intriguing parts of latent space. With a simple interface for creating and curating images of hybrid AI generated animals, users are motivated to engage in computational creativity for no other reasons than their own curiosity and the chance to name their creations and discoveries. The autotelic motivation drives the interactions within the casual creator and as a result, the system provides insights into what intrigues people \cite{yannakakis2014mixed}. In many creative endeavors, the production and consumption of artifacts are separated, which can lead to undifferentiated production and passive consumption. Human-in-the-loop casual creators built upon GANs are a new medium that blends production and consumption of media into a singular creative process. While Meet the Ganimals focuses on generating images of hybrid animals in particular, it is but one of a growing number of casual creators built on interpolating the GAN latent space of other cultural artifacts. Beyond animals, artifacts as varied as facial expressions, architectural landmarks and fashion are emerging domains where GANs could serve as scaffolding for casual creators \cite{zhu2020domain}, paving the way for new forms of human-AI collaboration. With a well-constrained casual creator, the frontiers of the GAN latent space are within reach. \bibliographystyle{iccc}
\section{Introduction} Autonomous vision aboard UAVs has grown to an important research area \cite{zhu2018visdrone,menouar2017uav,lygouras2019unsupervised,mishra2020drone}. Next to traffic surveillance \cite{fan2020visdrone,du2018unmanned} and agriculture \cite{tsouros2019review}, also the field of search and rescue (SaR) has been tackled \cite{varga2021seadronessee,mishra2020drone}. However, while several works focus on path-planning and mission implementation \cite{bevacqua2015mixed,hayat2020multi,mayer2019drones}, few works address the actual vision part, necessary for autonomously searching certain areas. Finding interesting regions on the sea is a hard problem, since objects of interest are often not known a priori or have a vast variety of different appearances which is why supervised methods often fail in these scenarios. Even if object categories are known beforehand, current methods focus on object detection, which is not viable for large image resolutions and real-time (for rigor defined here to be $>$25FPS) performance on embedded hardware \cite{cazzato2020survey}. Both constraints occur in reliable SaR missions. Furthermore, labeled data sets in these environments are scarce as the data acquisition process is a complicated undertaking, requiring strict safety regulations for all subjects, and is expensive \cite{seadronesseedataacqui}. Instead, it is considerably easier and cheaper to obtain raw data of sea surfaces. What is more, often a low bandwidth, possibly due to large distances or suboptimal weather conditions, does not allow for the whole footage being transmitted to a ground station. This becomes especially severe in maritime scenarios, where the drone is far away from any ground station \cite{avalon,larus}. While compression can be done on-board, it is often not sufficient and furthermore results in image quality loss across the whole image, i.e. also possibly quality loss in regions of the image that need to be analyzed more thoroughly to exclude false positives or negatives. Recently, special purpose video codecs that allow few regions of an image to be coded with near constant picture quality have been proposed to tackle this problem \cite{steinert2022architecture}. The high quality regions that are transmitted can subsequently be combined with an actual classical object detection system on a ground station with much more hardware resources. This methodology separates the problem into two stages: generating few high-recall regions of interest of a high dimensional image in real-time in a low resource environment and classifying these regions into known classes on a ground station with more resources. Motivated by these observations, in this work, we formulate and formalize the former problem and propose an autoencoder-based future frame prediction model that generates meaningful regions of interest on sea surfaces which can run in real-time on an embedded GPU. Owing to the nature of maritime environments, we show that classical methods perform poorly due to dynamic backgrounds, wave movements, sun reflections and others while modern methods are too slow. As this method is a type of anomaly detection method, it does not require bounding box annotations. We introduce a metric that measures the recall at a given amount of footage being transmitted and show that this method outperforms classical methods on multiple benchmarks and metrics. 
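To make this evaluation target concrete, the sketch below gives one plausible instantiation of such a recall-at-transmission-budget metric (our illustration only; the exact definition is provided with the benchmark and its evaluation server introduced below). It assumes each proposed region comes with a confidence score, selects regions greedily until a fixed fraction of the frame area is used, and counts a ground-truth object as recalled if its center lies inside a selected region.
\begin{verbatim}
import numpy as np

def recall_at_budget(gt_boxes, proposals, scores, frame_area, budget=0.05):
    """Fraction of ground-truth boxes covered by regions transmitted within
    an area budget (one plausible instantiation, not the official metric)."""
    order = np.argsort(-np.asarray(scores))      # most confident regions first
    selected, used = [], 0.0
    for i in order:
        x1, y1, x2, y2 = proposals[i]
        area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        if used + area > budget * frame_area:
            break
        selected.append((x1, y1, x2, y2))
        used += area
    def recalled(box):
        cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        return any(a <= cx <= c and b <= cy <= d for a, b, c, d in selected)
    if len(gt_boxes) == 0:
        return 1.0
    return float(np.mean([recalled(b) for b in gt_boxes]))
\end{verbatim}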
To train the proposed model, we capture over 60 minutes of 4K video footage with several cameras depicting the sea surface from different angles and altitudes at different days and waters. We provide a webserver where we host the Maritime Anomaly Detection Benchmark, where researchers can upload their predictions, which will be evaluated on the server side to allow for a fair comparison. \begin{figure*} \begin{tabular}{c} \centering \includegraphics[trim=0 0 0 0,clip,width=.9\textwidth]{images/autopipeline_new.png} \end{tabular} \caption{Best viewed digitally. Future frame prediction autoencoder pipeline. The frames $F_1,\dots,F_n$ are concatenated and input into the autoencoder, which learns to predict $F_{n+1}$ via $\hat{F}_{n+1}$ and the $L_1$ loss. The error frame (absolute difference between the two), $D_{n+1}$, is concatenated to the last $D_j,\dots , D_n$. Then ,frame momentum and local noise reducer are applied until a final grid is put on the resulting error frame to yield final regions of interests. } \label{fig:autoencoder_img} \end{figure*} Our contributions are as follows: \begin{itemize} \item We formulate a novel problem of obtaining high-recall regions of interest in a high-resolution and real-time scenario and propose a future frame prediction autoencoder to detect these regions in real-time on an embedded GPU. \item We capture over 60 minutes of video footage of the sea surface in various conditions as train set for our method and make it publicly available\footnote{\url{https://seadronessee.cs.uni-tuebingen.de/}}. We host a web server and propose the Maritime Anomaly Detection Benchmark with upload options. \item We analyze the proposed method and compare it to traditional and modern methods on two large-scale public data sets. \end{itemize} \subsection{Related Work} \subsubsection{Computer Vision in Maritime Environments} Airborne maritime data sets are scarce and mostly focus on synthetic aperture radar satellite imagery and ships \cite{airbus-ship,chen2020fgsd,wang2019sar,zhang2021sar}. \cite{marques2015unmanned,varga2021seadronessee,lygouras2019unsupervised} provide UAV-based maritime detection data sets. While the data set in \cite{lygouras2019unsupervised} features only stock photos scraped from the internet, the Seagull data set \cite{marques2015unmanned} and SeaDronesSee \cite{varga2021seadronessee} provide video material with objects of interest. Of these data sets, only Seagull provides frames that do not contain objects. However, the videos suffer from heavy lens distortion and distortion caused by a rolling shutter. We collect 60 minutes of video footage ($>$100000 frames) depicting the sea surface in various altitudes at different angles and days with multiple cameras. We weakly annotate the footage such that no objects of interest are visible in any of the frames. \subsubsection{UAV-based Detection} Rudimentary vision methods in SaR scenarios aboard a UAV are done in \cite{scherer2015autonomous}, using color, text or shape cues using OpenCV \cite{bradski2000opencv} to detect objects of interest. Similarly, \cite{rudol2008human} use Haar features to detect objects of interest in SaR missions. Among the learning-based methods \cite{ferreira2020ship} consider the application of ship detection, classifying images into positives (containing ships) and negatives. While it is also an unsupervised method, they ignore the localization. \cite{lygouras2019unsupervised} describe a complete SaR system from path planning over detection to action. 
However, their detection system is a basic YOLO variant unsuitable for large resolutions and real-time. Furthermore, it is restricted to the objects it is trained on. Generally, all literature regarding supervised UAV object detection can be considered related \cite{fan2020visdrone,zhu2018visdrone,du2018unmanned,varga2021seadronessee,xia2018dota,kiefer2021leveraging,price2018deep}, albeit not viable, since they do not work in real-time large-resolution scenarios on embedded hardware. \subsubsection{Region proposal networks} Selective search \cite{uijlings2013selective,van2011segmentation} generates many thousand little informative boxes for use of an object detector at a later stage, which makes it inapplicable in embedded environments. Common region proposal networks \cite{zhong2019anchor} are used in two-stage object detectors, such as Faster R-CNN \cite{ren2015faster}, but require bounding box supervision to be learned. Methods for weakly supervised object detection often employ region proposal networks \cite{tang2018weakly}, but require image-level annotations. Background subtraction methods \cite{benezeth2010comparative} are used to separate the background from the foreground, which is defined by the scene captured by a static camera. Most of the methods are not suitable for dynamically changing scenes caused by camera and background movement. Furthermore, these methods do not focus on obtaining meaningful bounding box locations but only on the obtained segmentation maps. (Video) Anomaly detection methods \cite{deecke2018image,sultani2018real,nguyen2019anomaly} learn on normal samples and detect anything previously not seen as anomalies. In images, this is often done for industrial parts \cite{staar2019anomaly,roth2021towards}, and in static videos for surveillance in traffic and crowded scenes \cite{saligrama2012video,zhao2017spatio,zhou2019anomalynet,liu2018future}. Earlier methods only focus on classifying images or frames \cite{ferreira2020ship,liu2018future}, while newer methods also consider localizing anomalies \cite{li2021cutpaste,Szymanowicz_2022_WACV}. However, these methods either ignore the temporal dimension or are not suitable for real-time use. Furthermore, video anomaly methods are designed for static scenes. Our work focuses on dynamic scenes and requires models running in real-time on embedded hardware. Furthermore, the focus is on generating high recall meaningful bounding box regions that potentially contain objects of interest. \section{Method} We are given a high resolution (e.g. 4K) video stream depicting the sea surface. Furthermore, we have an embedded GPU (e.g. Nvidia Xavier AGX). The task is to select regions of interest in every frame which are to be transmitted down (e.g. via a streaming FPGA \cite{steinert2022architecture}). Each region is defined via four bounding box locations, describing the corners of the region in pixels (similar to classical object detection). Depending on the exact use case, the remaining regions are either also transmitted with lower quality or completely omitted. We propose an autoencoder-based future frame prediction architecture to detect anomalies (See Fig. \ref{fig:autoencoder_img}). We train a shallow autoencoder on sequences of normal images $F_1,\dots,F_n$ depicting the sea surface such that the model learns to predict the next normal frame. Subsequently, the predicted frame $\hat{F}_{n+1}$ is subtracted from the original next frame $F_{n+1}$. 
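For illustration, this training objective can be summarized in a minimal sketch (assuming PyTorch, which we use here purely for exposition; the number of layers, channel widths, input resolution and optimizer settings below are illustrative placeholders rather than our exact configuration):
\begin{verbatim}
# Minimal sketch of the future frame prediction objective (assumes PyTorch).
# Layer count, channel widths, resolution and optimizer are illustrative.
import torch
import torch.nn as nn

class ShallowAE(nn.Module):
    def __init__(self, n_frames=4, base=16):
        super().__init__()
        # encoder: strided 3x3 convolutions on n RGB frames stacked
        # along the channel dimension
        self.enc = nn.Sequential(
            nn.Conv2d(3 * n_frames, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base // 2, 3, stride=2, padding=1), nn.ReLU())
        # decoder: symmetric transposed convolutions back to one RGB frame
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base // 2, base, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1))

    def forward(self, x):            # x: (B, 3*n_frames, H, W)
        return self.dec(self.enc(x))

model = ShallowAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1 = nn.L1Loss()

# one training step: predict F_{n+1} from F_1,...,F_n (anomaly-free clips)
past = torch.rand(2, 12, 216, 384)   # toy batch: 4 stacked RGB frames
target = torch.rand(2, 3, 216, 384)  # the actual next frame F_{n+1}
loss = l1(model(past), target)       # L1 future frame prediction loss
loss.backward(); opt.step(); opt.zero_grad()

# at test time, the per-pixel error frame D_{n+1} feeds the later stages
with torch.no_grad():
    error_frame = (model(past) - target).abs().mean(dim=1)   # (B, H, W)
\end{verbatim}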
The hypothesis is that the autoencoder fails to reconstruct objects that differ from the sea surface in their colors, shapes and textures (e.g. see Fig. \ref{fig:arearecall} \& \ref{fig:manyimgs}). Furthermore, by incorporating the last few frames, the autoencoder learns temporal correlations of water movements. Common future frame prediction networks employ large networks, such as UNet \cite{liu2018future,szymanowicz2022discrete} or even larger models \cite{yu2020efficient}. They operate on small video resolutions and are not suitable for employment on embedded hardware. Applying models in real-time on embedded hardware and for high resolutions requires us to fall back to shallow autoencoder architectures. We follow the basic principle of an encoder-decoder architecture, but only employ small channel dimensions for the filters as these make up for a large computational overhead. We refrain from using depth-wise separable convolutions \cite{howard2017mobilenets} or more advanced methods \cite{lebedev2014speeding}, since they are not optimized for embedded GPUs. For the first layer, we concatenate the past $n$ frames along the channel dimension and apply a regular 2D convolution with filter dimension $n\times 4$, kernel size $3\times 3$ and stride $2$. We perform the same convolution six times, while halving the channel dimension each time due to performance. The decoder performs the symmetric operations via deconvolutions. \begin{figure} \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{ccc} (a) Raw & (b) no momentum & (c) momentum\\ \includegraphics[trim=0 0 0 0,clip,width=.32\textwidth]{images/img_real_frameframe4166.jpg} & \includegraphics[trim=0 0 0 25,clip,width=.32\textwidth]{images/img_3_pred.jpg} & \includegraphics[trim=0 0 0 25,clip,width=.32\textwidth]{images/img_3_pred4.jpg}\\ (d) Raw cut-out & (e) no LNR & (f) LNR\\ \includegraphics[trim=0 0 0 1,clip,width=.32\textwidth]{images/img_4110gt_cutt.jpg}& \includegraphics[width=.32\textwidth]{images/img_4110prednolocal.jpg}& \includegraphics[width=.32\textwidth]{images/img_4110gt_localnoiseremoved.jpg} \end{tabular} \caption{Qualitative errors on Seagull (top) and SeaDronesSee (bottom). Note that the random noise at the bottom left of (c) almost vanished.} \label{fig:localnoise} \label{fig:frame-moment} \end{figure} We hypothesize that reconstruction errors due to wave patterns and sun reflections are more temporally unstable than actual anomalies. Thus, we propose to include an error frame momentum term, which averages over the past $n$ error frames $D_1,...,D_n$. This assumes that the camera movement is not too quick as then, the actual anomalies also move quickly in the image plane, eliminating the error frame momentum effect. However, for frame rates of roughly 30, this is negligible. Figure \ref{fig:frame-moment} (c) and Section \ref{sec:experiment} show the advantage of using this component. To counteract the local noise induced by an imperfect reconstruction coming from sun reflections and wave patterns, we introduce a local noise remover (LNR). Channel-wise, we multiply each pixel of the error frame by its immediate vertical and horizontal surrounding neighbour. We repeat this procedure three times. This ensures that only regions of larger error areas are detected as anomalies (as opposed to noisy areas) in subsequent steps. This can be seen as a morphological operation, however also different, since we do not use a structuring element \cite{zhuang1986morphological}. See how the waves are eliminated in Fig. 
\ref{fig:localnoise} (f), while the boat is amplified. \begin{figure} \begin{tabular}{@{}c@{\hspace{.99cm}}c@{}} \includegraphics[trim=0 20 0 0,clip,width=.99\textwidth]{images/horizon_cutter_complete.png}\\ \includegraphics[trim=0 0 0 0,clip,width=.49\textwidth]{images/horizon_cutter_real_img.jpg} \includegraphics[trim=0 0 0 0,clip,width=.49\textwidth]{images/horizoncutterwithaltitudeangle.jpg} \end{tabular} \caption{Illustration of the horizon cutter (top) and predictions (bottom). The curvature is just for visualization purpose (ignored for computation).} \label{fig:horizon_cutter} \end{figure} Importantly, this method is sensitive to regions above the horizon. Therefore, we leverage meta data from the UAV's on-board sensors to determine the horizon line in open water. This allows us to ignore this region in the autoencoder training and inference phase which, in turn, results in more robust anomaly detection performance and faster inference times. Notably, this computation has virtually no overhead. The horizon line can be computed using the UAV's height, camera gimbal pitch and roll angle, and the camera intrinsics. Ignoring the effect of atmospheric refraction, we can estimate the distance to the horizon as a function of the height of the observer as $d=3.57h^{1/2}$. This approximation is fairly accurate for heights that are typical for SaR-UAVs (far below 1000m) \cite{bohren1986altitude}. We furthermore ignore the curvature of the earth, which is also negligible for these heights. We compute the angle $\alpha$ to the horizon via $\alpha=\arcsin (h/d)$. Using the focal length $f$ (in pixels) and the camera gimbal pitch $\beta$, we can then compute the camera perspective projection on the image plane, which yields the height offset $o$ in pixels to the horizontal center line of the image plane as $o=\tan (|\alpha-\beta|) \cdot f \cdot \sign (\alpha-\beta)$. Naturally, we truncate $o$ to be within the range of the number of horizontal pixels. To account for the roll angle $\gamma$ of the UAV (or camera gimbal), we can simply add a roll angle induced offset at the left and subtract at the right of the image given as $o_r=\tan (\gamma)\cdot p_w/2$, where $p_w$ is the pixel width of the video. While the horizontal pixel location o is an approximation, it is quite robust to errors in the altitude h. Since an exact error analysis is not in the scope of this work, we just report values for altitudes that are common in the data set SeaDronesSee. For $\beta = 0 - 20^\circ, h < 300m$ it holds that $10m$ in altitude error results in approx. $1px$ offset change in a 4K image. However, $o$ is very sensitive to errors in the gimbal angle $\beta$ . For example, for $h = 130m, \beta = 16^\circ$, $1^\circ$ in angle error results in approx. $40px$ offset change. Therefore, it is essential to have a well-calibrated gimbal and UAV IMU. The latter can be accurate up to $0.1^\circ$ when configured properly \cite{suzuki2016precise}. See Figure \ref{fig:horizon_cutter} for an illustration. Empirically, we show that the horizon cutter performs well despite the occurrence of nearby land. We manually annotate the horizon line for a subset of the SeaDronesSee-Tracking validation set and compute the pixel offset error and the roll angle $\gamma$ error. Table \ref{table:empiricalhorizon} shows that despite some land mass blocking the horizon (also see Fig. \ref{fig:horizon_cutter}), the error is negligible. 
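For reference, the horizon computation can be sketched as follows (an illustrative implementation of the formulas above; we assume all angles are given in degrees, that the pitch $\beta$ is measured against the horizontal, and we convert $d$ to meters, since $3.57\sqrt{h}$ yields kilometers for $h$ in meters; the truncation of $o$ to the image extent is omitted):
\begin{verbatim}
# Illustrative horizon line computation from UAV meta data.
# Angles in degrees; h in meters; d = 3.57*sqrt(h) is in kilometers.
import math

def horizon_offsets(h, beta_deg, gamma_deg, f_px, img_w):
    """Return (o, o_r): vertical offset of the horizon from the image
    center line and the roll-induced left/right offset, both in pixels."""
    d = 3.57 * math.sqrt(h) * 1000.0      # distance to the horizon [m]
    alpha = math.asin(h / d)              # angle to the horizon [rad]
    beta = math.radians(beta_deg)         # camera gimbal pitch [rad]
    o = math.tan(abs(alpha - beta)) * f_px * math.copysign(1.0, alpha - beta)
    o_r = math.tan(math.radians(gamma_deg)) * img_w / 2.0
    return o, o_r

# e.g. h = 130 m, pitch 16 deg, no roll, focal length 2800 px, 4K width
print(horizon_offsets(130.0, 16.0, 0.0, 2800.0, 3840))
\end{verbatim}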
\begin{table} \caption{Error to the ground truth horizon as measured by roll angle $\gamma$ and pixel offset $o$ in a 4K image.} \begin{tabular}{c|c|c|c} Errors of & $\gamma$ & $o$ & $o/2160$ \\ \hline horizon visible (5\%) & $0.8^\circ$ & $71$px & $3.3\%$\\ horizon not visible (95\%) & -- & $2$px & $0.1\%$\\ total & -- & $5.45$px & 0.3\% \\ \end{tabular} \label{table:empiricalhorizon} \end{table} Lastly, we apply a grid of size $m_1\times m_2$ on the error frame and for every grid window we average over the error frame pixels contained in it. We select the maximum number of boxes outputted given a certain bandwidth, which we simply break down in $p\%$ of the area of the whole frame. \section{Data Set Generation and Webserver} To test our approach, we gathered 60 minutes of 4K video footage on open water at three different days with three different cameras. We made sure to include altitudes and viewing angles from $5-120m$ and $0-90
^\circ$. We manually filtered out the sequences that contained objects considered anomalous, such as humans, boats, life jackets and buoys. Each frame is annotated with its corresponding meta data information, such as altitude, all angles of the UAV principal axes, camera gimbal pitch angle, time, GPS and others. This data, called {\bf OpenWater}, comes along with over 20 minutes of bounding box annotated footage in open water where we annotated the same classes as in SeaDronesSee, serving as anomalies. Everything but the test annotations will be uploaded to avoid researchers from overfitting. We will upload the test videos, on a web server and propose the Weakly Supervised Maritime Anomaly Detection Benchmark where researchers can upload their predictions, which will be evaluated and published on the server side for fair comparisons. \begin{figure} \centering \includegraphics[trim=0 0 0 0,clip,width=.9\textwidth]{images/arearecall.png}\\ \includegraphics[trim=0 0 0 0,clip,width=.9\textwidth]{images/arearecall_seagull.png} \caption{Area recall curves for SeaDronesSee and Seagull. AR$\hat{=}$avg. recall.} \label{fig:arearecall} \end{figure} \section{Experiments} \label{sec:experiment} For our experiments, we choose a grid size of $48 \times 27$, predict the fifth frame from the past four, and use an error frame momentum of two. Influences of the components are discussed in further sections. \begin{figure*} \centering \begin{tabular}{@{\hspace{.45cm}}c@{\hspace{.45cm}}c@{}} \rotatebox{90}{\ \ Image} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_real_frame0.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_real_frame1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_real_frameframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_real_frameframe136.jpg} \\ \rotatebox{90}{\ \ MF box} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Naive_Baselinebbox5.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Naive_Baselinebbox1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Naive_Baselinebboxframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Naive_Baselinebboxframe136.jpg} \\ \rotatebox{90}{\ \ FD box} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Frame_Differencingbbox7.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Frame_Differencingbbox1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Frame_Differencingbboxframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Frame_Differencingbboxframe132.jpg} \\ \rotatebox{90}{\ \ GMM box} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Gaussian_Mixturebbox5.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Gaussian_Mixturebbox1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Gaussian_Mixturebboxframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Gaussian_Mixturebboxframe136.jpg} \\ \rotatebox{90}{\ \ Auto err} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoder0.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoder1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_diff_imgframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_diff_imgframe136.jpg} \\ \rotatebox{90}{\ \ 
Auto box} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoderbbox0.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoderbbox1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoderbboxframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_Autoencoderbboxframe136.jpg} \\ \rotatebox{90}{\ \ Recon} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_predicted_frame0.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_predicted_frame1276.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_predicted_frameframe274.jpg} \includegraphics[trim=0 0 0 0,clip,width=.22\textwidth]{images/img_predicted_frameframe136.jpg} \\ \end{tabular} \caption{Qualitative results for mean filter (MF), frame differencing (FD), Gaussian mixture model (GMM) and the autoencoder (Auto) on SeaDronesSee (left two columns) and Seagull (right two columns). For Auto, we plot the error heat map and the reconstructed image (Recon).} \label{fig:manyimgs} \end{figure*} As we operate on high resolution videos and in real-time scenarios, we compare to three methods commonly used for background subtraction and anomaly detection: Mean filter (MF) \cite{zhang2012object}, frame differencing (FD) \cite{mohamed2010background} and Gaussian mixture model (GMM) \cite{zivkovic2004improved}. For GMM we use three Gaussians. We extend every method with the grid component and employ the horizon cutter. We evaluate on the following datasets (see also Fig. \ref{fig:manyimgs}): \begin{itemize} \item We use our {\bf Open Water} data set as the training set for SeaDronesSee as the latter does not consist of frames without objects. It consists of $>100000$ frames of open water captured on multiple days with three different 4K video cameras. Similar to SeaDronesSee, we provide precise meta annotations and manually annotate bounding boxes with the same classes as SeaDronesSee for each of the frames. \item We experiment on the multi-object tracking track of {\bf SeaDronesSee}. It depicts humans, boats and other objects in open water (incl. bboxes) serving as our anomalies. The frames are of 4K resolution and each frame is annotated with precise meta data information. It is challenging since it can contain multiple objects scattered around the frames and due to its diversity in the altitude and angle of view distribution, resulting in different objects' appearances and sizes. \item The {\bf Seagull} data set features video data showing boats, ships, life rafts and other objects from a fixed wing UAV. It also features video clips containing no objects. The latter serve as our training set for the Seagull test set. The videos are of Full HD resolution and have a heavy lens distortion and distortion caused by a rolling shutter. See also Figure \ref{fig:manyimgs} for examples. \end{itemize} We measure the recall given a certain bandwidth (percentage of the video frame transmitted averaged over all frames). Therefore, we consider as evaluation metric $R^{p\%}$, which is the recall over all frames, given that at most $p\%$ of the image may be transmitted. Each region to be transmitted must be encoded by a rectangular bounding box. We consider an object to be correctly detected if there is an overlap with the predicted region of at least 50\%, which is common for aerial object detection \cite{zhu2018vision,varga2021seadronessee,du2018unmanned}. 
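For clarity, the evaluation metric can be sketched as follows (an illustrative implementation; we interpret the $50\%$ overlap as the fraction of the ground truth box that is covered by a predicted region, boxes are given in pixel coordinates $(x_1,y_1,x_2,y_2)$, and the restriction to at most $p\%$ of transmitted area is assumed to be enforced by the region proposer):
\begin{verbatim}
# Illustrative computation of the recall R^{p%}.  The region proposer is
# assumed to have already limited its boxes to at most p% of the frame area.
def covered_fraction(gt, pred):
    """Fraction of the ground truth box covered by a predicted region.
    Boxes are (x1, y1, x2, y2) in pixels."""
    ix = max(0.0, min(gt[2], pred[2]) - max(gt[0], pred[0]))
    iy = max(0.0, min(gt[3], pred[3]) - max(gt[1], pred[1]))
    area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return ix * iy / area if area > 0 else 0.0

def recall_at_p(frames, thresh=0.5):
    """frames: iterable of (gt_boxes, pred_boxes) pairs, one per frame."""
    hits = total = 0
    for gt_boxes, pred_boxes in frames:
        for gt in gt_boxes:
            total += 1
            if any(covered_fraction(gt, pr) >= thresh for pr in pred_boxes):
                hits += 1
    return hits / total if total else 0.0
\end{verbatim}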
Furthermore, we report the average recall ($AR$) averaged over 10 equidistant percentages $p$ from $p=0.05$ to $p=0.95$, denoted $AR$. \subsection{Anomaly Detection Performance} Fig. \ref{fig:arearecall} shows $AR$ and the recall values for all the methods for the 10 transmission percentages $p$, while we interpolate in between. The autoencoder consistently outperforms the baselines for all values of $p$ and for AR. However, the difference is especially visible for low values of $p$, which is the primary use case in this application scenario \cite{steinert2022architecture}. For example, for $p=5\%$ the autoencoder achieves $70.1\%$ and $40.0\%$ recall for SeaDronesSee and Seagull, respectively, which is over $15$ resp. $32$ percent points more than the best baseline. Subsequently, we focus on the case of low $p$. \begin{table} \caption{Average $L_1$ recon. error within boxes and outside for $p=5\%$.} \begin{tabular}{c|c|c|c|c|c|c} &\multicolumn{3}{c|}{SeaDronesSee}&\multicolumn{3}{c}{Seagull} \\ & $err_{b}$ & $err_{r}$ & $\Delta_r$ &$err_{b}$ & $err_{r}$ & $\Delta_r$ \\ \hline MF & 78.3 & 0.3 & 78.0 & 0.37 & 0.3 & 0.07\\ FD & 34.7 & 2.5 & 32.2 & 1.7 & 0.3 & 1.4\\ GMM & 4.3 & 0.2 & 4.1 & 0.3 & 0.2 & 0.1\\ \bf Auto & 79.5 & 0.2 & \bf 79.3 & 2.2 & 0.2 & \bf 2.0\\ \end{tabular} \label{table:errorreconstruction} \vspace{-5mm} \end{table} We report the average reconstruction errors $err_{b}$ (average $L_1$ reconstruction error within ground truth boxes), $err_{r}$ (average $L_1$ reconstruction error rest) and their differences $\Delta_r$ in Table \ref{table:errorreconstruction}. The autoencoder yields higher $\Delta_r$, which shows its ability to discriminate better between normal and anomalous regions. Notably, the values for SeaDronesSee are generally much higher than for Seagull due to Seagull's lower image quality and higher blurriness (see Figure \ref{fig:manyimgs}). \begin{table} \caption{Autoencoder ablation experiment on SeaDronesSee. } \label{tab:ablation} \centering \begin{tabular}{c|c|c|c|c} Future Frames & -- & \checkmark & \checkmark & \checkmark \\ Local Noise Remover & -- & -- & \checkmark & \checkmark \\ Frame Momentum & -- & -- & -- & \checkmark \\ \hline $R^{p=5\%}$ & 60.3 & 66.2 & 68.6 & \bf 70.1 \\ \end{tabular} \end{table} Table \ref{tab:ablation} analyzes the influence of different components. When using future frames, we take the past four frames to predict the fifth. For frame momentum, we use the past two frames. It shows that using future frames yields the greatest benefits. All components improve the performance. We analyze the influence of the horizon cutter on the performance of the autoencoder. As only SeaDronesSee incorporates meta data, we perform experiments on this data set. Only 5\% of all frames actually show the horizon. Therefore, we restrict the influence of the horizon cutter to only that portion, as it does not have any on the other part. Remarkably, the autoencoder with horizon cutter achieves 86.3\% recall whereas it only achieves 47.0\% without. As the autoencoder only is trained on frames of open water, the different image statistics of the sky skew the image reconstruction error on these parts which expectedly results in a loss in performance. Both experiments used $p = 5$\%. So far, we considered the case where we only have access to normal frames as training data. However, often we are given some labeled training data. 
Thus, we propose to use an adversarial training objective where we maximize the prediction penalty of the autoencoder within ground truth boxes and minimize it everywhere else. That way, the model is punished for learning to reconstruct actual anomalies. We evaluate this strategy by comparing it to its naive counterpart, i.e. not backpropagating the loss within boxes. We compare these two approaches by training on the SeaDronesSee tracking train set and testing on the SeaDronesSee tracking test set. For $p=5\%$, ignoring the boxes yields a recall of $65.7\%$, in contrast to $67.2\%$ with the adversarial loss. \subsection{Obtaining Fewer Bounding Boxes} Aside from the restriction of choosing at most $p\%$ of the frame, which may be imposed due to a potentially low bandwidth, another restriction may come from common video codecs' inability to process a large number of regions of interest. Therefore, another type of restriction on a region proposer may be the number of regions it yields. Thus, we propose to merge regions of interest touching each other at corners using Suzuki's border following method \cite{suzuki1985topological}. As this may yield a larger than allowed area to be transmitted, each resulting box is ranked based on its reconstruction error. This leads to fewer and larger bounding boxes at the expense of a potentially lower recall. \begin{table} \caption{Fewer bounding boxes (\#B) at the cost of reduced recall on SeaDronesSee.} \label{table:fewerbboxes} \centering \begin{tabular}{c|c|c|c|c} & \multicolumn{2}{c|}{Not Merging}&\multicolumn{2}{c}{Merging} \\ Method & $\#$B & $R^\text{p=5\%}$ &$\#$B & $R^\text{p=5\%}$ \\ \hline MF \cite{zhang2012object} & \bf 65 & 54.8 & 7 & 49.6\\ FD \cite{mohamed2010background} & \bf 65 & 55.0 & 8 & 53.2\\ GMM \cite{zivkovic2004improved} & \bf 65 & 0.2 & \bf 5 & 0.1\\ \bf Auto & \bf 65 & \bf 70.1 & 6 & \bf 64.3\\ \end{tabular} \end{table} Table \ref{table:fewerbboxes} shows the number of boxes \#B and the recall for the standard and the merging method for $p=5\%$. Note that without merging we have the same number of bounding boxes for all the methods since we allow $5\%$ of the area of the image to be transmitted. We can substantially decrease the number of bounding boxes at the cost of a slightly lower recall. We note that this also highly depends on the anomaly distribution, since for clustered anomalies it is easier to merge bounding boxes (see Fig. \ref{fig:manyimgs}). \subsection{Running Times} Finally, we consider the running times of the individual methods on embedded hardware. We deploy them on an NVIDIA Xavier \cite{nvidiaxavier} mounted on a DJI Matrice 100. We transform all methods into optimized engines using TensorRT \cite{vanholder2016efficient}, set the Xavier to MAX-N mode, and report the running times averaged over 1000 frames. Table \ref{table:fpstable} shows the speed comparison between traditional and modern (U-NET \cite{liu2018future}, CFLOW \cite{gudovskiy2022cflow}) methods. The much simpler baselines run in real-time, while the modern methods are slow. For completeness, we replaced our architecture with the popular UNet architecture and trained it on SeaDronesSee using halved resolution and filter dimensions (more did not fit into a 3090Ti w/ $24$GB). Interestingly, its performance trailed that of our method (78.1 AR). This led us to the conjecture that the high resolution is crucial in this application, which makes sense if we consider that many objects are only $\approx 20$px in size.
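For reference, the timing procedure can be sketched as follows (an illustrative, PyTorch-level sketch; the conversion to TensorRT engines is not shown, and the model, resolution and iteration counts are placeholders):
\begin{verbatim}
# Illustrative FPS measurement (PyTorch level; TensorRT conversion omitted).
import time
import torch

@torch.no_grad()
def measure_fps(model, resolution=(2160, 3840), n_frames=1000, warmup=50):
    """Average frames per second for a model consuming 4 stacked RGB frames."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.rand(1, 12, *resolution, device=device)
    for _ in range(warmup):              # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_frames):
        model(x)
    torch.cuda.synchronize()             # wait for all queued GPU work
    return n_frames / (time.perf_counter() - start)
\end{verbatim}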
\begin{table} \caption{Running times in FPS. Bold values depict real-time methods.} \label{table:fpstable} \centering \begin{tabular}{c|c|c|c|c|c|c} & MF & FD & GMM & U-NET & CFLOW & Auto\\ \hline 1K & \bf{64} & \bf{70} & \bf{35} & \textcolor{red}{8} & \textcolor{red}{12} & \bf{48} \\ 4K & \bf{50}& \bf{62} & \textcolor{red}{17} & \textcolor{red}{1}& \textcolor{red}{3}& \bf{27}\\ \end{tabular} \end{table} \section{Conclusion and Outlook} We formulated the novel problem in maritime SaR of finding relevant regions of interest in a low-resource real-time and high-resolution scenario. We show that an autoencoder-based future frame prediction model is a promising direction even in a resource constrained setting. The benchmark is publicly available and we hope that the field of maritime SaR will be advanced by means of fast neural networks in the future. \newpage \bibliographystyle{IEEEtran}
\section{Introduction} Recall that a Motzkin path is a lattice path from $(0,0)$ to $(n,0)$ which does not fall below the horizontal axis, and uses only the {\em ascents} $(1,1)$, the {\em descents} $(1,-1)$ or the {\em level steps} $(1,0)$. (Other authors use the terminology ``up steps'', ``down steps'', and ``horizontal steps'' -- here we follow the terminology in \citet[page 319]{flajolet09analytic}.) Each such path is uniquely described by the sequence $\gamma_n=(\eps_1,\dots,\eps_n)$ with $\eps_j\in\{0,\pm 1\}$ which determines the directions of consecutive steps along the vertical axis. The cardinality of the set $\calM_n$ of all Motzkin paths, known as the Motzkin number and denoted by $M_n$, is related to the Catalan numbers $C_k$ by the formula \[ M_n=\sum_{k=0}^{\lfloor n/2 \rfloor}\bin{n}{2k}C_k, \mbox{ where } C_k = \frac1{k+1}\bin{2k}k. \] \begin{figure}[H] \begin{tikzpicture}[scale=.8] \draw[->] (0,0) to (0,3); \draw[->] (0,0) to (11,0); \draw[-,thick] (0,0) to (1,0); \draw[-,thick] (1,0) to (2,1); \draw[-,thick] (2,1) to (3,0); \draw[-,thick] (3,0) to (4,1); \draw[-,thick] (4,1) to (5,1); \draw[-,thick] (5,1) to (6,2); \draw[-,thick] (6,2) to (7,1); \draw[-,thick] (7,1) to (8,0); \draw[-,thick] (8,0) to (9,1); \draw[-,thick] (9,1) to (10,0); \node[below] at (1,0) { $1$}; \node[below] at (2,0) { $2$}; \node[below] at (3,0) { $3$}; \node[below] at (4,0) { $4$}; \node[below] at (5,0) { $5$}; \node[below] at (6,0) { $6$}; \node[below] at (7,0) { $7$}; \node[below] at (8,0) { $8$}; \node[below] at (9,0) { $9$}; \node[below] at (10,0) { $10$}; \end{tikzpicture} \caption{ \label{Fig1} Motzkin path $\gamma_{10}=(0,1,-1,1,0,1,-1,-1,1,-1)$, drawn as a linear interpolation. For this path, the number of level steps $L_{10}(1)=2$ and the number of ascents $A_{10}(1)=4$.} \end{figure} Consider now a random Motzkin path $\gamma_n$ selected uniformly from the set $\calM_n$ of all Motzkin paths of length $n$. Then $(\eps_1,\dots,\eps_n)$ becomes a sequence of random variables whose distribution coincides with that of a random walk with independent increments taking values $-1,0,1$ with probability $1/3$ each, conditioned on staying in the upper quadrant and landing at $0$ at time $n$. The general result of \citet{kaigh76invariance} specialized to this setting implies that the random process $$ \left(\frac{\sqrt{3}}{\sqrt{2n}}\sum_{k=1}^{\lfloor nt\rfloor}\eps_k\right)_{t\in[0,1]} $$ converges in distribution to the Brownian excursion $(B_t^{ex})_{t\in[0,1]}$. Recall that the Brownian excursion is a nonhomogeneous Markov process with explicit transitions which can be interpreted as the Brownian bridge conditioned to stay strictly positive until time $t=1$. See for example \citep{revuz99continuous,yen13local} and the references therein for more background. Here we take a closer look, and are in particular interested in the asymptotic behavior of the three components that constitute a random Motzkin path: the counting process $\{A_n(t)\}_{t\in[0,1]}$ of the ascent steps, the counting process $\{D_n(t)\}_{t\in[0,1]}$ of the descent steps, and the counting process $\{L_n(t)\}_{t\in[0,1]}$ of the level steps. That is, for each Motzkin path $\gamma_n=(\eps_1,\dots,\eps_n)$ write \[ \eps^+_j = 1_{\ccbb{\eps_j = 1}},\quad \eps^-_j = 1_{\ccbb{\eps_j =-1}},\quad \delta_j = 1_{\ccbb{\eps_j = 0}} \] and consider three stochastic processes: $$ A_n(t)=\sum_{k=1}^{\floor{nt}}\eps^+_k,\quad D_n(t)=\sum_{k=1}^{\floor{nt}}\eps^-_k, \quad L_n(t)=\sum_{k=1}^{\floor{nt}} \delta_k.
$$ See Figure \ref{Fig1} for an illustration. Clearly, $A_n(t)+D_n(t)+L_n(t)=\floor{nt}$ and the above mentioned consequence of \citet{kaigh76invariance} can be rephrased as $$\frac{1}{\sqrt{2n}}\left(A_n(t)-D_n(t)\right)_{t\in [0,1]}\toD \frac{1}{\sqrt{3}}(B_t^{ex})_{t\in[0,1]}$$ as $n\to\infty$. Here, $\toD$ stands for the weak convergence in $D([0,1])$ with Skorohod topology. Our main result is the following component-wise description of the above convergence. \begin{theorem}\label{T1} The finite-dimensional distributions of the $\RR^3$-valued process $$ \frac{1}{\sqrt{2n}}\left(A_n(t)-\frac{nt}{3}, L_n(t)-\frac{nt}{3},D_n(t)-\frac{nt}{3}\right)_{t\in[0,1]} $$ converge to the finite-dimensional distributions of $$\left(\frac{1}{2\sqrt{3}}B_t^{ex}+\frac{1}{6}B_t, -\frac{1}{3}B_t, \frac{1}{6}B_t-\frac{1}{2\sqrt{3}}B_t^{ex}\right)_{t\in[0,1]} , $$ where $(B_t)_{t\in[0,1]}$ is a Brownian motion, $(B_t^{ex})_{t\in[0,1]}$ is a Brownian excursion, and the processes $(B_t)_{t\in[0,1]}$ and $(B_t^{ex})_{t\in[0,1]}$ are independent. \end{theorem} This result can be established by different methods. For example, a probabilistic approach is sketched in Section \ref{sec:proba}. The main purpose of this paper is, by proving Theorem \ref{T1}, however to demonstrate a method that was recently introduced in our investigation \citep{bryc17asymmetric,bryc17limit} of asymmetric simple exclusion process (ASEP) with open boundary \citep{derrida06matrix}. One of the key ideas therein is to establish an identity connecting the Laplace transform of the statistics of interest, essentially the moment generating functions of the particles, to the expectation of a functional of a certain inhomogeneous Markov process with explicit transition density functions. The expectation, in the form of an integral representation, makes it possible to compute the asymptotic Laplace transform and then to characterize the limit distribution, although the computation in \citep{bryc17limit} is quite involved. It is not surprising that the same idea can be applied to the Motzkin-path model considered here, as intrinsic connections between Motzkin paths and ASEP have been well known and explored in earlier research (e.g.~\citep{blythe09continued,brak06combinatorial,brak04asymmetric,corteel07tableaux,corteel11matrix,woelki13parallel}). In particular, the model on Motzkin paths that we considered here is simpler in the sense that the corresponding identity between the generating functions of the counting processes, and the so-called {\em free Brownian motion}, an inhomogeneous Markov process, is more straightforward (see Propositions \ref{P1} and \ref{P3} below) than in the ASEP example. Once the identity is established, the asymptotic limit is then obtained by a straightforward calculation, which is also simpler than in the ASEP example. \begin{remark} The result in Theorem \ref{T1} itself might be known, although we could not find a reference. A similar phenomenon as in Theorem \ref{T1} has been described and explained for the steady state of ASEP with open boundary by Derrida, Enaud and Lebowitz \citep[Section 2.5]{derrida04asymmetric}, where the fluctuations of height functions can also be decomposed into linear combinations of a Brownian motion and a Brownian excursion, the two being independent. \end{remark} The paper is organized as follows. 
In Section \ref{Sect-Sulanke}, for pedagogical purposes, we give a simple integral representation for the generating function of the level steps, and prove that the finite-dimensional distributions of $\frac{1}{\sqrt{2n}}(3L_n(t)-nt)_{t\in[0,1]}$ converge to the finite-dimensional distributions of the Brownian motion $(B_t)_{t\in[0,1]}$. In Section \ref{Sect:Proofs} we derive a more general integral representation for the joint generating functions that is needed for the proof of Theorem \ref{T1}. In Section \ref{Sec:RM} we collect some additional comments and remarks. \section{Warmup: fluctuations of level steps}\label{Sect-Sulanke} The proof that the counting process of level steps $(L_n(t))_{0\leq t\leq 1}$ is asymptotically a Brownian motion relies on fewer technicalities, so we present it separately as an introduction to our approach. The proof of Theorem \ref{T1} presented in Section \ref{Sect:Proofs} is self-contained and covers this case. Recall our notation $\delta_k$ for the indicators of the level steps, and consider the probability generating function $$ \varphi(\vv u)=\sum_{\gamma_n\in\calM_n}\prod_{j=1}^n u_j^{\delta_j}, \quad \vv{u}=(u_1,\dots,u_n), $$ for the locations of level steps. We have the following integral representation for $\varphi(\vv u)$ that uses the Wigner semicircle law $(2\pi)^{-1}\sqrt{4-y^2}dy$ supported on $[-2,2]$. \begin{proposition}\label{P1} For $n=1,2,\dots$, \begin{equation} \label{GenF0} \varphi(\vv u)=\frac{1}{2\pi}\int_{-2}^2 \prod_{j=1}^n(u_j+y)\sqrt{4-y^2}dy. \end{equation} \end{proposition} \begin{proof} Each Motzkin path of $n$ steps decomposes uniquely into a Dyck path of $2k$ steps (with ascents and descents only) and $n-2k$ level steps. Thus, $\gamma_n\in\calM_n$ partitions the set $\{1,\dots,n\}$ into the set $S$ of non-level steps and its complement $S^c$, where the level steps occur. If the cardinality $|S|$ of $S$ is $2k$, then there are in total $C_k$ different Dyck paths over $S$. This gives \[ \varphi(\vv u)=\sum_{\substack{S\subset\{1,\dots,n\}\\ |S|\in2\NN}} C_{|S|/2}\prod_{j\not\in S}u_j. \] Since the even moments of the semicircle law are the Catalan numbers, and the odd moments are zero, see e.g.~\citet[page 24]{hiai00semicircle}, we can now write this sum over all subsets $S$. We get $$ \varphi(\vv u)=\sum_{S\subset\{1,\dots,n\}} \frac{1}{2\pi}\int_{-2}^2 y^{|S|}\sqrt{4-y^2}dy \prod_{j\not\in S}u_j=\frac{1}{2\pi}\int_{-2}^2 \prod_{j=1}^n(u_j+y)\sqrt{4-y^2}dy. $$ \end{proof} We can now prove the convergence of the middle component in Theorem \ref{T1}, showing that $L_n(t)$ behaves just like the sum of independent Bernoulli random variables with probability of success $1/3$. \begin{proposition} As $n\to\infty$, \[ \pp{\frac{3L_n(t)-nt}{\sqrt{2n}}}_{t\in[0,1]} \fddto \pp{B_t}_{t\in[0,1]}, \] where $\fddto$ denotes convergence of finite-dimensional distributions. \end{proposition} \begin{proof} Fix $s_0=0<s_1<\dots<s_d<s_{d+1}=1$. Since $L_n(0)=0$, it suffices to prove that the $(d+1)$-dimensional vector of increments \[ \Delta_k\topp n=L_n(s_k)-L_n(s_{k-1}), \quad k=1,\dots,d+1 \] converges in distribution to the corresponding increments of the Brownian motion. We use formula \eqref{GenF0} to deduce an integral representation for the Laplace transform of $(\Delta_1,\dots,\Delta_{d+1})$. Denote $n_k=\floor{ns_k}-\floor{ns_{k-1}}$, starting with $n_1=\floor{ns_1}$ and ending with $n_{d+1}=n-\floor{n s_d}$.
Splitting the sum into the consecutive blocks $\calN_k=\left\{j\in\NN: n_{k-1}<j\leq n_{k} \right\}$, we have \begin{align}\label{Lap0} E \exp\pp{\sum_{k=1}^{d+1} w_k\Delta_k} & = \frac{1}{M_n} \sum_{\gamma\in\calM_n}\exp\pp{\sum_{k=1}^{d+1}w_k\sum_{j\in \calN_k}\delta_j}\\ \nonumber &=\frac{1}{2\pi M_n}\int_{-2}^2 \prod_{k=1}^{d+1} (e^{w_k}+y)^{n_k}\sqrt{4-y^2}dy. \end{align} For centering, it is more convenient to work with $$G_n(t):=\frac1{\sqrt{2n}}(3L_n(t)- \floor{nt}),$$ which is asymptotically equivalent to $(3L_n(t)- nt)/\sqrt{2n}$. With \begin{equation} \label{u_{n,k}} u_{n,k}=e^{w_k/\sqrt{2n}} \end{equation} we rewrite \eqref{Lap0} as \begin{multline} \label{Lap1} E \exp\pp{\sum_{k=1}^{d+1} w_k(G_n(s_k)-G_n(s_{k-1}))} \\ =\frac{1}{2\pi M_n}\int_{-2}^2 \prod_{k=1}^{d+1} \pp{u_{n,k}^2+\frac y{u_{n,k}}}^{n_k}\sqrt{4-y^2}dy. \end{multline} The asymptotic for Motzkin numbers $M_n$ is well known \begin{equation}\label{M-growth} M_n\sim \frac{3^{n+3/2}}{2\sqrt{\pi}n^{3/2}}, \end{equation} see e.g.~\citet[Example VI.3 page 396]{flajolet09analytic} who consider $f_n=M_{n-1}$ so their asymptotic expression differs from \eqref{M-growth} by a factor of $3$. Here and below, we write $a_n\sim b_n$ if $\lim_{n\to\infty} a_n/b_n = 1$. From now on, we concentrate on the asymptotics of the integral on the right-hand side of \eqref{Lap1}. The first step is to discard the integral over $y<0$. Since $w_1,\dots,w_{d+1}$ are fixed, we have, for every $k=1,\dots,d+1$, $u_{n,k}\sim 1$. If $-2\leq y<0$ and $1/(1+\delta/2)<u<1+\delta/2$ for some $0<\delta<1$, then $$ \abs{u^2+\frac{y}{u}}\leq \max\ccbb{u^2,\frac 2u}<2+\delta<3 . $$ So $$ \frac{1}{2\pi M_n}\left|\int_{-2}^0\prod_{k=1}^{d+1}\pp{u_{n,k}^2+\frac y{u_{n,k}}}^{n_k}\sqrt{4-y^2}dy\right|\leq \frac{(2+\delta)^n}{M_n}\to 0.$$ To determine the asymptotic for the integral over $0<y<2$, mimicking \citep{bryc17limit} we substitute $y=2-v^2/(2n)$. We get \begin{align}\label{f_n} \frac1{2\pi}&\int_{0}^2 \prod_{k=1}^{d+1} \left(u_{n,k}^2+\frac{y}{u_{n,k}}\right)^{n_k}\sqrt{4-y^2}dy \\ & =\frac1{2\sqrt{2}\pi}\int_{0}^{2\sqrt{n}}\prod_{k=1}^{d+1} \left(u_{n,k}^2+\frac{2}{u_{n,k}}-\frac{v^2}{2nu_{n,k}}\right)^{n_k}\sqrt{4+\frac{v^2}{2n}}\frac{v^2}{n\sqrt{n}}dv \nonumber \\ & =:\frac{3^n}{2\sqrt{2}\pi n^{3/2}}\int_{0}^{\infty} f_n(v) dv,\nonumber \end{align} with \[ f_n(v) = 1_{\ccbb{v\leq 2\sqrt{n}}}\prod_{k=1}^{d+1} \left(\frac13\pp{u_{n,k}^2+\frac{2}{u_{n,k}}}-\frac{v^2}{6nu_{n,k}}\right)^{n_k}\sqrt{4+\frac{v^2}{2n}}v^2, v\ge0. \] We want to show $\lim_{n\to\infty}\int_0^\infty f_n(v)dv = \int_0^\infty\lim_{n\to\infty} f_n(v)dv$. To do so, we first verify that functions $f_n(v)$ are dominated by an integrable function. Since $e^{2x}+2e^{-x}\geq 3$, for any real $w$ and $0<\delta<1/2$ we can choose $N(w,\delta)$ such that for $n\geq N(w,\delta)$ we have $1/(1+\delta)<e^{w/\sqrt{2n}}<1+\delta$. Then for $0<v^2<4n$ we have \begin{equation} \label{pos0} \frac13e^{2w/\sqrt{2n}}+\frac23e^{-w/\sqrt{2n}}-e^{-w/\sqrt{2n}}\frac{v^2}{6n}\geq 1-\frac{2(1+\delta)}{3}>0. \end{equation} For $-1<x\leq 1/2$, we have $ e^{2x}+2e^{-x}\leq 3(1+2 x^2)$. So by $1+y\le e^y$ we get \[ \frac13e^{2w/\sqrt{2n}}+\frac23e^{-w/\sqrt{2n}}-e^{-w/\sqrt{2n}}\frac{v^2}{6n} \leq 1+\frac{w^2}{n}-\frac{1+\delta}{6n} v^2 \leq e^{w^2/n- v^2/(6n)}. \] Since the left-hand side of expression \eqref{pos0} is non-negative, by the above bound its $n_k$-th power is bounded by $\exp(\frac{n_k}{n}(w^2-v^2/6))$. 
Applying this bound to the factors in $f_n(v)$ for a finite number of values of $w=w_1,\dots,w_{d+1}$ and using the fact that $\sum n_k/n =1$ we see that for large enough $n$ and $v^2\leq 4n$ we have \begin{equation}\label{f1bound} 0\leq f_n(v)\leq \sqrt{4+\frac{v^2}{2n}} v^2 e^{\max_kw_k^2-v^2/6} \leq \sqrt{6} e^{\max_kw_k^2}v^2e^{- v^2/6} , \end{equation} and the latter bound is valid for all $v$ as $f_n(v)=0$ for $v>2\sqrt{n}$. This bound will justify the use of the dominated convergence theorem below. It remains to compute the pointwise limit of $f_n(v)$. Recalling \eqref{u_{n,k}}, we note that $$\frac13\pp{u_{n,k}^2+\frac{2}{u_{n,k}}}= 1+\frac{w_k^2}{2n}+o\pp{\frac1n}, $$ and hence $$ \lim_{n\to\infty}\left(\frac13\pp{u_{n,k}^2+\frac{2}{u_{n,k}}}-\frac{v^2}{6nu_{n,k}}\right)^{n_k}= e^{(s_k-s_{k-1})w_k^2/2 -(s_k-s_{k-1}) v^2/6}. $$ So $$\lim_{n\to\infty} f_n(v)= 2\exp\pp{\frac12\sum_{k=1}^{d+1}(s_k-s_{k-1})w_k^2} v^2 e^{-{v^2}/6}. $$ The factor of $2$ arises from $\sqrt{4+\frac{v^2}{2n}}$. By the dominated convergence theorem, \begin{align*} \lim_{n\to\infty}\int_0^\infty f_n(v)dv & =2 \exp\pp{\frac12\sum_{k=1}^{d+1}(s_k-s_{k-1})w_k^2} \int_0^\infty v^2 e^{-v^2/6} dv \\ &=3^{3/2} \sqrt{ 2 \pi}\exp\pp{\frac12\sum_{k=1}^{d+1}(s_k-s_{k-1})w_k^2}. \end{align*} So the right-hand side of \eqref{f_n} is asymptotically $$ \frac{3^{n+3/2}}{2\sqrt{\pi}\,n^{3/2}}\exp\pp{\frac12\sum_{k=1}^{d+1}(s_k-s_{k-1})w_k^2}. $$ From \eqref{M-growth} we therefore get $$\lim_{n\to\infty} E \exp\pp{\sum_{k=1}^{d+1} w_k(G_n(s_k)-G_n(s_{k-1}))}=\exp\pp{\frac12\sum_{k=1}^{d+1}(s_k-s_{k-1})w_k^2} .$$ The right-hand side is the Laplace transform of the increments of a Brownian motion $(B_{s_k}-B_{s_{k-1}})_{k=1,\dots,d+1}$. This ends the proof. \end{proof} \section{Proof of Theorem \ref{T1}}\label{Sect:Proofs} It will be convenient to re-state Theorem \ref{T1} using just two of the processes. \begin{theorem} \label{PT3} The finite-dimensional distributions of the process \begin{equation} \label{MultiVariate} \frac{1}{{\sqrt{2n}}}\left( 2A_n(t)+L_n(t)-nt,\; 3L_n(t)-nt \right)_{t\in[0,1]} \end{equation} converge to the corresponding finite-dimensional distributions of $(\frac{1}{\sqrt{3}}B_t^{ex}, B_t)_{t\in[0,1]}$, where $(B_t)_{t\in[0,1]}$ is a Brownian motion, $(B_t^{ex})_{t\in[0,1]}$ is a Brownian excursion, and the two processes are independent. \end{theorem} The proof requires some additional notation and preparation. An analog of \eqref{GenF0} involves a multivariate integral with respect to the finite-dimensional distributions of a Markov process $(Z_t)_{t\geq 0}$, which has the univariate distributions $P(Z_t\in dx)=p_t(x)dx$ with \begin{equation} \label{Z-univ} p_t(x)= \frac{\sqrt{4t-x^2}}{2\pi t}1_{\ccbb{|x|\le 2\sqrt t}} , \end{equation} and its transition probabilities for $0\leq s<t$ are given by $P(Z_t\in dy\mid Z_s=x)=p_{s,t}(x,y)dy$ with \begin{equation} \label{Z-trans} p_{s,t}(x,y)=\frac{1}{2\pi} \frac{(t-s)\sqrt{4t-y^2}}{tx^2+sy^2-(s+t)xy+(t-s)^2}\; \mbox{ for $|x|\leq 2\sqrt{s}, |y|\leq 2\sqrt{t}$}, \end{equation} starting at $Z_0 = 0$. The process $(Z_t)_{t\ge 0}$ is known as the {\em free Brownian motion}. See Appendix \ref{A:free-Wick} for an explanation. The joint moments of $(Z_t)_{t\geq 0}$ are given by a formula that resembles the formula of Isserlis \citep{isserlis18formula} for the joint moments of a multivariate normal random vector, sometimes known as Wick's theorem. The formula relies on the concept of a non-crossing partition introduced by \citet{kreweras72partitions}.
Recall that a pair partition $\pi$ of $\{1,\dots,d\}$, where $d$ is necessarily even, say $d = 2m$, is a partition into two-element sets $\{i_1,j_1\},\{i_2,j_2\},\dots,\{i_m,j_m\}$ with $i_k<j_k$, $k=1,\dots,m$. A pair partition $\pi$ is crossing if there exist two pairs $\{i_k,j_k\},\{i_{k'},j_{k'}\}\in\pi$ such that $i_k<i_{k'}<j_k<j_{k'}$, and is noncrossing otherwise. Somewhat more generally, we denote by $\mathbf{NC}_2(S)$ the set of all non-crossing pair partitions of a finite subset $S\subset\NN$ of even cardinality, and we write $\mathbf{NC}_2(d)$ for $\mathbf{NC}_2(\{1,\dots,d\})$, see \cite[page 132]{nica06lectures}. The key identity that we need is the following. \begin{lemma}\label{L:free-Wick} For $0<t_1\leq \dots\leq t_d $ we have \begin{equation} \label{free-Wick} E(Z_{t_1}Z_{t_2}\cdots Z_{t_d})=\begin{cases} \displaystyle\sum_{\pi\in \mathbf{NC}_2(d)} \prod_{\{i,j\}\in\pi} t_i , & \mbox{if $d$ is even} , \\ 0 &\mbox{ if $d$ is odd}. \end{cases} \end{equation} \end{lemma} In order not to interrupt the exposition, we postpone the proof to Appendix \ref{A:free-Wick}. Formula \eqref{free-Wick} then gives the integral formula for the joint generating function of the {\em ascent} and {\em level} steps. \begin{proposition}\label{P3} For $0<t_1\leq t_2\leq\dots\leq t_n$, \begin{equation}\label{up2fBM} \sum_{\gamma_n\in\calM_n} \prod_{j=1}^n t_j^{\eps_j^+}\prod_{j=1}^n u_j^{\delta_j}=E\pp{\prod_{j=1}^n\left(u_j+Z_{t_j}\right)}. \end{equation} \end{proposition} \begin{proof} We will use a natural bijection from the set of all noncrossing pair partitions on subsets $S\subset \{1,\dots,n\}$ of even cardinality, to the set of Motzkin paths, where a pair $(S,\pi)$ with $\pi\in \mathbf {NC}_2(S)$ is mapped to the Motzkin path $\gamma_n=(\eps_1,\dots,\eps_n)$ with $\eps_i=0$ if $i\not\in S$, $\eps_i=1$ if $\{i,j\}\in \pi$ and $i<j$, and $\eps_i=-1$ otherwise. This is of course the standard decomposition of a Motzkin path into the level part over $S^c$ and a Dyck path over $S$, the latter in one-to-one correspondence with noncrossing pair partitions by \citet[Exercise 6.19]{stanley99enumerative}. So the left-hand side of \eqref{up2fBM} is $$\sum_{S\subset\{1,\dots,n\}} \prod_{j\not\in S} u_j \sum_{\pi\in \mathbf{NC}_2(S)} \prod_{\{i,j\}\in \pi} t_i,$$ where the sum is over the subsets $S$ of even cardinality. But for nondecreasing $t_1,\dots,t_n$, \begin{align*} \prod_{j\not\in S} u_j\sum_{\pi\in \mathbf{NC}_2(S)} \prod_{\{i,j\}\in \pi} t_i& = E\left(\prod_{j\not\in S} u_j\prod_{k\in S} Z_{t_k}\right). \end{align*} The expectation on the right-hand side is $0$ when $S$ is of odd cardinality, so summing the right-hand side over all $S\subset\{1,\dots,n\}$ we get the right-hand side of \eqref{up2fBM}. \end{proof} We remark that \eqref{up2fBM} is a generalization of \eqref{GenF0}. We will use \eqref{up2fBM} to prove the convergence of Laplace transforms on an open set that does not include the origin. This will prove the convergence of finite-dimensional distributions by an application of the following result, which is not well known. \begin{lemma}\label{L3} Let $\vv{X}\topp n=(X_1\topp n,X_2\topp n,\dots,X_d\topp n)$ be a sequence of random vectors with Laplace transforms $\calL_n(\vv{z})=\calL_n(z_1,\dots,z_d)=E \exp(\sum_{j=1}^d z_j X_j\topp n)$ which are finite and converge pointwise to a function $\calL(\vv{z})$ for all $\vv{z}$ from an open set in $\mathbb{R}^d$. If $\calL(\vv{z})$ is the Laplace transform of a random vector $\vv{Y}=(Y_1,\dots,Y_d)$, then $\vv{X}\topp n$ converges in distribution to $\vv{Y}$.
\end{lemma} In the univariate case, this result is due to \citet[Section 5.14, page 378, (5.14.8)]{hoffmannjorgensen94probability}. It was rediscovered by Mukherjea, Rao and Suen \citep[Theorem 2]{mukherjea06note} and the proof given there works also in the multivariate setting, see \citep[Theorem A.1]{bryc17limit}. \begin{proof}[Proof of Theorem \ref{PT3}] Denote $$F_n(s)=\frac{2A_n(s)+L_n(s)-\floor{ns}}{\sqrt{2n}},G_n(s)= \frac{3L_n(s)-\floor{ns}}{\sqrt{2n}} ,$$ and fix $0=s_0<s_1<s_2<\dots<s_d<s_{d+1}=1$. Since $F_n(0)=G_n(0)=0$, and $(F_n(s),G_n(s))$ differs by at most $2/\sqrt{n}$ from the process in \eqref{MultiVariate},
it is enough to prove that the vector of increments $\vv{X}\topp n\in\RR^{d+1}\times \RR^{d+1}$ with components \[ X_j\topp n=\pp{F_n(s_j)-F_n(s_{j-1}),G_n(s_j)-G_n(s_{j-1})}, j=1,\dots,d+1, \] converges in distribution to the vector $\vv{Y}$ with components \[ Y_j=\pp{\frac{1}{\sqrt{3}}\pp{B^{ex}_{s_j}-B^{ex}_{s_{j-1}}},B_{s_j}-B_{s_{j-1}}} , j=1,\dots,d+1. \] Fix $\vv{z}=(z_1,\dots,z_{d+1})$ with $0<z_1<z_2<\dots<z_{d+1}$, and $\vv{w}=(w_1,\dots,w_{d+1})$. The plan is to compute the limit of the Laplace transforms, $\calL_n(\vv z,\vv w)$ below, and identify the limit as the Laplace transform of $\vv{Y}$. This will conclude the proof by Lemma \ref{L3}. For $k=1,\dots, d+1$ it is convenient to introduce the following notation: $$ \calN_k=\left\{j\in\NN: s_{k-1}n<j\leq s_{k}n \right\},\quad n_k=|\calN_k|=\floor{s_kn}-\floor{s_{k-1}n},\; $$ \begin{equation} \label{u-and-t} u_{n,k}=e^{w_k/\sqrt{2n}}, \quad t_{n,k}=e^{z_k/\sqrt{2n}}. \end{equation} We rewrite the Laplace transform as follows \begin{align* \calL_n(\vv{z},\vv w)& =E \exp \left(\sum_{k=1}^{d+1} z_k (F_n(s_k)-F_n(s_{k-1})) +\sum_{k=1}^{d+1} w_k (G_n(s_k)-G_n(s_{k-1}))\right)\\& = \prod_{k=1}^{d+1} e^{- n_k (z_k+w_k)/\sqrt{2 n}} E\left(\prod_{k=1}^{d+1} \exp \left(\frac{2 z_k}{\sqrt{2n}} \sum_{j\in\calN_k}\eps_j^+ +\frac{ z_k+3w_k}{\sqrt{2n}} \sum_{j\in\calN_k}\delta_j\right)\right) \\ &= \prod_{k=1}^{d+1} t_{n,k}^{-n_k} u_{n,k}^{-n_k} E\left(\prod_{k=1}^{d+1} t_{n,k}^{2 \sum_{j\in\calN_k}\eps_j^+} t_{n,k}^{ \sum_{j\in\calN_k}\delta_j} u_{n,k}^{3 \sum_{j\in\calN_k}\delta_j} \right), \end{align*} where by \eqref{up2fBM} the expectation above is the same as \[ \frac{1}{M_n} E\left(\prod_{k=1}^{d+1}\pp{t_{n,k}u_{n,k}^3 +Z_{t_{n,k}^2 }}^{n_k}\right). \] Therefore the Laplace transform can be written as the functional of process $(Z_t)_{t\geq 0}$: \begin{equation}\label{L_n} \calL_n(\vv{z},\vv w) =\frac{1}{M_n}E\left(\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k}\right). \end{equation} Since asymptotic behavior \eqref{M-growth} for Motzkin numbers is well known, we concentrate on the asymptotic of the integral on the right-hand side of \eqref{L_n}. Since $t_{n,k}\to 1$ as $n\to\infty$, it is clear that the limit of the Laplace transforms can be determined from the analysis of the process $(Z_t)_{t\in[1-\eps,1]}$. Moreover, we note that if $Z_t<0$ then for $\delta>0$ we have $$\left| u^2+\frac{Z_{t^2}}{ut}\right|\leq \max \ccbb{u^2, \frac2u}<2+\delta$$ for $u$ close enough to $1$. So for large enough $n>N(\vv{w},\delta)$, if $Z_{t_j^2}<0$ for some $j$ then \begin{align*} \left|\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k}\right| & \leq (2+\delta)^{n_j} \prod_{k\ne j}\left(u_{n,k}^2+\frac2{u_{n,k}}\right)^{n_k} \\ & \leq (2+\delta)^{n_j}\left( 3+\delta \right)^{n-n_j} =(2+\delta)^{n \theta}\left( 3+\delta\right)^{n(1-\theta)} =C^n, \end{align*} where $C=(2+\delta)^{ \theta}\left(3+\delta\right)^{(1-\theta)}\to 3 (2/3)^\theta<3$ as $\delta\to 0$. This shows that for small enough $\delta>0$ we have $C^n/M_n\to 0$. Thus, only the integral over positive $Z_{t_k^2}$ contributes to the limit on the right-hand side of \eqref{L_n}. That is, \begin{equation} \label{eq:L_n'} \calL_n(\vv z,\vv w) \sim \frac{1}{M_n}E\left(\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k}1_{\ccbb{Z_{t_{n,k}^2}>0,k=1,\dots,d+1}}\right). 
\end{equation} Next, \begin{multline*} E\left(\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k 1_{\ccbb{Z_{t_{n,k}^2}>0, k=1,\dots,d+1}} \right)\\ = \int_0^{2 t_{n,1} \dots\int_0^{2 t_{n,d+1}}\prod_{k=1}^{d+1}\left(u_{n,k}^2+\frac{y_k}{u_{n,k}t_{n,k}}\right)^{n_k} p_{t_{n,1}^2 }(y_1)\prod_{k=2}^{d+1}p_{t_{n,k-1}^2,t_{n,k}^2}(y_{k-1},y_k)\, d\vv y, \end{multline*} where $p_t$ and $p_{s,t}$ are densities from \eqref{Z-univ} and \eqref{Z-trans}. To find the asymptotic behavior of the latter integral, we substitute $y_k=t_{n,k} (2-v_k^2/(2n))$ and write \begin{multline}\label{pre-lim} E\left(\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k}1_{\ccbb{Z_{t_{n,k}^2,k=1,\dots,d+1}>0}}\right)\\ =\frac{3^{n}}{n^{3/2} \int_0^{2 \sqrt{n} \dots\int_0^{2\sqrt{n}} \prod_{k=1}^{d+1} \left(\frac{u_{n,k}^2}{3 }+\frac{2}{3u_{n,k}}-\frac{ v_k^2}{6nu_{n,k}}\right)^{n_k} \psi_n(\vv{v}) \,d\vv{v}, \end{multline} where $\vv{v}=(v_1,\dots,v_{d+1})$ and $$ \psi_n(\vv{v})=\sqrt{n}t_{n,1}v_1p_{{t_{n,1}^2}}(y_1(\vv v))\prod_{k=2}^{d+1} \frac{v_k}{n} t_{n,k} p_{t_{n,k-1}^2,t_{n,k}^2}(y_{k-1}(\vv v),y_k(\vv v)). $$ The rest of the proof combines arguments from \citep{bryc17limit,bryc18dual}. For completeness, we include the details which in the current setting are more straightforward. Recalling \eqref{u-and-t}, we first study the limit of the integrand. Clearly, \begin{equation*} \lim_{n\to\infty} \left(\frac{u_{n,k}^2}{3 }+\frac{2}{3u_{n,k}}-\frac{ v_k^2}{6nu_{n,k}}\right)^{n_k}= \exp\pp{\frac12(s_k-s_{k-1})w_k^2 - \frac16(s_k-s_{k-1})v_k^2}. \end{equation*} Next, we look at the limit of the first factor in $\psi_n(\vv{v})$. We have $$ \lim_{n\to\infty}\sqrt{n}t_{n,1}v_1p_{t_{n,1}^2 }\left(t_{n,1}\pp{2-\frac{v_1^2}{2n}}\right)=\frac{v_1^2}{\pi\sqrt{2}}.$$ The remaining factors in $\psi_n(\vv{v})$ also converge. Recalling the definition of $p_{s,t}(x,y)$, we write \[ p_{t_{n.k-1}^2,t_{n,k}^2}(y_{k-1}(\vv v),y_k (\vv v)) = \frac1{2\pi}\frac{(t_{n,k}^2-t_{n,k-1}^2)\sqrt{4t_k^2- t_{n,k}^2(2-v_k^2/(2n))^2}}{\varphi((z_k-z_{k-1})/\sqrt{2n},2-v_{k-1}^2/(2n),2-v_{k}^2/(2n))} , \] with \[ \varphi(\delta,x,y) = e^{-2\delta}\pp{4\sinh^2\delta+(x-y)^2+2xy(1-\cosh\delta)}. \] One can show that \[ \varphi(\varepsilon\delta,2-x\varepsilon^2,2-y\varepsilon^2)\sim \varepsilon^4\pp{(x-y)^2+2(x+y)\delta^2+\delta^4} \] as $\varepsilon\downarrow0$. See \citep[page 343]{bryc16local}, where our $\varphi$ is the same function as $\varphi_{0,0}$ therein. Then, it follows that \begin{multline} \label{eq:tangent} p_{t_{n,k-1}^2,t_{n,k}^2}(y_{k-1}(\vv v),y_k(\vv v)) \\ \sim \frac{4n}\pi\frac{(z_k-z_{k-1})v_k}{(v_{k-1}^2-v_k^2)^2 + 2(v_{k-1}^2+v_k^2)(z_k-z_{k-1})^2+(z_k-z_{k-1})^4}. 
\end{multline} Noting that for $z>0$, \begin{align*} & \frac{2zuv}{(z^2+(u-v)^2)(z^2+(u+v)^2)}= \frac{z/2}{z^2+(u-v)^2} - \frac{z/2}{z^2+(u+v)^2}\\ & \quad\quad= \frac12\int_0^\infty e^{- z x}\cos((u-v)x) \,dx -\frac12\int_0^\infty e^{- z x}\cos((u+v)x) \,dx \\ & \quad\quad =\int_0^\infty e^{- z x}\sin(ux)\sin(vx)\,dx, \end{align*} we obtain \begin{align*} \psi_n &(\vv v) \to \frac{v_1^2}{\pi\sqrt 2}\prodd k2{d+1}\frac{4 v_k^2(z_k-z_{k-1})}{\pi((v_k-v_{k-1})^2+ (z_k-z_{k-1})^2)((v_k+v_{k-1})^2+ (z_k-z_{k-1})^2)}\\ & = \frac{2^dv_1v_{d+1}}{\pi^{d+1}\sqrt 2}\prodd k2{d+1} \frac{2 (z_k-z_{k-1})\cdot v_{k-1}v_k}{((v_k-v_{k-1})^2+ (z_k-z_{k-1})^2)((v_k+v_{k-1})^2+ (z_k-z_{k-1})^2)}\\ & = \frac{2^dv_1v_{d+1}}{\pi^{d+1}\sqrt 2}\prodd k1d \int_{\mathbb R_+} e^{-(z_{k+1}-z_{k})x_k}\sin (v_{k+1} x_k)\sin(v_{k}x_k) \, dx_k. \end{align*} Next, we verify that we can pass to the limit under the integral sign. Consider the $k$-th factor in the product. As in the proof of \eqref{f1bound}, there are $N$ and $\theta>0$ such that for all $n>N$ and all $0<v_k^2<2n$ we have $$0<\frac{u_{n,k}^2}{3 }+\frac{2}{3u_{n,k}}-\frac{ v_k^2}{6nu_{n,k}}<e^{w_k^2/n}e^{-\theta v_k^2/n}.$$ Thus with $ \theta_1=\theta \min _{1\leq k\leq d+1}(s_k-s_{k-1})/2$ and $n$ large enough (so that $1/n<(s_k-s_{k-1})/2$) we have $$ \prod_{k=1}^{d+1} \left(\frac{u_{n,k}^2}{3 }+\frac{2}{3u_{n,k}}-\frac{ v_k^2}{6nu_{n,k} }\right)^{n_k}\leq \exp\pp{\sum_{k=1}^{d+1}w_k^2-\theta_1\sum_{k=1}^{d+1}v_k^2}. $$ Since the minimum of the denominator in \eqref{Z-trans} occurs for $x=2\sqrt{s}$, $y=2\sqrt{t}$ and it is then equal $(\sqrt{t}-\sqrt{s})^4$, we see that up to a multiplicative constant $\psi_n(\vv{v})$ is bounded by $$\prod_{k=1}^{d+1} \frac{v_k^2(t_{n,k} +t_{n,k-1} )}{n^{3/2}(t_{n,k} -t_{n,k-1} )^3} \sim 2^{5d/2} \prod_{k=1}^{d+1}\frac{v_k^2}{(z_k-z_{k-1})^3}. $$ So the integrand on the right-hand side of \eqref{pre-lim} is bounded by a constant times the function $v_1^2\cdots v_{d+1}^2\exp(-\theta_1(v_1^2+\dots+v_{d+1}^2))$, which is integrable over $\RR_+^{d+1}$. By including the limits of integration as indicators in the integrand, this bound holds for all $0\leq v_1,\dots,v_{d+1}<\infty$. We can therefore pass to the limit under the integral on the right-hand side of \eqref{pre-lim}. We get \begin{align*} \frac1{M_n}E& \left(\prod_{k=1}^{d+1}\left( u_{n,k}^2+\frac{Z_{t_{n,k}^2}}{u_{n,k}t_{n,k}}\right)^{n_k}1_{\ccbb{Z_{t_{n,k}^2,k=1,\dots,d+1}>0}}\right)\nonumber\\ & \sim \frac1{M_n}\frac{3^n}{n^{3/2}}\int_{\mathbb R_+^{d+1}}\exp\pp{\frac12\summ k1{d+1}(s_k-s_{k-1})w_k^2 - \frac16(s_k-s_{k-1})v_k^2}\nonumber\\ & \quad \quad \times \frac{2^dv_1v_{d+1}}{\pi^{d+1}\sqrt 2}\prodd k1d \int_0^\infty e^{-(z_{k+1}-z_k)x_k} \sin(v_{k+1}s_k)\sin(v_kx_k)\, dx_k\, d\vv v.\nonumber\\ \end{align*} Thus \eqref{eq:L_n'} becomes \begin{equation}\label{lim-L0} \lim_{n\to\infty}\calL_n(\vv z,\vv w) =\frac{2^{d+1/2}}{\pi^{d+1/2}3^{3/2}}\exp\pp{\frac12\summ k1{d+1}(s_k-s_{k-1})w_k^2}\int_{\mathbb R_+^{d+1}}\int_{\mathbb R^d_+} g(\vv v,\vv x)\, d\vv x d\vv v, \end{equation} where \begin{multline* g(\vv v, \vv x) \\ v_1v_{d+1}e^{-v_{d+1}^2(1-s_d)/6}\prod_{k=1}^{d}e^{-v_k^2(s_k-s_{k-1})/6} e^{-(z_{k+1}-z_{k})x_k}\sin (v_{k+1} x_k)\sin(v_{k}x_k). \end{multline*} Noting that $s_k-s_{k-1} >0$ and $z_{k+1}-z_k>0$, we see that $|g(\vv v, \vv x)|$ is bounded by the integrable function of the form $$ v_1v_{d+1}\exp\pp{-\theta \pp{\sum_{k=1}^{d+1} v_k^2+\sum_{k=1}^d x_k }} $$ for some $\theta>0$. 
So the order of the iterated integrals on the right-hand side of \eqref{lim-L0} can be interchanged. We then have \begin{align} \frac{2^{d+1/2}}{\pi^{d+1/2}3^{3/2}} & \int_{\mathbb R_+^d}\int_{\mathbb R_+^{d+1}}g(\vv v,\vv x)\, d\vv v d\vv x\nonumber = \int_{\mathbb R_+^d}\frac{\sqrt {8\pi}}{3^{3/2}} \exp\pp{-\summ k1d(z_{k+1}-z_k)x_k}\\ & \quad\times \frac 1\pi \int_{\mathbb R_+}v_1e^{-s_1v_1^2/6}\sin(v_1x_1)\, dv_1\nonumber\\ & \quad \times \prodd k2d \frac2\pi\int_{\mathbb R_+}e^{-(s_k-s_{k-1})v_k^2/6}\sin(v_kx_k)\sin(v_kx_{k-1})\, dv_k\nonumber\\ & \quad \times \frac 1\pi \int_{\mathbb R_+}v_{d+1}e^{-v_{d+1}^2/6}\sin(v_{d+1}x_d)\, d{v_{d+1}} \, d\vv x.\nonumber \end{align} So \eqref{lim-L0} now becomes \begin{multline} \lim_{n\to\infty}\calL_n(\vv z,\vv w) \\ =\exp\pp{\frac12\summ k1{d+1}(s_k-s_{k-1})w_k^2} \int_{\mathbb R_+^d} \exp\pp{-\summ k1d(z_{k+1}-z_k)x_k} f(\vv x)\, d \vv x, \label{eq:Lap(f)} \end{multline} where \begin{equation} f(\vv x) = \frac{ \sqrt{8\pi}}{ 3\sqrt{3}}\alpha_{s_1}(x_1)\alpha_{1-s_d}(x_d) \prod_{k=2}^{d}\beta_{s_k-s_{k-1}}(x_{k-1},x_k) \label{eq:f} \end{equation} with \begin{equation*} \alpha_{s}(x )=\frac{1}{\pi}\int_0^\infty ve^{-s v^2/6}\sin(vx )dv = \frac{3 \sqrt{3}}{\sqrt{2\pi} s^{3/2} } x e^{-3 x^2/(2s)} ,\end{equation*} and \begin{align*} \beta_{s}&(x,y)=\frac{2}{\pi}\int_0^\infty e^{-s v^2/6} \sin (x v)\sin (yv)\, dv \\ &= \frac{1}{\pi}\int_0^\infty e^{-s v^2/6} \cos ((y-x) v)\, dv - \frac{1}{\pi}\int_0^\infty e^{-s v^2/6} \cos ((y+x) v) \, dv\\ &=\frac{\sqrt{3}}{\sqrt{2\pi s}}\left(\exp\left(-\frac{3(y-x)^2}{2s}\right)-\exp\left(-\frac{3(y+x)^2}{2s}\right)\right). \end{align*} To conclude the proof, we now match the density in \eqref{eq:f} to the joint density of the Brownian excursion. It is known that the joint probability density function of the Brownian excursion $(B^{ex}_{s_1}, \dots, B^{ex}_{s_d})$ is \begin{equation*} f_{s_1,\dots,s_d}(x_1,\dots,x_d) = \sqrt{8\pi}\ell_{s_1}(x_1) \ell_{1-s_d}(x_d)\prod_{k=1}^{d-1}g_{s_{k+1}-s_k}(x_k,x_{k+1}) \end{equation*} with \begin{equation*} \ell_t(y) = \frac1{\sqrt{2\pi t^3}} y\exp\left(-\frac{y^2}{2t}\right), \quad t,y>0 , \end{equation*} and \begin{equation*} g_t(y_1,y_2) = \frac1{\sqrt{2\pi t}}\pp{\exp\left(-\frac{(y_1-y_2)^2}{2t}\right) - \exp\left(-\frac{(y_1+y_2)^2}{2t}\right)}, \quad t,y_1,y_2>0. \end{equation*} See \citep{durrett77weak}, \citep[page 76]{ito65diffusion}, or \citep[page 464]{revuz99continuous}. Thus the density of $(B^{ex}_{s_1}, \dots, B^{ex}_{s_d})/\sqrt{3}$ is \begin{align*} 3^{d/2}& f_{s_1,\dots,s_d}\pp{\sqrt{3}x_1,\dots,\sqrt{3}x_d}\\ &= \frac{\sqrt{8\pi}}{\sqrt{3}}\cdot \sqrt{3}\ell_{s_1}\pp{\sqrt{3}x_1} \cdot \sqrt{3}\ell_{1-s_d}\pp{\sqrt{3}x_d}\prod_{k=1}^{d-1}\pp{\sqrt{3}g_{s_{k+1}-s_k}\pp{\sqrt{3}x_k,\sqrt{3}x_{k+1}}} \\&=\frac{\sqrt{8\pi}}{\sqrt{3}} \frac{\alpha_{s_1}(x_1)}{\sqrt{3}}\frac{ \alpha_{1-s_d}(x_d)}{\sqrt{3}}\prod_{k=2}^{d}\beta_{s_{k}-s_{k-1}}(x_{k-1},x_{k}) = f(\vv x). \end{align*} Combining \eqref{eq:Lap(f)} and the above, we have shown that \begin{multline*} \lim_{n\to\infty}\calL_n(\vv z,\vv w)= E \exp \pp{\sum_{k=1}^{d+1} w_k (B_{s_k}-B_{s_{k-1}})} E\exp\pp{\frac1{\sqrt 3}\sum_{k=1}^{d} (z_{k+1}-z_k) B_{s_k}^{ex}} \\ = E \exp \pp{\sum_{k=1}^{d+1} w_k (B_{s_k}-B_{s_{k-1}})} E\exp\pp{-\frac1{\sqrt 3}\sum_{k=1}^{d+1} z_k (B_{s_k}^{ex}-B_{s_{k-1}}^{ex})}. \end{multline*} By Lemma \ref{L3}, this ends the proof.
\end{proof} \begin{remark} In \citep{bryc17limit}, another ingredient of the proof is to introduce the so-called {\em tangent process}, a positive self-similar Markov process with an explicit transition density function that is of independent interest and, in particular, plays a role in the Laplace transform of the Brownian excursion \citep{bryc18dual}. Here, we choose not to elaborate on the tangent process in order to reduce the probabilistic flavor of the proof. Instead we only mention that the tangent process arises in the step \eqref{eq:tangent}, where the right-hand side is the same as $2nq_{z_{k-1},z_k}(v_{k-1}^2,v_k^2)$ with $q_{s,t}(x,y)$ being the transition density function of the tangent process as in \citep[Eq.(4.1)]{bryc17limit}. \end{remark} \begin{proof}[Proof of Theorem \ref{T1}] To see that Theorem \ref{PT3} is an equivalent formulation of Theorem \ref{T1}, note that since $D_n(s)=\floor{ns}-L_n(s)-A_n(s)$, we have \begin{eqnarray*} \frac{1}{\sqrt{2n}} \pp{A_n(s)-\frac{\floor{ns}}3} &=& \frac1{2}F_n(s)-\frac{1}{6}G_n(s) \\ \frac{1}{\sqrt{2n}} \pp{L_n(s)-\frac{\floor{ns}}3} &=& \frac{1}{3}G_n(s) \\ \frac{1}{\sqrt{2n}}\pp{D_n(s)-\frac{\floor{ns}}3} &=& -\frac1{2}F_n(s)-\frac{1}{6}G_n(s). \end{eqnarray*} \end{proof} \section{Comments and remarks}\label{Sec:RM} \subsection{Sulanke polynomials} The topic of this research was also inspired by \citet{sulanke00moment,sulanke01bijective}, who studied recursions for the polynomials $\calS_n(t)$ defined as the sum over all Motzkin paths of length $n$ of the products of weights along a path. To define these polynomials, Sulanke assigned weight $1$ to ascent and descent steps, and assigned the weight of an indeterminate $t$ to each level step. He then gave a bijective proof of a recursion for $\calS_n$. (We note that Sulanke considered elevated Motzkin paths, thus his $f_n(t)$ is $\calS_{n-2}(t)$ in our notation.) Clearly, $\calS_n(t)=\varphi(t,t,\dots,t)$, where $\varphi$ is given by \eqref{GenF0}. This gives the generating function for Sulanke polynomials. \begin{proposition} Setting $\calS_0(t)=1$, we have \begin{equation} \label{MGF}\sum_{n=0}^\infty z^n \calS_n(t)=\frac{1-tz-\sqrt{(1-tz)^2-4z^2}}{2z^2}. \end{equation} \end{proposition} (When $t=1$ this is of course the well-known expression for the generating function of the Motzkin numbers.) \begin{proof}From \eqref{GenF0} we get \begin{equation} \label{Sulanke2FB} \calS_n(t)=\frac{1}{2\pi}\int_{-2}^2 (t+y)^n\sqrt{4-y^2}\,dy, \; n=0,1,2,\dots \end{equation} Summing the series on the left-hand side of \eqref{MGF} we see that the generating function of Sulanke polynomials is $G(1/z-t)/z$, where $$ G(z)=\frac{1}{2\pi}\int_{-2}^2 \frac{\sqrt{4-y^2}}{z-y}\,dy = \frac{z-\sqrt{z^2-4}}{2} $$ is the Cauchy--Stieltjes transform of the semicircle law, see for example \citet[Example 3.1.1]{hiai00semicircle}. \end{proof} We remark that asymptotic normality of $L_n(1)$ can also be deduced from \eqref{Sulanke2FB} using the Laplace method, and presumably also from the generating function \eqref{MGF}; analytic techniques in \citet{flajolet09analytic} are likely to imply a stronger local limit law of the Gaussian type. \subsection{A probabilistic approach for Theorem \ref{T1}}\label{sec:proba} Here we sketch a probabilistic proof of Theorem \ref{T1}. Let $$J_n(t)=\floor{nt}-L_n(t)=A_n(t)+D_n(t)$$ denote the number of non-level steps.
Since asymptotically only $1/3$ of the steps of a uniformly random Motzkin path are level (horizontal) steps, we can expect that $J_n(1)/n\to 2/3$ in probability, and it is natural to expect that $\frac1{\sqrt{n}}(J_n(t)-2nt/3)_{t\in[0,1]}$ converges to $\frac{\sqrt{2}}{3}(B_t)_{t\in[0,1]}$, the Brownian motion scaled by the standard deviation of a Bernoulli random variable with probability of success $p=2/3$. Conditionally on $J_n$, $A_n(t)-D_n(t)=2A_n(t)-J_n(t)$ behaves like a Dyck path on $J_n(1)$ sites, which by \citet{kaigh76invariance} converges to the Brownian excursion. So we expect that $$ \left(\frac{A_n(t)-J_n(t)/2}{\sqrt{J_n(1)}}\right)_{t\in[0,1]}\toD \frac{1}{2} (B_t^{ex})_{t\in[0,1]}. $$ We can then decompose the process into $$ \frac{A_n(t)-nt/3}{\sqrt{2n}}=\frac{A_n(t)-J_n(t)/2}{\sqrt{J_n(1)}}\sqrt{\frac{J_n(1)}{2n}}+\frac{J_n(t)-2nt/3}{2\sqrt{2n}} ,\quad t\in[0,1]. $$ One can show that the two processes on the right-hand side above converge to $\frac1{2\sqrt 3}B^{ex}$ and $\frac 16B$, respectively, and furthermore the limit of the first term is independent of $(J_n(1))_{n\in\mathbb N}$ as $n\to\infty$. Therefore, the two limit processes corresponding to the right-hand side above are independent. \subsection*{Acknowledgement} The authors thank an anonymous referee for the careful reading of the manuscript. WB's research was supported in part by the Charles Phelps Taft Research Center at the University of Cincinnati. He thanks Jacek Weso\l owski for helpful discussions. YW's research was supported in part by NSA grant H98230-16-1-0322 and Army Research Laboratory grant W911NF-17-1-0006.
\section{Introduction} \label{sec_intro} Experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have been searching for the deconfined phase of nuclear matter and have begun to probe its properties~\cite{Adams:2005dq,Schukraft:2011kc}. There are strong indications that this new form of matter behaves like a nearly perfect fluid with high opacity and low viscosity, referred to as strongly coupled Quark-Gluon Plasma (sQGP)~\cite{Gyulassy:2004zy,Shuryak:2008eq}. One of the major experimental findings is a large azimuthal anisotropy, $v_2$, in transverse momentum ($p_T$) spectra of hadrons in non-central collisions~\cite{Voloshin:2008dg}. To account for this observation, hydrodynamic simulations require an early initialization time, implying a rapid thermalization of the bulk medium~\cite{Teaney:2001av,Heinz:2009xj,Hirano:2010je}. However, the microscopic origin of the rapid thermalization remains a matter of debate. In contrast to light partons making up the bulk of the medium, heavy quarks (charm and bottom), produced in primordial hard collisions and acting as impurities in the QGP, are not expected to fully equilibrate with the surrounding medium. Due to their large masses ($m_Q$) a memory of their interaction history may be preserved, thus providing a more direct probe of the medium properties than bulk observables~\cite{Svetitsky:1987gq,vanHees:2004gq,Moore:2004tg,Rapp:2009my}. The thermal relaxation time of heavy quarks has been argued to be larger than that of light quarks by a factor of $m_Q/T\approx 5-20$~\cite{Moore:2004tg,Rapp:2009my} ($T$: typical temperature of the QGP). As they diffuse through the medium, heavy quarks interact with the light partons and their spectrum becomes quenched~\cite{vanHees:2004gq,Moore:2004tg}. Moreover, as they couple to the collective flow of the medium in non-central heavy-ion collisions, heavy quarks may develop substantial momentum anisotropies. These two effects are translated into equivalent behavior of heavy-flavor (HF) meson ($D$ and $B$) spectra and $v_2$, and further into the spectrum and $v_2$ of their decay electrons. The latter have been measured in Au+Au collisions at RHIC~\cite{Abelev:2006db,Adare:2006nq,Adare:2010de}, exhibiting appreciable modifications over their baseline spectra from $p+p$ and $d+$Au collisions. Model calculations based on radiative energy loss in perturbative QCD (pQCD), which could account for the observed jet-quenching in the light sector~\cite{Gyulassy:2003mc}, predicted a much smaller quenching for heavy quarks and associated single-electron spectra~\cite{Wicks:2005gt}. The large HQ mass suppresses small-angle gluon radiation (``dead cone" effect~\cite{Dokshitzer:2001zm}) and reduces the gluon formation time~\cite{Zhang:2003wk}, hence mitigating radiative energy loss significantly. However, elastic collisions of heavy quarks with light partons~\cite{Svetitsky:1987gq,Braaten:1991we,vanHees:2004gq,Moore:2004tg,Mustafa:2004dr} have been argued to dominate over radiative scattering at low momentum, resulting in notable quenching of the HQ spectrum. However, jet quenching only captures part of the physics potential of the HQ probe. Its diffusion properties, which reach all the way to zero momentum, include energy-gain processes which are, e.g., instrumental for the coupling to the collective flow of the medium. 
Several studies of HQ diffusion have been conducted in recent years using Fokker-Planck~\cite{vanHees:2004gq,Moore:2004tg,Mustafa:2004dr,vanHees:2005wb,vanHees:2007me,Gossiaux:2008jv,Akamatsu:2008ge,Das:2009vy,Alberico:2011zy} and Boltzmann transport~\cite{Zhang:2005ni,Molnar:2006ci,Uphoff:2010sh} approaches, mostly implementing elastic collisions as the microscopic dynamics. They differ not only in their treatment of the background medium, but also in the evaluation of (a) the transport coefficients emerging from the interactions between the heavy quarks and the medium, and (b) hadronization of heavy quarks into HF mesons. Concerning item (a), most studies employ variants of the pQCD interaction~\cite{Combridge:1978kx}, while a novel approach with heavy-light resonant interactions was introduced in Refs.~\cite{vanHees:2004gq,vanHees:2005wb}. The latter was found to be a factor of 3-4 more efficient in HQ thermalization than pQCD, and was subsequently corroborated by microscopic $T$-matrix calculations using input potentials from lattice QCD (lQCD)~\cite{vanHees:2007me,Riek:2010fk,Riek:2010py}. Concerning HQ hadronization, several studies focused on independent fragmentation~\cite{Akamatsu:2008ge,Das:2009vy,Alberico:2011zy,Uphoff:2010sh}, which is not reliable in the low and intermediate-$p_T$ regimes. Here, light partons surrounding the heavy quark have a high phase-space density which renders coalescence a more plausible hadronization mechanism~\cite{Lin:2003jy,Fries:2003vb,Greco:2003mm,Fries:2008hs}. In Refs.~\cite{vanHees:2005wb,vanHees:2007me}, heavy-light quark recombination has been incorporated utilizing an instantaneous coalescence model~\cite{Greco:2003vf} which could still be problematic at low $p_T$ due to lack of energy conservation. A reliable treatment of the low-$p_T$ regime is important since the total number of heavy quarks is expected to be conserved through the hadronization transition. If the $D$- or $B$-meson spectra are distorted at low $p_T$, the spectra at higher $p_T$ are necessarily affected thus modifying the $R_{AA}$ (and $v_2$) of $D$ and $B$-mesons and their decay electrons. The purpose of the present work is to establish a realistic and quantitative framework for HQ probes within (a) a strongly coupled QGP background medium (modeled by hydrodynamics), (b) a non-perturbative scenario of elastic diffusion in the QGP simulated by Fokker-Planck-Langevin dynamics, and (c) a hadronization scheme at the phase transition based on the same interaction as in (b), combining recombination and fragmentation consistent with the limiting cases of kinetic equilibrium and vacuum hadronization. Unlike previous studies utilizing weak-coupling diffusion~\cite{Moore:2004tg,Gossiaux:2008jv,Das:2009vy,Alberico:2011zy,Uphoff:2010sh} we try to implement the HQ probe consistently within a framework of strong coupling between heavy and light quarks, both in the QGP and during hadronization. Our comprehensive framework is hence conceptually compatible with the notion of a strongly interacting QGP. The strategy in this work is as follows. For the HQ transport coefficient we employ a non-perturbative $T$-matrix calculation of heavy-light quark interactions~\cite{Riek:2010fk,Riek:2010py}. This calculation supports Feshbach resonances in the QGP in the color-singlet and anti-triplet channels, surviving as rather broad states up to $\sim1.5~T_c$. They are responsible for the enhancement of the transport coefficient compared to pQCD scattering. 
With these coefficients we perform Langevin simulations of HQ diffusion through an expanding medium which is described by ideal 2+1-dimensional hydrodynamics (using the AZHYDRO code~\cite{Kolb:2003dz} at RHIC energies). At the phase transition, heavy quarks are hadronized through coalescence with light quarks of the medium using the Resonance Recombination Model (RRM)~\cite{Ravagli:2007xx} implemented on a hypersurface given by the hydrodynamic simulation. The coalescence probability is evaluated using the resonant scattering rate of the heavy quark with light (anti) quarks, supplemented by independent fragmentation. The RRM formalism is consistent with the heavy-light Feshbach resonance formation found in the $T$-matrix used for the transport coefficient. This stipulates the role played by the resonance correlations in our work. With an artificially large transport coefficient, we check the equilibrium limit of the HQ distribution emanating from the combined hydro+Langevin simulation and the ensuing degree of equilibration of the HF mesons upon resonance recombination. The full space-momentum correlations generated by the hydro-Langevin simulation enter into resonance recombination. This enables a quantitative assessment of the radial medium flow on HF meson spectra at low $p_T$ as imprinted on the final $R_{AA}$ measurement. Our article is organized as follows. In Sec.~\ref{sec_Langevin} we introduce the ingredients for the hydro-Langevin simulation of HQ diffusion in the medium, i.e., the transport coefficient, the initial distribution in coordinate and momentum space, and the background medium described by an ideal hydrodynamic model. Numerical results for the HQ $R_{AA}$ and $v_2$ are discussed in the equilibrium limit as well as for realistic coefficients. Sec.~\ref{sec_hadronization} is devoted to HQ hadronization. We implement the RRM formalism on arbitrary hadronization hypersurfaces, elaborate the equilibrium mapping in resonance recombination, and determine the partition of coalescence and fragmentation. In Sec.~\ref{sec_flow} we examine consequences of modifying the medium flow for the predicted HF meson spectra, triggered by indications that the partonic flow of the hydrodynamic evolution is too soft. In Sec.~\ref{sec_electron} we make contact with current experiments in terms of the nuclear modification factor and the elliptic flow of electrons from HF decays. In Sec.~\ref{summary} we summarize and conclude. \section{Langevin Simulation of Heavy Quark Diffusion} \label{sec_Langevin} \subsection{Relativistic Langevin Kinetics} \label{ssec_2.1} The thermal momentum of a heavy quark at temperatures characteristic for heavy-ion collisions at RHIC amounts to $p_{th}\sim \sqrt{m_QT}$, which is parametrically larger than the typical momentum transfer, $q\sim T$, in a single elastic collision with a light parton from the bulk medium. Therefore many collisions are needed to change the HQ momentum considerably~\cite{vanHees:2004gq,Moore:2004tg}. 
This forms the basis for approximating the HQ motion in the QGP by a succession of uncorrelated momentum kicks and leads to a Fokker-Planck approach realized stochastically by the Langevin equations~\cite{Landau1981,Svetitsky:1987gq,Rapp:2009my,Hanggi2009} \begin{align} \label{Langevinrule1} d\mathbf{x}&=\frac{\mathbf{p}}{E}dt,\\ d\mathbf{p}&=-\Gamma(p) \ \mathbf{p} \ dt + \sqrt{2D(\mathbf{p}+d \mathbf{p}) \, dt} \ \mathbf{\rho} \ , \label{Langevinrule2} \end{align} where $\mathbf{x}$ and $\mathbf{p}$ are the position and momentum vector of the heavy quark, and $E(p)=(m_Q^2+\mathbf{p}^2)^{1/2}$ is its energy. In the following we employ the post-point discretization scheme in which the equilibrium condition (the relativistic fluctuation-dissipation theorem) takes the simple form \begin{equation} D(p)=\Gamma(p)~E(p)~T \label{equilcond} \end{equation} with $\Gamma(p)$ being the drag coefficient and $D(p)$ the (diagonal) diffusion coefficient. The standard Gaussian noise variable, $\mathbf{\rho}$, is distributed according to \begin{equation} w(\mathbf{\rho})=\frac{1}{(2\pi)^{3/2}}e^{-\mathbf{\rho}^2/2} \, . \end{equation} Neither the original Fokker-Planck equation~\cite{Landau1981,Rapp:2009my} nor the Langevin equation is Lorentz covariant. We choose the momentum and position updates for our HQ test particles to be at equidistant time steps $d\tau$ in the lab frame. For a flowing medium, as in our context, the momentum updates are rather to be done in the fluid rest frame. The updated 4-momentum is boosted back to the lab frame with the fluid four-velocity $u^{\mu}(x)=\gamma(v)(1,\mathbf{v}(x))$. The aforementioned equilibrium condition must be satisfied in order for the long-time limit of the test particle distribution to converge to the equilibrium (Boltzmann-J\"uttner) distribution as defined by the underlying background medium. Further details of our algorithm will be detailed in a forthcoming article~\cite{langevin}. \subsection{Thermal Relaxation Rate of Heavy Quarks} \label{ssec_2.2} The transport coefficient most commonly calculated from an underlying microscopic interaction of the heavy quark with the bulk medium is the thermal relaxation rate $A(p;T)$. It is related to the drag coefficient, $\Gamma(p;T)$, in the post-point Langevin scheme, Eq.~(\ref{Langevinrule2}), through \begin{equation} \Gamma(p;T)=A(p;T)+\frac{1}{E(p)}\frac{\partial D(p;T)}{\partial E(p)} \ . \label{Gamma-A} \end{equation} Utilizing the equilibrium condition (\ref{equilcond}) one can argue that $\Gamma(p) = A(p) + \mathcal{O}(T/m_Q)$ and neglect terms to higher order in the inverse HQ mass (relative to the medium temperature). \begin{figure}[!t] \includegraphics[width=\columnwidth]{imaginary-part_T-matrix_color-singlet.eps} \includegraphics[width=\columnwidth]{imaginary-part_T-matrix_color-anti-triplet.eps} \caption{(Color online) Imaginary part of the in-medium on-shell $T$-Matrix for charm-light quark scattering as a function of center-of-mass energy in the color-singlet (upper panel) and anti-triplet (lower panel) channels, taken from the lattice-QCD based potential approach of Ref.~\cite{Riek:2010fk}. 
The vacuum $T$-matrices have been downscaled by a factor of $0.025$.} \label{fig_Tmat} \end{figure} \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{charm_relaxation_rate.eps} \includegraphics[width=\columnwidt ]{charm_relaxation_rate_vs_T.eps} \caption{(Color online) (a) Charm-quark relaxation rate as a function of three-momentum using (i) heavy-light quark $T$-matrices (with lQCD internal energy~\cite{Kaczmarek:2005ui} as potential) plus pQCD gluon scattering with $\alpha_s=0.4$ (upper 3 curves), and (ii) pQCD scattering off anti-/quarks and gluons with $\alpha_s=0.4$ (lower 3 curves). (b) Temperature dependence of the charm/bottom quark thermal relaxation rate (at vanishing momentum) used in our simulations. The results are taken from Ref.~\cite{Riek:2010fk}.} \label{fig_Ap} \end{figure} We employ HQ relaxation rates from Refs.~\cite{Riek:2010fk}, where in-medium $T$-matrices have been calculated for both heavy-light and quarkonium channels. The input potentials were constructed using a field-theoretic ansatz for a confining and a color-Coulomb interaction with parameters fitted to color-average free energies computed in finite-temperature lattice QCD (lQCD)~\cite{Kaczmarek:2005ui}. This approach treats heavy quarkonia and heavy-light interactions in the QGP on an equal footing, and in both bound-state and scattering regimes. One thus obtains mutual constraints by analyzing, e.g., Euclidean correlation functions and HQ susceptibilities which turn out to agree fairly well with thermal lQCD ``data"~\cite{Riek:2010py}. For heavy-light quark scattering, the (non-perturbative) resummation in the $T$-matrix generates resonances close to the 2-particle threshold (commonly referred to as ``Feshbach resonances") in the attractive color-singlet (meson) and color-anti-triplet (diquark) channels up to temperatures of about 1.5\,$T_c$, see Fig.~\ref{fig_Tmat} for charm quarks (similar results are obtained in the bottom sector). The increasing strength of the $T$-matrices in the color-singlet and anti-triplet channels when approaching $T_c$ from above is indicative for ``pre-hadronic" correlations leading to hadronization. But even at high temperatures a substantial enhancement of the $T$-matrix over elastic pQCD amplitudes persists, in particular close to threshold. The rather large resonance widths are mostly generated through the self-energies of the light- and heavy-quark propagators in the $T$-matrix (evaluated self-consistently in the HQ sector). The $T$-matrices have been used to calculate thermal relaxation rates of heavy quarks~\cite{Riek:2010fk,Riek:2010py}. Resonant rescattering accelerates kinetic equilibration by up to a factor of $\sim$3-5 relative to leading order (LO) pQCD calculations~\cite{Svetitsky:1987gq}, cf.~upper panel of Fig.~\ref{fig_Ap}. With increasing HQ 3-momentum the thermal phase space of comoving partons (suitable for forming a Feshbach resonance) decreases and the relaxation rate approaches the pQCD results. For high energies and in Born approximation the $T$-matrix results recover the LO pQCD scattering amplitudes~\cite{Riek:2010fk}. The temperature dependence of the charm and bottom relaxation rates (at vanishing 3-momentum) used in our simulations is displayed in the lower panel of Fig.~\ref{fig_Ap}. They have been extrapolated linearly from the transition temperature in the lQCD calculations of the free energies~\cite{Kaczmarek:2005ui}, $T_c$=196\,MeV, to $T_c$=165\,MeV implicit in the equation of state as used in AZHYDRO. 
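For orientation, the momentum update of Sec.~\ref{ssec_2.1}, with the drag coefficient of Sec.~\ref{ssec_2.2}, can be sketched in a few lines of Python. This is a schematic illustration rather than the actual simulation code: \texttt{gamma(p, T)} is a placeholder for the $T$-matrix based relaxation rate, the update is written in the local fluid rest frame, and the post-point prescription is approximated by a simple predictor step.
\begin{verbatim}
import numpy as np

def langevin_step(p, m_Q, T, gamma, dt, rng=np.random.default_rng()):
    """One momentum update in the local fluid rest frame (schematic sketch).

    p     : heavy-quark 3-momentum [GeV]
    m_Q   : heavy-quark mass [GeV]
    T     : local temperature [GeV]
    gamma : callable Gamma(|p|, T) -> drag coefficient [1/fm]
            (placeholder for the T-matrix based relaxation rate)
    dt    : time step [fm/c]
    """
    p = np.asarray(p, dtype=float)
    rho = rng.standard_normal(3)                 # standard Gaussian noise variable
    E = np.sqrt(m_Q**2 + p @ p)
    G = gamma(np.linalg.norm(p), T)
    # predictor step using the pre-point diffusion coefficient D = Gamma*E*T
    p_pred = p - G * p * dt + np.sqrt(2.0 * G * E * T * dt) * rho
    # approximate the post-point prescription: evaluate the noise strength at the
    # predictor momentum, enforcing the fluctuation-dissipation relation D = Gamma*E*T
    # and hence a Boltzmann-Juettner stationary distribution at temperature T
    E_pred = np.sqrt(m_Q**2 + p_pred @ p_pred)
    D_post = gamma(np.linalg.norm(p_pred), T) * E_pred * T
    return p - G * p * dt + np.sqrt(2.0 * D_post * dt) * rho
\end{verbatim}
With a static medium and fixed temperature, iterating this step for many test particles and histogramming the momenta provides a quick check of the equilibrium limit discussed below in Sec.~\ref{ssec_2.5}.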
\subsection{The hydrodynamic background QGP medium} \label{ssec_2.3} Hydrodynamic simulations are widely applied to model the bulk evolution of the matter created in heavy-ion collisions at RHIC~\cite{Teaney:2001av,Heinz:2009xj,Hirano:2010je}, providing a good description of hadron spectra and their elliptic flow. Here we use a hydrodynamic simulation of the fireball to provide the background medium for HQ diffusion. It supplies the information on the space-time evolution of energy and entropy density, as well as temperature and fluid velocity which are needed to calculate the transport coefficients in the Langevin dynamics and on the hadronization hypersurface. We have employed the publicly available ideal 2+1-dimensional AZHYDRO code~\cite{Kolb:2003dz} in our study. It assumes longitudinal boost invariance~\cite{Bjorken:1982qr} and has been tuned to fit to bulk observables at kinetic freeze-out at an energy density of $e_{\rm fo}=0.075~{\rm GeV/fm^3}$ in $\sqrt{s_{NN}}=200~{\rm GeV}$ Au+Au collisions at RHIC~\cite{Kolb:2003dz}. The initialization of AZHYDRO is done at $\tau_0=0.6~{\rm fm/c}$ by specifying the entropy density distribution as \begin{equation} \label{AZHYDROentropy} s(\tau_0,x,y;b)=\kappa[\frac{1}{4}n_{\rm BC}(x,y;b) + \frac{3}{4} n_{\rm WN}(x,y;b)] \ , \end{equation} where $n_{\rm BC}$ and $n_{\rm WN}$ are the binary-collision and wounded-nucleon densities, respectively, calculated in the optical Glauber model~\cite{Kolb:2003dz}, and $b$ is the impact parameter. The coefficient $\kappa$ is fitted to the observed rapidity density of charged hadrons, $dN_{\rm ch}/dy$, and translates into an initial entropy density of $s(\tau_0,0,0;0)=110{\rm /fm^3}$ at the center of the transverse plane for central Au+Au collisions at RHIC. \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{AZHYDRO_epsilon_P.eps} \includegraphics[width=\columnwidt ]{light_quark_pT-spectrum.eps} \includegraphics[width=\columnwidt ]{light_quark_v2.eps} \caption{(Color online) (a) The time evolution of the asymmetry $\epsilon_p$ of the energy-momentum tensor in AZHYDRO for $b=7$ fm Au+Au collisions at $\sqrt{s_{\rm NN}} = 200$ GeV. (b) Light quark ($m_q=350~{\rm MeV}$) $p_t$-spectrum calculated with freeze-out at the end of the mixed phase with decoupling energy density $e_{\rm dec}=0.445~{\rm GeV/fm^3}$ in AZHYDRO (red solid line). It is compared to the light quark spectrum at the end of the mixed phase of the parameterized elliptic fireball model discussed in Sec.~\ref{sec_flow} (green dashed line). (c) The light quark elliptic flow $v_2$ at the end of the mixed phase. Again, AZHYDRO and the parameterized fireball results are shown.} \label{fig_AZHYDRO} \end{figure} In Fig.~\ref{fig_AZHYDRO} we summarize the main features of AZHYDRO relevant to our HQ diffusion calculations. The upper panel displays the time evolution of the energy-momentum anisotropy, $\epsilon_p=\langle T^{xx}-T^{yy}\rangle/\langle T^{xx}+T^{yy}\rangle$ for semi-central collisions ($b$=7\,fm); it exhibits the development of the bulk anisotropy which leads to an elliptic flow for final-state particles~\cite{Kolb:2003dz}. 
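Before turning to the time evolution in more detail, the initialization of Eq.~(\ref{AZHYDROentropy}) can be made concrete with a short script. This is only a sketch: the Woods-Saxon parameters for Au ($\rho_0\simeq0.17~{\rm fm^{-3}}$, $R\simeq6.38$\,fm, $a\simeq0.535$\,fm) and the inelastic cross section $\sigma_{NN}\simeq42$\,mb are typical literature values assumed here for illustration, and the constant $\kappa$ is understood to be fixed by $dN_{\rm ch}/dy$ as described above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# assumed Woods-Saxon parameters for Au and sigma_NN at 200 GeV (illustrative)
RHO0, R_WS, A_WS, SIG_NN, A_MASS = 0.17, 6.38, 0.535, 4.2, 197  # fm^-3, fm, fm, fm^2

def thickness(x, y):
    """Nuclear thickness T_A(x,y) = int dz rho(sqrt(x^2+y^2+z^2)) [fm^-2]."""
    rho = lambda z: RHO0 / (1.0 + np.exp((np.hypot(np.hypot(x, y), z) - R_WS) / A_WS))
    return 2.0 * quad(rho, 0.0, 3.0 * R_WS)[0]

def glauber_densities(x, y, b):
    """Binary-collision and wounded-nucleon densities of the optical Glauber model."""
    TA, TB = thickness(x + 0.5 * b, y), thickness(x - 0.5 * b, y)
    n_bc = SIG_NN * TA * TB
    n_wn = (TA * (1.0 - (1.0 - SIG_NN * TB / A_MASS) ** A_MASS)
            + TB * (1.0 - (1.0 - SIG_NN * TA / A_MASS) ** A_MASS))
    return n_bc, n_wn

def entropy_density(x, y, b, kappa):
    """Initial entropy density: 1/4 binary-collision + 3/4 wounded-nucleon mixture."""
    n_bc, n_wn = glauber_densities(x, y, b)
    return kappa * (0.25 * n_bc + 0.75 * n_wn)
\end{verbatim}
The same $n_{\rm BC}(x,y;b)$ also serves as the sampling weight for the initial transverse positions of the heavy quarks in Sec.~\ref{ssec_2.4}.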
One sees that $\epsilon_p$ tends to saturate at later times when the spatial anisotropy of the system has essentially vanished; the dip around $\tau\simeq5$~fm/$c$ is due to the vanishing acceleration in the mixed phase, which, in turn, is a result of the equation of state (EoS) with a Maxwell construction between a non-interacting QGP with a bag constant, $B=0.3642~{\rm GeV/fm^3}$ at $T>T_c=165$\,MeV, and a hadronic resonance gas at $T<T_c$. Since in our Langevin simulations the HQ test particles freeze out at the end of the mixed phase (at $e_{\rm dec}=0.445~{\rm GeV/fm^3}$), we show in the middle and lower panels of Fig.~\ref{fig_AZHYDRO} the light-quark $p_t$-spectrum and $v_2$ at this point, respectively. The light-quark mass is taken as $m_q=350$ MeV and we used the standard Cooper-Frye freeze-out procedure~\cite{Cooper:1974mv,Kolb:2003dz}. For comparison we also show the results of an empirical fireball parametrization of quark distributions extracted from multi-strange hadron spectra in Ref.~\cite{He:2010vw}. The quark-$p_t$ spectra are noticeably harder than in the hydrodynamic evolution. Since multi-strange particles are believed to kinetically decouple close to $T_c$, this suggests that the hydrodynamic evolution in the default AZHYDRO does not generate enough flow in the QGP. To investigate the effect of a larger flow on HQ spectra we will also conduct Langevin simulations with a schematic fireball whose final-state flow is given by the empirically extracted quark spectra. The pertinent elliptic flow exhibits slightly flatter $p_t$ dependence than in the hydrodynamic simulation, cf.~lower panel of Fig.~\ref{fig_AZHYDRO}. However, the integrated quark elliptic flow of $\langle v_2 \rangle =4.99\%$ is very close to the hydro result of $\langle v_2 \rangle =5.03\%$, representing the benchmarks from which the heavy quarks acquire $v_2$ through heavy-light parton interactions. However, another 20-30\% is typically built up in the hadronic evolution below $T_c$ (recall the upper panel of Fig.~\ref{fig_AZHYDRO}) which is neglected in the present study. \subsection{Initial Distributions of Heavy Quarks} \label{ssec_2.4} The number of heavy quarks produced in heavy-ion collisions is consistent with binary nucleon-nucleon collision scaling~\cite{Adler:2004ta}. Thus their initial spatial distribution is expected to follow the binary collision density, $n_{\rm BC}(x,y;b)$ which we adopt in our simulations within the transverse area where the energy density, $e(\tau_0,x,y)$, is larger than the decoupling value $e_{\rm dec}=0.445~{\rm GeV/fm^3}$. For the initial HQ momentum distribution, we use the same spectrum as in Refs.~\cite{vanHees:2004gq,vanHees:2005wb}, where PYTHIA results for charm- and bottom-quark spectra, converted into $D$ and $B$ mesons via $\delta$-function fragmentation, were tuned to semi-leptonic electron-decay spectra as measured in $p+p$ and $d+$Au collisions at RHIC. This procedure leads to a bottom-to-charm cross section ratio of $\sigma_{b\bar b}/\sigma_{c\bar c}=4.9\times10^{-3}$, and a crossing of the electron spectra from $D$- and $B$-meson decays at $p_t^e \approx 5$ GeV, see Fig.~\ref{fig_initial-e}. The $b/c$ cross-section ratio is within the range of pQCD predictions~\cite{Cacciari:2005rk} and turns out to reproduce fairly well experimental data~\cite{Adare:2009ic,Aggarwal:2010xp} for the $p_t$-dependence of the ratio of electrons from $B$-mesons to the sum from $D$+$B$, cf.~lower panel of Fig.~\ref{fig_initial-e}. 
The $B$-meson contribution becomes sizable for $p_t^e\gsim3$\,GeV. \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidth]{initial_DB_decay_electron_pT_spectrum.eps} \includegraphics[width=\columnwidth]{initial_Be_vs_DBe
.eps} \caption{(Color online) (a) Electron spectra from semileptonic decays of $D$- and $B$-mesons (obtained from initial $c$- and $b$-quark spectra with $\delta$-function fragmentation) in $p+p$ collisions at RHIC energies. (b) Transverse-momentum dependence of the relative contribution of electrons from $B$-mesons to electrons from $D$+$B$ decays. The solid curve results from the spectra in the upper panel which we adopt in our calculations; the data are from PHENIX~\cite{Adare:2009ic} (filled squares) and from STAR~\cite{Aggarwal:2010xp} (filled circles) for $p+p$ collisions at $\sqrt{s}=200$\,GeV.} \label{fig_initial-e} \end{figure} \subsection{Heavy-Quark Spectra and Elliptic Flow} \label{ssec_2.5} We now combine the ingredients as specified in the previous sections to perform the hydro+Langevin simulation of HQ diffusion in the QGP using the test-particle method. A vector ($\mathbf{x}_0, \mathbf{p}_0$) in transverse phase space, representing a heavy quark, is generated by Monte Carlo methods following the initial distributions discussed in Sec.~\ref{ssec_2.4}. Then we follow the trajectory of the heavy quark in phase space in equal time steps in the lab frame. At each time step, we read off the temperature, energy density and velocity of the fluid cell at the current HQ position, ($\tau,x,y,\eta=0$). The drag coefficient is determined by the HQ momentum in the fluid rest frame and the temperature of the fluid cell. The momentum of the heavy quark is updated stochastically in the fluid rest frame according to the Langevin rule in Eq.~(\ref{Langevinrule2}) and boosted back to the lab frame using the fluid velocity. The HQ position is updated in the lab frame, which can be shown to be equivalent to an update in the fluid rest frame. Test particles that have diffused away from $\eta=0$ to rapidity $y$ and space-time rapidity $\eta$ are redefined from the longitudinal phase-space coordinate $(\eta;y)$ to $(0;y-\eta)$ to enforce boost-invariance. The heavy quark continues to diffuse in the QGP until the local energy density of the fluid drops below the decoupling value, $e_{\rm dec}=0.445~{\rm GeV/fm^3}$, corresponding to the end of the mixed phase of the cell. At that point, we assume the heavy quark to decouple from the fireball and mark it for hadronization. We do not take into account a possible local reheating if the expanding QGP phase ``swallows" again an already hadronized heavy quark due to the increasing matter flow. Our criterion for the decoupling of heavy quarks automatically yields their flux across the hadronization hypersurface as \begin{equation} f_Q(\tau,x,y;\mathbf{p}) p_\mu d\sigma^\mu(\tau,x,y)/E(\mathbf{p}) \label{eq:flux} \end{equation} for any area element $d\sigma^\mu(\tau,x,y)$ on that surface, in accordance with the Cooper-Frye formalism for the hydrodynamic freeze-out. \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{equilibrium_charm_pT_spectrum.eps} \includegraphics[width=\columnwidt ]{equlibrium_charm_v2.eps} \caption{(Color online) (a) The charm quark $p_t$-spectrum obtained from hydro+Langevin simulations with a large drag coefficient, $\Gamma=40.0/\sqrt{E}/{\rm fm}$ (dots), compared to the equilibrated charm-quark spectrum calculated from the $e_{\rm dec}=0.445~{\rm GeV/fm^3}$ freeze-out hypersurface in AZHYDRO (red solid line). The blue dashed line is the initial charm-quark spectrum with the same total yield. 
(b) The same comparison as in (a) but for the elliptic flow.} \label{fig_c-equil} \end{figure} It is critical to verify that heavy quarks can reach local equilibrium as the stationary solution~\cite{Walton:1999dy}. We have checked the equilibrium limit with an artificially increased drag coefficient, $\Gamma=40/\sqrt{E\,{\rm[GeV]}}/{\rm fm}$, and a homogeneous initial spatial distribution for test particles in the transverse plane. This specific choice for the energy dependence of $\Gamma$ resembles the momentum dependence of the $T$-matrix based coefficients.\footnote{We have verified that the higher-order terms in Eq.~(\ref{Gamma-A}), which are dropped in our Langevin simulations below, are negligible for the much smaller ``realistic" coefficients.} The size of the numerical coefficient ($\sim$40) in the large-$\Gamma$ case is limited by the requirement that the numerical time-step in the Langevin process be smaller than the inverse relaxation rate. In the upper panel of Fig.~\ref{fig_c-equil} the Langevin charm-quark spectrum with large coefficients is compared to the distribution from Cooper-Frye freeze-out on the $e_{\rm dec}=0.445~{\rm GeV/fm^3}$ hypersurface in AZHYDRO, i.e., charm quarks in complete local thermal equilibrium. We have adopted a charm-quark mass of $m_c=1.8$\,GeV, corresponding to the in-medium mass at $T_c=165$\,MeV in our simulations~\cite{Riek:2010fk}. The spectra agree well up to $p_t\simeq 4.0-4.5~{\rm GeV}$. The deviation at higher $p_t$ is due to surface emission of charm quarks with large velocities which escape the active (i.e.\ $e(\tau,x,y)\geq e_{\rm dec}$) part of the fireball at the earliest times; roughly 1\% of heavy quarks at a given high $p_t$ do not suffer collisions, corresponding to the factor $\sim$100 suppression of spectra from the Langevin simulation relative to the initial distribution at large $p_t$. A matching picture is observed for the elliptic flow (lower panel in Fig.~\ref{fig_c-equil}): at low $p_t$ the $v_2$ of the hydro+large-$\Gamma$-Langevin simulation follows the $v_2$ of equilibrated charm quarks, while it breaks away and oscillates around zero for large $p_t$ (deviations set in slightly earlier than for the inclusive $p_t$ spectra, presumably since $v_2$ is a more differential and thus more ``fragile" quantity). \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{cb_RAA.eps} \includegraphics[width=\columnwidt ]{cb_v2.eps} \caption{(Color online) Nuclear modification factor (upper panel) and elliptic flow (lower panel) of charm (red solid line) and bottom quarks (green dashed line) at hadronization obtained from hydro+Langevin simulations for $b$=7\,fm Au+Au collisions at RHIC energy, using transport coefficients from the heavy-light quark $T$-matrix plus a pQCD HQ-gluon contribution. For comparison charm-quark results are shown with coefficients using only LO pQCD scattering off gluons light quarks (blue dotted line).} \label{fig_hq-raa-v2} \end{figure} Next we turn to the results of our simulations under ``realistic" conditions, using the transport coefficients and initial distributions outlined above, together with temperature-dependent in-medium HQ masses~\cite{Riek:2010fk}. 
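Schematically, one simulation step of the procedure described in Sec.~\ref{ssec_2.5} can be summarized as in the sketch below. It is illustrative only: \texttt{hydro\_cell} stands in for the AZHYDRO interface returning the local energy density, temperature and flow velocity, \texttt{update\_momentum} for a rest-frame Langevin kick such as the one sketched after Sec.~\ref{ssec_2.2}, and the time-dilation bookkeeping between the lab and fluid frames is suppressed for brevity.
\begin{verbatim}
import numpy as np

def boost(p4, v):
    """Boost four-momentum p4 = (E, px, py, pz) into a frame moving with velocity v."""
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    if v2 < 1e-14:
        return np.asarray(p4, dtype=float)
    g = 1.0 / np.sqrt(1.0 - v2)
    E, p = p4[0], np.asarray(p4[1:], dtype=float)
    vp = v @ p
    return np.concatenate(([g * (E - vp)], p + ((g - 1.0) * vp / v2 - g * E) * v))

def evolve_heavy_quark(x, p, m_Q, dt, hydro_cell, update_momentum,
                       tau0=0.6, e_dec=0.445):
    """Propagate one HQ test particle until its cell leaves the mixed phase (sketch)."""
    tau, x, p = tau0, np.asarray(x, float), np.asarray(p, float)
    while True:
        e, T, v_fluid = hydro_cell(tau, x[0], x[1])          # local cell properties
        if e < e_dec:
            return tau, x, p                                 # decouple; mark for hadronization
        E = np.sqrt(m_Q**2 + p @ p)
        p_rest = boost(np.concatenate(([E], p)), v_fluid)[1:]            # to fluid rest frame
        p_rest = update_momentum(p_rest, T, dt)                          # Langevin kick
        E_rest = np.sqrt(m_Q**2 + p_rest @ p_rest)
        p = boost(np.concatenate(([E_rest], p_rest)), -np.asarray(v_fluid))[1:]  # back to lab
        x = x + dt * p[:2] / np.sqrt(m_Q**2 + p @ p)                     # lab-frame position update
        tau += dt
\end{verbatim}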
As usual, the modifications of the HQ spectra in the medium are quantified by the nuclear modification factor and elliptic flow, \begin{align} \label{RAAv2} R_{AA}(p_t,y) &= \frac{\frac{dN_{\rm AA}}{dp_tdy}}{N_{\rm coll} \frac{dN_{\rm pp}}{dp_tdy}} \ ,\\ v_2(p_t,y) &= \frac{\int d\phi\frac{dN_{\rm AA}}{dp_td\phi dy}\cos(2\phi)}{\int d\phi\frac{dN_{\rm AA}}{dp_td\phi dy}} \ , \end{align} respectively, where $N_{\rm coll}$ is the estimated number of binary nucleon-nucleon collisions for the centrality bin under consideration. In Fig.~\ref{fig_hq-raa-v2} we display the charm- and bottom-quark $R_{\rm AA}$ and $v_2$ at the end of the mixed phase as obtained from hydro+Langevin simulations in semi-central Au+Au collisions ($b$=7\,fm). The approach toward thermalization induces a depletion of heavy quarks at large $p_t$ (quenching) and an enhancement at low $p_t$ enforced by HQ number conservation. At $p_t\simeq5$\,GeV, the charm-quark quenching reaches down to $\sim$0.4 while bottom quarks are much less affected, with $R_{\rm AA}$($p_t$=5\,GeV)$\simeq0.8$. Note that at the same $p_t$ the Lorentz-$\gamma$ of bottom quarks is significantly smaller than for charm. Radiative contributions to HQ transport are estimated to become competitive with elastic scattering once the non-perturbative effects are suppressed, i.e., above $p_t\simeq$~4-5\,GeV for charm quarks~\cite{Rapp:2009my} (recall Fig.~\ref{fig_Ap}). When using a drag coefficient from pQCD elastic scattering only (including both quarks and gluons with $\alpha_s=0.4$), the quenching is weaker by about a factor of $\sim$3~\cite{vanHees:2005wb}. Similar features are found in the elliptic flow coefficient, which for $c$-quarks first increases approximately linearly before leveling off at about 4.5\%, characterizing a transition from a quasi-thermal to a kinetic regime. Previous calculations employing a thermal-fireball model for the medium evolution, using drag coefficients for non-perturbative elastic scattering~\cite{vanHees:2004gq} of comparable magnitude as in our calculation, have found significantly larger values for the maximal charm-quark $v_2$ of around 7.5\%~\cite{vanHees:2005wb}. Part of this difference originates from the larger ``intrinsic" $v_2$ in the fireball medium which has been adjusted to the empirically observed hadron-$v_2$ of 5.5-6\%. Since the diffusion coefficient of charm ($D$-mesons) in the hadronic phase is not negligible~\cite{He:2011yi}, the HF $v_2$ in the present study should be considered as a lower bound. Another source of uncertainty derives from the freeze-out prescription and the associated realization of the HQ Langevin process (Cooper-Frye in the hydro evolution vs. Milekhin-like in some fireball calculations)~\cite{Gossiaux:2011ea}. \section{Heavy-Quark Hadronization} \label{sec_hadronization} The bulk matter in a hydrodynamic simulation can be evolved through a phase transition (here QGP to hadronic matter) solely by specifying the equation of state of the medium. However, the HQ spectra resulting from the Langevin simulations through the QGP are, in general, not in full equilibrium with the bulk medium and thus require a microscopic hadronization mechanism to enable the calculation of HF observables. We will carry this out at the end of the mixed phase, represented by the hypersurface defined by the critical energy density of the hadronic phase in the hydrodynamic simulation. 
For simplicity we focus on the formation of $D$- and $B$-mesons neglecting HF baryons and hidden heavy flavor (both of which have been found to give small contributions to the total HF content of the hadronic phase~\cite{vanHees:2005wb}). Two microscopic hadronization mechanisms have been considered in heavy-ion physics to date: independent fragmentation of partons and coalescence of quarks. The former is appropriate for large-momentum partons emerging directly from initial hard processes, with phenomenological fragmentation functions simulating vacuum gluon radiation and color neutralization. Coalescence, on the other hand, is believed to dominate in the low-momentum regime where partons are abundant in phase-space in heavy-ion~\cite{Fries:2003vb,Greco:2003mm,Fries:2008hs} and even in elementary hadronic reactions~\cite{Rapp:2003wn,Hwa:2005}. Several previous studies of HQ diffusion in heavy-ion collisions have neglected coalescence processes~\cite{Akamatsu:2008ge,Das:2009vy,Alberico:2011zy,Uphoff:2010sh}, thus limiting the applicability of HF observables to high momenta. The formation time of heavy quarks is comparatively short, and thereafter their virtuality is small, governed by interactions with the medium with modest momentum transfers. Hence, fragmentation is not effective. In the Langevin simulations of Refs.~\cite{vanHees:2005wb,vanHees:2007me,Gossiaux:2008jv}, heavy-light quark recombination has been accounted for~\cite{Greco:2003vf} and found to be important for increasing {\em both} the elliptic flow and the nuclear modification factor of the resulting $D$-meson spectra. The coalescence formalism was based on the widely used instantaneous approximation~\cite{Fries:2003vb,Greco:2003mm} which, however, does not conserve energy in the $2\to1$ hadron formation process. A related problem is the lack of a well-defined equilibrium limit for the hadron distributions. Together, both features imply appreciable uncertainties in calculating HF observables in the low-$p_T$ region (albeit suppressed compared to light-quark coalescence by a mass ratio $(m_q/m_c)$). To improve the coalescence description and achieve consistency with local kinetic equilibrium, we here employ the resonance recombination model (RRM) implemented on the hydrodynamic hadronization surface. \subsection{Resonance Recombination at the Hadronization Hypersurface} \label{ssec_rrm} In the RRM the hadronization of constituent quarks is treated via resonance scattering within a Boltzmann transport equation~\cite{Ravagli:2007xx}. For scattering rates which are large compared to the inverse hadronization time, $\Gamma_{\rm res}\gg 1/\tau_{\rm had}$, equilibrium quark distribution functions in a flowing medium are converted into equilibrium meson spectra with the same flow properties, including elliptic anisotropies with space-momentum correlations characteristic for a hydrodynamically expanding source~\cite{Ravagli:2008rt,He:2010vw}. The RRM has been employed previously to investigate kinetic-energy and constituent-quark number scaling~\cite{Ravagli:2008rt}, and to extract empirical quark distribution functions of the bulk medium at hadronization at RHIC~\cite{He:2010vw}. The RRM is consistent with the heavy-light Feshbach resonance formation found in the $T$-matrix calculation of the HQ thermal relaxation rate (see Section~\ref{ssec_2.2}). It reiterates the important role played by resonance correlations in our work. 
As the temperature drops towards $T_c$, the resonance correlations in the heavy-light quark $T$-matrix strengthen (recall Fig.~\ref{fig_Tmat}) and thus naturally merge into heavy-light quark recombination processes. When implementing the latter via a Breit-Wigner ansatz one obtains the HF meson distribution from the asymptotic solution of the Boltzmann equation as~\cite{Ravagli:2007xx} \begin{multline} f_M^{\mathrm{asymp}}(\mathbf{x},\mathbf{p})=\frac{E_M(\mathbf{p})}{m_M\Gamma_M} \int\frac{d^3p_1d^3p_2}{(2\pi)^6}f_Q(\mathbf{x},\mathbf{p}_1) \\ \times f_{\bar q}(\mathbf{x},\mathbf{p}_2) \ \sigma(s) \ v_{\rm rel}(\mathbf{p}_1,\mathbf{p}_2) \ \delta^{(3)}(\mathbf{p} -\mathbf{p}_1 -\mathbf{p}_2) \ , \label{rrm} \end{multline} where $f_{Q,q,M}$ are equal-time phase-space distributions of heavy quarks, light quarks, mesons, respectively, $v_{\rm rel}$ is the relative velocity of the recombining heavy and light quarks, and $m_M$ and $\Gamma_M$ are the mass and width of the meson resonance ~\cite{Ravagli:2007xx,Ravagli:2008rt,He:2010vw}. In the calculations below we employ masses and widths compatible with the $T$-matrix calculation of Ref.~\cite{Riek:2010fk}, extrapolated to $T_c=165$~MeV with $m_c=1.8$\,GeV, $m_q=0.35$\,GeV, $m_D=2.25$ GeV and $\Gamma_D=0.1$\,GeV. Energy conservation and detailed balance in RRM ensure an equilibrium mapping between the distributions of quarks and formed mesons~\cite{Ravagli:2008rt,He:2010vw}. We verify this in the present case of a non-trivial freeze-out hypersurface given by AZHYDRO. We use local charm- and light-quark equilibrium phase-space distributions, $f(p,x)=e^{-p\cdot u(x)/T}$, with fluid velocities given by AZHYDRO at the end of the mixed phase, then apply resonance recombination, Eq.~(\ref{rrm}), locally (for each cell) to obtain the local meson phase-space distribution $f_M(\tau,x,y;\mathbf{p})$. Finally we calculate the current across the hypersurface and sum over all fluid cells on the $e_{\rm dec}=0.445~{\rm GeV/fm^3}$ freeze-out hypersurface, \begin{equation} \label{CF} \frac{dN}{p_Tdp_Td\phi dy}= \int_{\Sigma}\frac{p_\mu d\sigma^\mu(\tau,x,y)}{(2\pi)^3} f_M (\tau,x,y;\mathbf{p}) \ . \end{equation} In Fig.~\ref{fig_hydroRRM} we compare the resulting $D$-meson spectrum and $v_2$ with a calculation directly from hydro using $D$-meson Cooper-Fry freeze-out on the same hypersurface ($e_{\rm dec}=0.445~{\rm GeV/fm^3}$). The close agreement of the two calculations verifies the mapping between the equilibrium quark and meson distributions in RRM, including the full space-momentum correlations encoded in the AZHYDRO flow field. Longitudinal boost invariance of AZHYDRO is preserved by RRM as well, as observed from the independence of the $D$-meson spectra on rapidity within our accuracy. \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{hydro_RRM_D-meson_pT-spectrum.eps} \includegraphics[width=\columnwidt ]{hydro_RRM_D-meson_v2.eps} \caption{(Color online) (a) $D$-meson $p_T$-spectrum calculated with RRM on the AZHYDRO hadronization hypersurface (circles), compared to a direct calculation from AZHYDRO using the Cooper-Frye formula on the same hypersurface (solid line). The $D$-meson spectra at different rapidities ($y_D=0.0$ and $y_D=\pm 0.5$) calculated from RRM agree with each other. (b) The same comparison for the elliptic flow of $D$ mesons.} \label{fig_hydroRRM} \end{figure} The next step is to extend our approach to hadronize off-equilibrium quark distribution functions emerging from our HQ Langevin simulations. 
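To make Eq.~(\ref{rrm}) concrete, a brute-force Monte Carlo evaluation for a single fluid cell and a single meson momentum can be sketched as follows. This is not the production implementation: the Breit-Wigner form of $\sigma(s)$, its normalization, and the size of the sampling box are illustrative choices, with masses and width taken from the values quoted above.
\begin{verbatim}
import numpy as np

M_C, M_Q, M_D, GAM_D = 1.8, 0.35, 2.25, 0.1   # GeV, values quoted in the text

def sigma_bw(s):
    """Illustrative Breit-Wigner resonance cross section (arbitrary normalization)."""
    return M_D**2 * GAM_D**2 / ((s - M_D**2) ** 2 + M_D**2 * GAM_D**2)

def f_meson_rrm(p_M, f_Q, f_q, n_samples=200000, p_max=4.0, seed=1):
    """Monte Carlo estimate of the RRM meson phase-space density at momentum p_M.

    f_Q(E, p) and f_q(E, p) are the equal-time heavy- and light-quark
    distributions of this cell (e.g. exp(-p.u/T) for the equilibrium check);
    the momentum delta function fixes p2 = p_M - p1, so only p1 is sampled.
    """
    rng = np.random.default_rng(seed)
    p_M = np.asarray(p_M, dtype=float)
    E_M = np.sqrt(M_D**2 + p_M @ p_M)
    p1 = rng.uniform(-p_max, p_max, size=(n_samples, 3))
    p2 = p_M - p1
    E1 = np.sqrt(M_C**2 + np.sum(p1**2, axis=1))
    E2 = np.sqrt(M_Q**2 + np.sum(p2**2, axis=1))
    s = (E1 + E2) ** 2 - p_M @ p_M
    p1dotp2 = E1 * E2 - np.sum(p1 * p2, axis=1)      # Minkowski product p1.p2
    v_rel = np.sqrt(np.maximum(p1dotp2**2 - (M_C * M_Q) ** 2, 0.0)) / (E1 * E2)
    integrand = f_Q(E1, p1) * f_q(E2, p2) * sigma_bw(s) * v_rel
    return E_M / (M_D * GAM_D) * (2.0 * p_max) ** 3 * integrand.mean() / (2.0 * np.pi) ** 6
\end{verbatim}
Summing the resulting $f_M$ over the hypersurface cells with the Cooper-Frye weight of Eq.~(\ref{CF}) then gives the meson spectrum used for comparisons such as Fig.~\ref{fig_hydroRRM}.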
In order to couple the RRM to a HQ test particle freezing out from the hydro-Langevin simulation with momentum $\mathbf{p}_{\rm dec}$ and coordinate $\mathbf{x}_{\rm dec}$, we represent the corresponding local equal-time HQ phase-space distribution on the hadronization hypersurface, $f_Q$ from Eq.~(\ref{eq:flux}), by a $\delta$-function, $\delta^3(\mathbf{x} -\mathbf{x}_{\rm dec})\delta^3(\mathbf{p}-\mathbf{p}_{\rm dec})$ at the hadronization time $\tau(x_{\rm dec},y_{\rm dec})$. As before, the light-quark phase space density $f_q$ at this point is taken to be the equilibrium distribution $e^{-p\cdot u/T}$ at the local temperature and flow, and we can apply Eq.~(\ref{rrm}) to obtain the phase space density $f_M$ of heavy mesons test particle by test particle. Finally, the spectrum of heavy mesons follows from Eq.~(\ref{CF}) as a sum over test particles. To check our procedure we first apply it to the Langevin output in the large-coefficient limit for charm quarks which -- as discussed in Sec.~\ref{ssec_2.5} -- follows the equilibrium distribution up to $p_t\simeq$~4\,GeV. Figure~\ref{LangeviRRMequilibrium} shows the comparison between the $D$-meson spectrum and $v_2$ calculated in this way and the $D$-mesons from a direct AZHYDRO calculation. Compared to Fig.~\ref{fig_c-equil}, the agreement of the hydro+Langevin+RRM calculation with the direct hydro calculation extends to slightly larger $p_T$ since the recombination essentially acts as an additional heavy-light interaction when forming $D$-mesons. This connection is particularly transparent if the interaction used in the diffusion process is the same as in the recombination process, i.e., a resonance interaction in our approach. We conclude that the equilibrium limit is well under control in this framework. \begin{figure}[!t] \hspace{4mm} \includegraphics[width=\columnwidt ]{LangevinRRMequilibrium_D-meson_pT-spectrum.eps} \includegraphics[width=\columnwidt ]{LangevinRRMequilibirum_D-meson_v2.eps} \caption{(Color online) (a) $D$-meson $p_T$-spectrum (stars) calculated from RRM on the hadronization hypersurface applied to charm-quark spectra from hydro+Langevin simulations in the large-drag coefficient limit (corresponding to Fig.~\ref{fig_c-equil}). It is compared to the $D$-meson spectrum directly calculated from AZHYDRO on the same hypersurface. (b) Same as in panel (a) but for $D$-meson elliptic flow.} \label{LangeviRRMequilibrium} \end{figure} \subsection{Hadronization: Coalescence vs Fragmentation} \label{ssec_coal-frag} After establishing a coalescence formalism for hadronization of off-equilibrium HQ distributions on an arbitrary hypersurface it remains to couple this contribution with the standard fragmentation mechanism representing the large-$p_t$ limit where the phase-space density of light partons vanishes (vacuum limit). In most previous works the coalescence probability has been evaluated in an instantaneous approximation which did not allow for a full control over its absolute magnitude. Here, instead, we use a dynamic criterion which directly follows from the RRM formalism; it is based on the HQ scattering rate which is derived from the same interactions as used in the diffusion calculations\footnote{See Refs.~\cite{vanHees:2007me,He:2011yi} for a recent discussion of the relation between the two in the HF context}. 
We thus make the following ansatz for the HQ coalescence probability in the fluid rest frame: \begin{equation} P_{\rm coal}(p) = \Delta\tau_{\rm res} \ \Gamma_Q^{\rm res}(p) \ , \label{Pcoal} \end{equation} which is Lorentz invariant. In Eq.~(\ref{Pcoal}), the scattering rate, $\Gamma_Q^{\rm res}= n_{q} \langle \sigma_{qQ}^{\rm res} \ v_{\rm{rel}}\rangle$, refers to the resonant part of the $Qq$ cross section, $\sigma_{qQ}^{\rm res}$ (or $T$-matrix), and thus represents the rate for hadron formation ($n_{q}$: light-quark density, $v_{\rm rel}$: relative velocity). The time interval $\Delta\tau_{\rm res}$ characterizes the window in the dynamic medium evolution during which resonance states exist; typically, this corresponds to the duration of the hadronization transition, i.e., the ``mixed phase", or even longer depending on whether ``pre-resonance" states can be formed above $T_c$. Of course, if the product $\Delta\tau_{\rm res}~\Gamma_Q^{\rm res}$ exceeds one, $P_{\rm coal}$ should be put to one, corresponding to the equilibrium limit (more accurately, one could apply an exponential relaxation, but in view of the practical uncertainties in the values for $\Delta\tau_{\rm res}$ and $\Gamma_Q^{\rm res}(p)$ this is currently not warranted). We emphasize that this procedure provides an absolute normalization of the coalescence contribution, consistent with the (unique) equilibrium limit. Since the resonance formation rate naturally diminishes with increasing HQ momentum (the phase-space density of quarks from the thermal bath to match the resonance mass decreases), one obtains an increasing fraction, $P_{\rm frag}(p) = 1- P_{\rm coal}(p)$, of heavy quarks undergoing independent fragmentation, recovering the vacuum limit. In practice, for our calculations reported below, we evaluate Eq.~(\ref{Pcoal}) as follows. For the HQ scattering rate we employ a Breit-Wigner cross section which is consistent with our heavy-light $T$-matrix (cf.~Sec.~\ref{ssec_2.2}) and approximately reproduces the color-singlet contribution to the HQ thermal relaxation rate at $T_c$; the pertinent meson width of $\Gamma_M$=0.4\,GeV is larger than the one used in the RRM expression, Eq.~(\ref{rrm}), but the resulting meson spectra and elliptic flow are rather insensitive to this quantity~\cite{Ravagli:2007xx,Ravagli:2008rt} (we neglect resonant diquark contributions since we only consider color-singlet scattering relevant for $D$- and $B$-meson formation). With this cross section we calculate the HQ scattering rate in the fluid rest frame. We typically find $\Gamma_c^{\rm res}\approx$~0.1\,GeV for charm quarks at vanishing momentum (similar for bottom quarks), consistent with Refs.~\cite{Riek:2010fk,Riek:2010py} (about half of the total HQ width of $\sim$0.2\,GeV calculated in these works is due to the color-singlet part). The HQ scattering rate is then boosted to the lab frame at the end of Langevin simulation (mixed phase) and expressed as a function of HQ transverse momentum. For simplicity, we have chosen to apply $P_{\rm coal}$ not test-particle-by-test-particle but averaged over the spatial dependence in the fireball, i.e., as a function of $p_t$ only. We have checked that the explicit inclusion of space-momentum correlations leads to very similar results for
), antiproton$-$proton~($\overline{p}p$), nucleus$-$nucleus~($AA$) \cite{Biro,Urmo,Cley,Beck,Wilk} and $e^{+}e^{-}$ \cite{Bed,SS} collisions.~It has also been shown by K.~Urmossy et al \cite{Gam1} that in very high energy collisions, if only the momentum-energy conservation in hadronisation is taken into account, the average momentum distribution of produced particles can be considered as a micro-canonical generalisation of the Tsallis distribution.~Each final state particle created in a collision is identified with a microstate of a micro-canonical ensemble with scaling volume fluctuations.~If the momentum distribution in events with fixed multiplicity is micro-canonical, the shifted multiplicity follows the Gamma distribution.~In the Tsallis $q$-statistics, the entropy of standard statistical mechanics becomes non-extensive.~This non-extensive property of the entropy is quantified by a parameter $q$, known as the entropic index, which on account of the non-extensive behaviour exceeds unity.~Most of the analyses of the data from different kinds of collisions have been done to study the transverse momentum distributions and fragmentation functions.~At LHC energies, some analyses \cite{DE, PL,T,Azmi} have been done to study the $p_{T}$ spectra by using the Tsallis distribution.~However, the analysis of the multiplicities has been done only in very few cases \cite{Jan1, Jan2, Urmo3}, using different approaches.~In addition, analyses of the data from various experiments at the RHIC and at the LHC have shown excellent fits to the transverse momentum distributions with the Tsallis-like distribution \cite{JCley}.~A. Capella et al \cite{Cap} have studied the multiplicities at the LHC in the Pomeron model. In this paper, the first study of multiplicity distributions in proton-proton collisions at LHC energies in restricted central rapidity windows is reported.~Energy-momentum conservation strongly influences the multiplicity distribution for the full phase space.~The distribution in restricted rapidity windows, however, is less prone to such constraints and can thus be expected to be a more sensitive probe of the underlying dynamics of QCD, as inferred in references \cite{Gam1, Gam2}.~We also study the dependence of the entropic index $q$, an important parameter in the Tsallis $q$-statistics, on the collision energy. After this brief introduction in section I, we describe in section II the essential features of the three distributions, Tsallis, Gamma and shifted-Gamma, and the definition of rapidity used.~Section III gives details of the data used and the results of our comparison of the three distributions.~Section IV presents the conclusions.
\section{PARTICLE PRODUCTION AND THE DISTRIBUTION} The charged particles produced in a collision are emitted at all angles and measured in terms of the rapidity, defined as $y = \frac{1}{2}\ln\left(\frac{E+p_{L}}{E-p_{L}}\right)$, where $E$ is the particle energy and $p_{L}$ is the longitudinal momentum.~The number of particles produced is distributed according to some probability distribution function (PDF), with the mean of the distribution coinciding with the average number of produced particles, called the average multiplicity.~We discuss three such PDFs in the following subsections. \subsection{The Gamma and the shifted-Gamma distributions} The Gamma distribution is a very basic distribution which describes the multiplicity distributions at lower energies very well.~Inclusive data on $e^{+}e^{-}\rightarrow h^{\pm} + X$ from the LEP experiments were studied by Urmossy et al \cite{Gam1} by considering a sample of two$-$jet events.~They used the Boltzmann-Gibbs and micro-canonical distributions in one dimension.~They showed that a Gamma distribution of the shifted multiplicity, $N-N_{0}$, can result in a Tsallis or micro-canonical Tsallis shaped spectrum. The probability density functions for the Gamma and the shifted-Gamma distributions are given below: \begin{equation} P_N = A N^{\alpha-1} e^{-\beta N} \hspace{0.8cm} \end{equation} where the shape parameter $\alpha$, the rate parameter $\beta$ and the normalization $A$ are the fit parameters of the distribution.~The corresponding average momentum distribution is of the Tsallis form. A shift in the multiplicity, $N \rightarrow (N-N_{0})$, is introduced without violating the KNO scaling \cite{KNO}, and the averaging is then done over the multiplicity distribution: \begin{equation} P_N = A(N-N_{0})^{\alpha^{\prime}-1} e^{-\beta^{\prime}(N-N_{0})} \hspace{0.8cm} \end{equation} The resulting momentum distribution is a possible micro-canonical generalisation of the Tsallis.~The shift in the multiplicity has been chosen to be $N_{0} = 1 + 2/D$, where $D$ is the dimensionality of the phase space.~The details can be found in \cite{Gam1, Gam2}. \subsection{The Tsallis distribution} The Tsallis $q$-statistics deals with the entropy of the usual Boltzmann-Gibbs thermo-statistics modified by introducing the $q$-parameter.~For a given thermodynamical system divided into two subsystems $A$ and $B$, the Tsallis entropy no longer remains extensive, but is defined as \begin{equation} S_q(A,B)= S_A + S_B+ (1-q)S_{A}S_{B} \end{equation} where $q$ is known as the entropic index with value $q>1$, and $(1-q)$ measures the departure of the entropy from its extensive behaviour. Treating the collision as a canonical ensemble of $N$ particles, the multiplicity probability is defined through the partition functions as \begin{equation} P_N = \frac{Z^{N}_q}{Z} \end{equation} where $Z$ represents the total partition function and $Z^{N}_q$ the partition function at a particular multiplicity.~C.E. Aguiar et al \cite{TS2} have discussed in detail the method for calculating the $N$-particle partition function and deriving the probability distribution; details of these calculations can be obtained from that reference.
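For concreteness, the probability density functions of Eqs.~(1) and (2) can be coded up directly.~The following minimal Python sketch is our own illustration and not part of the original analysis: the parameter values are arbitrary, the shift is fixed to $N_{0} = 1 + 2/D$ with $D=3$, and $\alpha, \alpha^{\prime} > 1$ is assumed so that the densities vanish at the lower edge.
\begin{verbatim}
import numpy as np

def gamma_pdf(N, A, alpha, beta):
    # Eq. (1): P_N = A * N^(alpha - 1) * exp(-beta * N)
    return A * np.asarray(N, dtype=float) ** (alpha - 1.0) * np.exp(-beta * N)

def shifted_gamma_pdf(N, A, alpha_p, beta_p, N0):
    # Eq. (2): P_N = A * (N - N0)^(alpha' - 1) * exp(-beta' * (N - N0)),
    # set to zero for N <= N0 (assumes alpha' > 1).
    x = np.maximum(np.asarray(N, dtype=float) - N0, 0.0)
    return A * x ** (alpha_p - 1.0) * np.exp(-beta_p * x)

# Illustrative evaluation over a multiplicity range 1..59
D = 3.0
N = np.arange(1, 60)
P_gamma = gamma_pdf(N, A=0.1, alpha=1.9, beta=0.12)
P_shift = shifted_gamma_pdf(N, A=0.1, alpha_p=2.3, beta_p=0.13, N0=1 + 2 / D)
\end{verbatim}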
\section{RESULTS} The experimental data on proton-proton collisions at the Large Hadron Collider (LHC) obtained by the CMS experiment at different energies are analysed.~The data analysed are at $\sqrt{s}$ = 0.9~TeV, 2.36~TeV and 7~TeV in the restricted rapidity windows of $|y|<$ 0.5, 1.0, 1.5, 2.0 and 2.4.~The experimental data \cite{CMS} are fitted with the distributions from the Tsallis $q$-statistics, the Gamma distribution and the shifted-Gamma distribution.~The results are discussed in the following subsections. \subsection{ The Gamma vs. the Tsallis distribution} The probabilities from the Gamma distribution, the shifted-Gamma distribution and the Tsallis distribution are calculated using equations (1), (2) and (4).~Fits to the data are shown in Figures~1-3.~Table~I gives the parameters of the fits for all the rapidity windows at 0.9~TeV, 2.36~TeV and 7~TeV, and a comparison of the corresponding $\chi^{2}/ndf$ and $p$-values is given in Table~II.~While fitting the 7~TeV data, we consider the probability distribution only up to the range of $N$ values over which the data are continuous.~Beyond this range the statistics is very low and the probability falls below 0.001, leading to fit parameters with very large errors, particularly for the Tsallis distribution. In order to study the behaviour of the three distributions for the multiplicities of charged particles produced with higher transverse momenta $p_{T}$, the analysis is extended to the data at $\sqrt{s}$ = 0.9~TeV, 2.36~TeV and 7~TeV in the restricted rapidity window of $|y|<$ 2.4 with $p_{T} >$ 500~MeV.~Figure~4 shows the results of the fits for the three distributions.~The fit parameters are given in Table~III, and the $\chi^{2}/ndf$ and $p$-values in Table~IV. One finds that both the Tsallis and the shifted-Gamma distributions reproduce the data very well in most of the rapidity windows at the three energies, in comparison to the Gamma distribution.~However, at 7~TeV all the distributions fail for the rapidity windows $|y|<$ 0.5 and $|y|<$ 1.0, with $p$-values corresponding to $CL< 0.1\%$.~The detailed comparison between the three functions is given in Table~II, where the $\chi^{2}/ndf$ and $p$-values at all energies and for all rapidity windows are compared.~It is found that the $\chi^{2}/ndf$ values of the Tsallis and the shifted-Gamma fits are comparable, with $p$-values corresponding to $CL> 0.1\%$, and better than those of the Gamma fits.~This is true for nearly all the rapidity windows at all the energies. The results in Tables~III and IV for the fits of the three distributions to the data with $p_{T} > $500~MeV in the $|y|<$ 2.4 rapidity window show that for the 7~TeV data the $p$-values correspond to $CL<0.1\%$, and hence all the fits are statistically excluded at this energy.~However, at the other two energies all fits are able to reproduce the data, with the shifted-Gamma giving the best description.
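As an illustration of the fitting procedure used above, the short Python sketch below fits the Gamma form of Eq.~(1) to a toy multiplicity spectrum and computes $\chi^{2}/ndf$ and the corresponding $p$-value.~The data arrays are random stand-ins rather than the CMS measurements and the starting values are arbitrary; the sketch only shows the mechanics behind the $\chi^{2}$ comparison quoted in Tables~II and IV.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def gamma_pdf(N, A, alpha, beta):
    return A * N ** (alpha - 1.0) * np.exp(-beta * N)

# Toy stand-in for a measured P(N) spectrum with 5% relative uncertainties
rng = np.random.default_rng(1)
N = np.arange(1, 40, dtype=float)
P = gamma_pdf(N, 0.08, 1.9, 0.12) * rng.normal(1.0, 0.05, N.size)
dP = 0.05 * P

popt, pcov = curve_fit(gamma_pdf, N, P, sigma=dP, p0=[0.1, 2.0, 0.1],
                       absolute_sigma=True)
chi2_val = np.sum(((P - gamma_pdf(N, *popt)) / dP) ** 2)
ndf = N.size - len(popt)
print(popt, chi2_val / ndf, chi2.sf(chi2_val, ndf))  # parameters, chi2/ndf, p-value
\end{verbatim}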
\begin{figure} \includegraphics[width=3.8 in , height= 2.9 in]{pp900.pdf} \includegraphics[width=3.8 in , height= 2.9 in]{pp236.pdf} \includegraphics[width=3.8 in , height= 2.9 in]{pp7000.pdf} \caption{The charged particle multiplicity distributions measured in $pp$ collisions by the CMS experiment with the fits by the Tsallis distribution.~Each successive distribution is multiplied by a factor of 10.} \end{figure} \begin{figure} \includegraphics[width= 3.8 in , height= 2.9 in]{finalgm9.pdf} \includegraphics[width= 3.8 in , height= 2.9 in]{finalgm236.pdf} \includegraphics[width=3.8 in , height= 2.9 in]{gamma7.pdf} \caption{The charged particle multiplicity distributions measured in $pp$ collisions by the CMS experiment with the fits by the Gamma distribution.~Each successive distribution is multiplied by a factor of 10.} \end{figure} \begin{figure} \includegraphics[width= 3.8 in , height= 2.9 in]{finalshiftedgm9.pdf} \includegraphics[width= 3.8 in , height= 2.9 in]{finalshiftedgm236.pdf} \includegraphics[width=3.8 in , height= 2.9 in]{shiftedgamma7.pdf} \caption{The charged particle multiplicity distributions measured in $pp$ collisions by the CMS experiment with the fits by the shifted-Gamma distribution.~Each successive distribution is multiplied by a factor of 10.} \end{figure} \begin{figure} \includegraphics[width= 3.8 in , height= 2.9 in]{Tsallispt24.pdf} \includegraphics[width= 3.8 in , height= 2.9 in]{gammapt24.pdf} \includegraphics[width=3.8 in , height= 2.9 in]{shiftedgammapt24.pdf} \caption{The charged multiplicity distributions for particles with $|y|<2.4$, $p_{T} >$ 500~MeV measured by the CMS experiment with the fits by the Tsallis, the Gamma and the shifted-Gamma distributions.~Each successive distribution is multiplied by a factor of 10.} \end{figure} From Tables~I \& II it is also observed that for the Gamma distribution the $\beta$ values decrease with energy as well as with the size of the rapidity window, as expected from the corresponding increase of the mean multiplicity.~Similarly, for the shifted-Gamma distribution the $\beta^{\prime}$ values decrease with energy as well as with rapidity.~Both $\beta$ and $\beta^{\prime}$ are the rate parameters which set the scale of the two distributions.~For the Tsallis distribution, the $q$ value, which is the entropic index of the Tsallis statistics, increases with energy and exceeds unity in each case.~This confirms that the Tsallis statistics becomes non-extensive.~The parameter $K$ also determines the shape of the distribution; it becomes binomial-like if $K$ becomes negative. Figure~5 shows the energy dependence of the entropic parameter of the Tsallis function in various rapidity windows.~The dependence can be parameterised as a power law, $q=A(\sqrt{s})^{B}$.~The fit parameters $A$ and $B$ are listed in Table~V.~It may be observed that the value of $B$ is nearly constant and independent of the rapidity window.~In a study of the systematic properties of the Tsallis distribution, J.
Cleymans et al \cite{JCley} have studied the energy dependence of the parameters of the Tsallis distribution in $pp$ collisions and have shown that $q$ has a weak dependence on the beam energy.~From the transverse momentum distributions, they have determined the $q$ values from the ATLAS data \cite{Aad} at $\sqrt{s}$=0.9, 2.36 and 7~TeV as 1.1217$\pm$0.0007, 1.1419$\pm$0.0025 and 1.1479$\pm$0.0008.~These values are in reasonable agreement with the values we obtain from the multiplicity distribution fits in the rapidity window $|y|<$ 2.4 for the CMS data: 1.055$\pm$0.008, 1.136$\pm$0.031 and 1.228$\pm$0.055.~Small differences are expected from the slightly different phase spaces considered in the two cases. Figure~6 shows the multiplicity distribution of charged particles predicted for $pp$ collisions in the restricted rapidity window of $|y|<1.5$ at $\sqrt{s}$=14~TeV.~The value of $q$ is predicted to be 1.476$\pm$0.108.~Similar predictions can be made for other, larger rapidity windows. In a further investigation of the failure of all distributions at 7~TeV, we consider a two-component approach, with a soft and a semi-hard component in the multi-particle production.~This leads to the division of the distribution in terms of soft events (events without minijets) and semi-hard events (events with minijets).~A distribution is then produced as a weighted superposition of the two components, the weight $\alpha_{soft}$ being the fraction of soft events, as below: \small \begin{equation} P(n)=\alpha_{soft}P_{soft}^{MD}(n)+(1-\alpha_{soft})P_{semi-hard}^{MD}(n) \end{equation} \normalsize The multiplicity distribution (MD) of each component is taken to be one of the three distributions under consideration: Gamma, shifted-Gamma or Tsallis.~The idea of this superposition was first suggested by C. Fuglesang \cite{Fuse} in order to explain the violations of the negative binomial regularity.~The concept originates from purely phenomenological and very simple considerations.~The two components of the distribution suggest the presence of substructure.~For example, by using this approach, fits of the 7~TeV data with the shifted-Gamma distribution in the rapidity windows $|y|<$ 0.5, $|y|<$ 1.0 and $|y|<$ 2.4 reduce the $\chi^{2}/ndf$ values by a large factor, and the fits become statistically acceptable with $CL>0.1\%$.~The results are shown in Table~VI.~Since the $\alpha_{soft}$ value is not available, it was treated as an input to the distribution and iterated to obtain the best fit.~This observation indicates that at higher energies the contribution of the events with minijets grows.~Similar fits, when used for the data at other energies and rapidity regions, also reduce the $\chi^{2}/ndf$ in every case.~However, in the case of the Tsallis distribution the number of parameters becomes as large as 9 and the fitted values of the parameters have very large errors.
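To make the two-component construction of Eq.~(5) explicit, the following Python sketch builds the weighted superposition out of two shifted-Gamma components.~The parameter values are merely loosely inspired by Table~VI, the overall normalizations are set to unity for illustration, and the function names are our own, so the numbers should not be read as a reproduction of the fit.
\begin{verbatim}
import numpy as np

def shifted_gamma_pdf(N, A, alpha, beta, N0):
    # Eq. (2), with alpha > 1 assumed so the density vanishes at N = N0
    x = np.maximum(np.asarray(N, dtype=float) - N0, 0.0)
    return A * x ** (alpha - 1.0) * np.exp(-beta * x)

def two_component_pdf(N, alpha_soft, soft_pars, semihard_pars):
    # Eq. (5): P(n) = a_soft * P_soft(n) + (1 - a_soft) * P_semi-hard(n)
    return (alpha_soft * shifted_gamma_pdf(N, *soft_pars)
            + (1.0 - alpha_soft) * shifted_gamma_pdf(N, *semihard_pars))

N = np.arange(1, 80)
N0 = 1 + 2.0 / 3.0                         # shift for D = 3
P = two_component_pdf(N, 0.81,
                      (1.0, 2.98, 0.26, N0),   # soft component (illustrative)
                      (1.0, 7.44, 1.29, N0))   # semi-hard component (illustrative)
\end{verbatim}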
\begin{figure}[h] \includegraphics[width=3.7 in]{qe.pdf} \caption{The energy dependence of the non-extensive entropic parameter of the Tsallis function in different rapidity windows.} \end{figure} \begin{figure}[h] \includegraphics[width=3.9 in]{prd.pdf} \caption{The multiplicity distribution predicted for $pp$ collisions at $\sqrt{s}$=14~TeV in the $|y| < 1.5$ window.} \end{figure} \section{CONCLUSION} The data on charged multiplicities in proton-proton collisions at the LHC energies $\sqrt{s}$=0.9, 2.36 and 7~TeV in restricted rapidity windows, obtained by the CMS experiment, have been used for a detailed analysis of the multiplicities.~The analysis has also been done for the particles emitted with $p_{T} >$500~MeV in the largest rapidity window, $|y|< 2.4$.~A comparison of the Gamma distribution, the shifted-Gamma distribution and the Tsallis distribution has been made.~The relevance of the comparison is due to the similar nature of the three distributions.~When the multiplicity follows a Gamma distribution, the average momentum distribution of the particles is Tsallis-like.~For the shifted-Gamma distribution, the average momentum distribution is a possible micro-canonical generalisation of the Tsallis.~A comparison of the $\chi^{2}/ndf$ and $p$-values in Table~II shows that all three distributions reproduce the data in most of the rapidity windows at 0.9~TeV and 2.36~TeV.~However, at 7~TeV all distributions fail and are statistically excluded with $CL<0.1\%$ in the two lower rapidity windows, $|y|< 0.5$ and $|y|< 1.0$.~Interestingly, for the particles emitted with $p_{T} >$500~MeV at $\sqrt{s}$=7~TeV, all the distributions fail with $CL<0.1\%$.~This indicates possible dynamical changes in the particle production at such high collision energies.~Overall, the Tsallis fits and the shifted-Gamma fits are comparable and much better than the Gamma fits.~The value of $q$ determined at each center-of-mass energy and in different rapidity windows exceeds unity.~The value of $q$ decreases with increasing rapidity window size at a given energy.~For collisions in the same size rapidity window at different energies, the $q$ value increases with energy, as shown by the values in Tables~I and III, indicating that for collisions at higher energies the non-extensive behaviour of the entropy becomes more pronounced.~The energy dependence of $q$ is described by a power law.~The parametrisation as a power law is inspired by the observation that the single-particle energy distribution obeys a power-law behaviour \cite{Gaz}.~The entropy, being determined by the energy fluctuations, in turn influences the $q$ values.~The $q$ values obtained from our analysis agree well with the results from the analysis of the ATLAS data.~Thus the consistent results from the two different analyses confirm that the $q$ values depend only weakly on the beam energy, that the entropy of the collisions is non-extensive, and that the dynamics of particle production changes at high energies.\\ \begin{center} {\bf ACKNOWLEDGMENT} \end{center} S.S. is grateful to the Department of Science and Technology, Government of India for the research fellowship grant.
\newpage \begin{table*}[t] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline &\multicolumn{2}{|c|}{}& \multicolumn{2}{|c|}{} &\multicolumn{4}{|c|}{} \\ Rapidity & \multicolumn{2}{|c|}{Gamma distribution } & \multicolumn{2}{|c|}{shifted-Gamma distribution} &\multicolumn{4}{|c|}{Tsallis distribution}\\ Interval & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} &\multicolumn{4}{|c|}{}\\ \hline \multicolumn{9}{|c|}{} \\ \multicolumn{9}{|c|}{$\sqrt{s}$ = 0.9 TeV} \\ \multicolumn{9}{|c|}{} \\\hline & & & & & & & & \\ $|y|$ & $\alpha$ & $\beta$ &$\alpha'$ & $\beta'$ & $nV$ & $nv_{0}$ & K & q \\ & & & & & & & &
\\\hline 0.5 &1.515 $\pm$ 0.068 &0.352 $\pm$ 0.010 &2.265 $\pm$ 0.156 & 0.395 $\pm$ 0.015& 1.932 $\pm$ 0.152 &0.286 $\pm$ 0.135 &1.726 $\pm$ 0.098 &1.431 $\pm$ 0.004\\\hline 1.0 &1.848 $\pm$ 0.061 &0.223 $\pm$ 0.004& 2.498 $\pm$ 0.098 & 0.244 $\pm$ 0.005 & 3.043 $\pm$ 0.091 &0.317 $\pm$ 0.019 &2.084 $\pm$ 0.080 &1.356 $\pm$ 0.033\\\hline 1.5 &1.844 $\pm$ 0.058 &0.153 $\pm$ 0.003 & 2.371 $\pm$ 0.081 & 0.167 $\pm$ 0.004 & 4.067 $\pm$ 0.661 &0.134 $\pm$ 0.190 &2.028 $\pm$ 0.070 &1.201 $\pm$ 0.046\\\hline 2.0 &1.865 $\pm$ 0.053 &0.117 $\pm$ 0.002 & 2.331 $\pm$ 0.067& 0.128 $\pm$ 0.002 & 4.338 $\pm$ 0.120 &0.375 $\pm$ 0.078 &2.025 $\pm$ 0.055 &1.111 $\pm$ 0.003 \\\hline 2.4 &1.944 $\pm$ 0.054 &0.102 $\pm$ 0.002 & 2.382 $\pm$ 0.062 & 0.111 $\pm$ 0.002 & 4.802 $\pm$ 0.071 &0.417 $\pm$ 0.010 &2.101 $\pm$ 0.058 &1.055 $\pm$ 0.008 \\\hline \multicolumn{9}{|c|}{} \\ \multicolumn{9}{|c|}{$\sqrt{s}$ = 2.36 TeV} \\ \multicolumn{9}{|c|}{} \\\hline 0.5 &1.379 $\pm$ 0.068 &0.260 $\pm$ 0.009 & 1.855 $\pm$ 0.141 & 0.284 $\pm$ 0.012 &2.271 $\pm$ 0.163 &0.150 $\pm$ 0.180 &1.501 $\pm$ 0.088 &1.546 $\pm$ 0.030 \\\hline 1.0 &1.743 $\pm$ 0.060 &0.168 $\pm$ 0.003 & 2.187 $\pm$ 0.089 & 0.179 $\pm$ 0.004 & 3.702 $\pm$ 0.102 &0.152 $\pm$ 0.011 &1.892 $\pm$ 0.073 &1.475 $\pm$ 0.029\\\hline 1.5 &1.595 $\pm$ 0.057 &0.105 $\pm$ 0.003 & 1.892 $\pm$ 0.081 & 0.112 $\pm$ 0.003 & 4.186 $\pm$ 0.426 &0.257 $\pm$ 0.251 &1.682 $\pm$ 0.066 &1.284 $\pm$ 0.026\\\hline 2.0 &1.644 $\pm$ 0.057 &0.082 $\pm$ 0.002 & 1.931 $\pm$ 0.075 & 0.087 $\pm$ 0.002 & 4.770 $\pm$ 0.350 &0.431 $\pm$ 0.305 &1.731 $\pm$ 0.063 &1.183 $\pm$ 0.021\\\hline 2.4 &1.684 $\pm$ 0.057 &0.070 $\pm$ 0.002 & 1.959 $\pm$ 0.071 & 0.075 $\pm$ 0.002 & 5.227 $\pm$ 0.394 &0.441 $\pm$ 0.271 &1.751 $\pm$ 0.059 &1.136 $\pm$ 0.031\\\hline \multicolumn{9}{|c|}{} \\ \multicolumn{9}{|c|}{$\sqrt{s}$ = 7.00 TeV} \\ \multicolumn{9}{|c|}{} \\\hline 0.5 &1.461 $\pm$ 0.050 &0.201 $\pm$ 0.007 & 1.841 $\pm$ 0.083 & 0.214 $\pm$ 0.004 &2.871 $\pm$ 0.068 &0.136 $\pm$ 0.005 &1.571 $\pm$ 0.062 &1.674 $\pm$ 0.010\\\hline 1.0 &1.645 $\pm$ 0.044 &0.113 $\pm$ 0.003 & 1.897 $\pm$ 0.059 & 0.118 $\pm$ 0.002 &3.745 $\pm$ 0.103 &0.159 $\pm$ 0.014 &1.725 $\pm$ 0.043 &1.593 $\pm$ 0.004\\\hline 1.5 &1.767 $\pm$ 0.041 &0.081 $\pm$ 0.001 & 1.317 $\pm$ 0.059 & 0.063 $\pm$ 0.002 &4.232 $\pm$ 0.064 &0.443 $\pm$ 0.296 &1.247 $\pm$ 0.035 &1.401 $\pm$ 0.026\\\hline 2.0 &1.738 $\pm$ 0.039 &0.061 $\pm$ 0.002 & 1.325 $\pm$ 0.050 & 0.048 $\pm$ 0.001 &5.101 $\pm$ 0.122 &0.334 $\pm$ 0.026 &1.265 $\pm$ 0.040 &1.303 $\pm$ 0.007\\\hline 2.4 &1.506 $\pm$ 0.031 &0.046 $\pm$ 0.001 & 1.344 $\pm$ 0.047 & 0.040 $\pm$ 0.001 &5.769 $\pm$ 0.217 & 0.221 $\pm$ 0.218 & 1.284 $\pm$ 0.035 & 1.228 $\pm$ 0.055\\\hline \end{tabular} \caption{Fit parameters of the distributions for all rapidity windows for the $pp$ data.} \end{table*} \begin{table*}[t] \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & &\multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} &\multicolumn{2}{|c|}{}\\ Energy & Rapidity &\multicolumn{2}{|c|}{Gamma } & \multicolumn{2}{|c|}{shifted-Gamma } &\multicolumn{2}{|c|}{Tsallis }\\ (TeV)& Interval &\multicolumn{2}{|c|}{Distribution} & \multicolumn{2}{|c|}{Distribution} &\multicolumn{2}{|c|}{Distribution}\\ \hline & & & & & & & \\ & $|y|$ & $\chi^2$/ndf &p value & $\chi^2$/ndf & p value & $\chi^2$/ndf & p value \\ & & & & & & & \\\hline \multirow{4}{*}{0.9} & 0.5 &3.21/19 &1.0000 & 0.98/19 &1.0000 &1.58/17 &1.0000 \\ \cline{2-8} &1.0 &54.50/36 &0.0247 & 33.66/36 &0.5804 &43.64/34 &0.1244\\ \cline{2-8} &1.5 &48.49/48 &0.4531 & 
35.41/48 &0.9113 &41.32/46 &0.6683\\ \cline{2-8} &2.0 &38.11/58 &0.9798 & 31.21/58 &0.9985 &33.06/56 &0.9938\\ \cline{2-8} &2.4 &52.93/64 &0.8368 & 44.02/64 &0.9733 &46.82/62 &0.9240\\\hline & & & & & & & \\ \multirow{4}{*}{2.36} &0.5 &7.41/19 &0.9917 &5.94/19 & 1.0000 &6.60/17 &0.9882 \\ \cline{2-8} &1.0 &67.79/36 &0.0011 &50.59/36 & 0.0541 &59.99/34 &0.0039\\\cline{2-8} &1.5 &30.10/46 &0.9662 &25.24/46 & 0.9945 &27.28/44 &0.9774\\\cline{2-8} &2.0 &46.44/56 &0.8162 &45.25/56 & 0.8473 &45.31/54 &0.7941\\\cline{2-8} &2.4 &44.91/65 &0.9729 &44.98/65 & 0.9778 &40.94/63 &0.9859\\\hline & & & & & & & \\ \multirow{4}{*}{7.00} &0.5 &101.40/37 &0.0001 & 75.86/37 & 0.0002 &91.74/35 &0.0001\\\cline{2-8} &1.0 &183.71/66 &0.0001 & 149.11/66 & 0.0001 &170.93/64 &0.0001\\\cline{2-8} &1.5 &34.93/68 &0.9997 & 35.89/68 & 0.9995 &35.38/66 &0.9991\\\cline{2-8} &2.0 &40.41/86 &1.0000 & 44.27/86 & 0.9999 &41.69/84 &1.0000\\\cline{2-8} &2.4 &47.91/99 &1.0000 & 54.67/99 & 0.9999 &49.96/97 &1.0000\\\hline \end{tabular} \caption{$\chi^{2}/ndf$ comparison for the fits with three different distributions for all rapidity windows for the $pp$ data.} \end{table*} \begin{table*}[t] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline &\multicolumn{2}{|c|}{}& \multicolumn{2}{|c|}{} &\multicolumn{4}{|c|}{} \\ Energy & \multicolumn{2}{|c|}{Gamma distribution} & \multicolumn{2}{|c|}{shifted-Gamma distribution} &\multicolumn{4}{|c|}{Tsallis distribution}\\ \hline & & & & & & & & \\ (TeV) & $\alpha$ & $\beta$ &$\alpha'$ & $\beta'$ & $nV$ & $nv_{0}$ & K & q \\ & & & & & & & & \\\hline 0.9 &1.386 $\pm$ 0.057 &0.211 $\pm$ 0.005 & 1.782 $\pm$ 0.098 & 0.227 $\pm$ 0.006& 2.214 $\pm$ 0.054 &0.249 $\pm$ 0.054 &1.476 $\pm$ 0.056 &1.055 $\pm$ 0.036\\\hline 2.36 &1.204 $\pm$ 0.054 &0.137 $\pm$ 0.003&1.373 $\pm$ 0.093 & 0.142$\pm$ 0.005 & 2.602 $\pm$ 0.106 &0.282 $\pm$ 0.028 &1.241 $\pm$ 0.063 &1.128 $\pm$ 0.007\\\hline 7.00 &1.288 $\pm$ 0.037 &0.096 $\pm$ 0.001 & 1.438 $\pm$ 0.048 & 0.099 $\pm$ 0.001 & 3.611 $\pm$ 0.110 &0.177 $\pm$ 0.093 &1.326 $\pm$ 0.033 &1.242 $\pm$ 0.092\\\hline \end{tabular} \caption{Fit Parameters with the distributions for charged particle multiplicity spectra in $|y| < 2.4 $ and $P_{T} > $500 MeV of the $pp$ data.} \end{table*} \begin{table*}[t] \begin{tabular}{|c|c|c|c|c|c|c|} \hline &\multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} &\multicolumn{2}{|c|}{}\\ Energy &\multicolumn{2}{|c|}{Gamma distribution} & \multicolumn{2}{|c|}{shifted-Gamma distribution } &\multicolumn{2}{|c|}{Tsallis distribution }\\ (TeV)&\multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} &\multicolumn{2}{|c|}{}\\\hline & & & & & & \\ & $\chi^2$/ndf &p value & $\chi^2$/ndf & p value & $\chi^2$/ndf & p value \\ & & & & & & \\\hline 0.9 &32.84/34 &0.5244 &17.86/34 &0.9896 &21.26/32 &0.9177 \\\hline 2.36 &36.37/36 &0.4514 &33.43/36 &0.5914 &35.61/34 &0.3195\\\hline 7.00 &178.07/75 &0.0001 &157.02/75 &0.0001 &172.01/73 &0.0001\\\hline \end{tabular} \caption{$\chi^{2}/ndf$ values obtained with the distributions fits to the charged particle multiplicity spectrum for $|y| < 2.4$ and $P_{T} > $500 MeV in the $pp$ data.} \end{table*} \begin{table*}[t] \begin{tabular}{|>{\centering}m{0.90cm}|c|c|c|} \hline $|y|$ & A & B \\ & & \\\hline 0.5 &0.851 $\pm$ 0.020 &0.076 $\pm$ 0.003\\\hline 1.0 &0.811 $\pm$ 0.072 &0.076 $\pm$ 0.010\\\hline 1.5 &0.706 $\pm$ 0.106 &0.077 $\pm$ 0.018\\\hline 2.0 &0.660 $\pm$ 0.014 &0.076 $\pm$ 0.003\\\hline 2.4 &0.630 $\pm$ 0.081 &0.075 $\pm$ 0.012\\\hline \end{tabular} \caption{Parameters A and B of the Power law fit between q and 
c.m.energy for the $pp$ collision data.} \end{table*} \begin{table*}[t] \begin{tabular}{|>{\centering}m{0.90cm}|c|c|c|c|c|c|c|c|c|} \hline & & & & & & & & \\ $|y|$ & $P_{T}$ & $\alpha_{soft}$ &$\alpha'_{1}$ & $\beta'_{1}$ &$\alpha'_{2}$ & $\beta'_{2}$ & $\chi^2$/ndf & p values \\ & (MeV) & & & & & & & \\\hline \multicolumn{9}{|c|}{} \\ \multicolumn{9}{|c|}{$\sqrt{s}$ = 7~TeV} \\ \multicolumn{9}{|c|}{} \\\hline & & & & & & & & \\ 0.5 & $>$ 0 & 0.81 & 2.981$\pm$ 0.221 & 0.260$\pm$ 0.010 & 7.443$\pm$ 0.505 & 1.291$\pm$ 0.091 & 11.42 / 35 & 0.9999\\\hline & & & & & & & & \\ 1.0 & $>$ 0 & 0.77 & 3.702$\pm$ 0.033 & 0.161$\pm$ 0.002 & 6.985$\pm$ 0.145 & 0.695$\pm$ 0.028 & 57.91 / 64 & 0.6904\\\hline & & & & & & & & \\ 2.4 & $>$ 500 & 0.64 & 5.302$\pm$ 0.101 & 0.141$\pm$ 0.003 & 3.235$\pm$ 0.030 & 0.576$\pm$ 0.023 & 69.67 / 73 &0.5888\\\hline \end{tabular} \caption{Fit Parameters with the 2-component shifted-Gamma distribution to multiplicity spectra in $pp$ collisions for different rapidity windows and $P_{T}$.} \end{table*} \newpage
\section{Introduction} Model-free Reinforcement Learning (RL) is a learning paradigm which aims to maximize a cumulative reward signal based on experience gathered through interaction with an environment \citep{sutton1998reinforcement}. It is divided into two primary categories. Value-based approaches involve learning the value of each action and acting greedily with respect to it (i.e., selecting the action with the highest value). On the other hand, policy-based approaches (the focus of this work) learn the policy directly, thereby explicitly learning a mapping from state to action. Policy gradients (PGs) \citep{sutton2000policy} have been the go-to approach for learning policies in empirical applications. The combination of the policy gradient with recent advances in deep learning has enabled the application of RL in complex and challenging environments. Such domains include continuous control problems, in which an agent controls complex robotic machines both in simulation \citep{schulman2015trust,haarnoja2017reinforcement,peng2018deepmimic} and in real life \citep{levine2016end,andrychowicz2018learning,riedmiller2018learning}. Nevertheless, there exists a fundamental problem when PG methods are applied to continuous control regimes. As the gradients require knowledge of the probability of the performed action $P(\action | \state)$, the PG is empirically limited to parametric distribution functions. Common parametric distributions used in the literature include the Gaussian \citep{schulman2015trust,schulman2017proximal}, Beta \citep{beta_gradients} and Delta \citep{silver2014deterministic,lillicrap2015continuous,fujimoto2018addressing} distribution functions. In this work, we show that while the PG is properly defined over parametric distribution functions, it is prone to converge to sub-optimal extrema (Section~\ref{sec:dist_approach}). The leading reason is that these distributions are not convex in the distribution space\footnote{As an example, consider the set of Gaussian distributions, which is not convex: a mixture of two Gaussians is, in general, not itself a Gaussian.} and are thus limited to local improvement in the action space itself. Inspired by Approximate Policy Iteration schemes, for which convergence guarantees exist \citep{puterman1979convergence}, we introduce the Distributional Policy Optimization (DPO) framework in which an agent's policy evolves towards a \textit{distribution} over improving actions. This framework requires the ability to minimize a distance (loss function) which is defined over two distributions, as opposed to the policy gradient approach, which requires explicit differentiation through the density function. DPO establishes the building blocks for our generative algorithm, the Generative Actor Critic\footnote{Code provided in the following \emph{anonymous} repository: \href{https://github.com/tesslerc/GAC}{github.com/tesslerc/GAC}}. It is composed of three elements: a generative model which represents the policy, a value network, and a critic. The value and the critic are combined to obtain the advantage of each action. A target distribution is then defined as one which improves the value (i.e., all actions with negative advantage receive zero probability mass). The generative model is optimized directly from samples, without an explicit definition of the underlying probability density, using quantile regression and Autoregressive Implicit Quantile Networks (see Section~\ref{sec: method: our approach}).
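As a rough sketch of the target construction just described, the snippet below re-weights a batch of sampled actions by their estimated advantages, assigning zero mass to actions with negative advantage. The Boltzmann weighting is one of the target choices discussed later (Table~\ref{table: distributions}); the function name and the fallback rule are our own illustrative assumptions rather than the exact GAC implementation.
\begin{verbatim}
import numpy as np

def sample_improving_actions(actions, advantages, beta=1.0, n_samples=64, rng=None):
    # Empirical target: restrict to I = {a : A(s, a) > 0} and re-weight by a
    # Boltzmann factor exp(A / beta); negative-advantage actions get zero mass.
    rng = np.random.default_rng() if rng is None else rng
    mask = advantages > 0
    if not np.any(mask):                 # fallback: keep only the best action(s)
        mask = advantages == advantages.max()
    idx = np.flatnonzero(mask)
    w = np.exp((advantages[idx] - advantages[idx].max()) / beta)  # stabilized
    w /= w.sum()
    picks = rng.choice(idx, size=n_samples, p=w)
    return actions[picks]

# Usage with dummy critic outputs: 128 candidate scalar actions
rng = np.random.default_rng(0)
acts = rng.uniform(-1, 1, size=128)
advs = -acts ** 2 + 0.2                  # toy advantage peaking near a = 0
target_actions = sample_improving_actions(acts, advs, rng=rng)
\end{verbatim}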
Generative Actor Critic is evaluated on tasks in the MuJoCo control suite (Section~\ref{sec:experiments}), showing promising results on several difficult baselines. \cmnt{We call this generative approach, the Generative Actor Critic. Specifically, we use an Autoregressive Implicit Quantile Network (AIQN) \citep{ostrovski2018autoregressive} to represent the policy. Combining both a critic and a value network, we are capable of estimating the advantage of each action, thus the target policy is defined as a distribution over the improving actions, i.e., actions with positive advantage. Finally, the loss is constructed such that the distribution represented by the actor shifts towards this improved policy. We validate our approach on various continuous control problems under the MuJoCo \citep{todorov2012mujoco} control suite. Our empirical results show that the approach indeed works as intended, resulting in competitive results across all domains while exhibiting much lower variance. We believe that these results motivate a new line of research, which combines generative modeling and policy learning, detaching from the standard PG formulation.} \cmnt{ \section{Introduction} Improving the performance with experience is the hallmark of Reinforcement Learning (RL). There are quite a few successful RL algorithms that theoretically solve any problem that can be cast in the Markov Decision Process (MDP) framework. Recent advances in deep learning have enabled the application of RL in complex and challenging environments. Such domains include continuous control problems, in which an agent controls complex robotic machines both in simulation \citep{schulman2015trust,haarnoja2017reinforcement,peng2018deepmimic} as well as real life \citep{levine2016end,andrychowicz2018learning,riedmiller2018learning}. Despite these advances, existing approaches for continuous control lack theoretical convergence guarantees. Current continuous, policy gradient based approaches restrict the optimization process to parametric distribution classes (e.g., Gaussian, Delta functions) which are \textit{non-convex} by nature. In Proposition~\ref{prop: k-modal doesnt converge}, we show that this can result in convergence to arbitrarily bad solutions. Policy gradient based approaches in discrete action spaces \citep{sutton1998reinforcement} are free of these limitations, as the model of policy distributions on this set is in general not restricted. Representation of general policy distributions in the continuous setting, similar to those of discrete action spaces, would ensure convergence to global optima. While there exist certain formulations, such as LQR, in which the decision problem at each state consists of solving a quadratic optimization problem, constructing algorithms with such guarantees for the general case is essential to solving continuous control problems. There are several ways in which the restrictions of current policy gradient methods can be tackled. First, one may attempt to enrich the policy space using a mixture of parametric distributions. However, as we suggest in Section~\ref{sec:policy search}, this does not mitigate the issue, as the parametric representation remains limited, and in most cases non-convex. A second, valid approach, is discretization of the action space \citep{tang2019discretizing}. Here, however, optimality is controlled by how finely discretization is performed. 
A third approach may be to combine non-convex optimization methods as in \cite{munos2011optimistic} and \cite{bartlett2018simple} in order to find the optimal action at each step. These adaptive sampling methods find solutions which are at most $\bigO(e^{-\sqrt{n}})$ away from optimal (i.e., simple regret). Nevertheless, similar to discretization of the action space, these methods are incapable of finding an optimal solution. In this work, we build upon $\alpha$-Policy Iteration schemes \citep{scherrer2014approximate} and suggest an alternative training paradigm. Contrary to the common policy gradient method, which focus on parametric distribution functions, we model the policy using a \emph{generative model}. In theory, this model can represent arbitrary distribution functions and thus does not suffer from the sub-optimality inherent in training using parametric distributions. \cmnt{Since generative models do not produce the p.d.f., but rather provide a method for sampling from the distribution;} The generative model is updated by minimizing the distance (e.g., Wasserstein) between the actor's policy distribution and some improving policy. We show how this update rule can be implemented using Implicit Quantile Network \citep{dabney2018implicit} and the Autoregressive Quantile Loss \citep{ostrovski2018autoregressive}. We call this method the Generative Actor Critic (GAC). Empirical evaluation on several robotic tasks \citep{todorov2012mujoco} show that our approach is capable of attaining competitive performance. Moreover, GAC exhibits much lower variance than previous approaches and often outperforms them. This paper is organized as follows. In Section~\ref{sec:preliminaries} we provide the preliminary definitions and notations we will use throughout the paper. In Section~\ref{sec:policy search} we compare two policy search \mbox{procedures: (i) methods} which consider parametric distributions and directly learn the parameters (e.g., learning the mean $\mu$ and variance $\sigma$ of a Gaussian distribution), and (ii) methods which consider the entire policy distribution space. We show that, when the optimization process results in a smoothly evolving policy (see Definition \ref{def:smooth evolving}), then parametric finite-modal distributions are not ensured to converge to an optimal solution, even though there exists a deterministic policy which is optimal and is contained in the set of uni-modal distributions. In Sections~\ref{sec: general policy spaces} and \ref{sec:dist_approach}, we show that in order to find an optimal policy one must consider the entire policy space, as opposed to a parametric distributions, which only considers a subset of it. This leads to our approach, the Generative Actor Critic, which is presented in Section~\ref{sec: method: our approach} and evaluated in Section~\ref{sec:experiments}. We conclude the paper with an overview of related work in Section~\ref{sec:related_work} as well as a short discussion in Section~\ref{sec:discussion}. } \section{Preliminaries} \label{sec:preliminaries} We consider an infinite-horizon discounted Markov Decision Process (MDP) with a continuous action space. An MDP is defined as the 5-tuple $(\mathcal{S}, \mathcal{A},P,r,\gamma)$ \citep{puterman1994markov}, where ${\mathcal S}$ is a countable state space, $\mathcal{A}$ the continuous action space, ${P : S \times S \times \mathcal{A} \mapsto [0,1]}$ is a transition kernel, ${r : S \times A \to [0,1]}$ is a reward function, and $\gamma\in(0,1)$ is the discount factor. 
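As a small, purely illustrative aside (not part of the original paper), the sketch below wraps this 5-tuple in a toy Python container and computes the $\gamma$-discounted return of one sampled reward sequence; all names are our own.
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContinuousActionMDP:
    # Toy stand-in for the tuple (S, A, P, r, gamma); purely illustrative.
    n_states: int
    sample_next_state: Callable[[int, float], int]   # plays the role of P
    reward: Callable[[int, float], float]            # r(s, a) in [0, 1]
    gamma: float

def discounted_return(rewards, gamma):
    # sum_t gamma^t r_t along one sampled trajectory
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# e.g. a constant reward of 1 for three steps with gamma = 0.9
print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))   # 1 + 0.9 + 0.81 = 2.71
\end{verbatim}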
Let $\pi: \mathcal{S} \mapsto \mathcal{B}({\mathcal A})$ be a stationary policy, where $\mathcal{B}({\mathcal A})$ is the set of probability measures on the Borel sets of $\mathcal{A}$. We denote by $\Pi$ the set of stationary stochastic policies. In addition to $\Pi$, one is often interested in optimizing over a set of parametric distributions. We denote the set of possible distribution parameters by $\Theta$ (e.g., the mean $\mu$ and variance $\sigma$ of a Gaussian distribution). Two measures of interest in RL are the value and action-value functions ${v^\pi \in \mathbb{R}^{|\mathcal{S}|}}$ and ${Q^\pi \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}}$, respectively. The action-value of a policy $\pi$, starting at state $\state$ and performing action $\action$, is defined by ${Q^\pi (\state, \action) = \mathbb{E}^\pi \left[\sum_{t=0}^\infty \gamma^t r(\state_t, \action_t) \mid \state_0 = \state, \action_0 = \action\right]}$. The value function is then defined by $v^\pi (\state) = \mathbb{E}_{\action \sim \pi(\cdot | \state)} [Q^\pi (\state, \action)]$. Given the action-value and value functions, the advantage of an action $\action \in \mathcal{A}$ at state $\state \in \mathcal{S}$ is defined by $A^\pi (\state, \action) = Q^\pi (\state, \action) - v^\pi (\state)$. The optimal policy is defined by $\pi^* = \argmax_{\pi \in \Pi} v^\pi$ and the optimal value by $v^* = v^{\pi^*}$. \section{From Policy Gradient to Distributional Policy Optimization} \label{sec:dist_approach} Current practical approaches leverage the Policy Gradient Theorem \citep{sutton2000policy} in order to optimize a policy, which updates the policy parameters according to \begin{equation}\label{eqn: policy gradient} \theta_{k+1} = \theta_k + \alpha_k {\mathbb{E}}_{\state \sim d\pth{\pi_{\theta_k}}} {\mathbb{E}}_{\action \sim \pi_{\theta_k}(\cdot|\state)} \nabla_\theta \log \pi_\theta (\action|\state) \mid_{\theta = \theta_k} Q^{\pi_{\theta_k}} (\state, \action) \, , \end{equation} where $d\pth{\pi}$ is the stationary distribution of states under $\pi$. Since this update rule requires knowledge of the log probability of each action under the current policy $\log \pi_\theta (\action | \state)$, empirical methods in continuous control resort to parametric distribution functions. Most commonly used are the Gaussian \citep{schulman2017proximal}, Beta \citep{beta_gradients} and deterministic Delta \citep{lillicrap2015continuous} distribution functions. However, as we show in Proposition~\ref{prop:gaussian doesnt converge}, this approach is not ensured to converge, even though there exists an optimal policy which is deterministic (i.e., a Delta function), a policy which is contained within this set. The sub-optimality of uni-modal policies such as Gaussian or Delta distributions does not occur due to the limitation induced by their parametrization (e.g., the neural network), but is rather a result of the predefined set of policies. As an example, consider the set of Delta distributions. As illustrated in Figure~\ref{fig:param_vs_policy}, while this set is convex in the parameter $\mu$ (the mean of the distribution), it is not convex as a subset of $\Pi$. This is due to the fact that $(1-\alpha) \delta_{\mu_1} + \alpha \delta_{\mu_2}$ results in a stochastic distribution supported on two points, which cannot be represented using a single Delta function. Parametric distributions such as Gaussian and Delta functions highlight this issue, as the policy gradient considers the gradient w.r.t. the parameters $\mu, \sigma$. This results in local movement in the action space.
Clearly such an approach can only guarantee convergence to a locally optimal solution and not a global one. \begin{figure}[t!] \begin{subfigure}[]{0.64\textwidth} \centering \includegraphics[width=\textwidth]{figures/gac} \caption{Policy vs. Parameter Space}\label{fig: param vs policy gradient comparison} \end{subfigure}% \hspace*{0.2cm} \begin{subfigure}[]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/ddpg_evolution} \caption{Delta}\label{fig: delta evolution} \vspace*{\baselineskip} \includegraphics[width=\textwidth]{figures/ppo_evolution} \caption{Gaussian} \end{subfigure} \caption{(a): A conceptual diagram comparing policy optimization in parameter space $\Theta$ (black dots) in contrast to distribution space $\Pi$ (white dots). Plots depict $Q$ values in both spaces. As parameterized policies are non-convex in the distribution space, they are prone to converge to a local optima. Considering the entire policy space ensures convergence to the global optima. (b,c): Policy evolution of Delta and Gaussian parameterized policies for multi-modal problems.} \label{fig:param_vs_policy} \end{figure} \cmnt{ \begin{figure}[t!] \centering \begin{subfigure}{0.45\textwidth} \label{fig: general 0} \centering \includegraphics[width=\textwidth]{figures/general_0} \caption{$\pi_0$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \label{fig: general 1} \centering \includegraphics[width=\textwidth]{figures/general_1} \caption{$\pi_1$} \end{subfigure}% \caption{Policy evolution of a general, non-parametric policy when the target policy is the $\argmax$. $\pi_0$ denotes the initial policy, a Gaussian in this example, and $\pi_1$ the policy after one update step. At each step probability mass is transferred to the action which maximizes the value.} \label{fig:general_evolution} \end{figure} } \begin{proposition}\label{prop:gaussian doesnt converge} For any initial Gaussian policy $\pi_0 \sim \mathcal{N}(\mu_0, \Sigma)$ and $L \in [0, \frac{v^*}{2})$ there exists an MDP $\mathcal{M}$ such that $\pi_\infty$ satisfies \begin{equation} \norm{v^* - v^{\pi_\infty}}_\infty > L \, , \end{equation} where $\pi_\infty$ is the convergent result of a PG method with step size bounded by $\alpha$. Moreover, given $\mathcal{M}$ the result follows even when $\mu_0$ is only known to lie in some ball of radius R around $\tilde{\mu}_0$, $B_R(\tilde{\mu}_0)$. \end{proposition} \begin{proof}[Proof sketch] For brevity we prove for the case of $\action \in \mathbb{R}$, such that $B_R$ is a finite interval $[a,b]$. We also assume $[a,b] \subseteq [\mu_0 - 2\alpha, \mu_0 + 2\alpha]$, and $\sigma \to 0$. The general case proof can be found in the supplementary material. Let $\epsilon > 0$. We consider a single state MDP (i.e., x-armed bandit) with action space \cmnt{$\mathcal{A} = [\mu_0 - 2\alpha, \mu_0 + 6\alpha]$} $\mathcal{A} = \mathbb{R}$ and a multi-modal reward function (similar to the illustration in Figure~\ref{fig: delta evolution}), defined by \begin{equation*} r(\action) = \abs{\cos\pth{\frac{2\pi}{8\alpha}(\action - \mu_0)}}\pth{\epsilon W_{\mu_0 - 2\alpha, \mu_0 + 2\alpha} + (1-\epsilon)W_{\mu_0 + 2\alpha, \mu_0 + 6\alpha}}, \end{equation*} where $W_{x,y}(z) = \begin{cases} 1 & z \in [x, y] \\ 0 & \text{else} \end{cases}$ is the window function. In PG, we assume $\mu$ is parameterized by some parameters $\theta$. Without loss of generality, let us consider the derivative with respect to $\theta = \mu$. 
At iteration $k$ the derivative can be written as ${ \frac{d}{d\mu} \log \pi_\mu (\action) \mid_{\mu=\mu_k} = -\frac{1}{2\sigma^2} \pth{\mu_k - \action }. }$ PG will thus update the policy parameter $\mu$ by ${ \mu_{k+1} = \mu_k + \alpha_k \braces{{\mathbb{E}}_{\action \sim \mathcal{N}(\mu_k, \sigma)} \frac{1}{2\sigma^2}\pth{\action - \mu_k}r(\action)}. }$ As $\sigma \to 0$, it holds that ${ \text{sign}\braces{{\mathbb{E}}_{\action \sim \mathcal{N}(\mu_k, \sigma)}\pth{\action - \mu_k}r(\action)} = \text{sign}\braces{\frac{d}{d\action}r(\action) \mid_{\action = \mu_k}}. }$ It follows that if $\epsilon < \frac{1}{3}$ and ${\mu_k \in [\mu_0-2\alpha, \mu_0 + 2\alpha]}$ then so is $\mu_{k+1}$. Then, $\mu_\infty \in [\mu_0-2\alpha, \mu_0 + 2\alpha]$. That is, the policy can never reach the interval $[\mu_0 + 2\alpha, \mu_0 + 6\alpha]$ in which the optimal solution lies. Hence, $\norm{v^* - v^{\pi_\infty}}_\infty = 1 - 2\epsilon$ and the result follows for $\epsilon < \frac{1}{3}$. % % \cmnt{ Let $\epsilon > 0$, let $M > \epsilon$, and let $D > 0$. $D$ will be defined later as a function of $M, \epsilon, \sigma$. We consider a single state MDP (i.e., x-armed bandit) with action space \cmnt{$\mathcal{A} = [\mu_0 - 2\alpha, \mu_0 + 6\alpha]$} $\mathcal{A} = \mathbb{R}$ and a multi-modal reward function defined by $$ r(\action) = \abs{\cos\pth{\frac{2\pi}{8D\alpha}(\action - \mu_0)}}\pth{\epsilon W_{\mu_0 - 2D\alpha, \mu_0 + 2D\alpha} + (M-\epsilon)W_{\mu_0 + 2D\alpha, \mu_0 + 6D\alpha}}, $$ where $W_{a,b}(x) = \begin{cases} 1 & x \in [a, b] \\ 0 & \text{else} \end{cases}$ is the window function. In PG, we assume $\mu$ is parameterized by some parameters $\theta$. Without loss of generality, let us consider the gradient with respect to $\mu$. At iteration $k$ the gradient can be written as ${ \frac{d}{d\mu} \log \pi_\theta (\action | \state) \mid_{\mu=\mu_k} = -\frac{1}{2\sigma^2} \pth{\mu_k(\state) - \action }. }$ PG will thus update the policy parameter $\mu$ by ${ \mu_{k+1} = \mu_k + \frac{\alpha_k}{2\sigma^2} \braces{{\mathbb{E}}_{\action \sim \mathcal{N}(\mu_k(s), \sigma)} \pth{\action - \mu_k(\state)}r(\action)}. }$ There exists $D(M, \epsilon, \sigma) > 0$ such that if ${\mu_k < \mu_0 + 2\alpha}$ then ${{\mathbb{E}}_{\action \sim \mathcal{N}(\mu_k(s), \sigma)}\pth{\action - \mu_\theta(\state)}r(\action) < 0}$. For instance, take $D$ such that $$ \int_{\action > \mu_k} r(\action) $$ } \cmnt{ Let $\epsilon > 0$ and let $M > \epsilon$. We consider a single state MDP (i.e., x-armed bandit) with action space $\mathcal{A} = [\mu_0 - 2\alpha, \mu_0 + 6\alpha]$ and a multi-modal reward function defined by $$ r(\action) = \abs{\cos\pth{\frac{2\pi}{8\alpha}(\action - \mu_0)}}\pth{\epsilon W_{\mu_0 - 2\alpha, \mu_0 + 2\alpha} + (M-\epsilon)W_{\mu_0 + 2\alpha, \mu_0 + 6\alpha}}, $$ where $W_{a,b}(x) = \begin{cases} 1 & x \in [a, b] \\ 0 & \text{else} \end{cases}$ is the window function. In PG, we assume $\mu$ is parameterized by some parameters $\theta$. \cmnt{We therefore have $$ f_\theta(\state) = \frac{1}{(2\pi)^{\frac{p}{2}}\abs{\Sigma}^{}} \exp\braces{-(\action - \mu_\theta(\state)^T \Sigma (\state)^{-1}(\action - \mu_\theta(s))} $$} At iteration $k$ the gradient w.r.t. $\theta$ can thus be written as \begin{align*} \nabla_\theta \log \pi_\theta (\action | \state) \mid_{\mu=\mu_k} = -\Sigma^{-1} \pth{\mu_\theta(\state) - \action } \nabla_\theta \mu_\theta(\state). 
\end{align*} PG will update the policy parameters by $$ \theta_{k+1} = \theta_k + \frac{\alpha_k}{2}\Sigma^{-1} \braces{{\mathbb{E}}_{\action \sim \mathcal{N}(\mu_\theta(s), \Sigma)} \pth{\action - \mu_\theta(\state)}r(\action)}\nabla_\theta \mu_\theta(\state). $$ } \cmnt{$r(\action) = \epsilon \abs{\cos(\frac{\action}{\alpha})} + (M - \epsilon) W_{0,\pi}(\action) \abs{\cos(\frac{\action}{\alpha})}$, where $W_{a,b}(x)$ takes the value of $1$ for $a \leq x \leq b$ and 0 otherwise.} \end{proof} \subsection{Distributional Policy Optimization (DPO)} \label{sec: dpo} In order to overcome issues present in parametric distribution functions, we consider an alternative approach. In our solution, the policy does not evolve based on the gradient w.r.t. distribution parameters (e.g., $\mu, \sigma$), but rather updates the policy distribution according to \begin{equation*} \pi_{k+1} = \Gamma \left( \pi_k - \alpha_k \nabla_\pi d(\mathcal{D}^{\pi_k}_{I^{\pi_k}}, \pi) \mid_{\pi=\pi_k} \right), \end{equation*} where $\Gamma$ is a projection operator onto the set of distributions, $d:\Pi \times \Pi \to [0, \infty)$ is a distance measure (e.g., Wasserstein distance), and $\mathcal{D}^{\pi}_{I^{\pi}} (\state)$ is a distribution defined over the support ${I^{\pi}(\state) = \set{\action : A^{\pi}(\state,\action) > 0}}$ (i.e., the positive advantage). Table~\ref{table: distributions} provides examples of such distributions. Algorithm~\ref{alg:dpo} describes the Distributional Policy Optimization (DPO) framework as a three time-scale approach to learning the policy. It can be shown, under standard stochastic approximation assumptions \citep{borkar2009stochastic,konda2000actor,bhatnagar2012online,chow2017risk}, to converge to an optimal solution. DPO consists of 4 elements: (1) A policy $\pi$ on a fast timescale, (2) a delayed policy $\pi'$ on a slow timescale, (3) a value and (4) a critic, which estimate the quality of the delayed policy $\pi'$ on an intermediate timescale. Unlike the PG approach, DPO does not require access to the underlying p.d.f. In addition, $\pi$ which is updated on the fast timescale views the delayed policy $\pi'$, the value and critic as quasi-static, and as such it can be optimized using supervised learning techniques\footnote{Assuming the target distribution is 'fixed', the policy $\pi$ can be trained using a supervised learning loss, e.g., GAN, VAE or AIQN.}. Finally, we note that in DPO, the target distribution $\mathcal{D}^{\pi'}_{I^{\pi'}}$ induces a higher value than the current policy $\pi'$, ensuring an always improving policy. \cmnt{ In DPO, the target distribution $\mathcal{D}^{\pi'}_{I^{\pi'}}$ induces a higher value than the current policy. Table~\ref{table: distributions} provides several examples, where $I^\pi (\state)$ is defined as the set of all actions $\action \in \mathcal{A}$ with a positive advantage, i.e., $I^\pi (\state) = \{ \action : Q^\pi (\state, \action) > v^\pi (\state) \}$. A naive approach would be to attempt to define the target distribution as the $\argmax$, however, in non-convex problems, this may pose infeasible. Thus considering a distribution with probability mass only on actions with positive advantage, is a more tractable approach and is ensured to result in a monotonically improving policy.} The concept of policy evolution using positive advantage is depicted in Figure~\ref{fig:gac_evolution}. While the policy starts as a uni-modal distribution, it is not restricted to this subset of policies. 
As the policy evolves, fewer actions have positive advantage, and the process converges to an optimal solution. In the next section we construct a practical algorithm under the DPO framework using a generative actor. \begin{algorithm}[t] \caption{Distributional Policy Optimization (DPO)} \label{alg:dpo} \begin{algorithmic}[1] \State Input: learning rates $\alpha_k \gg \beta_k \gg \delta_k$ \State $\pi_{k+1} = \Gamma \left( \pi_k - \alpha_k \nabla_\pi d(\mathcal{D}^{\pi_k'}_{I^{\pi_k'}}, \pi) \mid_{\pi=\pi_k} \right)$ \State $Q^{\pi'}_{k+1} (\state, \action) = Q^{\pi'}_k (\state, \action) + \beta_k \left( r(\state, \action) + \gamma \, {\mathbb{E}}_{\state' \sim P(\cdot|\state,\action)} \left[ v^{\pi'}_k (\state') \right] - Q^{\pi'}_k (\state, \action) \right)$ \State $v^{\pi'}_{k+1} (\state) = v^{\pi'}_k (\state) + \beta_k \int_{\mathcal{A}} \pi_k'(\action|\state) \left( Q^{\pi'}_k (\state, \action) - v^{\pi'}_k (\state) \right) d\action$ \State $\pi_{k+1}' = \pi_k' + \delta_k (\pi_k - \pi_k')$ \end{algorithmic} \vspace{-0.1cm} \end{algorithm} \begin{table}[t!] \begin{center} \caption{Examples of target distributions over the set of improving actions}\label{table: distributions} \begin{tabular}{|l|l|} \hline \\[-1em] Argmax & $\mathcal{D}^\pi_{I^\pi(\state)}(\action|\state) = \delta_{\arg\max_{\action' \in I^\pi(\state)} A^\pi(\state,\action')}(\action|\state)$ \\ \hline \\[-1em] Linear & $\mathcal{D}^\pi_{I^\pi(\state)}(\action|\state) = \mathbf{1}_{\braces{\action \in I^\pi}}\frac{A^\pi(\state,\action)}{\int_{I^\pi(\state)}A^\pi(\state,\action')d \action'}$ \\ \hline \\[-1em] Boltzmann ($\beta > 0$) & $\mathcal{D}^\pi_{I^\pi(\state)}(\action|\state) = \mathbf{1}_{\braces{\action \in I^\pi}}\frac{\exp\pth{\frac{1}{\beta}A^\pi(\state,\action)}}{\int_{I^\pi(\state)} \exp\pth{\frac{1}{\beta}A^\pi(\state,\action')}d \action'}$ \\ \hline \\[-1em] Uniform & $\mathcal{D}^\pi_{I^\pi(\state)}(\action|\state) = \text{Uniform}(I^\pi(\state))$ \\ \hline \end{tabular} \end{center} \end{table} \cmnt{ The value and critic estimators which the advantage function and a delayed policy are learned in slower time-scales. One can show, under standard stochastic approximation assumptions \citep{borkar2009stochastic,konda2000actor,bhatnagar2012online,chow2017risk} that Algorithm \ref{alg:dpo} converges to an optimal fixed point $(\pi^*, Q^{\pi^*}, v^{\pi^*})$ (see supplementary material). Informally, the critic $Q^{\pi'}$ and value $v^{\pi'}$ estimators perform policy evaluation of the delayed policy $\pi'$. The policy $\pi$ minimizes the distance between its current distribution and that of $\mathcal{D}^{\pi'}_{I^{\pi'}}$. Finally, the delayed actor $\pi'$ tracks the policy $\pi$ slowly. While uncommon, the delayed actor is a crucial component, as minimizing the distribution distance $d$ does not necessarily result in a monotonically improving policy at each step, even though the converged result $\mathcal{D}_{I^{\pi'}}^{\pi'}$ does. In order to overcome these issues, one must consider an alternative approach, in which the policy does not evolve based on the gradient w.r.t. the distributions parameters, i.e., $\mu, \sigma$. We suggest an update rule which conservatively updates a policy given a target distribution $\pi_\text{target}$. More specifically, at iteration $k$, for $\alpha_k \in (0,1)$, the policy update rule is given by \begin{equation}\label{eqn:alpha_greedy} \pi_{k+1}(\action|\state) = (1-\alpha_k)\pi_k(\action|\state) + \alpha_k \pi_\text{target}(\action|\state) \, .
\end{equation} This update rule is used by well-known Policy Iteration schemes \citep{scherrer2014approximate}, including: \mbox{$\alpha$-Approximate Policy Iteration (API($\alpha$))}, $\alpha$-Conservative Policy Iteration (CPI($\alpha$)), and their exact form, $\alpha$-Policy Iteration (PI($\alpha$)). These methods use the greedy target policy ${\pi_\text{target}(\action|\state) \in \argmax_{\action \in A} (r(\state,\action) + \gamma \sum_{\state' \in S} P^\pi(\state'|\action) v^{\pi_k}(\state'))}$ in the exact case, or the $\epsilon$-greedy target policy in the approximate case. Figure~\ref{fig:general_evolution} illustrates this procedure, it presents a policy that evolves between two distributions - a Gaussian centered over some action to a delta function of the greedy action w.r.t. $Q$. It is well known that when $\pi_\text{target}$ is the 1-step greedy policy, this procedure converges to a globally optimal policy \citep{kakade2002approximately, scherrer2014approximate}. Finding the $\argmax$ over the continuous action set is a hard problem in non-convex regimes, hence, it is reasonable to instead define the target policy (i.e., $\pi_\text{target}$) as a distribution over improving actions. Distributional Policy Optimization (DPO) optimizes the policy in $\Pi$-space. The update rule of the policy is then defined by \begin{equation*} \pi_{k+1} = \Gamma \left( \pi_k - \alpha_k \nabla_\pi d(\mathcal{D}^{\pi_k}_{I^{\pi_k}}, \pi) \mid_{\pi=\pi_k} \right) \,, \end{equation*} where $\Gamma$ is a projection operator onto the set of distributions, $d:\Pi \times \Pi \to [0, \infty)$ is a distance measure, and $\mathcal{D}^{\pi}_{I^{\pi}} (\state)$ is a distribution defined over the support $I^{\pi}(\state) = \set{\action : A^{\pi}(\state,\action) > 0}$ (i.e., the positive advantage). Table~\ref{table: distributions} provides examples of such distributions. } \cmnt{ \section{Policy Search}\label{sec:policy search} In this section we compare two paradigms for policy search. The first focuses on parametric distribution functions and on policy search w.r.t. the distributions parameters $\Theta$ (e.g., directly optimizing the mean and variance of a Gaussian distribution), whereas the second approach considers the general class of distributions, namely $\Pi$. We show that optimization in the parametric distribution space is prone to sub-optimal behavior as opposed to policy space, in which optimality is guaranteed. We will focus on gradient based approaches, in which the policy evolves smoothly over time. Given a distance metric, we assume the policy remains ``close" between subsequent updates. More specifically, we define a smoothly evolving policy as follows. \begin{defn} \label{def:smooth evolving} A $C$-smoothly evolving policy $[\pi_1, \pi_2, \hdots, \pi_k]$ w.r.t. $\alpha_k$ is defined by \begin{equation} D(\pi_{k+1}, \pi_k) \leq C \cdot \alpha_k , \enspace \forall 1 \leq k < n \, , \end{equation} where $D$ is a distance metric (e.g., Wasserstein distance), $C$ is constant, and $\alpha_k$ is the step size. \end{defn} This definition is closely related to the concept of trust region policy optimization \citep{kakade2002approximately, schulman2015trust, schulman2017proximal}. In \cite{schulman2015trust} the KL-divergence is used as a premetric, constraining the policy, similar to Definition~\ref{def:smooth evolving}. \begin{figure}[t!] \begin{subfigure}[]{0.64\textwidth} \centering \
includegraphics[width=\textwidth]{figures/gac} \caption{Policy vs. Parameter Space}\label{fig: param vs policy gradient comparison} \end{subfigure}% \hspace*{0.2cm} \begin{subfigure}[]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/ddpg_evolution} \caption{Delta}\label{fig: delta evolution} \vspace*{\baselineskip} \includegraphics[width=\textwidth]{figures/ppo_evolution} \caption{Gaussian} \end{subfigure} \caption{(a): A conceptual diagram comparing policy optimization in $\Theta$ (black dots) in contrast to $\Pi$ (white dots). Plots depict $Q$ values in both spaces. While parametrized policies are non-convex sets in the distribution space, thus prone to converge to a local optima, approaches which consider the entire policy space ensure attainment of a global optima. (b,c): Policy evolution of Delta and Gaussian parameterized policies, respectively.} \label{fig:param_vs_policy} \vskip -0.2in \end{figure} \subsection{Optimizing over $\Theta$} \label{sec:parameter_space} Current practical approaches leverage the Policy Gradient Theorem \citep{sutton2000policy} in order to optimize a policy, which updates the policy parameters according to \begin{equation}\label{eqn: policy gradient} \theta_{k+1} = \theta_k + \alpha_k {\mathbb{E}}_{\state \sim d\pth{\pi_{\theta_k}}} {\mathbb{E}}_{\action \sim \pi_{\theta_k}(\cdot|\state)} \nabla_\theta \log \pi_\theta (\action|\state) \mid_{\theta = \theta_k} Q^{\pi_{\theta_k}} (\state, \action) \, , \end{equation} where $d\pth{\pi}$ is the stationary distribution of states under $\pi$. Since this update rule requires knowledge of the log probability of each action under the current policy, empirical methods in continuous control resort to parametric distribution functions. Most commonly used are the Gaussian \citep{schulman2017proximal}, Beta \citep{beta_gradients} and deterministic Delta \citep{lillicrap2015continuous} distribution functions. However, as Proposition~\ref{prop: k-modal doesnt converge} shows, this approach is not ensured to converge even when the policy space is enriched through mixture models, such as a mixture of $k$ Gaussians $\Theta^{kg}$, even though there exists an optimal policy which is deterministic (i.e., Delta) - a policy which is contained within this set. Under smoothness assumptions of $Q$, the update in Equation \eqref{eqn: policy gradient} results in a smoothly evolving policy (Definition \ref{def:smooth evolving}). However, as Proposition~\ref{prop: k-modal doesnt converge} shows, this approach is not ensured to converge even when the policy space is enriched through mixture models, such as a mixture of $k$ Gaussians $\Theta^{kg}$, even though there exists an optimal policy which is deterministic (i.e., Delta) - a policy which is contained within this set. \begin{proposition}\label{prop: k-modal doesnt converge} For any $k < \infty$, initial policy $\pi^0 \in \Theta^{kg}$ which is a mixture of $k$ Gaussians, and $L \in [0, v^*)$ there exists an MDP $\mathcal{M}$ such that $\pi^\infty$ satisfies \begin{equation} \norm{v^* - v^{\pi^\infty}}_\infty > L \, , \end{equation} where $\pi^\infty$ is the convergent result of a smoothly evolving policy (Definition~\ref{def:smooth evolving}) following the policy gradient direction \eqref{eqn: policy gradient} initialized at $\pi^0$ and restricted to the class of $k$-mixture distributions, $\Theta^{kg}$. \end{proposition} \begin{proof} Let $\alpha$ be an upper bound on the step size of the smoothly evolving policy. 
Denote by $\{\mu_i\}_{i=1}^k$ the modes of the Gaussian mixture. Denote by $\mu_0 = \min_i \mu_i - \alpha$. Denote $\Delta_i = \frac{\mu_{i+1} - \mu_i}{2}$. Let $\mathcal{A} = [\mu_0, \max_i \mu_i ]$. Let $\epsilon > 0$ and let $M > \epsilon$. We define $r(\action) = W_{\mu_0, \mu_0 + \Delta_0}(\action) + \sum_{i=1}^{k-1} W_{\mu_{i-1} + \Delta_{i-1}, \mu_i + \Delta_i }(\action) + W_{\mu_{k-1} + \Delta_{k-1}, \mu_k}(\action)$, where $W_{a,b}(x)$ takes the value of $1$ for $a \leq x \leq b$ and 0 otherwise. \cmnt{$r(\action) = \epsilon \abs{\cos(\frac{\action}{\alpha})} + (M - \epsilon) W_{0,\pi}(\action) \abs{\cos(\frac{\action}{\alpha})}$, where $W_{a,b}(x)$ takes the value of $1$ for $a \leq x \leq b$ and 0 otherwise.} Notice that there exist $k$ local maxima with the value of $\epsilon$ and a single maximum with the value of $M$. It is easy to see that $v^* = M$ for choosing action $\action = 0$. Next, consider a $k$-modal policy $\pi^0$, in which the mean of each modality is initialized at $\mu_i = 2\pi \cdot (i + 1) + \pi/2$ for $i \in \{0 \hdots k\}$. We have that $v^{\pi^0} = \epsilon$. Furthermore, since $\pi^k$ is a smoothly evolving policy restricted to $\Theta^{kg}$, we have that $\pi^k = \pi^0$ for all $k$. Hence, $\norm{v^* - v^{\pi^\infty}}_\infty = M - \epsilon$ and the result follows. \end{proof} \begin{figure}[t!] \centering \begin{subfigure}{0.45\textwidth} \label{fig: general 0} \centering \includegraphics[width=\textwidth]{figures/general_0} \caption{$\pi_0$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \label{fig: general 1} \centering \includegraphics[width=\textwidth]{figures/general_1} \caption{$\pi_1$} \end{subfigure}% \caption{Policy evolution of a general, non-parametric policy when the target policy is the $\argmax$. $\pi_0$ denotes the initial policy, a Gaussian in this example, and $\pi_1$ the policy after one update step. At each step probability mass is transferred to the action which maximizes the value.} \label{fig:general_evolution} \end{figure} The sub-optimality of parametric policy search does not occur due to the limitation induced by the parametrization of the policy function (e.g., the neural network), but is rather a result of the predefined set of policies. As an example, consider the set of Delta distributions. As illustrated in Figure~\ref{fig:param_vs_policy}, while this set is convex in the parameter $\mu$ (the mean of the distribution), it is not convex in the set $\Pi$. This is due to the fact that the mixture of two Delta distributions centered at $\mu_1$ and $\mu_2$, with weights $(1-\alpha)$ and $\alpha$, is a stochastic distribution over two supports, which cannot be represented using a single Delta function. Parametric distributions such as Gaussian and Delta functions highlight this issue: as the policy gradient considers the gradient w.r.t. the parameters $\mu, \sigma$, it results in local movement in the action space. Clearly such an approach can only guarantee convergence to a locally optimal solution and not a global one. \subsection{Optimizing over $\Pi$} \label{sec: general policy spaces} As seen in Section~\ref{sec:parameter_space}, limiting the policy space may potentially compromise the optimality of the solution. In order to ensure convergence, one must take into account the entire set of policy distributions, namely $\Pi$. We consider an update rule which conservatively updates a policy given a target distribution $\pi_\text{target}$. 
More specifically, at iteration $k$, for $\alpha_k \in (0,1)$, the policy update rule is given by \begin{equation}\label{eqn:alpha_greedy} \pi_{k+1}(\action|\state) = (1-\alpha_k)\pi_k(\action|\state) + \alpha_k \pi_\text{target}(\action|\state) \, . \end{equation} This update rule is used by well-known Policy Iteration schemes \citep{scherrer2014approximate}, including: \mbox{$\alpha$-Approximate Policy Iteration (API($\alpha$))}, $\alpha$-Conservative Policy Iteration (CPI($\alpha$)), and their exact form, $\alpha$-Policy Iteration (PI($\alpha$)). These methods use the greedy target policy ${\pi_\text{target}(\action|\state) \in \argmax_{\action \in A} (r(\state,\action) + \gamma \sum_{\state' \in S} P^\pi(\state'|\action) v^{\pi_k}(\state'))}$ in the exact case, or the $\epsilon$-greedy target policy in the approximate case. This procedure is illustrated in Figure~\ref{fig:general_evolution} for the exact case. The plot depicts a policy that evolves between two distributions - a Gaussian centered over some action to a delta function of the greedy action w.r.t. $Q$. It is well known that when $\pi_\text{target}$ is the 1-step greedy policy, this procedure converges to a globally optimal policy \citep{kakade2002approximately, scherrer2014approximate}. An interesting property of this approach is that it results in a smoothly evolving policy (Definition~\ref{def:smooth evolving}). Inspired by these methods, in the next section we introduce Distributional Policy Optimization (DPO), a distributional approach for updating a policy in $\Pi$-space. } \cmnt{ \subsection{A Generative Approach using Quantile Regression} A benefit of using distributions over actions is that they can be approximated using samples. Optimization in policy space $\Pi$ can thus be applied using empirical samples of the target distribution $\mathcal{D}^{\pi_k}_{I^\pi}$. This requires the ability to represent arbitrarily complex distributions for the policy, achievable using generative modeling techniques. Contrary to parametric distribution functions (e.g., Gaussian, in which the model outputs the mean and variance of the distribution), a generative model learns to directly map $\tau \sim U([0,1])$ to some target distribution. Generative models are able to represent arbitrarily complex distribution functions as they are only limited by their modeling capacity and not by the parameters of the predefined parametric distribution class. Optimization of generative models has been extensively studied in recent years with advances such as generative adversarial networks \citep{goodfellow2014generative}, variational inference \citep{kingma2013auto}, and autoregressive density estimation \citep{van2016conditional}. Here, we build upon the well-understood statistical method of quantile regression in order to optimize a generative model for the actor policy $\pi$ towards a target distribution $\pi'$.} \section{Method}\label{sec: method: our approach} In this section we present our method, the Generative Actor Critic, which learns a policy based on the Distributional Policy Optimization framework (Section~\ref{sec:dist_approach}). Distributional Policy Optimization requires a model which is both capable of representing arbitrarily complex distributions and can be optimized by minimizing a distributional distance. We consider the Autoregressive Implicit Quantile Network \citep{ostrovski2018autoregressive}, which is detailed below. 
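Before turning to the quantile-network machinery, the conservative distributional update $\pi_{k+1}=(1-\alpha_k)\pi_k+\alpha_k\pi_\text{target}$ that underlies DPO is easy to visualise on a toy problem. The sketch below is not part of the algorithm above: the discretised one-dimensional action grid, the toy $Q$-function and the use of a greedy (rather than advantage-based) target are illustrative assumptions only; its sole purpose is to show that the iterates are arbitrary distributions over actions rather than members of a fixed parametric family.
\begin{verbatim}
import numpy as np

# Toy illustration (not the paper's code): alpha-conservative updates
# pi_{k+1} = (1 - alpha) * pi_k + alpha * pi_target on a discretised
# 1-D action grid, with a greedy target for simplicity.
actions = np.linspace(-1.0, 1.0, 201)             # discretised action set
q_values = np.exp(-(actions - 0.3) ** 2 / 0.02)   # toy Q(s, .) with one peak

pi = np.ones_like(actions) / len(actions)         # pi_0: uniform over the grid
alpha = 0.1

for _ in range(50):
    target = np.zeros_like(pi)
    target[np.argmax(q_values)] = 1.0             # greedy target (a Delta)
    pi = (1.0 - alpha) * pi + alpha * target      # conservative mixture update

# Probability mass concentrates on the greedy action, while every iterate
# remains a generic distribution over the grid (an element of Pi).
print(actions[np.argmax(pi)], round(pi.max(), 3))
\end{verbatim}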
\subsection{Quantile Regression \& Autoregressive Implicit Quantile Networks}\label{sec: quantile regression} As seen in Algorithm~\ref{alg:dpo}, DPO requires the ability to minimize a distance between two distributions. The Implicit Quantile Network (IQN) \citep{dabney2018implicit} provides such an approach using the Wasserstein metric. The IQN receives a quantile value $\tau \in [0,1]$ and is tasked with returning the value of the corresponding quantile of a target distribution. As the IQN learns to predict the value of the quantile, it allows one to sample from the underlying distribution (i.e., by sampling $\tau \sim U([0,1])$ and performing a forward pass). Learning such a model requires the ability to estimate the quantiles. The quantile regression loss \citep{koenker2001quantile} provides this ability. It is given by $\rho_\tau (u) = (\tau - \mathbf{1}\{u \leq 0\})u$, where $\tau \in [0, 1]$ is the quantile and $u$ the error. Nevertheless, the IQN is only capable of coping with univariate (scalar) distribution functions. \cite{ostrovski2018autoregressive} proposed to extend the IQN to the multi-variate case using quantile autoregression \citep{koenker2006quantile}. Let $\mathbf{X} = (X_1, \hdots , X_n)$ be an $n$-dimensional random variable. Given a fixed ordering of the $n$ dimensions, the c.d.f. can be written as a product of conditional distribution functions $ F_\mathbf{X} (x) = P \left( X^1 \leq x^1, \hdots, X^n \leq x^n \right) = \Pi_{i=1}^n F_{X^i | X^{i-1}, \hdots, X^1} (x^i) \, . $ The Autoregressive Implicit Quantile Network (AIQN) receives an i.i.d. vector $\tau \sim U([0,1]^n)$. The network architecture then ensures that each output dimension $x_i$ is conditioned on the previously generated values $x_1, \hdots, x_{i-1}$; the network is trained by minimizing the quantile regression loss. \begin{figure}[t!] \centering \begin{subfigure}{0.45\textwidth} \label{fig: gac 0} \centering \includegraphics[width=\textwidth]{figures/gac_0_evolution} \caption{$\pi_0$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \label{fig: gac 1} \centering \includegraphics[width=\textwidth]{figures/gac_1_evolution} \caption{$\pi_1$} \end{subfigure}% \\ \begin{subfigure}{0.45\textwidth} \label{fig: gac 2} \centering \includegraphics[width=\textwidth]{figures/gac_2_evolution} \caption{$\pi_2$} \end{subfigure}% \begin{subfigure}{0.45\textwidth} \label{fig: gac 3} \centering \includegraphics[width=\textwidth]{figures/gac_3_evolution} \caption{$\pi_k$} \end{subfigure}% \caption{Policy evolution of a general, non-parametric policy, where the target policy is a distribution over the actions with positive advantage. The horizontal dashed line denotes the current value of the policy, the green region denotes the target distribution (i.e., the actions with a positive advantage) and $\pi_k$ denotes the policy after multiple updates. As opposed to Delta and Gaussian distributions, the fixed point of this approach is the optimal policy.} \label{fig:gac_evolution} \end{figure} \subsection{Generative Actor Critic (GAC)}\label{sec: gac} Next, we introduce a practical implementation of the DPO framework. As shown in Section~\ref{sec:dist_approach}, DPO is composed of 4 elements: an actor, a delayed actor, a value, and an action-value estimator. The Generative Actor Critic (GAC) uses a generative actor trained using an AIQN, as described below. 
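Since GAC's actor is ultimately trained with the quantile-regression loss introduced above, it may help to see that loss in its simplest scalar form. The following minimal sketch (not the paper's implementation; the sample distribution, step size and iteration count are arbitrary choices) recovers a quantile of an empirical sample by subgradient descent on the mean pinball loss $\rho_\tau(u)=(\tau-\mathbf{1}\{u\le 0\})u$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=10_000)
tau = 0.7
theta = 0.0            # scalar estimate of the tau-quantile
lr = 0.05

for _ in range(2_000):
    u = samples - theta
    # subgradient of the mean pinball loss w.r.t. theta
    grad = -np.mean(tau - (u <= 0.0).astype(float))
    theta -= lr * grad

# theta should be close to the empirical tau-quantile
print(theta, np.quantile(samples, tau))
\end{verbatim}
In the AIQN the same loss is applied per output dimension, with $\tau$ resampled at every step and the quantile estimate produced by the network rather than by a single scalar.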
Contrary to parametric distribution functions, a generative neural network acts as a universal function approximator, enabling us to represent arbitrarily complex distributions, as corollary of the following lemma. \begin{lemma}[Kernels and Randomization \citep{kallenberg2006foundations}] Let $\pi$ be a probability kernel from a measurable space $S$ to a Borel space $\mathcal{A}$. Then there exists some measurable function ${f: S \times [0, 1] \to \mathcal{A}}$ such that if $\theta$ is $U(0, 1)$, then $f(s, \theta)$ has distribution $\pi(\action|\state)$ for every $\state \in S$. \end{lemma} \textbf{Actor:} DPO defines the actor as one which is capable of representing arbitrarily complex policies. To obtain this we construct a generative neural network, an AIQN. The AIQN learns a mapping from a sampled noise vector $\tau \sim U([0,1]^n)$ to a target distribution. As illustrated in Figure~\ref{fig:architecture}, the actor network contains a recurrent cell which enables sequential generation of the action. This generation schematic ensures the autoregressive nature of the model. Each generated action dimension is conditioned only on the current sampled noise scalar $\tau^i$ and the previous action dimensions $\action^{i-1}, \hdots, \action^1$. In order to train the generative actor, the AIQN requires the ability to produce samples from the target distribution $\mathcal{D}^{\pi'}_{I^{\pi'}}$. Although we are unable to sample from this distribution, given an action, we are able to estimate its probability. An unbiased estimator of the loss can be attained by uniformly sampling actions and then multiplying them by their corresponding weight. More specifically, the weighted autoregressive quantile loss is defined by \begin{equation} \label{eq:weighted quantile loss} \sum_{\action_j \sim U(\mathcal{A})} \mathcal{D}^{\pi'}_{{I}^{\pi'}}(\action_j|\state) \sum_{i=1}^n \rho_{\tau^i_j}^k (\action^i_j - \pi_\phi (\tau^i_j | \action^{i-1}_j, \hdots, \action^1_j)) \,, \end{equation} where $\action^i_j$ is the $i^{th}$ coordinate of action $\action_j$, and $\rho_{\tau_j^i}^k$ is the Huber quantile loss \citep{huber1992robust, dabney2018distributional}. Estimation of ${I}^{\pi'}$ in the target distribution is obtained using the estimated advantage. \begin{wrapfigure}{r}{0.35\textwidth} \centering \includegraphics[width=0.33\textwidth]{figures/architecture} \caption{Illustration of the actor's architecture. $\otimes$ is the hadamard product, $\oplus$ a concatenation operator, and $\psi$ a mapping $[0,1] \mapsto \reals^d$.} \label{fig:architecture} \vspace{-0.3cm} \end{wrapfigure} \textbf{Delayed Actor:} The delayed actor, also known as Polyak averaging \citep{polyak1990new}, is an appealing requirement as it is common in off-policy actor-critic schemes \citep{lillicrap2015continuous}. The delayed actor is an additional AIQN $\pi_{\theta'}$, which tracks $\pi_\theta$. It is updated based on $\theta_{k+1}' = (1 - \alpha)\theta_k' + \alpha \theta_k$ and is used for training the value and critic networks. \textbf{Value and Action-Value:} While it is possible to train a critic and use its empirical mean w.r.t. the policy as a value estimate, we found it to be noisy, resulting in bad convergence. We therefore train a value network to estimate the expectation of the critic w.r.t. the delayed policy. In addition, as suggested in \cite{fujimoto2018addressing}, we train two critic networks in parallel. During both policy and value updates, we refer to the minimal value of the two critics. 
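Two of the ingredients above are simple enough to sketch in isolation; the snippet below is a hedged illustration (the function names, the averaging coefficient and the use of PyTorch are our own choices, not taken from the paper's implementation) of the delayed-actor update $\theta'_{k+1}=(1-\alpha)\theta'_k+\alpha\theta_k$ and of the twin-critic minimum used when training the value and policy networks.
\begin{verbatim}
import torch

def polyak_update(delayed_net, net, alpha=0.005):
    """Delayed actor: theta' <- (1 - alpha) * theta' + alpha * theta."""
    with torch.no_grad():
        for p_delayed, p in zip(delayed_net.parameters(), net.parameters()):
            p_delayed.mul_(1.0 - alpha).add_(alpha * p)

def min_q(critic1, critic2, state, action):
    """Clipped double-Q: use the smaller of the two critic estimates."""
    return torch.min(critic1(state, action), critic2(state, action))
\end{verbatim}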
We observed that taking the minimum over the two critics indeed reduced variance and improved overall performance. To summarize, GAC combines 4 elements. The delayed actor tracks the actor using a Polyak averaging scheme. The value and critic networks estimate the performance of the delayed actor. Provided the $Q$ and $v$ estimates, we are able to estimate the advantage of each action and thus propose the weighted autoregressive quantile loss, used to train the actor network. We refer the reader to the supplementary material for an exhaustive overview of the algorithm and architectural details. \section{Experiments}\label{sec:experiments} In order to evaluate our approach, we test GAC on a variety of continuous control tasks in the MuJoCo control suite \citep{todorov2012mujoco}. The agents are composed of $n$ joints: from 2 joints in the simplistic Swimmer task up to 17 in the Humanoid robot task. The state is a vector representation of the agent, containing the spatial location and angular velocity of each element. The action is a continuous $n$-dimensional vector, representing how much torque to apply to each joint. The task in these domains is to move forward as much as possible within a given time-limit. We run each task for 1 million steps and, as GAC is an off-policy approach, evaluate the policy every 5000 steps and report the average over 10 evaluations. We train GAC using a batch size of 128 and uncorrelated Gaussian noise for exploration. Results are depicted in Figure~\ref{fig: results}. Each curve presented is produced from 5 training procedures with randomly sampled seeds. In addition to our raw results, we compare to the relevant baselines\footnote{We use the implementations of DDPG and PPO from the OpenAI baselines repo \citep{baselines}, and TD3 \citep{fujimoto2018addressing} from the authors' GitHub repository.}, including: (1) DDPG \citep{lillicrap2015continuous}, \mbox{(2) TD3 \citep{fujimoto2018addressing}}, an off-policy actor-critic approach which represents the policy using a deterministic delta distribution, and (3) PPO \citep{schulman2017proximal}, an on-policy method which represents the policy using a Gaussian distribution. As we have shown in the previous sections, DPO and GAC only require \emph{some} target distribution to be defined, namely, a distribution over actions with positive advantage. In our results we present two such distributions: the linear and Boltzmann distributions (see Table \ref{table: distributions}). We also test a non-autoregressive version of our model\footnote{Theoretically, the dimensions of the actions may be correlated and thus should be represented using an auto-regressive model.} using an IQN. For completeness, we provide additional discussion regarding the various parameters and how they performed, in addition to a pseudo-code illustration of our approach, in the supplementary material. 
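For concreteness, the two target weightings used in our experiments can be sketched over a finite set of sampled actions as follows. The exact definitions live in Table~\ref{table: distributions}; the forms below (weights proportional to the positive advantage, respectively to $\exp(A/\beta)$ with an arbitrary temperature) are plausible stand-ins written only to illustrate the restriction to the positive-advantage support, not the precise choices made in the paper.
\begin{verbatim}
import numpy as np

def target_weights(advantages, kind="boltzmann", beta=1.0):
    """Illustrative weighting of sampled actions by their advantage."""
    adv = np.asarray(advantages, dtype=float)
    mask = adv > 0.0                  # support I^pi: positive advantage only
    if kind == "linear":
        w = np.where(mask, adv, 0.0)
    elif kind == "boltzmann":
        w = np.where(mask, np.exp(adv / beta), 0.0)
    else:
        raise ValueError(kind)
    total = w.sum()
    return w / total if total > 0 else np.full(adv.shape, 1.0 / adv.size)

print(target_weights([0.5, -0.2, 1.5, 0.1], kind="linear"))
\end{verbatim}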
\begin{figure}[t] \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/humanoid_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \includegraphics[width=\linewidth]{figures/ant_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \includegraphics[width=\linewidth]{figures/halfcheetah_2} \end{subfigure}% \\ \begin{subfigure}{.33\textwidth} \includegraphics[width=\linewidth]{figures/hopper_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \includegraphics[width=\linewidth]{figures/walker_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \includegraphics[width=\linewidth]{figures/swimmer_2} \end{subfigure}% \caption{Training curves on continuous control benchmarks. For the Generative Actor Critic approach we present both the autoregressive and non-autoregressive variants; the exact hyperparameters for each domain are provided in the appendix.} \label{fig: results} \end{figure} \textbf{Comparison to the policy gradient baselines:} Results in Figure~\ref{fig: results} show the ability of GAC to solve complex, high dimensional problems. GAC attains competitive results across all domains, often outperforming the baseline policy gradient algorithms and exhibiting lower variance. This is somewhat surprising, as GAC is a vanilla algorithm: it is not supported by the numerous improvements present in recent PG methods. In addition to these results, we provide numerical results in the supplementary material, which emphasize this claim. \textbf{Parameter Comparison:} Below we discuss how various parameters affect the behavior of GAC in terms of convergence rates and overall performance: \begin{enumerate} \item At each step, the target policy is approximated through samples using the weighted quantile loss (Equation \eqref{eq:weighted quantile loss}). The results presented in Figure~\ref{fig: results} are obtained using 32 (256 for HalfCheetah and Walker) samples at each step. 32 (128) samples are taken uniformly over the action space and 32 (128) from the delayed policy $\pi'$ (a form of combining exploration and exploitation). Ablation tests showed that increasing the number of samples improved stability and overall performance. Moreover, we observed that the combination of both sampling methods is crucial for success. \item Not presented is the Uniform distribution, which did not work well. We believe this is due to the fact that the Uniform target provides equal weight to actions which are very good and to those which barely improve the value. \item We observed that in most tasks, similar to the observations of \cite{korenkevych2019autoregressive}, the AIQN model outperforms the IQN (non-autoregressive) one. \end{enumerate} \begin{table}[t] \centering \caption{Relative best GAC results compared to the best policy gradient baseline} \hbox{\hspace{-1cm} \scalebox{0.85}{ \begin{tabular}{c|c|c|c|c|c|c} \hline \thead{Environment} & Humanoid-v2 & Walker2d-v2 & Hopper-v2 & HalfCheetah-v2 & Ant-v2 & Swimmer-v2\\ \hline \thead{Relative Result} & $\mathbf{+3447}\,(+595\%)$ & $\mathbf{+533}\,(+14\%)$ & $\mathbf{+467}\,(+17\%)$ & $\mathbf{-381}\,(-4\%)$ & $\mathbf{-444}\,(-8\%)$ & $\mathbf{+107}\,(+81\%)$\\ \hline \end{tabular} }} \label{tab: relative results} \end{table} \section{Related Work} \label{sec:related_work} \textbf{Distributional RL:} Recent interest in distributional methods for RL has grown with the introduction of deep RL approaches for learning the distribution of the return. 
\cite{bellemare2017distributional} presented the C51-DQN which partitions the possible values $[-v_{\max},v_{\max}]$ into a fixed number of bins and estimates the p.d.f. of the return over this discrete set. \cite{dabney2017distributional} extended this work by representing the c.d.f. using a fixed number of quantiles. Finally, \cite{dabney2018implicit} extended the QR-DQN to represent the entire distribution using the Implicit Quantile Network (IQN). In addition to the empirical line of work, \cite{qu2018nonlinear} and \cite{rowland2018analysis} have provided fundamental theoretical results for this framework. \textbf{Generative Modeling:} Generative Adversarial Networks (GANs) \citep{goodfellow2014generative} combine two neural networks in a game-theoretic approach which attempts to find a Nash Equilibrium. This equilibrium is found when the generative model is capable of ``fooling'' the discriminator (i.e., the discriminator is no longer capable of distinguishing between samples produced from the real distribution and those from the generator). Multiple GAN models and training methods have been introduced, including the Wasserstein-GAN \citep{arjovsky2017wasserstein} which minimizes the Wasserstein loss. However, as the optimization scheme is highly non-convex, these approaches are not proven to converge and may thus suffer from instability and mode collapse \citep{salimans2016improved}. \textbf{Policy Learning:} Learning a policy is generally performed using one of two methods. The Policy Gradient (PG) \citep{reinforce,pg_theorem} defines the gradient as the direction which maximizes the reward under the assumed policy parametrization class. Although there have been a multitude of improvements, including the ability to cope with deterministic policies \citep{silver2014deterministic,lillicrap2015continuous}, stabilize learning through trust region updates \citep{schulman2015trust,schulman2017proximal} and Bayesian approaches \citep{ghavamzadeh2016bayesian}, these methods are restricted to parametric distribution sets (as the gradient is w.r.t. the log probability of the action). An alternative line of work formulates the problem as maximum-entropy RL \citep{haarnoja2018soft}, which enables the definition of the target policy using an energy functional. However, training is performed via minimizing the KL-divergence. The need to know the KL-divergence limits practical implementations to parametric distribution functions, similar to PG methods. \section{Discussion and Future Work} \label{sec:discussion} In this work we presented limitations inherent to empirical Policy Gradient (PG) approaches in continuous control. While current PG methods in continuous control are computationally efficient, they are not ensured to converge to a global extremum. As the policy gradient is defined w.r.t. the log probability of the policy, the gradient results in local changes in the action space (e.g., changing the mean and variance of a Gaussian policy). These limitations do not occur in discrete action spaces. In order to ensure better asymptotic results, it is often necessary to use methods that are more complex and computationally demanding (i.e., ``No Free Lunch'' \citep{wolpert1997no}). Existing approaches attempting to mitigate these issues either enrich the policy space using mixture models or discretize the action space. However, while the discretization scheme is appealing, there is a clear trade-off between optimality and efficiency. 
While finer discretization improves guarantees, the complexity (number of discrete actions) grows exponentially in the action dimension \citep{tang2019discretizing}. The limitations inherent in PG approaches also persist when considering mixture models, such as Gaussian mixtures. A mixture model of $k$ Gaussians provides a categorical distribution over $k$ Gaussian distributions. The policy gradient w.r.t. these parameters, similarly to the single Gaussian model, directly controls the mean $\mu$ and variance $\sigma$ of each Gaussian independently. As such, even a mixture model is confined to local improvement in the action space. In practical scenarios, and as the number of Gaussians grows, it is likely that the modes of the mixture would be located in the vicinity of a global optimum. A Gaussian mixture model may therefore be able to cope with various non-convex continuous control problems. Nevertheless, we note that Gaussian mixture models, unlike a single Gaussian, can be numerically unstable: due to the summation over Gaussians, the log probability of the mixture does not decompose into a sum of per-component terms, which can cause numerical issues and thus hinder the learning process. These insights lead us to question the optimality of current PG approaches in continuous control, suggesting that, although these approaches are well understood, there is room for research into alternative policy-based approaches. In this paper we suggested the Distributional Policy Optimization (DPO) framework and its empirical implementation - the Generative Actor Critic (GAC). We evaluated GAC on a series of continuous control tasks under the MuJoCo control suite. When considering overall performance, we observed that despite the algorithmic maturity of PG methods, GAC attains competitive performance and often outperforms the various baselines. Nevertheless, as noted above, there is ``no free lunch''. While GAC remains as sample efficient as the current PG methods (in terms of the batch size during training and number of environment interactions), it suffers from high computational complexity. \cmnt{Our current model uses a naive sampling scheme with many samples having a negative advantage. These samples are thus discarded.} Finally, the elementary framework presented in this paper can be extended in various future research directions. First, improving the computational efficiency is a top priority for GAC to achieve deployment in real robotic agents. In addition, as the target distribution is defined w.r.t. the advantage function, future work may consider integrating uncertainty estimates in order to improve exploration. Moreover, PG methods have been thoroughly researched and many of their improvements, such as trust region optimization \citep{schulman2015trust}, can be adapted to the DPO framework. Finally, DPO and GAC can be readily applied to other well-known frameworks such as the Soft-Actor-Critic \citep{haarnoja2018soft}, in which the entropy of the policy is encouraged through an augmented reward function. We believe this work is a first step towards a principled alternative for RL in continuous action space domains. \cmnt{ In this work we presented a new paradigm for policy search in continuous action spaces. Inherent limitations of parametric distribution classes are eliminated using a distributional framework (DPO), yielding global optimality. DPO is based on insights from Policy Iteration schemes, in which the policy is updated in distribution space. 
This is contrary to current Policy Gradient methods, in the continuous action setting, for which the policy is optimized in the space of its parameters, which is inherently limited to the space of actions. DPO builds a principal alternative for RL in continuous action space domains. Based upon DPO, we proposed a practical off-policy generative algorithm. In GAC, a generative model is optimized by minimizing the Wasserstein distance between the actor's policy distribution and some improving policy distribution. Such an optimization method is appealing as it can be implemented using Implicit Quantile Networks \citep{dabney2018implicit}. In our work we chose to model the autoregressive nature of the action using recurrent neural networks. Empirically we see that GAC compares to, and often outperforms, current state-of-the-art PG methods (despite their current algorithmic maturity). The elementary framework presented in this paper can be extended in various future research directions. It is interesting to explore other generative optimization methods as well as their practical and theoretical implications. In this work we used generic schemes for sampling and targeting actions. Future work may explore this venue by stabilizing the optimization process and speeding up learning using proximal methods, better exploration schemes, and improved uncertainty estimates of the critic and value. In addition, in its current formulation, GAC suffers from high computation complexity due to the recurrent autoregressive model. Attention-based transformer models \citep{vaswani2017attention} can mitigate slow convergence rates and lower the overall model complexity. Finally, GAC can be readily applied to other well-known frameworks such as the Soft-Actor-Critic \citep{haarnoja2018soft}, in which entropy of the policy is encouraged through an augmented reward function. } \section{Acknowledgement} We thank Yonathan Efroni for his fruitful comments that greatly improved this paper. \bibliographystyle{plainnat}
\section*{Introduction} In $1925$, {\sc Born} and {\sc Jordan} introduced the non-commutative algebra of formal power series in the variables $q,p,h$ subject to the relation \[ pq-qp=\frac{h}{2\pi i}\] to explain the calculation of {\sc Heisenberg} for the spectrum of the anharmonic oscillator~\cite{Born_Jordan}. Since then quantum mechanics has been considered as a deformation of classical mechanics, with $h$ as a small parameter. However, it is also well-known that many quantities of interest are not holomorphic in $h$ near $h=0$; the wave associated by {\sc De Broglie} to a free particle with momentum $p =h k/2\pi$ \[ e^{2\pi i p q/h}=e^{i k q},\] being a fundamental case in point. Further examples related to the Hamiltonian \[ H=\frac{1}{2m}p^2 +V(q)\] of a particle of mass $m$ in a potential $V(q)$ are numerous: tunnel amplitudes, in the first order of the WKB-approximation, like \[ e^{-\frac{2\pi}{h} \int \sqrt{2m(V(q)-E)}dq},\;\] or the exponentially small separation between the first and the second eigenvalue of the quartic oscillator with potential $V(q)=q^4-\beta q^2$. These phenomena lead to the fact that most series in $h$ appearing in perturbation theory are divergent and have an asymptotic meaning at best, a point of view already advocated by {\sc Birkhoff} \cite{birkhoff1933quantum}. The traditional approach to deal with such quantities is to use classical Hilbert space analysis on the Schr\"o\-dinger equation or to use semi-classical or more general micro-local analysis~\cite{Sjostrand,Reed_Simon,zworski2012semiclassical}. Deformation quantisation initially ignored exponentially small quantities; as in formal quantum mechanics, series in $h$ had only a formal meaning~\cite{Flato}. Nevertheless, in the late eighties, {\sc Rieffel} constructed examples of non-formal deformation quantisations in the real differentiable context, and since then there have been several works in this direction~\cite{rieffel1989deformation}~(see also \cite{bieliavsky2011deformation}). Parallel to real analysis, one may study these divergent expansions from the complex geometric viewpoint. It was indeed realised early (or sometimes simply conjectured) that many of them have a property of {\em endless analytic continuation}, when expressed in a Borel transformed variable $\xi$. This led {\sc Voros} and {\sc Zinn-Justin} to exact quantisation formul\ae\ which were later explained, by {\sc Delabaere} and {\sc Pham}, as resurgence properties of the complex WKB expansions~\cite{delabaere1997unfolding,Eremenko_Gabrielov_quartic,Voros,Zinn_Justin_nuclear,zinn2004multi}. However, this approach, which gathers the Voros-Zinn-Justin conjectures and resurgence analysis, is for the moment still conjectural. The purpose of this paper is to define a {\em resurgent Heisenberg algebra} $\mathcal{Q}^A$, or more precisely an algebra of resurgent operators with algebraic singularities. We hope this algebra will be rich enough to capture quantum effects beyond perturbation theory and lead to a better understanding of the complex WKB method and exact quantisation conjectures. However, for the moment, we observe that the dual star-algebra defined in this paper obeys {\sc \'Ecalle}'s philosophy that although complicated transcendental functions may appear, the description of their singularities is simple and can be made explicit. For instance, we will see that Laplace transforms of hypergeometric functions appear naturally as products of algebraic functions. 
\section{Heisenberg algebras} In this section we introduce various versions of the Heisenberg algebra. As $h$, the imaginary unit $i$ and factors of $2\pi$ appear in many formulas, we will set \[ t:=\frac{h}{2\pi i}~.\] On the polynomial ring $\mathbb{C}[t,q,p]$, we consider the (non-commutative associative) normal product $\star$ given by \[p \star q=qp+t,\;\; q \star p=qp,\] and furthermore \[p \star p^{n-1}=p^n,\;\;q \star q^{n-1}=q^n,\;\;t \star p=p \star t =tp,\;\;t \star q=q \star t=tq,\] where on the right hand side we use the ordinary product of polynomials. The resulting algebra with product $\star$ is known as the {\em Heisenberg algebra} and will be denoted by $\mathcal{Q}$. The mapping $q \mapsto q,\;\;p \mapsto t\frac{d}{dq},$ identifies $\mathcal{Q}$ with the {\em Weyl-algebra} of $t$-differential operators $\mathcal{Q} \cong \mathbb{C}<t,q,t\frac{d}{dq}>$. When we write elements $f,g \in \mathcal{Q}$ as $f=\sum_{n\ge 0} f_n t^n$, $g=\sum_{n \ge 0} g_n t^n$, with coefficients $f_n, g_m \in \mathbb{C}[q,p]$, we can expand the $\star$-product of $f$ and $g$ as \[ f \star g= \sum_{l \ge 0} h_l t^l.\] \begin{proposition}[\cite{Moyal}]\label{P::Moyal} The coefficient $h_l$ in the expansion \[ f \star g= \sum_{l \ge 0} h_l t^l\] is given by the formula \begin{equation} h_l=\sum_{n+m+k=l} \frac{1}{k!} \frac{\partial^k f_n(q,p)}{\partial p^k}\frac{\partial^k g_m(q,p)}{\partial q^k}, \label{eq-starformel} \end{equation} where $f=\sum_{n\ge 0} f_n t^n$, $g=\sum_{n \ge 0} g_n t^n$. \end{proposition} As these expressions make sense for formal power series, one can use this formula to obtain a $\star$-product on $\mathbb{C}[[t,q,p]]$. The resulting algebra we call the {\em formal Heisenberg algebra} and denote it by $\widehat{\mathcal{Q}}$. Clearly $\mathcal{Q} \subset \widehat{\mathcal{Q}}$. There are various interesting algebras between $\mathcal{Q}$ and $\widehat{\mathcal{Q}}$, for example the algebras $\mathbb{C}[[t]][q,p]$ and $\mathbb{C}[q,p][[t]]$, that appear naturally in constructions that proceed order-by-order in $t$ or $h$. But in this paper we will be interested in quite different sub-algebras of $\widehat{\mathcal{Q}}$ that are characterised by analytic properties and analytic continuation. \subsection*{There is no $\star$-algebra of analytic operators.} It is a fundamental fact that it is not possible to define a $\star$-algebra of analytic operators. Even for meromorphic functions, the $\star$-product leads in general to divergent series and is therefore ambiguous. We can observe this fact by explicit computation. Let us denote by \begin{equation*} E(t):=\sum_{n=0}^\infty n! t^n \end{equation*} the power series considered by {\sc Euler}~\cite{Euler_divergent}. \begin{proposition} \label{P::Euler} The star-product of $\frac{1}{1-p} $ and $ \frac{1}{1-q}$ is a divergent series given by the formula $$ \frac{1}{1-p} \star \frac{1}{1-q}=\frac{1}{(1-p)(1-q)}E\left(\frac{t}{(1-p)(1-q)}\right). $$ \end{proposition} \begin{proof} We have to compute $\sum_{n,m\ge 0} p^n \star q^m$. From the formula \eqref{eq-starformel} of the $\star$-product we find $$ \begin{array}{rcl} p^n \star q^m&=&\sum_{k \ge 0} \frac{1}{k!}\partial^k_p p^n \partial_q^kq^m t^k=\sum_{k \geq 0}k!\begin{pmatrix}n\\ k \end{pmatrix} \begin{pmatrix}m\\k \end{pmatrix}p^{n-k}q^{m-k}t^k. \end{array} $$ Summing over $n,m$ and using $\frac{1}{(1-x)^{k+1}}=\sum_{n \geq 0}\begin{pmatrix}n+k \\ k \end{pmatrix} x^n$, we obtain \begin{eqnarray*} \sum_{n,m \geq 0} p^n \star q^m&=&\sum_{k \geq 0}k! 
\frac{1}{(1-p)^{k+1}} \frac{1}{(1-q)^{k+1}}t^k\\[0.3cm] &=&\frac{1}{(1-p)(1-q)}E\left(\frac{t}{(1-p)(1-q)}\right). \end{eqnarray*} \end{proof} A similar calculation gives the following slightly more general formula: \[\frac{1}{1-(\a p+\b q)} \star \frac{1}{1-(\gamma p+\delta q)}=\frac{1}{\Delta}E(\frac{\a \delta }{\Delta}),\] where \[\Delta:=(1-(\a p+\b q))(1-(\gamma p+\delta q)).\] These examples show that the product of two meromorphic functions leads to a series in $t$, that for no fixed values of $p$ and $q$ can be interpreted as the Taylor expansion of a holomorphic function in $t$ at the origin. \section{The {\sc Gevrey Heisenberg} algebra and its {\sc Borel} dual} \label{S::Borel} Following {\sc Borel}, one may interpret the divergent series that appear in the above calculation as the asymptotic expansion of a Laplace integral~\cite{borel1901leccons}. To do this, in the case of one variable, we first define the {\em Borel transform} of a series $f(t)=\sum_{n \ge 0} a_n t^n$ as the series in a ``Borel-dual'' variable $\xi$ defined by: \[ g(\xi)=\sum_{n \ge 0} a_n \frac{\xi^n}{n!}.\] For example, the Euler power series $E(t)=\sum_n n! t^n$ has $g(\xi):=\sum_n\xi^n$ as its Borel transform, which is equal to $\frac{1}{1-\xi}$ if $|\xi|<1$. If the Borel transform has a positive radius of convergence $R$, for any $r <R$, one can consider the function \begin{equation} F_{r}(t):=\frac{1}{t} \int_0^r g(\xi) e^{-\xi/t}d\xi\;.\label{eq-invbeta} \end{equation} The function $F_r$ is holomorphic in the half-plane $\Re(t)>0$, and from the formula $$ n!t^n=\frac{1}{t} \int_{0}^{\infty} \xi^n e^{-\xi/t}d\xi, $$ one can show that the function $F_r$ has the series $f(t)$ as asymptotic expansion on the half-plane: $F_{r}(t) \sim f(t)$. Note however, that the function $F_{r}$ depends not only on $f$, but also on~ $r$. In particular, to associate a function to the formal power series expansion in this way is, in general, ambiguous. \subsection*{The Gevrey-Heisenberg algebra} Although there is no analytic $\star$-algebra, there is a Gevrey one. In particular, the type of divergence that appeared in the above example computation of the $\star$-product is typical. We now recall this observation which goes back to {\sc Boutet de Monvel} and {\sc Kr\'ee}~\cite{Boutet_Kree} (see also \cite{Pham_resurgence,Sjostrand_asterisque}). To do this, we consider the {\em formal Borel transform } \[ \beta: \mathbb{C}[[t,q,p]] \longrightarrow \mathbb{C}[[\xi,q,p]] \] defined by setting \[\beta(\sum_{ijk} a_{ijk}q^ip^jt^k)=\sum_{ijk}a_{ijk}q^ip^j\frac{\xi^k}{k!}.\] Note that it is a linear bijection that maps $\mathbb{C}[t,q,p]$ onto $\mathbb{C}[\xi,q,p]$, but, of course, it is not compatible with the product. As usual, we denote by $\mathbb{C}\{\xi,q,p\}$ the ring of convergent power series. A series $f \in \mathbb{C}[[t,q,p]]$ such that $\beta(f) \in \mathbb{C}\{\xi,q,p\}$ is called a {\em Gevrey series}. We denote by \[\mathcal{Q}^G:=\{ f \in \mathbb{C}[[t,q,p]]\;\;|\;\;\beta(f) \in \mathbb{C}\{\xi,q,p\} \}\] the set of all Gevrey series (in $t$, but holomorphic in $q,p$), and we recall the following standard result concerning the $\star$-product. \begin{proposition}[\cite{Boutet_Kree,Pham_resurgence,Sjostrand_asterisque}] \label{P::product} The subset $\mathcal{Q}^G \subset \widehat{\mathcal{Q}}$ is a subalgebra, i.e., if two functions have a convergent Borel transform, so does their $\star$-product. 
\end{proposition} The algebra $\mathcal{Q}^G$ was used in \cite{quantique} to prove a general result saying that the formal Rayleigh-Schr\"odinger series for the $n$-th energy level of an anharmonic oscillator are in fact Gevrey series. \subsection*{The Borel dual algebra} One can also use the map $\beta$ to transfer the $\star$-product on $\mathbb{C}[[t,q,p]]$ to $\mathbb{C}[[\xi,q,p]]$ and write the Heisenberg algebra in the dual variable $\xi$. So, we introduce the following new product: for any $f,g\in\mathbb{C}[[\xi,q,p]]$, \begin{equation} f * g :=\beta(\beta^{-1}(f) \star \beta^{-1}(g)),\label{eq-def*} \end{equation} and expand $f=\sum_n \phi_n\xi^n$ and $g=\sum_m \psi_m \xi^m$ in series with $\phi_n,\psi_m \in \mathbb{C}[[q,p]]$. One can see that the product \eqref{eq-def*} is given by the formula: \[f * g= \sum_{l\ge 0} \gamma_l \xi^l,\] where \begin{equation} \gamma_l=\sum_{n+m+k=l} \frac{n! m! k!}{(n+m+k)!} \frac{1}{k!}\partial_p^k\phi_n(q,p) \frac{1}{k!}\partial_q^k\psi_m(q,p).\label{eq-formel*} \end{equation} This corresponds to the dual version of \eqref{eq-starformel}. Applied to polynomials of $\mathbb{C}[\xi,q,p]$, it gives: $$q * p=qp,\quad p * q=qp+\xi,\quad \xi^n * \xi^m=\frac{n!m!}{(n+m)!}\xi^{n+m}. $$ Thus, we can directly obtain a dual version of Proposition~\ref{P::product} : \begin{proposition} \label{P::Borel} Consider the non-commutative associative product on $\mathbb{C}[[\xi,q,p]]$ defined by \eqref{eq-def*}. For any convergent power series $f,g \in \mathbb{C}\{\xi,q,p\}$, the product $f * g$ is also in $\mathbb{C}\{\xi,q,p\}$. \end{proposition} Note that this result can also be derived from the integral formula of the $*$-product given in the next section (Proposition~\ref{P::integral}).\\ We will denote the algebra $\mathbb{C}\{\xi,q,p\}$ with the product $*$ by $\mathcal{Q}^B$ and call it the {\em Borel dual algebra}. The formal Borel transform identifies it with the algebra $\mathcal{Q}^G$, that is, the linear bijection $$\beta: \mathcal{Q}^G \longrightarrow \mathcal{Q}^B$$ interchanges the $\star$-product on the left hand side with the $*$-product on the right hand side. In going from $\mathbb{C}[[t,q,p]]$ to $\mathbb{C}[[\xi,q,p]]$ with the formal Borel transform, it will sometimes be useful to use the same name for a series in $\mathcal{Q}^G$ and its Borel transform in $\mathcal{Q}^B$ and simply write $f(\xi,q,p)$ for $\beta(f(t,q,p))$. \begin{proposition}\label{ex-*} The $*$-product of $1/(1-p)$ and $1/(1-q)$ in $\mathcal{Q}^B$ is given by the formula $$ \frac{1}{1-p} * \frac{1}{1-q}=\frac{1}{(1-p)(1-q)-\xi}. $$ \end{proposition} \begin{proof} Indeed, we have $$p^n * q^m=\sum_{k \geq 0}\begin{pmatrix}n\\ k \end{pmatrix} \begin{pmatrix}m\\k \end{pmatrix}p^{n-k}q^{m-k}\xi^k. $$ As in the proof of Proposition \ref{P::Euler}, we obtain $$\sum_{n,m \geq 0} p^n * q^m=\sum_{k \geq 0}\frac{1}{(1-p)^{k+1}} \frac{1}{(1-q)^{k+1}}\xi^k=\frac{1}{(1-p)(1-q)-\xi}.$$ \end{proof} This proposition has two consequences: it implies Proposition~\ref{P::Euler} and it explains the origin of the divergence for the $\star$-product. Indeed, choose any $r \in ]0,1[$, Proposition \ref{ex-*} and Formula \eqref{eq-invbeta} imply that the function $$\frac{1}{t}\int_{0}^r \frac{1}{(1-p)(1-q)-\xi}\, e^{-\xi/t} d\xi$$ has the series given by $$\frac{1}{1-p} \star \frac{1}{1-q}, $$ as asymptotic expansion. 
As the meromorphic function $\frac{1}{1-\xi}$ is the Borel transform of the Euler series $E(t)$, the asymptotic expansion at the origin of the right-hand side gives the formula of Proposition~\ref{P::Euler}. The divergence of the $\star$-product is now explained: it is due to the appearance of singularities in the dual variable. The ambiguity in the choice of the integration path gives rise to small exponential corrections, which cannot be captured by perturbation theory. Let us make this more precise. The $\star$-product $$\frac{1}{1-p} \star \frac{1}{1-q} $$ is ambiguous since it defines a divergent series which can be interpreted as the asymptotic expansion of many holomorphic functions. However, from the dual viewpoint, that is, for the $*$-product in the $\xi$-variable, there is no longer any ambiguity, and the product is given by: $$\frac{1}{1-p} * \frac{1}{1-q}=\frac{1}{(1-p)(1-q)-\xi} .$$ As a function of the variable $\xi$, the meromorphic function $\frac{1}{(1-p)(1-q)-\xi}$ is not only holomorphic at zero: it extends to the whole punctured complex $\xi$-plane, with a simple pole at the puncture $\xi=(1-p)(1-q)$. Let us now slightly deform the half-line going from $0$ to $+\infty$ to paths $\gamma_+$ and $\gamma_-$, in the upper and lower half-plane as in Figure \ref{fig-eulerpaths}. \begin{figure}[!htb] \centering \includegraphics[width=10cm]{eulerpaths.pdf} \caption[Euler]{\footnotesize{Deformation of $[0,+\infty)$ into $\gamma_+,\gamma_-$.}} \label{fig-eulerpaths} \end{figure} By following each of these two integration paths, we obtain two preferred ``Euler functions'' $E_+$ and $E_-$ defined by \[E_{\pm}(t):=\frac{1}{t}\int_{\gamma_{\pm}}\frac{1}{1-\xi}e^{-\xi/t} d\xi.\] These are both asymptotic to the Euler series $E(t)$ in the half-plane $\Re(t) >0$ and differ by an exponentially small function: \begin{equation} E_{-}(t)-E_{+}(t)=\frac{1}{t}\int_{\sigma} \frac{1}{1-\xi}e^{-\xi/t}d\xi=\frac{2\pi i}{t}e^{-1/t}.\label{eq-smallexp} \end{equation} Here $\sigma$ is a small loop running in the positive direction around the pole at $1$. This small exponential factor explains the divergence of the original $\star$-product. Now the important point is that, knowing the singularities of $f$ and $g$, we are going to describe the singularities of $f*g$. To this aim, we first give an integral formula for the $*$-product. \section{Integral formula for the $*$-product} Proposition \ref{ex-*} shows that the $*$-product (in the Borel dual variable $\xi$) can be analytically continued along all paths that avoid a rather small set. Our aim is to prove, more generally, that the $*$-product of two multi-valued functions over $\mathbb{C}^{2n+1}$ whose singularity set is algebraic is again a function of this type~(Theorem~\ref{T::product}). This will be done by first proving an explicit integral formula for the $*$-product in this section. To obtain such an integral formula, we start with the case of the $\star$-product (in the $t$-variable) and then we look at its integral expression in the Borel plane. 
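Before doing so, let us record a quick coefficientwise sanity check of Proposition~\ref{ex-*} (a side computation, not needed for the argument). The term-by-term formula $p^n * q^m=\sum_{k}\binom{n}{k}\binom{m}{k}p^{n-k}q^{m-k}\xi^k$ from its proof predicts that the coefficient of $p^aq^b\xi^c$ in $\frac{1}{1-p}*\frac{1}{1-q}$ equals $\binom{a+c}{c}\binom{b+c}{c}$, which should match the Taylor expansion of $\frac{1}{(1-p)(1-q)-\xi}$. The short SymPy sketch below verifies this agreement in low degree.
\begin{verbatim}
from sympy import symbols, binomial, factorial, simplify

p, q, xi = symbols('p q xi')
f = 1 / ((1 - p) * (1 - q) - xi)   # closed form of Proposition (ex-*)

for a in range(4):
    for b in range(4):
        for c in range(4):
            # Taylor coefficient of p^a q^b xi^c, extracted by differentiation
            coeff = f.diff(p, a).diff(q, b).diff(xi, c).subs({p: 0, q: 0, xi: 0})
            coeff = coeff / (factorial(a) * factorial(b) * factorial(c))
            expected = binomial(a + c, c) * binomial(b + c, c)
            assert simplify(coeff - expected) == 0

print("coefficients agree up to degree 3 in each variable")
\end{verbatim}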
\subsection*{The thimble formula} The following integral expression for the $\star$-product on $\mathcal{Q}$ is a variant of the Moyal formula \cite{Moyal}: \begin{proposition} The $\star$-product of two polynomials $f,g \in \mathcal{Q}=\mathbb{C}[t,q,p]$ is given by the integral formula \begin{equation} f \star g(t,q,p)= \frac{1}{2\pi i t} \int_ \mathbb{C} f(t,q,p+\bar z ) g(t,q+z,p) e^{-| z|^2/t} d\bar z \wedge dz.\label{eq-starintegr} \end{equation} \end{proposition} \begin{proof} It suffices to check the formula for $f=p^n$ and $g=q^m$. In the expansion \[ (p+\bar z)^n(q+z)^m =\sum_{k,l \ge 0} \begin{pmatrix}n\\ k \end{pmatrix} \begin{pmatrix}m \\ l \end{pmatrix} p^{n-k} q^{m-l} z^l \bar z^k, \] only the terms with $k=l$ will contribute to the integral. Furthermore, one has \[ \int_{\mathbb{C}} |z|^{2k} e^{-|z|^2/t} d\bar z \wedge dz= 2\pi i k! t^{k+1}\] and thus it follows from Proposition~\ref{P::Moyal} that we get indeed the star product $p^n \star q^m$. \end{proof} In order to explain the name {\em thimble-formula}, we will rewrite the above formula \eqref{eq-starintegr} in a slightly more geometrical way. The domain integration is the two dimensional chain $$D:=\{ (x,y) \in \mathbb{C}^2 :y=\bar x \},$$ so that the formula becomes \[f \star g(t,q,p)= \frac{1}{2\pi i t} \int_ Df(t,q,p+y) g(t,x+q,p) e^{-xy/t} dx \wedge dy.\] This can be re-written as \begin{equation} f \star g(t,q,p)= \frac{1}{2\pi i t} \int_ {D_{q,p}}f(t,q,y) g(t,x,p) e^{-F_{q,p}(x,y)/t} dx\wedge dy,\label{eq-thimbleform} \end{equation} with $F_{q,p}(x,y) := (x-q)(y-p)$ and \begin{equation} D_{q,p}:=\{ (x,y) \in \mathbb{C}^2 :y-p=\overline{ (x-q)} \}.\label{eq-cycle} \end{equation} The polynomial $F_{q,p}$ defines a map $$\mathbb{C}^2 \longrightarrow \mathbb{C},(x,y) \mapsto F_{q,p}(x,y),$$ that has the point $(q,p)$ as unique non-degenerate critical point with critical value $0$. For $\xi \neq 0$, the Riemann surface $X_{\xi,q,p}:=F_{q,p}^{-1}(\xi)$ has the topology of a cylinder and contains a $1$-cycle $$\gamma_{\xi,q,p}:=D \cap \{F_{q,p}=\xi\}$$ parametrised by $\theta \in [0,2\pi]$ via \[ x(\theta):=q+\sqrt{\xi}e^{i\theta},\;\;y(\theta):=p+\sqrt{\xi}e^{-i\theta}.\] For $\xi=0$, the cylinder degenerates into a cone~(see Figure \ref{fig-cylinder}). \begin{figure}[!htb] \centering \includegraphics[height=6cm]{cylinder.pdf} \caption[Cylinder]{\footnotesize{Riemann surfaces $X_{0,q,p}$ and $X_{\xi,q,p}$.}} \label{fig-cylinder} \end{figure} When $\xi$ goes to $0$, the circles $\gamma_{\xi,q,p}$ centred at $(q,p)$ with radius $\sqrt{\xi}$ retract to the critical point $(q,p)$. For this reason, these $\gamma_{\xi,q,p}$ are called {\em vanishing cycles} for the $A_1$-singularity defined by $F_{q,p}$. Note that the cycle $\gamma_{\xi,q,p}$ is a generator of the corresponding homology group \[ H_1(X_{\xi,q,p})=\mathbb{Z} [\gamma_{\xi,q,p}].\] In the following, we can restrict to the case $\xi \in \mathbb{R}_{\ge 0}$ useful for the Laplace representation. The cycle $D_{q,p}$ can be seen as a {\em Lefschetz thimble} (see Figure \ref{fig-thimble}), that is, the union of the circles $\gamma_{\xi,q,p}$ centred at $(q,p)$ with radius $\sqrt{\xi}$: $$D_{q,p}=\bigcup_{\xi \geq 0}\gamma_{\xi,q,p}. 
$$ \begin{figure}[!htb] \centering \includegraphics[width=10cm]{thimble.pdf} \caption[Thimble]{\footnotesize{Lefschetz thimble.}} \label{fig-thimble} \end{figure} \subsection*{Representation as Laplace integral} The representation of $D_{q,p}$ \eqref{eq-cycle} as thimble, sliced into vanishing cycles, leads to the representation as a Laplace integral. Consider a polynomial differential two-form $\omega$, so that the integral $$\int_{D_{q,p}} e^{-F_{q,p}/t}\omega $$ is well-defined for $t\in \mathbb{R}_+$. Using the general residue-formula $$\int_{D_{q,p}} e^{-F_{q,p}/t}\omega =\int_0^{\infty} e^{-\xi/t} d\xi \int_{D_{q,p} \cap \{F_{q,p}=\xi\}} Res \left( \frac{\omega}{F_{q,p}(x,y)-\xi} \right),$$ we can write the integral formula \eqref{eq-thimbleform} for the $\star$-product as: \[f\star g(t,q,p)=\frac{1}{t} \int_{0}^{\infty} e^{-\xi/t} d\xi \int_{\gamma_{\xi,q,p}} f(\xi,q,y) \bullet g(\xi,x,p) \omega_{\xi,q,p}.\] Let us explain the notations introduced for the integrand. Because we change from $t$ to the Borel dual variable $\xi$, the ordinary product of functions in $t,q,p$ has to be replaced by the convolution product in the variable $\xi$. It is denoted by $\bullet$ and defined on $f,g\in\mathbb{C}[\xi,q,p]$ by \[ f(\xi,q,y)\bullet g(\xi,x,p):=\beta(\beta^{-1}(f)(t,q,y).\beta^{-1}(g)(t,x,p)),\] with explicit expression: \begin{equation} p \bullet q=q \bullet p,\quad \xi^n \bullet \xi^m=\xi^n \ast \xi^m=\frac{n!m!}{(n+m)!}\xi^{n+m} .\label{eq-convolpol} \end{equation} The holomorphic $1$-form $\omega_{\xi,q,p}$ on the Riemann surface $X_{\xi,q,p}:=F_{q,p}^{-1}(\xi)$ is defined as the Poincar\'e residue of the $2$-form with first order pole along the hypersurface $X_{\xi,q,p}$: \[ \omega_{\xi,q,p}:=\frac{1}{2\pi i} Res\left(\frac{dx\wedge dy}{F_{q,p}(x,y)-\xi}\right).\] A representative for $\omega_{\xi,q,p}$ can be computed explicitly as $$Res\left(\frac{dx\wedge dy}{F_{q,p}(x,y)-\xi}\right)=\frac{dx}{x-q} $$ with $\int_{\gamma_{\xi,q,p}}\omega_{\xi,q,p}=1$. This explains the expression of the star product as a Laplace integral. Our aim is to extend this integral formula to a larger class of function than polynomials but, before that, we indicate how to generalise the formula to higher dimensions. \subsection*{Extension to $n$-degrees of freedom} The above discussion can be generalised to $n$ degrees of freedom. For $$(q,p)=(q_1,q_2,\ldots,q_n,p_1,p_2,\ldots,p_n),$$ we consider the polynomial $$F_{q,p}(x,y)=\sum_{j=1}^n(x_j-q_j)(y_j-p_j). $$ It defines a map $\mathbb{C}^{2n} \longrightarrow \mathbb{C}$ which has $(q,p)$ as unique non-degenerate critical point, i.e. an $A_1$-singularity in $2n$-variables. The complex $(2n-1)$-dimensional hypersurface $X_{\xi,q,p}=F^{-1}_{q,p}(\xi)$ contains a real $(2n-1)$-dimensional vanishing sphere \[\gamma_{\xi,q,p}=(q,p)+\{(z,\bar z):| z_1|^2+\dots+|z_n|^2=\xi \} .\] By orienting this sphere, we get a generator of the middle dimensional homology group: \[ H_{2n-1}(X_{\xi,q,p})=\mathbb{Z} [\gamma_{\xi,q,p}],\] for $\xi \neq 0$. 
The hypersurface $ X_{\xi,q,p}$ carries a holomorphic $(2n-1)$-form $$\omega_{\xi,q,p}:=\frac{1}{(2\pi i)^n}\, Res \left( \frac{dx_1\wedge dy_1\wedge \dots \wedge dx_n \wedge dy_n}{F_{q,p}-\xi} \right).$$ One can also easily compute a representative for the residue form $$ Res \left( \frac{dx_1\wedge dy_1\wedge \dots \wedge dx_n \wedge dy_n}{F_{q,p}-\xi} \right)= \frac{dx_1\wedge dy_1\wedge \dots \wedge dx_{n-1} \wedge dy_{n-1} \wedge dx_n}{x_n-q_n}.$$ The sphere $\gamma_{\xi,q,p}$ is oriented in such a way that $\int_{\gamma_{\xi,q,p}} \omega_{\xi,q,p} = 1$. As in the case of $n=1$, by denoting also $\mathcal{Q}=\mathbb{C}[t,q,p]$ (with $q,p\in\mathbb{C}^n$) endowed with the $\star$-product, we find: \begin{proposition} For $f,g \in \mathcal{Q}$ one has \[f \star g (t,q,p)=\frac{1}{t} \int_0^{\infty} e^{-\xi/t} \left( \int_{\gamma_{\xi,q,p}}f(\xi,q,p) \bullet g(\xi,q,p) \omega_{\xi,q,p} \right) d\xi.\] \end{proposition} \subsection*{Vanishing cycle formula} The above representation of the $\star$-product as Laplace integral suggests that it is possible to express the $*$-product in the Borel dual $\xi$ variable directly as an integral over the vanishing cycle. This is one of the key results of this paper and it turns out that the formula makes sense for arbitrary elements of $\mathcal{Q}^B$. \begin{proposition} \label{P::integral} For $f,g \in \mathcal{Q}^B=\mathbb{C}\{ \xi,q_1,\dots,q_n,p_1,\dots,p_n\}$ the $*$-product is expressed into an integral of the $\bullet$-product over a vanishing cycle \begin{equation} f * g (\xi,q,p)=\int_{\gamma_{\xi,q,p}} f(\xi,q,y) \bullet g(\xi,x,p) \omega_{\xi,q,p},\label{eq-integr*} \end{equation} where $ \gamma_{\xi,q,p}$ is the $(2n-1)$-dimensional sphere $$ \gamma_{\xi,q,p}=(q,p)+\{(z,\bar z):| z_1|^2+\dots+|z_n|^2=\xi \} ,$$ $(\xi,q,p)$ belongs to a sufficiently small neighbourhood of the origin and $\xi \in \mathbb{R}_{>0}$. \end{proposition} Before giving the proof, we remark that the $\bullet$-product \eqref{eq-convolpol} can be extended on two elements from $\mathbb{C}\{\xi,q,p\}$ and it obviously again belongs to $\mathbb{C}\{\xi,q,p\}$. Thus it follows from the formula \eqref{eq-integr*} that the non-commutative algebra $\mathcal{Q}^B$ is closed under $*$~(Proposition~\ref{P::Borel}). \begin{proof} When we expand both sides of the to-be-proven equality \[ f \ast g (\xi,q,p)=\int_{\gamma_{\xi,q,p}} f(\xi,q,y) \bullet g(\xi,x,p) \omega_{\xi,q,p}\] in powers of $\xi$, since $\xi^n\ast\xi^m=\xi^n\bullet\xi^m$ (see \eqref{eq-convolpol}), we readily reduce to the case when $f$ and $g$ do not depend on $\xi$. Next, we fix $(q,p)=(q_1,q_2,\ldots,q_n,p_1,p_2,\ldots,p_n) \in \mathbb{C}^{2n}$ and consider Taylor expansions at the origin of the functions $y \mapsto f(q,y)$ and $x \mapsto g(x,p)$. 
We get: \[g(x,p)=\sum_\a a_\a(q,p)(x-q)^\a,\;\;\ f(q,y)=\sum_\b b_\b(q,p) (y-p)^\b\] where $\a=(\a_1,\dots,\a_n)$ and $\b=(\b_1,\dots,\b_n)$ are multi-indices and \[ a_\a(q,p):=\frac{1}{(\sum_{j=1}^n\a_j)!}\partial^\a_pg(q,p),\;\;\;b_\b(q,p):=\frac{1}{(\sum_{j=1}^n\b_j)!}\partial_q^\b f(q,p).\] As the cycle $\gamma_{\xi,q,p}$ is compact, we can interchange the integral and summation: $$\int_{\gamma_{\xi,q,p}} f(q,y) \bullet g(x,p) \omega_{\xi,q,p} =\sum_{\a,\b}a_\a b_\b \int_{\gamma_{\xi,q,p}} (x-q)^\a (y-p)^\b \omega_{\xi,q,p}.$$ Therefore, according to the formula \eqref{eq-formel*} of the $*$-product in the Borel dual algebra, the above proposition reduces to the following lemma: \begin{lemma}\label{L::beta} For any $\a,\b \in \mathbb{Z}^n_{\geq 0}$, we have \[ \int_{\gamma_{\xi,q,p}} (x-q)^\a (y-p)^\b \omega_{\xi,q,p}=\frac{\prod \a_j!}{(\sum \a_j)!}\delta_{\a,\b}\xi^{|\alpha|}\] with $|\a|:=\sum_{j=1}^n\alpha_j$. \end{lemma} \begin{proof} As the left and the right-hand side are invariant under translation, it is sufficient to prove the lemma for $q=p=0$. By homogeneity, we may also assume that $\xi=1$. We now compute explicitly the integral for $q=p=0$, $\xi=1$. To do this, we parametrise the sphere $\gamma_{\xi,q,p}$ by $$x_j=\sqrt{s_j}e^{i\varphi_j},\ y_j=\sqrt{s_j}e^{-i\varphi_j}, $$ where $(s_1,s_2,\ldots,s_n)$ belongs to the simplex $\Delta \subset \mathbb{R}^n$ defined by the conditions $s_j \ge 0$, $\sum_j s_j = 1$. We get $$\frac{dx_1\wedge dy_1\wedge \dots \wedge dx_n \wedge dy_n}{x_1y_1+\dots+x_ny_n-\xi}=\frac{ds_1\wedge d\varphi_1\wedge \dots\ ds_n \wedge d\varphi_n}{s_1+\dots+s_n-\xi},$$ so $$Res\left(\frac{dx_1\wedge dy_1\wedge \dots \wedge dx_n \wedge dy_n}{x_1y_1+\dots+x_ny_n-\xi}\right)= d\varphi_1\wedge ds_2 \wedge d\varphi_2 \wedge \dots\wedge ds_n \wedge d\varphi_n, $$ and $$ \int
_{\gamma_{1,0} }x^\a y^\b\omega_{\xi,0}=\delta_{\a,\b}\int_{\Delta} s^\a ds_2 \wedge \dots \wedge ds_n .$$ This integral over the simplex is well-known; it is a case of the Dirichlet multi-dimensional generalisation of the beta-integral of Euler: $$ \int_{\Delta} s^\a ds_2 \wedge \dots \wedge ds_n=\frac{\prod \a_j!}{(\sum \a_j)!} .$$ \end{proof} \end{proof} \section{Analytic continuation} \label{sec-anal} From the integral formula of Proposition \ref{P::integral}, we see that the analytic continuation of the $\ast$-product naturally falls into two sub-problems: \begin{enumerate}[{\rm A)}] \item study the continuation properties of integrals of the form \[\int_{\gamma_{\xi,q,p}} f \omega_{\xi,q,p},\] \item study the continuation properties of the $\bullet$-product. \end{enumerate} The computation of the singularities for problems A) and B) determines the singularities of $f*g$ as a particular case. \subsection*{Riemann domain and analytic continuation.} The analytic continuation of a holomorphic function germ $f \in \mathbb{C}\{x_1,x_2,\ldots.x_n\}$ along a path $\gamma$ starting at $0$ may be blocked by a singularity. Sometimes one may deform $\gamma$ slightly to circumvent it and resume the continuation. In other cases an essential boundary appears and such a continuation becomes impossible. \begin{figure}[!htb] \centering \includegraphics[scale=0.4]{analytic.pdf} \caption[Analytic]{\footnotesize{Analytic continuation of a holomorphic germ.}} \label{fig-analytic} \end{figure} For example, in one variable, the first alternative occurs for the power series expansions of $(1-x)^{\alpha},\ \a \in \mathbb{C}$ or $\log(1-x)$, whereas the $\theta$-series $\sum_{n=0}^{+\infty} x^{n^2}$ provides an example of the second type of behaviour: it cannot be extended analytically outside the unit disk. The notion of Riemann surface attached to a germ extends naturally to arbitrary dimensions~: continuations along different paths with the same endpoint may lead to different values, but the set of all continuations of a germ $f \in \mathbb{C}\{x_1,x_2,\ldots,x_n\}$ can be made into a (connected) $n$-dimensional complex manifold $R_f$, called the {\em Riemann domain}, on which it has a single-valued extension~(see for instance~\cite[Chapter III]{Chabat_Introduction_II}). This Riemann domain comes with a natural projection map \[ \pi : R_f \longrightarrow \mathbb{C}^n,\] that is locally biholomorphic and has discrete fibres. The germ $f$ itself represents a canonical origin $O \in R_f$ lying over the origin in $\mathbb{C}^n$. As a general rule, the map $\pi$ will, however, not be a (regular) covering in the topological sense. A path $\gamma:[0,1] \longrightarrow \mathbb{C}$ has at most one lift to a path in $R_f$ starting at $O$. A path that is {\em not} liftable to a path starting at $O$, but whose restriction to $[0,1)$ {\em is} liftable, is called a {\em blocked path} and its endpoint $\gamma(1) \in \mathbb{C}^n$ is called a {\em singular point of $f$}. We denote by $\Sigma_f \subset \mathbb{C}^n$ the set of all singular points of $f$. Clearly, $f$ can be continued along any path that avoids the set $\Sigma_f$. In general, even for $n=1$, the structure of the map $\pi:R_f \longrightarrow \mathbb{C}$ and the set $\Sigma_f$ can be extremely complicated. In the simplest cases the singular set $\S_f$ is finite. This happens for instance if $f$ is algebraic, or more generally, if $f$ is {\em holonomic}, i.e. satisfies a homogeneous linear differential equation with polynomial coefficients. 
Slightly more complicated are the cases in which $\S_f$ is countable and discrete. There are however many important germs not belonging to this class. For example, the inverse function of the indefinite Abelian integral \[ S(x)=\int_0^x p dq,\;\;p^2-F(q)=0,\] where $F$ is a general polynomial of degree $\ge 5$, provides an example of a germ for which $\Sigma_f$ is a countable {\em dense} subset~\cite{Pham_livre_resurgence}. Far worse behaviour can occur: in $1918$ {\sc Gross} gave an example of an entire function $g:\mathbb{C} \longrightarrow \mathbb{C}$ which has every value as asymptotic value~\cite{gross1918ganze} . If $f$ denotes the germ at $0$ of the inverse of $g$, one can identify the map $\pi: R_f \longrightarrow \mathbb{C}$ with $g:\mathbb{C} \longrightarrow \mathbb{C}$, and $\Sigma_f=\mathbb{C}$. \subsection*{Algebro-resurgence.} One key idea of resurgence theory, developed first by {\sc \'Ecalle} \cite{Ecalle_fonctions} and then by {\sc Pham} \cite{Pham_livre_resurgence}, is to single out classes of germs $f \in \mathbb{C}\{x_1,x_2,\ldots,x_n\}$ closed under interesting operations like convolution product and for which the singular set $\S_f$ is not too big. The weakest condition is maybe to ask that $\mathbb{C}^n\setminus \Sigma_f$ is path-connected and dense. In such a situation $f$ has the {\em Iversen property}: for each path $\gamma$ starting at $0$ and each $\varepsilon >0$, there is an $\varepsilon$-near path $\tilde{\gamma}$ along which one can continue $f$~\cite{eremenko2004geometric}. A stronger natural condition is to ask that $\Sigma_f$ is a countable union of (algebraic or analytic) hypersurfaces. This gives a variant of resurgence, that we call {\em algebro-resurgence}~: \begin{definition}\label{D::ar} We say that $f \in \mathbb{C} \{ x_1,\dots,x_n \}$ is algebro-resurgent if $\Sigma_f$ is an algebraic subvariety of $\mathbb{C}^n$. \end{definition} Algebro-resurgent power series of one variable have a finite singular set $\S_f$~: meromorphic functions, fractional powers, logarithms, algebraic functions, solutions of linear differential equations with regular poles are algebro-resurgent. But the gamma function and most indefinite abelian integral are not algebro-resurgent. We may now state our main result~: \begin{theorem}~\label{T::product} The non-commutative associative product $f * g$ of two algebro-resurgent power series $f,g \in \mathbb{C}\{ \xi,q_1,\dots,q_n,p_1,\dots,p_n\}$ is also alge\-bro-re\-surgent. \end{theorem} As we shall now see, the theorem is a consequence of the integral formula for the $*$-product and standard stratification theory. The proof is constructive~: knowing the singularity sets of $f$ and $g$, it gives an explicit description of the singularity set for $f*g$. \subsection*{Stability under integration.} To fix the ideas, let us first come back to the integral formula \eqref{eq-integr*} with one degree of freedom. So let us assume for the moment that problem B) is solved and that $$h(\xi,q,p,x,y)=f\bullet g (\xi,q,p,x,y)$$ is an algebro-resurgent power series. We denote by $\S_h$ its singular locus, which is thus supposed to be an algebraic $4$-fold in $\mathbb{C}^5$. The Riemann surface $X_{\xi,q,p}$ intersects $\S_h$ in finitely many points. If $(\xi,q,p)$ is sufficiently close to the origin then these points are far away from the vanishing cycle $\gamma_{\xi,q,p}\,$, so that the $*$-product is well-defined by the integral formula \eqref{eq-integr*}. 
As one moves $(\xi,q,p)$ further from the origin, these intersection points start ``moving around'' on the Riemann surface, and one has to continue the vanishing cycle avoiding the moving points. In such a case, the vanishing cycle $\gamma_{\xi,q,p}$ separates the Riemann surface into two components and hence the singular points into two groups. Now, when two points on different sides of $\gamma_{\xi,q,p}$ come together, the cycle gets pinched, the integral develops a singularity and the cycle cannot avoid the singular points any longer (see LHS of Figure \ref{fig-pinch}). This corresponds to a singularity of the integral and hence of the $*$-product. \begin{figure}[!htb] \centering \includegraphics[height=6cm]{pinch.pdf} \caption[Pinch]{\footnotesize{Singular points on $X_{\xi,q,p}$.}} \label{fig-pinch} \end{figure} Another thing that may happen is that, for some $(\xi,q,p)$, one of the singular points `runs to infinity' and pushes the cycle with it (see RHS of Figure \ref{fig-pinch}). However, as long as one avoids such a collision and run-away catastrophe, the cycle can be deformed so as to stay away from the singularities and the function can in this way be analytically continued. This situation is of course general and holds for the {\em integral of any closed algebro-resurgent differential form over a cycle.} By an algebro-resurgent differential $p$-form on $X=\mathbb{C}^n$, we mean a germ of a $p$-form $\omega$ for which the coefficients $A_I=A_I(x_1,\ldots,x_n)$ in a local coordinate representation \[\omega=\sum A_I dx_I,\;\;dx_I=dx_{i_1}\wedge dx_{i_2} \wedge \ldots \wedge dx_{i_p},\] are all algebro-resurgent germs. The singular locus $\Sigma_{\omega}$ is defined to be the union of the singular loci of the coefficients. For a polynomial map $F:X \longrightarrow \mathbb{C}^l$, one means by a {\em horizontal family of $p$-cycles} over $V \subset \mathbb{C}^l$ a section over $V$ of the direct image sheaf $R^pF_*\mathbb{Z}$. This is the sheaf associated to the presheaf $$V \mapsto H^p(F^{-1}(V),\mathbb{Z}),$$ and a section over $V$ can be thought of as a family $\Gamma_{\l}, \lambda \in V$, of cycles in the fibres $X_{\l}:=F^{-1}(\l)$ of the map $F$. \begin{proposition} Let $\omega$ be a closed algebro-resurgent $p$-differential form on $X=\mathbb{C}^N$ with singularity set $\S_{\omega}$ and let \[F:X \to \mathbb{C}^l \] be a polynomial map. Let $\Gamma_{\l}, \l \in V$, be a horizontal family of $p$-cycles in $X \setminus \S_{\omega}$ over an open neighbourhood $V \subset \mathbb{C}^l$ of the origin. Then the germ at $0$ of the function on $V$ defined by the integral $$g(\l)=\int_{\Gamma_{\l}} \omega$$ is algebro-resurgent. \end{proposition} \begin{proof} It is a fundamental fact from affine algebraic geometry that there exists a Zariski-open subset $U \subset \mathbb{C}^l$ such that the restriction \[F':(X \setminus \Sigma_\omega) \cap F^{-1}(U) \longrightarrow U \] of $F$ over the set $U$ is a topologically trivial fibration~(see for instance~\cite{Verdier_w}). Consider a path $\gamma:[0,1] \longrightarrow \mathbb{C}^l$ with $\gamma(0)=0$ and whose restriction to $(0,1]$ is mapped into $U$. By the local topological triviality over $U$, we can continue the horizontal family of cycles $\Gamma_{\l}, \l \in U \cap V$, along the path $\gamma$. By construction, the continuation of the cycle $\Gamma_{\l}$ stays inside $X \setminus \Sigma_\omega$ and thus the differential form $\omega$ can be continued along the trace of the cycle.
This shows that the germ $g(\l)$ can be analytically continued along all paths starting at $0$ and (whose restriction to $(0,1]$) avoid the algebraic set $\mathbb{C}^l\setminus U$. \end{proof} Note that the above proof is constructive: the singularities of the integral $g(\l)=\int_{\Gamma_{\l}} \omega$ are explicitly described once we chose the corresponding fibration. We apply the proposition to the polynomial map \[F:\mathbb{C}^{4n} \longrightarrow \mathbb{C}^{2n+1}, \;(q,p,x,y) \mapsto (\sum_i^n(x_i-q_i)(y_i-p_i),q,p),\] which is the composition of \[ \mathbb{C}^{4n} \longrightarrow \mathbb{C}^{4n+1}, (q,p,x,y) \mapsto (\sum_{i=1}^n(x_i-q_i)(y_i-p_i), q,p,x,y)\] with the canonical linear projection \[ \mathbb{C}^{4n+1} \longrightarrow \mathbb{C}^{2n+1},\;(\xi,q,p,x,y) \mapsto (\xi,q,p)\;.\] We take also the family of vanishing cycles $\gamma_{\xi,q,p} \in H_n(X_{\xi,q,p},\mathbb{Z})$ as the horizontal family. So, to conclude the proof of Theorem~\ref{T::product}, it remains to prove that the product $f\bullet g$ of algebro-resurgent functions is also algebro-resurgent. Let us first analyse the additive convolution. \subsection*{Stability under additive convolution.} The behaviour of the singular set under convolution is a classical subject of analysis, which goes back to the papers of {\sc Hadamard} and {\sc Hurwitz}~\cite{Hadamard_produit,Hurwitz_produits}. The {\em Hadamard product} of two formal power series $$f =\sum_n a_n\xi^n,\quad g=\sum_n b_n \xi^n \in \mathbb{C}[[\xi]]$$ is defined as $\sum_n a_nb_n \xi^n$. If $f$ and $g$ are convergent power series, it can be represented by the integral formula \[\frac{1}{2\pi i} \oint f\left(t \right) g\left(\frac{\xi}{t}\right)\frac{dt}{t},\] called multiplicative convolution. Similarly, the {\em Hurwitz product } \[\sum_k c_k \xi^k,\ c_k:=\sum_{n+m+1=k} \frac{n! m!}{(n+m+1)!}a_nb_m,\] can be expressed as {\em the additive convolution} of $f$ and $g$: \begin{equation} f \oplus g:= \int_0^{\xi} f(t)g(\xi-t)dt.\label{eq-addconvol} \end{equation} (We use neither the notation $*$ nor $\star$ for the convolution product to avoid confusions with previous products.) These integral formulas can be used to show that the singularities of the convolution are obtained by multiplication resp. addition of the singularities of $f$ and $g$. This result will be useful for the case of the $\bullet$-product. \begin{proposition} \label{P::closed} Let $f,g \in \mathbb{C}\{ x \}$ be two algebro-resurgent functions. The additive convolution $f \oplus g$ is also algebro-resurgent and its singularity set is a subset of $$\left( \S_f + \S_g \right) \cup \S_f \cup \S_g.$$ \end{proposition} \begin{proof} Each of the functions $f,g$ possesses a Riemann surface $R_f$, $R_g$ together with a projection $$R_f \stackrel{\pi_f}{\longrightarrow} \mathbb{C},\ R_g \stackrel{\pi_g}{\longrightarrow} \mathbb{C}, $$ which combine to a map $\pi: R_f \times R_g \longrightarrow \mathbb{C} \times \mathbb{C}$. The {\em sum map} $\mathbb{C} \times \mathbb{C} \to \mathbb{C},\ (x,y) \mapsto x+y $, pulls-back to a map on the product $R_f\times R_g$: $$R_f \times R_g \to \mathbb{C},\ (x,y) \mapsto \pi_f(x)+\pi_g(y). $$ Now consider a path $\gamma :[0,1] \to \mathbb{C}$ whose image avoids both $\S_f$ and $\S_g$. It lifts to both Riemann surfaces, so we get paths $\gamma_f$ in $R_f$ and $\gamma_g$ in $R_g$. 
By the Poincar\'e-Leray residue formula, for $\xi=\gamma(t)$, the convolution product is given by the formula $$f \oplus g(\xi)=\int_{ \delta_t} f(x)g(y) Res\left(\frac{dx \wedge dy}{x+y-\xi}\right), $$ where $\delta_t$ is a path joining $(\gamma_f(t),0)$ to $(0,\gamma_g(t))$ in the fibre $$\{ \pi_f(x)+\pi_g(y) =\xi \} \subset R_f \times R_g,$$ depending continuously on $\xi$. As the integral of a holomorphic differential form along a continuous family of chains is holomorphic, analytic continuation reduces to a topological issue: to find paths $\delta_s$ on $R_f \times R_g$, depending continuously on $s$, which connect $(\gamma_f(s),0)$ to $(0,\gamma_g(s))$ and such that the path $ \delta_s $ projects to the point $\gamma(s) \in \mathbb{C}$. See Figure \ref{fig-produit} for a real picture in case the Riemann surfaces of $f$ and $g$ are respectively $\mathbb{C} \setminus \{ \a \}$ and $\mathbb{C} \setminus \{ \b \}$ with $\a,\b \in \mathbb{R}$. \begin{figure}[!htb] \centering \includegraphics[scale=0.4]{produit.pdf} \caption[Produit]{\footnotesize{Real picture of the path $\delta_s$: $x+y=\xi$.}} \label{fig-produit} \end{figure} There is an obvious obstruction extending a lift $\delta_s$ : if $\xi=\gamma(s)$ is of the form $\a+\b$ with $\a \in \S_f$ and $\b \in \S_g$, the path $\delta_s$ might get pinched as in Figure \ref{fig-pintch}. \begin{figure}[!htb] \centering \includegraphics[scale=0.3]{pintch.pdf} \caption[Pintch]{\footnotesize{Map $\delta_s$ getting pinched by singular points $A_s$ and $B_s$.}} \label{fig-pintch} \end{figure} If we turn around the point $\a+\b$, then we may continue the path as in Figure \ref{fig-avoid}. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{avoid.pdf} \caption[Avoid]{\footnotesize{Analytic continuation when $\xi$ avoids the set $\Sigma_f+\Sigma_g$.}} \label{fig-avoid} \end{figure} This explains that analytic continuation can only be ensured if we also avoid the set $$ \S_f + \S_g:=\{ \a+\b,\ \a \in \S_f,\ \b \in \S_g\}.$$ The maps $\pi_f$ and $\pi_g$ are local diffeomorphisms thus, according to the above discussion, the following lemma finishes the proof. \begin{lemma} \label{P::lift} If $A$ and $B$ are finite subsets of $\mathbb{C}$, the sum map induces a locally trivial fibration $$ \mathbb{C} \setminus A \times \mathbb{C} \setminus B \to \mathbb{C} \setminus \left(A \cup B \cup (A+B)\right),\ (x,y) \mapsto x+y.$$ \end{lemma} \begin{proof} For the reader's convenience, we include a proof of this elementary lemma. Denote by $2\varepsilon$ the minimal distance between the points and let $\psi:\mathbb{R} \to [0,1] $ be the bump function such that $$\psi(x)= \left\{ \begin{matrix}1 &{\rm \ for\ }& x \in [-\frac{\varepsilon}{2},\frac{\varepsilon}{2}], \\ 0 &{\rm \ for\ }& x \notin [-\varepsilon,\varepsilon] .\end{matrix} \right. $$ Denote by $A_\varepsilon, B_\varepsilon$ tubular neighbourhoods of size $\varepsilon$ and put $(A+B)_\varepsilon:=A_\varepsilon + B_\varepsilon$. The restriction of the sum map above the complement of $(A+B)_\varepsilon$ retracts by deformation on the complement over $A+B$. For a complex number $z \in \mathbb{C}$, we use the subscript $z_1$ for its real part and $z_2$ for its imaginary part. Denote the horizontal and vertical distance $d_j$ by $d_j(x,y)=x_j-y_j$, for $x,y\in \mathbb{C}$. Then, we put $$a_j(x):=d_j(x,A),\ b_j:=d_j(x,B). 
$$ Consider the vector fields $$X_j=\frac{1-\psi(a_j(x))+\psi(b_j(x))}{2}\d_{x_j} + \frac{1+\psi(a_j(x))-\psi(b_j(x))}{2}\d_{y_j} .$$ Near $A$, we have $ X_j= \d_{y_j} $, while near $B$, we get that $ X_j= \d_{x_j}$. Away from these sets we have $ X_j= \d_{x_j}+ \d_{y_j}$ and the vector field $X_j$ lifts $\d_{\xi_j} $. For $j,k\in \{ 1,2 \}$ and $j \neq k$, we have $$\d_{x_j} a_k(x)=\d_{y_j} a_k(x)=\d_{x_j} b_k(x)=\d_{y_j} b_k(x)=0,$$ thus the vector fields $X_1,X_2$ commute and hence define a local trivialisation of the bundle. This proves the lemma and concludes the proof of the proposition. \end{proof} \end{proof} \subsection*{Generalisation to higher dimensions} The $\bullet$-product which appears in the $*$-product of $f,g \in \mathbb{C}\{\xi,\l\}$, with $\l=(q,p)$, and determined by \eqref{eq-convolpol}, is related to additive convolution by the formula: \begin{equation} f \bullet g (\xi,\l) = \int_0^{\xi} (\partial_\xi f)(\xi',\l) g(\xi-\xi',\l) d\xi'+f(0,\l)g(\xi,\l).\label{eq-convolbullet} \end{equation} Notice that the variables $q,p$ can be considered as a parameter $\l=(q,p)$. We may adapt Proposition \ref{P::closed} to this situation: \begin{proposition} If $f,g \in \mathbb{C}\{\xi,\l \}$ are algebro-resurgent functions then the product $f \bullet g$ is also algebro-resurgent. \end{proposition} \begin{proof} The set $$ \S_f \bullet \S_g:=\S_f \cup \S_g \cup \{ (\l,x)+(\l,y): (\l,x) \in \S_f,\ (\l,y) \in \S_g \} $$ is an algebraic variety of $\mathbb{C}^{d+1}$, $d =\dim \mathbb{C}\{ \xi,\l \}$. Thus, one can find a Zariski open subset $U \subset \mathbb{C}^d$ over which the map $$\mathbb{C}^{d+1} \setminus \left( \S_f \bullet \S_g \right) \to \mathbb{C}^d, (\l,\xi) \mapsto \l $$ defines a locally trivial fibration. On each fibre of this fibration, one can repeat the proof of the one variable case. This proves the proposition. \end{proof} \subsection*{A closed formula for hypergeometric functions.} Contrary to the $\star$-product, the dual $*$-product defines a non-commutative algebra for {\em analytic} power series (Proposition~\ref{P::Borel}). According to Theorem \ref{T::product}, algebro-resurgent power series form a $*$-subalgebra of functions having endless analytic continuation. The following proposition shows that algebraic functions do not form a $*$-subalgebra~: \begin{proposition} \label{ex-hyperg} The hypergeometric function $$F\left(-\a,-\b, 1; \xi\right):=\sum_{k=0}^{\infty}\begin{pmatrix} \alpha\\ k \end{pmatrix}\begin{pmatrix}\beta\\ k \end{pmatrix} \xi^k$$ satisfies the identity \[(1+p)^{\a} * (1+q)^{\b}=(1+p)^{\a}(1+q)^{\b} F(-\a,-\b, 1;\frac{\xi}{(1+p)(1+q)}). \] \end{proposition} \begin{proof} Although this is expected by the description of the singularities of the $*$-product, we prove the result directly by a naive direct computation. First, note that if $f$ and $g$ do not depend on $\xi$, then the $\bullet$-product reduces to the ordinary product: $f(q,p)\bullet g(q,p)=f(q,p)g(q,p)$. In the case of one degree of freedom ($n=1)$, we get \begin{eqnarray*} f * g (\xi,q,p)&=&\frac{1}{2\pi i} \int_{\gamma} f(q,y)g(x,p) Res\left(\frac{dx\wedge dy}{(x-q)(y-p)-\xi}\right)\\ &=&\frac{1}{2\pi i} \oint f(q,p+\frac{\xi}{x})g(q+x,p) \frac{dx}{x}. \end{eqnarray*} Hence we see that the product $f * g $ of two elements that do not depend on $\xi$ is just equal to a certain {\em Hadamard product}~\cite{Hadamard_produit}. 
Then, \[(1+p)^{\a} * (1+q)^{\b}= \frac{1}{2\pi i}\oint (1+p+\frac{\xi}{x})^{\a}(1+q+x)^{\b} \frac{dx}{x}.\] After expanding the powers with the binomial theorem we see that the integral picks out corresponding $x$-powers and we get \[(1+p)^{\a} * (1+q)^{\b}=(1+p)^{\a}(1+q)^{\b}F(-\a,-\b, 1;\frac{\xi}{(1+p)(1+q)}).\] This proves the proposition. \end{proof} In particular, there might exist many closed formul\ae{} relating modular functions to $*$-products. For instance, if we take $\a=\b=-1/2$ then \begin{equation*} \frac{1}{\sqrt{1+p}}*\frac{1}{\sqrt{1+q}}(\xi,0,0)=1+\left(\frac{1}{2}\right)^2\xi+\left(\frac{1\cdot 3}{2\cdot 4}\right)^2\xi^2+\left(\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^2\xi^3+\ldots \end{equation*} which is exactly $\frac{2}{\pi}K(k)$, for $k=\sqrt{\xi}$, where $K$ denotes the complete elliptic integral of the first kind $$K(k)=\int_{0}^{1} \frac{dx}{\sqrt{(1-x^2)(1-k^2x^2)}}=\int_0^{\pi/2} \frac{d\varphi}{\sqrt{1-k^2\sin^2 \varphi}} .$$ \section{Outlook.} Thanks to the integral formula proved in Proposition \ref{P::integral} for the $*$-product, we have been able to analytically continue this product. Indeed, we introduced the notion of algebro-resurgence in Definition \ref{D::ar} and we proved in Theorem \ref{T::product} that the set $\mathcal{Q}^A \subset \mathbb{C}\{\xi,q,p\}$ of all algebro-resurgent germs forms an algebra under the $*$-product. Note that we obtain as a byproduct that the class of Gevrey series in the $t$-variable which is Borel dual to the algebro-resurgent germs is an algebra for the $\star$-product, and it contains in particular the Euler series. Exponentially small quantities, which encode interesting quantum effects, can be seen for example as the difference between the two Euler functions $E_\pm$ involved in \eqref{eq-smallexp}, both of which admit the Euler series as asymptotic series. From the Borel dual point of view, these exponentially small quantities should be interpreted in terms of the singularities of functions in $\mathcal{Q}^A$, which is one of the main ideas in resurgence theory. As a particular case of algebro-resurgence, any $*$-product of algebraic power series in $\xi, q,p$ (that is, elements from the Henselian local ring) is algebro-resurgent. In fact, algebraic power series are even {\em holonomic}, meaning that they satisfy a holonomic system of differential equations, and the subset $\mathcal{Q}^H \subset \mathcal{Q}^A$ of holonomic power series happens to be closed under the $*$-product. This is partly a consequence of the integral formula of Proposition \ref{P::integral}: the stability under integration follows from the fact that integrals over vanishing cycles always satisfy a Picard-Fuchs type equation. The stability under convolution of functions satisfying a linear differential equation is a classical theorem of {\sc Hurwitz}. Details will appear elsewhere. The algebra $\mathcal{Q}^A$ we have constructed here seems to be rich enough to capture interesting quantum effects. One of the main difficulties in {\sc Pham}'s approach to the {\sc Voros-Zinn-Justin} conjectures was indeed the absence of a convenient tool to describe singularities arising from algebraic operations. As we saw here, the singularities of a star-product can be explicitly described in the Borel plane. This led to the observation that starting from certain algebraic functions, one ends up with hypergeometric functions (see Proposition \ref{ex-hyperg}).
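As a quick numerical sanity check of Proposition \ref{ex-hyperg} and of the elliptic-integral identity above (independent of the argument, and assuming only that NumPy and SciPy are available), one may compare the partial sums of the series with coefficients $\binom{-1/2}{k}^2$ against $\frac{2}{\pi}K(\sqrt{\xi})$; the two should agree to within numerical precision for $|\xi|<1$. The script below is such a check, not code from this paper.
\begin{verbatim}
# Compare the series 1 + (1/2)^2 xi + (1*3/(2*4))^2 xi^2 + ...
# (the star-product of Proposition ex-hyperg at p = q = 0, alpha = beta = -1/2)
# with (2/pi) K(sqrt(xi)).  Uses the SciPy functions binom and ellipk.
import numpy as np
from scipy.special import binom, ellipk

def star_product_series(xi, terms=200):
    # Coefficients binom(-1/2, k)^2 of the Hadamard square of (1-x)^(-1/2).
    k = np.arange(terms)
    return np.sum(binom(-0.5, k) ** 2 * xi ** k)

for xi in [0.1, 0.3, 0.5, 0.7]:
    lhs = star_product_series(xi)
    rhs = (2.0 / np.pi) * ellipk(xi)   # ellipk takes the parameter m = k^2 = xi
    print(f"xi={xi}: series={lhs:.12f}  (2/pi)K={rhs:.12f}")
\end{verbatim}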
On the one hand, this observation shows that the star-product immediately produces highly transcendental functions; from the point of view of singularities, however, they stay relatively simple, since hypergeometric functions are just solutions of linear differential equations with finitely many singular points. In this paper, we gave an abstract description of singularities and applied it only to the simple example of hypergeometric functions. However, we expect that for the case of the anharmonic oscillator, we should be able to obtain an explicit description of the singularities as conjectured by {\sc Delabaere-Pham} \cite{delabaere1997unfolding}, although these might involve complicated special functions. There are also bigger algebras one could consider. Actually, the $*$-exponential maps the algebra $\mathcal{Q}^A$ to a bigger one, and there are several ways in which one could try to enlarge $\mathcal{Q}^A$ so that the exponential maps the algebra to itself. {\sc Pham} and coworkers define a subspace $\mathcal{R} \subset \mathbb{C}\{\xi\}$ of {\em resurgent germs} in one variable, by saying that $f \in \mathcal{R}$ if for all $L >0$ there is a finite set $\S_f(L)\subset \mathbb{C}$ such that $f$ can be analytically continued along all paths of length $\le L$ that avoid $\S_f(L)$~\cite{Pham}. In \cite{Pham_livre_resurgence}, the authors sketch an argument that $\mathcal{R}$ is closed under convolution, and it would be tempting to try to construct an analogous quantum $*$-algebra of this convolution algebra, using the integral formula of Proposition \ref{P::integral}. However, as observed by {\sc Delabaere}, {\sc Ou} and {\sc Sauzin}, there are some imprecisions in the original proofs of the convolution theorem for resurgent germs~\cite{Ou_resurgence,Sauzin_resurgence}. We can see that the statement is true, by Proposition~\ref{P::closed}, if the singularity set is finite. Note that in the case where the singularity set is a semi-group, detailed proofs have been given by {\sc Ou} for one-dimensional semi-groups and by {\sc Sauzin} in the two-dimensional case. \subsection*{Acknowledgments:} A. de Goursac acknowledges the F.R.S.-FNRS for financial support as well as the Belgian Interuniversity Attraction Pole (IAP) for support within the framework ``Dynamics, Geometry and Statistical Physics'' (DYGEST). M. Garay and A. de Goursac also acknowledge the Max Planck Institut f\"ur Mathematik in Bonn for its hospitality during this work. \bibliographystyle{amsplain}
\section{Introduction} \linespread{1.4} Let $\{p_n\}_{n\ge 1}$ be the sequence of the prime numbers and $\vartheta(x) = \sum_{p \leq x} \log p$, where $p$ runs over the primes not exceeding $x$, be the Chebyshev $\vartheta$-function. The type of bounds that we shall discuss here was introduced by Bonse \cite{bonse}, who showed that $\vartheta(p_n)>2\log p_{n+1}$ holds for every $n\ge 4$ and $\vartheta(p_n)>3\log p_{n+1}$ holds for every $n\ge 5$. Thereafter, P\'osa \cite{posa} showed that, given any $k>1,$ there exists $n_k$ such that $\vartheta(p_n)>k\log p_{n+1}$ holds for all $n\ge n_k.$ Panaitopol \cite{panaitopol} showed that in P\'osa's result we can have $n_k=2k$ and also gave the bound \begin{equation*}\frac{\vartheta(p_n)}{\log p_{n+1}}>n-\pi(n) \quad (n\ge 2),\end{equation*} where $\pi(n)$ is equal to the number of primes less or equal to $n$. Hassani \cite{hassani} refined Panaitopol's inequality to the following \begin{equation}\label{hassani}\frac{\vartheta(p_n)}{\log p_{n+1}}>n-\pi(n)\Big(1-\frac{1}{\log n} \Big)\quad (n\geq 101).\end{equation} Recently, Axler [1, Propositions 4.1 and 4.5] showed that \begin{equation*}\label{axler}1 + \frac{1}{\log p_n} + \frac{2.7}{\log^2p_n} < \log p_n - \frac{\vartheta(p_n)}{n} < 1 + \frac{1}{\log p_n} + \frac{3.84}{\log^2p_n},\end{equation*} where the left-hand side inequality is valid for every integer $n \geq 218$ and the right-hand side inequality holds for every $n \geq 74004585$. This provides the following asymptotic formula $$\displaystyle\frac{\vartheta(p_n)}{n} = \log p_n - 1 - \frac{1}{\log p_n} + \Theta \Big(\frac{1}{\log^2p_n}\Big).$$ For further terms, see Axler [1, Proposition 2.1]. In the present note, we show the following result, which is a refinement of \eqref{hassani}. \begin{theorem} For all $n\ge 6,$ we have \begin{equation}\label{th1} n\Big(1-\frac{1}{\log n}+\frac{\log\log n}{4 \log^2 n}\Big)\le\frac{\vartheta(p_n)}{\log p_{n+1}}\le n\Big(1-\frac{1}{\log n}+\frac{\log\log n}{\log^2 n}\Big). \end{equation}The left-hand side inequality also holds for $2\le n\le 6.$ \end{theorem} We also generalise the left-hand side of \eqref{th1} to have the following result. \begin{theorem} For every $0<\varepsilon<1,$ there exists $n_\varepsilon\in \mathbb{N}$ such that for every $n\geq n_{\varepsilon}$ it holds that \begin{equation}\label{th1.2} n\Big(1-\frac{1}{\log n}+(1-\varepsilon)\frac{\log\log n}{ \log^2 n}\Big)\le\frac{\vartheta(p_n)}{\log p_{n+1}}\le n\Big(1-\frac{1}{\log n}+\frac{\log\log n}{\log^2 n}\Big). \end{equation} \end{theorem} \begin{corollary} We have $\displaystyle\frac{\vartheta(p_n)}{n}=\log p_{n+1}\left(1-\frac{1}{\log n}+\frac{\log\log n}{\log^2 n}\left(1+o(1)\right)\right).$ \end{corollary} \section{Preliminaries} Define $ G(n,a)=\log n+\log\log n-1+\frac{\log\log n-a}{\log n}$. We shall use the following bounds for $\vartheta(p_n)/n$. \begin{lemma} For every $n \geq 3$, we have \begin{equation} \frac{\vartheta(p_n)}{n} \geq G(n,2.1454), \label{2.2} \end{equation} and for every $n \geq 198$, we have \begin{equation} \frac{\vartheta(p_n)}{n} \leq G(n,2). \label{2.3} \end{equation} \end{lemma} \begin{proof} The inequality \eqref{2.2} is due to Robin \cite{robin}, and the inequality \eqref{2.3} was given by Massias and Robin \cite{massias-robin}. \end{proof} \begin{lemma} For every $n\ge 227$, we have \begin{equation}\label{2.4} p_n\le n(\log n+\log\log n-0.8), \end{equation}and for every $n\ge 2$, \begin{equation}\label{Dusart} p_n\ge n(\log n+\log\log n-1). 
\end{equation}\end{lemma} \begin{proof} For $n\ge 8602$, we have the following stronger bound \begin{equation}\label{massias-robin2} p_n\le n(\log n+\log\log n-0.9385) \end{equation} given by Massias and Robin \cite{massias-robin}. For $227\le n\le 8601$ we verify the inequality \eqref{2.4} by direct computation. The inequality (\ref{Dusart}) is due to Dusart \cite{dusart}. \end{proof} For the sake of brevity, we shall define $\displaystyle \mathcal{F}(n,\lambda)=1-\frac{1}{\log n}+\lambda\frac{\log\log n}{\log^2 n}$ and rewrite (\ref{th1}) as \begin{equation}\label{th2} \mathcal{F}(n,0.25)\log p_{n+1}\le\vartheta(p_n)/n\le\mathcal{F}(n,1)\log p_{n+1}\end{equation} and rewrite \eqref{th1.2} as \begin{equation}\label{new} \mathcal{F}\left(n,{1-\varepsilon}\right)\log p_{n+1}\le\vartheta(p_n)/n\le\mathcal{F}(n,1)\log p_{n+1}.\end{equation} \section{Proof of Theorem 1} The proof of Theorem 1 is split into two lemmas. In the first lemma, we give lower and upper bounds for $\log p_{n+1}.$ \begin{lemma}\label{mybd} For every $n\ge 140$, we have \begin{equation}\label{lemma3.1} \log p_{n+1}<\log n+\log\log n+\frac{\log\log n-0.8+0.018}{\log n}=U(n), \end{equation}and for every $n \geq 2$, we have \begin{equation}\label{lemma3.2} \log p_{n+1}>\log n+\log\log n+\frac{\log\log n-1}{\log n+0.5(\log\log n-1)}=V(n). \end{equation} \end{lemma} \begin{proof} First, we show that for every $x\geq 1$ \begin{equation}\label{logineq} \frac{1}{x+0.4}> \log\left(1+\frac{1}{x}\right)>\frac{1}{x+0.5}. \end{equation} In order to prove this, we set $ f_a(x)=\log(1+x)-\frac{x}{1+ax}$ and note that $ f_a'(x)=\frac{x(a^2x+2a-1)}{(1+x)(1+ax)^2}.$ Hence, $f'_{0.4}(x)<0$ for every $x\in(0,1.25)$, which yields $f_{0.4}(1/x)<f_{0.4}(0)=0$ for every $x\ge 1.$ On the other hand, $f'_{0.5}(x)>0$ for all positive $x$, which gives $f_{0.5}(1/x)>f_{0.5}(0)=0$ for every $x\ge 1.$ This completes the proof of (\ref{logineq}). Next, we give a proof of \eqref{lemma3.1}. By \eqref{2.4}, we have for $n \geq 227$, \begin{equation}\label{3.4} \log p_{n+1} \leq \log (n+1) + \log ( \log (n+1) + \log \log (n+1) - 0.8).\end{equation} The left-hand side inequality of (\ref{logineq}) implies $\displaystyle\log(n+1)<\log n+\frac{1}{n+0.4}.$ Using (\ref{logineq}) once again, we get\begin{equation*} \log\log (n+1)<\log\log n+\log\Big(1+\frac{1}{(n+0.4)\log n}\Big)<\log\log n+\frac{1}{(n+0.4)\log n}. \end{equation*} Applying this to (\ref{3.4}), we obtain for $n \geq 227$, \begin{equation}\label{3.5} \log p_{n+1}<\log n+\log\log n+\frac{\log\log n-0.8}{\log n}+\frac{1}{\log n}\cdot\frac{\log n+1+1/\log n}{n+0.4}. \end{equation} Now, $g(x)=\displaystyle\frac{\log x+1+1/\log x}{x+0.4}$ is a decreasing function for $x\ge2$ with $g(e^{5.99})\le 0.018$. Hence $g(x) \leq 0.018$ for every $x \geq 400 > e^{5.99}$. Combined with (\ref{3.5}), it follows that $\log p_{n+1} < U(n)$
for every $n \geq 400$. For every $140\leq n\leq 399$ we check the inequality \eqref{lemma3.1} with a computer. This completes the proof of (\ref{lemma3.1}). To prove the inequality \eqref{lemma3.2}, first note that \eqref{Dusart} gives for every $n\ge 1$,\begin{equation}\label{3.6} \log p_{n+1}\ge\log(n+1)+\log(\log(n+1)+\log\log(n+1)-1). \end{equation}The right-side inequality of (\ref{logineq}) gives $\displaystyle\log(n+1)>\log n+\frac{1}{n+0.5}.$ Using (\ref{logineq}) once again, we get, for $n\ge2,$ \begin{align*} \log\log(n+1)-\log\log n>\log\Big(1+\frac{1}{(n+0.5)\log n}\Big)>\frac{1}{(n+0.5)\log n+0.5}. \end{align*}Applying this to \eqref{3.6}, we arrive at \begin{align*} \log p_{n+1}&>\log n+\log\Big(\log n+\frac{1}{n+0.5}+\log\log n+\frac{1}{(n+0.5)\log n+0.5}-1\Big)\\ &>\log n+\log\log n+\log\Big(1+\frac{\log\log n-1}{\log n}\Big).\end{align*}Applying (\ref{logineq}) one more time, we get $\log p_{n+1}>V(n)$ for every $n\ge 2.$\end{proof} \vspace{-3mm}\begin{lemma}\label{lm4} For every $n\ge 396$, we have \begin{equation}\label{lemma4.1} G(n,2.1454)\ge \mathcal{F}(n,0.25)\cdot U(n),\end{equation} and for every $n\ge 2$, we have \begin{equation}\label{lemma4.2} G(n,2)\le \mathcal{F}(n,1)\cdot V(n). \end{equation} Here $U(n)$ and $V(n)$ are defined as in Lemma \ref{mybd}. \end{lemma} \begin{proof} We start with the proof of (\ref{lemma4.1}). Setting $x = \log n$, the inequality (\ref{lemma4.1}) can be rewritten as \begin{equation*} x+\log x-1+\frac{\log x-2.1454}{x}\ge \Big(1-\frac{1}{x}+\frac{\log x}{4x^2}\Big)\Big(x+\log x+\frac{\log x-0.8+0.018}{x}\Big), \end{equation*}which is equivalent to \begin{equation*} \Big(\frac{3}{4}\log x+\frac{\log x}{x}\Big)+\Big(-2.1454-\frac{\log^2 x}{4x}-\frac{\log^2 x}{4x^2}\Big)+(0.8-0.018)\Big(1-\frac{1}{x}+\frac{\log x}{4x^2}\Big)\geq 0. \end{equation*} The left-hand side is a sum of three increasing functions on the interval $[5.7, \infty)$ and at $x = 5.99$ the left-hand side is positive. So the last inequality holds for every $x \geq 5.99$; i.e., for every $n \geq 400$. A direct computation shows that the inequality (\ref{lemma4.1}) also holds for every $n$ satisfying $396 \leq n \leq 399$. Next, we give a proof of (\ref{lemma4.2}). It is easy to see that $$x^2 + \log x(\log x - 1) > \frac{x}{2}\log x(\log x - 1)$$ for every $x> 0$. Now, for $x\ge 1$, the last inequality is seen to be equivalent to $$ \left( 1 - \frac{1}{x} + \frac{\log x}{x^2} \right) \frac{\log x - 1}{x + 0.5(\log x-1)} \geq \frac{\log x-2}{x}.$$ Since $\frac{\log^2 x}{x^2} \geq 0$ for every $x > 0$, we get \begin{equation}\label{3.9} \frac{\log^2 x}{x^2} + \left( 1 - \frac{1}{x} + \frac{\log x}{x^2} \right) \frac{\log x - 1}{x + 0.5(\log x-1)} \geq \frac{\log x-2}{x}\end{equation} for every $x \ge 1$. Substituting $x = \log n$ in (\ref{3.9}), we obtain the inequality (\ref{lemma4.2}) for every integer $n \geq 3$. We can directly check that (\ref{lemma4.2}) holds for $n=2$ as well.\end{proof} Finally, we give a proof of Theorem 1. \begin{proof}[Proof of Theorem 1.] We use (\ref{2.2}), (\ref{lemma4.1}) and (\ref{lemma3.1}) respectively to see that for every $n\ge 396,$ \begin{equation*} \vartheta(p_n)/n\ge G(n,2.1454)\ge \mathcal{F}(n,0.25)U(n)>\mathcal{F}(n,0.25)\log p_{n+1}. \end{equation*} A direct computation shows that the left-hand side inequality of (\ref{th2}) also holds for every integer $n$ with $2 \leq n \leq 395$. 
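The finite verifications invoked in this section (the inequality \eqref{lemma3.1} for $140\le n\le 399$ and the left-hand side of (\ref{th2}) for $2\le n\le 395$) are easy to reproduce. The following Python sketch is one possible such computation (not the code actually used here), assuming SymPy is available for generating primes.
\begin{verbatim}
# Sketch of the finite checks: left-hand side of (th2) for 2 <= n <= 395
# and inequality (lemma3.1) for 140 <= n <= 399.
from math import log
from sympy import prime

N = 401
primes = [prime(k) for k in range(1, N + 2)]   # p_1, ..., p_{N+1}; primes[n] = p_{n+1}
theta = [0.0]
for p in primes:
    theta.append(theta[-1] + log(p))           # theta[n] = theta(p_n)

def F(n, lam):                                 # F(n, lambda) defined above
    return 1 - 1/log(n) + lam*log(log(n))/log(n)**2

def U(n):                                      # U(n) from Lemma (mybd)
    return log(n) + log(log(n)) + (log(log(n)) - 0.8 + 0.018)/log(n)

assert all(theta[n]/n >= F(n, 0.25)*log(primes[n]) for n in range(2, 396))
assert all(log(primes[n]) < U(n) for n in range(140, 400))
print("finite checks passed")
\end{verbatim}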
In order to prove the right-hand side inequality of (\ref{th2}), we combine (\ref{2.3}), (\ref{lemma4.2}) and (\ref{lemma3.2}) respectively to get \begin{equation*}\vartheta(p_n)/n\le G(n,2)\le \mathcal{F}(n,1)V(n)\le\mathcal{F}(n,1)\log p_{n+1} \end{equation*}for every $n\ge 198$. For smaller values of $n$, we use a computer. \end{proof} \section{Proof of Theorem 2} The right-hand side of \eqref{new} has been established already. To show the left-hand side, we start with the following lemma. \begin{lemma}\label{lm5} For any $0<\varepsilon<1$, there exists $m_\varepsilon\in\mathbb{N}$ such that \begin{equation}\label{lemma last} G(n,2.1454)\ge \mathcal{F}(n,1-\varepsilon)\cdot U(n)\end{equation}holds for every $n\geq m_\varepsilon.$ Here $U(n)$ is defined as in Lemma \ref{mybd}. \end{lemma} \begin{proof} Fix any $0<\varepsilon<1$. We denote $a=2.1454, b=0.8-0.018$ and set $x = \log n$ to transform the inequality (\ref{lemma last}) into \begin{equation*} x+\log x-1+\frac{\log x-a}{x}\ge \Big(1-\frac{1}{x}+(1-\varepsilon)\frac{\log x}{x^2}\Big)\Big(x+\log x+\frac{\log x-b}{x}\Big). \end{equation*}This is equivalent to \begin{equation*} \left(\varepsilon\log x+\frac{\log x}{x}\right)+\left(-a-(1-\varepsilon)\frac{\log^2 x}{x^2}(x+1)\right)+b\left(1-\frac{1}{x}+(1-\varepsilon)\frac{\log x}{x^2}\right)\geq 0. \end{equation*} Now, the left-hand side is a sum of three functions, each of which is strictly increasing for all sufficiently large $x,$ and the limit of the left-hand side, as $x\to\infty$, is $+\infty.$ Therefore we conclude that the last inequality holds for all sufficiently large $x.$ \end{proof} \begin{proof}[Proof of Theorem 2.] For any $0<\varepsilon<1,$ we have $m_\varepsilon\in\mathbb{N}$ such that (\ref{lemma last}) holds for every $n\ge m_\varepsilon$. We combine this with (\ref{2.2}) and (\ref{lemma3.1}) to obtain that for every $n\geq n_\varepsilon := \max\{m_\varepsilon,140\}$ \begin{equation*} \vartheta(p_n)/n\ge G(n,2.1454)\ge \mathcal{F}(n,1-\varepsilon)U(n)\geq\mathcal{F}(n,1-\varepsilon)\log p_{n+1}. \end{equation*}This completes the proof.\end{proof} \section{Remarks} \begin{enumerate} \item For every $n\ge 599$, we have \[\frac{\pi(n)}{n}\ge \frac{1}{\log n}+\frac{1}{\log^2 n},\] which was found by Dusart \cite{dusart2}. Using this and a computer, we get \begin{displaymath} \frac{\pi(n)}{n} \geq \frac{1}{\log n - 1} \left( 1 - \frac{\log \log n}{4 \log n} \right) \end{displaymath} for every integer $n \geq 83$, which is equivalent to $n\,\mathcal{F}(n,0.25)\ge n-\pi(n)\big(1-\frac{1}{\log n}\big)$. Hence, \eqref{th1} is an improvement of \eqref{hassani}. \vspace{2mm} \item The bounds given in \eqref{th1} are particularly useful for comparing $\vartheta(p_n)/n$ with $\log p_{n+1}.$ To see a numerical example, we use a computer to find that for $n\ge 23$ the relative error in approximating $\vartheta(p_n)/n$ with $\mathcal{F}(n,0.25)\log p_{n+1}$ is less than $5\%$ and for $n\ge 114$ it is less than $2\%.$ An important feature of \eqref{th1} is that it holds even for very small values of $n.$ \end{enumerate} \section*{Acknowledgements} I am thankful to Mridul Nandi (Indian Statistical Institute, Kolkata, India) and Mehdi Hassani (University of Zanjan, Iran) for their valuable suggestions. \makeatother
\section{Introduction} \begin{figure}[!t] \centering \subfigure[]{ \label{fig:LidarProjectionDetails:a} \includegraphics[height=2.6in]{"Figures/326_53858c_case1"}} \hspace{-0.19in} \subfigure[]{ \label{fig:LidarProjectionDetails:b} \includegraphics[height=2.6in]{"Figures/157_53354r_case1"}} \hspace{-0.19in} \subfigure[]{ \label{fig:LidarProjectionDetails:c} \includegraphics[height=2.6in]{"Figures/912_55586l_case1"}} \hspace{-0.19in} \subfigure[]{ \label{fig:LidarProjectionDetails:d} \includegraphics[height=2.6in]{"Figures/780_55195l_case1"}} \caption{Examples of projecting lidar points to images before (top row) and after (bottom row) ego-motion correction. Misalignment of lidar points and features in the images due to ego-motion distortion can be clearly seen for a traffic sign in (a), a parked car and a building behind in (b), a tree trunk in (c), and lastly a walking pedestrian in (d). The figures in the bottom row show the improved results in the projection of ego-motion corrected lidar points using the proposed approach. The ego-motion correction uncertainty is available for each projected point, but not shown in the figures due to size constraints. The projected lidar points are colour-coded by range.} \label{fig:LidarProjectionDetails} \end{figure} To navigate through any environment, a mobile robot platform is required to perceive the environment and achieve some level of understanding of the surroundings. In many sophisticated systems, this requires the combining of information from heterogeneous on-board sensors. Lidars and cameras are complementary sensors that are extensively used in various robotic systems. Each sensor has a different strength---lidars offer precise range and reflective intensity measurements that are registered in 3D space, while cameras provide rich information of colour, texture and other visual features only in 2D. A considerable proportion of autonomous driving solutions proposed and developed by the automotive industry and research institutes rely on multiple cameras and lidars, in particular multi-beam lidars, to capture the activity of road users in the vicinity and to build contextual information---pedestrians, cyclists, other vehicles, traffic signs, lane markings, the road itself, etc.---in a traffic scene. Through the fusion of information from the two sensors of different modalities, we are able to transfer relevant data from the lidar to the camera domain, and vice versa, providing a better understanding of the surroundings' structure \cite{paper:ChienKlette2016}. It is thus of great importance to achieve accurate and robust perception by fusing camera and lidar information in a consistent manner. Although often over-simplified in many applications as being measured at a single time, each point contained in a lidar scan is in fact captured at a slightly different time due to the laser firing cycles. The motion of the egocentric robot platform causes distortion in the lidar measurements as the sensor coordinate system moves along with the platform during the period of a scan. In theory, every 3D point is measured from a temporally unique frame of reference. The lidar points therefore must be compensated for ego-motion and transformed into a common reference coordinate system before further point cloud and sensor fusion related processing can take place. This can include, for instance, lidar feature extraction, projection to image frames, transferring segmentation results from the image space onto the 3D point cloud, or 3D mapping. 
The ego-motion correction becomes more essential for higher speed motion of the robot system, where the distortion caused by ego-motion tends to be more severe. Examples presented in Figure \ref{fig:LidarProjectionDetails} illustrate misalignment of lidar and visual features in the environment when projecting uncorrected lidar points to images, which can cause degraded performance in the sensor fusion. Interested readers can refer to \cite{paper:RiekenMaurer2016} for quantitative analyses of the time-related effects of moving scanning sensors on different perception tasks for multiple sensor systems. Depending on the way the underlying ego-motion estimation of the sensor platform is conducted, there are two main categories of existing approaches proposed in the literature to correct the 3D lidar point cloud distortion due to ego-motion of the platform. In the first type, such as \cite{paper:SchneiderHimmelsbach2010, paper:HimmelsbachMuller2010, paper:MerriauxDupuis2017}, lidar scans are corrected by exploiting information from motion estimation sensors, such as IMU and odometry measurements. More complicated work presented in \cite{paper:VargaCostea2017, paper:ByunNa2015} obtains vehicle pose translation and rotation by fusing precise GNSS and IMU measurements. However, high end GNSS/INS units are costly, and their desired performance would not be achieved in GNSS denied environments. The other type relies on lidar based odometry estimation \cite{paper:TangYoon2018}, which eliminates the requirement of additional hardware. It can be further decomposed to simultaneous localization and mapping (SLAM) based approaches \cite{paper:MoosmannStiller2011, paper:ZhangSingh2014} that estimate the ego-motion by comparing point cloud features, and iterative closest point (ICP) based methods \cite{paper:HongKo2010} which infer the ego-motion through matching of consecutive scans. The SLAM based approaches are preferred in 3D map construction, yet loop closure is not achievable in some environments. ICP based methods are prone to errors brought by moving objects in the scene, such as pedestrians and vehicles \cite{paper:VargaCostea2017}. Overall, it requires a substantial computational overhead for extracting features from a lidar point cloud and comparing lidar scans in these approaches. None of the above approaches provides a way to estimate the uncertainty associated with each of the 3D lidar points and/or 2D image points in ego-motion correction process. We stress that there is always some uncertainty in 3D space associated with each motion corrected lidar point brought by errors in ego-motion estimation, regardless of which odometry sensors and/or estimation frameworks are adopted. Likewise, the motion corrected points when projected into a camera coordinate system also contain uncertainty in 2D image coordinates. The uncertainty is often considerable under many circumstances and has to be estimated along with the ego-motion correction process. Thus, the perception system can benefit from the uncertainty estimates in the subsequent point cloud and sensor fusion processing pipeline. A probabilistic approach was proposed by \cite{paper:LeGentilVidalCalleja2018} that includes the correction due to motion distortion in 3D point clouds using IMU data considering measurement uncertainty. Yet, the approach mainly focuses on recovery of extrinsic calibration parameters of a lidar-IMU tightly coupled system, which does not produce explicit estimation uncertainty for corrected lidar points. 
The more recent work \cite{paper:CharikaShan2019} presents a probabilistic approach to estimate the uncertainty in the lidar-to-camera projection process. It employs a Jacobian based uncertainty model to estimate for each projected lidar point the combined uncertainty (in 2D) resulting from noise in ego-motion correction and errors in sensors' extrinsic and intrinsic calibration. Nevertheless, the uncertainty estimation for ego-motion corrected lidar points themselves (in 3D) is not supported by the approach. The paper examines probabilistic ego-motion correction of lidar 3D point clouds to an arbitrary reference timestamp and projection to 2D image frames considering uncertainty in ego-motion estimation of the moving platform. On top of ego-motion correction outcomes, the proposed approach provides uncertainty estimates separately for each ego-motion corrected 3D lidar point and each projected 2D pixel point. Besides, the proposed approach considers additional uncertainty brought by time jitter in sensor data timestamps, which is a practical issue in many robotics systems. The proposed approach is validated using real-world data collected on an electric vehicle platform. Simulation results are also presented to quantitatively evaluate the proposed approach and assess the estimator credibility. The remainder of the paper is organised as follows. Section \ref{sec:approach} presents the details of the proposed approach, including the probabilistic lidar ego-motion correction and the projection to an image frame. The experiment outcomes are presented in Section \ref{sec:results}, where simulation results are also included. Lastly, Section \ref{sec:conlusions} concludes the paper. \section{The Proposed Approach} \label{sec:approach} In this section, we elaborate on the calibration of multiple cameras and a lidar in our experimental platform, and the methodology to estimate uncertainties as a result of probabilistic lidar ego-motion correction and projection. \subsection{Calibration} The cameras located on the electric vehicle platform used in the experiment have a lens of $100^\circ$ horizontal field of view, which is classified as a fisheye lens. We have calibrated the cameras by using a variation of the ROS package \textit{camera\_calibration} \cite{ros_camera_calib} that uses a generic camera model \cite{fish_eye_model}. The camera intrinsic parameters for this model consist of the focal length, principal points and 4 fisheye equidistant distortion coefficients. These values are critical for lidar-to-camera projection and the subsequent sensor fusion. The extrinsic camera calibration is challenging when working with wide angle cameras due to the significant distortions in the lens. The extrinsic calibration in our electric vehicle platform was conducted as specified in the previous work \cite{SurabhiITSC}. This process uses a checkerboard from which the same features are extracted by both the camera and the lidar. The features are the centre point and the normal vector of the board. These features are fed to a genetic algorithm which is in charge of optimising the geometrical extrinsic parameters of the 3D transformation \(T_{cam}^{ld}\) between the two sensors. \subsection{Probabilistic Lidar Ego-Motion Correction and Projection to Image Frame} Usually within a lidar scan, lidar measurements with similar timestamps are grouped into a single lidar packet, with a common timestamp assigned to the grouping for convenience of processing. 
For instance, the Velodyne VLP-16 software driver used in our electric vehicle platform produces 76 packets for each full revolution scan. Each packet covers an azimuth angle of approximately \(4.74^{\circ}\). Alternatively, processing can be based on each individual lidar point with its own precise timestamp, though this comes at a significantly higher computational cost. Each of the lidar packets is transformed based on the estimated delta translation and rotation of the vehicle platform between the packet's timestamp and the reference timestamp \(t_{ref}\), as illustrated in Figure \ref{fig:time_line}. The proposed approach makes use of the unscented transform (UT) to propagate the uncertainties from the ego-motion estimation to corrected 3D lidar points and then to projected pixel coordinates in each camera image. The entire process can be divided into three cascaded stages, namely vehicle ego-motion estimation, lidar motion correction, and lidar-to-camera projection, each can be fitted into the UT pipeline. The reference time \(t_{ref}\) is usually chosen to be the timestamp corresponding to a common frame of reference where sensor fusion or subsequent processing happens. In scenarios where camera-lidar sensor fusion is desired, rectification of the lidar points have to be matched with the timestamp of the associated camera frame before the lidar-to-camera projection can be carried out \cite{paper:VargaCostea2017}. For instance, the \(t_{ref}\) can be set to coincide with the timestamp of the most recent or closest image. \begin{figure}[!t] \vspace{2mm} \centerline{ \includegraphics[width=0.9\columnwidth]{Figures/time_line.png} } \caption{Lidar point cloud motion correction process.} \label{fig:time_line} \end{figure} We assume a lidar scan is comprised of a set of \(N\) packets and their timestamps denoted as \begin{equation} \left\{pk_{i}, t_{i}^{lpk}\right\}_{i=0}^{N-1}, \end{equation} where \(pk_{i}\) contains a set of \(M\) 3D lidar measurement points \(\left\{\bm{z}_{i,j}^{ld}\right\}_{j=0}^{M-1}\), and \(\bm{z}^{ld} = \begin{bmatrix} x^{ld} & y^{ld} & z^{ld} & 1 \end{bmatrix}^{T}\). Before we proceed, the UT state decomposition and recovery functions are presented in Table \ref{table:UT_Decompose} and Table \ref{table:UT_Restore} respectively for the convenience of subsequent discussion, where \(\lambda = \alpha^2\left(d+\kappa\right)-d\), \(d = dim\left(\textbf{x}\right)\) is the dimension of state \(\textbf{x}\), scaling parameters \(\kappa \ge 0\), \(\alpha \in \left(0, 1\right]\), and \(\beta = 2\) for Gaussian distribution, \(\left(\sqrt{\bm{\Sigma}}\right)_i\) is to obtain the \(i^{th}\) column of the matrix square root \(\textbf{R} = \sqrt{\bm{\Sigma}}\), which can be computed by Cholesky decomposition such that we have \(\bm{\Sigma} = \textbf{R}\textbf{R}^{T}\). \subsubsection{Vehicle Ego-Motion Estimation} A proper way to estimate the ego-motion of the moving sensor platform is required to address its changing poses when perceiving the environment. In the electric vehicle platform used in our experiments, instantaneous linear and angular velocities are read from onboard wheel encoders and an IMU, respectively, at a rate of 100 Hz. 
Based on a sequence of monotonically increasing packet timestamps \(\bm{t}_{0:N-1}^{pk} = \left\{t_{i}^{pk}\right\}_{i=0}^{N-1}\), it is reasonable to construct a sequence of linear velocity vectors \(\bm{z}_{0:N-1}^{v} = \left\{\bm{z}_{i}^{v}\right\}_{i=0}^{N-1}\) corresponding to \(\bm{t}_{0:N-1}^{pk}\), and likewise a sequence of angular velocity vectors \(\bm{z}_{0:N-1}^{\omega} = \left\{\bm{z}_{i}^{\omega}\right\}_{i=0}^{N-1}\). \begin{table}[!t] \vspace{2mm} \centering \caption{Algorithm: State Decomposition in Unscented Transform} \label{table:UT_Decompose} \scalebox{1.0}{ \begin{tabular}{cl} \toprule \multicolumn{2}{l}{\(\left\{\bm{\mathcal{X}}_{i}, w_{i}^{m}, w_{i}^{c}\right\}_{i=0}^{2d} \leftarrow UTD\left(\bar{\textbf{x}}, \bm{\Sigma}\right)\)}\\ \midrule 1: & \hspace{-10pt} \( \bm{\mathcal{X}}_{0} = \bar{\textbf{x}} \)\\ 2: & \hspace{-10pt} \( \bm{\mathcal{X}}_{i} = \bar{\textbf{x}} + \left(\sqrt{\left(d+\lambda\right)\bm{\Sigma}}\right)_i\ \text{for}\ i=1,\cdots,d \)\\ 3: & \hspace{-10pt} \( \bm{\mathcal{X}}_{i} = \bar{\textbf{x}} - \left(\sqrt{\left(d+\lambda\right)\bm{\Sigma}}\right)_i\ \text{for}\ i=d+1,\cdots,2d \)\\ 4: & \hspace{-10pt} \( w_{0}^{m} = \frac{\lambda}{d+\lambda} \)\\ 5: & \hspace{-10pt} \( w_{0}^{c} = \frac{\lambda}{d+\lambda} + \left(1-\alpha^2+\beta\right) \)\\ 6: & \hspace{-10pt} \( w_{i}^{m} = w_{i}^{c} = \frac{1}{2\left(d+\lambda\right)}\ \text{for}\ i=1,\cdots,2d \)\\ \bottomrule \end{tabular}} \end{table} \begin{table}[!t] \centering \caption{Algorithm: State Recovery in Unscented Transform} \label{table:UT_Restore} \scalebox{1.0}{ \begin{tabular}{cl} \toprule \multicolumn{2}{l}{\(\bar{\textbf{x}}, \bm{\Sigma} \leftarrow UTR\left(\left\{\bm{\mathcal{X}}_{i}, w_{i}^{m}, w_{i}^{c}\right\}_{i=0}^{2d}\right)\)}\\ \midrule 1: & \hspace{-10pt} \( \bar{\textbf{x}} = \sum_{i=0}^{2d}{w_{i}^{m} \bm{\mathcal{X}}_{i}} \)\\ 2: & \hspace{-10pt} \( \bm{\Sigma} = \sum_{i=0}^{2d}{w_{i}^{c} \left(\bm{\mathcal{X}}_{i}-\bar{\textbf{x}}\right) \left(\bm{\mathcal{X}}_{i}-\bar{\textbf{x}}\right)^T} \)\\ \bottomrule \end{tabular}} \end{table} Each \(\bm{z}_{i}^{v}\) is a column vector with linear velocity readings along with \(x\), \(y\), and \(z\) and each \(\bm{z}_{i}^{\omega}\) is a column vector with the angular velocity measurements in \(roll\), \(pitch\), and \(yaw\) in the local frame of reference of the vehicle. In cases where odometry data and lidar packets are asynchronous, \(\bm{z}_{i}^{v}\) and \(\bm{z}_{i}^{\omega}\) can be well approximated using those with the closest timestamps to \(t_{i}^{pk}\), respectively, so long as the assumption that the vehicle kinematic state does not change dramatically during the time difference holds. Also, \(\bm{z}_{i}^{v}\) and \(\bm{z}_{i}^{\omega}\) are assumed to contain independently and identically distributed zero-mean Gaussian noises with their covariance matrices denoted as \(\bm{\Sigma}^{v}\) and \(\bm{\Sigma}^{\omega}\), respectively. The timing jitter in \(t_{i}^{pk}\) is modelled as zero-mean Gaussian noise with its standard deviation \(\sigma_{t}\). Please note that one can choose to use other off-the-shelf ego-motion estimation methods depending on the sensor configurations on the target platforms, as long as the method can provide robust and consistent uncertainty estimates. 
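For concreteness, a minimal NumPy sketch of the UTD and UTR primitives of Table \ref{table:UT_Decompose} and Table \ref{table:UT_Restore} is given below. It is illustrative only, not the implementation used on the vehicle, and the planar constant-velocity step at the end is merely a hypothetical stand-in for the kinematic model \(f_{km}\) introduced in the following paragraphs.
\begin{verbatim}
import numpy as np

def utd(x_bar, sigma, alpha=1.0, beta=2.0, kappa=0.0):
    """State decomposition (Table I): sigma points and weights w^m, w^c."""
    d = x_bar.size
    lam = alpha**2 * (d + kappa) - d
    R = np.linalg.cholesky((d + lam) * sigma)   # sqrt((d+lam) Sigma), Sigma = R R^T
    X = np.empty((2 * d + 1, d))
    X[0] = x_bar
    for i in range(d):
        X[1 + i] = x_bar + R[:, i]
        X[1 + d + i] = x_bar - R[:, i]
    wm = np.full(2 * d + 1, 1.0 / (2.0 * (d + lam)))
    wc = wm.copy()
    wm[0] = lam / (d + lam)
    wc[0] = lam / (d + lam) + (1.0 - alpha**2 + beta)
    return X, wm, wc

def utr(X, wm, wc):
    """State recovery (Table II): weighted mean and covariance."""
    x_bar = wm @ X
    diff = X - x_bar
    return x_bar, (wc[:, None] * diff).T @ diff

# Hypothetical planar constant-velocity step, standing in for f_km.
def f_km(state, v, omega, dt):
    x, y, yaw = state
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + omega * dt])

# Propagate a pose and its covariance through one packet interval.
X, wm, wc = utd(np.zeros(3), np.diag([1e-4, 1e-4, 1e-6]))
Y = np.array([f_km(s, v=5.0, omega=0.2, dt=0.01) for s in X])
mean, cov = utr(Y, wm, wc)
\end{verbatim}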
Given \(\bm{t}_{0:N-1}^{pk}\), \(\bm{z}_{0:N-1}^{v}\), \(\bm{z}_{0:N-1}^{\omega}\), and \(t_{ref}\), the vehicle ego-motion estimation is required to find out the sequence of Gaussian variables \(\left\{\textbf{x}_{i}^{veh} \sim \mathcal{N}\left(\bar{\textbf{x}}_{i}^{veh}, \bm{\Sigma}_{i}^{veh}\right)\right\}_{i=0}^{N-1}\) representing the estimated vehicle egocentric poses at \(\bm{t}^{lpk}\) with respect to that at \(t_{ref}\). Let \(\textbf{x}_{ref}^{veh}\) denote the Gaussian variable representing the vehicle egocentric state at \(t_{ref}\). \begin{equation} \textbf{x}_{ref}^{veh} \sim \mathcal{N}\left(\bar{\textbf{x}}_{ref}^{veh}, \bm{\Sigma}_{ref}^{veh}\right), \end{equation} where we set the mean vector \(\bar{\textbf{x}}_{ref}^{veh} = \bm{0}\) and the covariance matrix \(\bm{\Sigma}_{ref}^{veh}\) to a diagonal matrix with close to zero elements, since we are performing ego-motion estimation within the local coordinate system of the vehicle at \(t_{ref}\). If \(t_{ref} > t_{0}^{lpk}\), then backward ego-motion estimation is performed by first initialising intermediate variables: \begin{align} \label{eq:egomotion_prediction_init} t_{*} &\leftarrow t_{ref} & \bar{\textbf{x}}_{*}^{veh} &\leftarrow \bar{\textbf{x}}_{ref}^{veh} & \bm{\Sigma}_{*}^{veh} &\leftarrow \bm{\Sigma}_{ref}^{veh}. \end{align} Then for \(i = \max\left( \left\{p : t_{p}^{pk} \in \bm{t}^{pk} \wedge t_{p}^{pk} < t_{ref} \right\} \right), \cdots, 0\), an augmented state vector is constructed by concatenating intermediate vehicle egocentric state \(\textbf{x}_{*}^{veh}\) and kinematic measurements at \(t_{i}^{pk}\). \begin{equation} \label{eq:egomotion_prediction_start} \textbf{x}_{*}^{a} \sim \mathcal{N}\left( \bar{\textbf{x}}_{*}^{a}, \bm{\Sigma}_{*}^{a} \right), \end{equation} where \( \bar{\textbf{x}}_{*}^{a} = \begin{bmatrix} \left(\bar{\textbf{x}}_{*}^{veh}\right)^{T} & \left(\bm{z}_{i}^{v}\right)^{T} & \left(\bm{z}_{i}^{\omega}\right)^{T} & t_{i}^{pk} & t_{*} \end{bmatrix} \), and \( \bm{\Sigma}_{*}^{a} = \begin{bmatrix} \bm{\Sigma}_{*}^{veh} & \bm{0} & \bm{0} & 0 & 0 \\ \bm{0} & \bm{\Sigma}_{v} & \bm{0} & 0 & 0 \\ \bm{0} & \bm{0} & \bm{\Sigma}_{\omega} & 0 & 0 \\ 0 & 0 & 0 & \sigma_{t}^{2} & 0 \\ 0 & 0 & 0 & 0 & \sigma_{t}^{2} \\ \end{bmatrix} \). The backward motion estimation goes from a later timestamp \(t_{*}\) to an earlier \(t_{i}^{pk}\), resulting in a negative time difference considered in the kinematic model. The augmented state mean and covariance matrix are decomposed through UT into a set of sigma points. \begin{equation} \left\{\bm{\mathcal{X}}_{j}^{a}, w_{j}^{m}, w_{j}^{c}\right\}_{j=0}^{2d} \leftarrow UTD\left(\bar{\textbf{x}}_{*}^{a}, \bm{\Sigma}_{*}^{a}\right). \end{equation} For \(j = 0,\cdots,2d\), motion estimation is conducted backward in time. \begin{equation} \bm{\mathcal{Y}}_{j}^{veh} = f_{km}\left(\bm{\mathcal{X}}_{j}^{a}\right), \end{equation} where \(f_{km}\left(\cdot\right)\) is the vehicle kinematic model that predicts vehicle pose based on a given pose and kinematic velocities over a given time duration. The estimated vehicle egocentric state at timestamp \(t_{i}^{pk}\) is recovered by \begin{equation} \bar{\textbf{x}}_{i}^{veh}, \bm{\Sigma}_{i}^{veh} \leftarrow UTR\left(\left\{\bm{\mathcal{Y}}_{j}^{veh}, w_{j}^{m}, w_{j}^{c}\right\}_{j=0}^{2d}\right). 
\end{equation} The results also serve as the intermediate variables for the next iteration: \begin{align} \label{eq:egomotion_prediction_end} t_{*} &\leftarrow t_{i}^{pk} & \bar{\textbf{x}}_{*}^{veh} &\leftarrow \bar{\textbf{x}}_{i}^{veh} & \bm{\Sigma}_{*}^{veh} &\leftarrow \bm{\Sigma}_{i}^{veh}. \end{align} If \(t_{ref} \leq t_{N-1}^{pk}\), then forward vehicle ego-motion estimation is carried out by initialising the intermediate variables as in \eqref{eq:egomotion_prediction_init}, and for \(i = \min\left(\left\{p : t_{p}^{pk} \in \bm{t}^{pk} \wedge t_{p}^{pk} \geq t_{ref}\right\}\right),\cdots,N-1\), using the same set of equations \eqref{eq:egomotion_prediction_start}-\eqref{eq:egomotion_prediction_end}, except that in this case \( \bar{\textbf{x}}_{*}^{a} = \begin{bmatrix} \left(\bar{\textbf{x}}_{*}^{veh}\right)^{T} & \left(\bm{z}_{i}^{v}\right)^{T} & \left(\bm{z}_{i}^{\omega}\right)^{T} & t_{*} & t_{i}^{pk} \end{bmatrix}^{T} \), and in every iteration the motion estimation proceeds from the earlier timestamp \(t_{i}^{pk}\) to the later \(t_{*}\). \subsubsection{3D Lidar Points Motion Correction} With a sequence of estimated vehicle egocentric poses \(\left\{\textbf{x}_{i}^{veh} \sim \mathcal{N}\left(\bar{\textbf{x}}_{i}^{veh}, \bm{\Sigma}_{i}^{veh}\right)\right\}_{i=0}^{N-1}\) at \(\bm{t}^{pk}\) obtained from the vehicle ego-motion estimation stage, motion correction is applied to each corresponding packet of 3D lidar measurement points. For \(i = 0,1,\cdots,N-1\), the estimated state mean and covariance matrix are decomposed into a set of sigma points
: \begin{equation} \left\{\bm{\mathcal{X}}_{i,k}^{veh}, w_{i,k}^{m}, w_{i,k}^{c}\right\}_{k=0}^{2d} \leftarrow UTD\left(\bar{\textbf{x}}_{i}^{veh}, \bm{\Sigma}_{i}^{veh}\right). \end{equation} A set of \(4\times 4\) homogeneous transformation matrices \(\left\{\mathcal{T}_{i,k}^{veh}\right\}_{k=0}^{2d}\) is constructed from the rotation and translation in each \(\bm{\mathcal{X}}_{i,k}^{veh}\). For \(j = 0,\cdots,M-1\), and for \(k = 0,\cdots,2d\), a motion corrected sigma point is calculated as \begin{equation} \bm{\mathcal{Z}}_{i,j,k}^{cld} = (T_{veh}^{ld})^{-1} \cdot \mathcal{T}_{i,k}^{veh} \cdot T_{veh}^{ld} \cdot \bm{z}_{i,j}^{ld}, \end{equation} where the lidar point is first transformed into the vehicle's base frame by the rigid transform \(T_{veh}^{ld}\), followed by a transformation that encapsulates the delta ego-motion in the vehicle base frame. Lastly, the point is transformed back to the lidar coordinate system. At this stage, a motion corrected lidar point \(\bm{z}_{i,j}^{cld}\) within lidar packet \(pk_{i}\) can be recovered as a Gaussian variable through \begin{equation} \bar{\bm{z}}_{i,j}^{cld}, \bm{\Sigma}_{i,j}^{cld} \leftarrow UTR\left(\left\{\bm{\mathcal{Z}}_{i,j,k}^{cld}, w_{i,k}^{m}, w_{i,k}^{c}\right\}_{k=0}^{2d}\right). \end{equation} In the end, the process produces a set of motion corrected sigma points for the lidar points, denoted by \begin{equation} \bm{\Omega} = \left\{\left\{\left\{\bm{\mathcal{Z}}_{i,j,k}^{cld}\right\}_{j=0}^{M-1}, w_{i,k}^{m}, w_{i,k}^{c}\right\}_{k=0}^{2d}\right\}_{i=0}^{N-1}. \end{equation} The lidar ego-motion correction with uncertainty is complete at this stage. A further transformation can be applied to \(\bm{\Omega}\) for lidar-to-camera projection with ego-motion uncertainty, as described in the next subsection. \subsubsection{Lidar-to-Camera Projection} This stage is only required for systems that perform camera-lidar sensor fusion. It takes a motion corrected 3D lidar point cloud as input and projects the lidar points to a given camera coordinate system. To obtain an accurate camera-lidar projection, the timestamp of the image used for the projection is chosen as the reference time \(t_{ref}\) in the motion correction process. Before the projection, a 3D lidar point needs to be transformed from the lidar frame to the camera frame given the extrinsic calibration between the camera and lidar sensors, represented by the transformation matrix \(T_{cam}^{ld}\): \begin{equation} \bm{z}^{cam} = T_{cam}^{ld} \bm{z}^{ld}, \end{equation} where \(\bm{z}^{cam} = \begin{bmatrix} x^{cam} & y^{cam} & z^{cam} & 1 \end{bmatrix}^{T}\) is the 3D lidar point expressed in the camera frame. The generic lidar-to-camera projection function is defined as \begin{equation} \begin{bmatrix} u \\ v \end{bmatrix} = f_{proj} \left(\bm{z}^{cam}\right), \end{equation} which finds the pixel coordinates \(u\) and \(v\) in the image frame corresponding to a 3D lidar point \(\bm{z}^{cam}\) in the camera frame by using the camera model and its intrinsic parameters. The function first makes use of the generic pinhole camera-image projection equations, which state \begin{align} a &= \frac{x^{cam}}{z^{cam}} & b &= \frac{y^{cam}}{z^{cam}} \label{eq_pc1} \end{align} \begin{align} r &= \sqrt{a^{2}+b^{2}} & \theta &= \textup{atan}(r) \label{eq_pc2}. \end{align} Since our cameras have fisheye lenses, we need to apply the distortion defined by the camera model to find the corresponding pixel in the image \cite{opencv}.
The distortion of the lens is calculated as follows: \begin{equation} \label{eq_pc3} \theta_d = \theta(1+k_1\theta^2+k_2\theta^4+k_3\theta^6+k_4\theta^8), \end{equation} where $k_1$, $k_2$, $k_3$ and $k_4$ are the lens' distortion coefficients. Then we compute the distorted point coordinates as \begin{align} x' &= (\theta_d/r)a & y' &= (\theta_d/r)b. \end{align} The final pixel coordinate vector \(\begin{bmatrix} u & v \end{bmatrix}^{T}\) in the image frame of a 3D lidar point can then be calculated as \begin{align} u &= f_x \cdot (x' + e y')+c_x & v &= f_y \cdot y'+c_y, \label{eq_pc4} \end{align} where \(e\) is the camera's skew coefficient, $[c_x,c_y]$ is the principal point offset, and $[f_x, f_y]$ are the focal lengths expressed in pixel units. In order to produce projected points in the image frame with uncertainty information, and also to avoid unnecessary UT operations, this stage works directly on \(\bm{\Omega}\), the set of sigma points for each 3D lidar point produced by the 3D lidar points motion correction stage. We can combine the transformation of each sigma point \(\bm{\mathcal{Z}}_{i,j,k}^{cld} \in \bm{\Omega}\) from the lidar frame to the camera frame using \(T_{cam}^{ld}\) with the projection to the image frame by \begin{equation} \label{eq_pc0} \left\{\bm{\mathcal{K}}_{i,j,k}^{cam} : \left(\exists \bm{\mathcal{Z}}_{i,j,k}^{cld} \in \bm{\Omega}\right) \left[ \bm{\mathcal{K}}_{i,j,k}^{cam} = f_{proj}\left(T_{cam}^{ld} \bm{\mathcal{Z}}_{i,j,k}^{cld}\right) \right] \right\}. \end{equation} For \(i = 0,\cdots,N-1\) and for \(j = 0,\cdots,M-1\), the image pixel projected from the lidar point \(\bm{z}_{i,j}^{cld}\) within lidar packet \(pk_{i}\) can be recovered with its mean values and covariance matrix by \begin{equation} \begin{bmatrix} \bar{u}_{i,j} \\ \bar{v}_{i,j} \end{bmatrix}, \bm{\Sigma}_{i,j}^{uv} \leftarrow UTR\left(\left\{\bm{\mathcal{K}}_{i,j,k}^{cam}, w_{i,k}^{m}, w_{i,k}^{c}\right\}_{k=0}^{2d}\right). \end{equation} \section{Results} \label{sec:results} \subsection{Experiment Results} We implemented the proposed approach in C++ under the ROS Melodic release and tested it on the USyd Dataset \cite{usyd_dataset, USYD_Segmentation_2019}, which was obtained with the electric vehicle platform shown in Figure \ref{fig:platform}. The vehicle is equipped with a Velodyne VLP-16 lidar and five fixed-lens gigabit multimedia serial link (GMSL) cameras, each covering a \(100^{\circ}\) horizontal field of view. The camera images have a resolution of \(1920 \times 1208\) and are captured at 30 FPS by an onboard NVIDIA DRIVE PX2 automotive computer. The extrinsic camera calibration is conducted relative to the lidar sensor frame, and both are registered to the local frame of reference of the vehicle. The platform also contains wheel encoders and an IMU, which produce odometry measurements at 100 Hz. The constant turn rate and velocity (CTRV) kinematic model is adopted for the vehicle. In the experiment, we use the proposed approach to correct the lidar point cloud using the timestamp of the last lidar packet as \(t_{ref}\). We also project the latest lidar point cloud to the most recent image frames from the three front-facing cameras, in which case the timestamps of the image frames are chosen as \(t_{ref}\). Only the \(x\) component of the linear velocity measurements is used, with the standard deviation of its noise set to 0.05 m/s.
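As a concrete illustration of the lidar-to-camera projection chain \eqref{eq_pc1}--\eqref{eq_pc4}, the following Python sketch projects a single point expressed in the camera frame to pixel coordinates. The intrinsic and distortion parameters shown are placeholders for illustration only, not the calibration of the cameras described above, and the production pipeline remains the C++ implementation.
\begin{verbatim}
import numpy as np

def f_proj(z_cam, fx, fy, cx, cy, k, e=0.0):
    # Fisheye projection: pinhole normalisation, radial distortion
    # polynomial, then mapping to pixel coordinates with skew e.
    x, y, z = z_cam[:3]
    a, b = x / z, y / z
    r = np.hypot(a, b)
    theta = np.arctan(r)
    theta_d = theta * (1 + k[0]*theta**2 + k[1]*theta**4
                         + k[2]*theta**6 + k[3]*theta**8)
    s = theta_d / r if r > 1e-12 else 1.0   # on-axis limit: theta_d/r -> 1
    xp, yp = s * a, s * b
    return np.array([fx * (xp + e * yp) + cx, fy * yp + cy])

# Placeholder intrinsics and distortion coefficients, for illustration only.
uv = f_proj(np.array([1.0, 0.2, 5.0, 1.0]),
            fx=1400.0, fy=1400.0, cx=960.0, cy=604.0,
            k=[-0.05, 0.01, -0.002, 0.0003])
\end{verbatim}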
The measurements of angular velocities in \(roll\), \(pitch\), and \(yaw\) are used, with the standard deviation of their noise set to 2 deg/s, which accounts for the IMU's thermo-mechanical white noise and the noise from mechanical vibration when the vehicle is moving. \(\sigma_{t}\) is set to 0.0006 s. Each corrected point cloud is published as a ROS \textit{sensor\_msgs/PointCloud2} message, where the data fields of every 3D lidar point are augmented with its covariance in 3D, its projected image coordinates, and their associated covariance in 2D. An example of lidar ego-motion correction can be found in Figure \ref{fig:LidarCorrectionLarge}, which shows the corrected lidar point cloud with uncertainty estimates alongside the uncorrected point cloud. The ego-motion distortion can often cause issues in lidar feature extraction. This can, for instance, manifest as a ghost image of the lidar features, which is often observed in the overlapping area of the first and last packets of a lidar scan, as shown in the left column of Figure \ref{fig:LidarCorrectionDetails}. As illustrated in the right column of Figure \ref{fig:LidarCorrectionDetails}, the issue can be effectively rectified using the proposed approach, which also provides an uncertainty estimate for each lidar point. \begin{figure}[!t] \vspace{2mm} \centerline{ \includegraphics[width=0.9\columnwidth]{Figures/platform.png} } \caption{Experimental platform equipped with five cameras (two side cameras and an array of three front-facing cameras), one Velodyne VLP-16 lidar, wheel encoders and an IMU that contains gyroscopes, accelerometers and magnetometers. } \label{fig:platform} \end{figure} \begin{figure}[!t] \vspace{2mm} \centering \subfigure[]{ \label{fig:LidarCorrectionLarge:a} \includegraphics[trim={1.5cm 0 1.5cm 0.9cm},clip, width=0.9\columnwidth]{"Figures/784_uncor"}} \hspace{-0.0in} \subfigure[]{ \label{fig:LidarCorrectionLarge:b} \includegraphics[trim={1.5cm 0 1.5cm 0.9cm},clip, width=0.9\columnwidth]{"Figures/784_cortd_x"}} \hspace{-0.0in} \subfigure[]{ \label{fig:LidarCorrectionLarge:c} \includegraphics[trim={1.5cm 0 1.5cm 0.9cm},clip, width=0.9\columnwidth]{"Figures/784_cortd_y"}} \hspace{-0.0in} \subfigure[]{ \label{fig:LidarCorrectionLarge:d} \includegraphics[width=2.0in]{"Figures/colorbar"}} \caption{Lidar point cloud before and after ego-motion correction. (a) shows the point cloud before ego-motion correction. (b) and (c) present the corrected point cloud coloured by the standard deviation in the \(x\) and \(y\) directions, respectively. As the lidar scans in the clockwise direction, the older points tend to have a higher level of uncertainty due to ego-motion.
The uncertainty in \(z\) is found to be less significant and is thus not shown.} \label{fig:LidarCorrectionLarge} \end{figure} \begin{figure}[!t] \vspace{2mm} \centering \subfigure[]{ \label{fig:LidarCorrectionDetails:a} \includegraphics[trim={0 0.15cm 0 0.2cm},clip,width=1.6in]{"Figures/320_uncor"}} \hspace{-0.1in} \subfigure[]{ \label{fig:LidarCorrectionDetails:b} \includegraphics[trim={0 0.15cm 0 0.2cm},clip,width=1.6in]{"Figures/320_cortd_x"}} \subfigure[]{ \label{fig:LidarCorrectionDetails:c} \includegraphics[trim={0 0.15cm 0 0.4cm},clip,width=1.3in]{"Figures/799_uncor"}} \hspace{-0.1in} \subfigure[]{ \label{fig:LidarCorrectionDetails:d} \includegraphics[trim={0 0.15cm 0 0.4cm},clip,width=1.3in]{"Figures/799_cortd_x"}} \subfigure[]{ \label{fig:LidarCorrectionDetails:e} \includegraphics[trim={0 0.15cm 0 0.1cm},clip,width=1.6in]{"Figures/653_uncor"}} \hspace{-0.1in} \subfigure[]{ \label{fig:LidarCorrectionDetails:f} \includegraphics[trim={0 0.15cm 0 0.1cm},clip,width=1.6in]{"Figures/653_cortd_x"}} \hspace{-0.0in} \subfigure[]{ \label{fig:LidarCorrectionDetails:g} \includegraphics[width=2.0in]{"Figures/colorbar"}} \caption{Lidar features before and after ego-motion correction. (a), (c), and (e) illustrate the lidar points of a traffic sign, a pillar, and a pedestrian, respectively, before ego-motion correction. (b), (d), and (f) depict the motion corrected points coloured by the standard deviation in the \(x\) direction. The standard deviations in the \(y\) and \(z\) directions are available but not shown here.} \label{fig:LidarCorrectionDetails} \end{figure} \begin{figure}[!t] \vspace{2mm} \centering \subfigure[]{ \label{fig:LidarProjectionLarge:a} \includegraphics[trim={0.2cm 0.15cm 0.2cm 0.15cm},clip, height=2.37in]{"Figures/635_54770l_case1"}} \hspace{-0.13in} \subfigure[]{ \label{fig:LidarProjectionLarge:b} \includegraphics[trim={0.2cm 0.15cm 0.2cm 0.15cm},clip, height=2.37in]{"Figures/635_54770l_case2"}} \caption{An example of lidar-to-camera projection using the proposed approach. The projection of the raw point cloud to an image from the front-left camera is shown in the top figure of (a), where the misalignment of lidar points and visual features is apparent. A closer alignment between the points and the image features can be observed in the projection of the motion corrected lidar points in the bottom figure of (a). In addition, (b) shows the uncertainty estimates of each projected point as a result of the proposed approach. Every ellipse in (b) covers a 95\% confidence area. The projected lidar points are colour-coded by range.} \label{fig:LidarProjectionLarge} \end{figure} Besides the results previously presented in Figure \ref{fig:LidarProjectionDetails}, more results of lidar-to-camera projection can be found in Figure \ref{fig:LidarProjectionLarge}. It can be clearly seen that the precision of the projection improves significantly with the proposed approach. The uncertainty estimates for each projected lidar point on the image are illustrated in Figure \ref{fig:LidarProjectionLarge:b}. It is important to note that in each lidar-to-camera projection figure presented here, the lidar can partially see behind objects seen by the camera. The lidar viewpoint is slightly different from that of the camera as the sensors are not co-located, and as a result objects observed by one sensor can block the visibility of the other. This is due to the cameras and the lidar being mounted at different positions on the vehicle, with the aim of providing wide coverage using an array of cameras.
In this case, the cameras and the lidar perceive the environment from different vantage points. Further processing is required to address this occlusion problem \cite{paper:SchneiderHimmelsbach2010}. \subsection{Simulation Results} The proposed approach is also assessed quantitatively using simulation, as ground truth is not available for the experiments with the real vehicle. The simulation is set up as closely as possible to the vehicle platform used in the experiment. In every simulation episode the vehicle moves with a constant ground truth linear velocity \(v_{x}\) along the vehicle's \(x\) direction of travel and a constant angular velocity \(\omega_{yaw}\) in \(yaw\), randomly drawn from the uniform distributions \(\mathcal{U}\left(2, 10\right)\) m/s and \(\mathcal{U}\left(-60, 60\right)\) deg/s, respectively. As the vehicle moves, the lidar scans for one revolution in 0.1 s, generating 76 lidar packets at different rotational angles. Each packet contains one pair of elevation angle and range data drawn from the uniform distributions \(\mathcal{U}\left(-15, 15\right)\) deg and \(\mathcal{U}\left(1, 100\right)\) m, respectively. The linear and angular velocity measurements are corrupted by additive Gaussian noise \(\mathcal{N}\left(0, 0.1^{2}\right)\) m/s and \(\mathcal{N}\left(0, 5^{2}\right)\) deg/s, respectively. The timestamp of every sensor measurement contains jitter modelled as Gaussian noise \(\mathcal{N}\left(0, 0.0003^2\right)\) s. These parameters are chosen to produce a clear result for visualisation. As soon as the lidar finishes one revolution of scanning, the front camera takes an image, to which the lidar point cloud is then projected. Here, the timestamp of the image is used as the reference time. The same intrinsic and extrinsic calibration parameters as on the experimental vehicle platform are used in the simulation. The simulation results are analysed based on 200 Monte Carlo runs, which in total generate over 15,000 3D lidar points and 4,000 2D projected points. Figure \ref{fig:SimulationLidar} presents the ego-motion corrected lidar point cloud and the comparison with the ground truth and uncorrected point clouds from one of the simulation runs, and Figure \ref{fig:SimulationImage} illustrates the same point cloud projected to the image. The ground truth linear and angular velocities in this particular case are 2.81 m/s and -56.2 deg/s (negative means turning right), respectively. Due to figure size constraints, we only show the uncertainty of a corrected 3D lidar point and a 2D projected point in Figure \ref{fig:SimulationLidar:b} and Figure \ref{fig:SimulationImage:b}, respectively. The normalised estimation error squared (NEES) is adopted in the test as the consistency metric for the proposed lidar ego-motion correction approach. The NEES value for a given 3D lidar sample or 2D projected sample \(\mathcal{N}\left(\bar{\bm{z}}_{i}, \bm{\Sigma}_{i}\right)\) and its ground truth \(\bm{z}_{i}\) is calculated by \begin{equation} \epsilon\left(i\right) = \left(\bar{\bm{z}}_{i} - \bm{z}_{i}\right)^{T} \bm{\Sigma}_{i}^{-1}\left(\bar{\bm{z}}_{i} - \bm{z}_{i}\right). \end{equation} Then \(\epsilon\left(i\right)\) has a \(\chi^{2}\) (chi-square) distribution with \(dim\left(\bm{z}_{i}\right)\) degrees of freedom, under the hypothesis that the tested estimator is consistent and approximately linear and Gaussian \cite{paper:BahrWalter}.
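A minimal Python sketch of this consistency check, including the two-sided 95\% bounds discussed next, is given below; the list of samples is purely illustrative and does not correspond to the Monte Carlo data used in our simulations.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def nees(z_est, Sigma, z_true):
    # Normalised estimation error squared: e^T Sigma^{-1} e.
    e = z_est - z_true
    return float(e @ np.linalg.solve(Sigma, e))

def in_bound_rate(samples, level=0.95):
    # Fraction of (estimate, covariance, ground truth) triples whose NEES
    # lies inside the two-sided chi-square interval at the given level.
    tail = (1.0 - level) / 2.0
    hits = []
    for z_est, Sigma, z_true in samples:
        dof = z_true.size
        lo, hi = chi2.ppf(tail, dof), chi2.ppf(1.0 - tail, dof)
        hits.append(lo <= nees(z_est, Sigma, z_true) <= hi)
    return float(np.mean(hits))
\end{verbatim}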
The state estimation errors are considered consistent with the calculated covariances if \( \epsilon\left(i\right) \in \left[\chi^{2}_{dim\left(\bm{z}_{i}\right)}\left(0.025\right),\chi^{2}_{dim\left(\bm{z}_{i}\right)}\left(0.975\right)\right] \), where \(dim\left(\bm{z}_{i}\right) = 3\) for a 3D point and \(dim\left(\bm{z}_{i}\right) = 2\) for a 2D point. This interval corresponds to a two-sided \(95\%\) probability region. The estimator tends to be optimistic if \(\epsilon\) rises significantly above the upper bound for most of the motion corrected lidar points, while it is considered conservative if \(\epsilon\) stays below the lower bound most of the time \cite{paper:BaileyNieto}. The consistency check results are presented in Figure \ref{fig:NEES}. \begin{figure}[!t] \vspace{2mm} \centering \subfigure[]{ \label{fig:SimulationLidar:a} \includegraphics[width=3.3in]{"Figures/simulation_lidar"}} \subfigure[]{ \label{fig:SimulationLidar:b} \includegraphics[width=3.1in]{"Figures/simulation_ellipsoid"}} \caption{Comparison of the motion corrected lidar point cloud with the uncorrected and ground truth point clouds. In (a), as the lidar rotates in the clockwise direction, the correction is more evident for the older points, whose timestamps are further from the reference time. The uncertainty of a 3D lidar point after correction is represented as an ellipsoid in (b), which covers a 95\% confidence volume.} \label{fig:SimulationLidar} \end{figure} \begin{figure}[!t] \vspace{2mm} \centering \subfigure[]{ \label{fig:SimulationImage:a} \includegraphics[width=3.1in]{"Figures/simulation_image"}} \subfigure[]{ \label{fig:SimulationImage:b} \includegraphics[width=3.1in]{"Figures/simulation_ellipse"}} \caption{The projection of the motion corrected lidar point cloud to the image, compared with those of the uncorrected and ground truth point clouds. The lidar scans from left to right in the image in (a), where the correction is more evident for the points on the left side, whose timestamps are further from the reference time. The uncertainty of a 2D projected point after correction is represented as an ellipse in (b), which covers a 95\% confidence area.} \label{fig:SimulationImage} \end{figure} \begin{figure}[!t] \centering \subfigure[]{ \label{fig:NEES:a} \includegraphics[trim={0.9cm 0 0.9cm 0.3cm},clip,width=3.2in]{"Figures/simulation_lidar_nees"}} \subfigure[]{ \label{fig:NEES:b} \includegraphics[trim={0.9cm 0 0.9cm 0.3cm},clip,width=3.2in]{"Figures/simulation_image_nees"}} \caption{NEES consistency test for the ego-motion correction results. The in-bound rate for the motion corrected 3D lidar points is about 90.92\%, while the in-bound rate of the projection results in the image frame is found to be 94.29\%. Both results indicate consistent estimation of uncertainties.} \label{fig:NEES} \end{figure} \section{Conclusions And Future Work} \label{sec:conlusions} In this paper, a novel probabilistic approach is proposed for lidar ego-motion correction and lidar-to-camera projection with robust uncertainty estimation. The approach accounts for the main error sources, which include noise in the ego-motion estimation and time jitter in the sensor measurements due to practical and theoretical limitations.
The proposed approach considers a sequence of lidar packets, calculates the vehicle ego-motion estimation results for the given packet timestamps, applies the motion correction to the lidar packets against an arbitrarily chosen reference timestamp, and projects the motion corrected lidar points to a camera coordinate system. The chain of these three cascaded stages is formulated as an unscented transform pipeline, so the corrected and projected points are produced with the ego-motion uncertainty information preserved for subsequent processing. The experimental results demonstrate the accuracy of the ego-motion correction of the lidar points and of their projection to the image frame; these results were obtained on an electric vehicle platform driven in a university campus environment. The simulation results further validate the consistency of the uncertainty estimation in motion correction and lidar-to-camera projection. The capability of producing robust and consistent uncertainty estimates within the lidar ego-motion correction process makes the proposed approach one of the first of its kind with the potential to be integrated into perception applications that require uncertainty information. The proposed approach associates 3D lidar points with 2D image coordinates in a probabilistic manner. This is particularly useful in applications that involve probabilistic camera-lidar sensor fusion, where information can be transferred from the lidar domain to the image domain and vice versa with the relevant uncertainty taken into account. Future work includes the probabilistic fusion of ego-motion corrected lidar points with semantically labelled images, which combines the heuristic uncertainty associated with a labelled image and the uncertainty from the ego-motion correction of the lidar point clouds. In this case, the value of the semantic label retrieved from the corresponding pixel in an image frame can be included probabilistically in the point cloud as an additional information field for each 3D point. This helps pave the way to a higher-level understanding of the scene, which can be used to enable context-based algorithms for collision avoidance and navigation.
\section{Introduction} \label{intro} By the 1940's, physicists had identified two classes of ``elementary" particles with widely different group behavior, bosons and fermions. The prototypic boson is the photon, which generates electromagnetic forces; electrons, the essential constituents of matter, are fermions which satisfy Pauli's exclusion principle. This distinction was quickly extended to Yukawa's particle (boson), the generator of Strong Interactions, and to nucleons (fermions). A compelling characterization followed: matter is built out of fermions, while forces are generated by bosons. Einstein's premature dream of unifying {\em all} constituents of the physical world should have provided a clue to the unification of fermions and bosons; yet it took physicists a long time to relate them by symmetry. This fermion-boson symmetry is called ``{\em Supersymmetry}". Supersymmetry, a necessary ingredient of string theory, turns out to have further remarkable formal properties when applied to local quantum field theory, by restricting its ultraviolet behavior, and providing unexpected insights into its non-perturbative behavior. It may also play a pragmatic role as the glue that explains the weakness of the elementary forces within the Standard Model of Particle Physics at short distances. \section{Early Hint} In 1937, Eugene Wigner, with some help from his brother-in-law, publishes one of his many famous papers\cite{Wigner}, ``On Unitary Representations of the Inhomogeneous Lorentz Group". He was then at the University of Wisconsin at Madison, a refugee from Princeton, which had denied him tenure. It was not an easy paper to read, but its results were very simple: there were five types of representations labelled by the values of $P^2\equiv p^\mu p_\mu=m^2$, one of the Poincar\'e group's Casimir operators. All but two representations describe familiar particles found in Nature. Massive particles come with momentum $\bf p$, spin $j$, and $2j+1$ states of polarization, e.g. electrons and nucleons with spin $1/2$. There are also four types of massless representations with spin replaced by helicity (spin projection along the momentum). The first two describe massless particles with a single helicity (photons with helicity $\pm1$), or half-odd integer helicity, such as ``massless" neutrinos with helicity $+1/2$. The last two representations $O(\Xi)$ and $O'(\Xi)$ describe states which look like massless ``objects", particle-like in the sense that they have four-momentum, but with bizarre helicities: each representation contains an infinite tower of helicities, one with integer helicities, the other with half-odd integer helicities. These have no analogues in Nature\footnote{``Infinite spin" representations do not appear in the Poincar\'e decomposition of the conformal group.}. Physicists were slow in recognizing the importance of group representations, even though Pauli had provided the first solution of the quantum-mechanical Hydrogen atom using group theory. Wigner's paper does not seem to have moved any mountains, and infinite spin representations were simply ignored, except of course by Wigner. Yet, $O(\Xi)$ and $O'(\Xi)$ contained important information: they are ``supersymmetric partners'' of one another! \section{Hadrons \& Mesons} \label{sec:2} Symmetries were gaining credence among physicists, not as a simplifying device but as a guide to the organization of Nature. Wigner and St\"uckelberg's ``supermultiplet model" unified $SU(2)$ isospin and spin.
Once Gell-Mann and Ne'eman generalized isospin to $SU(3)$, it did not take long for Feza G\"ursey and Luigi Radicati\cite{Gursey}, as well as Bunji Sakita\cite{Sakita}, to propose its unification with spin into $SU(6)$. Pseudoscalar and vector mesons (bosons) were found in the $\bf {35}$ of SU(6), while the hadrons (fermions) surprisingly lived in the $\bf{56}$, not in the $\bf{20}$\cite{Sakita} expected from the statistics of the time. This non-relativistic unification proved very successful, both experimentally and conceptually, since it led to the hitherto unsuspected {\em color} quantum number. In 1966, Hironari Miyazawa\cite{Miyazawa} proposed further unification. His aim was to assemble the fermionic $\bf{56}$ and the bosonic $\bf{35}$ into one mathematical structure such as $SU(9)$, but at the cost of disregarding spin-statistics. In order to explain the bounty of strange particles discovered in the 1950's, Sakata had proposed to describe mesons as $T\overline T$ bound states of the spin one-half triplet $$T~=~(\,p,\,n,\,\Lambda\,).$$ Miyazawa adds a {\em pseudoscalar} triplet $$t~=~(\,K^+_{}, K^0_{},\,\eta\,),$$ to the Sakata spinor triplet. The hadron octet would then be described by another bound state, $T\bar t$, but he could not describe the spin three-half baryon decimet in the ${\bf 56}$. He introduces a toy model with two fundamental constituents, a spin one-half and a spin zero particle, $ {\bf p}=(\alpha_\uparrow,\alpha_\downarrow,\gamma)$. The nine currents $$ {\bf p}^\dagger\lambda^{}_i{\bf p}=\cases{F^{}_i,\quad i=0,1,2,3,8;\cr G^{}_i,\quad i=4,5,6,7},$$ satisfy a current algebra with both commutators and anticommutators, \[ [\,F^{}_i\,,\,F^{}_j\,]~=~if^{}_{ijk} F^{}_k, \] \[ [\,F^{}_i\,,\,G^{}_j\,]~=~if^{}_{ijk} G^{}_k,\] \[ \{\,G^{}_i\,,\,G^{}_j\,\}~=~d^{}_{ijk} F^{}_k,\] a ``generalized Jordan algebra" which he calls $V(3)$. This is the first example, albeit non-relativistic, of a superalgebra, today called $SU(2/1)$ with even part $SU(2)\times U(1)$. In 1967, he expanded his construction\cite{Miyazawa2} to general superalgebras, which he calls $V(n,m)$, with the idea of including the decimet. Alas, the phenomenology was not as compelling as that of $SU(6)$; two of the quarks inside a nucleon do not seem to live together in an antitriplet color state. In 1969, F.A. Berezin and G. I. Kac\cite{Berezin} show the mathematical consistency of graded Lie algebras, which contain both commutators and anticommutators; they give the simplest example, generated by the three Pauli matrices $\sigma_+,\sigma_-,\sigma_3$. Physical applications are not discussed, although Berezin's advocacy of Grassmann variables in path integrals was no doubt a motivation. \section{Dual Resonance Models} \label{sec:3} In the 1960's, physicists had all but given up on a Lagrangian description of the Strong Interactions, to be replaced by the S-matrix program: amplitudes were determined from general principles and symmetries, locality, causality, and Lorentz invariance. Further requirements on the amplitudes such as Regge behavior and its consequent bootstrap program were still not sufficient to determine the amplitudes. In 1967, Dolen, Horn and Schmid\cite{Dolen} discovered a peculiar relation in $\pi-N$ scattering. At tree-level, its fermionic $s$-channel ($\pi\,N\rightarrow \pi\,N $) is dominated by resonances ($\Delta^{++}$, ...), as shown by countless experiments. On the other hand, its bosonic $
t$-channel ($\pi\,\bar\pi \rightarrow N \,\overline N$) is dominated by the $\rho$-meson. Using the tools of S-matrix theory in the form of ``finite energy sum rules", they found that the Regge shadow of the bosonic $t$-channel's $\rho$-meson {\em averaged} the fermionic resonances in the $s$-channel! This was totally unexpected since these two contributions, described by different Feynman diagrams, should have been independent. Was this the additional piece of information needed to fully determine the amplitudes of the Strong Interactions? This early example of fermion-boson kinship led, through an unlikely tortuous path, to modern Supersymmetry. There followed an intense theoretical search for amplitudes in which the $s$- and $t$-channel contributions are automatically related to one another. Under the spherical cow principle, spin was set aside and the search for DHS-type amplitudes focused on the purely bosonic process $\omega\rightarrow \pi\pi\pi$\cite{Ademollo}. Soon thereafter, Veneziano\cite{Veneziano} proposed a four-point amplitude with the desired crossing symmetry, $$A(s,t) \sim \frac{\Gamma(-\alpha(s))\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))},$$ where $\alpha(x)=\alpha_0+\alpha'x$ is the linear Regge trajectory. It displays an infinite number of poles in {\em both} the $s$-channel ($s>0,~ t<0$) and the $t$-channel ($s<0,~t>0$). Veneziano's construction was quickly generalized to $n$-point ``dual" amplitudes. The infinite series of poles were recognized as the vibrations of a string\cite{string}. The amplitudes were linear combinations of tree chains which factorize into three-point vertices and propagators. A generalized coordinate emerged\cite{Fubini} from this analysis, $$\quad Q^{}_\mu(\tau)=x^{}_\mu+\tau\,p^{}_\mu+\sum_{n=1}^\infty\frac{1}{\sqrt{2n\alpha'}}\left(a^{}_{n\mu}e^{in\tau}_{}-a^{\dagger }_{n\mu}e^{-in\tau}_{}\right),$$ with an infinite set of oscillators, $$[a^{}_{n\mu},a^{\dagger}_{m\nu}]=\delta^{}_{nm}g^{}_{\mu\nu}.$$ The vertex for emitting a particle of momentum $k_\mu$ from the linear chain was simple, $$V(k,\tau)~=:e^{ik\cdot Q(\tau)}_{}:.$$ Out of its corresponding generalized momentum \begin{equation} P^{}_\mu(\tau)~=~\frac{dQ^{}_\mu}{d\tau}, \end{equation} one derived the operators, $$L^{}_n~=~\frac{1}{2\pi}\int^\pi_{-\pi}d\tau e^{in\tau}_{}:P^\mu_{}P_\mu^{}:~\equiv~<:P^\mu_{}P_\mu^{}:>^{}_n, $$ which satisfy the Virasoro algebra\footnote{The $c$-number term is added anachronistically.}, $$[\,L^{}_m\,,\,L^{}_n\,]~=~(m-n)L^{}_{n+m}+{{\frac{D}{12}m(m^2-1)\delta^{}_{m,-n}}}.$$ Its finite subalgebra, $L_0,L_\pm$, the Gliozzi algebra, generates conformal transformations in two dimensions. The propagator was given by $$\frac{1}{(\alpha'L^{}_0+1)}.$$ \section{Superstrings} The Klein-Gordon equation for a point particle, $$0~=~p^2_{}+m^2_{}~=~ <P^\mu_{}>^{}_0<P_\mu^{}>^{}_0+m^2, $$ could then be interpreted as a special case of $$0~=~ <P^\mu_{}P_\mu^{}>^{}_0+ m^2,$$ suggesting a correspondence\cite{Ramond1} between point particles and dual amplitudes, $$<A><B>~\rightarrow~<A\,B>.
$$ Fermions should satisfy the Dirac equation, $$0~=~\gamma^{}_\mu\,p^\mu_{}+m~=~<\Gamma_\mu^{}>^{}_0<P^\mu>^{}_0+m.$$ This requires a generalization of the Dirac matrices as dynamical operators, $$\gamma^{}_\mu~~\rightarrow~~ \Gamma^{}_\mu~=~\gamma^{}_\mu+i\gamma^{}_5\sum_{n=1}^\infty\left(b^{}_{n\mu} e^{in\tau}_{}+b^{\dagger}_{n\mu} e^{-in\tau}_{}\right), $$ where the oscillators are {\em Lorentz vectors}\footnote{Only later was it realized that this made sense only in ten space-time dimensions, where the little group is the spinor-vector schizophrenic $SO(8)$.}, which satisfy the anticommutation relations $$\{b^{}_{n\mu},b^{\dagger}_{m\nu}\}~=~\delta^{}_{nm}g^{}_{\mu\nu},$$ the sum running over the positive integers. This led me to propose the string Dirac equation in the winter of 1970\cite{Ramond}, which readily followed from that correspondence, $$ 0~=~{{<\Gamma^{}_\mu\,P^\mu_{}>^{}_0+m}}.$$ The basic Dirac algebra, $\{\gamma\cdot p,\gamma\cdot p\}=2p^2_{}$, is seen to be generalized to an algebra with both commutators and anticommutators, $$\{\,F_n^{}\,,\,F^{}_m\,\}~=~ 2L^{}_{n+m},\quad [\,L^{}_n\,,\,F^{}_m\,]~=~(2m-n)F^{}_{m+n},$$ where $F_n=<\Gamma_\mu P^\mu>_n$, and these new $L_n$'s also satisfy the Virasoro algebra, but with a different $c$-number. Andr\'e Neveu and John Schwarz then compute the amplitude for a dual fermion emitting three pseudoscalars with the Yukawa vertex, $$\Gamma_5^{}:e^{ik\cdot Q(\tau)}:,\quad \Gamma_5~=~\gamma_5(-1)^{\sum b_{n}^\dagger\cdot b_{n}},$$ and find that the resulting amplitude contains an infinite number of poles in its fermion-antifermion channel, and even identify the residue of the first pole\cite{Neveu2}! A new model with bosonic poles and vertices emerges, written in terms of an infinite tower of anticommuting vector oscillators, $$\{b^{}_{r\mu},b^\dagger_{s\nu}\}~=~\delta^{}_{rs}g^{}_{\mu\nu},\quad r,s={\textstyle\frac{1}{2},\frac{3}{2},\cdots}.$$ The triple boson vertex is given by $$V_{NS}^{}(k,\tau)~=~ k^{\mu}H^{}_\mu(\tau):e^{ik\cdot Q(\tau)}_{}:,$$ where $$ H^{}_\mu(\tau)~=~\sum_{ r=1/2,3/2,\dots} [b^{}_{r\mu}e^{-ir\tau}+b^\dagger_{r\mu}e^{ir\tau}].$$ These are the building blocks of the ``Dual Pion model"\cite{Neveu}, published in April 1971. The algebraic structure found in the generalized Dirac equation remains the same, producing a super-Virasoro algebra which decouples unwanted modes\cite{NST}, with $\Gamma_\mu$ replaced by $H_\mu$, through the operators $$G_r^{}~=~<H^{}_\mu\,P^\mu_{}>^{}_r.$$
\bar\mathbf{x}^*+(\mathbf{y}',f(\mathbf{y}'))_B \in \Gamma. $$ The constant $L'$ depends on the maximum curvature of $\Gamma$ and can be taken to be independent of $\bar\mathbf{x}^*$. Moreover, since $P_\Gamma$ is smooth in the tubular neighborhood $T_\varepsilon$ of $\Gamma$, the mapping $(\mathbf{y}',z')\mapsto P_\Gamma(\bar\mathbf{x}^*+(\mathbf{y}',z')_B)$ is smooth for $(\mathbf{y}',z')\in {\mathcal T}_{\varepsilon} := \{(\mathbf{y}',z') \in\mathbb{R}^3\,:\, \bar\mathbf{x}^*+(\mathbf{y}',z')_B\in T_\varepsilon\}$. Therefore, for $(\mathbf{y}',z')\in {\mathcal M}_{L}={\mathcal T}_{\varepsilon} \cap ({\mathcal I}_{L}\times\mathbb{R})$, with $L$ possibly smaller than $L'$, we can use the $B$-basis and $f$ to write the closest point mapping as \begin{equation}\label{eq:Pgamma-to-f-and-h} P_\Gamma(\bar\mathbf{x}^*+(\mathbf{y}',z')_B) = \bar\mathbf{x}^*+ \big( \mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}})\big)_B, \qquad \mathbf{y}_{\text{p}} := \mathbf{h}(\mathbf{y}',z'), \end{equation} for some smooth function $\mathbf{h}\in C^\infty({\mathcal M}_{L})$. The constant $L$ is chosen such that \[ \sup_{(\mathbf{y}',z')\in {\mathcal M}_{L}} |\mathbf{h}(\mathbf{y}',z')|\leq L'. \] Clearly $\mathbf{h}(\mathbf{0},z')=\mathbf{0}$, which guarantees that $L>0$. We now write $(\mathbf{y}+\mathbf{y}_0(z),z)$ as a point in the $B$-basis centered at the target point $\bar\mathbf{x}^*$, \[ (\mathbf{y}+\mathbf{y}_0(z),z) = \bar\mathbf{x}^* + (\mathbf{y}',z')_B. \] For $(\mathbf{y}',z')\in{\mathcal M}_{L}$ we can then write the numerators and denominators of the layer kernels \eqref{eq:layer-kernels-convenient-setting} using \eqref{eq:Pgamma-to-f-and-h} and the orthogonality of $Q$: \begin{align} \left|P_\Gamma(\mathbf{y}+\mathbf{y}_0(z),z)-\bar\mathbf{x}^*\right| =& \left| \big( \mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}) \big)_B \right| = \left| \big( \mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}) \big) \right|, \label{eq:SL-denominator-explicit}\\[0.3cm] \mathbf{\bar n}_x^T( P_\Gamma (\mathbf{y}+\mathbf{y}_0(z),z)-\bar\mathbf{x}^*) =& \left( \begin{array}{c} \mathbf{0} \\ 1 \end{array} \right)^T_B \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array} \right)_B = \left( \begin{array}{c} \mathbf{0} \\ 1 \end{array} \right)^T \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array} \right),\label{eq:DLC-numerator-explicit}\\[0.4cm] \mathbf{\bar n}_y^T( P_\Gamma (\mathbf{y}+\mathbf{y}_0(z),z)-\bar\mathbf{x}^*) =& \frac{1}{\sqrt{1+(\nabla f(\mathbf{y}_{\text{p}}))^2}} \left( \begin{array}{c} -\nabla f(\mathbf{y}_{\text{p}}) \\ 1 \end{array} \right)^T_B \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array} \right)_B \nonumber\\ =& \frac{1}{\sqrt{1+(\nabla f(\mathbf{y}_{\text{p}}))^2}} \left( \begin{array}{c} -\nabla f(\mathbf{y}_{\text{p}}) \\ 1 \end{array} \right)^T \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array} \right).\label{eq:DL-numerator-explicit} \end{align} We next have to find how $(\mathbf{y}',z')$ depends on $\mathbf{y}$ and $z$. From the definitions above we have $$ \left( \begin{array}{c} \mathbf{y}\\ 0 \end{array} \right) +\bar\mathbf{y}_0(z)= \left( \begin{array}{c} \mathbf{y}+\mathbf{y}_0(z) \\ z \end{array} \right) = \bar\mathbf{x}^* + Q \left( \begin{array}{c} \mathbf{y}' \\ z' \end{array} \right).
$$ Since $\bar\mathbf{y}_0(z)-\bar\mathbf{x}^*$ is parallel to the normal $\mathbf{\bar n}$ by definition, we can express this as $$ \bar\mathbf{y}_0(z)-\bar\mathbf{x}^* = (0,\eta(z))_B, \quad \Rightarrow \quad \left( \begin{array}{c} \mathbf{y}\\ 0 \end{array} \right) = Q \left( \begin{array}{c} \mathbf{y}' \\ z'-\eta(z) \end{array} \right). $$ where $\eta(z):=d_\Gamma(\bar\mathbf{y}_0(z))$ is the signed distance of $\bar\mathbf{y}_0(z)$ to $\Gamma$. Defining \begin{equation}\label{eq:def-QT-A-dvec} Q^{T}=\left( \begin{array}{cc} {A} & \mathbf{c} \\ \mathbf{d}^T & \alpha \end{array} \right),\qquad {A} \in\mathbb{R}^{2\times 2},\quad \mathbf{c},\mathbf{d}\in\mathbb{R}^{2\times 1}, \quad \alpha\in\mathbb{R}, \end{equation} we finally obtain \begin{equation}\label{eq:x=Ay and z} \left\{ \begin{array}{rl} \mathbf{y}' =& {A}\mathbf{y}\,, \\ z' =& \mathbf{d}^T\mathbf{y}+\eta(z). \end{array} \right. \end{equation} Therefore, we can write the kernels \eqref{eq:layer-kernels-convenient-setting} using (\ref{eq:SL-denominator-explicit},\ref{eq:DLC-numerator-explicit},\ref{eq:DL-numerator-explicit}) and \eqref{eq:x=Ay and z}: \begin{equation}\label{eq:layer-kernels-not-expanded} \begin{array}{lrl} & \mathbf{y}_{\text{p}}:=& \mathbf{h}(A\mathbf{y},\,\mathbf{d}^T\mathbf{y}+\eta(z)), \\[0.2cm] \text{(SL) }& s^{SL}(\mathbf{y};z) = & \dfrac{1}{4\pi}\dfrac{1}{|(\mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}))|},\\[0.5cm] \text{(DLC) }& s^{DLC}(\mathbf{y};z) = & \dfrac{1}{4\pi}\dfrac{f(\mathbf{y}_{\text{p}})}{|(\mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}))|^3}, \\[0.5cm] \text{(DL) }& s^{DL}(\mathbf{y};z) = & -\dfrac{1}{4\pi}\dfrac{(-\nabla f(\mathbf{y}_{\text{p}}),1)}{|(\mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}))|^3\sqrt{1+(\nabla f(\mathbf{y}_{\text{p}}))^2}} \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array} \right) . \end{array} \end{equation} These expressions are valid for $(\mathbf{y}',z')\in{\mathcal M}_L$. By \eqref{eq:x=Ay and z} and the fact that $|A\mathbf{y}|\leq |Q^T(\mathbf{y},0)|=|\mathbf{y}|$ (with equality if $\mathbf{y}\perp\mathbf{d}$) they are therefore valid when $(\mathbf{y}+\mathbf{y}_0(z),z)\in T_\varepsilon$ and $|\mathbf{y}|< L$. \subsubsection{Expansion of $f$ and $\mathbf{h}$}\label{subsub:fbh-expand} By the definition of the $B$-basis, the function $f$ introduced above in Section~\ref{subsub:proj-mapping-to-layer-kernels} satisfies \begin{equation}\label{eq:f-properties-def-M} f(\mathbf{0})=0,\ \qquad \nabla f(\mathbf{0})=\mathbf{0},\ \qquad \dfrac{\partial^2 f}{\partial \mathbf{x}^2}(\mathbf{0})= \left( \begin{matrix} \kappa_1 & 0 \\ 0 & \kappa_2 \end{matrix} \right)=:M. \end{equation} The Taylor expansions up to second order for $f$ and $\nabla f$ are then given by \begin{equation}\label{eq:high_order_f_df} \begin{array}{rl} f(\mathbf{y}) &= \frac{1}{2}\mathbf{y}^TM\mathbf{y}+B(\mathbf{y},\mathbf{y},\mathbf{y})+\mathcal{O}(|\mathbf{y}|^4)\,, \\[0.1cm] \nabla f(\mathbf{y}) &= M\mathbf{y}+C(\mathbf{y},\mathbf{y})+\mathcal{O}(|\mathbf{y}|^3), \end{array} \end{equation} where $B$ is the third order trilinear term, and $C$ is its bilinear gradient. 
With $\mathbf{y}=(x,y)$, they are given by \begin{equation}\label{eq:surface-B-C} \begin{array}{rl} B(\mathbf{y},\mathbf{y},\mathbf{y}) &:=\frac{1}{2}\left[ f_{xxx}\frac{x^3}{3}+f_{yyy}\frac{y^3}{3}+f_{xxy}x^2y+f_{xyy}xy^2 \right],\\[0.3cm] C(\mathbf{y},\mathbf{y})& := \left( \begin{matrix} \frac{\partial B}{\partial x} \\[0.2cm] \frac{\partial B}{\partial y} \end{matrix} \right)=\dfrac{1}{2} \left( \begin{matrix} f_{xxx}x^2+2f_{xxy}xy+f_{xyy}y^2 \\[0.2cm] f_{yyy}y^2+2f_{xyy}xy+f_{xxy}x^2 \end{matrix} \right); \end{array} \end{equation} where $f_{xxx},f_{xxy},f_{xyy},f_{yyy}$ are the third order derivatives of $f$ evaluated at $\mathbf{0}$. We next need to expand $\mathbf{h}$. Its expansion is given by the following lemma, the proof of which can be found in Appendix~\ref{sub:A:properties-of-h}. \begin{lemma}\label{lem:properties-of-h} Let \[ D(z')=(I-z'M)^{-1}, \] with $M$ given in \eqref{eq:f-properties-def-M}. For $(\mathbf{y}',z')\in {\mathcal M}_L$ the matrix $D(z')$ is well-defined. The function $\mathbf{h}\in C^{\infty}( {\mathcal M}_L)$ introduced in Section~\ref{subsub:proj-mapping-to-layer-kernels} then satisfies \begin{equation}\label{eq:hprop1} \mathbf{h}(\mathbf{0},z')=\mathbf{0}, \qquad \frac{\partial \mathbf{h}}{\partial z}(\mathbf{0},z')=\mathbf{0}, \qquad \frac{\partial \mathbf{h}}{\partial \mathbf{y}}(\mathbf{0},z')=D(z'), \end{equation} and the Taylor expansion of $\mathbf{h}$ can be written in the form \begin{equation} \mathbf{h}(\mathbf{y}',z') \,=\, D(z')\mathbf{y}' + z'\,D(z')C\Bigl(D(z')\mathbf{y}',D(z')\mathbf{y}'\Bigr) +\mathcal{O}(|\mathbf{y}'|^3),\label{eq:hig_order_proj_x} \end{equation} where $C$ is defined in \eqref{eq:surface-B-C}. \end{lemma} \subsubsection{General form of the kernels}\label{subsub:kernel-expandability} We now have the expressions \eqref{eq:layer-kernels-not-expanded} of the kernels and expansions around $\mathbf{y}=\y_{\text{p}}=\mathbf{0}$ of $\mathbf{h}$ and $f$. The next step is to prove \eqref{eq:ellform}, i.e. that the three kernels in \eqref{eq:layer-kernels-not-expanded} can all be written in the form $|\mathbf{y}|^{-1}\ell(|\mathbf{y}|,\mathbf{y}/|\mathbf{y}|)$. To do this we use the following lemma, a proof of which can be found in Appendix~\ref{sub:A:g=f-over-distance}. \begin{lemma}\label{lem:g=f-over-distance} Let $\bar\mathbf{g}:\mathbb{R}^m\to\mathbb{R}^n$, with $n>m$, be $C^\infty(B_{r_0}(\mathbf{0}))$ for some $r_0>0$, such that $\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$ and $D \bar\mathbf{g}(\mathbf{0})\in\mathbb{R}^{n\times m}$ has full rank. Let $\bar\mathbf{p}:\mathbb{R}^m\to\mathbb{R}^n$ be $C^\infty(B_{r_0}(\mathbf{0}))$, such that $\bar\mathbf{p}(\mathbf{0})^TD\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$. Then there exist functions $\ell_1,\,\ell_2$ and $0<r_1\leq r_0$ such that $\ell_i:\mathbb{R}\times\mathbb{S}^{m-1}\to\mathbb{R}$, $\ell_i\in C^\infty((-r_1,r_1)\times\mathbb{S}^{m-1})$, $i=1,2$ and \[ \dfrac{1}{|\bar\mathbf{g}(\mathbf{y})|}=\dfrac{1}{|\mathbf{y}|}\ell_1\left(|\mathbf{y}|,\dfrac{\mathbf{y}}{|\mathbf{y}|}\right) , \qquad \dfrac{\bar\mathbf{p}(\mathbf{y})^T\bar\mathbf{g}(\mathbf{y})}{|\bar\mathbf{g}(\mathbf{y})|^3}=\dfrac{1}{|\mathbf{y}|}\ell_2\left(|\mathbf{y}|,\dfrac{\mathbf{y}}{|\mathbf{y}|}\right). \] \end{lemma} For the single-layer kernel we take \[ \bar\mathbf{g}(\mathbf{y})=(\y_{\text{p}},f(\y_{\text{p}})) =\Big( \mathbf{h}(A\mathbf{y},\mathbf{d}^T\mathbf{y}+\eta(z)),f\big(\mathbf{h}(A\mathbf{y},\mathbf{d}^T\mathbf{y}+\eta(z))\big) \Big). \] For $(\mathbf{0},\eta(z))\in\mathcal{M}_L$, i.e.
when $|\eta(z)|<\varepsilon$, Lemma~\ref{lem:properties-of-h} gives that $\bar\mathbf{g}(\mathbf{0})=(\mathbf{0},0)$ and \begin{align*} &\dfrac{\partial \bar\mathbf{g}}{\partial \mathbf{y}}(\mathbf{0}) = \left( \begin{array}{cc} \frac{\partial \mathbf{h}}{\partial \mathbf{y}}(\mathbf{0},\eta(z)) \left(A + \frac{\partial \mathbf{h}}{\partial z}(\mathbf{0},\eta(z))\mathbf{d}^T\right)\\ \left(\frac{\partial \mathbf{h}}{\partial \mathbf{y}}(\mathbf{0},\eta(z))\left(A + \frac{\partial \mathbf{h}}{\partial z}(\mathbf{0},\eta(z))\mathbf{d}^T\right)\right)^T\nabla f(\mathbf{0}) \end{array} \right) = \left( \begin{array}{cc} D(\eta(z))A \\ \mathbf{0} \end{array} \right), \end{align*} which has full rank since $\det A = \bar\mathbf{e}_z^T( \bar{\bm{\tau}}_1\times\bar{\bm{\tau}}_2) =\bar\mathbf{e}_z^T\mathbf{\bar n}\neq 0$. Hence, \eqref{eq:layer-kernels-not-expanded} together with the first result of Lemma~\ref{lem:g=f-over-distance} now shows \eqref{eq:ellform} for $X=SL$. For the double-layer case, we let $\bar\mathbf{p}(\mathbf{y})=(-\nabla f(\y_{\text{p}}),1)/\sqrt{1+|\nabla f(\y_{\text{p}})|^2}$ so that $s^{DL}=-\bar\mathbf{p}^T\bar\mathbf{g}/4\pi|\bar\mathbf{g}|^3$ by \eqref{eq:layer-kernels-not-expanded}. Then Lemma~\ref{lem:properties-of-h} gives \[ \bar\mathbf{p}(\mathbf{0})^TD\bar\mathbf{g}(\mathbf{0}) = \bar\mathbf{p}(\mathbf{0})^T\dfrac{\partial \bar\mathbf{g}}{\partial \mathbf{y}}(\mathbf{0}) = \left( \begin{array}{c} -\nabla f(\mathbf{0}) \\ 1 \end{array} \right)^T \left( \begin{array}{cc} D(\eta(z))A \\ \mathbf{0} \end{array} \right) = \left( \begin{array}{c} \mathbf{0} \\ 0 \end{array} \right), \] and the second result of Lemma \ref{lem:g=f-over-distance} shows \eqref{eq:ellform} for $X=DL$. Finally, for the double layer conjugate kernel we take simply $\bar\mathbf{p}(\mathbf{y})=(0,0,1)$, which again makes $\bar\mathbf{p}(\mathbf{0})^TD\bar\mathbf{g}(\mathbf{0}) =\mathbf{0}$ and \eqref{eq:ellform} for $X=DLC$ follows as before. This completes the proof of \eqref{eq:ellform}. \subsubsection{Kernel expansions}\label{subsub:kernel-expansions-summary} The expansion of the kernels is based on the expansions of $f$ in \eqref{eq:high_order_f_df} and $\mathbf{h}$ in \eqref{eq:hig_order_proj_x}. We will skip most tedious intermediate calculations and focus on the end results. We recall that $$ \mathbf{y}'={A}\mathbf{y},\qquad z'=\mathbf{d}^T\mathbf{y}+\eta(z),\qquad \eta(z)=d_\Gamma(\bar\mathbf{y}_0(z)), \qquad \hat\mathbf{y} := \mathbf{y}/|\mathbf{y}|. $$ In the first step we expand the functions $f(\mathbf{h}(\mathbf{y}',z'))$, $D(z')$ and $\mathbf{h}(\mathbf{y}',z')$ as functions of $\mathbf{y}$, instead of $\y_{\text{p}}$ and $\mathbf{y}'$ as before. 
We get \begin{align*} D(z') &= D_0\left[ I+\mathbf{d}^T\mathbf{y} D_0M\right]+\mathcal{O}(|\mathbf{y}|^2),\\ \mathbf{h}(\mathbf{y}',z') &= \chi_0(\hat \mathbf{y})|\mathbf{y}|+\chi_1(\hat \mathbf{y})|\mathbf{y}|^2+\mathcal{O}(|\mathbf{y}|^3),\\ f(\mathbf{y}_{\text{p}}) &= \xi_0(\hat\mathbf{y})|\mathbf{y}|^2+\xi_1(\hat\mathbf{y})|\mathbf{y}|^3+\mathcal{O}(|\mathbf{y}|^4), \end{align*} where $D_0:=(I-\eta M)^{-1}$, \begin{equation*} \begin{array}{rlrl} \chi_0 ( \mathbf{y}):=& D_0A \mathbf{y}, & \chi_1 ( \mathbf{y}):=& (\mathbf{d}^T\mathbf{y})D_0D_0MA\mathbf{y}\\ & & &+\eta D_0 C(D_0A\mathbf{y},D_0A\mathbf{y}),\\ \xi_0(\mathbf{y}):=& \dfrac{1}{2}\mathbf{y}^T(A^TD_0^TMD_0A)\mathbf{y}, & \xi_1(\mathbf{y}):=& \dfrac{1}{2}\eta(D_0C(D_0A\mathbf{y},D_0A\mathbf{y}))^TMD_0A\mathbf{y} \\ & & &+ \dfrac{1}{2}\eta(D_0A\mathbf{y})^TMD_0C(D_0A\mathbf{y},D_0A\mathbf{y})\\ & & &+ B(D_0A\mathbf{y},D_0A\mathbf{y},D_0A\mathbf{y}) \\ & & &+ (\mathbf{d}^T\mathbf{y})\mathbf{y}^TA^T(M^TD_0^TD_0^TMD_0)A\mathbf{y}. \end{array} \end{equation*} In this step we used the fact that $\chi_j$ and $\xi_j$ are homogeneous of degree $j+1$ and $j+2$ respectively, so that $\chi_j(\mathbf{y})=\chi_j(\hat\mathbf{y})|\mathbf{y}|^{j+1}$ and $\xi_j(\mathbf{y})=\xi_j(\hat\mathbf{y})|\mathbf{y}|^{j+2}$. From these expansions for $f$ and $\mathbf{h}$ we obtain furthermore that \begin{align}\label{eq:Pexpansions} \big|(\mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}}))\big| &= \psi_0(\hat \mathbf{y})|\mathbf{y}|+\psi_1(\hat\mathbf{y})|\mathbf{y}|^2+\mathcal{O}(|\mathbf{y}|^3),\\ \dfrac{(\nabla f(\mathbf{y}_{\text{p}}),-1)}{\sqrt{1+(\nabla f(\mathbf{y}_{\text{p}}))^2}} \left( \begin{array}{c} \mathbf{y}_{\text{p}} \\ f(\mathbf{y}_{\text{p}}) \end{array}\right) &= \xi_0(\hat \mathbf{y})|\mathbf{y}|^2+\tilde \xi_1(\hat\mathbf{y})|\mathbf{y}|^3+\mathcal{O}(|\mathbf{y}|^4), \nonumber \end{align} where \begin{equation*} \begin{array}{rl} \psi_0(\mathbf{y}):=& |\chi_0( \mathbf{y})|,\qquad \psi_1(\mathbf{y}):=\ \ \dfrac{\chi_0(\mathbf{y})^T \chi_1(\mathbf{y})}{|\chi_0(\mathbf{y})|},\\ \tilde \xi_1( \mathbf{y}) :=& \frac{1}{2}\eta \left[ (D_0C(D_0A \mathbf{y},D_0A\mathbf{y}))^TMD_0A \mathbf{y}-(D_0A \mathbf{y})^TMD_0C(D_0A \mathbf{y},D_0A\mathbf{y}) \right]\\ &- B(D_0A \mathbf{y},D_0A\mathbf{y},D_0A\mathbf{y}) + \mathbf{y}^TA^TD_0^T(I+\eta MD_0)C(D_0A \mathbf{y},D_0A\mathbf{y}) \\ &+ (\mathbf{d}^T \mathbf{y}) \mathbf{y}^TA^TD_0MD_0D_0MA \mathbf{y}. \end{array} \end{equation*} Here $\tilde\xi_1$ is homogeneous of degree three. Using \eqref{eq:Pexpansions} one can finally deduce the expansions of the kernels in \eqref{eq:layer-kernels-not-expanded}. This concludes the proof of Theorem~\ref{thm:kernelexpansions}. We note that the matrix $A$ and vector $\mathbf{d}$ contain elements of the principal directions and normal at the target point; see \eqref{eq:def-Q} and \eqref{eq:def-QT-A-dvec}. The matrices $D_0$ and $M$ are built from the principal curvatures of $\Gamma$, and the functions $B$ and $C$ contain the third derivatives of $f$; see \eqref{eq:high_order_f_df} and \eqref{eq:surface-B-C}. In Appendix~\ref{sec:appendixB} we show how to numerically compute the information about the surface in the target point ($\kappa_1,\kappa_2$, $\bar{\bm{\tau}}_1$,$\bar{\bm{\tau}}_2$, and the third derivatives of $f$, $f_{xxx}$, $f_{xxy}$, $f_{xyy}$, $f_{yyy}$) using the projection mapping $P_\Gamma$ and its derivatives. 
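As a simple numerical sanity check of the geometric quantities that enter these expansions, the principal curvatures at a target point can be recovered from the nonzero eigenvalues of the Hessian of the signed distance function; the Python sketch below does this for a sphere using finite differences. This is only an illustration on a hypothetical surface and step size, not the procedure of Appendix~\ref{sec:appendixB}, which works directly with $P_\Gamma$ and its derivatives.
\begin{verbatim}
import numpy as np

def hessian_fd(func, x, h=1e-4):
    # Central finite-difference Hessian of a scalar function at x.
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (func(x + ei + ej) - func(x + ei - ej)
                       - func(x - ei + ej) + func(x - ei - ej)) / (4 * h**2)
    return H

# Example: sphere of radius R with signed distance d(x) = |x| - R.
R = 0.7
d = lambda x: np.linalg.norm(x) - R
x_star = np.array([0.0, 0.0, R])            # a point on the surface
kappas = np.sort(np.linalg.eigvalsh(hessian_fd(d, x_star)))
print(kappas[1:], "both approximately", 1.0 / R)
\end{verbatim}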
\subsection{Corrections to the punctured trapezoidal rules in two dimensions} \label{sec:num_2D} The quadrature rules discussed in Section~\ref{sec:corrected-trapezoidal-rules} have been developed to correct any function of the kind \[ f(\mathbf{x}) = s_k(\mathbf{x})v(\mathbf{x})\ ,\ \ s_k(\mathbf{x})=|\mathbf{x}|^{k-1}\phi_k(\mathbf{x}/|\mathbf{x}|)\ ,\ \ k\in\mathbb{N}\setminus \{0\}\,, \] where $v$ is a smooth function, and then composite rules have been constructed to correct functions which can be expanded as \begin{align*} f(\mathbf{x})&=s(\mathbf{x}-\mathbf{x}_0)v(\mathbf{x}) \\ \text{where }\,s(\mathbf{x})&= s_0(\mathbf{x})+s_1(\mathbf{x})+s_2(\mathbf{x})+\dots. \end{align*} We tested the rules $Q_h^p$ for $p=1,2,3,4$ ($p=1$ \eqref{eq:single_correction_quadrature}, $p=2$ \eqref{eq:Q2-correction}, $p$ general \eqref{eq:Qp-correction}) for functions $s_k$, $k=0,1,2$. Specifically, we used the test function where $s_k$ and $v$ are: \begin{align}\label{eq:test-sk} s_k(\mathbf{x}) =& |\mathbf{x}|^{k-1} \phi(\mathbf{x}/|\mathbf{x}|),\\[0.1cm] \phi(\mathbf{x}/|\mathbf{x}|) =& \phi(\cos(\psi(\mathbf{x})),\sin(\psi(\mathbf{x}))) \nonumber\\[0.1cm] =& 4.2398+0.816735\cos(\psi(\mathbf{x})-0.2)-1.24397865\sin(2\psi(\mathbf{x})+0.1)\,, \nonumber\\ v(\mathbf{x}) =& \left(1.1+\Re\left(H_{|\mathbf{x}|^2 +1}^{(1)}(3)\right)\right)\exp\left(-|\mathbf{x}-(0.027,\,0.0197)|^8\right)\nonumber\\ & \cdot(0.5+\sin(\mathbf{x}_1(\mathbf{x}_2-1))).\nonumber \end{align} The function $H^{(1)}_\alpha$ is the Hankel function of the first kind of degree $\alpha$, and $\Re$ indicates the real part of a complex number. In Figure~\ref{fig:conv2D-sk} we plot the difference {between approximation values for grid sizes $h$ and $h/1.5$}, obtained for the four different quadratures $Q_h^p$, $p=1,2,3,4$, and the punctured trapezoidal rule $T_h^0$. The order of accuracy shown for integrating $s_k\,v$, $k=0,1,2$, is $k+1$ for the punctured trapezoidal rule and $k+p+1$ for the quadrature $Q_h^p$, as expected. The error constant is determined by the value of $(\alpha,\beta)$, and in our tests we fixed $(\alpha,\beta)=(0.81,0.46)$. The stencils used for the different quadratures are represented in Figure~\ref{fig:all_correction_grid_plot}. \begin{figure} \begin{center} \includegraphics{plots/conv2D_sk8_15x7.pdf} \end{center} \caption{Correction of $s_k$ in two dimensions} \footnotesize{Error from integrating $s_k$ \eqref{eq:test-sk} with $p$-order correction $Q_h^p$. For $k=0,1,2$ (left, center, and right figures respectively) we present the difference between values obtained from grid sizes $h$ and $h/1.5$, with the different methods. 
As expected, the order of accuracy is $k+p+1$ where $p$ is the order of the correction.} \label{fig:conv2D-sk} \end{figure} In order to test the general quadrature rule \eqref{eq:Qp-general-explicit} we used the function \begin{align} s(\mathbf{x}) =& \left(|\mathbf{x}|^{-1}\phi_0(\mathbf{x})+\phi_1(\mathbf{x})+|\mathbf{x}|\phi_2(\mathbf{x})+|\mathbf{x}|^2\phi_3(\mathbf{x})+|\mathbf{x}|^3r(\mathbf{x})\right)\label{eq:test-s-gen}\\[0.1cm] \text{where } \mathbf{x} =& |\mathbf{x}|(\cos(\psi(\mathbf{x})),\sin(\psi(\mathbf{x}))),\nonumber\\[0.1cm] \phi_0(\mathbf{x}) =& 4.2398+0.816735\cos(\psi(\mathbf{x})-0.2) -1.24397865\sin(2\psi(\mathbf{x})+0.1), \nonumber\\[0.1cm] \phi_1(\mathbf{x}) =& 0.78167\sin(\psi(\mathbf{x})+0.5)- 2.24397865\cos(3\psi(\mathbf{x})-0.3) \nonumber\\[0.1cm] \phi_2(\mathbf{x}) =& 1.127+1.2134875\cos(\psi(\mathbf{x})-0.65) -1.24397865\sin(2\psi(\mathbf{x})+0.1), \nonumber\\[0.1cm] \phi_3(\mathbf{x}) =& 0.77-1.29\cos(4\psi(\mathbf{x})-0.35)+0.987\sin(2\psi(\mathbf{x})+0.14), \nonumber\\[0.1cm] r(\mathbf{x}) =& 1.2927-0.929\cos(\psi(\mathbf{x})+0.34)+0.712\sin(3\psi(\mathbf{x})+0.14)\nonumber\\ &+\log(|\mathbf{x}|+1.3),\nonumber\\[0.1cm] v(\mathbf{x}) =& \left(1.1+\Re\left(H_{|\mathbf{x}|^2 +1}^{(1)}(3)\right)\right)\exp\left(-|\mathbf{x}-(0.027,\,0.0197)|^8\right)\nonumber\\ &\cdot(0.5+\sin(\mathbf{x}_1(\mathbf{x}_2-1))).\nonumber \end{align} In Figure~\ref{fig:conv2D-s-gen} we plot the difference {between values obtained with grid sizes $h$ and $h/1.5$} for the four different quadratures $\mathcal{U}_h^p$ \eqref{eq:Qp-general-explicit} and the punctured trapezoidal rule $T_h^0$. The order of accuracy shown is $1$ for the punctured trapezoidal rule and $p$ for the quadrature $\mathcal{U}_h^p$, which is what was expected. The error constant is determined by the value of $(\alpha,\beta)$. In all our tests we fixed $(\alpha,\beta)=(0.81,0.46)$. The stencils used for the different quadratures $Q_h^{k}$ needed to compose $\mathcal{U}_h^p$ are the same as the previous test, represented in Figure~\ref{fig:all_correction_grid_plot}. \begin{figure} \begin{center} \includegraphics{plots/conv2D_s_10x7_f.pdf} \end{center} \caption{Corrected trapezoidal rules for a general function $s$ in two dimensions} \footnotesize{Corrected trapezoidal rules $\mathcal{U}_h^p$ for $p=2,3,4,5$ using additive splitting \eqref{eq:Qp-general-explicit} for the function $f=s\,v$ with singular integrand $s$ \eqref{eq:test-s-gen}. The first $p-1$ terms of the expansion \eqref{eq:s-expansion} ($s_k$, $k=0,1,\dots,p-2$) are needed to use $\mathcal{U}_h^p$. In the plot we see that the punctured trapezoidal rule $T_h^0$ has first order accuracy, and the corrections $\mathcal{U}_h^p$ have order of accuracy $p$ as predicted.} \label{fig:conv2D-s-gen} \end{figure} \subsection{Evaluating the layer potentials in the IBIM formulation} \label{sub:num_3D} We demonstrate the convergence and accuracy of the proposed quadrature rules by evaluating the single-layer, double-layer, and double-layer conjugate potentials with some smooth density $\rho$ on the surface $\Gamma\subset\mathbb{R}^3$: \begin{equation*} \int_\Gamma G_0(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y})\text{d}\sigma_{\bar\mathbf{y}} ,\ \ \ \int_\Gamma \dfrac{\partial G_0}{\partial\mathbf{n}_y}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y})\text{d}\sigma_{\bar\mathbf{y}} ,\ \ \ \int_\Gamma \dfrac{\partial G_0}{\partial\mathbf{n}_x}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y})\text{d}\sigma_{\bar\mathbf{y}},\ \ \bar\mathbf{x}^*\in\Gamma. 
\end{equation*} The integrals are first extended to the tubular neighborhood of $T_\varepsilon$, as in \eqref{eq:general_S} using the compactly supported $C^{\infty}$ averaging function \begin{equation}\label{eq:averagingfunction} \delta(\eta)= \begin{cases} a\,\exp\left( \dfrac{2}{\eta^2-1} \right), & \text{ if } |\eta|<1, \\ 0, & \text{ otherwise}; \end{cases} \end{equation} here $a\approx {{7.51393}}$ normalizes the integral $\int_{\mathbb{R}} \delta(\eta) \text{d}\eta$ to 1. The surface chosen for the tests is a torus, centered in a randomly chosen point in 3D and rotated with randomly chosen angles along the $x$-, $y$- and $z$-axes. This is to avoid any symmetry of the uniform Cartesian grid which can influence the convergence behavior. The torus is described by the following parametrization \begin{equation}\label{eq:torus} \mathcal{T}(\theta,\phi)=Q\left( \begin{matrix} (R_2\cos\theta+R_1)\cos\phi \\ (R_2\cos\theta+R_1)\sin\phi \\ R_2\sin\theta \end{matrix} \right)+\mathbf{C} \end{equation} where $R_1=0.7$, $R_2=0.2$, $\mathbf{C}$ imposes a translation, and $Q=Q_z(c)Q_y(b)Q_x(a)$ is the composition of three rotation matrices; $Q_x(a)$, $Q_y(a)$, and $Q_z(a)$ are the matrices corresponding to a rotation by an angle $a$ around the $x$, $y$, and $z$ axes respectively. The parameters used for the translation and the rotations were: \begin{align*} &\mathbf{C}=\big(0.5475547095598521,\, 0.6864792402110276,\, 0.3502726366462485\big)\cdot 10^{-1},\\ &a={0.199487}\cdot {10}^{{1}}, \quad b={0.2540979476510170}\cdot {10}^{{1}}, \quad c={0.4219760487439292}\cdot {10}^{{1}}. \end{align*} The known density function $\rho$ used in the test is defined using the parametrization of the torus: \[ \rho(\bar\mathbf{y}) = \rho(\theta,\phi) = 1.38 + 2.196\sin\theta - 0.29837\cos\phi\,\sin\theta + 1.128\sin\phi\,\cos\theta\,. \] We present the errors \begin{equation}\label{eq:error-Eh-SL-DL-DLC} \begin{array}{rl} E_{SL}^3(h) &=\, \left| \mathcal{V}^3_h\left[ G_0(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] - \mathcal{V}^3_{h_{\min}}\left[ G_0(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] \right|, \\[0.3cm] E_{DL}^3(h) &=\, \left| \mathcal{V}^3_h\left[ \dfrac{\partial G_0}{\partial \mathbf{n}_y}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] - \mathcal{V}^3_{h_{\min}}\left[ \dfrac{\partial G_0}{\partial \mathbf{n}_y}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] \right|, \\[0.3cm] E_{DLC}^3(h) &=\, \left| \mathcal{V}^3_h\left[ \dfrac{\partial G_0}{\partial \mathbf{n}_x}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] - \mathcal{V}^3_{h_{\min}}\left[ \dfrac{\partial G_0}{\partial \mathbf{n}_x}(\bar\mathbf{x}^*,\bar\mathbf{y})\rho(\bar\mathbf{y}) \right] \right|, \end{array} \end{equation} computed for a sequence of grid size values $\{h_i\}_i$, where we used as reference value half of the smallest grid size $h_{\min}=\frac{1}{2}\min_i h_i$. We tested our third order rule $\mathcal{V}_h^3$ \eqref{eq:Q-3D-general}. Moreover we compared with the previously developed second order rule, denoted by $\mathcal{V}_h^2$, from~\cite{izzo2021corrected}. In the presented simulations, we take the component $n_z$ of $\mathbf{\bar n}$ to be dominant if $|\tan\theta|<\sqrt{2}$, where $\mathbf{\bar n}/|\mathbf{\bar n}|=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. If instead $|\tan\theta|\ge\sqrt{2}$ and $|\tan\phi|\ge 1$, we take $n_y$ to be dominant, and if $|\tan\theta|\ge\sqrt{2}$ and $|\tan\phi|<1$ we take $n_x$ to be dominant. 
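For concreteness, the following minimal Python sketch (our own illustration, not the implementation used for the experiments; the function names are ours) evaluates the averaging function $\delta$ of \eqref{eq:averagingfunction}, checks its normalization constant numerically, and selects the dominant component of a unit normal $\mathbf{\bar n}$ according to the criterion above.
\begin{verbatim}
import math
from scipy.integrate import quad

def delta(eta, a=7.51393):
    # Compactly supported C^infinity averaging function; a normalizes its integral to 1.
    return a * math.exp(2.0 / (eta * eta - 1.0)) if abs(eta) < 1.0 else 0.0

# The normalization constant is 1 over the integral of exp(2/(eta^2 - 1)) on (-1, 1).
val, _ = quad(lambda t: math.exp(2.0 / (t * t - 1.0)) if abs(t) < 1.0 else 0.0, -1.0, 1.0)
print("a =", 1.0 / val)                          # approximately 7.51393
print("integral of delta =", quad(delta, -1.0, 1.0)[0])   # approximately 1

def dominant_component(n):
    # n is a unit normal; returns which of n_z, n_y, n_x is treated as dominant.
    theta = math.acos(n[2])           # n = (sin t cos p, sin t sin p, cos t)
    phi = math.atan2(n[1], n[0])
    if abs(math.tan(theta)) < math.sqrt(2.0):
        return "z"
    return "y" if abs(math.tan(phi)) >= 1.0 else "x"
\end{verbatim}
In double precision the computed normalization agrees with the value $a\approx 7.51393$ quoted above.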
We used $\theta$ and $\phi$ to determine the dominant direction because of their extensive use in the rest of the code. At each target point $\bar\mathbf{x}^*$, the total error is the sum of the errors of the two-dimensional rule applied on each plane. Recall that under the IBIM formulation, the kernel is singular along the surface's normal line passing through $\bar\mathbf{x}^*$, and the singularity of the kernel on each plane lies at the intersection of the surface normal line and that plane. Since the normal lines of the surface generally do not align with the grid, the singular point generally does not lie on a grid node. Recall further that the parameters $\alpha,\beta$ are used to describe the position of the singular point relative to the closest grid node on the plane, and the error constants depend on them. Those parameters may change abruptly between planes, depending on which grid node in the plane is closest to the singular point. Moreover, the grid node closest to a given surface normal line is expected to jump as the grid is refined (as $h$ decreases). Thus, as noted in \cite{izzo2021corrected}, the errors \eqref{eq:error-Eh-SL-DL-DLC} are generally not smooth functions of $h$, and the individual error curves do not exhibit a clear slope. To show the overall convergence behavior we average the errors defined in \eqref{eq:error-Eh-SL-DL-DLC} over 20 randomly chosen target points. The results can be seen in Figure~\ref{fig:convergence3D}. In the left column we present the averaged errors. In the right column we present a scatter plot of the errors at all the target points. We additionally highlight the errors corresponding to two specific target points to showcase an ``average'' error behavior (green line) and a ``bad'' error behavior (magenta line). By construction of the quadrature rule \eqref{eq:Q-3D-general} we expect it to be third order accurate in $h$. However, the plots show an order of accuracy $\geq 3.5$. We conjecture that an additional cancellation of errors occurs when adding the results from each plane (see \cite{izzo2021corrected} for a related discussion regarding $\mathcal{V}^2_h$). A rigorous analysis of this behavior is beyond the scope of this article. \begin{figure} \begin{center} \includegraphics{plots/torus_7x7b.pdf}\includegraphics{plots/torus_grid_7x7b.pdf} \end{center} \caption{Torus test surface} \footnotesize{Left: the torus used in the tests. Right: the torus and the projections of the Cartesian grid nodes inside the tubular neighborhood $T_\varepsilon$. The projected nodes serve as the quadrature nodes.} \label{fig:tiltedtorusplot} \end{figure} Of course, to test our algorithms we retain no information about the parametrization: the test torus is represented only by $d_\Gamma$ and $P_\Gamma$ on the given grid. Figure~\ref{fig:tiltedtorusplot} shows the torus that we use and the points used in the quadrature rule for a given grid configuration. We use fourth-order centered differencing of $P_\Gamma$ on the grid to approximate the Jacobian $J_\Gamma$ (see \cite{tsaikublik16}). We also use fourth-order centered differencing of $P_\Gamma$ to find the third derivatives of $f$ needed for the functions $B$ and $C$ in \eqref{eq:surface-B-C}, as they are related via a linear system (see Appendix~\ref{sec:appendixB}).
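For reference, the one-dimensional fourth-order centered stencils we have in mind are standard; the short Python sketch below (our own illustration) shows them applied to samples on a uniform grid, as one would do component-wise with grid samples of $P_\Gamma$.
\begin{verbatim}
import numpy as np

def d1_central4(f, h):
    # Fourth-order centered first derivative of the 1D samples f on a grid of spacing h.
    return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

def d2_central4(f, h):
    # Fourth-order centered second derivative of the 1D samples f on a grid of spacing h.
    return (-f[4:] + 16.0 * f[3:-1] - 30.0 * f[2:-2] + 16.0 * f[1:-3] - f[:-4]) / (12.0 * h * h)

# Sanity check on a smooth function: output entry 3 corresponds to the sample at x = 0.
h = 1.0e-2
x = np.arange(-5, 6) * h
f = np.sin(x)
print(d1_central4(f, h)[3], "vs", np.cos(0.0))    # first derivative at 0
print(d2_central4(f, h)[3], "vs", -np.sin(0.0))   # second derivative at 0
\end{verbatim}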
\begin{figure} \begin{center} \includegraphics[scale=0.9]{plots/conv_SL_15x7_4.pdf}\\ \includegraphics[scale=0.9]{plots/conv_DL_15x7_4.pdf}\\ \includegraphics[scale=0.9]{plots/conv_DLC_15x7_4.pdf} \end{center} \caption{Errors in the evaluation of the three Laplace layer potentials} \footnotesize{The errors \eqref{eq:error-Eh-SL-DL-DLC} are computed for 20 randomly chosen target points on a tilted torus. The plots in the left column show the mean of the 20 errors. The plots in the right column show the scatter plot of the 20 target points. In the right plots we additionally highlight the behavior of two specific target points, to showcase a ``bad'' error (magenta line) and an ``average'' error (green line).} \label{fig:convergence3D} \end{figure} \subsection{Proof of Theorem~\ref{thm:punctured-tr-s-ell-around-singularity}} \label{sec:A:thm-s-singularity} Consider a cut-off function $\psi\in C^\infty_c({\mathbb R}^n)$ such that \begin{equation}\label{eq:psi-function} \psi(\mathbf{x})= \begin{cases} 1\,, & |\mathbf{x}|\leq \frac{1}{2}\,,\\ 0\,, & |\mathbf{x}|\geq 1\,. \end{cases} \end{equation} Then we can write $f$ as \begin{align*} f(\mathbf{x}) =& s(\mathbf{x})v(\mathbf{x}) = s(\mathbf{x})v(\mathbf{x})\psi(\mathbf{x}/r_0)+s(\mathbf{x})v(\mathbf{x})(1-\psi(\mathbf{x}/r_0))\\ =& |\mathbf{x}|^j\ell(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\psi(\mathbf{x}/r_0)v(\mathbf{x})+s(\mathbf{x})v(\mathbf{x})(1-\psi(\mathbf{x}/r_0))\\ =& |\mathbf{x}|^j\ell_1(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)v(\mathbf{x})+s(\mathbf{x})v(\mathbf{x})(1-\psi(\mathbf{x}/r_0)). \end{align*} The first term is a function compactly supported in $B_{r_0}$, so by extending it to zero in $\mathbb{R}^n$ it satisfies the hypotheses of Theorem~\ref{thm:A:puncturederr}. Hence the result is valid for the first term. The second term has regularity $C^\infty_c(\mathbb{R}^n)$ and is zero in $B_{r_0/2}$, so the error for the punctured trapezoidal rule will decrease faster than any polynomial of $h$. By combining the results for the two terms, we prove the result. \subsubsection{Results on which Theorem~\ref{thm:punctured-tr-s-ell-around-singularity} depends.} \label{sec:A:thm-s-singularity-supplementary} \begin{theorem}\label{thm:A:puncturederr} Suppose $v\in C^\infty_c({\mathbb R}^n)$ and $\ell\in C^{\infty}(\mathbb{R}\times\mathbb{S}^{n-1})$. Then, for integers $j\geq 1-n$, \begin{equation}\label{eq:A:theorem-statement-TR0-order} \left|\int_{\mathbb{R}^n} s(\mathbf{x})v(\mathbf{x}) \text{\emph{d}}\mathbf{x} - T^0_{h,\,\mathcal{N}_h}[s\, v]\right| \leq C h^{j+n}\, ,\ \ s(\mathbf{x})=|\mathbf{x}|^{j}\, \ell\left(|\mathbf{x}|,\frac{\mathbf{x}}{|\mathbf{x}|}\right)\,, \end{equation} where the constant $C$ is independent of $h$, but depends on $j$, $\ell$ and $v$. \end{theorem} \begin{proof} Define $f(\mathbf{x}):=|\mathbf{x}|^j\ell(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)v(\mathbf{x})$, and consider the cut-off function $\psi\in C^\infty_c({\mathbb R}^n)$ \eqref{eq:psi-function}. Then we can write the punctured trapezoidal rule as \[ T^0_{h,\,\mathcal{N}_h}[f]=T_h[f(\,\cdot\,)(1-\psi(\cdot/h))], \] where we cut out the singularity point by multiplying by $1-\psi$ around $\mathbf{0}$; the scaling by $h$ ensures that, for fixed $h$, only the node in the singularity point is cut out. 
This allows us to split the error of the punctured trapezoidal rule as \begin{align*} \int_{\mathbb{R}^n}f(\mathbf{x})\text{d}\mathbf{x}-T_{h,\,\mathcal{N}_h}^0[f] =&\, \underbrace{\int_{\mathbb{R}^n}f(\mathbf{x})\psi(\mathbf{x}/h)\text{d}\mathbf{x}}_{(\textbf{I})} \\ &+ \underbrace{\int_{\mathbb{R}^n}f(\mathbf{x})(1-\psi(\mathbf{x}/h))\text{d}\mathbf{x}-T_h[f(\cdot)(1-\psi(\cdot/h))]}_{(\textbf{II})}\,. \end{align*} We will consider the two terms (\textbf{I}), (\textbf{II}) separately, and prove that both can be bounded by $Ch^{j+n}$.\\ \noindent(\textbf{I}): Given the compact support of $\psi$, the integral is reduced to an integral over $\{|\mathbf{x}|\leq h\}$: \begin{align*} \int_{\mathbb{R}^n}f(\mathbf{x})\psi(\mathbf{x}/h)\text{d}\mathbf{x} =&\, \int_{|\mathbf{x}|\leq h}v(\mathbf{x})|\mathbf{x}|^j\ell(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\psi(\mathbf{x}/h)\text{d}\mathbf{x} \\ =&\, h^{j+n}\int_{|\mathbf{x}|\leq 1}v(h\mathbf{x})|\mathbf{x}|^j\ell(|h\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\psi(\mathbf{x})\text{d}\mathbf{x} \\ \Rightarrow\ \left|\int_{\mathbb{R}^n}f(\mathbf{x})\psi(\mathbf{x}/h)\text{d}\mathbf{x}\right| \leq&\, h^{j+n} |v|_\infty|\ell|_\infty \int_{|\mathbf{x}|\leq 1}|\mathbf{x}|^j\text{d}\mathbf{x} \leq C_1 h^{j+n}, \end{align*} since $|\mathbf{x}|^j$ is integrable as $j\geq 1-n$. We have proven the estimate for the first term.\\ \noindent(\textbf{II}): For the second term, knowing that the volume of the fundamental parallelepiped of the lattice $V:=(h\mathbb{Z})^n$ is $h^n$ and that the dual lattice is $V^*=(h^{-1}\mathbb{Z})^n$, we use the Poisson summation formula: \[ T_h[f] = h^n\sum_{\mathbf{j}\in V} f(\mathbf{j}) = \dfrac{h^n}{h^n} \sum_{\mathbf{l}\in V^*} \hat f\left( {\mathbf{l}} \right) = \int_{\mathbb{R}^n} f(\mathbf{x})\text{d}\mathbf{x} + \sum_{\mathbf{k}\neq\mathbf{0}} \hat f\left( \dfrac{\mathbf{k}}{h} \right)\,. \] Then the error in (\textbf{II}) is: \[ T_h\left[ f(\cdot)\,(1-\psi(\cdot/h)) \right] - \int_{\mathbb{R}^n}f(\mathbf{x})(1-\psi(\mathbf{x}/h))\text{d}\mathbf{x} = \sum_{\mathbf{k}\neq\mathbf{0}} \hat f_\psi (\mathbf{k},h), \] where \begin{align*} \hat f_\psi(\mathbf{k},h) :=& \hat f(\mathbf{k}/h) = \int_{\mathbb{R}^n} f(\mathbf{x})(1-\psi(\mathbf{x}/h))e^{-2\pi\text{i}\mathbf{k} \cdot \mathbf{x}/h}\text{d}\mathbf{x} \\ =& h^n\int_{\mathbb{R}^n} f(h\mathbf{x})(1-\psi(\mathbf{x}))e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x}\,. \end{align*} Using integration by parts separately on each of the variables, we find \begin{align*} \int_{\mathbb{R}^n} \partial_{\mathbf{x}}^{\beta}[f(h\mathbf{x})(1-\psi(\mathbf{x}))]e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x} &= 2\pi\text{i}\,k_j \int_{\mathbb{R}^n} \partial_\mathbf{x}^{\beta-e_j}[f(h\mathbf{x})(1-\psi(\mathbf{x}))]e^{-2\pi\text{i} \mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x} \\ &= (2\pi\text{i}\mathbf{k})^{\beta} \int_{\mathbb{R}^n}f(h\mathbf{x})(1-\psi(\mathbf{x}))e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x}. \end{align*} For the Laplacian operator applied $q$ times we therefore have \begin{align*} & \int_{\mathbb{R}^n} \Delta^{q}[f(h\mathbf{x})(1-\psi(\mathbf{x}))]e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x} \\ =& -4\pi^2\left(\sum_{j=1}^n k_j^2\right)\int_{\mathbb{R}^n} \Delta^{q-1}[f(h\mathbf{x})(1-\psi(\mathbf{x}))]e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x} \\ =& (-1)^q (2\pi)^{2q} |\mathbf{k}|^{2q} \int_{\mathbb{R}^n}f(h\mathbf{x})(1-\psi(\mathbf{x}))e^{-2\pi\text{i}\mathbf{k}\cdot\mathbf{x}}\text{d}\mathbf{x} \,.
\end{align*} We use this result to find an expression we can bound using Lemma \ref{lem:A:hp-hbeta-2-ell(r,s)}; given an integer $q$, we find \begin{align*} \left| \hat f_\psi(\mathbf{k},h) \right| &\leq \dfrac{h^n}{(2\pi)^{2q}|\mathbf{k}|^{2q}} \int_{\mathbb{R}^n}\left| \Delta^q[f(h\mathbf{x})(1-\psi(\mathbf{x}))]e^{-2\pi\text{i}\,\mathbf{k}\cdot\mathbf{x}} \right| \text{d}\mathbf{x} \\ &\leq \dfrac{h^n}{(2\pi)^{2q}|\mathbf{k}|^{2q}} \sum_{|\beta|=2q} c_\beta \int_{\mathbb{R}^n} \left| \partial_\mathbf{x}^\beta[f(h\mathbf{x})(1-\psi(\mathbf{x}))] \right| \text{d}\mathbf{x} \\ &\leq \dfrac{h^n}{(2\pi)^{2q}|\mathbf{k}|^{2q}} \sum_{|\beta|=2q} \tilde c_\beta (h^j+h^{|\beta|-n}) =
\bar c_\beta \dfrac{h^{j+n}+h^{2q}}{|\mathbf{k}|^{2q}}\,. \end{align*} Then the series of Fourier coefficients is \begin{align*} \left| T_h\left[ f(\cdot)\,(1-\psi(\cdot/h)) \right] - \int_{\mathbb{R}^n}f(\mathbf{x})(1-\psi(\mathbf{x}/h))\text{d}\mathbf{x} \right| &\leq \sum_{\mathbf{k}\neq\mathbf{0}} \left| \hat f_\psi(\mathbf{k},h) \right| \\ & \leq \bar c_\beta\sum_{\mathbf{k}\neq\mathbf{0}} \dfrac{h^{j+n}+h^{2q}}{|\mathbf{k}|^{2q}}\,. \end{align*} The series converges if $2q>n$, and the leading order is $h^{j+n}$ if $2q\geq j+n$, so by taking $q\geq \max(1+n/2,(n+j)/2)$ we obtain the desired bound. Combining the results for (\textbf{I}) and (\textbf{II}), we find the bound \[ \left| \int_{\mathbb{R}^n}f(\mathbf{x})\text{d}\mathbf{x}-T_{h,\,\mathcal{N}_h}^0[f] \right| \leq C_\beta h^{j+n}\,. \] This proves the theorem. \end{proof} We use the notation $\mathbf{x}=(x_1,x_2,\dots,x_n)=\sum_{l=1}^n x_l e_l$, where $e_l$ denotes the $l$-th element of the standard basis of $\mathbb{R}^n$. \begin{lemma}\label{lem:A:hp-hbeta-2-ell(r,s)} Let $g,\psi\in C^\infty_c(\mathbb{R}^n)$, $\ell\in C^\infty(\mathbb{R}\times\mathbb{S}^{n-1})$, where $\psi$ is such that \[ \psi(\mathbf{x})= \begin{cases} 1 & |\mathbf{x}|\leq \frac{1}{2}\,,\\[0.1cm] 0 & |\mathbf{x}|\ge 1\,. \end{cases} \] Let $j\geq 1-n$, and $f(\mathbf{x})=|\mathbf{x}|^j \ell(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)g(\mathbf{x})$; then, for any multi-index $\beta\in\mathbb{N}^n_0$ there exists a constant $C_\beta$ independent of $h$ such that, for $0<h\leq 1$, \begin{equation}\label{eq:lemma-thm-thesis} \int_{\mathbb{R}^n}\Big| \partial_\mathbf{x}^\beta\left[ f(h\mathbf{x})(1-\psi(\mathbf{x})) \right] \Big|\text{\emph{d}} \mathbf{x}\leq C_\beta(h^{j}+h^{|\beta|-n})\,. \end{equation} \end{lemma} \begin{proof} Given $\beta\in\mathbb{N}^n_0$, we first prove that there exist functions $f_{\beta}:\mathbb{R}\times\mathbb{S}^{n-1}\to\mathbb{R}$ in $C_c^\infty(\mathbb{R}\times\mathbb{S}^{n-1})$ such that \begin{equation}\label{eq:lemma-thm-firststep} \partial_\mathbf{x}^\beta f(\mathbf{x})=|\mathbf{x}|^{j-|\beta|} f_{\beta}(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\,. \end{equation} We prove this by induction. The induction base $\beta=\mathbf{0}$ is true because \[ \partial_\mathbf{x}^\mathbf{0} f(\mathbf{x})=f(\mathbf{x})=|\mathbf{x}|^j \ell(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)g(\mathbf{x})=:|\mathbf{x}|^j f_{\mathbf{0}}(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|), \] where $f_{\mathbf{0}}\in C^\infty_c(\mathbb{R}\times\mathbb{S}^{n-1})$. For the induction step we assume that \eqref{eq:lemma-thm-firststep} is true for $\beta$ and prove it for $\beta+e_l$: \[ \partial_\mathbf{x}^{\beta+e_l}f(\mathbf{x})=\partial_\mathbf{x}^{e_l} |\mathbf{x}|^{j-|\beta|} f_{\beta}(|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\,.
\] By computing the derivative we find \begin{align*} \partial_\mathbf{x}^{e_l}|\mathbf{x}|^{j-|\beta|} f_{\beta}\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right) =&\, |\mathbf{x}|^{j-|\beta|-1}\Bigg[(j-|\beta|)\,\left(\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)_l f_{\beta}\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right) \\ &+ \nabla_{\mathbf{u}}f_{\beta}\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)\cdot\left(e_l-\left(\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)_l\dfrac{\mathbf{x}}{|\mathbf{x}|}\right) \\ &+ |\mathbf{x}|\left(\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)_l\partial_r f_{\beta}\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)\Bigg]\\ =&:|\mathbf{x}|^{j-|\beta|-1}f_{\beta+e_l}\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)\,. \end{align*} Because $f_\beta\in C^\infty_c(\mathbb{R}\times\mathbb{S}^{n-1})$ the same is also true for $f_{\beta+e_l}$.\\ The next step is to expand the derivative in \eqref{eq:lemma-thm-thesis} and use \eqref{eq:lemma-thm-firststep}, and then bound it: \begin{align*} \partial_\mathbf{x}^\beta\left[ f(h\mathbf{x})(1-\psi(\mathbf{x})) \right] =& \sum_{\nu\leq\beta}{\beta\choose\nu}\partial^{\beta-\nu}[1-\psi(\mathbf{x})]\,h^{|\nu|}\partial^\nu f(h\mathbf{x}) \\ =& \sum_{\nu\leq\beta}{\beta\choose\nu}\partial^{\beta-\nu}[1-\psi(\mathbf{x})]h^{j}|\mathbf{x}|^{j-|\nu|} f_{\nu}(|h\mathbf{x}|,\mathbf{u})\,. \end{align*} We use the properties of $\psi$, and the compact support of $f_\nu$. Let $L>0$ be such that $\forall \nu\leq\beta$, supp$\,f_\nu$ is contained in the ball $B_L(\mathbf{0})$. Note furthermore that the derivatives of $\psi$ are compactly supported in the annulus $\{\mathbf{x}\in\mathbb{R}^n\,:\,\frac{1}{2}\leq|\mathbf{x}|\leq 1\}$. From this we can say that \[ \Big| \partial_\mathbf{x}^\beta\left[ f(h\mathbf{x})(1-\psi(\mathbf{x})) \right] \Big|\leq\ C \begin{cases} 0\,, & |\mathbf{x}|\leq \frac{1}{2}\,, \\ h^{j}\,, & \frac{1}{2}\leq|\mathbf{x}|\leq 1\,, \\ h^{j}|\mathbf{x}|^{j-|\beta|}\,, & 1\leq|\mathbf{x}|\leq L/h\,, \\ 0\,, & |\mathbf{x}|>L/h\,. \end{cases} \] We use these bounds in the evaluation of the integral, and after passing to polar coordinates we arrive at \eqref{eq:lemma-thm-thesis} via \begin{align*} \int_{\mathbb{R}^n}\Big| \partial_\mathbf{x}^\beta\left[ f(h\mathbf{x})(1-\psi(\mathbf{x})) \right] \Big|\text{d}\mathbf{x} \leq& C_1\int_{1/2}^1 h^j r^{n-1}\text{d} r+C_2 h^j \int_{1}^{L/h}r^{j-|\beta|+n-1}\text{d} r \\ =& \bar C_1 h^j + C_2 h^{|\beta|-n} \int_{h}^L r^{j-|\beta|+n-1}\text{d} r \\ =& \bar C_1 h^j + C_2 h^{|\beta|-n} \left( C_3+C_4 h^{j-|\beta|+n} \right) \\ \leq & C_\beta\,( h^j + h^{|\beta|-n} )\,. \end{align*} The lemma is proven. \end{proof} \subsection{Proof of Lemma~\ref{lem:f-over-distance-expansion}} \label{sub:A:f-over-distance-expansion} For any $\mathbf{u}\in\mathbb{S}^1$, we expand $\ell$ around $r=0$ and write the remainder in integral form: \[ \ell(r,\mathbf{u})=\sum_{j=0}^q\dfrac{1}{j!}\partial_r^j\ell(0,\mathbf{u})r^j+\dfrac{r^{q+1}}{q!}\int_0^1\partial_r^{q+1}\ell(tr,\mathbf{u})(1-t)^q\text{d} t\,. 
\] Then \begin{align*} \triangle_q s(\mathbf{x})&=\dfrac{1}{|\mathbf{x}|}\ell\left(|\mathbf{x}|,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)-\sum_{j=0}^q \dfrac{1}{j!}\partial_r^j \ell\left(0,\dfrac{\mathbf{x}}{|\mathbf{x}|}\right)|\mathbf{x}|^{j-1}\\ &= \dfrac{|\mathbf{x}|^q}{q!}\int_0^1(1-t)^q\partial_r^{q+1}\ell(t|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\text{d} t=|\mathbf{x}|^q\sigma (|\mathbf{x}|,\mathbf{x}/|\mathbf{x}|)\,, \end{align*} where $\sigma\in C^\infty((-r_0,r_0)\times \mathbb{S}^1)$ because $\ell\in C^\infty((-r_0,r_0)\times \mathbb{S}^1)$. The lemma is thus proven. \subsection{Proof of Lemma~\ref{lem:properties-of-h}} \label{sub:A:properties-of-h} The first two identities in \eqref{eq:hprop1} follow since $P_\Gamma(\bar\mathbf{x}^*+(\mathbf{0},z')_B) = \bar\mathbf{x}^*$ for all $z'$, as was already pointed out in Section~\ref{subsub:proj-mapping-to-layer-kernels}. For the second part, we note that the surface normal at the point $\bar\mathbf{x}^*+ \big( \mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}})\big)_B$ is parallel to $(-\nabla f(\mathbf{y}_{\text{p}}),1)_B$. Therefore, there is a $t\in\mathbb{R}$ such that $$ \bar\mathbf{x}^*+(\mathbf{y}',z')_B= \bar\mathbf{x}^*+ \big( \mathbf{y}_{\text{p}},f(\mathbf{y}_{\text{p}})\big)_B+t (-\nabla f(\mathbf{y}_{\text{p}}),1)_B, $$ which implies that \begin{equation}\label{eq:A:x-projection-general-f} \mathbf{y}'= \mathbf{y}_{\text{p}}-(z'-f(\mathbf{y}_{\text{p}})) \nabla f(\mathbf{y}_{\text{p}})=:\mathbf{F}(\mathbf{y}_{\text{p}}). \end{equation} Using the fact that $\mathbf{y}_{\text{p}}=\mathbf{h}(\mathbf{y}',z')$ and differentiating both sides with respect to $\mathbf{y}'$ gives \begin{align*} I&= \frac{\partial \mathbf{F}(\y_{\text{p}})}{\partial \y_{\text{p}}}^T \frac{\partial \mathbf{h}}{\partial \mathbf{y}} =\frac{\partial \mathbf{h}}{\partial \mathbf{y}} -\left( (z'-f(\mathbf{y}_{\text{p}})) \frac{\partial^2 f}{\partial \mathbf{y}^2}(\mathbf{y}_{\text{p}}) -\nabla f(\mathbf{y}_{\text{p}}) \nabla f(\mathbf{y}_{\text{p}})^T\right)\frac{\partial \mathbf{h}}{\partial \mathbf{y}}, \end{align*} and the result follows upon evaluating at $\mathbf{y}'=\mathbf{y}_{\text{p}}=\mathbf{0}$ and using \eqref{eq:f-properties-def-M}. Since $\mathbf{h}$ is smooth on ${\mathcal M}_L$, the matrix $D$ is thus well-defined. For the second order term in the Taylor expansion, we write $\y_{\text{p}}=(y_1,y_2)$, $\mathbf{h}=(h_1,h_2)^T$ and $\mathbf{F}=(F_1,F_2)^T$. We then get for $j=1,2$, $$ \mathbf{0} =\frac{\partial^2 F_j(\mathbf{h})}{\partial \mathbf{y}^2} = \frac{\partial F_j(\y_{\text{p}})}{\partial y_1} \frac{\partial^2 h_1}{\partial \mathbf{y}^2}+ \frac{\partial F_j(\y_{\text{p}})}{\partial y_2} \frac{\partial^2 h_2}{\partial \mathbf{y}^2} + \frac{\partial\mathbf{h}}{\partial \mathbf{y}}^T \frac{\partial^2 F_j(\y_{\text{p}})}{\partial \y_{\text{p}}^2} \frac{\partial\mathbf{h}}{\partial \mathbf{y}}. $$ From the expressions above we have that $\frac{\partial \mathbf{F}(\mathbf{0})}{\partial \y_{\text{p}}}=D^{-1}(z')$. Therefore, evaluating at $\mathbf{y}'=\mathbf{y}_{\text{p}}=\mathbf{0}$ yields $$ \mathbf{0} = D(z')^{-1}_{jj} \frac{\partial^2 h_j}{\partial \mathbf{y}^2} + D(z')^T \frac{\partial^2 F_j(\y_{\text{p}})}{\partial \y_{\text{p}}^2} D(z'), \qquad j=1,2.
$$ Since $$ \left.\frac{\partial^2 F_j(\y_{\text{p}})}{\partial \y_{\text{p}}^2} \right|_{\y_{\text{p}}=0} =\left.-z'\frac{\partial}{\partial y_j}\frac{\partial^2 f}{\partial \y_{\text{p}}^2}\right|_{\y_{\text{p}}=0}, $$ we finally get \begin{align*} \frac{1}{2} \Vector{ {\mathbf{y}'}^T\frac{\partial^2 h_1}{\partial \mathbf{y}^2}\mathbf{y}'\\ {\mathbf{y}'}^T\frac{\partial^2 h_2}{\partial \mathbf{y}^2}\mathbf{y}' } =& -\frac{1}{2} D(z')\Vector{ {\mathbf{y}'}^TD(z')^T \frac{\partial^2 F_1(\y_{\text{p}})}{\partial \y_{\text{p}}^2} D(z'){\mathbf{y}'}\\ {\mathbf{y}'}^TD(z')^T \frac{\partial^2 F_2(\y_{\text{p}})}{\partial \y_{\text{p}}^2} D(z'){\mathbf{y}'}} \\ =&\ z' D(z')C\Bigl(D(z')\mathbf{y}',D(z')\mathbf{y}'\Bigr). \end{align*} This gives \eqref{eq:hig_order_proj_x} and the lemma is proven. \subsection{Proof of Lemma~\ref{lem:g=f-over-distance}} \label{sub:A:g=f-over-distance} For the first function, using the hypothesis $\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$ and the notation $\mathbf{x}=|\mathbf{x}|\mathbf{u}$ with $\mathbf{x}/|\mathbf{x}|=:\mathbf{u}\in\mathbb{S}^{m-1}$ we write the expansion around $\mathbf{x}=\mathbf{0}$ as \begin{align*} \bar\mathbf{g}(\mathbf{x}) =& \bar\mathbf{g}(\mathbf{0})+D\bar\mathbf{g}(\mathbf{0})\mathbf{x}+\sum_{|\nu|=2}E_{\bar\mathbf{g},\nu}(\mathbf{x})\mathbf{x}^\nu \\ =& |\mathbf{x}|\left( D\bar\mathbf{g}(\mathbf{0})\mathbf{u}+|\mathbf{x}| \sum_{|\nu|=2}E_{\bar\mathbf{g},\nu}(\mathbf{x})\mathbf{u}^\nu\right) =:|\mathbf{x}|f(|\mathbf{x}|,\mathbf{u})\,, \end{align*} where $E_{\bar\mathbf{g},\nu}(\mathbf{x}):=\frac{2}{\nu!}\int_0^1(1-t)\partial^\nu \bar\mathbf{g}(t\mathbf{x})\text{d} t$ is given by the integral form of the remainder term. Using the full rank of $D\bar\mathbf{g}(\mathbf{0})$, there exists $0<r_1\leq r_0$ such that $f(|\mathbf{x}|,\mathbf{u})\neq 0$ in $(-r_1,r_1)\times \mathbb{S}^{m-1}$. Then \[ \dfrac{1}{|\bar\mathbf{g}(\mathbf{x})|}=\dfrac{1}{|\mathbf{x}|}\dfrac{1}{|f(|\mathbf{x}|,\mathbf{u})|}=\dfrac{1}{|\mathbf{x}|}\ell_1\left(|\mathbf{x}|,\mathbf{u}\right)\,, \] and from the hypotheses on $D\bar\mathbf{g}(\mathbf{0})$ and on the smoothness of $\bar\mathbf{g}$, $\ell_1$ is $C^\infty((-r_1,r_1)\times \mathbb{S}^{m-1})$. For the second function, let $r(\mathbf{x}):=\bar\mathbf{p}(\mathbf{x})^T\bar\mathbf{g}(\mathbf{x})$; then $\nabla r(\mathbf{x})=\bar\mathbf{g}(\mathbf{x})^TD\bar\mathbf{p}(\mathbf{x})+\bar\mathbf{p}(\mathbf{x})^TD\bar\mathbf{g}(\mathbf{x})$. By the hypotheses $\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$ and $\bar\mathbf{p}(\mathbf{0})^TD\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$, both $r(\mathbf{0})$ and $\nabla r(\mathbf{0})$ vanish, so the expansion of $r$ around $\mathbf{x}=\mathbf{0}$ with the integral form of the remainder reads \[ r(\mathbf{x})=r(\mathbf{0})+\nabla r(\mathbf{0})\mathbf{x}+\sum_{|\nu|=2}E_{r,\nu}(\mathbf{x})\mathbf{x}^\nu=|\mathbf{x}|^2\sum_{|\nu|=2}E_{r,\nu}(\mathbf{x})\mathbf{u}^\nu\,, \] where $E_{r,\nu}(\mathbf{x}):=\frac{2}{\nu!}\int_0^1(1-t)\partial^\nu r(t\mathbf{x})\text{d} t$, so that we find \[ \dfrac{\bar\mathbf{p}(\mathbf{x})^T\bar\mathbf{g}(\mathbf{x})}{|\bar\mathbf{g}(\mathbf{x})|^3}=\dfrac{|\mathbf{x}|^2\sum_{|\nu|=2}E_{r,\nu}(\mathbf{x})\mathbf{u}^\nu}{|\mathbf{x}|^3|f(|\mathbf{x}|,\mathbf{u})|^3}=\dfrac{1}{|\mathbf{x}|}\dfrac{\sum_{|\nu|=2}E_{r,\nu}(\mathbf{x})\mathbf{u}^\nu}{|f(|\mathbf{x}|,\mathbf{u})|^3}=\dfrac{1}{|\mathbf{x}|}\ell_2(|\mathbf{x}|,\mathbf{u})\,. \] From the hypotheses on the smoothness of $\bar\mathbf{g}$ and $\bar\mathbf{p}$, $\ell_2$ is $C^\infty((-r_1,r_1)\times \mathbb{S}^{m-1})$ and the result is proven.
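As a simple illustration of the first statement (added here only as an example), consider the linear case $\bar\mathbf{g}(\mathbf{x})=A\mathbf{x}$ with $A$ a constant matrix of full column rank. Then $\bar\mathbf{g}(\mathbf{0})=\mathbf{0}$, $D\bar\mathbf{g}(\mathbf{0})=A$, and, writing $\mathbf{x}=|\mathbf{x}|\mathbf{u}$,
\[
\dfrac{1}{|\bar\mathbf{g}(\mathbf{x})|}=\dfrac{1}{|\mathbf{x}|\,|A\mathbf{u}|}=\dfrac{1}{|\mathbf{x}|}\,\ell_1(|\mathbf{x}|,\mathbf{u}),\qquad \ell_1(|\mathbf{x}|,\mathbf{u}):=\dfrac{1}{|A\mathbf{u}|}\,,
\]
which is smooth (indeed independent of $|\mathbf{x}|$) because $|A\mathbf{u}|$ is bounded away from zero on $\mathbb{S}^{m-1}$.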
\section{Computation of the derivatives of the local surface function} \label{sec:appendixB} In this section we show how to compute numerically the derivatives of $f$ in the Implicit Boundary Integral Methods setting of Section~\ref{sec:application-layer-potentials-3D}. The derivatives are needed to evaluate the functions $B$ and $C$ of~\eqref{eq:surface-B-C}, which are used in the approximated kernels \eqref{eq:s0s1-kernels}. The first derivatives and the mixed second derivatives are zero by construction, so we only need to find the pure second derivatives and all the third derivatives.\\ Let $\bar{\mathbf{z}}$ be an arbitrary point in $T_\varepsilon$, and $\eta=d_\Gamma(\bar{\mathbf{z}})$. Let $\Gamma_\eta:=\{\bar{\mathbf{w}}\in T_\varepsilon\,:\,d_\Gamma(\bar{\mathbf{w}})=\eta\}$ be the surface parallel to $\Gamma$ at signed distance $\eta$. The pure second derivatives of $f$ at $P_\Gamma(\bar{\mathbf{z}})$, $f_{xx},f_{yy}$, are the principal curvatures $\kappa_1,\kappa_2$ of $\Gamma$ at $P_\Gamma(\bar{\mathbf{z}})$. We find the principal curvatures $g_1,g_2$ of $\Gamma_\eta$ at $\bar{\mathbf{z}}$ via the Hessian of $d_\Gamma$ at $\bar{\mathbf{z}}$: \[ H_{{d}_\Gamma}(\bar{\mathbf{z}})=\nabla^{2}{d}_\Gamma(\bar{\mathbf{z}})=\left[\begin{array}{ccc} \mathbf{\bar n} & \bar{\bm{\tau}}_{1} & \bar{\bm{\tau}}_{2}\end{array}\right]\begin{bmatrix}0\\ & -g_{1}\\ & & -g_{2} \end{bmatrix}\left[\begin{array}{ccc} \mathbf{\bar n} & \bar{\bm{\tau}}_{1} & \bar{\bm{\tau}}_{2}\end{array}\right]^{T} \] where $\bar{\bm{\tau}}_1$, $\bar{\bm{\tau}}_2$ are the principal directions and $\mathbf{\bar n}$ is the normal to $\Gamma$ at $P_\Gamma(\bar{\mathbf{z}})$. In practice, the values of either $P_\Gamma$ or $d_\Gamma$ are given on the grid nodes. The principal directions and curvatures are computed from an eigendecomposition of third-order numerical approximations of the Hessian, $H_{d_\Gamma}$. Alternatively, one can obtain this information from the derivative matrix of $P_\Gamma$, see \cite{tsaikublik16}. Then the following relation lets us find the principal curvatures $\kappa_1,\kappa_2$ from $g_1,g_2$ and $\eta$: \[ -\kappa_{i}=\frac{-g_{i}}{1+\eta g_{i}},~~~i=1,2. \] The third derivatives of $f$ can be found by computing the second derivatives with respect to $\mathbf{y}'$ of $\mathbf{h}(\mathbf{y}',z')$ from Section~\ref{subsub:proj-mapping-to-layer-kernels}.
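A minimal Python sketch of this curvature computation (our own illustration; the Hessian approximation \texttt{H}, the unit normal \texttt{n} and the signed distance \texttt{eta} at $\bar{\mathbf{z}}$ are assumed to be available) could read as follows.
\begin{verbatim}
import numpy as np

def principal_data(H, n, eta):
    # H: 3x3 approximation of the Hessian of d_Gamma at z; n: unit normal; eta = d_Gamma(z).
    w, V = np.linalg.eigh(H)               # eigendecomposition of the symmetric Hessian
    idx = np.argsort(np.abs(V.T @ n))[:2]  # the two eigenvectors most orthogonal to n
    g = -w[idx]                            # curvatures g_1, g_2 of the level set Gamma_eta
    tau = V[:, idx]                        # principal directions tau_1, tau_2
    kappa = g / (1.0 + eta * g)            # curvatures kappa_1, kappa_2 of Gamma, i.e. f_xx, f_yy
    return kappa, tau
\end{verbatim}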
By differentiating twice \eqref{eq:A:x-projection-general-f} with respect to $\mathbf{y}'=(x,y)$ with $\mathbf{h}(\mathbf{y}',z')=\mathbf{y}_{\text{p}}=(h_1,h_2)$ and evaluating at $\mathbf{y}'=\mathbf{0}$, we find the following two linear systems: \begin{align} &V \left( \begin{array}{c} f_{xxx} \\ f_{xxy} \\ f_{xyx} \\ f_{xyy} \end{array} \right) = \dfrac{1-z'\kappa_1}{z'} \left( \begin{matrix} \frac{\partial^2 h_1}{\partial x^2}\\[0.1cm] \frac{\partial^2 h_1}{\partial x\partial y}\\[0.1cm] \frac{\partial^2 h_1}{\partial y\partial x}\\[0.1cm] \frac{\partial^2 h_1}{\partial y^2} \end{matrix} \right), \label{eq:fxxx} \hspace{1cm} V \left( \begin{array}{c} f_{yxx} \\ f_{yxy} \\ f_{yyx} \\ f_{yyy} \end{array} \right) = \dfrac{1-z'\kappa_2}{z'} \left( \begin{matrix} \frac{\partial^2 h_2}{\partial x^2}\\[0.1cm] \frac{\partial^2 h_2}{\partial x\partial y}\\[0.1cm] \frac{\partial^2 h_2}{\partial y\partial x}\\[0.1cm] \frac{\partial^2 h_2}{\partial y^2} \end{matrix} \right), \end{align} \begin{align*} \text{where }\ &V:=\left( \begin{matrix} \left(\frac{\partial h_1}{\partial x}\right)^2 & \frac{\partial h_1}{\partial x}\frac{\partial h_2}{\partial x} & \frac{\partial h_1}{\partial x}\frac{\partial h_2}{\partial x} & \left(\frac{\partial h_2}{\partial x}\right)^2\\ \frac{\partial h_1}{\partial x}\frac{\partial h_1}{\partial y} & \frac{\partial h_1}{\partial x}\frac{\partial h_2}{\partial y} & \frac{\partial h_1}{\partial y}\frac{\partial h_2}{\partial x} & \frac{\partial h_2}{\partial x}\frac{\partial h_2}{\partial y}\\ \frac{\partial h_1}{\partial x}\frac{\partial h_1}{\partial y} & \frac{\partial h_1}{\partial y}\frac{\partial h_2}{\partial x} & \frac{\partial h_1}{\partial x}\frac{\partial h_2}{\partial y} & \frac{\partial h_2}{\partial x}\frac{\partial h_2}{\partial y}\\ \left(\frac{\partial h_1}{\partial y}\right)^2 & \frac{\partial h_1}{\partial y}\frac{\partial h_2}{\partial y} & \frac{\partial h_1}{\partial y}\frac{\partial h_2}{\partial y} & \left(\frac{\partial h_2}{\partial y}\right)^2 \end{matrix} \right).\nonumber \end{align*} We find the first and second derivatives of $\mathbf{h}(\mathbf{y}',z')$ by computing the derivatives of $P_\Gamma$ at $\bar{\mathbf{z}}$ and applying a change of basis. By construction $\bar{\mathbf{z}}=P_\Gamma(\bar{\mathbf{z}})+\eta \mathbf{\bar n}$. Then we use the closest point projections of the grid nodes around $\bar{\mathbf{z}}$, \[ \bar{\mathbf{v}}_{ijk}:=P_\Gamma(\bar{\mathbf{z}}+(i,j,k)h),\ \ i,j,k=-2,-1,0,1,2. \] In the $B$ basis, these points are expressed as $\bar{\mathbf{v}}_{ijk}=\bar\mathbf{x}^*+(\bar{\mathbf{w}}_{ijk})_B$, where $\bar{\mathbf{w}}_{ijk}=Q^{-1}(\bar{\mathbf{v}}_{ijk}-\bar\mathbf{x}^*)$. We apply finite differences (central differences of 4th order in this case) to the components of the nodes $\bar{\mathbf{w}}_{ijk}=(X_{ijk},Y_{ijk},Z_{ijk})$ to compute \[ W_1\approx\nabla X,\ \ W_2\approx\nabla Y,\ \ W_3\approx\nabla^2X,\ \ W_4\approx\nabla^2Y.
\] We can then use these approximations to find the derivatives of $h_i$, $i=1,2$ by applying the following transformations: \begin{equation*} \begin{array}{llll} \frac{\partial h_1}{\partial x} = \bar{\bm{\tau}}_1^T W_1, & \frac{\partial h_1}{\partial y} = \bar{\bm{\tau}}_2^T W_1 , & \frac{\partial h_2}{\partial x} = \bar{\bm{\tau}}_1^T W_2, & \frac{\partial h_2}{\partial y} = \bar{\bm{\tau}}_2^T W_2, \\ \frac{\partial^2 h_1}{\partial x^2} = \bar{\bm{\tau}}_1^TW_3\bar{\bm{\tau}}_1 , & \frac{\partial^2 h_1}{\partial x\partial y} = \bar{\bm{\tau}}_2^TW_3\bar{\bm{\tau}}_1 , & \frac{\partial^2 h_1}{\partial y\partial x} = \bar{\bm{\tau}}_1^TW_3\bar{\bm{\tau}}_2 , & \frac{\partial^2 h_1}{\partial y^2} = \bar{\bm{\tau}}_2^TW_3\bar{\bm{\tau}}_2,\\ \frac{\partial^2 h_2}{\partial x^2} = \bar{\bm{\tau}}_1^TW_4\bar{\bm{\tau}}_1 , & \frac{\partial^2 h_2}{\partial x\partial y} = \bar{\bm{\tau}}_2^TW_4\bar{\bm{\tau}}_1 , & \frac{\partial^2 h_2}{\partial y\partial x} = \bar{\bm{\tau}}_1^TW_4\bar{\bm{\tau}}_2 , & \frac{\partial^2 h_2}{\partial y^2} = \bar{\bm{\tau}}_2^TW_4\bar{\bm{\tau}}_2. \end{array} \end{equation*} Finally, we solve the two systems \eqref{eq:fxxx} with these values and $z'=\eta$.
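To make the final step concrete, the following Python sketch (ours; all input names are hypothetical) assembles $V$ from the first derivatives of $\mathbf{h}$ and solves the two systems \eqref{eq:fxxx} with $z'=\eta$.
\begin{verbatim}
import numpy as np

def third_derivatives(dh, d2h1, d2h2, kappa1, kappa2, eta):
    # dh[i][j]: first derivatives of h_{i+1}; d2h1, d2h2: 2x2 Hessians of h_1, h_2;
    # eta = d_Gamma(z) is assumed to be nonzero.  Returns f_xxx, f_xxy, f_xyy, f_yyy.
    h1x, h1y = dh[0]
    h2x, h2y = dh[1]
    V = np.array([
        [h1x * h1x, h1x * h2x, h1x * h2x, h2x * h2x],
        [h1x * h1y, h1x * h2y, h1y * h2x, h2x * h2y],
        [h1x * h1y, h1y * h2x, h1x * h2y, h2x * h2y],
        [h1y * h1y, h1y * h2y, h1y * h2y, h2y * h2y],
    ])
    rhs1 = (1.0 - eta * kappa1) / eta * np.array([d2h1[0, 0], d2h1[0, 1], d2h1[1, 0], d2h1[1, 1]])
    rhs2 = (1.0 - eta * kappa2) / eta * np.array([d2h2[0, 0], d2h2[0, 1], d2h2[1, 0], d2h2[1, 1]])
    fx = np.linalg.solve(V, rhs1)   # (f_xxx, f_xxy, f_xyx, f_xyy)
    fy = np.linalg.solve(V, rhs2)   # (f_yxx, f_yxy, f_yyx, f_yyy)
    # By symmetry of mixed derivatives, f_xxy = f_xyx = f_yxx and f_xyy = f_yxy = f_yyx.
    return fx[0], fx[1], fx[3], fy[3]
\end{verbatim}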
\section{Introduction} \label{Sec:Intro} \setlength{\intextsep}{\savedintextsep} One-dimensional foliations, for example orbits of a flow, appeared early in the history of dynamical systems. More delicate applications, such as foliations stable for, or transverse to, a flow, arrived in due course. We refer to~\cite{HectorHirsch86} for a readable and well-illustrated introduction to this area. Suppose that $S$ is a closed, connected, oriented surface. Suppose that $f \from S \to S$ is a surface homeomorphism. The \emph{mapping torus} for $f$ is the manifold $M(f)$ obtained from $S \times [0, 1]$ by gluing, for every $x \in S$, the point $(x, 1)$ to the point $(f(x), 0)$. Then $M(f)$ is equipped with a \emph{suspension flow} $\Phi(f)$ along the intervals; this flow has a transverse foliation given by the copies of $S$. For an example in genus one, see \reffig{FigEightBox}. Suspension flows associated to surface homeomorphisms are particularly important, for example due to the work of Thurston~\cite[Theorem~5.6]{Thurston82}. When $f$ is (pseudo-)Anosov then we also have the \emph{stable} and \emph{unstable} (singular) foliations for $\Phi(f)$; these are obtained by taking suitable unions of flow-lines. See Examples~\ref{Exa:AnosovMap} and~\ref{Exa:PseudoAnosovMap}; for more detail we refer to~\cite[Chapter~1]{Calegari07}. Unfortunately, the \emph{leaf space} of $\Phi(f)$ is highly non-Hausdorff. To obtain a somewhat calmer object, we define $\mathcal{L}(f)$ to be the leaf space of the lift of $\Phi(f)$ to the universal cover of $M(f)$. Fenley and Mosher~\cite[Proposition~4.2]{FenleyMosher01} show that $\mathcal{L}(f)$ is homeomorphic to the plane $\mathbb{R}^2$. See also~\cite[Lemma~6.53]{Calegari07}. Furthermore, the stable and unstable foliations for $\Phi(f)$ induce singular transverse foliations $F^f$ and $F_f$ of $\mathcal{L}(f)$. The leaf space $\mathcal{L}(f)$ is still somewhat inhomogeneous. Following Agol and Gu\'eritaud~\cite{Agol11, Gueritaud16} we obtain $f^\circ \from S^\circ \to S^\circ$ by removing the singular points from $S$. Thus $M(f^\circ)$ is obtained from $M(f)$ by \emph{drilling}. The transverse foliations in $\mathcal{L}(f^\circ)$ are now non-singular. Here we axiomatise leaf spaces and draw out their connection to \emph{veering triangulations}~\cite{Agol11}. \subsection{This paper} A \emph{loom space} $\mathcal{L}$ is a copy of $\mathbb{R}^2$ equipped with transverse (non-singular) foliations $F^\mathcal{L}$ and $F_\mathcal{L}$, satisfying three axioms (\refdef{Loom}). In \refsec{LoomSpaces} we list several families of examples of loom spaces. We also discuss elementary relationships between the various \emph{skeletal rectangles} appearing in a loom space. In \refsec{Cusps} we formalise the notion of a \emph{cusp} of a loom space. These play an important combinatorial role in the rest of the work. In \refsec{Astroid} we prove a key finiteness result: the \emph{astroid lemma} (\reflem{Astroid}). This places a strong restriction on the projections of certain cusps to certain leaves of the two foliations of $\mathcal{L}$. See \refrem{Others} for several versions of the astroid lemma appearing in previous work. In \refsec{Veering} we review the basics of ideal triangulations and introduce \emph{locally veering triangulations}; we show in \refprop{LocallyVeering} that these are a mild generalisation of veering triangulations. In \refsec{Construction} we give our version of Gu\'eritaud's construction~\cite{Gueritaud16}. We then prove the following. 
\begin{restate}{Proposition}{Prop:Functorial} Gu\'eritaud's construction is a functor from the category of loom spaces to the category of locally veering triangulations. \end{restate} In \refsec{Convex} we define notions of \emph{geodesics} and \emph{convexity} in loom spaces. Using these we prove the following. \begin{restate}{Theorem}{Thm:ThreeSpace} For any loom space, the topological realisation of its veering triangulation is homeomorphic to $\mathbb{R}^3$. \end{restate} \subsection{Future work} \label{Sec:Future} The functor $\veer$ given by Gu\'eritaud's construction is, in fact, an \emph{equivalence} from $\operatorname{\mathsf{Loom}}(\mathbb{R}^2)$, the category of loom spaces, to $\operatorname{\mathsf{Veer}}(\mathbb{R}^3)$, the category of veering triangulations of $\mathbb{R}^3$. That is, there is a functor $\loom \from \operatorname{\mathsf{Veer}}(\mathbb{R}^3) \to \operatorname{\mathsf{Loom}}(\mathbb{R}^2)$ so that $\loom \circ \veer$ and $\veer \circ \loom$ admit natural transformations to the identities on $\operatorname{\mathsf{Loom}}(\mathbb{R}^2)$ and $\operatorname{\mathsf{Veer}}(\mathbb{R}^3)$, respectively. We will prove this by building, from a veering triangulation $\mathcal{V}$ of $\mathbb{R}^3$, a \emph{veering circle}, a pair of laminations in that circle, and thus a \emph{link space} $\loom(\mathcal{V})$. After proving that $\loom(\mathcal{V})$ is a loom space we check naturality. In other work, joint with Jason Manning, we will show how the veering circle for $\mathcal{V}$ compactifies the link space $\loom(\mathcal{V})$ to give the \emph{veering disk} $\operatorname{\mathsf{D}}(\mathcal{V})$. We will then use the astroid lemma (\reflem{Astroid}) to give a careful description of various Hausdorff limits in $\operatorname{\mathsf{D}}(\mathcal{V})$. Further work will prove, when $\mathcal{V}$ gives a finite-volume cusped hyperbolic three-manifold $M$, that the \emph{veering two-sphere} is equivariantly homeomorphic to the Bowditch boundary of $\pi_1(M)$: that is, to $\bdy \mathbb{H}^3$. We will then use naturality to obtain new examples of Cannon-Thurston maps. \subsection{Constructions of veering triangulations} The definition and first construction of veering triangulations in the fibred case are due to Agol~\cite{Agol11}. The second author and collaborators generalised the definition~\cite{HRST11}; they also answered a question of Agol, using a computer search to find the first non-fibred examples. Gu\'eritaud~\cite{Gueritaud16} gave an alternative construction in the fibred case, which has inspired much later work, including this paper. We~\cite{Segerman15} announced a procedure to perform Dehn surgery along horizontal annuli or M\"obius strips in veering triangulations. We gave an implementation of a special case of this in the file \texttt{veering\_dehn\_surgery.py} in our codebase~\cite{Veering21}. Shortly afterwards, Agol and Gu\'eritaud~\cite{Agol15} announced an extension of Gu\'eritaud's construction to drillings of manifolds admitting pseudo-Anosov flows without perfect fits. A computer-generated census of all transverse veering triangulations with up to 16 tetrahedra was found by Giannopolous and ourselves~\cite{GSS19}. Chi Cheuk Tsang~\cite{Tsang21} announced a procedure very similar to our veering Dehn surgery, which he calls \emph{horizontal surgery}. He also introduced \emph{vertical surgery} along strictly ascending loops in the stable branched surface.
Landry, Minsky, and Taylor~\cite{LMT21} gave an exposition of the Agol-Gu\'eritaud construction. Moreover, they proved that the veering triangulation can be made smoothly transverse to the pseudo-Anosov flow. \subsection*{Acknowledgements} The second author was supported in part by National Science Foundation grant DMS-1708239. We thank Sabetta Matsumoto for sourcing the fabric shown in \reffig{Fabric}. \section{Loom spaces} \label{Sec:LoomSpaces} \subsection{Rectangles} Suppose that $\mathcal{L}$ is a copy of $\mathbb{R}^2$, equipped with two transverse foliations $F^\mathcal{L}$ and $F_\mathcal{L}$. We call these the \emph{upper} and \emph{lower} foliations respectively. \begin{remark} \label{Rem:Leaves} The foliations $F^\mathcal{L}$ and $F_\mathcal{L}$ have no singularities in $\mathcal{L}$. Thus, by the Poincar\'e--Hopf theorem~\cite[page~35]{Milnor65}, any two leaves are equal, disjoint, or meet in exactly one point. We deduce that every leaf is properly embedded in $\mathcal{L}$. Thus, by the Jordan curve theorem~\cite[page~94]{Wilder49}, every leaf separates $\mathcal{L}$. \end{remark} \begin{definition} \label{Def:Rectangle} A \emph{rectangle} $R$ in $\mathcal{L}$ is an open subset equipped with a homeomorphism $f_R \from (0,1)^2 \to R$. We require that $f_R$ sends intervals parallel to the $x$--axis to arcs of $F_\mathcal{L}$ and sends intervals parallel to the $y$--axis to arcs of $F^\mathcal{L}$. \end{definition} \begin{remark} \label{Rem:Cardinal} Since $\mathcal{L}$ is simply connected, we may choose orientations for the foliations $F^\mathcal{L}$ and $F_\mathcal{L}$. When we do this, we also assume that all rectangle maps $f_R$ preserve these orientations. This allows us to refer to the directions south, east, north, and west in $\mathcal{L}$. \end{remark} \begin{definition} \label{Def:Sides} Suppose that $F^\mathcal{L}$ and $F_\mathcal{L}$ are oriented. Suppose that $R$ is a rectangle in $\mathcal{L}$. Let $\gamma_t \from (0,1) \to (0,1)^2$ be given by $\gamma_t(s) = (t,s)$. The \emph{west side} of $R$ is the set of accumulation points of the sequence of arcs $(f_R(\gamma_t))_{t\to 0}$. We define the south, east, and north sides of $R$ similarly. Intersections of sides, when they exist, are called \emph{material corners} of $R$. \end{definition} \begin{figure}[htbp] \subfloat[Cusp rectangle]{ \label{Fig:CuspRect} \includegraphics[width = 0.4\textwidth]{Figures/cusp_rect} } \subfloat[Edge rectangle]{ \label{Fig:EdgeRect} \includegraphics[width = 0.4\textwidth]{Figures/edge_rect} } \subfloat[Face rectangle]{ \label{Fig:FaceRect} \includegraphics[width = 0.4\textwidth]{Figures/face_rect} } \subfloat[Tetrahedron rectangle]{ \label{Fig:TetRect} \includegraphics[width = 0.4\textwidth]{Figures/tet_rect} } \caption{Examples of cusp, edge, face, and tetrahedron rectangles. Here we indicate a point missing from the closure of a rectangle with a black dot.} \label{Fig:SkeletalRects} \end{figure} \begin{definition} \label{Def:CuspRect} Suppose that $F^\mathcal{L}$ and $F_\mathcal{L}$ are oriented. A rectangle $R$ in $\mathcal{L}$ is a \emph{south-west cusp rectangle} if there is a continuous extension of $f_R$ to a homeomorphism \[ \closure{f}_R \from [0,1]^2 - \{(0,0)\} \to \closure{R} \] We define \emph{south-east}, \emph{north-east}, and \emph{north-west cusp rectangles} similarly. Suppose that $R$ is a south-west cusp rectangle. Note that the north and east sides of $R$ are closed intervals and that the south and west sides of $R$ are half-open intervals. 
We call the south and west sides of $R$ \emph{cusp sides}. We make similar definitions for the other types of cusp rectangle. \end{definition} See \reffig{CuspRect} for an example of a south-west cusp rectangle. \begin{definition} \label{Def:TetRect} A rectangle $R$ in $\mathcal{L}$ is a \emph{tetrahedron rectangle} if there are $a, b, c, d \in (0,1)$ and a continuous extension of $f_R$ to a homeomorphism \[ \closure{f}_R \from [0,1]^2 - \{(a,0), (1,b), (c,1), (0,d)\} \to \closure{R}\qedhere \] \end{definition} See \reffig{TetRect} for an example of a tetrahedron rectangle. \subsection{Loom spaces} We are now ready to state our main definition. \begin{wrapfigure}[12]{r}{0.345\textwidth} \centering \labellist \small\hair 2pt \pinlabel {$R$} at 82 68 \endlabellist \vspace{5pt} \includegraphics[width = 0.25\textwidth]{Figures/cusp_side} \caption{A cusp rectangle $R$. Its southern side is contained in the shaded rectangle.} \label{Fig:CuspSide} \end{wrapfigure} \vspace{-22pt} \begin{definition} \label{Def:Loom} A \emph{loom space} $\mathcal{L}$ is a copy of $\mathbb{R}^2$ equipped with two transverse foliations $F^\mathcal{L}$ and $F_\mathcal{L}$ satisfying the following axioms. \begin{enumerate} \item \label{Itm:Cusp} Every cusp side of every cusp rectangle is contained in some rectangle. (See \reffig{CuspSide}.) \item \label{Itm:Tet} Every rectangle is contained in some tetrahedron rectangle. \item \label{Itm:Keane} If $R$ is a tetrahedron rectangle with associated parameters $a$, $b$, $c$, and $d$ then $a \neq c$ and $b \neq d$. \qedhere \end{enumerate} \end{definition} \begin{remark} To explain the name \emph{loom space} we recall that the warp and weft of a fabric, as produced by a loom, give a pair of transverse foliations. See \reffig{Fabric}. \end{remark} \begin{definition} \label{Def:LoomIso} Suppose that $\mathcal{L}$ and $\mathcal{M}$ are loom spaces. We say that $f \from \mathcal{L} \to \mathcal{M}$ is a \emph{loom isomorphism} if \begin{itemize} \item $f$ is a homeomorphism and \item $f$ sends leaves to leaves. \qedhere \end{itemize} \end{definition} Note that a loom isomorphism $f$ may send leaves of $F^\mathcal{L}$ to leaves of $F^\mathcal{M}$ \emph{or} to leaves of $F_\mathcal{M}$. We use $\Isom(\mathcal{L}, \mathcal{M})$ to denote the set of loom isomorphisms from $\mathcal{L}$ to $\mathcal{M}$. Note that loom isomorphisms compose in the usual way. Thus loom spaces, together with their loom isomorphisms, form a category; we denote this by $\operatorname{\mathsf{Loom}}(\mathbb{R}^2)$. Finally, since loom isomorphisms have inverses the set $\Aut(\mathcal{L}) = \Isom(\mathcal{L}, \mathcal{L})$ is a group with respect to composition. \subsection{Examples of loom spaces} Our first loom space comes from a well-known example in dynamics. The earliest exposition that we are aware of is due to Smale~\cite[page~757]{Smale67}. \begin{example} \label{Exa:AnosovMap} Suppose that $A_0 \in \SL(2, \mathbb{Z})$ is an \emph{Anosov matrix}: that is, $\operatorname{trace}(A_0)^2 > 4$. As an example, in \reffig{AnosovMap} we take \[ A_0 = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \] Let $T = \mathbb{R}^2 / \mathbb{Z}^2$ be the two-torus; let $A$ be the homeomorphism of $T$ induced by $A_0$. Let $F^A$ and $F_A$ be the resulting eigenfoliations in $T$. Let $x \in T$ be the image of the origin. Let $\mathcal{L}$ be the universal cover of $T^\circ = T - \{x\}$. Define $F^\mathcal{L}$ and $F_\mathcal{L}$ by lifting the eigenfoliations. Then $\mathcal{L}$, with these foliations, is a loom space. 
The deck transformations of the covering give examples of loom isomorphisms. We obtain two more isomorphisms by lifting the actions of the matrices \[ R = \begin{pmatrix*}[r] 0 & -1 \\ 1 & 0 \end{pmatrix*} \quad \mbox{and} \quad G = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \] on $\mathbb{R}^2$ to $\mathcal{L}$. It is an exercise to show that these (and the deck transformations) generate $\Aut(\mathcal{L})$. \end{example} \begin{figure}[htbp] \labellist \small\hair 2pt \pinlabel \rotatebox{31.718}{$S$} at 115 20 \pinlabel \rotatebox{31.718}{$W$} at -25 63 \pinlabel \rotatebox{31.718}{$N$} at 5 210 \pinlabel \rotatebox{31.718}{$E$} at 150 170 \endlabellist \includegraphics[width = 0.6\textwidth]{Figures/anosov_map} \caption{The action of $A_0$ on its eigenfoliations. The dots are placed at integer lattice points. The rectangles containing letters (with their usual aspect ratio) are mapped by $A_0$ to the corresponding shaded rectangles. These descend to $T$ to give a \emph{Markov partition}.} \label{Fig:AnosovMap} \end{figure} Our next family of examples comes from work of Thurston~\cite[Theorem~4(ii)]{Thurston88}; as their name indicates, these generalise \refexa{AnosovMap} to surfaces of higher genus. \begin{example} \label{Exa:PseudoAnosovMap} Suppose that $S$ is a closed, connected, oriented surface with genus two or more. Suppose that $f \from S \to S$ is a \emph{pseudo-Anosov map}: that is, there are transverse measured singular foliations $F^f$ and $F_f$, each preserved leafwise by $f$, whose measures are, by $f$, respectively expanded and contracted by a common factor $\lambda_f > 1$. Let $Z \subset S$ be the set of singularities of $F^f$ and $F_f$. Let $S^\circ = S - Z$. Form $\mathcal{L}$ by taking the universal cover of $S^\circ$ and lifting the foliations. Then $\mathcal{L}$, with these foliations, is a loom space. The element $f$ and the deck transformations generate a free-by-cyclic group. In future work we will show that this is a finite index subgroup of $\Aut(\mathcal{L})$. See \refsec{Future}. \end{example} We can generalise \refexa{PseudoAnosovMap} by instead taking $q$ to be a \emph{quadratic differential} on $S$. (Excellent introductions to abelian and quadratic differentials include~\cite{Zorich06} and~\cite{Wright15}.) We must assume that the vertical and horizontal foliations $F^q$ and $F_q$ have no compact leaves. Taking $Z$ to be the set of zeros of $q$, the rest of the construction is the same as \refexa{PseudoAnosovMap}. This gives uncountably many examples of loom spaces. Returning to the topological theme, suppose that $f \from S \to S$ is a surface homeomorphism. As discussed in \refsec{Intro}, from $f$ we form the mapping torus $M(f)$ and its suspension flow $\Phi(f)$. For an example, see \reffig{FigEightBox}. It is an exercise to show that $\Phi(f)$ is a \emph{smooth pseudo-Anosov flow} if and only if $f$ is a pseudo-Anosov homeomorphism. For definitions, see~\cite[Section~6.6]{Calegari07}. Our next example generalises this to other three-manifolds; we refer to Fenley's work, in particular~\cite[Definition~3.2]{Fenley98}, for an overview of pseudo-Anosov flows \emph{without perfect fits}. \begin{figure}[htbp] \includegraphics[width = 0.6\textwidth]{Figures/fig_8_flow_boxes} \caption{A flow box decomposition of the suspension of \refexa{AnosovMap}. 
To obtain a pseudo-Anosov flow, take a branched cover over the suspension of the origin.} \label{Fig:FigEightBox} \end{figure} \begin{example} \label{Exa:PseudoAnosovFlow} Suppose that $M$ is a closed, connected, oriented three-manifold. Suppose that $\Phi \from M \times \mathbb{R} \to M$ is a topological pseudo-Anosov flow (without perfect fits). Let $\Sigma^\Phi$ and $\Sigma_\Phi$ be the stable and unstable foliations of $M$. We remove from $M$ all singular flow loops to obtain the \emph{drilled space} $M^\circ$. We restrict $\Phi$ to $M^\circ$ to obtain $\Phi^\circ$. We form the universal cover $\cover{M^\circ}$ and lift both foliations. The \emph{leaf space} $\mathcal{L}$ is the quotient of $\cover{M^\circ}$ by the flow $\cover{\Phi^\circ}$. The lifted stable and unstable foliations descend to give $F^\mathcal{L}$ and $F_\mathcal{L}$. In future work, we will give a combinatorial proof that $\mathcal{L}$ is a loom space. See also recent work of Landry, Minsky, and Taylor~\cite[Section 4]{LMT21}. In addition, we will show that $\pi_1(M^\circ)$ lies in $\Aut(\mathcal{L})$ as a finite index subgroup. \end{example} \begin{wrapfigure}[19]{l}{0.27\textwidth} \vspace{-8pt} \centering \includegraphics[width = 0.25\textwidth]{Figures/veering_fig_8_v3} \caption{The veering triangulation for the figure-eight knot complement.} \label{Fig:FigEightTri} \end{wrapfigure} We also note that pseudo-Anosov flows and maps are closely related to \emph{expansive flows} and maps in dimensions three and two. These are defined by Bowen and Walters~\cite{BowenWalters72}. They give suspensions as a particular example in Section~4 of~\cite{BowenWalters72}. In their Theorem~6 they prove that $\Phi(f)$ is expansive if and only if $f$ is expansive. Removing the singular orbits of the pseudo-Anosov flow yields a manifold with torus boundary components. Our final examples are closely related to these \emph{drilled} flows, but are completely combinatorial. These rely on Agol's notion of a \emph{veering triangulation}~\cite[Definition~4.1]{Agol11}; see \refsec{Veering} for precise definitions and see \reffig{FigEightTri} for a concrete example. \begin{example} \label{Exa:Veering} Suppose that $M$ is a compact, connected, oriented three-manifold with $\bdy M$ a non-empty collection of tori. Suppose that $\mathcal{V}$ is a \emph{veering} triangulation of $M$: that is, an ideal triangulation of the interior of $M$ equipped with a taut structure and a veering colouring. In future work we will show that \begin{itemize} \item there is a canonical \emph{link space} $\mathcal{L}$ associated to the lift of $\mathcal{V}$ to the universal cover of $M$, \item $\mathcal{L}$ is a loom space, and \item $\pi_1(M)$ is finite index in $\Aut(\mathcal{L})$. \end{itemize} For more details see \refsec{Future}. \end{example} The overall goal of this paper is to provide the converse to \refexa{Veering}. In \refprop{Functorial}, from a given loom space, we build a \emph{locally veering} triangulation. In \refthm{ThreeSpace} we prove that the realisation of this triangulation is homeomorphic to $\mathbb{R}^3$. \begin{remark} \label{Rem:Aperiodic} All of the examples of loom spaces given above have large automorphism groups. It is interesting to contemplate how one might obtain a finitely described, yet aperiodic, loom space. \end{remark} \subsection{Skeletal rectangles} From now on, we will assume that $\mathcal{L}$, equipped with the foliations $F^\mathcal{L}$ and $F_\mathcal{L}$, is a loom space in the sense of \refdef{Loom}. 
For the next two definitions we choose orientations as in \refrem{Cardinal}. See \reffig{SkeletalRects} for the following definitions and lemmas. \begin{definition} \label{Def:EdgeRect} A rectangle $R$ in $\mathcal{L}$ is a \emph{red edge rectangle} if there is a continuous extension of $f_R$ to a homeomorphism \[ \closure{f}_R \from [0,1]^2 - \{(0,0), (1,1)\} \to \closure{R} \] An edge rectangle $R$ is \emph{blue} if the missing points are instead $(0,1)$ and $(1,0)$. \end{definition} \begin{definition} \label{Def:FaceRect} A rectangle $R$ in $\mathcal{L}$ is a \emph{south-west face rectangle} if there are $a, b \in (0,1)$ and a continuous extension of $f_R$ to a homeomorphism \[ \closure{f}_R \from [0,1]^2 - \{(0,0), (1,a), (b,1)\} \to \closure{R} \] We define the three other types of face rectangle similarly. \end{definition} \begin{lemma} \label{Lem:TetFaceEdge} \leavevmode \begin{itemize} \item Every tetrahedron rectangle contains exactly four face rectangles. \item Every tetrahedron rectangle contains exactly six edge rectangles. \item Every face rectangle contains exactly three edge rectangles. \end{itemize} \end{lemma} \begin{proof} Suppose that $R$ is a tetrahedron rectangle and let $f_R$ be the given parametrisation. There are at most four face rectangles in $R$; each meets three of the four sides of $R$. Let $(a, 0)$ and $(c, 1)$ be the missing points on the southern and northern sides. Appealing to \refdef{Loom}\refitm{Keane} and breaking symmetry, suppose that $a < c$. Then \[ F = f_R \left( \{ (x, y) \in (0, 1)^2 \st x > a \} \right) \] is one of the desired face rectangles. The remaining three are formed similarly. The other two statements are proved similarly. \end{proof} \begin{lemma} \label{Lem:FaceTet} Every face rectangle is contained in exactly two tetrahedron rectangles. \end{lemma} \begin{proof} Breaking symmetry, suppose that $F$ is a north-west face rectangle. See \reffig{TetPairingLoom}. Let $\delta_F$ be the northern side of $F$. Let $C$ be a small rectangle contained in $F$ and meeting both the north and west sides of $F$. Let $\delta_C \subset \delta_F$ be the northern side of $C$. Since $C \subset F$ we deduce that $C$ is a cusp rectangle and that $\delta_C$ is a cusp side. By \refdef{Loom}\refitm{Cusp} we have that $\delta_C$ is contained in a rectangle, say $D$. Note that $\epsilon = \delta_F - D$ is compact in $\mathcal{L}$. So we may cover $\epsilon$ by a finite collection of rectangles. We deduce that there is a rectangle $F'$ so that $F'$ contains both $F$ and $\delta_F$. We appeal to \refdef{Loom}\refitm{Tet} to obtain a tetrahedron rectangle $P$ containing $F'$. Repeating the argument with the western side of $F$ gives another tetrahedron rectangle $Q$ containing $F$. The leaf containing $\delta_F$ meets $P$ but does not meet $Q$. Hence $P$ and $Q$ are distinct. We now show that there are at most two tetrahedron rectangles containing $F$. Suppose that $R$ is any tetrahedron rectangle containing $F$. Then the southern and eastern sides of $R$ contain those of $F$. The north-western corner of $F$ is the missing point from the western or northern side of $R$. We deduce that $R$ is thus equal to $P$ or to $Q$. \end{proof} It is more difficult to prove that an edge rectangle $E$ is contained in only finitely many face rectangles. This is deferred to \refcor{EdgeFace}. \section{Cusps and corners} \label{Sec:Cusps} The \emph{cusps} of a loom space $\mathcal{L}$ provide the beginnings of a boundary at infinity for $\mathcal{L}$. 
This section provides the background needed for the statement of the astroid lemma. \begin{definition} \label{Def:Equivalent} Suppose that $R$ and $Q$ are cusp rectangles in $\mathcal{L}$. We say that $R$ is \emph{equivalent} to $Q$ if there is a finite sequence of cusp rectangles \[ (R = R_0, R_1, \ldots, R_n = Q) \] so that for each pair $(R_i, R_{i+1})$ some cusp side of one is contained in some cusp side of the other. \end{definition} \begin{definition} \label{Def:Cusp} A \emph{cusp} is an equivalence class of cusp rectangles. \end{definition} We refer to a representative cusp rectangle $R$ for the cusp $c = [R]$ as a \emph{cusp rectangle for $c$}. \begin{definition} \label{Def:CuspsOf} Suppose that $Q$ is a subset of $\mathcal{L}$. We say that $c$ is a \emph{cusp of $Q$} if some cusp rectangle $R$ for $c$ lies in $Q$. We define $\Delta(Q)$ to be the set of cusps of $Q$. \end{definition} Recall from \refdef{Sides} that a rectangle may have as many as four material corners. Cusps provide any remaining corners, as follows. \begin{definition} Suppose that $Q \subset \mathcal{L}$ is a rectangle. Suppose that $Q$ contains a cusp rectangle $R$ where the cusp sides of $R$ are contained in sides of $Q$. Then we call $c = [R]$ an \emph{ideal corner} of $Q$. \end{definition} \begin{lemma} \label{Lem:Corners} Every rectangle has four corners. At most two of these are ideal. \end{lemma} \begin{proof} Suppose that $R$ is the given rectangle. By \refdef{Loom}\refitm{Tet} we have that $R$ is contained in a tetrahedron rectangle. The result now follows from \refdef{Loom}\refitm{Keane}. \end{proof} \begin{definition} Fix a rectangle $R \subset \mathcal{L}$. Suppose that $x$ and $y$ are corners (material or ideal) of $R$. We say that $x$ and $y$ are \emph{adjacent} if they are incident to a common side of $R$. If $x$ and $y$ are not adjacent then they are \emph{opposite}. \end{definition} \begin{definition} Suppose that $\ell$ is a leaf of $F^\mathcal{L}$ or $F_\mathcal{L}$. Suppose that $R$ is a cusp rectangle in $\mathcal{L}$ with a cusp side $\delta$. If $\delta$ is contained in $\ell$ then we call $\ell$ a \emph{cusp leaf for $c = [R]$}. \end{definition} It follows that $\delta$, the cusp side of $R$, contains an end of the leaf $\ell$. \begin{remark} \label{Rem:CuspLeaves} Suppose that $c$ is a cusp. As in \refrem{Leaves}, the Poincar\'e--Hopf theorem implies that any two cusp leaves for $c$ are disjoint. \end{remark} \begin{lemma} \label{Lem:AtMostOne} Any leaf $\ell$ of $F^\mathcal{L}$ (or of $F_\mathcal{L}$) is a cusp leaf for at most one cusp. \end{lemma} \begin{proof} Suppose that $c$ and $d$ are distinct cusps of $\mathcal{L}$. Suppose for a contradiction that $\ell$ is a cusp leaf for both $c$ and $d$. Let $R$ and $Q$ be cusp rectangles at $c$ and $d$ with cusp sides $\gamma$ and $\delta$, both contained in $\ell$. If $\gamma$ and $\delta$ contain the same end of $\ell$ then $c = d$, contrary to assumption. Thus $\gamma$ and $\delta$ contain the two ends of $\ell$. By \refdef{Loom}\refitm{Cusp} there are rectangles $R'$ and $Q'$ that contain $\gamma$ and $\delta$ in their interior. We cover the (necessarily compact) interval $\ell - (R' \cup Q')$ by finitely many rectangles. We deduce that all of $\ell$ is contained in a single rectangle. By \refdef{Loom}\refitm{Tet}, this rectangle is contained in a tetrahedron rectangle. Appealing to \refdef{Loom}\refitm{Keane}, we arrive at the desired contradiction. \end{proof} \begin{lemma} \label{Lem:OneEdge} Suppose that $R$ is an edge rectangle.
Then the two cusps meeting $R$ are distinct. \end{lemma} \begin{proof} Let $x$ be an interior point of $R$. Let $\ell^x$ be the leaf of $F^\mathcal{L}$ containing $x$. Let $P$ and $Q$ be the two components of $R - \ell^x$. These are both cusp rectangles. Set $c = [P]$. Let $\ell^c$ and $m_c$ be the cusp leaves containing the cusp sides of $P$. Suppose that $P'$ is any cusp rectangle equivalent to $P$. Let \[ (P = P_0, P_1, \ldots, P_n = P') \] be a minimal sequence of cusp rectangles satisfying \refdef{Equivalent}. By \refrem{Leaves} one of the leaves $\ell^c$ or $m_c$ separates $P_1$ from $Q$. By minimality and induction, the same leaf separates $P_k$ from $Q$ for all $k>0$. In particular, no cusp rectangle equivalent to $P$ is equal to $Q$. Thus $[P] \neq [Q]$; that is, the two cusps meeting $R$ are distinct. \end{proof} \section{The astroid lemma} \label{Sec:Astroid} Here we prove the \emph{astroid lemma} (\reflem{Astroid}). This controls the projection of certain cusps to certain leaves of $F^\mathcal{L}$ and $F_\mathcal{L}$. \subsection{Staircases} Suppose that $x$ is a point or a cusp of $\mathcal{L}$. Fix any rectangle $R$ with a corner at $x$. Following Gu\'{e}ritaud~\cite[Section 4.3]{Gueritaud16}, we make the following definition. \begin{definition} \label{Def:Staircase} The \emph{staircase} $\stair(x, R)$ is the closure of the union of all rectangles $Q \subset \mathcal{L}$ where \begin{itemize} \item $x$ is a corner of $Q$ and \item $Q \cap R$ is non-empty. \qedhere \end{itemize} \end{definition} \noindent We often write $\stair(x) = \stair(x, R)$, suppressing the choice of $R$. \begin{figure}[htbp] \setlength{\abovecaptionskip}{20pt} \centering \labellist \small\hair 2pt \pinlabel $x$ [tr] at 0 0 \pinlabel $c$ [bl] at 140 75 \pinlabel $\pi_m(c)$ [t] at 140 0 \pinlabel $\pi^\ell(c)$ [r] at 0 75 \pinlabel $\ell$ [br] at 0 570 \pinlabel $c_m$ [t] at 350 0 \pinlabel $m$ [tl] at 570 0 \endlabellist \includegraphics[width=0.45\textwidth]{Figures/staircase} \caption{A staircase. Cusps are indicated with black dots. Labelled material points are indicated by yellow dots. In this example $x$ is a material point of $\mathcal{L}$ to the west of a cusp $c_m$, the axis cusp of the lower axis ray $m$.} \label{Fig:Staircase} \end{figure} \begin{definition} \label{Def:AxisRays} We take $m = m(x, R) \subset F_\mathcal{L}$ to be the union of the arcs $s$ in $F_\mathcal{L}$ so that there is a rectangle $Q$ so that \begin{itemize} \item $Q \subset \stair(x, R)$, \item $s$ is a side of $Q$, and \item $s$ has an endpoint at $x$. \end{itemize} We define $\ell = \ell(x, R) \subset F^\mathcal{L}$ similarly. We call $m$ and $\ell$ the \emph{lower} and \emph{upper axis rays}, respectively, for $\stair(x)$. \end{definition} \begin{lemma} \label{Lem:Initial} Suppose that $\stair(x) = \stair(x, R)$ is a staircase. Breaking symmetry, suppose that $x$ is the southwest corner of $R$. Suppose that $m'$ is an initial segment of the lower axis ray $m$. Then there is a rectangle $R'\subset \stair(x)$ so that \begin{itemize} \item $x$ is the southwest corner of $R'$ and \item $m'$ is the south side of $R'$. \end{itemize} \end{lemma} \begin{proof} Note that $x$ is either a point or a cusp of $\mathcal{L}$. Thus either using the fact that rectangles are a basis for the topology, or using \refdef{Loom}\refitm{Cusp}, there is an initial segment $m''$ of $m$ contained in a rectangle $Q$. If $m''$ contains $m'$ then we cut $Q$ using the axis rays to obtain $R'$. If not, there are two cases as the axis cusp $c_m$ is or is not contained in $m'$. If $c_m$ is not contained in $m'$ then the interval $m' - Q$ is compact and so covered by a rectangle $Q'$.
Cutting $Q \cup Q'$ by the axis rays and reducing their height gives the desired rectangle $R'$. If $c_m$ is contained in $m'$ then we apply \refdef{Loom}\refitm{Cusp} twice, and the remaining argument is as above. \end{proof} \begin{lemma} \label{Lem:BdyQ} Suppose that $x$ is the southwest corner of $R$. Then the lower axis ray $m$ has the following properties. \begin{itemize} \item Suppose $x$ is a cusp. Then $m$ is a cusp leaf. \item Suppose $x$ is not a cusp. Let $m_x$ be the leaf of $F_\mathcal{L}$ containing $x$. \begin{itemize}[label=$\circ$] \item If $m_x$ is a non-cusp leaf (or a cusp leaf emanating from the east of its cusp) then $m$ is the eastern component of $m_x - x$. \item If $m_x$ is a cusp leaf emanating from the west of its cusp, then let $c_m$ be its cusp. In this case $m - m_x$ is a cusp leaf emanating from the east of $c_m$. \end{itemize} \end{itemize} A similar statement holds when $x$ is one of the other corners of $R$. Similar statements also hold for the upper axis ray $\ell$. \end{lemma} \begin{remark} It follows that the lower axis ray $m$ is contained in a union of at most two leaves of $F_\mathcal{L}$. If two leaves are required then $x$ is a point of $\mathcal{L}$, on a cusp leaf containing a cusp side of $R$. In this case the two leaves share a cusp, namely $c_m$, which is distinct from $x$. When it exists, we call $c_m$ the \emph{axis cusp} for $m$. See \reffig{Staircase}. We make similar definitions for the upper axis ray $\ell$. \end{remark} \begin{proof}[Proof of \reflem{BdyQ}] Suppose that $x$ is a cusp. Shrinking $R$ slightly, we may assume that $R$ is a cusp rectangle for $x$. Let $m'$ be the cusp leaf to the east of $x$ which contains the southern side of $R$. We must show that the lower axis ray $m$ equals $m'$. Note $m$ is the union of connected sets (the southern sides of rectangles), all meeting a connected set (the southern side of $R$). Thus $m$ is connected. We deduce that $m$ is contained in $m'$. It remains to prove that $m'$ is contained in $m$. Let $m''$ be any closed initial segment of $m'$. By \reflem{Initial}, the segment $m''$ is the southern side of some rectangle $R'$, showing that $m''$ lies in $m$. We deduce that $m'$ is contained in $m$. The remaining cases are similar. \end{proof} \begin{definition} \label{Def:ExteriorCusps} A cusp $c$ is an \emph{exterior cusp} of $\stair(x)$ if there is a rectangle in $\stair(x)$ having $c$ and $x$ as opposite corners. We define $\dotDelta(\stair(x))$ to be the set of exterior cusps of $\stair(x)$. \end{definition} Note that $\dotDelta(\stair(x)) \subset \Delta(\stair(x))$. When $x$ is a cusp, or when axis cusps exist, the containment $\dotDelta(\stair(x)) \subset \Delta(\stair(x))$ is proper. See \reffig{Staircase}. Let $\bdy m$ denote the end of the lower axis ray $m$ which is not at $x$ (or at the axis cusp $c_m$, if it exists). Note that $\bdy m$ is not at a cusp by Lemmas~\ref{Lem:AtMostOne} and~\ref{Lem:BdyQ}. We define $\bdy \ell$ similarly. We define a pair of projections \[ \pi_m \from \dotDelta(\stair(x)) \to m \quad \mbox{and} \quad \pi^\ell \from \dotDelta(\stair(x)) \to \ell \] as follows. Suppose that $c \in \dotDelta(\stair(x))$ is an exterior cusp. Then there is a rectangle $Q \subset \stair(x)$ with opposite corners at $x$ and $c$. We define $\pi_m(c)$ and $\pi^\ell(c)$ to be the corners of $Q$, \emph{other than} $x$, lying on $m$ and $\ell$ respectively. See \reffig{Staircase}. \subsection{Statement and proof} We now control the images of the projections $\pi_m$ and $\pi^\ell$. 
\begin{lemma}[Astroid lemma] \label{Lem:Astroid} Suppose that $\stair(x)$ is a staircase in $\mathcal{L}$; suppose that $m$ and $\ell$ are its axis rays. \begin{enumerate} \item \label{Itm:DoesNot} The image of $\pi_m$ does not accumulate at any interior point of $m$ (nor does it accumulate at the axis cusp $c_m$, if present). \item \label{Itm:Does} The image of $\pi_m$ accumulates at $x$ and at $\bdy m$. \end{enumerate} Similar statements hold for $\pi^\ell$. \end{lemma} \begin{remark} \label{Rem:Others} Loom spaces associated to pseudo-Anosov homeomorphisms have various natural non-complete euclidean metrics. Each such metric has a definite injectivity radius; the astroid lemma is immediate in these cases. See~\cite[Lemma~14]{DelecroixUlcigrai15},~\cite[Figure~12]{Gueritaud16}, and~\cite[Figure~12]{Landry19}. Loom spaces associated to pseudo-Anosov flows (without perfect fits) on finite volume hyperbolic three-manifolds need not have a natural choice of metric. However, in this setting the action of the fundamental group still gives a local finiteness that can replace the lower bound on injectivity radius. This, in slightly different language, is carried out in~\cite[Section 4]{LMT21}. See in particular their Figure~18. \end{remark} \begin{proof}[Proof of \reflem{Astroid}] Breaking symmetry, suppose that $x$ is southwest of $\stair(x)$. Suppose that, in contradiction to \refitm{DoesNot}, there is a sequence of distinct exterior cusps $c_i \in \dotDelta(\stair(x))$ so that $r_i = \pi_m(c_i)$ accumulates at $r_\infty$, an interior point of $m$ (possibly the axis cusp $c_m$). Define $s_i = \pi^\ell(c_i)$. Let $R_i \subset \stair(x)$ be the rectangle with corners at $x$, $r_i$, $c_i$, and $s_i$. Since the $c_i$ are all distinct, by \refdef{Loom}\refitm{Keane} the points $r_i$ and $s_i$ are also all distinct. We orient $m$ and $\ell$ away from $x$. We pass to a subsequence of the $c_i$ to ensure that the sequence $(r_i)$ is strictly monotonic in $m$. Note that the rectangles $R_i$ cannot nest; we deduce that the sequence $(s_i)$ is also strictly monotonic in $\ell$. Likewise, exactly one of the sequences $(r_i)$ and $(s_i)$ is increasing while the other is decreasing. See \reffig{Astroid}. 
\begin{figure}[htbp] \captionsetup[subfloat]{captionskip=15pt} \subfloat[]{ \centering \labellist \small\hair 2pt \pinlabel $x$ [tr] at 0 0 \pinlabel $\ell$ [b] at 7 590 \pinlabel $m$ [l] at 590 7 \pinlabel $c_1$ [bl] at 190 530 \pinlabel $r_1$ [t] at 190 0 \pinlabel $s_1$ [Br] at 5 530 \pinlabel $c_2$ [bl] at 280 420 \pinlabel $r_2$ [t] at 280 0 \pinlabel $s_2$ [Br] at 5 420 \pinlabel $c_3$ [bl] at 335 352 \pinlabel $r_3$ [t] at 335 0 \pinlabel $s_3$ [Br] at 5 352 \pinlabel $c_\infty$ [l] at 505 265 \pinlabel $r_\infty$ [t] at 518 0 \pinlabel $s_\infty$ [Br] at 5 265 \pinlabel $m_\infty$ [t] at 100 265 \pinlabel $P$ [t] at 445 245 \endlabellist \includegraphics[width=0.4\textwidth]{Figures/astroid1} \label{Fig:AstroidIncreasing} } \qquad \subfloat[]{ \centering \labellist \small\hair 2pt \pinlabel $x$ [tr] at 0 0 \pinlabel $\ell$ [b] at 7 590 \pinlabel $m$ [l] at 590 7 \pinlabel $\ell_\infty$ [b] at 190 583 \pinlabel $r_\infty$ [t] at 190 0 \pinlabel $c_4$ [bl] at 284 420 \pinlabel $r_4$ [t] at 284 0 \pinlabel $s_4$ [Br] at 5 420 \pinlabel $c_3$ [bl] at 327 353 \pinlabel $r_3$ [t] at 327 0 \pinlabel $s_3$ [Br] at 5 353 \pinlabel $c_2$ [bl] at 404 263 \pinlabel $r_2$ [t] at 404 0 \pinlabel $s_2$ [Br] at 5 263 \pinlabel $c_1$ [bl] at 515 177 \pinlabel $r_1$ [t] at 515 0 \pinlabel $s_1$ [Br] at 5 177 \pinlabel $R'_\infty$ at 100 505 \endlabellist \includegraphics[width=0.4\textwidth]{Figures/astroid2} \label{Fig:AstroidDecreasing} } \caption{The two possibilities as $r_i = \pi_m(c_i)$ is increasing or decreasing along $m$.} \label{Fig:Astroid} \end{figure} We break the proof into two cases. \begin{case*} Suppose that $(r_i)$ is increasing in $m$. \end{case*} \noindent Thus $(s_i)$ is decreasing along $\ell$. See \reffig{AstroidIncreasing}. Since $m$ can be realised as an increasing union of southern sides of rectangles, by \reflem{BdyQ} we have a rectangle $Q$ in the staircase with corners at $x$ and $r_\infty$. Thus the points $s_i$ do not enter the interior of the western side of $Q$ (for otherwise the two cusp leaves at $c_i$ would cross inside $Q$, contradicting \refrem{CuspLeaves}). That is, the sequence $(s_i)$ is bounded away from $x$ in $\ell$. Thus there is some $s_\infty$ where they accumulate. Note that $s_\infty$ may be $c_\ell$, the axis cusp of $\ell$. Let $m_\infty$ be the ray of $F_\mathcal{L}$ emanating from $s_\infty$ and entering $\stair(x)$. Recall that $R_i$ is the rectangle with opposite corners at $x$ and $c_i$. Define $R'_i$ to be the component of $R_i - m_\infty$ with a corner at $x$. Define $R'_\infty$ to be the increasing union of the $R'_i$. Thus $R'_\infty$ is a rectangle. Let $c_\infty$ be its northeastern corner. By \reflem{Corners} we have that $c_\infty$ is a point or a cusp of $\mathcal{L}$. In either case (appealing to \refdef{Loom}\refitm{Cusp} if needed) there is a small rectangle $P$ so that \begin{itemize} \item $c_\infty$ lies in the interior of the eastern side of $P$ and \item the interior of $P$ meets $m_\infty$. \end{itemize} However, the projections $\pi^\ell(c_i) = s_i$ and $\pi_m(c_i) = r_i$ accumulate on $s_\infty \in \ell$ and $r_\infty \in m$ respectively. Thus the $c_i$ enter $P$, a contradiction. Again, see \reffig{AstroidIncreasing}. \begin{case*} Suppose that $(r_i)$ is decreasing along $m$. \end{case*} \noindent Thus $(s_i)$ is increasing along $\ell$. See \reffig{AstroidDecreasing}. Let $\ell_\infty$ be the ray of $F^\mathcal{L}$ emanating from $r_\infty$ and entering $\stair(x)$. Define $R'_i$ to be the component of $R_i - \ell_\infty$ with a corner at $x$. Define $R'_\infty$ to be the union of the $R'_i$. Again, $R'_\infty$ is a rectangle.
By \refdef{Loom}\refitm{Tet} there is a tetrahedron rectangle $Q$ containing $R'_\infty$. The north side of $Q$ meets $\ell$ and gives an upper bound for the $s_i$. We now apply the previous argument, swapping the roles of $m$ and $\ell$. This completes the proof of \refitm{DoesNot}. To prove \refitm{Does} we must find a sequence of exterior cusps $c_i \in \dotDelta(\stair(x))$ whose projections $\pi_m(c_i)$ accumulate at $\bdy m$ and whose projections $\pi^\ell(c_i)$ accumulate at $x$. Let $(m_i)$ be an increasing sequence of open initial segments of $m$, whose union is $m$. By \reflem{Initial}, there is a rectangle $R_i$ with southwest corner at $x$ and whose southern side is $m_i$. Let $Q_i$ be the union of all rectangles $Q$ so that \begin{itemize} \item $Q$ contains $R_i$ and \item the west sides of $Q$ and $R_i$ are identical. \end{itemize} Since $Q_i$ is a rectangle, by \refdef{Loom}\refitm{Tet} we have a tetrahedron rectangle, $R_i'$, containing $Q_i$. Thus there is a cusp $c_i$ contained in the east side of $R_i'$. Note that $c_i$ is north of $m$, by the construction of $Q_i$. So there is a rectangle in $\stair(x)$ with opposite corners at $x$ and $c_i$; thus $c_i$ is an exterior cusp of $\stair(x)$. By construction, the projection $\pi_m(c_i)$ is not contained in $m_i$. Since the $m_i$ exhaust $m$, the sequence of projections accumulates on $\bdy m$. It follows that the sequence of projections $\pi^\ell(c_i)$ is decreasing in $\ell$. By \refitm{DoesNot}, the sequence $\pi^\ell(c_i)$ tends to $x$. This proves \refitm{Does} for $m$; the proof for $\ell$ is similar. \end{proof} We record a few consequences of the astroid lemma. \begin{corollary} \label{Cor:CuspLeavesDense} The cusp leaves are dense in $F^\mathcal{L}$ and $F_\mathcal{L}$. \qed \end{corollary} See also~\cite[Lemma 4.2(1)]{LMT21}. \begin{corollary} \label{Cor:EdgesOnStair} Suppose that $c$ and $d$ are exterior cusps for $\stair(x)$. Suppose their projections to $\ell^x$ (or $m_x$) are consecutive. (That is, not separated by the image of any other exterior cusp.) Then there is an edge rectangle $R \subset \stair(x)$ having $c$ and $d$ as opposite corners. \qed \end{corollary} \subsection{Finiteness and connectedness} \begin{lemma} \label{Lem:Finite} For any rectangle $R$ there are only finitely many tetrahedron rectangles containing it. \end{lemma} \begin{proof} Suppose that $R$ is the given rectangle. Let $x$ be the south-west corner of $R$. Let $\stair(x) = \stair(x, R)$ be the resulting staircase; let $m$ and $\ell$ be its axis rays. \begin{figure}[htbp] \centering \labellist \small\hair 2pt \pinlabel $x$ [tr] at 0 0 \pinlabel $R$ at 95 90 \pinlabel $\ell$ [b] at 7 590 \pinlabel $m$ [l] at 590 7 \pinlabel $c$ [b] at 150 555 \pinlabel $d$ [l] at 552 145 \endlabellist \includegraphics[width=0.4\textwidth]{Figures/maximise_rectangle} \caption{The staircase $\stair(x, R)$.} \label{Fig:MaximiseRectangle} \end{figure} Let $m_R \subset m$ be the projection of $R$ to $m$, along $F^\mathcal{L}$. Similarly, let $\ell^R \subset \ell$ be the projection of $R$ to $\ell$, along $F_\mathcal{L}$. See \reffig{MaximiseRectangle}. By \reflem{Astroid}\refitm{Does}, there are exterior cusps $c$ and $d$ in $\dotDelta(\stair(x))$ so that $\pi_m(c)$ lies in $m_R$ and $\pi^\ell(d)$ lies in $\ell^R$. By \reflem{Astroid}\refitm{DoesNot}, there are only finitely many cusps $c' \in \dotDelta(\stair(x))$ so that $\pi_m(c')$ lies between $\pi_m(c)$ and $\pi_m(d)$.
Furthermore, we may replace $x$ with any other corner of $R$ and perform the same analysis in the corresponding staircase. This determines a finite collection of cusps. A tetrahedron rectangle is determined by the cusps in its four sides. This gives the desired bound. \end{proof} \begin{corollary} \label{Cor:EdgeFace} Any edge rectangle is contained in only finitely many face rectangles. \end{corollary} \begin{proof} This follows from Lemmas \ref{Lem:Finite} and \ref{Lem:TetFaceEdge}. \end{proof} Before giving the next result we require two definitions. \begin{definition} \label{Def:FaceAdjacent} Suppose that $P$ and $Q$ are distinct tetrahedron rectangles. We say that $P$ and $Q$ are \emph{face adjacent} if their intersection, $P \cap Q$, is a face rectangle. In general, we say that two tetrahedron rectangles $P$ and $Q$ are \emph{face connected} if there is a finite sequence $(P = P_0, P_1, \ldots, P_n = Q)$ of tetrahedron rectangles where $P_i$ and $P_{i+1}$ are face adjacent for all $i$. \end{definition} \begin{wrapfigure}[11]{r}{0.45\textwidth} \vspace{-3pt} \centering \labellist \small\hair 2pt \pinlabel $R$ at 215 190 \endlabellist \includegraphics[width=0.37\textwidth]{Figures/two_tet_rects_general} \caption{A possible picture for \reflem{Ascend}.} \label{Fig:Ascend} \end{wrapfigure} Note that every tetrahedron rectangle is face connected to itself. \begin{definition} \label{Def:Spans} Suppose that $P$ and $Q$ are rectangles of $\mathcal{L}$. We say that $P$ \emph{west-east spans} $Q$ if there is a leaf of the induced foliation $F_Q$ that is contained in $P$. We say that $P$ \emph{properly} west-east spans $Q$ if, additionally, $P - Q$ has two components. We define \emph{south-north spans} and \emph{properly south-north spans} similarly. \end{definition} \begin{remark} \label{Rem:Spans} Note that the definition of west-east spans is independent of any choices of orientation made as in \refrem{Cardinal}. \end{remark} \begin{lemma} \label{Lem:Ascend} Suppose that $P$ and $Q$ are tetrahedron rectangles. Suppose that $P$ west-east spans $Q$. Then there is a sequence of tetrahedron rectangles $(P = P_0, P_1, \ldots, P_n = Q)$ so that \begin{itemize} \item $P_i$ is face adjacent to $P_{i+1}$ and \item $P_i$ west-east spans $P_{i+1}$. \end{itemize} \end{lemma} \begin{proof} Let $R = P \cap Q$. We define $\operatorname{\mathsf{tet}}(R)$ to be the set of tetrahedron rectangles that contain $R$. By \reflem{Finite} the set $\operatorname{\mathsf{tet}}(R)$ is finite. We induct on the size of $\operatorname{\mathsf{tet}}(R)$. In the base case $R = P = Q$, so $\operatorname{\mathsf{tet}}(R)$ has exactly one element and there is nothing to prove. In general, let $F$ be any face rectangle of $P$ that south-north spans $P$ and contains $R$. Applying \reflem{FaceTet} there is exactly one tetrahedron rectangle $P'$ that is, via $F$, face adjacent to $P$. Note that $P$ west-east spans $P'$ which in turn west-east spans $Q$. See \reffig{Ascend}. Here the widest rectangle is $P$, the tallest is $Q$, and the remaining tetrahedron rectangle is $P'$. Set $R' = P' \cap Q$ and note that $R \subset R'$. Thus $\operatorname{\mathsf{tet}}(R') \subset \operatorname{\mathsf{tet}}(R)$. Furthermore, $P$ is an element of $\operatorname{\mathsf{tet}}(R)$ but is not an element of $\operatorname{\mathsf{tet}}(R')$. The induction hypothesis now implies that $P'$ is face connected to $Q$, using only the tetrahedra in $\operatorname{\mathsf{tet}}(R')$, completing the proof. 
\end{proof} \begin{proposition} \label{Prop:FaceConnected} The set of tetrahedron rectangles of $\mathcal{L}$ is face connected. \end{proposition} \begin{proof} Suppose that $P$ and $Q$ are tetrahedron rectangles. Choose an arc $\gamma \subset \mathcal{L}$ connecting a point of $P$ to a point of $Q$. Note that the open rectangles give a basis for the topology of $\mathcal{L}$. Also, $\gamma$ is compact. Thus $\gamma$ admits a finite covering by rectangles. By \refdef{Loom}\refitm{Tet} the arc $\gamma$ is covered by a finite collection of tetrahedron rectangles. Thus we are reduced to the case where $P$ and $Q$ intersect. Let $R = P \cap Q$. Let $R'$ be the rectangle so that \begin{itemize} \item $R'$ contains $R$, \item the west and east sides of $R'$ contain, respectively, the west and east sides of $R$, and \item $R'$ is maximal with respect to the above two properties. \end{itemize} From \refdef{Loom}\refitm{Tet} we deduce that $R'$ is a tetrahedron rectangle. From the construction we deduce that both $P$ and $Q$ west-east span $R'$ (and perhaps one or both equal $R'$). The proposition now follows from two applications of \reflem{Ascend}. \end{proof} We deduce the following. \begin{corollary} \label{Cor:Countable} There are countably infinitely many tetrahedron rectangles. Thus the same holds for face rectangles, edge rectangles, cusps, and cusp leaves. \qed \end{corollary} This implies that there are only countably many cusp leaves. We deduce the following. \begin{corollary} \label{Cor:NonCuspLeavesDense} The non-cusp leaves are dense in $F^\mathcal{L}$ and $F_\mathcal{L}$. \qed \end{corollary} \section{Locally veering triangulations} \label{Sec:Veering} In this section we review several combinatorial structures on triangulations of three-manifolds. We introduce the notions of \emph{taut isomorphisms} and \emph{locally veering triangulations}. We then follow Gu\'eritaud~\cite[Section~2]{Gueritaud16} to construct a locally veering triangulation from a loom space. In \refprop{Functorial} we show that this construction is functorial. \subsection{Definitions} A useful example for the first several definitions is the canonical triangulation of the figure-eight knot complement. See \reffig{FigEightTri}. \begin{wrapfigure}[18]{r}{0.3\textwidth} \vspace{-25pt} \captionsetup[subfloat]{captionskip=15pt} \centering \subfloat[Cusped model tetrahedron.]{ \labellist \small\hair 2pt \pinlabel {$v_0$} [tr] at 2 4 \pinlabel {$v_1$} [tl] at 292 96 \pinlabel {$v_2$} [bl] at 343 330 \pinlabel {$v_3$} [br] at 35 352 \endlabellist \centering \includegraphics[width = 0.2\textwidth]{Figures/model_tet} \label{Fig:Model} } \subfloat[Face pairing.]{ \centering \includegraphics[width = 0.25\textwidth]{Figures/glue_model_tetrahedra} \label{Fig:FacePairing} } \caption{} \label{Fig:Models} \end{wrapfigure} Let \[ \displaywidth=\parshapelength\numexpr\prevgraf+2\relax t^3 = \left\{ x \in \mathbb{R}^4 \st \mbox{$x_i \geq 0$ and $\sum x_i = 1$} \right\} \] be the \emph{standard} tetrahedron. This is equipped with the subspace topology. Note that the vertices of $t^3$ are the standard unit vectors. Their usual ordering gives an orientation to $t^3$. A copy $t$ of $t^3$ is called a \emph{model} tetrahedron. The facets (faces, edges, and vertices) of $t$ are called \emph{model facets}. Note that $t$ also inherits an orientation. See \reffig{Model}. Suppose that $t$ and $t'$ are model tetrahedra (which may be equal). Suppose that $f$ and $f'$ are faces of $t$ and $t'$, respectively.
Suppose that $\phi \from f \to f'$ is a homeomorphism induced by restricting an affine map. We call $\phi$ a \emph{face pairing}. See \reffig{FacePairing}. Essentially following~\cite[Section~4.2]{Thurston78}, we define an \emph{ideal triangulation} $\mathcal{T} = (\{t_\alpha\}, \{\phi_\beta\})$ to be a collection of model tetrahedra and a collection of face pairings. The \emph{realisation} of $\mathcal{T}$, denoted $|\mathcal{T}|$, is the topological space obtained as follows. \begin{itemize} \item Take the disjoint union of the model tetrahedra. \item Quotient by the face pairings. \item Remove the zero-skeleton of the result. \end{itemize} The \emph{realisation} of a model facet of $\mathcal{T}$ is its image in $|\mathcal{T}|$. The \emph{models} of a realised facet in $|\mathcal{T}|$ are its preimages in $\mathcal{T}$. In order to ensure that $|\mathcal{T}|$ is a three-manifold we require the following. \begin{itemize} \item If $\phi$ is a face pairing then so is $\phi^{-1}$. \item Every model face occurs in exactly two face pairings. \item No face is paired with itself. \item Every edge has only finitely many models. \item The models of a single edge can be consistently oriented. \end{itemize} A \emph{taut} structure on a model tetrahedron $t$ is an assignment of dihedral angle of either zero or $\pi$ to the model edges of $t$. The dihedral angles are required to satisfy the following. \begin{itemize} \item Suppose that $v$ is a model vertex of $t$. Suppose that $e, e', e''$ are the model edges of $t$ adjacent to $v$. Then the sum of their dihedral angles is $\pi$. \end{itemize} In a taut tetrahedron, the edges with dihedral angle zero are called \emph{equatorial} while the edges with dihedral angle $\pi$ are called \emph{diagonal}. See \reffig{VeeringTet}. Following~\cite[Definition~1.1]{HRST11} (see also~\cite[page~370]{Lackenby00}) we say an ideal triangulation $\mathcal{T}$ is \emph{taut} if all model tetrahedra are taut and we moreover have the following. \begin{itemize} \item Suppose that $e$ is an edge of $|\mathcal{T}|$. Then the dihedral angles of the models of $e$ sum to $2\pi$. \end{itemize} \begin{definition} \label{Def:TautIsom} Suppose that $\mathcal{T}$ and $\mathcal{S}$ are taut ideal triangulations. If $f \from \mathcal{T} \to \mathcal{S}$ is an isomorphism of triangulations, and sends the taut structure on $\mathcal{T}$ to that on $\mathcal{S}$, then we call $f$ a \emph{taut isomorphism}. \end{definition} Again, taut isomorphisms compose in the usual way. Thus taut triangulations, together with taut isomorphisms, form a category denoted $\operatorname{\mathsf{Taut}}$. We use $\Isom(\mathcal{T}, \mathcal{S})$ to denote the set of taut isomorphisms from $\mathcal{T}$ to $\mathcal{S}$; we use $\Aut(\mathcal{T})$ to denote the group of taut automorphisms. It is an exercise to check that the taut automorphism group of the triangulation of the figure-eight knot complement, shown in \reffig{FigEightTri}, is isomorphic to the symmetries of the square. \begin{wrapfigure}[14]{r}{0.33\textwidth} \centering \labellist \small\hair 2pt \pinlabel {$0$} [tr] at 69 38 \pinlabel {$0$} [tl] at 165 54 \pinlabel {$0$} [bl] at 138 154 \pinlabel {$0$} [br] at 41 135 \pinlabel \textcolor{mygray}{$\pi$} [t] at 53 79 \pinlabel {$\pi$} [r] at 101 161 \endlabellist \vspace{-12pt} \includegraphics[width=0.3\textwidth]{Figures/VeeringTet} \caption{A veering tetrahedron. 
The $\pi$ angle edges may be either red or blue.} \label{Fig:VeeringTet} \end{wrapfigure} An oriented taut tetrahedron $t$ is \emph{veering} if there is a bi-colouring (by red and blue) of the model edges as follows. \begin{itemize} \item Suppose that, at a model vertex $v$, we have adjacent model edges $e, e', e''$. Suppose that this is the anticlockwise ordering on the edges, as viewed from outside of $t$ and using the induced orientation on $\bdy t$. Then if $e$ has dihedral angle $\pi$ we have that $e'$ is blue and $e''$ is red. \end{itemize} See \reffig{VeeringTet}. Suppose now that we have fixed an orientation of a taut ideal triangulation $\mathcal{V}$. Following~\cite[Definition~1.3]{HRST11} (see also~\cite[Definition~4.1]{Agol11}) we define a \emph{veering} structure on $\mathcal{V}$ to be a colouring (by red and blue) of the edges of $|\mathcal{V}|$ that pulls back to give veering structures on all of the oriented model taut tetrahedra. For an example, see \reffig{FigEightTri}. We now turn to generalising veering triangulations to non-orientable manifolds. Suppose that $M$ is a three-manifold. Suppose that $\mathcal{T}$ is a taut ideal triangulation of $M$. Suppose that $e$ is an edge of $\mathcal{T}$. Let $(e_i)$ be the collection of models of $e$. We order the $e_i$ cyclically, as we walk about $e$ in $M$. For each model edge $e_i$ let $t_i$ be a copy of the model tetrahedron containing $e_i$. For each $i$ let $\phi_i$ be the face pairing, from $f_i \subset t_i$ to $g_i \subset t_{i+1}$, so that $\phi_i(e_i) = e_{i+1}$. We define $\mathcal{T}_e = (\{t_i\}, \{\phi_i\})$ to be the \emph{model edge neighbourhood} of $e$ in $\mathcal{T}$. Note that $\mathcal{T}_e$ inherits a taut structure from $\mathcal{T}$. Also, its realisation $|\mathcal{T}_e|$ is a three-ball. Choose an orientation on $|\mathcal{T}_e|$. We say that $\mathcal{T}$ is \emph{veering at $e$} if $\mathcal{T}_e$ admits a veering colouring. \begin{definition} \label{Def:LocallyVeering} Suppose that $M$ is a three-manifold. Suppose that $\mathcal{T}$ is a taut ideal triangulation of $M$. Then $\mathcal{T}$ is \emph{locally veering} if $\mathcal{T}$ is veering at every edge. \end{definition} \begin{example} \label{Exa:Gieseking} The Gieseking manifold $M_G$ can be obtained as a punctured torus bundle; the monodromy is the matrix $G$ given in \refexa{AnosovMap}. Thus $M_G$ admits a taut ideal triangulation $\mathcal{V}$ with a single tetrahedron. The orientation double cover of $M_G$ is the figure-eight knot complement; the taut triangulation $\mathcal{V}$ lifts to give the one shown in \reffig{FigEightTri}. Thus $\mathcal{V}$ is locally veering. \end{example} Locally veering triangulations are ``almost'' veering, in the following sense. \begin{proposition} \label{Prop:LocallyVeering} Suppose that $M$ is a three-manifold. Suppose that $\mathcal{V}$ is a taut ideal triangulation of $M$. Suppose that $(M, \mathcal{V})$ is locally veering. Then $(M, \mathcal{V})$ admits a veering structure if and only if $M$ is orientable. \qed \end{proposition} We use $\operatorname{\mathsf{Veer}}$ to denote the full subcategory of $\operatorname{\mathsf{Taut}}$ consisting of locally veering triangulations. We use $\operatorname{\mathsf{Veer}}(\mathbb{R}^3)$ to denote the further full subcategory of those triangulations whose realisation is homeomorphic to $\mathbb{R}^3$. \subsection{Building the triangulation} \label{Sec:Construction} We give a (metric-free) version of Gu\'eritaud's construction~\cite[Section~2]{Gueritaud16}.
That is, for every loom space $\mathcal{L}$ we give a locally veering triangulation $\mathcal{V} = \veer(\mathcal{L})$. \begin{definition} \label{Def:InducedTriangulation} Suppose that $\mathcal{L}$ is a loom space. The \emph{induced} triangulation $\veer(\mathcal{L})$ has as its model tetrahedra \[ \{ t_P \st \mbox{$P$ is a tetrahedron rectangle of $\mathcal{L}$} \} \] We identify the vertices of $t_P$ with the four cusps on the sides of $P$. These are distinct by \reflem{OneEdge}. Suppose that $R$ is a face rectangle. By \reflem{FaceTet} the rectangle $R$ is contained in exactly two tetrahedron rectangles, say $P$ and $Q$. The five combined cusps of $P$ and $Q$ intersect in exactly the three cusps of $R$. This determines a face pairing $\phi_R$ between a face of $t_P$ and a face of $t_Q$. Finally, the face pairings of $\veer(\mathcal{L})$ are \[ \{ \phi_R \st \mbox{$R$ is a face rectangle of $\mathcal{L}$} \} \qedhere \] \end{definition} \begin{definition} \label{Def:InducedMap} Suppose that $f \from \mathcal{L} \to \mathcal{M}$ is a loom isomorphism. We define the \emph{induced} map $\veer_f \from \veer(\mathcal{L}) \to \veer(\mathcal{M})$ as follows. Suppose that $P \subset \mathcal{L}$ is a skeletal rectangle. Let $c_P$ be the corresponding cell of $\veer(\mathcal{L})$; so $c_P$ is either an edge, face, or tetrahedron. We take $\veer_f(c_P) = c_{f(P)}$. \end{definition} We also use the notation $\veer(f)$ for $\veer_f$. \subsection{The induced triangulation} \label{Sec:InducedTriangulation} We present a few properties of induced triangulations. \begin{lemma} \label{Lem:Manifold} Suppose that $\mathcal{L}$ is a loom space. Let $\mathcal{V} = \veer(\mathcal{L})$ be the induced triangulation. Then the realisation $|\mathcal{V}|$ is a non-compact, connected three-manifold. Furthermore $|\mathcal{V}|$ is orientable. \end{lemma} \begin{proof} By \refcor{Countable} the triangulation $\veer(\mathcal{L})$ has infinitely many model tetrahedra. We deduce that $|\mathcal{V}|$ is non-compact. By \refprop{FaceConnected} we have that $|\mathcal{V}|$ is connected. By \reflem{FaceTet} we have that every face of $|\mathcal{V}|$ meets exactly two tetrahedra of $|\mathcal{V}|$. By \refcor{EdgeFace} we have that every edge of $|\mathcal{V}|$ meets finitely many faces, and thus finitely many tetrahedra, of $|\mathcal{V}|$. Since tetrahedron rectangles are embedded in $\mathcal{L}$, no tetrahedron is glued to itself. Recall also that we removed the zero-skeleton from $|\mathcal{V}|$. We deduce that $|\mathcal{V}|$ is a non-compact, connected topological space which is a three-manifold away from the midpoints of edges. \begin{figure}[htbp] \centering \subfloat[The face rectangle $R$ is shaded.]{ \includegraphics[width=0.3\textwidth]{Figures/tet_pairing_loom} \label{Fig:TetPairingLoom} } \qquad \subfloat[The face $f_R$ is shaded.]{ \includegraphics[width=0.3\textwidth]{Figures/tet_pairing_tri} \label{Fig:TetPairingTri} } \caption{The induced orientations on the shared face are opposite.} \label{Fig:TetPairing} \end{figure} As in \refrem{Cardinal}, we fix orientations of $F^\mathcal{L}$ and $F_\mathcal{L}$. Ordering $F^\mathcal{L}$ before $F_\mathcal{L}$, these orientations determine an orientation of $\mathcal{L}$ as well as the cardinal directions south, east, north, and west. Suppose that $P$ is a tetrahedron rectangle in $\mathcal{L}$. We order the sides of $P$ according to their direction: first south, then east, north, and west. 
This induces an ordering on the model vertices of $t_P$ and thus induces an orientation on $t_P$. Suppose that $Q$ is another tetrahedron rectangle which is face adjacent to $P$. Suppose that $R = P \cap Q$ is the shared face rectangle. We note that the face pairing $\phi_R$ reverses orientation. See \reffig{TetPairing}. Thus the midpoints of edges are also manifold points of $|\mathcal{V}|$; also our choices of orientations above determine an orientation of $|\mathcal{V}|$. \end{proof} \begin{wrapfigure}[9]{r}{0.36\textwidth} \centering \labellist \small\hair 2pt \pinlabel {$a$} [r] at 1 113 \pinlabel {$b$} [l] at 347 202 \endlabellist \vspace{-2pt} \includegraphics[width=0.25\textwidth]{Figures/edge_rectangle_top_bottom_tet_rectangles} \caption{The small rectangular neighbourhoods of $m_a$ and $m_b$ are shaded.} \label{Fig:EdgeTopBottom} \end{wrapfigure} Before discussing the induced taut structure on $\veer(\mathcal{L})$ we require a lemma. \begin{lemma} \label{Lem:OneUpOneDown} Suppose that $e$ is an edge of $\veer(\mathcal{L})$. Then there is a unique tetrahedron rectangle $R^e$ containing $R(e)$ and south-north spanning $R(e)$. Similarly, there is a unique tetrahedron rectangle $R_e$ containing $R(e)$ and west-east spanning $R(e)$. The same holds replacing $e$ by a face $f$ of $\veer(\mathcal{L})$. \end{lemma} \begin{proof} Let $a$ and $b$ be the cusps meeting $R(e)$. Let $m_a, \ell^a, m_b, \ell^b$ be the sides of $R(e)$, contained in the associated cusp leaves. Now let $R^e$ be the union of all rectangles that contain both $m_a$ and $m_b$. Note that every rectangle in this union south-north spans $R(e)$. Appealing to \refdef{Loom}\refitm{Cusp} twice, $R^e$ is non-empty. Since $R^e$ can be given as an increasing union of rectangles, it is itself a rectangle. Since $R^e$ is maximal, it is a tetrahedron rectangle, by \refdef{Loom}\refitm{Tet}. See \reffig{EdgeTopBottom}. The same construction, using instead $\ell^a$ and $\ell^b$ produces the tetrahedron rectangle $R_e$. Finally, suppose that $R$ is a tetrahedron rectangle containing $R(e)$, and either south-north or west-east spanning $R(e)$. Thus $R$ is either contained in $R^e$ or in $R_e$, and we are done. The proof is similar and simpler when $e$ is replaced by a face $f$. \end{proof} We now define the \emph{induced} dihedral angle assignment on $\mathcal{V} = \veer(\mathcal{L})$. Suppose that $P$ is a tetrahedron rectangle in $\mathcal{L}$; let $t_P$ be the corresponding model tetrahedron. By \reflem{TetFaceEdge} there are six edge rectangles in $P$. By \refdef{TetRect}, exactly two of these span $P$. We give $t_P$ a taut structure as follows. If $e$ is a model edge of $t_P$ then it receives dihedral angle $\pi$ or zero exactly as $R(e)$ does or does not span $P$. From \reflem{OneUpOneDown} we deduce the following. \begin{corollary} \label{Cor:Taut} The induced dihedral angle assignment on $\veer(\mathcal{L})$ is a taut structure. \qed \end{corollary} We now define an \emph{induced} colouring of the one-skeleton of $\veer(\mathcal{L})$. We colour an edge $e$ of $\veer(\mathcal{L})$ the same as its edge rectangle $R(e)$ according to \refdef{EdgeRect}. Consulting \reffig{VeeringTet}, this gives us the following. \begin{corollary} \label{Cor:Veering} Orienting the foliations $F^\mathcal{L}$ and $F_\mathcal{L}$ induces a veering structure on $\veer(\mathcal{L})$. Thus $\veer(\mathcal{L})$ is locally veering. \qed \end{corollary} \subsection{Functorial} \label{Sec:Functorial} We summarise this section as follows. 
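In outline, the data assembled above consists of one model tetrahedron for each tetrahedron rectangle, with vertices its four cusps; one face pairing for each face rectangle, matching the three shared cusps; a dihedral angle of $\pi$ on each spanning edge rectangle and zero elsewhere; and a colour on every edge. The hedged Python sketch below records this bookkeeping on invented data: the cusp labels and the particular pair of face-adjacent tetrahedron rectangles are hypothetical, and we take for granted that the spanning edge rectangles are exactly those whose cusps lie on opposite sides of the tetrahedron rectangle. The sketch plays no role in the proofs.

\begin{verbatim}
# A tetrahedron rectangle is recorded by its four cusps, in the order
# (south, east, north, west); a face rectangle is recorded by its three cusps
# together with the two tetrahedron rectangles containing it.
tet_rects = {
    "P": ("a", "b", "c", "d"),   # cusps on the S, E, N, W sides of P
    "Q": ("e", "b", "c", "a"),   # face adjacent to P across the cusps a, b, c
}
face_rects = [
    (frozenset({"a", "b", "c"}), ("P", "Q")),   # the shared face rectangle
]

def model_tetrahedron(name):
    # Vertices are the four cusps; the S-N and E-W cusp pairs give the two
    # spanning edge rectangles, hence the two edges of dihedral angle pi.
    s, e, n, w = tet_rects[name]
    return {"vertices": {s, e, n, w},
            "pi_edges": [frozenset({s, n}), frozenset({e, w})]}

def face_pairing(triple, pair):
    # The face pairing glues, in each of the two model tetrahedra, the face
    # spanned by the three shared cusps, matching vertices by cusp label.
    return {"tetrahedra": pair, "vertex_matching": {v: v for v in triple}}

for name in tet_rects:
    mt = model_tetrahedron(name)
    print(name, sorted(mt["vertices"]), [sorted(p) for p in mt["pi_edges"]])
for triple, pair in face_rects:
    print("glue", pair[0], "to", pair[1], ":", face_pairing(triple, pair))
\end{verbatim}

\noindent In the construction itself this input is of course produced by the loom space: each face rectangle and the two tetrahedron rectangles containing it are supplied by \reflem{FaceTet}, and \refcor{EdgeFace} (which rests on \reflem{Finite}) ensures that each edge of the resulting triangulation meets only finitely many faces.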
\begin{proposition} \label{Prop:Functorial} Gu\'eritaud's construction \[ \veer \from \operatorname{\mathsf{Loom}}(\mathbb{R}^2) \to \operatorname{\mathsf{Veer}} \] is a functor from the category of loom spaces to the category of locally veering triangulations. \end{proposition} \begin{proof} Suppose that $\mathcal{L}$ is a loom space. By \reflem{Manifold} the induced triangulation $\veer(\mathcal{L})$ is an ideal triangulation of a non-compact, connected, orientable three-manifold. By \refcor{Taut} the induced dihedral angle makes $\veer(\mathcal{L})$ into a taut triangulation. By \refcor{Veering} we have that $\veer(\mathcal{L})$ is locally veering. Suppose now that $\mathcal{M}$ and $\mathcal{N}$ are also loom spaces. Suppose that $f \from \mathcal{L} \to \mathcal{M}$ and $g \from \mathcal{M} \to \mathcal{N}$ are loom isomorphisms. Recall that we use the notations $\veer_f = \veer(f)$ to represent the induced map. Suppose that $P \subset \mathcal{L}$ is a skeletal rectangle. If $f = \operatorname{Id}_\mathcal{L}$ then, appealing to \refdef{InducedMap}, we have $\veer_f(c_P) = c_{f(P)} = c_P$. Thus $\veer(\operatorname{Id}_\mathcal{L}) = \operatorname{Id}_{\veer(\mathcal{L})}$, as desired. For general loom isomorphisms $f$ and $g$, and again appealing to \refdef{InducedMap}, we have \begin{align*} \veer_{g \circ f}(c_P) &= c_{g ( f (P) )} \\ &= \veer_g (c_{ f (P) }) \\ &= (\veer_g \circ \veer_f) (c_P) \end{align*} Thus $\veer(g \circ f) = \veer(g) \circ \veer(f)$. Finally, we claim that $\veer(f)$ is a taut isomorphism. This is because $f^{-1} \from \mathcal{M} \to \mathcal{L}$ is a loom isomorphism; thus $\veer(f^{-1})$ is the desired inverse for $\veer(f)$. \end{proof} We deduce that $\veer \from \Aut(\mathcal{L}) \to \Aut(\veer(\mathcal{L}))$ is a group homomorphism. In fact it is an isomorphism; we do not prove this here. See the discussion in \refsec{Future}. \section{Convexity} \label{Sec:Convex} In this section we prove \refthm{ThreeSpace}: the realisation of the triangulation $\veer(\mathcal{L})$ is homeomorphic to $\mathbb{R}^3$. \begin{remark} \label{Rem:Bundle} One proof of \refthm{ThreeSpace} runs along the following lines. Choose transverse measures of full support for $F^\mathcal{L}$ and $F_\mathcal{L}$. This gives $\mathcal{L}$ an incomplete, locally euclidean, metric. Suppose that $t \in \veer(\mathcal{L})$ is a model tetrahedron. By sending its vertices to the associated cusps (in the completed metric), and then extending to all of $t$ via barycentric coordinates, we obtain a linear map from $t$ to $\mathcal{L}$. We glue these linear maps together to obtain a piecewise-linear map $\pi \from |\veer(\mathcal{L})| \to \mathcal{L}$. We now claim the following. \begin{itemize} \item The map $\pi$ is continuous and surjective. \item Point preimages under $\pi$ are copies of $\mathbb{R}$. \item The map $\pi$ is a fibre bundle map. \end{itemize} The proof of surjectivity requires the astroid lemma (\reflem{Astroid}\refitm{DoesNot}). Since $\mathcal{L}$ is homeomorphic to $\mathbb{R}^2$ we deduce that $|\veer(\mathcal{L})|$ is isomorphic to a product. Thus the claims imply \refthm{ThreeSpace}. In their work, Landry, Minsky, and Taylor~\cite[Proposition 5.11]{LMT21} carry this strategy out, but in a more delicate setting. They begin with a pseudo-Anosov flow without perfect fits. After drilling, the leaf space is a loom space, equipped with an action by the fundamental group of the drilled three-manifold. 
Here one cannot simply choose measures since, in general, measures invariant under the action need not exist. \end{remark} In the remainder of \refsec{Convex} we give a very different proof. We develop synthetic notions of geodesicity and convexity in $\mathcal{L}$. We use these to prove that $|\veer(\mathcal{L})|$ admits an exhaustion by three-balls. The synthetic approach is longer than that of \refrem{Bundle}; however, it is constructive and is more revealing of the structure of $\mathcal{L}$. \subsection{Geodesics} \label{Sec:Geodesics} We define a \emph{polygonal geodesic} in $\mathcal{L}$. These are \emph{polygonal paths} (as in \cite[Definition~3.1]{Fenley12}) with additional properties. Our geodesics are very similar to ``staircase'' geodesics in $\mathbb{R}^2$ when equipped with the $L^1$ metric. However, our definition is combinatorial as we do not have a metric. \begin{definition} \label{Def:Geodesic} A \emph{segment} in $\mathcal{L}$ is a subarc of a leaf of $F^\mathcal{L}$ or $F_\mathcal{L}$. Suppose that $a$ and $b$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$. A \emph{polygonal path} $\gamma$ from $a$ to $b$ is a finite sequence $(\rho_i)_{i = 0}^n$ of segments where \begin{itemize} \item the arc $\rho_0$ emanates from $a$, the arc $\rho_n$ ends at $b$, and \item for every $i$, the arcs $\rho_i$ and $\rho_{i+1}$ meet at a point or at a cusp. \end{itemize} We say that $\gamma$ has a \emph{corner} after index $i$ if there is a rectangle $R$ with consecutive sides contained in $\rho_i$ and $\rho_{i+1}$, respectively. This corner is \emph{right-turning} if $R$ is locally to the right of $\gamma$. We define \emph{left-turning} corners similarly. We say that a rectangle $R$ is a \emph{U-turn for $\gamma$} if \begin{itemize} \item three sides of $R$ are contained in $\gamma$ and \item $R \cap \gamma$ is empty. \end{itemize} We say that a U-turn for $\gamma$ is \emph{maximal} if it is not properly contained in another U-turn for $\gamma$. See \reffig{UTurns} for several examples of maximal U-turns. We say that a polygonal path $\gamma$ is a \emph{geodesic} if \begin{itemize} \item the intersection of $\mathbin{\scalebox{1.5}{\ensuremath{\cup}}}_i \rho_i$ with any leaf is connected. \qedhere \end{itemize} \end{definition} \begin{figure}[htbp] \subfloat[]{ \includegraphics[height = 1.5 cm]{Figures/u-turn_in} } \quad \subfloat[]{ \includegraphics[height = 1.5 cm]{Figures/u-turn_out} } \quad \subfloat[]{ \includegraphics[height = 1.5 cm]{Figures/u-turn_middle} } \quad \subfloat[]{ \includegraphics[height = 1.5 cm]{Figures/u-turn_cusp} \label{Fig:UTurnCusp} } \caption{In each case the maximal U-turn is the shaded rectangle. Any combinatorial possibility may be obtained by combining these, and adding cusps to $\gamma$.} \label{Fig:UTurns} \end{figure} An Euler characteristic argument gives the following. \begin{lemma} \label{Lem:Ear} For any embedded polygonal loop $\gamma$ in $\mathcal{L}$ and for any segment $\sigma$ of $\gamma$, there is a U-turn of $\gamma$ disjoint from $\sigma$. \qed \end{lemma} We use this to prove the following. \begin{lemma} \label{Lem:NoUTurns} An embedded polygonal path $\delta$ is a geodesic if and only if it contains no U-turns. \end{lemma} \begin{proof} If $R$ is a U-turn for $\delta$ then some leaf $\ell$ crossing the interior of $R$ meets $\delta$ twice. For the other direction, suppose that $\ell$ is a leaf that meets $\delta$ twice.
Thus there is a segment $\sigma \subset \ell$ so that \begin{itemize} \item $\sigma \cap \delta = \bdy \sigma$ and \item $\sigma \cup \delta$ contains a unique embedded polygonal loop $\gamma$. \end{itemize} We apply \reflem{Ear} and find a U-turn in $\delta$. \end{proof} \begin{lemma} \label{Lem:Geodesic} Suppose that $a$ and $b$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$. Then there is a geodesic $\gamma$ from $a$ to $b$. \end{lemma} \begin{proof} Using \refdef{Loom}\refitm{Cusp} if needed, we choose rectangles $A$ and $B$ that have $a$ and $b$, respectively, in the interior of one of their sides. Since $\mathcal{L}$ is path connected, there is a compact arc $\epsilon$ connecting a point of $A$ to a point of $B$. We cover $\epsilon$ by finitely many rectangles. The union of these rectangles with $A$ and $B$ contains an embedded polygonal path $\delta$ from $a$ to $b$. By \reflem{NoUTurns} it now suffices to ``straighten'' $\delta$ so that it contains no U-turns. To this end, we define the \emph{complexity} of $\delta$ to be the pair \[ \mbox{(number of corners, number of maximal U-turns)} \] ordered lexicographically. We now induct on the complexity. Suppose that $\delta$ is a polygonal path from $a$ to $b$. Suppose that $R$ is a maximal U-turn for $\delta$. Define \[ \delta' = (\delta - \bdy{R}) \cup (\bdy R - \delta) \] (Again, see \reffig{UTurns}.) If $\delta'$ is empty then $a = b$ and the desired geodesic path has no arcs. If not, then $\delta'$ is a disjoint union of a polygonal path from $a$ to $b$ and some number of polygonal loops. Let $\delta''$ be the polygonal path and let $\{\gamma_i\}$ be the polygonal loops. We now claim that $\delta''$ has lower complexity than $\delta$. If $\delta''$ has fewer corners than $\delta$ we are done. If $\delta''$ has the same number of corners as $\delta$ then $\{\gamma_i\}$ is empty. Let $\sigma$ be the side of the maximal U-turn $R$ not contained in $\delta$. We deduce that the interior of $\sigma$ is disjoint from $\delta$. We further deduce that $\sigma$ contains a cusp in its interior; see \reffig{UTurnCusp}. Thus $\delta''$ has fewer maximal U-turns than $\delta$. \end{proof} \subsection{Sectors} The following is similar to Fenley's definition of \emph{quarters}~\cite[page~22]{Fenley12}. \begin{definition} \label{Def:Sectors} Suppose that $x$ is a point or cusp of $\mathcal{L}$. Let $\Lambda(x)$ be the union of all leaves meeting $x$. We call the components of $\mathcal{L} - \Lambda(x)$ the \emph{sectors based at $x$}. Two sectors based at $x$ are \emph{adjacent} if they are disjoint but their closures meet along a subarc of some leaf. \end{definition} Note that a (cusp) rectangle $R$, with an (ideal) corner at $x$, determines a unique sector based at $x$, denoted $\operatorname{\mathsf{S}}(x, R)$. Note also that the sector $\operatorname{\mathsf{S}}(x, R)$ contains the staircase $\stair(x, R)$. \begin{lemma} \label{Lem:Sectors} If $x$ is a point of $\mathcal{L}$ then there are exactly four sectors based at $x$. If $x$ is a cusp of $\mathcal{L}$ then there are countably many sectors based at $x$; these are linearly ordered by the adjacency relation. \end{lemma} \begin{proof} The first follows from \refrem{Leaves}. The second follows from \refdef{Cusp}, \refdef{Loom}\refitm{Cusp}, and \refrem{CuspLeaves}. \end{proof} \begin{definition} \label{Def:Between} Suppose that $p$, $q$, and $r$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$.
Suppose that $Q$ and $R$ are sectors based at $p$ so that $q$ and $r$ lie in the closures of $Q$ and $R$, respectively. If the sectors $Q$ and $R$ are distinct and not adjacent, then we say that $p$ is \emph{between} $q$ and $r$. \end{definition} \begin{lemma} \label{Lem:GeodesicThrough} Suppose that $p$, $q$, and $r$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$. Then $p$ is between $q$ and $r$ if and only if there is a geodesic from $q$ to $r$ passing through $p$. \end{lemma} \begin{proof} Suppose that $p$ is between $q$ and $r$. \reflem{Geodesic} gives us geodesics $\delta$ from $q$ to $p$ and $\epsilon$ from $p$ to $r$. Since $\delta$ is a geodesic, and since it meets $p$, it must be contained in the closure of a single sector at $p$. The same holds for $\epsilon$. Thus $\delta$ and $\epsilon$ are contained in the closures of distinct, non-adjacent sectors based at $p$. Thus $\delta \cup \epsilon$ is the desired geodesic, giving the forward direction. The backward direction follows from the definition of geodesics. \end{proof} \subsection{Hulls} Here we again follow, at least in spirit, Section~3 of Fenley's paper~\cite{Fenley12}; see in particular his notion of \emph{convex polygonal paths}~\cite[Definition~3.2]{Fenley12}. \begin{definition} \label{Def:Hull} Suppose that $C$ is a finite subset of $\mathcal{L} \cup \Delta(\mathcal{L})$. We define the \emph{hull} of $C$ as follows: \[ \operatorname{\mathsf{H}}(C) = \{ p \in \mathcal{L} \st \mbox{$p$ is between some pair of elements of $C$} \} \qedhere \] \end{definition} From \reflem{GeodesicThrough} we deduce the following. \begin{corollary} \label{Cor:Geodesic} Suppose that $C$ is a finite subset of $\mathcal{L} \cup \Delta(\mathcal{L})$. Then $\operatorname{\mathsf{H}}(C)$ is the union of all geodesics connecting elements of $C$. \qed \end{corollary} Set $\operatorname{\mathsf{H}}(q, r) = \operatorname{\mathsf{H}}(\{q, r\})$. Note that $\operatorname{\mathsf{H}}(q, r)$ is closed in $\mathcal{L}$. The definition of hull implies that $\operatorname{\mathsf{H}}(C) = \cup_{q, r \in C} \operatorname{\mathsf{H}}(q, r)$. Thus, since $\operatorname{\mathsf{H}}(C)$ is a finite union of closed sets, it is also closed. Recall from \refdef{CuspsOf} that $\Delta(\operatorname{\mathsf{H}}(C))$ is the set of cusps of the hull $\operatorname{\mathsf{H}}(C)$. \begin{lemma} \label{Lem:Convex} Suppose that $C$ is a finite subset of $\mathcal{L} \cup \Delta(\mathcal{L})$. Suppose that $q$ and $r$ lie in $\operatorname{\mathsf{H}}(C) \cup \Delta(\operatorname{\mathsf{H}}(C))$. Then $\operatorname{\mathsf{H}}(q, r) \subset \operatorname{\mathsf{H}}(C)$. \end{lemma} That is, hulls are \emph{convex}. In fact, the boundary of a hull is the union of finitely many \emph{convex polygonal paths} in the sense of~\cite[Definition~3.2]{Fenley12}. We prove a version of this in \reflem{Interval}\refitm{Rightmost}. \begin{proof}[Proof of \reflem{Convex}] Set $H = \operatorname{\mathsf{H}}(C)$. We prove the contrapositive. That is, we assume the following: \begin{itemize} \item $C$ is a finite subset of $\mathcal{L} \cup \Delta(\mathcal{L})$, \item $q$ and $r$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$, and \item some point $p \in \operatorname{\mathsf{H}}(q, r) \subset \mathcal{L}$ does not lie in $H$. \end{itemize} We must show that one of $q$ or $r$ lies outside of $H \cup \Delta(H)$. As in \refrem{Cardinal} we orient $F^\mathcal{L}$ and $F_\mathcal{L}$ so that we may refer to the cardinal directions in $\mathcal{L}$. 
By \refdef{Between} the point $p$ is between $q$ and $r$. Let $\ell^p$ and $m_p$ be the leaves through $p$. There are four sectors based at $p$; breaking symmetry, we assume that $q$ lies in the closure of the south-west sector while $r$ lies in the closure of the north-east sector. We have assumed that $p$ is not between any pair of points of $C$. Breaking symmetry, we may assume that $C$ lies strictly to the east of $\ell^p$. Since $C$ is finite, and appealing to either Corollary~\ref{Cor:CuspLeavesDense} or~\ref{Cor:NonCuspLeavesDense}, there is a leaf $\ell$ of $F^\mathcal{L}$ that separates $\ell^p$ from $C$. Appealing to \refdef{Between}, we deduce that $H$ is east of $\ell$. Suppose that $q$ is a point of $\mathcal{L}$. Then $q$ is on, or west of, $\ell^p$. Thus $q$ is strictly west of $\ell$. Thus $q$ is not between any pair of points of $C$ and we are done. Suppose instead that $q$ lies in $\Delta(\mathcal{L})$. Then no cusp rectangle at $q$ lies in $H$. Thus $q$ is not an element of $\Delta(H)$ and we are done. \end{proof} \begin{wrapfigure}[12]{r}{0.42\textwidth} \vspace{-10pt} \centering \labellist \small\hair 2pt \pinlabel {$q$} [tl] at 229 4 \pinlabel {$r$} [br] at 5 181 \pinlabel {$x$} [tr] at 24 68 \pinlabel {$\delta$} at 77 38 \pinlabel {$\epsilon$} at 202 110 \pinlabel {$R$} at 51 90 \endlabellist \includegraphics[height = 3.5 cm]{Figures/right_turning_corner} \caption{Two geodesics $\delta$ and $\epsilon$ from $q$ to $r$. There is a right-turning corner (on $\delta$) at $x$.} \label{Fig:RightOf} \end{wrapfigure} Suppose that $q$ and $r$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$. Suppose that $\operatorname{\mathsf{S}}(q)$ and $\operatorname{\mathsf{S}}(r)$ are sectors, based at $q$ and $r$ respectively, so that $r$ lies in $\operatorname{\mathsf{S}}(q)$ and $q$ lies in $\operatorname{\mathsf{S}}(r)$. (This is possible if and only if $q$ and $r$ do not lie on a common leaf.) Set $\operatorname{\mathsf{S}}(q, r) = \operatorname{\mathsf{S}}(q) \cap \operatorname{\mathsf{S}}(r)$. Note that the closure of $\operatorname{\mathsf{S}}(q, r)$ contains all geodesics from $q$ to $r$. Thus by \refcor{Geodesic}, the closure contains $\operatorname{\mathsf{H}}(q, r)$. We fix, for the remainder of \refsec{Convex}, an orientation on $\mathcal{L}$. Suppose that $\delta$ is a geodesic from $q$ to $r$. Let $R(\delta)$ be the union of the components of $\operatorname{\mathsf{S}}(q, r) - \delta$ to the right of $\delta$. We say that $x$, a point or cusp of $\operatorname{\mathsf{H}}(q, r)$, is \emph{to the right of} $\delta$ if $x$ is a point or cusp of $R(\delta)$. We say that a geodesic $\epsilon$ is \emph{to the right of} $\delta$ if all points of $\epsilon - \delta$ are to the right of $\delta$. We define \emph{to the left of} similarly. See \reffig{RightOf}. \begin{lemma} \label{Lem:Interval} Suppose that $q$ and $r$ lie in $\mathcal{L} \cup \Delta(\mathcal{L})$. Then we have the following. \begin{enumerate} \item \label{Itm:Finite} $\Delta(\operatorname{\mathsf{H}}(q, r))$ is finite. \item \label{Itm:OnGeodesic} For every cusp $c \in \Delta(\operatorname{\mathsf{H}}(q, r))$ there is a geodesic from $q$ to $r$ running through $c$. \item \label{Itm:Rightmost} $\operatorname{\mathsf{H}}(q, r)$ contains a unique rightmost geodesic and also a unique leftmost geodesic. The rightmost geodesic has no right-turning corners; the leftmost geodesic has no left-turning corners. 
\item \label{Itm:Boundary} If $q$ and $r$ are cusps then the boundary of $\operatorname{\mathsf{H}}(q, r)$ is the disjoint union of the rightmost and leftmost geodesics. \item \label{Itm:Disk} If $q$ and $r$ are cusps then $\operatorname{\mathsf{H}}(q, r)$ is a disjoint finite union of finite-sided disks (meeting only at cusps). \end{enumerate} \end{lemma} \begin{proof} \reflem{Geodesic} gives us a geodesic $\gamma$ from $q$ to $r$. Suppose that $x$ is a right-turning corner of $\gamma$. Let $\stair(x)$ be the staircase at $x$ whose axis rays $\ell^x$ and $m_x$ meet $\gamma$. Define $m_x^\gamma$ to be the \emph{projection} of $\gamma$, to $m_x$, along $F^\mathcal{L}$. That is, suppose for $y \in \mathcal{L}$ we have that $\ell^y$ is the leaf of $F^\mathcal{L}$ through $y$. Then we define $m_x^\gamma \subset m_x$ to be the polygonal path obtained by taking the closure of \[ \{ m_x \cap \ell^y \st y \in \gamma \} \] We define $\ell^x_\gamma \subset \ell^x$ similarly. See \reffig{Projection}. \begin{figure}[htbp] \labellist \small\hair 2pt \pinlabel {$x$} [tr] at 61 98 \pinlabel {$y$} [tr] at 161 46 \pinlabel {$c$} [bl] at 190 187 \pinlabel {$m_x$} [l] at 284 101 \pinlabel {$\ell^x$} [r] at 63 271 \pinlabel {$m_c$} [r] at 1 184 \pinlabel {$\ell^c$} [l] at 187 2 \pinlabel {$\ell^y$} [r] at 164 2 \pinlabel {$m_x^\gamma$} [b] at 213 117 \pinlabel {$\ell^x_\gamma$} [l] at 80 147 \pinlabel {$\gamma$} [t] at 225 40 \endlabellist \includegraphics[width = .5 \textwidth]{Figures/projection} \caption{The two projections of $\gamma$ to the axis rays of a right-turning corner $x$.} \label{Fig:Projection} \end{figure} Note that since $\gamma$ is a finite polygonal path, both $m_x^\gamma$ and $\ell^x_\gamma$ are bounded in $m_x$ and $\ell^x$ respectively. (However, by \reflem{BdyQ}, the projection $m_x^\gamma$ may meet the axis cusp $c_m$ for $m_x$, if it exists. A similar statement holds for $\ell^x_\gamma$.) Recall that $\dotDelta(\stair(x))$ is the set of exterior cusps for $\stair(x)$. We define $\dotDelta^\gamma(\stair(x))$ to be those cusps of $\dotDelta(\stair(x))$ whose projections to $m_x$ and $\ell^x$ lie in $m_x^\gamma$ and $\ell^x_\gamma$ respectively. By the astroid lemma (\reflem{Astroid}\refitm{DoesNot}) we have that $\dotDelta^\gamma(\stair(x))$ is finite. Suppose that $c$ is any cusp of $\Delta(\operatorname{\mathsf{H}}(q, r))$, not on $\gamma$. Suppose that $c$ is to the right of $\gamma$: that is, $c$ lies in one of the components of $\operatorname{\mathsf{S}}(q, r) - \gamma$ to the right of $\gamma$. Since $c$ is a cusp of $\Delta(\operatorname{\mathsf{H}}(q, r))$, it lies between $q$ and $r$. Thus there are distinct, non-adjacent sectors $Q$ and $R$ based at $c$ whose closures contain $q$ and $r$, respectively. By \reflem{Sectors} there is a sector $S$ based at $c$ separating $Q$ from $R$. Let $m_c$ and $\ell^c$ be the cusp leaves giving the sides of $S$. By construction neither $q$ nor $r$ lie in $S$. (They may lie in the closure of $S$; that is, in $m_c$ or in $\ell^c$.) Thus $\gamma$ meets both $m_c$ and $\ell^c$. Let $S_\gamma$ be the component of $S - \gamma$ that meets $c$. We define $\gamma' = \gamma \cap \bdy S_\gamma$. Note that $\bdy S_\gamma$ is a closed polygonal loop. An Euler characteristic argument, applied to $\bdy S_\gamma$, gives us a right-turning corner $x$ of $\gamma'$. We deduce that $\ell^c$ crosses $m_x^\gamma$; likewise $m_c$ crosses $\ell^x_\gamma$. Thus $c$ lies in $\dotDelta^\gamma(\stair(x))$. 
It follows that every cusp of $\Delta(\operatorname{\mathsf{H}}(q, r))$ either lies on $\gamma$ or lies in $\mathbin{\scalebox{1.5}{\ensuremath{\cup}}}_x \dotDelta^\gamma(\stair(x))$. Here $x$ ranges over the corners of $\gamma$. Thus $\Delta(\operatorname{\mathsf{H}}(q, r))$ is a finite union of finite sets. This proves \refitm{Finite}. Suppose now that $c$ is a cusp of $\Delta(\operatorname{\mathsf{H}}(q, r))$. Pick any geodesic $\gamma$ from $q$ to $r$. If $c$ lies in $\gamma$ we are done. If not, then we may assume that $c$ is to the right of $\gamma$. Then, as above, we find $S$ and $S_\gamma$. Using these we define \[ \epsilon = (\gamma - \bdy S_\gamma) \cup (\bdy S_\gamma - \gamma) \] This is a geodesic through $c$, and so gives \refitm{OnGeodesic}. Starting with any geodesic from $q$ to $r$, \refitm{OnGeodesic} finds a geodesic to its right. Repeating this, we find a sequence of geodesics; by \refitm{Finite} this sequence ends with a geodesic $\zeta$ which passes through all of the rightmost cusps. If $\zeta$ has a right-turning corner, say at $x$, then the cusps immediately before and after $x$ span an edge rectangle. Flipping over such edge rectangles, we obtain a new geodesic $\rho$. Since $\rho$ has no right-turning corners, $\rho$ is rightmost. This proves \refitm{Rightmost}. Suppose that $\rho$ and $\lambda$ are, respectively, the rightmost and leftmost geodesics in $\operatorname{\mathsf{H}}(q, r)$. Suppose for a contradiction that $\rho$ and $\lambda$ meet at a point $p$ of $\mathcal{L}$. If $\rho$ and $\lambda$ cross at $p$ then either $\rho$ is not rightmost or $\lambda$ is not leftmost. Either is a contradiction. We deduce that near $p$ both $\rho$ and $\lambda$ lie in a single leaf, say $m_p$. Thus there is a cusp at the beginning of the segment $\rho \cap m_p$. Likewise there is a cusp at the end of the segment $\lambda \cap m_p$. These contradict \reflem{AtMostOne}. This proves \refitm{Boundary}. Appealing to the Jordan curve theorem and \refitm{Boundary} we obtain \refitm{Disk}. \end{proof} \begin{corollary} \label{Cor:StaircaseConvex} Staircases are convex. \end{corollary} \begin{proof} Suppose that $\stair(x)$ is the given staircase. By \reflem{Astroid}, the set of exterior cusps is countable. Let $H_k$ be the hull of $x$ together with the first $k$ exterior cusps. By \refcor{EdgesOnStair} and \reflem{Interval}\refitm{Rightmost} the boundary of $H_k$ is contained in the closure of $\stair(x)$. Thus the staircase is a growing union of convex sets, and hence is itself convex. \end{proof} Lemmas \ref{Lem:Interval}\refitm{Disk} and \ref{Lem:Convex} prove the following. \begin{corollary} \label{Cor:Disks} Suppose that $C$ is a finite subset of $\Delta(\mathcal{L})$. Then $\operatorname{\mathsf{H}}(C)$ is a disjoint finite union of finite-sided disks (meeting only at cusps). \qed \end{corollary} \subsection{Skeletal rectangles, redux} With convexity in hand, we are equipped to prove various existence and uniqueness results. \begin{lemma} \label{Lem:TwoCusps} For any two distinct cusps of $\mathcal{L}$ there is at most one edge rectangle meeting both. \end{lemma} \begin{proof} Suppose that $a$ and $b$ are the given cusps. Let $H = \operatorname{\mathsf{H}}(a, b)$. Suppose that $R$ is an edge rectangle meeting $a$ and $b$. Since the closure of $R$ is a union of geodesics from $a$ to $b$ (\refcor{Geodesic}) we deduce that $R$ lies in $H$. Each boundary component of $R$ has exactly two segments. Thus one is rightmost for $H$ and the other leftmost. By \reflem{Interval}\refitm{Rightmost} the boundary of $R$ is the boundary of $H$. Thus $R$ is unique. \end{proof} We now deal with faces. \begin{lemma} \label{Lem:ThreeEdges} Suppose that three distinct edge rectangles meet three distinct cusps. Then they are all contained in a single face rectangle. \end{lemma} \begin{proof} Let $a$, $b$, and $c$ be the given cusps.
Let $A$, $B$, and $C$ be the given edge rectangles; by \reflem{OneEdge}, we may assume that $A$ does not meet $a$, and so on. Let $H$ be the hull of $a$, $b$, and $c$. By definition, the union of the closures of $A$, $B$, and $C$ gives $H$. By \refcor{Disks} the hull $H$ is a disjoint finite union of finite-sided disks, adjacent only at cusps of $\Delta(H)$. Suppose that there are two or more disks in the disjoint union. Suppose that $c$, say, meets two of these disks. Then all geodesics from $a$ to $b$ run through $c$. In particular, all geodesics from $a$ to $b$ in the edge rectangle $C$ meet $c$. Thus $C$ meets $c$, a contradiction. Thus $H$ is a single disk. Breaking symmetry, we suppose that $a$, $b$, and $c$ appear in anticlockwise order around $\bdy H$. Let $\rho_a$ be the rightmost geodesic from $b$ to $c$ given by \reflem{Interval}\refitm{Rightmost}. Define $\rho_b$ and $\rho_c$ similarly. Since $a$ is to the left of $\rho_a$, we deduce that $\rho_a$ lies in $\bdy H$. Similarly, $\rho_b$ and $\rho_c$ lie in $\bdy H$. Since $b$ and $c$ are the cusps of $A$, we deduce that $\rho_a$ also lies in $\bdy A$. So $\rho_a$, and similarly $\rho_b$ and $\rho_c$, each consist of only two segments. We deduce that the only cusps meeting $\bdy H$ are $a$, $b$, and $c$. Also the only segments in $\bdy H$ are those of $\rho_a$, $\rho_b$, and $\rho_c$. Enumerating simple polygonal loops with at most six segments, which occur as the boundary of a convex set, we find two possibilities. One has four outward corners (one at a cusp) while the other has five (two at cusps). These are shown in \reffig{SixSides}. The latter does not contain an edge rectangle between two of its cusps. The former is the desired face rectangle. \end{proof} \begin{figure}[htbp] \subfloat[Four outward corners.]{ \includegraphics[width = 0.4\textwidth]{Figures/face_rect} } \quad \subfloat[Five outward corners.]{ \includegraphics[width = 0.4\textwidth]{Figures/not_face_rect} } \caption{On the left we have one of the four possible face rectangles. Note that all four corners are outward. On the right we have one of the four possible convex hulls with six segments in its boundary and five outward corners.} \label{Fig:SixSides} \end{figure} Suppose that $e$, $f$, and $t$ are an edge, face, and tetrahedron of $\veer(\mathcal{L})$. We say that $e$ is the \emph{bottom edge of $f$} if $R(e)$ and $R(f)$ west-east span each other. We similarly define what it means for $e$ to be the \emph{bottom edge of $t$} and what it means for $f$ to be a \emph{bottom face of $t$} (there are two). We define the \emph{top} edges and faces similarly. See \reffig{TetRect}; the shaded edge rectangle corresponds to the bottom edge of the corresponding faces and tetrahedron. \begin{lemma} \label{Lem:TwoEdges} Suppose that $e$ and $e'$ are edges of $\veer(\mathcal{L})$ with cusps $a$ and $b$, and $a'$ and $b'$, respectively. Suppose that $R(e)$ properly west-east spans $R(e')$. Set \[ H = \operatorname{\mathsf{H}}(\{a, b, a', b'\}) \] Then there are tetrahedra $t$ and $t'$ (possibly equal) with $R(t), R(t') \subset H$ and where $e$ is the bottom edge of $t$ and $e'$ is the top edge of $t'$. \end{lemma} \begin{proof} By hypothesis there is at least one cusp of $\Delta(H) - \{a, b\}$ south of $R(e)$ and at least one north of $R(e)$ (and on leaves of $F^\mathcal{L}$ meeting $R(e)$). Taking two such cusps, as close as possible to the northern and southern sides of $R(e)$, and taking a convex hull with $R(e)$, gives $R(t)$.
The tetrahedron rectangle $R(t')$ is found similarly. \end{proof} \subsection{Three-balls} For the remainder of \refsec{Convex} we fix $T$ a non-empty, finite, and face-connected collection of tetrahedra in $\veer(\mathcal{L})$. We define $R(T) = \mathbin{\scalebox{1.5}{\ensuremath{\cup}}}_{t \in T} R(t)$ to be the union of the associated tetrahedron rectangles. Note that $R(T)$ need not be convex. As usual, let $\Delta(R(T))$ be the cusps of $R(T)$. Also for the remainder of \refsec{Convex} we set $H = \operatorname{\mathsf{H}}(\Delta(R(T)))$: that is, $H$ is the hull of the cusps of the union of the associated tetrahedron rectangles. \begin{definition} \label{Def:Content} The \emph{content} of $H$, denoted $\veer(H)$, is the subtriangulation of $\veer(\mathcal{L})$ spanned by the tetrahedra $t$ with $R(t) \subset H$. \end{definition} \begin{lemma} \label{Lem:Hull} We have the following. \begin{enumerate} \item \label{Itm:Grows} $T \subset \veer(H)$. \item \label{Itm:FiniteContent} $\veer(H)$ is finite. \item \label{Itm:FiniteDisk} The hull $H$ is a finite-sided disk in $\mathcal{L}$. \end{enumerate} \end{lemma} \begin{proof} A tetrahedron rectangle is the (interior of the) convex hull of its cusps. This gives~\refitm{Grows}. By \refcor{Disks} we know that $H$ is a finite disjoint union of finite-sided disks. Thus $\Delta(H)$ is finite. Thus $\veer(H)$ is finite, giving~\refitm{FiniteContent}. Suppose that there is more than one disk component in the union given by \refcor{Disks}. We form a bipartite graph as follows: we take one set of nodes for the disk components, another set of nodes for the cusps adjacent to two or more disks, and edges for disk-cusp adjacency between the first and second set of nodes. By \reflem{Convex} this graph is a tree. Any leaf of the tree gives a disk that contains at least one tetrahedron rectangle $R(t)$ for some $t \in T$. Since there are at least two disks, the tree has at least two leaves. We deduce that $T$ is not face connected, a contradiction. This gives~\refitm{FiniteDisk}. \end{proof} Note that \reflem{Hull}\refitm{FiniteDisk} gives us a circular order on $\Delta(H)$. \begin{lemma} \label{Lem:Edge} Suppose that $a$ and $b$ are distinct cusps of $\Delta(H)$; suppose that $a$ and $b$ span an edge $e$ of $\veer(H)$. Then exactly one of the following occurs: \begin{itemize} \item $a$ and $b$ are adjacent in the circular order on $\Delta(H)$ or \item there is a tetrahedron $t$ of $T$ so that $R(t)$ properly spans $R(e)$. \end{itemize} \end{lemma} \begin{proof} From \reflem{Hull}\refitm{FiniteDisk}, the fact that $\bdy R(e)$ has exactly two components, and the fact that $R(e)$ is open, we deduce that $H - R(e)$ has exactly two components. Call these $P$ and $Q$. Every cusp of $H$ is incident to $P$ or $Q$ with only $a$ and $b$ incident to both. Suppose that $a$ and $b$ are not adjacent in the circular order on $\Delta(H)$. Thus, each of $\Delta(P)$ and $\Delta(Q)$ contains at least one cusp of $\Delta(H) - \{ a, b \}$. By \reflem{Convex}, the set $\Delta(P)$ contains at least one cusp from $\Delta(R(T)) - \{a, b\}$, say $p$. Similarly, $\Delta(Q)$ contains a cusp $q$ belonging to $\Delta(R(T)) - \{a, b\}$. Let $r$ and $s$ be tetrahedra of $T$ so that $p$ and $q$ are on the boundary of $R(r)$ and $R(s)$, respectively. Since $T$ is face connected, there is a sequence of tetrahedra \[ (r = t_0, t_1, t_2, \ldots, t_n = s) \] so that $t_i$ and $t_{i+1}$ share a face. Note that $R(t_n)$ is not contained in $P \cup R(e)$, since it meets $q$. 
Let $k$ be the first index so that $R(t_k)$ is not contained in $P \cup R(e)$. By induction, for $i$ between $0$ and $k$, we have that $R(t_i)$ meets $P$. Thus $R(t_k)$ meets both $P$ and $Q$. Thus $R(t_k)$ properly spans $R(e)$. \end{proof} \begin{wrapfigure}[6]{r}{0.39\textwidth} \vspace{-13pt} \centering \labellist \small\hair 2pt \pinlabel {$a$} [r] at 1 1 \pinlabel {$m_x$} [t] at 139 1 \pinlabel {$c$} [bl] at 100 67 \pinlabel {$b$} [l] at 277 150 \pinlabel {$\ell^x$} [l] at 277 75 \pinlabel {$x$} [l] at 277 1 \endlabellist \includegraphics[width=0.25\textwidth]{Figures/too_many_coastals} \caption{There cannot be a cusp between $a$ and $b$. } \label{Fig:TooManyCoastals} \end{wrapfigure} We now give a partial converse to \reflem{Edge}. \begin{lemma} \label{Lem:Coastal} Suppose that $a$ and $b$ are adjacent cusps in the circular order on $\Delta(H)$. Then the interior of $\operatorname{\mathsf{H}}(a, b)$ is an edge rectangle in $\mathcal{L}$. \end{lemma} We call the corresponding edge $e \in \veer(H)$ a \emph{coastal edge}. \begin{proof}[Proof of \reflem{Coastal}] The set $\operatorname{\mathsf{H}}(a, b)$ lies in the hull $H$ by \reflem{Convex}. Breaking symmetry, suppose that $b$ is anticlockwise of $a$ in $\bdy H$. As given by \reflem{Interval}\refitm{Rightmost}, let $\rho$ be the rightmost geodesic from $a$ to $b$. Thus $\rho$ lies in $\bdy H$. Recall also that $\rho$ has no right-turning corners. By \reflem{AtMostOne}, the geodesic $\rho$ has at least two segments. We deduce that $\rho$ has exactly two segments; these meet at one left-turning corner, say at $x$. Let $\ell^x$ and $m_x$ be the leaves through $x$. Breaking symmetry, we suppose that $\rho$ meets $m_x$ before it meets $\ell^x$. Let $\stair(x)$ be the staircase at $x$ that meets both $a$ and $b$. Let $\lambda$ be the leftmost geodesic from $a$ to $b$. By \refcor{StaircaseConvex}, the staircase $\stair(x)$ contains $\lambda$. If $\lambda$ has exactly two segments then the claim is proved. For a contradiction, suppose that $\lambda$ has more than two segments. By \reflem{Interval}\refitm{Rightmost} there is at least one cusp along $\lambda$. Let $c$ be the first such. Thus $\operatorname{\mathsf{H}}(a, c)$ is the closure of an edge rectangle. If $a$ and $c$ are adjacent in the circular order on $\Delta(H)$, then the leftmost geodesic from $a$ to $c$ is again in $\bdy H$. We deduce that no tetrahedron rectangle in $H$ meets $a$; see \reffig{TooManyCoastals}. Also $a$ does not meet the hull of any pair of cusps in $\Delta(R(T))$. Thus $a$ does not meet $H$, a contradiction. Therefore $a$ and $c$ are not adjacent. Applying \reflem{Edge}, we find a tetrahedron $t$ so that $R(t)$ properly spans the interior of $\operatorname{\mathsf{H}}(a, c)$. Thus $R(t)$ crosses one of $\ell^x$ or $m_x$. Either is a contradiction. Again, see \reffig{TooManyCoastals}. \end{proof} \begin{definition} We say that $e$, an edge of $\veer(H)$, is a \emph{lower edge for $H$} if it has the following property. For any edge $e'$ of $\veer(H)$ the edge rectangle $R(e')$ does not properly west-east span $R(e)$. We say that a face $f$ of $\veer(H)$ is a \emph{lower face for $H$} if all three edges of $f$ are lower edges for $H$. \end{definition} From \reflem{Coastal}, we deduce the following. \begin{corollary} \label{Cor:Lower} Coastal edges are lower edges for $H$. Lower edges do not link each other with respect to the given circular order on $\Delta(H)$. 
\qed \end{corollary} \begin{corollary} \label{Cor:BottomEdgeFaceTet} \leavevmode \begin{itemize} \item Suppose that $e$ is a non-coastal, lower edge for $H$. Then the tetrahedron $t$, having $e$ as its bottom edge, lies in $\veer(H)$. \item Suppose that $f$ is a lower face for $H$. Then the tetrahedron $t$, having $f$ as a bottom face, lies in $\veer(H)$. \end{itemize} \end{corollary} \begin{proof} Suppose that $e$ is the given lower edge. Define $U(e)$ to be those tetrahedra $t'$ so that $R(t')$ properly spans $R(e)$. Since $e$ is non-coastal, applying \reflem{Edge}, the set $U(e)$ is non-empty. Since $e$ is lower, no tetrahedron in $U(e)$ properly west-east spans $R(e)$. So every tetrahedron $t'$ in $U(e)$ has $R(t')$ properly south-north spanning $R(e)$. Take any such $t'$ and let $e'$ be the top edge of $t'$. We now apply \reflem{TwoEdges} to $e$ and $e'$ to obtain $t$. Suppose that $f$ is the given lower face. We take $e$ to be the bottom edge of $f$. By the previous paragraph the tetrahedron $t$ having $e$ as its bottom edge lies in $\veer(H)$. Since $t$ has $f$ as one of its bottom faces, we are done. \end{proof} \begin{lemma} \label{Lem:Landscape} The lower edges and faces for $H$ form a triangulated disk $L_H$ in $\veer(H)$. \end{lemma} \begin{proof} Let $L$ be the subcomplex of $\veer(\mathcal{L})$ consisting of $\Delta(H)$ and also the lower edges and faces for $H$. By \refcor{BottomEdgeFaceTet} we have that $L$ is a subcomplex of $\veer(H)$. Suppose that $\gamma$ is a simple edge cycle in $L$. Suppose further that $\gamma$ has no chords in $L$. By Lemmas \ref{Lem:OneEdge} and \ref{Lem:TwoCusps} the edge cycle $\gamma$ has length at least three. If $\gamma$ has length exactly three then, by \reflem{ThreeEdges} and by the definition of lower faces, there is a face $f$ contained in $L$ spanning $\gamma$. Suppose, for a contradiction, that $\gamma$ has length greater than three. Let $H_\gamma$ be the hull of the cusps of $\gamma$. For any edge $e$ of $\gamma$, the edge rectangle $R(e)$ is contained in $H_\gamma$. Also, $R(e)$ separates $\mathcal{L}$. Now fix any edge $e'$ of $\gamma$. Since all edges of $\gamma$ are lower for $H$ we deduce that all cusps of $\Delta(H_\gamma) - \Delta(R(e'))$ meet a common component of $\mathcal{L} - R(e')$. Orient $e'$ so that this component is to the left of $R(e')$. The orientation of $e'$ induces an orientation of $\gamma$, and thus all other edges of $\gamma$. For any edge $e$ of $\gamma$ let $\rho_{e}$ be the rightmost geodesic in $R(e)$. Thus by \reflem{Convex} we obtain the following. \begin{claim} \label{Clm:BdyHGamma} $\bdy H_\gamma = \sqcup_e \rho_e$ where the union ranges over the edges of $\gamma$. \qed \end{claim} Suppose that $e$ and $e'$ are distinct edges of $\gamma$. Since $e$ and $e'$ are lower, $\rho_e$ and $\rho_{e'}$ do not meet on their interiors. Since $\gamma$ is simple, $\rho_e$ and $\rho_{e'}$ share a cusp only if $e$ and $e'$ do. Thus $\bdy H_\gamma$ is simple and $H_\gamma$ is a disk. This gives a circular order on $\Delta(H_\gamma)$. We apply \reflem{Ear} to obtain an (open) rectangle $R \subset H_\gamma$ with three consecutive sides $s$, $s'$, and $s''$ contained in $\bdy H_\gamma$ and otherwise disjoint from $\bdy H_\gamma$. By \refclm{BdyHGamma} there is a cusp $b$ of $\gamma$ lying in the closure of $s'$. Let $a$ and $c$ be the (distinct) cusps of $\bdy H_\gamma$ immediately clockwise and anticlockwise of $b$, respectively. Suppose that there is an edge rectangle $R(e)$ with cusps at $a$ and $c$. 
Since $\gamma$ has length greater than three, we deduce that $e$ is not an edge of $\gamma$. Thus $e$ is not lower for $H$. Thus there is some edge $e'$ of $\veer(H)$ so that $R(e')$ properly west-east spans $R(e)$. However, $R(e')$ cannot west-east span any edge rectangle of $R(\gamma)$ as the latter are all lower for $H$. We deduce that $R(e')$ is lower for $H$ and has a cusp at $b$. By \refcor{Lower}, the edge $e'$ does not link any edge of $\gamma$; thus $e'$ is a chord for $\gamma$, a contradiction. Now suppose that there does not exist an edge rectangle with cusps at $a$ and $c$. There are two cases, according to whether $b$ lies in the boundary or in the interior of $s'$, the given side of $R$. See \reffig{GammaEar}. \begin{figure}[htbp] \centering \subfloat[$b$ is on the boundary of $s'$.]{ \labellist \small\hair 2pt \pinlabel {$a$} [br] at 120 193 \pinlabel {$b$} [tr] at 0 2 \pinlabel {$c$} [tl] at 273 80 \pinlabel {$d$} [bl] at 169 153 \endlabellist \includegraphics[width=0.4\textwidth]{Figures/cusp_on_boundary_of_side} } \quad \subfloat[$b$ is in the interior of $s'$.]{ \labellist \small\hair 2pt \pinlabel {$a$} [br] at 120 193 \pinlabel {$b$} [r] at 0 85 \pinlabel {$c$} [tl] at 273 0 \pinlabel {$d$} [bl] at 169 153 \endlabellist \includegraphics[width=0.4\textwidth]{Figures/cusp_in_interior_of_side} } \caption{Obtaining a chord for $\gamma$.} \label{Fig:GammaEar} \end{figure} In either case, the rightmost geodesic from $a$ to $c$ consists of exactly two segments. The leftmost geodesic from $a$ to $c$ necessarily meets at least one cusp, say $d$. The cusp $d$ is both an exterior cusp for the staircase based at $b$ and is a cusp of $\Delta(H)$. Thus there is an edge rectangle $R(e')$ with cusps at $b$ and $d$. By the hypotheses on $\gamma$, the edge $e'$ is lower for $H$. Since lower edges do not link (\refcor{Lower}), we deduce that $e'$ is a chord for $\gamma$, a contradiction. Thus $L_H = L$ is the desired triangulated disk. \end{proof} \begin{proposition} \label{Prop:ThreeBall} Suppose that $T$ is a finite, face-connected, non-empty collection of tetrahedra in $\veer(\mathcal{L})$. Let $H = \operatorname{\mathsf{H}}(\Delta(R(T)))$. Then the realisation of $\veer(H)$ is a three-ball. \end{proposition} \begin{proof} \reflem{Landscape} gives us a triangulated disk $L_H$, whose edges and faces are lower for $H$ and whose boundary consists of the coastal edges for $H$. Set $L_0 = L_H$. We now use induction to obtain a sequence of triangulated disks $L_k \subset \veer(H)$, with $\bdy L_k = \bdy L_0$. We say that an edge $e$ of $L_k$ is \emph{flippable} if both of its adjacent faces in $L_k$, say $f$ and $f'$, have $e$ as their bottom edge. (That is, all three of $R(e)$, $R(f)$, and $R(f')$ west-east span each other. See \reffig{TetRect}.) If $L_k$ has a flippable edge then let $e_k$ be one such. Consulting \reffig{TetRect}, we see that $f$ and $f'$ are the bottom faces of a tetrahedron $t_k$ in $\veer(H)$. Let $g$ and $g'$ be the top faces of $t_k$. We define the result of \emph{flipping $L_k$ across $e_k$} to be the triangulated disk \[ L_{k+1} = (L_k - (f \cup f')) \cup (g \cup g') \] On the other hand, if $L_k$ has no flippable edge then the induction is complete. \begin{claim} \label{Clm:Below} Suppose that $e'$ is an edge of $\veer(H)$ that is not equal to $e_j$, for any $j < k$. Then either $e'$ lies in $L_k$ or there is an edge $e$ of $L_k$ which properly west-east spans $e'$. \end{claim} \begin{proof} For $k = 0$ this follows from the definition of $L_0$.
Suppose by induction that the claim holds at stage $k - 1$. Suppose that $e'$ is an edge of $\veer(H)$ that is not $e_j$ for any $j < k$. Suppose that no edge $e$ of $L_k$ properly west-east spans $e'$. Suppose that $e'$ lies in $L_{k-1}$. Since $e' \neq e_{k-1}$, we deduce that $e'$ lies in $L_k$. Thus, in this case we are done. Suppose instead that $e'$ does not lie in $L_{k-1}$. By induction, there is some edge $e$ of $L_{k-1}$ so that $R(e)$ properly west-east spans $R(e')$. If $e \neq e_{k-1}$ then $e$ lies in $L_k$, contrary to assumption. Thus $e = e_{k-1}$ is the only edge of $L_{k-1}$ whose rectangle properly west-east spans $R(e')$; we deduce that the equatorial edges of $t_{k-1}$ do not give properly spanning rectangles. Thus $e'$ is the top edge of $t_{k-1}$. Thus $e'$ lies in $L_k$ and we are done. \end{proof} \begin{claim} \label{Clm:Layering} For every tetrahedron $t'$ of $\veer(H)$, there is an $n$ so that $t' = t_n$. \end{claim} \begin{proof} Let $D(t')$ be the collection of tetrahedra $s$ in $\veer(H)$ which have $R(s)$ west-east spanning $R(t')$. The set $D(t')$ is finite and partially ordered by the spanning relation. Note that $t'$ lies in $D(t')$. Suppose that $R(e)$ is an edge rectangle in $H$ that west-east spans $R(t')$. Let $e'$ be the top edge of $t'$. Thus $R(e)$ properly west-east spans $R(e')$. By \reflem{TwoEdges}, there is a tetrahedron $t$ so that $e$ is the bottom edge of $t$ and $R(t)$ lies in the convex hull of the cusps of $R(e)$ and $R(e')$. By \reflem{Convex}, the tetrahedron $t$ lies in $D(t')$. Define $D_k(t') = D(t') - \{t_0, t_1, \dots, t_{k-1}\}$. If $D_k(t')$ is empty then we are done. Otherwise, suppose that $s' \in D_k(t')$ has a bottom edge which is not flippable in $L_k$. Thus some edge $e'$ of $s'$, other than the top edge of $s'$, is not contained in $L_k$. By \refclm{Below}, there is an edge $e$ of $L_k$ that properly west-east spans $e'$. By the previous paragraph, $s'$ was not a minimum of $D_k(t')$. That is, all minima of the partial order on $D_k(t')$ have bottom edges which are flippable in $L_k$. Any flippable edge in $L_k$, if not removed, remains flippable in $L_{k+1}$. Thus there is a $k' > k$ so that $D_{k'}(t')$ has fewer elements than $D_k(t')$. \end{proof} \begin{claim} \label{Clm:Thick} For every non-coastal edge $e$ of $L_0$, there is an $n$ so that $e = e_n$. \end{claim} \begin{proof} This follows from \refcor{BottomEdgeFaceTet} and \refclm{Layering}. \end{proof} \refclm{Layering} and \reflem{Hull}\refitm{FiniteContent} imply that the realisation of $\veer(H)$ is a finite collection of finitely triangulated three-balls, perhaps meeting along separating edges. \refclm{Thick} implies that there are no separating edges. \end{proof} \begin{theorem} \label{Thm:ThreeSpace} Suppose that $\mathcal{L}$ is a loom space. Then the realisation of the induced triangulation $\veer(\mathcal{L})$ is homeomorphic to $\mathbb{R}^3$. \end{theorem} \begin{proof} Choose an ordering $(t_i)_{i \in \mathbb{N}}$ for the tetrahedra of $\veer(\mathcal{L})$. Applying \refprop{FaceConnected}, we arrange matters so that any initial subsequence of $(t_i)$ is face-connected. Let $H_n$ be the convex hull of the union of the tetrahedron rectangles of the first $n$ tetrahedra. By \refprop{ThreeBall}, the realisation of $\veer(H_n)$ is a closed three-ball. Taking interiors, we find that $|\veer(\mathcal{L})|$ is an increasing union of open three-balls. The theorem now follows from a result of Brown~\cite{Brown61}. 
\end{proof} \renewcommand{\UrlFont}{\tiny\ttfamily} \renewcommand\hrefdefaultfont{\tiny\ttfamily} \bibliographystyle{plainurl}
\section{Introduction}\label{sec:intro} The time projection chamber (TPC) was invented in 1976~\cite{bib0} and is still used in today's particle physics experiments. A TPC is also planned as the main tracking detector for the International Large Detector (ILD)~\cite{ILD}, one of the two detectors foreseen for the future International Linear Collider (ILC). Micropattern gaseous detectors (MPGDs) are the most recent technology for equipping the TPC endplate. One of these concepts is the Micromegas~\cite{bib1}, which intrinsically comes with a high granularity, given by the distance between the holes in the grid. To match this granularity on the readout side, an ASIC with a pixel pitch of \SI{55}{\micro\meter}, the Timepix chip~\cite{bib2}, is used in our experiments. Such an ASIC with a post-processed grid on top is called InGrid~\cite{chefdeville}. It is our goal to demonstrate that this type of detector can be used to read out the ILD TPC. \subsection{Motivation}\label{sec:motiv} Micro-structuring of semiconductor devices was a breakthrough regarding the tracking capabilities of silicon detectors. In most particle physics experiments, strip and pixel detectors are used as vertex and tracking detectors in the central region. For gaseous detectors, micro-structuring of the readout anode led to a similar revolution, with MPGDs replacing wires. The dimensions of the amplification structure could be reduced from the millimetre scale down to some \SI{10}{\micro\meter}. However, the size of the readout pads used with many MPGDs still does not match this high granularity. The amount of electronics involved in processing pad signals prevents moving to smaller pad sizes. Using ASICs with integrated electronics and a pixel size of about \SI{10}{\micro\meter} could be a solution to overcome this limitation.\\ In the case of thin planar drift detectors aiming for a high transverse spatial resolution $\mathrm{\sigma_{xy}}$, the limiting factor is the transverse diffusion constant $\mathrm{D_T}$ of primary electrons in the gas volume. Assuming an electron cloud from a primary ionisation with $O(10)$ electrons, the cloud will have a size of $O(30)$\,\SI{}{\micro\meter} after one millimetre of drift in a gas with $\mathrm{D_T} = O(100)$\,\SI[per-mode=symbol]{}{\micro\meter\per\sqrt{\centi\meter}}. Hence, the best possible single point resolution is $O(10)$\,\SI{}{\micro\meter}, which is comparable to a silicon detector.\\ For long drift distances, as in the case of a TPC, the benefits of a fine-grained readout plane are the detailed visualisation of $\mathrm{\delta}$ electrons, the double-track resolution, the direct $\mathrm{dE/dx}$ measurement by cluster counting (see section~\ref{sec:InGrid}) and almost no track angle effects, as the pixels are square and not rectangular. The data taken in the test beam campaign discussed later will be used to demonstrate these advantages.\\ Where X-ray detection is concerned, a pixel MPGD is capable of distinguishing between photons, minimum ionising particles and alpha particles by means of pattern recognition, ionisation density and energy deposition. Depending on the gas, even photoelectron tracks can be recognised.\\ Because of these features, the InGrid is a good candidate to be used in rare event searches such as CAST~\cite{cast} and DARWIN~\cite{darwin}, in vertex detectors~\cite{gossip} or in the ILD TPC. \subsection{History}\label{sec:hist} The concept of combining a pixel ASIC with an MPGD was first discussed by Bellazzini and Spandre in \cite{bellazzini} and Colas et al.
in \cite{ColasNikhef}. The former used a Gas Electron Multiplier (GEM) with \SI{60}{\micro\meter} pitch in combination with a 512~pixel readout with \SI{200}{\micro\meter} pitch. Their detector was built to measure the tracks of photoelectrons for X-ray polarimetry. They noted that \textit{``the real challenge with this class of detectors is the design of the read-out system which should not spoil the intrinsic performance of the device''}.\\ Colas and his colleagues used the Medipix2 chip~\cite{medipix2} in combination with a triple-GEM stack or a Micromegas. An iron source and cosmic rays were used to produce primary electrons. In this publication it was already mentioned that the goal of this effort was the development of a monolithically integrated Micromegas, which was called TimePixGrid at that time.\\ In a later publication~\cite{singlee}, the same group reported the possibility of detecting single electrons with 90~\% efficiency with this detector. For their measurements they used tracks of cosmic particles and also recorded $\mathrm{\delta}$ electrons.\\ The next step was to investigate a technique to align the holes in the grid with the pixels of the chip and to control parameters like the hole size, gap height and grid thickness. The first approach was to develop a technology to build an aluminium grid on top of a bare silicon wafer, which is reported in \cite{postprocess}. This was followed by a detailed study~\cite{chefdeville} on the fabrication and testing of an integrated grid on a CMOS pixel chip, the Timepix chip.\\ Another important step was the first test beam campaign with combined MPGD and pixel readout, which was carried out at the \SI{5}{\giga\electronvolt} electron beam of the DESY II synchrotron~\cite{bamberger}. With a setup of a triple-GEM stack and a Medipix and a Timepix chip, the resolution of such a detector was studied. For short drift distances, a single point resolution down to approximately \SI{25}{\micro\meter} could be achieved.\\ Up to now, MPGDs in combination with pixel readout have been used only in small experiments, but an R{\&}D proposal~\cite{AtlasNoteInGrid} was approved by the ATLAS Upgrade Steering Group as a meaningful R{\&}D activity. \subsection{The Timepix chip}\label{sec:Timepix} The ASIC used in our experiments is the Timepix chip~\cite{bib2}. It was developed in 2007 by the Medipix2 Collaboration and has a matrix of 256$\times$256 pixels, each with a size of \SI{55}{\micro\meter}$\times$\SI{55}{\micro\meter}. The sensitive area is \SI{1.4}{\centi\meter}$\times$\SI{1.4}{\centi\meter}. The input pad of each pixel is connected to a charge-sensitive preamplifier and a single-threshold discriminator. This analog part is then connected to the digital part of the pixel, which is driven by an external clock. Each pixel contains a 14-bit counter. The logic for this counter can be set to one of two main modes: it can either measure charge by counting the number of clock cycles the discriminator signal is above the threshold (time over threshold, TOT), or measure the arrival time with respect to a trigger signal. This can be achieved by counting the number of clock cycles from the moment when the signal exceeds the threshold until the end of a shutter window opened by the trigger. By knowing the length of the shutter window and the trigger delay, one can calculate the arrival time. The TOT mode needs a calibration to transform the measured clock cycles into the charge that generated the signal in the preamplifier.
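To make the two counting modes concrete, the following short Python sketch converts a raw counter value into an arrival time or a charge. It is purely illustrative and not part of the actual data acquisition software; the variable names and the exact sign conventions (in particular how the trigger delay enters) are our own assumptions.
\begin{verbatim}
# Illustrative conversion of raw Timepix counter values, assuming the
# conventions described in the text (not the actual DAQ code).

CLOCK_FREQUENCY_HZ = 50e6   # example external clock of 50 MHz

def arrival_time_s(counts, shutter_length_s, trigger_delay_s):
    """Time mode: the counter runs from the threshold crossing until the
    shutter closes, so a larger count corresponds to an earlier hit.
    The arrival time is quoted relative to the trigger signal."""
    time_before_shutter_close = counts / CLOCK_FREQUENCY_HZ
    return trigger_delay_s + shutter_length_s - time_before_shutter_close

def charge_from_tot(tot_counts, tot_calibration):
    """TOT mode: the count is proportional to the time the preamplifier
    signal stays above threshold; a per-pixel calibration (passed in here
    as a callable) maps clock cycles to the input charge."""
    return tot_calibration(tot_counts)

# Example: a hit with 8000 counts in a 1 ms shutter opened 5 us after the trigger.
print(arrival_time_s(8000, 1e-3, 5e-6))   # about 0.845 ms after the trigger
\end{verbatim}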
An injection capacitor in every pixel can be used for a calibration with well-defined input charge by test pulses.\\ After each shutter window, the complete pixel matrix needs to be read out, that is, 917504 bits. The Timepix chip is designed to be operated with a maximum external clock frequency of \SI{200}{\mega\hertz}. This results in a readout rate of at most \SI{218}{\hertz}, if the shutter length is short compared to the readout time. For a typical setup, the external clock frequency is about \SI{50}{\mega\hertz} and the shutter length is of the order of \SI{1}{\milli\second}, resulting in a readout rate of \SI{50}{\hertz}. Data processing and transmission have not been taken into account in these calculations. Chips can be connected in a daisy chain, where the data is forwarded from one chip to the next. The maximum readout rate will in this case be divided by the number of chips in the chain. \subsection{The InGrid}\label{sec:InGrid} An InGrid is a special type of Micromegas. In a photolithographic process, as described in \cite{postprocess} and \cite{chefdeville}, the grid is produced on the Timepix chip and the holes of the grid are aligned to the pixels. The grid is set to a positive potential such that primary electrons, originating from ionisations above the grid, are accelerated when they enter a hole. If the electric field is large enough, an avalanche of secondary electrons is created and a signal can be registered in the pixel underneath the hole. The chip is protected against sparks by a highly resistive layer of silicon nitride. An SEM image of an InGrid can be seen in figure~\ref{fig:InGrid2}.\\ As the distance between grid and chip is only \SI{50}{\micro\meter}, the charge will not spread to a neighbouring pixel by diffusion. However, for large gas amplifications, the protection layer could spread the charge. As long as no more than one primary electron enters a hole and the gas amplification is high enough, each primary electron will activate one pixel. It is hence possible with this type of detector to detect primary electrons with a very high efficiency.\\ An example of a measurement where this property is used is the recording of ionisation spectra of X-ray photons in a gas. An incoming photon ionises gas atoms and releases all its energy. The electrons released in this process drift towards the grid in a low electric field. Diffusion spreads this charge cloud, such that the probability that two or more primary electrons enter the same hole of the grid decreases. By counting the number of activated pixels in one recorded frame, an ionisation spectrum can be generated. The high single electron detection efficiency leads to a high energy resolution. For a given gas, the number of activated pixels is proportional to the X-ray energy. Using the same method on the ionisation density of a charged particle traversing a gas, the energy loss per track length ($\mathrm{dE/dx}$) can be directly measured by counting the number of activated pixels per track length. This can only be achieved in combination with the high spatial resolution of the Timepix chip. \begin{figure}[tbp] \centering \includegraphics[width=.8\textwidth]{InGrid2.jpg} \caption{SEM image of an InGrid with partly removed grid, made by the IZM Berlin. The height of a pillar is \SI{50}{\micro\meter}.} \label{fig:InGrid2} \end{figure} \section{Experimental setup}\label{sec:setup} In the test beam campaign described here, we operated eight InGrid chips arranged in a block of 2$\times$4, a so-called octoboard.
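Before describing the mechanics of the modules, the readout rates quoted in section~\ref{sec:Timepix} can be cross-checked with a few lines of arithmetic. The sketch below (plain Python, ignoring data processing, transmission and any protocol overhead) reproduces the single-chip numbers and extends them to a daisy chain of eight chips, as on the octoboard; the chosen clock frequencies are only examples.
\begin{verbatim}
# Rough frame-rate estimate for serially read out Timepix chips.
# Ignores data processing, transmission and protocol overhead.

BITS_PER_FRAME = 256 * 256 * 14          # 917504 bits per chip

def frame_rate_hz(clock_hz, shutter_s=0.0, n_chips=1):
    """The readout time grows linearly with the number of daisy-chained
    chips; the shutter (sensitive) time is spent once per frame."""
    readout_s = n_chips * BITS_PER_FRAME / clock_hz
    return 1.0 / (readout_s + shutter_s)

print(frame_rate_hz(200e6))                            # ~218 Hz, single chip
print(frame_rate_hz(50e6, shutter_s=1e-3))             # ~52 Hz, typical setup
print(frame_rate_hz(50e6, shutter_s=1e-3, n_chips=8))  # ~7 Hz, eight chips
\end{verbatim}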
The chips are glued on a carrier board that is placed inside a module made of aluminium and connected to an intermediate board. A similar module, the Octopuce~\cite{lupberger}, has already been constructed as a demonstrator and was tested in a shorter campaign with moderate gas gain. \subsection{InGrid production} The InGrid chips used in this campaign are from the fourth wafer-scale production process, performed in collaboration with the University of Twente and Fraunhofer IZM Berlin~\cite{thorsten}. At the very beginning, when this technology was pioneered and optimised by NIKHEF and the University of Twente, only individual chips could be processed at a time. Because of the high demand from the community, due to the growing surface area of the detectors, this production technique has been transferred to the wafer scale. Now about 100 chips can be produced in one run. The quality and performance of these chips are similar to those of individually processed chips. In an argon/isobutane 95/5 gas mixture, energy resolutions of 5~\% for an $\mathrm{Fe^{55}}$ escape peak and gas gains of 10000 can be reached. \subsection{Readout system} \begin{figure}[tbp] \centering \includegraphics[width=.6\textwidth]{SRSFEC2.JPG} \caption{Scalable Readout System with a single Timepix chip at the front end. From left to right: single Timepix chip on a golden carrier at the intermediate board, connected to an adapter card (light green) at the SRS FEC.} \label{fig:SRSFEC} \end{figure} Based on the Scalable Readout System (SRS)~\cite{SRS}, a new readout system for the Timepix chip has been developed. In our setup, up to eight daisy-chained Timepix chips on a carrier board are plugged onto an intermediate board. This board is directly connected to a power supply for the chips
and to the SRS with a VHDCI cable. An adapter card with a connector for this cable was designed to be plugged into the front-end card (FEC); see figure~\ref{fig:SRSFEC}. For the FPGA on the FEC, dedicated firmware has been developed, following a first approach described in \cite{zamrowski}. The data from the chips are processed in the FPGA and sent via Gbit Ethernet to a PC. For the computer, C++-based data acquisition software was written. \paragraph{Current status} All the functionality for data taking is implemented in software and firmware. The basic commands, like reset, setting the pixel matrix, reading out the data, setting the DACs, and opening or closing the shutter, are performed by the FPGA. By executing combinations of these commands from the software, threshold equalisations, calibrations, data taking, etc.\ can be performed. For calibration, external test pulses are used. In the future, a multiplexer already present on the intermediate board will be used for an automatic calibration. If only one or a few chips have to be read out, a smaller system based on a Xilinx ML605 evaluation board is also available and will be used in the CAST experiment. \subsubsection{Requirements and solutions} For large-area detectors, a modular readout system is necessary. The decision was made to use the SRS from RD51 at CERN, as with this system several FECs can be combined by an Ethernet switch. Dedicated hardware was designed to support the modularity. The FPGA can be programmed according to the user's needs. For example, zero suppression and parallelised data management have been implemented to reach the maximum theoretical readout speed of up to \SI{100}{\hertz} for a single chip. \subsubsection{FPGA Firmware} The FPGA on the current SRS~FEC is a Xilinx~Virtex~5~vlx50t and will be updated to a Virtex~6 in the next version. The task of the firmware operating on this FPGA is, on the one hand, to control the chips and, on the other hand, to read out the data and transmit it to the PC as fast as possible. The code consists of several modules and is mainly written in VHDL. The Ethernet communication is provided by the Xilinx Ethernet Media Access Control~(EMAC) in combination with some SRS common code. These modules communicate with the Timepix control module, which transforms the Ethernet commands into the command signals the chips need, and vice versa. Another module is responsible for data handling and temporary data storage during zero suppression. To extend these features, the DDR2 memory of the SRS is implemented using the Memory Interface Generator~(MIG) together with a memory control module. A simplified schematic view of the FPGA firmware can be seen in figure~\ref{fig:Firmware}. \begin{figure}[tbp] \centering \includegraphics[width=.8\textwidth]{Firmware.jpg} \caption{Simplified schematic of the FPGA firmware used for the SRS Timepix readout.} \label{fig:Firmware} \end{figure} \subsection{Test beam setup} The goal of this test beam was to record tracks of ionising particles with two modules for a complete test beam campaign and to demonstrate the functionality of the SRS system. The detectors had to show a stable behaviour at a gain with a high single electron detection efficiency. Data to study the properties of the pixelated readout had to be taken, for example at different track angles, momenta and z-positions.
Within the data analysis this data can be used to investigate the track angle effect, the $\mathrm{dE/dx}$ resolution and the transverse spatial resolution.\\ For data taking, the large prototype (LP)~\cite{LP} of the linear collider TPC collaboration (LCTPC) was used. It is located in test beam area T24/1 at the DESY~II synchrotron. During the 16-day campaign in spring~2013, data with magnetic field was also recorded, as the LP is inserted in a \SI{1}{\tesla} superconducting magnet called PCMAG. The TPC field cage has an inner diameter of \SI{72}{\centi\meter} and a length of \SI{56.76}{\centi\meter}. The anode endplate can host up to seven modules and resembles a segment of the ILD TPC endplate, as shown in figure~\ref{fig:LP}. The SRS operated the Timepix chips with a clock frequency of \SI{40}{\mega\hertz} and a readout rate of \SI{2.5}{\hertz}, which is not the maximum speed for a chain of eight chips. The chips were set to the mode that measures the arrival time of the charge. With this information and the drift velocity inside the gas volume, the z-position of the primary ionisation can be reconstructed. The z-axis is the axis along the TPC volume, while the x-y-plane corresponds to the readout plane. For gas amplification in the InGrid, the grid voltage was set to \SI{350}{\volt}, which corresponds to a gas gain of about 6000. The trigger for the system came from the coincidence signal of two scintillators in front of the TPC. The logic for recording a frame of data was implemented in the FPGA firmware and defines a shutter window within which the chip is sensitive. The shutter window was set such that the whole TPC volume is read out. More than 2~million tracks from electrons with up to \SI{6}{\giga\electronvolt} in T2K gas (95~\% Ar, 3~\% CF$_4$, 2~\% iC$_4$H$_{10}$) have been recorded. This gas has a drift velocity of \SI[per-mode=symbol]{74}{\milli\meter\per\micro\second}. As the \SI{40}{\mega\hertz} clock is also used as the sampling frequency for the charge arrival time measurement, the resolution in the z-direction cannot be better than $\sigma_{z,\mathrm{min}} = \frac{\SI[per-mode=symbol]{74}{\milli\meter\per\micro\second}}{\sqrt{12} \cdot \SI{40}{\mega\hertz}} = \SI{534}{\micro\meter}$.\\ The campaign included measurements at $\mathrm{B} = \SI{0}{\tesla}$ and $\mathrm{B} = \SI{1}{\tesla}$ with two different drift fields: \SI[per-mode=symbol]{230}{\volt\per\centi\meter} and \SI[per-mode=symbol]{130}{\volt\per\centi\meter}, respectively. The z-position and the angle of the beam with respect to the endplate were varied, as well as the electron momentum. Two different readout modules were used, one with a triple-GEM amplification structure in combination with an octoboard of unprocessed Timepix chips and another one with InGrids. Analysis of the GEM data has not started yet; hence, we will focus on the InGrid module.\\ In figure~\ref{fig:InGridMod} an exploded view of the module can be seen. It is made of several aluminium parts and PCBs. The outermost part on the right-hand side is the intermediate board (blue). It is connected to the chip carrier board (green) by a 40-pin connector for power, control and data signals. Additionally, there are two 4-pin connectors for the grid high voltage. In between the two PCBs there is a cooling structure made of aluminium~\cite{menzen}. These four parts can be plugged from the back side into the aluminium frame. The innermost part facing the gas volume of the TPC is the anode plate, which also overlaps the InGrid edges in order to minimise field distortions.
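As an aside, the minimum $z$ resolution quoted above follows from the time binning alone, and the number is easy to reproduce. The short sketch below (plain Python; the variable names are ours and the calculation is only a cross-check of the formula in the text) evaluates $v_{\mathrm{drift}}/(\sqrt{12}\, f_{\mathrm{clock}})$.
\begin{verbatim}
import math

# Minimum z resolution from time binning alone: the 40 MHz clock samples
# the arrival time in bins of v_drift / f_clock, and a uniform
# distribution over one bin has an RMS of (bin width) / sqrt(12).

V_DRIFT_MM_PER_US = 74.0   # drift velocity of T2K gas quoted in the text
F_CLOCK_PER_US = 40.0      # 40 MHz = 40 clock cycles per microsecond

bin_width_mm = V_DRIFT_MM_PER_US / F_CLOCK_PER_US       # 1.85 mm per cycle
sigma_z_min_um = 1000.0 * bin_width_mm / math.sqrt(12.0)

print(round(sigma_z_min_um))   # ~534 micrometres
\end{verbatim}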
As the InGrid is geometrically at a different z-position than the anode plate, the electrical potential needs to be adjusted. The setting of the correct potential was first studied in the test beam. Field distortions at the edges of the board cause primary electrons to be focussed onto the anode plate or towards the center of the chip. If the complete surface of the chip is illuminated in an integrated image of many frames (occupancy), then the field distortions are minimised. Hence, the voltage difference between grid and anode was varied until the occupancy of the octoboards was optimised.\\ \begin{figure}[tbp] \centering \includegraphics[width=.6\textwidth]{LP.jpg} \caption{Illustration of the ILD TPC and image of the LP endplate.} \label{fig:LP} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=.6\textwidth]{InGridMod.jpg} \caption{Exploded view of the InGrid module.} \label{fig:InGridMod} \end{figure} \subsection{Preliminary analysis} A first preliminary analysis has been started using the MarlinTPC~framework~\cite{Marlin}. For the track reconstruction, a two-dimensional Hough transformation and a track fit were used. First results are presented here and discussed in full detail in \cite{menzen}. In figures~\ref{fig:eventdisplay1} and \ref{fig:eventdisplay2}, online event display images of tracks for the two modules are shown. For the InGrid module, a double track event with clearly visible primary electron signals can be seen, whereas for the GEM module the track consists of several big blobs with less information about the number of primary electrons. In the case of a GEM, several primary electrons end up in a single hole of the top GEM. Moreover, the signal will spread out in the triple-GEM stack and along the drift inside the stack towards the Timepix chips, creating a large charge deposition. If the ionisation density of the charged particle is high, these depositions may overlap. Due to the characteristics of the InGrid detector (see section~\ref{sec:InGrid}), the primary electrons are still visible. For the first analysis, a dataset of a z-scan with $\mathrm{E_{Drift}} = \SI[per-mode=symbol]{230}{\volt\per\centi\meter}$ was chosen. The transverse diffusion for this field configuration was calculated with MAGBOLTZ~\cite{magboltz} to be $\mathrm{D_T(B = 0 T)} \approx \SI[per-mode=symbol]{310}{\micro\meter\per\sqrt{\centi\meter}}$ and $\mathrm{D_T(B = 1 T)} \approx \SI[per-mode=symbol]{100}{\micro\meter\per\sqrt{\centi\meter}}$. \begin{figure}[tbp] \centering \includegraphics[width=.8\textwidth]{InGridGEM1_1.jpg} \caption{Online event display of a double track event from the InGrid module. The two tracks stem from different z-positions, as they have different transverse diffusion widths.} \label{fig:eventdisplay1} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=.8\textwidth]{InGridGEM2_1.jpg} \caption{Online event display of an event from the GEM module.} \label{fig:eventdisplay2} \end{figure} \paragraph{Cuts} Due to a rarely occurring bit shift error in the readout system, $\approx 4\%$ of the hits are outside the shutter window. They are not physically meaningful. One fourth of these originated from non-hit pixels, another fourth from pixels hit during the shutter window. From the recorded data, only the physically meaningful hits from within the shutter window were accepted. For a simple analysis, cuts were applied to select single straight tracks; see figure~\ref{fig:cuts}. First of all, only tracks with more than 200~hits were accepted.
This excludes track segments and $\mathrm{\delta}$-electrons. Next, events with more than two tracks were rejected to simplify the analysis. Finally, tracks were excluded that are so close to the lower and upper borders of the lower chip row (chips 1-4) that some primary electrons could have diffused out of the sensitive area. Since the residuals of the hits on a track depend on the distance the primary electrons have to drift towards the anode, this cut depends on the z-position of the track. The cut was set to a distance of 3~$\mathrm{\sigma}$ of the expected transverse diffusion width. Chips 1-4 were chosen, as the beam was focussed on this lower chip row for data taking.\\ Figure~\ref{fig:recoTracks} shows two reconstructed tracks in the x-y~plane: one for $\mathrm{B} = \SI{0}{\tesla}$ and another for $\mathrm{B} = \SI{1}{\tesla}$. The track with $\mathrm{B} = \SI{1}{\tesla}$ originates from an eight times larger distance in the z-direction. The suppression of the transverse diffusion is clearly visible, as both tracks have approximately the same width. Another remarkable fact can be seen in the enlarged part of the figure: the individual hits of the primary electrons are clearly visible. Even over the short track length of \SI{8}{\milli\meter} there are $O(100)$ track points. \begin{figure}[tbp] \center
)$ is at most $R'$, verifying the inclusion $$ P_{U_2}(U_1)\subset N_{R'}(P_{U_1}(U_2)). $$ The reverse inclusion is proven by switching the roles of $U_1$ and $U_2$. \qed \medskip Continuing with the notation of the lemma: \begin{cor}\label{cor:0-flow-space} If $d(U_1,U_2)\le R$, then $$\operatorname{Hd}(P_{U_1}(U_2), N_{R'}(U_2)\cap U_1)\le 2R' \hbox{~~and~~} \operatorname{Hd}(N_{R'}(U_1)\cap U_2, N_{R'}(U_2)\cap U_1)\le R'.$$ \end{cor} \proof 1. According to the lemma, $P_{U_1}(U_2)\subset N_{R'}(U_2)\cap U_1$. Conversely, given $x\in N_{R'}(U_2)\cap U_1$, there exists $y\in U_2$ with $d(x, y)\le R'$. Hence, $d(y, P_{U_1}(y))\le R'$, implying $$ d(x, P_{U_1}(y))\le 2R'. $$ Since $P_{U_1}(y)$ lies in $P_{U_1}(U_2)$, the first claim follows. 2. The second claim is clear and holds for arbitrary $R'\ge 0$ and arbitrary subsets of arbitrary metric spaces. \qed \medskip Thus, we proved that if $d(U_1,U_2)\le R$, then all four subsets $$P_{U_1}(U_2), P_{U_2}(U_1), N_{R'}(U_1)\cap U_2, N_{R'}(U_2)\cap U_1$$ are within Hausdorff distance $2R'$ from each other. \section{Quasiconvex subgroups and actions} In this section we discuss quasiconvexity in the context of subgroups of hyperbolic groups and, more generally, group actions. \begin{defn} \label{defn:qc-subgroup} A subgroup $H$ of a hyperbolic group $G$ is said to be {\em quasiconvex} \index{quasiconvex subgroup} \index{quasiconvex action} if it is a quasiconvex subset of a Cayley graph of $G$ for a finite generating set. More generally, a (metrically) proper isometric action of a discrete group $H$ on a geodesic hyperbolic metric space $X$ is {\em quasiconvex} if one (equivalently, every) $H$-orbit in $X$ is a quasiconvex subset in $X$. \end{defn} \begin{lemma}\label{lem:qc action} If the action of $H$ on $X$ is {\em quasiconvex} then $H$ is finitely generated, the orbit map $H\to H\cdot x\subset X$ is a qi embedding and $H$ is a hyperbolic group. \end{lemma} \proof 1. Quasiconvexity of $Hx\subset X$ implies that $Hx$ is coarsely connected. Hence, by the Milnor--Schwarz Lemma, $H$ is finitely generated. We, thus, equip $H$ with a word metric corresponding to a finite generating set. 2. Metric properness of the action implies that the orbit map $o_x: H\to Hx\subset X$ is uniformly proper. Since the image of this map is a quasiconvex subset of $X$, the orbit map is a qi embedding (see Lemma \ref{lem:up+qc->qi}). 3. Since $X$ is assumed to be hyperbolic, in view of Lemma \ref{lem:qi-preserves-paths}, the existence of a qi embedding $o_x$ implies hyperbolicity of $H$. \qed \medskip We now discuss the notion of coarse intersection in relation to quasiconvex subgroups and actions. For general quasiconvex subsets $U, V$ of hyperbolic spaces $X$, coarse intersections $N_R(U)\cap V$ might not be Hausdorff-close to the actual intersections $U\cap V$: For instance, $U\cap V$ might be empty while for some $R>0$ the intersection $N_R(U)\cap V$ might be unbounded. As a specific example, consider $X=\mathbb R$ (which is $0$-hyperbolic) and $1$-quasiconvex subsets $U, V$ consisting of odd/even integers respectively. Then $N_1(U)\cap V=V$, while $U\cap V=\emptyset$. Nevertheless, in the group-theoretic setting we have \begin{lemma}\label{lem:qc-subgroups} Let $G$ be a hyperbolic group, $U, V$ be quasiconvex subgroups in $G$ with $W:= U\cap V$. Then for every $r>0$ there exists $R=R_{\ref{lem:qc-subgroups}}(G, r)$ such that $$ W\subset W_r:=V\cap N_r(U)\subset N_R(W). $$ \end{lemma} \proof The proof is quite standard, cf. \cite[pp. 164-165]{gromov-ai} or Lemma 2.6 in \cite{MR1389776}.
Suppose that $u\in U, v\in V$ satisfy $d_G(u,v)\le r$, i.e. $u^{-1}v\in B_G(1, r)$. Since the ball $B_G(1, r)$ is finite, there exists a finite set of pairs $(u_i, v_i)\in U\times V, i=1,...,n$ such that for any pair $u\in U, v\in V$ within distance $r$ from each other, the product $u^{-1}v$ equals $u_i^{-1}v_i$ for some $i\in \{1,...,n\}$. We have $$ u^{-1}v=u_i^{-1}v_i \Rightarrow u u_i^{-1}= v v_i^{-1}=w\in W=U\cap V. $$ Hence, $u= w u_i, v=wv_i$ and, therefore, for $$ R:= \max \{|v_i|: i=1,...,n\} $$ we have $d_G(v, W)\le R$. \qed \begin{cor}\label{cor:proj-to-qc-subgroup} In the setting of Lemma \ref{lem:qc-subgroups}, the distance between the restrictions to $V$ of the projections $P_{X,W}, P_{X,U}$ is at most $C_{\ref{cor:proj-to-qc-subgroup}}(G, \delta,\la)$, where $\delta$ is the hyperbolicity constant of $G$ and $\la$ is the maximum of the quasiconvexity constants of $U, V$ in $G$. \end{cor} \proof We take $r:= r_{\ref{lem:two-projections}}(\la,\la,\delta)$. According to Lemma \ref{lem:two-projections}, the restrictions to $V$ of the projections $$ P_{G, W_r}, P_{G,U} $$ are within distance $\mu+3\delta+r$. By Lemma \ref{lem:qc-subgroups}, the subsets $W, W_r\subset X$ are $R=R_{\ref{lem:qc-subgroups}}(G, r)$-Hausdorff-close. Therefore, by Corollary \ref{cor:proj-to-close-subsets}, the distance between the projections $P_{G, W_r}, P_{G, W}$ is $\le D_{\ref{cor:proj-to-close-subsets}}(\delta, \la, R)$. Thus, we can take $$ C_{\ref{cor:proj-to-qc-subgroup}}(G, \delta,\la):= D_{\ref{cor:proj-to-close-subsets}}(\delta, \la, R) + \mu+3\delta+r. \qed $$ As an immediate consequence we obtain the standard result on intersections of quasiconvex subgroups of hyperbolic groups (see e.g. \cite{short} or \cite[Proposition 4.13]{bridson-haefliger}): \begin{cor}\label{cor:qc-in} If $G$ is a hyperbolic group and $U, V$ are quasiconvex subgroups of $G$, then $U\cap V$ is also a quasiconvex subgroup of $G$, $U$ and $V$. \end{cor} \proof This is a combination of Lemmata \ref{lem:qc-subgroups}, \ref{lem:coarse-intersections-are-qc-2} and \ref{lem:nbds-of-qc}(2). \qed \medskip Essentially the same proofs as above work in the more general setting, when we have a quasiconvex metrically proper action of a hyperbolic group $G$ on a $\delta$-hyperbolic geodesic metric space $X$, and $Y\subset X$ is a quasiconvex subset with locally finite $G$-orbit (see Definition \ref{defn:locally finite action}). \begin{prop}\label{prop:proj-to-qc-action} Let $H< G$ denote the stabilizer of $Y$ in $G$, $x\in Y$. Then: a. There exists a function $R=R_{\ref{prop:proj-to-qc-action}}(x, r)$ such that $$ Hx\subset Gx\cap N_r(Y)\subset N_R(Hx) $$ b. $Hx$ is a $\mu$-quasiconvex subset of $X$, $\mu=\mu_{\ref{prop:proj-to-qc-action}}(x, \delta, \la)$, where $\la$ is the maximum of quasiconvexity constants (in $X$) of $Gx$ and $Y$. c. $H$ is a quasiconvex subgroup of $G$. d. The restrictions of $P_{X,Y}$ and $P_{X, Hx}$ to the orbit $Gx\subset X$ are within distance $C=C_{\ref{prop:proj-to-qc-action}}(x,\delta,\la)$. \end{prop} \proof a. Suppose that $d_X(gx,Y)\le r$ for some $g\in G$; equivalently, $g^{-1}Y\cap B(x,r)\ne \emptyset$. By the definition of local finiteness of the $G$-orbit $GY$, there exist $h^{-1}\in H$ and $g_i^{-1}\in S$, where $S\subset G$ is a finite subset depending only on $x$ and $r$, such that $g^{-1}= g_i^{-1}h^{-1}$. We let $R=R(x,r)$ be the maximum of distances $d(x, g_i^{-1}(x))$ taken over $g_i^{-1}\in S$. Then $d(hx, gx)\le R$. This proves (a). b. We take $r:= 2\la + 2\delta$.
By Lemma \ref{lem:coarse-intersections-are-qc-2}, the coarse intersection $Gx\cap N_r(Y)$ is $\la_{\ref{lem:coarse-intersections-are-qc-2}}(r,\delta)$-quasiconvex in $X$. On the other hand, by Part (a), $$ \operatorname{Hd}(Hx, Gx\cap N_r(Y))\le R. $$ Therefore, by Lemma \ref{lem:nbds-of-qc}(2), the subset $Hx$ is $\mu$-quasiconvex in $X$ with $$ \mu= 2R+2\delta+\la_{\ref{lem:coarse-intersections-are-qc-2}}(r,\delta).$$ c. Since the actions of $G$ and $H$ on $X$ are quasiconvex, the orbit maps $o_x: G\to X$, $o_x: H\to X$ are qi embeddings (see Lemma \ref{lem:qc action}). From this, we conclude that $H$ is qi embedded and, hence, is quasiconvex in $G$. d. The proof of this part is identical to that of Corollary \ref{cor:proj-to-qc-subgroup} and we omit it. \qed \medskip We assume now that $X$ is hyperbolic and that for each point $x\in X$ and each ideal boundary point $\xi\in \partial_{\infty} X$, there exists a geodesic ray $x\xi$ connecting $x$ to $\xi$ (e.g. $X$ is a proper geodesic metric space). \begin{defn}\label{defn:conical-limit} \index{conical limit point} \index{limit point} Suppose that $G$ acts isometrically and properly on $X$. A point $\xi\in \partial_{\infty} X$ is called a {\em limit point} of this action (or, simply, a limit point of $G$) if there exists a sequence $g_i\in G$ such that for some (equivalently, every) $x\in X$, the sequence $(g_i(x))$ converges to $\xi$. A limit point $\xi$ is called {\em conical} if the sequence $(g_i)$ can be chosen so that for some (equivalently, all) $x\in X, y\in X$, there exists a constant $R$ such that $d(g_i y, x\xi)\le R$ for all $i$. \end{defn} The proof of the following result (a {\em Beardon--Maskit criterion} for quasiconvexity) can be found for instance in Swenson's paper \cite{MR1804703} (cf. also \cite{MR1317633, bowditch-cgnce,MR1637829}); we will only need the easier direction (every limit point of a quasiconvex action is conical): \begin{thm} Suppose that $X$ is a proper geodesic hyperbolic metric space. Then a proper isometric action of a discrete group $G\curvearrowright X$ is quasiconvex if and only if every limit point of $G$ is conical. \end{thm} \section{Cobounded pairs of subsets}\label{sec:Cobounded pairs of subsets} Recall that in Definition \ref{def:cob} we defined Lipschitz-cobounded pairs of subsets of general metric spaces. Below, we establish a characterization of cobounded pairs in hyperbolic spaces. \begin{prop}[Characterizations of cobounded pairs] \label{prop:cobounded2} The following are equivalent for $\la$-quasiconvex subsets $Y, Z\subset X$ in a $\delta$-hyperbolic geodesic metric space $X$: \begin{enumerate} \item $Y, Z$ are $C_1$-Lipschitz cobounded. \item For every $R$ there exists $D=D(R)$ such that if $$ a_i\in Y, b_i\in Z, i=1, 2, $$ satisfy $d(a_i, b_i)\le R$, $i=1,2$, then $d(a_1,a_2)\le D, d(b_1, b_2)\le D$. \item The diameters of nearest-point projections $P_{X,Y}(Z)$, $P_{X,Z}(Y)$ are $\le C_2$. \end{enumerate} Moreover, once a constant $C_i$ (or a function $D(R)$) in one of the items is chosen, this, together with $\delta$ and $\la$, determines the constant/function in the other two items. \end{prop} \proof The implication (1)$\Rightarrow$(2) is proven in Lemma \ref{lem:cobounded1} for arbitrary subsets of arbitrary metric spaces, with $$ D=2C_1(R +1) +C_1 $$ For the implication (2)$\Rightarrow$(3), consider points $a_i\in Y, b_i\in Z$ such that $a_i\in P_{X,A}(b_i), i=1,2$. 
By Lemma \ref{proj-geod}, if $d(a_1,a_2)\geq D_{\ref{proj-geod}}(\delta, \la)$ then there exists $R=R_{\ref{proj-geod}}(\delta,\la)$ such that $$ a_1a_2\subset N_R(b_1b_2). $$ In that case, there are points $b_i'\in Z$ within distance $\le R+\la$ from $a_i$, $i=1,2$. Then, by (2), $$ d(a_1, a_2)\le D(R+\la). $$ Hence, we can take $C_2= \max\{D_{\ref{proj-geod}}(\delta, \la), D(R+\la)\}$. For the implication (3)$\Rightarrow$(1), we can take the retractions $$ r_A:= P_{X,A}, \quad r_B:= P_{X,B}. \qed $$ \medskip In view of this proposition, for quasiconvex subsets of hyperbolic spaces we will adopt the following terminology: \begin{defn}\label{defn:hyp-cobounded} \index{$C$-cobounded subsets in a hyperbolic space} A pair of subsets $Y, Z\subset X$ in a hyperbolic space $X$ is $C$-cobounded if the diameters of the projections $P_{X,Y}(Z), P_{X,Z}(Y)$ are $\le C$. \end{defn} \begin{lemma}\label{cobdd-cor} Given $\delta\geq 0$ and $\la\geq 0$ there are constants $R=R_{\ref{cobdd-cor}}(\delta, \la)$ and $D=D_{\ref{cobdd-cor}}(\delta, \la)$ such that the following holds: Suppose $X$ is a $\delta$-hyperbolic metric space and $Y, Z\subset X$ are two $\la$-quasiconvex and $R$-separated subsets. Then $Y, Z$ are $D$-cobounded. In fact, one can take $D=2\la+7\delta$ and $R= 2\la+5\delta$. \end{lemma} \proof We will show that the choice of $D=D_{\ref{proj-geod}}(\delta,\la)=2\la+7\delta$ and $$ R=\la+R_{\ref{proj-geod}}(\delta, \la)= 2\la+5\delta$$ works. Let $R_1=R_{\ref{proj-geod}}(\delta, \la)$, so that $R=\la+R_1$. Suppose the diameter of $P_{X,Z}(Y)$ is greater than or equal to $D$. Let $x,y\in Y$ be such that $d(P_{X,Z}(x), P_{X,Z}(y))\geq D$. Then by Lemma \ref{proj-geod} $P_{X,Z}(x)\in N_{R_1}(xy)$. But $Y$ is $\la$-quasiconvex and $x,y\in Z$. It follows that $P_{X,Z}(x)\in N_R(Y)$. Thus if $Y,Z$ are $R$-separated then the diameter of $P_{X,Y}(Z)$ and $P_{X,Z}(Y)$ are both less than $D$. \qed \medskip A consequence of this lemma allows one to simplify the verification that two subsets are cobounded; namely, it suffices to check only that one projection is bounded: \begin{cor}\label{cor:cob-char} Suppose that $U, V\subset X$ are $\la$-quasiconvex subsets in a $\delta$-hyperbolic space. a. If $\operatorname{diam}(P_U(V))\le D$, then $\operatorname{diam}(P_V(U))\le C=C_{\ref{cor:cob-char}}(\la,\delta,D)$, where $D\le C$. In particular, the pair $(U, V)$ is $C$-cobounded. b. If the pair $U, V\subset X$ is not $D_{\ref{cobdd-cor}}(\delta, \la)$-cobounded then $$ \operatorname{Hd}(P_{U}(V), P_{V}(U))\le D_{\ref{cor:cob-char}}(\delta,\la)= R_{\ref{cor:cob-char}}(\delta, \la)=2\la+3\delta + R_{\ref{cobdd-cor}}(\delta, \la). $$ \end{cor} \proof a. There are two cases to consider: 1. If $d(U,V)\ge R=R_{\ref{cobdd-cor}}(\delta, \la)$, then the pair $(U,V)$ is $D_{\ref{cobdd-cor}}(\delta, \la)$-cobounded by Lemma \ref{cobdd-cor}. 2. Suppose that $d(U,V)\le R=R_{\ref{cobdd-cor}}(\delta, \la)$. Then by Lemma \ref{lemma0-flow-space}, $$ \operatorname{Hd}(P_{U}(V), P_{V}(U))\le R'=2\la+3\delta +R.$$ Since $\operatorname{diam}(P_U(V))\le D$, it follows that $$ \operatorname{diam}(P_V(U))\le D+R'. $$ Taking $C:= \max(D+R', D_{\ref{cobdd-cor}}(\delta, \la))$, concludes the proof of a. \begin{rem} Note that $C= \max(D+4\la+8\delta, D_{\ref{proj-geod}}(\delta, \la))$. \end{rem} b. By the argument in Part a1, since the pair $U, V\subset X$ is not $D_{\ref{cobdd-cor}}(\delta, \la)$-cobounded, $d(U,V)< R=R_{\ref{cobdd-cor}}(\delta, \la)$. 
Thus, as in Part a2, $$ \operatorname{Hd}(P_{U}(V), P_{V}(U))\le R'=2\la+3\delta +R= 2\la+3\delta + R_{\ref{cobdd-cor}}(\delta, \la). \qedhere $$ \begin{rem}\label{rem:cbb-geodesics} If $U_1, U_2$ are geodesics in $X$, $\la=\delta$ and, by Lemma \ref{proj-geod}, one can take $D_{\ref{cobdd-cor}}(\delta, \delta)= 8\delta$ and $R_{\ref{cor:cob-char}}(\delta, \la)=12\delta$. \end{rem} \medskip Another application of Lemma \ref{cobdd-cor} is: \begin{cor}\label{cor:noncbd} Suppose that $\la$-quasiconvex subsets $U_1, U_2\subset X$ are not $D=D_{\ref{cobdd-cor}}(\delta, \la)$-cobounded. Then $$ P_{U_2}(U_1)\subset N_{4\la+8\delta}(U_1)\cap U_2. $$ \end{cor} \proof By Lemma \ref{cobdd-cor}, since $U_1, U_2$ are not $D$-cobounded, then $$ d(U_1,U_2)\le R=R_{\ref{cobdd-cor}}(\delta, \la)=2\la+5\delta. $$ According to Lemma \ref{lemma0-flow-space}, $$ P_{U_2}(U_1)\subset N_{R'}(U_1)\cap U_2, $$ where $R'=2\la+3\delta +R=4\la+8\delta$. \qed \medskip \begin{lemma}\label{cobdd-lem1} Given $\delta\geq 0$, $\la\geq 0$ and $C\ge 0$, there exists a constant $D= D_{\ref{cobdd-lem1}}(\delta, \la, C)$ such that the following holds: Suppose $X$ is a $\delta$-hyperbolic metric space and $U,V\subset X$ are two nonempty $\la$-quasicon\-vex and $C$-cobounded subsets. Then there are points $x_0\in U_0=P_U(V)\subset U$, $y_0\in V_0=P_V(U)\subset V$, such that $x_0y_0\subset N_D(xy)$, for all $x\in U$ and $y\in V$. \end{lemma} \begin{proof} Since the pair $(U, V)$ is $C$-cobounded, $$ \operatorname{diam}(V_0)\le C, \quad \operatorname{diam}(U_0)\le C. $$ Choose any pair of points $x_0\in U_0$, $y_0\in V_0$. Take $x\in U, y\in V$ and consider the points $\bar{x}= P_V(x)\in V_0, \bar{y} = P_U(y)\in U_0$. By Lemma \ref{lem:projection-1}, the points $\bar{x}, \bar{y}$ are within distance $\la+2\delta$ from $xy$. Therefore, $$ \max( d(x_0, xy), d(y_0, xy)) \le \la+2\delta+C $$ and, hence, we can take $D=\la+4\delta+C$. \end{proof} \begin{cor}\label{cobdd-cor1} Given $\delta\geq 0$ and $\la\geq 0$, there are constants $R=R_{\ref{cobdd-cor1}}(\delta, \la)$ and $D=D_{\ref{cobdd-cor1}}(\delta, \la)$ such that the following holds: Suppose $X$ is a $\delta$-hyperbolic metric space and $U,V\subset X$ are two $\la$-quasiconvex and $R$-separated subsets. Then there are points $x_0\in U$, $y_0\in V$ such that $x_0y_0\subset N_D(xy)$, for all $x\in U$ and $y\in V$. \end{cor} \begin{proof} By Lemma \ref{cobdd-cor}, there exists $R =R_{\ref{cobdd-cor}}$ such that the pair $(U,V)$ is $C=D_{\ref{cobdd-cor}}$-cobounded whenever $U,V$ are $R$-separated. Now, the claim follows from Lemma \ref{cobdd-lem1}. \end{proof} \chapter{Graphs of groups and trees of metric spaces}\label{ch:trees} \section{Generalities} \label{sec:generalities} We presume that the reader is familiar with the Bass--Serre theory. However, we briefly recall some of the concepts that we shall need. For details we refer the reader to Section 5.3 of Serre's book \cite{serre-trees}. \begin{defn}[Graph of groups] \index{graph of groups} A {\bf graph of groups} $(\mathcal G,\Gamma)$ consists of the following data: (1) A connected graph $\Gamma$. (2) An assignment to each vertex $v\in V(\Gamma)$ (and edge $e\in E(\Gamma)$) of a group $G_v$ (respectively $G_e$) together with injective homomorphisms $\phi_{e,o(e)}: G_e\rightarrow G_{o(e)}$ and $\phi_{e,t(e)}: G_e\rightarrow G_{t(e)}$ for all $e\in E(\Gamma)$, such that the following conditions hold: (i) $G_e=G_{\bar{e}}$, (ii) $\phi_{e,o(e)}=\phi_{\bar{e},t(\bar{e})}$ and $\phi_{e,t(e)}=\phi_{\bar{e},o(\bar{e})}$. 
\end{defn} We shall refer to the maps $\phi_{e,v}$ as the {\em canonical maps} of the graph of groups. We shall refer to the groups $G_v$ and $G_e$, $v\in V(\Gamma)$ and $e\in E(\Gamma)$ as {\em vertex groups} and {\em edge groups} respectively. For topological motivations of graph of groups and the following definition of the fundamental group of a graph of groups one is referred to \cite{scott-wall} or \cite{altop-hatcher}. In the terminology of \cite{bridson-haefliger}, a graph of groups is a covariant functor from the graph $\Gamma$ (regarded as a small category with set of objects $E\sqcup V$ and the set of morphisms consisting of the maps $o$ and $t$) to the category of groups, sending morphisms $o, t$ to group-monomorphisms. Functorially, in the case when $\Gamma$ is a tree, one can define the group $G$, the {\em fundamental group} of $(\mathcal G, \Gamma)$, or the {\em pushout} of the diagram ${\mathcal G}$, by a universal property. Namely, there exist monomorphisms $G_e\to G, G_v\to G$ such that the diagrams $$ \begin{diagram} G_e & \rTo & G_v\\ & \rdTo & \dTo \\ & \rdTo& G \end{diagram} $$ commute, and, whenever we have a group $H$ and a compatible collection of homomorphisms $G_e\to H, G_v\to H$ forming commutative diagrams $$ \begin{diagram} G_e & \rTo & G_v\\ & \rdTo & \dTo \\ & \rdTo& H \end{diagram} $$ there is a unique homomorphism $G\to H$ forming commutative diagrams $$ \begin{diagram} G_e & \rTo & G\\ & \rdTo & \dTo \\ & \rdTo& H \end{diagram} \quad \hbox{and}\quad \begin{diagram} G_v & \rTo & G\\ & \rdTo & \dTo \\ & \rdTo& H \end{diagram} $$ The general definition is more complicated: \begin{defn}[Fundamental group of a graph of groups]\index{fundamental group of a graph of groups} \label{defn:fundamental group of graph of groups} Suppose $(\mathcal G, \Gamma)$ is a graph of groups and let $S\subset \Gamma$ be a maximal (spanning) subtree. Then the fundamental group $G=\pi_1(\mathcal G,\Gamma, S)$ of $(\mathcal G,\Gamma)$ is defined in terms of generators and relators as follows: The generators of $G$ are the elements of the disjoint union of the generating sets of the vertex groups $G_v$, $v\in V(\Gamma)$ and the set $E(\Gamma)$ of {\em oriented} edges of $\Gamma$. The relators are of four types: (1) Those coming from the vertex groups; (2) $\bar{e}=e^{-1}$ for all edge $e$; (3) $e=1$ whenever $|e|$ is a unoriented edge of $S$; (4) $e\phi_{e,t(e)}(a)e^{-1}=\phi_{e,o(e)}(a)$ for all oriented edges $e$ and $a\in G_e$. \end{defn} The group $G$ does not depend on the choice of $S$ and it will be denoted $G=\pi_1(\mathcal G)$ in what follows. We will also frequently suppress the letter $\Gamma$ in the notation of a graph of groups. \begin{defn}[Bass--Serre tree of a graph of groups]\index{Bass--Serre tree} Suppose $(\mathcal G, \Gamma)$ is a graph of groups and let $S$ be a maximal tree in $\Gamma$ as in the above definition. Let $G=\pi_1(\mathcal G, \Gamma, S)$ be the fundamental group of this graph of groups. The {\em Bass--Serre tree}, denoted $T$, is the tree with the vertex set $$\coprod_{v\in V(\Gamma)} G/G_v$$ and the edge set $$ \coprod_{e\in E(\Gamma)} G/G^e_e$$ where $G^e_e=\phi_{e,t(e)}(G_e)<G_{t(e)}$. The origin/terminus maps are given by $$ t(gG^e_e)=g G_{t(e)},\,o(gG^e_e)=gG_{o(e)}.$$ Note that whenever $|e|$ is a unoriented edge of $S$, then we have $e=1$ in $G$. The group $G$ acts on $T$ via left multiplication. 
\end{defn} Conversely, given an action {\em without inversions}\footnote{which means that if $g\in G$ preserves an edge $[v, w]$ of $T$, then it also fixes both $v$ and $w$} of a group $G$ on a tree $T$, there exists a graph of groups ${\mathcal G}$ with $\pi_1({\mathcal G})\cong G$ such that $T$ is equivariantly isomorphic to the Bass--Serre tree of $\mathcal G$, see \cite{serre-trees}. Since our main motivation comes from geometric group theory and, hence, finitely generated groups, we observe that for $G=\pi_1(\mathcal G, \Gamma, S)$ to be finitely generated, it suffices (but is not necessary!) to assume that each vertex group $G_v$ is finitely generated and the graph $\Gamma$ is finite. On the other hand, the edge groups $G_e$ need not be finitely generated. Natural examples of the latter situation are given by amalgams $$ G=G_v \star_{G_e} G_w, $$ where $G_e$ is an infinite-rank free subgroup of two finitely-presented groups $G_v, G_w$: Such groups $G$ are finitely generated but not finitely presentable. In the context of combination theorems for hyperbolic groups, one assumes that the graph $\Gamma$ is finite, each vertex/edge group is hyperbolic and the monomorphisms $\phi$ are qi embeddings, i.e. have quasiconvex images. Returning to the general setting with finitely generated vertex groups and finite graph $\Gamma$, we note that while it is meaningless to assume that the canonical maps $\phi$ are uniformly proper (as edge-groups do not have canonical qi classes of metrics), nevertheless, if we equip $G_e$ with the pull-back of a word metric from $G_{o(e)}$, while $G_{t(e)}$ has a word metric coming from a finite generating set, then the monomorphism $G_e\to G_{t(e)}$ is uniformly proper. Since the graph $\Gamma$ is finite, we conclude that each edge group has a left-invariant proper metric, such that the homomorphisms $\phi_{e,o(e)}$ and $\phi_{e,t(e)}$ are $(\eta,L)$-proper for some uniform function $\eta$ and a constant $L$. \medskip A {\em morphism} of graphs of groups, $\Psi: {\mathcal G}\to {\mathcal G}'$, consists of a morphism of the underlying graphs $\psi: \Gamma\to \Gamma'$ together with a collection of group homomorphisms $$ \Psi_v: G_v\to G_{\psi(v)}, v\in V(\Gamma), \quad \Psi_e: G_e\to G_{\psi(e)}, e\in E(\Gamma) $$ such that the following diagrams are commutative for $v=o(e)$ and $w=t(e)$ and their respective images $v'=\psi(v), w'=\psi(w), e'=\psi(e)$: $$ \begin{diagram} G_e & \rTo & G_{e'}\\ \dTo_{\phi_{e,v}} & & \dTo_{\phi_{e',v'}} \\ G_v & \rTo & G_{v'}\\ \end{diagram}, \begin{diagram} G_e & \rTo & G_{e'}\\ \dTo_{\phi_{e,w}} & & \dTo_{\phi_{e',w'}} \\ G_w & \rTo & G_{w'}\\ \end{diagram} $$ Given a graph of groups $({\mathcal G}',\Gamma')$ and a graph-morphism $\psi: \Gamma\to \Gamma'$ from a connected graph $\Gamma$, there is a canonical {\em pull-back} graph of groups $({\mathcal G}, \Gamma)$ and a morphism of graphs of groups $\Psi: {\mathcal G}\to {\mathcal G}'$, such that the underlying morphism of graphs $\Gamma\to \Gamma'$ equals $\psi$. In the special case when $\Gamma$ is a connected subgraph of $\Gamma'$, the graph of groups $({\mathcal G}, \Gamma)$ is called the {\em restriction} of ${\mathcal G}'$ to $\Gamma$ (see \cite[2.15]{Bass-93}). In this case, the Bass--Serre tree $T$ of $({\mathcal G}, \Gamma)$ admits a $G=\pi_1({\mathcal G})$-equivariant embedding in the Bass--Serre tree $T'$ of $({\mathcal G}', \Gamma')$ and $G$ equals the stabilizer of $T\subset T'$ in $G'=\pi_1({\mathcal G}')$.
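For orientation, the following example spells out the two simplest instances of Definition \ref{defn:fundamental group of graph of groups} and of the Bass--Serre tree; it is standard and is included only as an illustration.

\begin{example}
1. Suppose that $\Gamma=S$ consists of a single edge with distinct endpoints $v, w$. Then the relators of types (2) and (3) kill the generators $e, \bar e$, and the relators of type (4) identify the two embedded copies of $G_e$, so that
$$
\pi_1(\mathcal G,\Gamma, S)\cong G_v\star_{G_e} G_w .
$$
Writing $G$ for this group, the Bass--Serre tree is the bipartite tree with vertex set $G/G_v\sqcup G/G_w$ and edges given by the cosets $G/G_e$, where $gG_e$ joins $gG_v$ to $gG_w$; for instance, for $\operatorname{SL}_2(\mathbb Z)\cong (\mathbb Z/4)\star_{\mathbb Z/2}(\mathbb Z/6)$ it is the biregular tree with valences $2$ and $3$.

2. Suppose that $\Gamma$ consists of a single vertex $v$ and a single loop $e$ (so that $S=\{v\}$). Then the generator $t=e$ survives and we obtain the HNN extension
$$
\pi_1(\mathcal G,\Gamma, S)\cong \langle G_v, t\mid t\,\phi_{e,t(e)}(a)\,t^{-1}=\phi_{e,o(e)}(a),\ a\in G_e\rangle .
$$
\end{example}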
We refer the reader to \cite{Bass-93} for further discussion of morphisms of graphs of groups. In the book, on several occasions we will use the following definition from the theory of group actions on trees: \begin{defn}\label{defn:acylindrical-action} \index{$k$-acylindrical group action} An action of a group $G$ on a tree $T$ is said to be {\em $k$-acylindrical} if whenever a nontrivial\footnote{In the literature, acylindricity is sometimes defined by requiring only that $G$-stabilizers of intervals of length $\ge k$ are finite, rather than trivial, subgroups.} element $g\in G$ fixes element-wise an interval $J\subset T$, then $J$ has length $\le k$. \end{defn} This terminology originates in Sela's paper \cite{Sela97}. The definition of acylindrical actions on trees was later coarsified and generalized by Bowditch in \cite{Bowditch-08}; we will not use his generalization. \section{Trees of spaces} \label{sec:trees-of-spaces} Each graph of groups yields a ``tree of metric spaces'' over its Bass--Serre tree; this was first formalized and used by Bestvina and Feighn in \cite{BF}. Below is our version of their definition. We start with the simpler concept of a {\em tree of topological spaces}. One can regard a (simplicial) tree $T$ (or a general graph) as a small category with object sets equal to $V(T)\sqcup E(T)$ and morphisms given by origin/terminus arrows. Then a tree of topological spaces over a tree $T$ is a functor ${\mathfrak X}$ from $T$ to the category of topological spaces. More explicitly: \begin{defn}\label{defn:top-tree}\index{tree of topological spaces} A tree of topological spaces over a tree $T$ is a collection ${\mathfrak X}$ of nonempty topological spaces (vertex and edge-spaces) $X_v, v\in V(T), X_e, e\in E(T)$, together a collection of continuous {\em incidence maps} $f_{ev}: X_e\to X_v$ defined for each oriented edge $e=[v,w]$. The {\em total space} $X$ of ${\mathfrak X}$ is the mapping cylinder of the collection of the maps $f_{ev}$, i.e. the quotient of the disjoint union $$ \coprod_{v\in V(T)} X_v \sqcup \coprod_{e\in E(T)} X_e\times [0,1] $$ by the equivalence relation $$ (x,0)\sim f_{ev}(x), (x,1)\sim f_{ew}(x), e= [v,w]\in E(T). $$ \end{defn} We will use trees of topological spaces in Section \ref{sec:CT-lamination}. For most of the book, we will work with trees of metric spaces defined below. \begin{comment} \begin{defn}\label{defn:retractive tree} A tree of spaces ${\mathfrak X}= (\pi: X\to T)$ is called $L'$-{\em retractive} if with the last (uniform properness) condition can be replaced by the following stronger property: For every oriented edge $e=[v,w]$, the restriction of the attaching map $f_e|_{ X_e \times\{v\}}$, admits an $L'$-coarse Lipschitz retraction $$ f_{ve}: X_{v}\to X_e. $$ The number $L'$ is the {\em retractivity constant} of ${\mathfrak X}$. \end{defn} In particular, for such a tree of spaces the attaching maps are $K$-quasiisometric embeddings to $(X_{v}, d_{X_{v}})$, where $$ K= \max(L, 2L'). $$ We will see below (Proposition \ref{unif-emb-subtree}) that uniform properness of the inclusion maps $X_v\to X$ is a consequence of the existence of coarse $L'$-retractions $X_{v}\to X_e$. Later, we will impose further {\em hyperbolicity} restrictions on trees of spaces. But first, we relate trees of metric spaces to a slightly different concept, which we call {\em an abstract tree of spaces}. 
In the second definition, instead of starting with a metric space equipped with a certain collection of maps, we start with a collection of metric spaces and {\em incidence maps} and from that produce a tree of spaces. \end{comment} Again, regarding a tree $T$ as a small category, to {\em some degree}, a tree of metric spaces ${\mathfrak X}$ over a tree $T$ is a functor from $T$ to the coarse category $\mathcal C$, see Remark \ref{rem:category}. The actual definition is somewhat more restrictive: \begin{defn}[Abstract tree of spaces] \label{defn:abstract-tree-of-spaces} An {\em abstract tree of (metric) spaces} ${\mathfrak X}$ over a simplicial tree $T$ is a collection of nonempty metric spaces (vertex and edge-spaces) $X_v, v\in V(T), X_e, e\in E(T)$, together with a collection of $\psi$-uniformly proper coarse $L$-Lipschitz {\em incidence maps} $f_{ev}: X_e\to X_v$ defined for each oriented edge $e=[v,w]$. The constant $L$ and the function $\psi$ are the {\em parameters} of the abstract tree of spaces ${\mathfrak X}$. The tree $T$ is the {\em base} of ${\mathfrak X}$. \end{defn} Throughout the book, we will be assuming that all vertex-spaces $X_v$ are path-metric spaces. \medskip In view of the approximation lemmata (Lemma \ref{lem:graph-approximation} and Lemma \ref{lem:simplicial-approximation}), one can replace general path-metric spaces $X_v$ and incidence maps $f_{ev}$ by (connected) metric graphs (equipped with graph-metrics) and simplicial incidence maps. Below we define the {\em total space} $X$ of a tree of spaces and a projection $\pi: X\to T$. Accordingly, we will frequently refer to trees of spaces as ${\mathfrak X}= (\pi: X\to T)$, since the map $\pi$ records the most important information about ${\mathfrak X}$. An important class of trees of spaces consists of {\em metric bundles}. We refer to \cite{pranab-mahan} for the general definition; for the purpose of this book the following will suffice: \begin{defn}\label{defn:bundle} An abstract tree of spaces ${\mathfrak X}= (\pi: X\to T)$ is a {\em metric bundle} if the incidence maps $f_{ev}$ are uniform quasiisometries, i.e. there exists $\eps\ge 0$ such that for each edge $e=[v,w]\in E(T)$, the image $f_{ev}(X_e)$ is $\eps$-dense in $X_v$ (and, hence, $f_{ew}(X_e)$ is $\eps$-dense in $X_w$, by reversing the orientation of $e$). \end{defn} While the main motivation for trees of spaces comes from graphs of groups, the main group-theoretic examples of metric bundles over trees are short exact sequences $$ 1\to K\to G \to H\to 1, $$ where $K$ is a finitely generated group and $H$ is a free group of finite rank. \begin{defn}\label{defn:total space} The {\em total space}, or the {\em push-out}, of a tree of spaces ${\mathfrak X}$ is a metric space $X$ admitting a collection of $L'$-coarse Lipschitz maps $X_e\to X$, $e\in E(T)$, $X_v\to X, v\in V(T)$, and satisfying the following universal property: For every metric space $Y$ and a compatible collection of $L_1$-coarse Lipschitz maps $X_e\to Y, X_v\to Y$, there exists a unique, up to a uniformly bounded error, $L_2$-coarse Lipschitz map $X\to Y$ forming diagrams which commute up to a uniform error $C$: $$ \begin{diagram} X_e & \rTo & X\\ & \rdTo & \dTo \\ & \rdTo& Y \end{diagram}\quad \hbox{and}\quad \begin{diagram} X_v & \rTo & X\\ & \rdTo & \dTo \\ & \rdTo& Y \end{diagram} $$ Here $L_2$ and $C$ depend on $L_1$. \end{defn} This definition implies uniqueness (up to a quasiisometry) of the total space $X$. We will prove the existence of $X$ below (Theorem \ref{thm:existence-of-trees}).
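The following simple example may help the reader keep these definitions in mind; it is only an illustration and we leave the (routine) verifications to the reader.

\begin{example}
Let $T$ be the Bass--Serre tree of the Baumslag--Solitar group $BS(1,2)=\langle a,t\mid tat^{-1}=a^2\rangle$, regarded as an HNN extension of $\mathbb Z=\langle a\rangle$ over $\mathbb Z$; thus $T$ is the regular tree of valence $3$. Take every vertex and edge-space to be $\mathbb Z$ (the Cayley graph of $\langle a\rangle$) and, for each oriented edge $e=[v,w]$, take the incidence maps to be induced by $a\mapsto a$ and $a\mapsto a^2$. This is an abstract tree of spaces with parameters $L=2$ and $\psi(t)=t$; since both incidence maps have $1$-dense images, it is a metric bundle. Its total space is quasiisometric to the Cayley graph of $BS(1,2)$ (cf. Example \ref{ex:BS-tree} below).
\end{example}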
\begin{defn}\label{defn:retractive tree} An abstract tree of spaces is said to be {\em retractible} (or {\em retractive}) if there exists a collection of (uniformly) $L$-coarse Lipschitz maps (retractions) $f_{ve}: X_v\to X_e$ defined for oriented edges $e=[v,w]$, which are uniformly coarse left-inverses to the incidence maps $f_{ev}$, i.e. $$ \operatorname{dist}(f_{ve}\circ f_{ev}, \operatorname{id}_{X_e})\le \eps, $$ for some uniform constants $L\ge 1, \eps\in [0,\infty)$. \end{defn} Under the retractibility assumption, the incidence maps are not only uniformly proper but are also uniformly quasiisometric embeddings. While the definition is general, in this book, vertex and edge-spaces {\em mostly} will be uniformly hyperbolic, images of edge-spaces in vertex spaces will be uniformly quasiconvex and the retractions $f_{ve}$ will be given by nearest-point projections $P_{X_v, X_{ev}}: X_v\to X_{ev}$. \medskip {\bf Morphisms.} Let ${\mathfrak X}, {\mathfrak X}'$ be abstract trees of spaces over trees $T, T'$ respectively with the respective vertex/edge spaces $X_v, X'_{v'}, X_e, X'_{e'}$. A {\em morphism} of abstract trees of spaces ${\mathfrak X}\to {\mathfrak X}'$ is a graph-morphism $T\to T'$, $v\mapsto v', e\mapsto e'$, together with a collection of \index{morphism of trees of spaces} uniformly coarse Lipschitz maps between the respective vertex and edge-spaces $$ h_v: X_v\to X'_{v'}, \quad h_e: X_e\to X'_{e'} $$ such that the diagrams (where the horizontal arrows are the incidence maps) $$ \begin{diagram} X_e & \rTo & X_v \\ \dTo_{h_e} & & \dTo_{h_v} \\ X'_{e'} & \rTo & X'_{v'} \end{diagram} $$ commute up to uniformly bounded errors. An {\em isomorphism} of abstract trees of spaces is an invertible morphism; equivalently, it is an isomorphism of trees $T\to T'$ together with a collection of uniform quasiisometries $X_v\to X'_{v'}, X_e\to X'_{e'}$. \begin{rem} In this book we will be only considering {\em monic} morphisms of trees of spaces, i.e. ones for which the graph-morphism $T\to T'$ is injective and the maps $X_v\to X'_{v'}$, $X_e\to X'_{e'}$ are $\zeta$-proper for some uniform function $\zeta$. \end{rem} \begin{example} The most common examples of morphisms of trees of spaces used in this book are {\em subtrees of spaces}. Namely, let $S\subset T$ be a subtree and let ${\mathfrak X}$ be a tree of spaces over $T$. Then the {\em pull-back} of ${\mathfrak X}$ over $S$ is a tree of spaces ${\mathfrak Y}$ such that $Y_v=X_v, Y_e=X_e$, $v\in V(S), e\in E(S)$. The collection of identity maps $Y_v\to X_v, Y_e\to X_e$ defines a morphism of trees of spaces ${\mathfrak Y}\to {\mathfrak X}$. We will use the notation $X_S$ for the total space of the tree of spaces ${\mathfrak Y}$. In the case when $S$ is an interval (resp. tripod) in $T$, we will refer to $X_S$ as an {\em interval-space} (resp. {\em tripod-space}). \end{example} While the above definition is the main definition used in this book, we now connect the notion of an abstract tree of spaces to the notion of a tree of spaces as defined by Mitra in \cite{mitra-trees}. According to Mitra's definition, a tree of metric spaces is a path-metric space equipped with certain auxiliary data, such as a map to a simplicial tree and a collection of maps to $X$ from certain spaces. Below, we use the $\ell_1$-metric on $X_e\times [v, w]$. Recall that for edges $e$ of $T$, $\dot{e}$ denotes the edge minus its end-points; below we will use the notation $m(e)$ for the midpoint of $e$.
\begin{defn}[Tree of spaces] \label{defn:tree-of-spaces} A {\em tree of metric spaces}, denoted ${\mathfrak X}$, is a path-metric space $(X, d)$ equipped with a $1$-Lipschitz surjective map $\pi : X \rightarrow T$ onto a simplicial tree $T$, satisfying the following: \begin{enumerate} \item For each $v\in V(T)$, the corresponding {\em vertex-space} $X_v := \pi^{-1} (v)\subset X$ is rectifiably connected. \item For every edge $e\in E(T)$, the {\em edge-space} $X_e:= \pi^{-1}(m(e))$ is rectifiably connected. Every oriented edge $e=[v,w]$ comes equipped with an $L$-Lipschitz\footnote{The Lipschitz condition is absent in \cite{mitra-trees}, but it holds in all natural examples. On the other hand, Mitra assumes that each restriction $f_{e} |_{ X_e \times\dot{e}}$ is an isometry onto $\pi^{-1}(\dot{e})$, equipped with its path-metric induced from $X$. We find this assumption unnecessarily restrictive.} $\eta$-proper map $$ f_{e}: X_e\times [v,w]\to X, $$ such that $f_{ev}(X_e\times \{v\})\subset X_{v}$. \end{enumerate} \end{defn} By abusing the notation, we will denote a tree of spaces by $\pi: X\to T$. We will use the notation $f_{ev}$ for the composition $$ X_e\to X_e \times \{v\} \stackrel{f_{e}}{\longrightarrow} X_v $$ and $X_{ev}:= f_{ev}(X_e)$. \begin{rem} Mitra also assumes that inclusion maps $X_v\to X$ to be $\zeta$-proper for some function $\zeta$. We will see below that this is a consequence of uniform properness of the maps $X_e\to X_v$. \end{rem} \begin{comment} \begin{defn} We refer to the constants and functions appearing in this definition as {\em parameters} of the tree of spaces $\pi: X\to T$. The space $X$ is the {\em total space} of the tree of spaces $\pi: X\to T$. \end{defn} \begin{defn} A {\em morphism} of trees of spaces $\pi: X\to T, \pi': X'\to T'$ consists of a pair of maps $\iota: T\to T'$, $h: X\to X'$, where $\iota$ is a simplicial embedding and $h$ is a coarse Lipschitz map $h: X\to X'$ such that $\pi'\circ h= \iota\circ \pi$. Since $\iota$ is a simplicial embedding, we will frequently identity $T$ with its image $\iota(T)$. Two morphisms $(h,\iota), (h',\iota')$ are coarsely inverse to each other if $\iota'=\iota^{-1}$ and $h'$ is coarsely inverse to $h$. Similarly, one defines coarse right/left inverse morphisms. Two trees of spaces $\pi: X\to T, \pi': X'\to T$ are (coarsely) isomorphic if there exist their morphisms which are coarse inverse to each other. In particular, such morphisms are quasiisometries between $X$ and $X'$. \end{defn} {\scriptsize Note also that for every subtree $S\subset T$ and a tree of spaces ${\mathfrak X}= (\pi: X\to T)$, the preimage $X_S:= \pi^{-1}(S)$ carries a natural structure of a tree of spaces ${\mathcal S}= (X_S\to S)$ for which the inclusion maps $S\to T, X_S\to X$ define a morphism of trees of spaces. } \end{comment} \medskip Observe that each tree of spaces ${\mathfrak X}$ yields naturally an abstract tree of spaces ${\mathfrak X}^{ab}$, the {\em abstraction} of ${\mathfrak X}$, with the incidence maps $f_{ev}$. The next theorem is a converse to this {\em abstraction} procedure. \begin{theorem} [An existence theorem for trees of spaces] \label{thm:existence-of-trees} For each abstract tree of spaces ${\mathfrak X}$ over a tree $T$, there exists a (unique up to an isomorphism) tree of spaces $(\pi: X\to T)$, called a {\em concretization} of ${{\mathfrak Y}}$, such that ${{\mathfrak X}}$ is isomorphic to the abstraction ${\mathfrak X}^{ab}$ of $(\pi: X\to T)$. 
The total space $X$ of ${\mathfrak X}$ satisfies the universal property in the Definition \ref{defn:total space}. \end{theorem} \proof Our proof mimics the definition of the underlying topological space (equipped with the weak topology) of a cell complex, where the latter is defined via an inductively defined collection of attaching maps. We let $X$ denote the topological space obtained by attaching the products $X_e\times [0,1]$ to the disjoint union $$ {\mathcal X}=\coprod_{v\in V(T)} X_v $$ via the attaching maps $f_{ev}: X_e\times \{v\}\to X_{v}, f_{ew}: X_e\times \{w\}\to X_w$, $e=[v,w]\in E(T)$. In other words, $X$ is the mapping cylinder $$ Cyl(f: {\mathcal X}_E \to \XX_V)$$ of the map $$ f: {\mathcal X}_E:= \coprod_{e\in E(T)} X_e \to {\mathcal X}_V:= \coprod_{v\in V(T)} X_v, $$ given by the collection of incidence maps $f_{ev}$. For each edge $e$ of $T$ we will identify $X_e\times \dot{e}$ with its image in $X$. We define {\em admissible} paths in $X$ (see Section \ref{sec:length structures}) to be the continuous maps $c: [a, b]\to X$ which are concatenations of {\em vertical} paths, which are rectifiable (with respect to the metrics on vertex-spaces) paths contained in the vertex-spaces of $X$ and {\em horizontal} paths, which are rectifiable paths contained in the intervals of the form $x\times [0,1]$, $x\in X_e, e\in E(T)$. For every admissible path $c$, we let $length(c)$ be the sum of measures of lengths of its vertical and horizontal components. We leave it to the reader to verify that this defines a {\em length-structure} on $X$ and, hence, a path-metric $d$. {\em We retopologize $X$ using this path-metric}. By the construction, each inclusion map $X_e\times \dot{e}\to X$ is an isometry to its image, each vertex space is rectifiably--connected in $X$, each inclusion map $X_v\to X$ is $1$-Lipschitz and the projection map $$ \pi: X\to T $$ is $1$-Lipschitz as well. The verification that the space $X$ satisfies the universal property is rather straightforward. Given a collection of compatible coarse $L$-Lipschitz maps $h_v:X_v\to Y, h_e: X_e\to Y$ to a metric space $Y$, we define a map $h: X\to Y$ by sending each open interval $\{x\}\times (0,1)\subset X_e\times (0,1)$ to the point $h_e(x)$. The uniqueness of $h$ (up to a bounded error) follows from the fact that the union $$ \coprod_{v\in V(T)} X_v \sqcup \coprod_{e\in E(T)} X_e $$ forms a $1/2$-net in $X$. We will leave it to the reader to check that ${\mathfrak X}$ is isomorphic to the abstraction of $(\pi: X\to T)$. \qed \medskip \begin{rem} 1. A definition similar to our abstract tree of spaces and a construction analogous to the one in the proof of Theorem \ref{thm:existence-of-trees} appear in the work of Cashen and Martin \cite[2.4]{Cashen-Martin}. However, they work in the category of proper metric spaces and the metric spaces they produce do not satisfy all the properties in Definition \ref{defn:tree-of-spaces} and, hence, we cannot directly use their work. 2. Throughout the book, we will work with geometric realizations of trees of spaces constructed in the proof of Theorem \ref{thm:existence-of-trees}. In particular, the inclusion maps $$ X_v\to X $$ are $1$-Lipschitz. For every edge $e=[u,v]\in E(T)$ we will be frequently using the path-metric spaces $$ X_{uv}=X_{\llbracket u, v\rrbracket} =\pi^{-1}(\llbracket u, v\rrbracket). $$ The inclusion maps $X_u\to X_{uv}\leftarrow X_v$ are also $1$-Lipschitz. 3. 
One drawback of our construction is that even if vertex and edge-spaces are complete and geodesic, the tree of spaces we construct in the proof of Theorem \ref{thm:existence-of-trees} is a only a path-metric space and, a priori, is not a geodesic metric space and need not be complete. There are two ways to rectify this issue which we describe below. \end{rem} a. In the book, when we say ``a geodesic" we really mean a path which is $\eps$-short for a suitably chosen sufficiently small $\eps>0$. Similarly, when dealing with nearest-point projections, we frequently project to non-closed subsets. Then a nearest-point projection of $x\in X$ to $Y\subset X$ is a point $\bar{x}\in Y$ such that for a suitable chosen, sufficiently small $\eps>0$, $$ d(x, \bar{x}) \le d(x, Y) +\eps. $$ b. For a reader uncomfortable with such a fudge, we describe an alternative approach to rectifying the issue with geodesics and nearest-point projections. First of all, as we noted earlier (Lemmata \ref{lem:graph-approximation}, \ref{lem:simplicial-approximation}) without loss of generality, we may assume that all vertex spaces and edge-spaces $X_v, X_e$ are connected graphs equipped with standard graph-metrics. We will replace each $X_e$ with its vertex-space. Then the space $X$ defined in the proof of Theorem \ref{thm:existence-of-trees} is a connected graph and the path-metric on this graph defined in the proof is the standard graph-metric. The drawback of this approach is the need to keep track of combinatorial issues which, are, ultimately, irrelevant. \medskip From now on, we will work with abstract trees of spaces ${\mathfrak X}$ and their concretizations $\pi: X\to T$. The metric space $X$ is the {\em total space} of ${\mathfrak X}$. There is nothing particularly canonical about our choice of $X$ in this construction, it is just something we find convenient to work with. The reader could alternatively work for instance with, say, the $\ell_1$-metric coming from the products $X_e\times [v,w]$ in the mapping cylinder $X$. In fact, most of our arguments deal with vertex-spaces and pull-backs $X_{vw}$: We will be using the fact that the natural inclusion maps $X_v\to X_{vw}\leftarrow X_w$ are $1$-Lipschitz and either uniformly proper or, for trees of hyperbolic spaces, uniform qi embeddings. \begin{example}\label{ex:BS-tree} One motivation for our construction of $X$ comes from Cayley graphs of fundamental groups $G$ of graphs of groups. We assume that $({\mathcal G}, Y)$ is a finite graph of finitely generated groups, $S\subset Y$ is a spanning tree, and $G=\pi_1({\mathcal G}, Y, S)$ is the fundamental group. We will identify $S$ with a subtree in the Bass--Serre tree $T$ of $({\mathcal G}, Y)$. Then form a graph $\Gamma$ using the generators of $G$ as described in Definition \ref{defn:fundamental group of graph of groups}, except: (a) We fix an orientation of the edges of $Y$ and use only one generator per each edge (not two). (b) We use the given generating sets of the vertex-groups $G_v$ instead of the entire $G_v$. Thus, in the graph $\Gamma$ there are vertical edges (corresponding to translates $\Gamma_v, v\in V(T)$, of Cayley graphs of vertex groups) and horizontal edges (corresponding to the generators coming from the edges of $Y$). The vertex-spaces $X_v$ are, then the graphs $\Gamma_v$. The edge-spaces are the translates of the edge-groups, $gG_e$, $g\in G$, $e\in E(Y)$. 
The incidence maps $f_{ev}, f_{ew}$ for the oriented edges $e=[v,w]$ in $S$ come from the monomorphisms $\phi_{e,o(e)}$ and $\phi_{e,t(e)}$. For general edges $e$ of $T$ (which are translates of the edges $e'\in S$), the incidence maps are obtained by composing with the action of $G$ by left multiplication. Thus, we obtain a tree of spaces ${\mathfrak X}$ over $T$ with vertex spaces isometric to Cayley graphs of the vertex-groups $G_v$, $v\in V(Y)$, and edge-spaces isometric to edge-groups $G_e, e\in E(Y)$, with metrics obtained via pull-backs of word-metrics on the incident vertex-groups $G_v$, $v=t(e)$. Note that in the Cayley graph of $G$ there are no edges corresponding to generators of the edge-groups. This is consistent with our use of only horizontal paths over the edges of $T$ in the construction of the total space $X$ in the proof of Theorem \ref{thm:existence-of-trees}. We leave it to the reader to check that the Cayley graph $\Gamma$ as above is $G$-equivariantly isometric to the total space $X$ of the tree of spaces ${\mathfrak X}$ defined in the proof of Theorem \ref{thm:existence-of-trees}. \end{example} \begin{prop}\label{unif-emb-subtree} There exists a continuous function $\eta_{\ref{unif-emb-subtree}}$ depending on the parameters of an abstract tree of spaces ${\mathfrak X}$, such that for every subtree $S\subset T$, the inclusion map $$ X_S\to X $$ is an $\eta_{\ref{unif-emb-subtree}}$-uniformly proper embedding. \end{prop} \proof The key case to understand is when $T$ has a single edge $e=[u,v]$ and $S=\{u\}$. We let $Y$ denote the total space of the corresponding tree of spaces. It suffices to estimate (from below, in terms of $d_{X_u}(x,x')$) the lengths of paths $c$ in $Y$ connecting $x=x_1, x'=x_n\in X_u$, such that $c$ is a concatenation of the form $$ c(x_1, y_1) \star c(y_1, z_1) \star c(z_1,z_2) \star c(z_2, y_2) \star c(y_2, x_2) \star c(x_2, x_3) \star ... \star c(y_n, x_n), $$ where $x_i=f_{eu}(y_i), z_i= f_{ev}(y_i)$ and the paths $c(x_i,y_i), c(y_i,z_i)$ are horizontal, while the paths $c(x_j, x_{j+1}), c(z_k, z_{k+1})$ are vertical geodesics in the vertex-spaces $X_u, X_v$. The length of this path is $$ \operatorname{length}(c)= \sum_{i=\hbox{even}} d_{X_u}(x_i, x_{i+1}) + n + \sum_{j=\hbox{odd}} d_{X_v}(z_j, z_{j+1}). $$ Assume that $\operatorname{length}(c)\le D$. Then $n\le D$ and $d_{X_v}(z_j, z_{j+1})\le D$ for each odd index $j$. We have (for $j$ odd): $$ L^{-1} d_{X_u}(x_j, x_{j+1}) \le d_{X_e}(y_j, y_{j+1})\le \psi(d_{X_v}(z_j, z_{j+1})) $$ and, hence, $$ d_{X_u}(x_j, x_{j+1}) \le L \psi( d_{X_v}(z_j, z_{j+1})) \le L \psi(D). $$ Thus, the concatenation $c_u$ of vertical geodesics $[x_i x_{i+1}]_{X_u}$ connecting $x$ to $x'$ has total length $\operatorname{length}(c_u)$ satisfying \begin{align*} d_{X_u}(x,x')\le \operatorname{length}(c_u)= \sum_{j=\hbox{odd}} d_{X_u}(x_j, x_{j+1}) + \sum_{i=\hbox{even}} d_{X_u}(x_i, x_{i+1}) \le \\ L \sum_{j=\hbox{odd}} \psi(D) + \sum_{i=\hbox{even}} d_{X_u}(x_i, x_{i+1}) \le LD \psi(D) + D. \end{align*} It follows that $d_{X_u}(x,x')\le LD \psi(D) + D$ and, hence, the inclusion map $X_u\to X_{uv}$ is $\eta$-proper, for $\eta(D):= D(L\psi(D) + 1)$.
\begin{rem}\label{rem:linear} Assuming that the map $f_{ev}: X_e\to X_v$ is an $L$-qi embedding (which will be eventually our assumption for trees of hyperbolic spaces), we obtain a better estimate: \begin{align*} d_{X_u}(x,x')\le \operatorname{length}(c_u)= \sum_{j=\hbox{odd}} d_{X_u}(x_j, x_{j+1}) + \sum_{i=\hbox{even}} d_{X_u}(x_i, x_{i+1}) \le \\ \sum_{i=\hbox{even}} d_{X_u}(x_i, x_{i+1}) + L^2 \sum_{j=\hbox{odd}} d_{X_v}(z_j, z_{j+1}) + L^3 n \le L^3 \operatorname{length}(c). \end{align*} Thus, we conclude that each inclusion map $X_u\to X_{uv}$ in this case is an $(L^3, 0)$-qi embedding. \end{rem} We now deal with the general case. Consider an admissible path $\beta: [0,1]\to X$ connecting $x, y\in X_S$. The projection $\pi\circ \beta$ is a path $p$ in $T$ connecting $\pi(x)$ to $\pi(y)$ whose length is $\le \operatorname{length}(\beta)$. Without loss of generality, we may assume that $\pi(x), \pi(y)$ are vertices in $S$ and $p$ is a simplicial path in $T$. We now construct inductively a sequence of paths $$ \beta_0= \beta, \beta_1,...,\beta_n $$ in $X$ with simplicial projections to $T$, all connecting $x$ to $y$, such that: (1) $\beta_n$ is a path in $X_S$. (2) The length of $\pi\circ \beta_{i+1}$ is at most $\operatorname{length}(\pi\circ \beta_{i}) -2$. (3) $$ \operatorname{length}(\beta_{i+1}) \le \eta(\operatorname{length}(\beta_i)) $$ where $\eta(D):= D(L\psi(D) + 1)$ as above. Assume that $\beta_{i}$ is defined. If this path is contained in $X_S$, then $n=i$ we are done. Otherwise, there exists an edge $e=[v,w]$ in the tree $\pi \beta_i([0,1])$ such that $\beta$ contains a subpath $\beta'$ connecting points $x', y'\in X_v$ and contained in the subspace $X_{vw}$. We then replace $\beta'$ with a geodesic in $X_v$ connecting the end-points $x', y'$ of $\beta'$. By the above estimate in $X_{vw}$, $$ d_{X_v}(x',y')\le \eta(\operatorname{length}(\beta')) $$ and, hence, the new path $\beta_{i+1}$ satisfies the required conditions. Clearly, $n\le \operatorname{length}(\beta)$, hence, $$ \operatorname{length}(\beta_n)\le \eta^{(n)}(\operatorname{length}(\beta)), $$ where $\eta^{(n)}$ is the $n$-fold iteration of the function $\eta$. Therefore, for $\eta_{\ref{unif-emb-subtree}}= \eta^{(n)}$, $$ n=\lceil d_X(x,y) \rceil$$ we obtain $$ d_{X_S}(x,y)\le \eta_{\ref{unif-emb-subtree}} ( d_X(x,y)). \qed $$ Applying the arguments of the proof of the proposition with the linear estimate in Remark \ref{rem:linear} we obtain: \begin{cor}\label{cor:exp-dist} If each incidence map $f_{ev}$ is an $L$-qi embedding, then each $X_S$ is at most exponentially distorted in $X$, i.e. is $\eta$-uniformly properly embedded in $X$ with $\eta(t)= \exp(at)$ for some $a\ge 1$ depending only on $L$. \end{cor} We omit the proof of this corollary since it is straightforward and the result is not used elsewhere. \begin{defn}\label{defn:lift} \index{$K$-qi section} Let ${\mathfrak X}=(\pi: X\to T)$ be a tree of spaces. \begin{enumerate} \item By a $K$-{\em qi section} (or a $K$-{\em qi lift} of $S$) over a subtree $S\subset T$ we mean a map $\sigma: S\to X$ such that for each vertex $v\in S$, $\sigma(v)\in X_v$, for any pair of adjacent vertices $u, v\in S$, we have $d_{X_{uv}}(\sigma(u), \sigma(v))\leq K$ and the restriction of $\sigma$ to the interval $uv$ is a parameterization of a geodesic $\sigma(u) \sigma(v)$ in $X_{uv}$. 
\item $K$-qi lifts of geodesic segments of $T$ will be referred to as {\em $K$-qi leaves} in $X$ and denoted by $\gamma$ or $\gamma_x$ or $\gamma_{xy}$, provided they start at $x$ and end at $y$. We will refer to such $\gamma$'s as {\em horizontal} paths in $X$. \item A {\em vertical path} in $X$ is a path contained in one of the vertex-spaces. \item If $Y$ is a subset of $X$ then the {\em fiberwise neighborhood} of $Y$ in $X$ (denoted $N_r^{fib}(Y)$) is the union $$ \bigcup_{v\in V(T)} N_r(Y\cap X_v), $$ where the latter neighborhood is taken with respect to the (intrinsic) metric of $X_v$. \end{enumerate} \end{defn} \begin{comment} \begin{rem}\label{rem:hat} Every $K$-qi section $\sigma$ defined on the vertex-set of a subtree $J=\llbracket v,w\rrbracket \subset T$ extends to a continuous map $\hat\sigma: J\to X$ whose image is uniformly close to the image of the original map: For each pair $v_i, v_{i+1}$ of adjacent vertices in $S$, we connect $\gamma(v_i), \gamma(v_{i+1})$ by a geodesic segment in $X_{J_i}, J_i= \llbracket v_i, v_{i+1}\rrbracket$. Since $\sigma$ and $\hat\sigma$ are uniformly close to each other, we will conflate the two and regard $K$-qi sections as maps $T\to X$. We will refer to $K$-qi sections over intervals in $T$ as {\em horizontal paths} in $X$. We will frequently concatenate vertical and horizontal paths: This will be used when constructing slim combings of various subsets $Y\subset X$ when proving hyperbolicity of $Y$ (and describing its geodesics up to bounded error). \end{rem} \begin{lemma}\label{lem:fiber-expansion} Suppose that the incidence maps of edge-spaces are $L$-quasiisometric embeddings to the incident vertex spaces. Then for $$ \bar{L}=L_{\ref{lem:fiber-expansion}} (L)= \max(L^3 , L^2 + L^{-2}) $$ the following holds: Suppose that $e=[v,w]$ is an edge of $T$, $x, y\in X_e$, $x_v, y_v\in X_v, x_w, y_w\in X_w$ are points within distance $\le R$ from $f_{ev}(x), f_{ev}(y), f_{ew}(x), f_{ew}(y)$ respectively. Then $$ \bar{L}^{-1} d_{X_v}(x_v, y_v) - \bar{L} (R+1) \le d_{X_w}(x_w, y_w)\le \bar{L} d_{X_v}(x_v, y_v) + \bar{L} (R+1). $$ In particular, for $M=2\bar{L}$, if $d_{X_v}(x_v, y_v)\ge 2\bar{L}^2(R+1)$, then $$ M^{-1} d_{X_v}(x_v, y_v) \le d_{X_w}(x_w, y_w)\le M d_{X_v}(x_v, y_v). $$ \end{lemma} \proof Since $f_{ev}, f_{ew}$ are $L$-quasiisometric embeddings, we have: $$ L^{-1} d_{X_e}(x,y) - L - R \le d_{X_v}(x_v, y_v) \le L (d_{X_e}(x,y) +1) +R, $$ $$ L^{-1} d_{X_e}(x,y) - L - R \le d_{X_w}(x_w, y_w) \le L (d_{X_e}(x,y) +1) +R. $$ By combining these inequalities, we obtain: $$ d_{X_w}(x_w, y_w) \le L^2 d_{X_v}(x_v, y_v) + L^3 + R L^2 + R\le \bar{L} d_{X_v}(x_v, y_v) + \bar{L} (R+1). $$ After swapping the roles of $v$ and $w$, the desired inequalities follow. \qed \end{comment} \medskip Let ${\mathfrak X}$ be an abstract tree of spaces. A {\em subtree of spaces} in ${\mathfrak X}=(\pi: X\to T)$ is a tree of spaces ${\mathfrak X}'=(\pi': X'\to T')$ whose base tree is a subtree $T'\subset T$, and vertex/edge spaces $X'_v, X'_e$ are rectifiably connected uniformly properly embedded subsets of $X_v, X_e$ respectively, so that the incidence maps of ${\mathfrak X}'$ are uniformly close to restrictions of incidence maps of ${\mathfrak X}$. \section{Coarse retractions} In this section we prove a general existence theorem of coarse Lipschitz left-inverses ({\em retractions}) for morphisms of trees of spaces. Let $T'$ be a subtree of $T$ and let ${\mathfrak X}=( \pi: X\to T)$, ${\mathfrak X}'=(\pi': X'\to T')$ be trees of spaces. 
We say that a morphism $h: {\mathfrak X}'\to {\mathfrak X}$ of these trees of spaces is a {\em relative $K$-qi embedding} if for each $v\in V(T'), e\in E(T')$, the maps $h_v: X'_v \to X_v, h_e: X'_e\to X_e$ are $K$-qi embeddings. Similarly, one can define a {\em relatively retractible} (or {\em relatively retractive}) morphism of trees of spaces (a morphism which admits a relative $L$-coarse Lipschitz retraction) as a morphism $h$ such that for each $v\in V(T'), e\in E(T')$ the maps $h_v: X'_v \to X_v, h_e: X'_e\to X_e$ admit $L$-coarse Lipschitz left-inverses $h'_v: X_v\to X'_v$, $h'_e: X_e\to X'_e$. If ${\mathfrak X}, {\mathfrak X}'$ are trees of $\delta$-hyperbolic spaces then the two notions are equivalent and, moreover, the subspaces $h_v(X'_v)\subset X_v, h_e(X'_e)\subset X_e$ are $\la$-quasiconvex for $\la=\la(L,\delta)$. Our goal is to prove that, under some conditions, a relatively retractive morphism is {\em absolutely retractive}, i.e. admits a coarse left-inverse $h': X\to X'$. (Recall that the morphism $h'$ is a collection of maps $h'_v: X_v\to X'_v, h'_e: X_e\to X'_e$ satisfying certain compatibility properties.) This result is motivated by Mitra's construction of a coarse retraction in \cite[Theorem 3.8]{mitra-trees}. For relatively retractible morphisms of trees, by abusing the notation, we will identify the vertex/edge spaces $X'_v, X'_e$ of ${\mathfrak X}'$ with their images $h_v(X'_v)\subset X_v$ and $h_e(X'_e)\subset X_e$ respectively. \medskip The following theorem and its proof closely follow Mitra's argument in \cite[Theorem 3.8]{mitra-trees}. \begin{theorem}[Existence of a retraction]\label{thm:left-inverse} Suppose that for some constants $C, D$, a relatively retractive morphism of trees of spaces $h: {\mathfrak X}'\to {\mathfrak X}$ satisfies the following conditions: (i) For every boundary edge $e$ of $T'$, $e=[v,w], v\in V(T'), w\in V(T)-V(T')$, $$ \operatorname{diam}_{X'_v} (h'_v \circ f_{ev}(X_e))\le D. $$ (ii) For every edge $e=[v,w]\in E(T')$, $$ \operatorname{dist}_{X'_v}( h'_v \circ f_{ev}, f'_{ev}\circ h'_e)\le C. $$ Then the map $h: X'\to X$ admits a coarse $L_{\ref{thm:left-inverse}}$-Lipschitz retraction $h': X\to X'$ whose restriction to $X_v$ equals $h'_v$ for each $v\in V(T')$. Here $L_{\ref{thm:left-inverse}}$ depends only on $C, D$, the coarse Lipschitz constants of the maps $h'_v, h'_e$, and the parameters of the trees of spaces ${\mathfrak X}, {\mathfrak X}'$. \end{theorem} \proof We let $K$ denote the maximum of the Lipschitz constants of the projections $\pi: X\to T, \pi': X'\to T'$. For each $v\in V(T')$ and $x\in X_v$ we let $h'(x):= h'_v(x)$. Let $p: T\to T'$ denote the nearest-point projection. Suppose $x\in X_w$, $w\in V(T)\setminus V(T')$; then $v=p(w)\in T'$ is the vertex nearest to $w$. Let $e\in E(T)$ be the edge incident to $v$ and contained in the geodesic $wv$. Thus, $e$ is a boundary edge of the subtree $T'\subset T$. By the assumption (i), the projection $h'_v(X_{ev})\subset X'_v$ has diameter $\le D$. We let $h'(x)$ be any point $x'$ of this projection (we will use the same point $x'$ for all vertices $w$ in each component of $T-T'$). In order to verify that $h'$ is (uniformly) coarse Lipschitz it suffices to find a uniform upper bound on the distances $d(h'(x), h'(y))$ for points $$ x, y\in {\mathcal X}= \coprod_{v\in V(T)} X_v $$ which are within distance $K$ from each other. If $x, y$ belong to the same vertex space $X_v$, then $d(h'(x), h'(y))\le L(K+1)$, where $L$ is an upper bound for the coarse Lipschitz constants of the maps $h'_v: X_v\to X'_v$.
Suppose that $x, y$ belong to $X_v, X_w$ respectively, where $v, w\in V(T')$ are vertices spanning an edge $e\in E(T')$. Then, necessarily, $$ x\in X_{ev}, y\in X_{ew}, x= f_{ev}(z), y=f_{ew}(z) $$ for some $z\in X_e$. The condition (ii) then implies the estimates $$ d(h'_v(x), f'_{ev}\circ h'_e(z))\le C, \quad d(h'_w(y), f'_{ew}\circ h'_e(z))\le C, $$ hence $d(h'(x), h'(y))\le 2C$. If $v, w\in V(T)- V(T')$ then the inequality $d_T(v,w)\le 1$ implies that $p(v)=p(w)=u\in V(T')$ and there is a common boundary edge $e$ of $T'$ contained in the geodesics $uv, uw\subset T$. In particular, both $h'(x), h'(y)$ belong to the subset $$ h'_u \circ f_{eu}(X_e)\subset X'_u $$ and, hence, $d(h'(x), h'(y))\le D$ by the condition (i). Lastly, consider the case when $x\in X_v, y\in X_w$, where $v\in V(T'), w\in V(T) - V(T')$ and $v, w$ span a boundary edge $e$ of $T'$. Since $d(x,y)\le 1$, it follows that $x\in X_{ev}, y\in X_{ew}$. Then $p(w)=v$ and, by the definition of $h'$, $h'(x), h'(y)\in h_v'(X'_{ev})$ and, therefore, $$ d_{X'_v}(h'(x), h'(y))\le D. \qed $$ An easy corollary of Theorem \ref{thm:left-inverse} is: \begin{cor}\label{cor:r'} Suppose that ${\mathfrak X}=(\pi: X\to T)$ is a retractive tree of spaces. For every edge $e=[u,v]\in E(T)$ there exists an $r=r_{\ref{cor:r'}}$-coarse retraction $X_{uv}\to X_u$, where $r$ depends only on the parameters of ${\mathfrak X}$ and its {\em retractivity constant}. \end{cor} \proof We have a retractive tree of spaces ${\mathfrak Y}= (\pi: X_{uv}\to \llbracket u,v\rrbracket )$. In ${\mathfrak Y}$ we have a subtree of spaces ${\mathfrak Y}'= (\pi': Y'\to \llbracket u,v\rrbracket )$, whose vertex spaces are $Y'_u=Y_u=X_u, Y'_e= Y_e=X_e, Y'_v= X_{e}, f'_{ev}=\operatorname{id}, f'_{eu}=f_{eu}$, and the morphism $h: {\mathfrak Y}'\to {\mathfrak Y}$ is defined by using $\operatorname{id}: Y'_u\to Y_u$ and $f_{ev}: Y'_v\to Y_v$. Since ${\mathfrak X}$ is retractive, the morphism $h$ is relatively retractive. Hence, by Theorem \ref{thm:left-inverse}, the identity map $X_u\to X_u$ and the retraction $X_v\to X_e$ define a coarse Lipschitz retraction $X_{uv}\to Y'$. Since $Y'$ is Hausdorff-close to $X_u$, we obtain a coarse Lipschitz retraction $X_{uv}\to X_u$. The reader will verify that the coarse Lipschitz bound for this retraction depends only on the parameters of ${\mathfrak X}$ and its retractivity constant. \qed \medskip Another useful application of Theorem \ref{thm:left-inverse} is in the setting of trees of hyperbolic spaces (which we will discuss in more detail in the next section): \begin{cor}\label{cor:projection} Suppose that the trees of spaces ${\mathfrak X}, {\mathfrak X}'$ and a morphism $h: {\mathfrak X}'\to {\mathfrak X}$ which is a fiberwise (relative) $L$-qi embedding have the following properties: 1. For some $\delta$, all vertex and edge-spaces $X_v, X_e$ are $\delta$-hyperbolic. (Accordingly, the images of the vertex and edge-spaces, $h_v(X'_v)\subset X_v$ and $h_e(X'_e) \subset X_e$, are $\la$-quasiconvex subsets of $X_v, X_e$ respectively, where $\la=\la_{\ref{lem:qi-preserves}}(\delta, L)$.) 2. The retractions $h'_v, h'_e$ are ``nearest-point projections'' in the sense that $$ h'_v= P_{X_v,X'_v} \circ h_v, \quad h'_e= P_{X_e,X'_e} \circ h_e. $$ 3. There is a constant $K$ such that for every edge $e=[v,w]\in E(T')$ the Hausdorff distances $\operatorname{Hd}_{X'}(X'_v, X'_e)$ and $\operatorname{Hd}_{X'}(X'_w, X'_e)$ are $\le K$. 4. $T'=T$.
Then the fiberwise nearest point projections $h'_v, h'_e$ extend to an $L_{\ref{cor:projection}}(\delta,L,K)$-coarse retraction $h': X\to X'$, where (without loss of generality) $$ L_{\ref{cor:projection}}(\delta,L,K) \ge \max(L,K). $$ \end{cor} \proof For vertices $v$ in $T$ incident to an edge $e$, the images $h_v(X'_v)$, $f_{ev}\circ h_e(X'_e)$ are uniformly Hausdorff-close to each other. Therefore, the nearest-point projections (in $X_v$) to these uniformly quasiconvex subsets are also uniformly close to each other (see Corollary \ref{cor:proj-to-close-subsets}). Now, the claim follows from Theorem \ref{thm:left-inverse}. \qed \section{Trees of hyperbolic spaces} \label{sec:hyperbolic trees} We now introduce {\em hyperbolicity conditions} for trees of spaces. \begin{defn}\index{Axiom H} \label{defn:axiom H} A tree of spaces ${\mathfrak X}$ satisfies Axiom {\bf H} if there are constants $\delta_0$ and $L_0$ such that: (1) Each vertex/edge space $X_v, X_e$ of ${\mathfrak X}$ is a $\delta_0$-hyperbolic geodesic metric space. (2) Each incidence map $f_{ev}: X_e\to X_v$ is an $L_0$-qi embedding. \noindent We will refer to such ${\mathfrak X}$ as a {\em tree of hyperbolic spaces}. A finite graph of finitely generated groups ${\mathcal G}$ satisfies Axiom {\bf H} if the the corresponding tree of spaces does. In other words, all vertex and edge-groups have to be hyperbolic and edge-groups are quasiconvex in the incident vertex-groups. \end{defn} A word of caution: Our terminology does not mean that a tree of hyperbolic spaces ${\mathfrak X}=(\pi: X\to T)$ has $\delta$-hyperbolic total space $X$. Simple examples are given by Euclidean plane and Cayley complexes of Baumslag--Solitar groups. One needs to add a suitable {\em flaring condition} on ${\mathfrak X}$ to ensure hyperbolicity of $X$, as discussed in Section \ref{sec:flare}. Note also that our terminology requires not only uniform hyperbolicity of vertex and edge-spaces but also uniform qi embedding condition for the incidence maps. \begin{defn}\index{parameters of a tree of hyperbolic spaces} We will refer to $\delta_0$ and $L_0$ as the {\em primary parameters} of a tree of hyperbolic spaces ${\mathfrak X}$. \end{defn} In general, throughout the book, we will suppress the dependence of various constants and functions on the parameters of ${\mathfrak X}$. \begin{defn} Suppose that ${\mathfrak X}$ is a tree of hyperbolic spaces. Let $A, B\subset X$ with $\pi(A)\subset \pi(B)$. If $X_v\cap A$ and $X_v\cap B$ are uniformly quasiconvex in $X_v$ for all $v\in \pi(A)$, we define the nearest projections in $X_v$ of $A\cap X_v$ to $B\cap X_v$. This gives us a map $A\rightarrow B$. We refer to this map as the {\em fiberwise projection} of $A$ to $B$. \end{defn} \medskip It is immediate that for every tree of hyperbolic spaces, for every edge $e=[v,w]\in E(T)$, the subset $X_{ev}\subset X_v$ is $\la_0= \la_{\ref{lem:qi-preserves2}}(\delta_0,L_0)$-quasiconvex. In particular, every tree of hyperbolic spaces is retractive (see Definition \ref{defn:retractive tree}) with retractions $$ f_{ve}: X_v\to X_e, e=[v,w], $$ given by the nearest-point projections $P=P_{X_v,X_{ev}}$ to the quasiconvex subsets $X_{ev}=f_{ev}(X_e)\subset X_v$; more precisely: $f_{ve}(x)$ is defined to be an arbitrary point in $f_{ev}^{-1}(P(x))$. 
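The following family of examples illustrates Axiom {\bf H} and the word of caution above; the facts used are standard and it is included only for illustration.

\begin{example}
Every finite graph of finitely generated free groups with finitely generated edge groups satisfies Axiom {\bf H}: finitely generated free groups are hyperbolic, and their finitely generated subgroups are quasiconvex. In particular, the Baumslag--Solitar groups $BS(1,n)=\langle a,t\mid tat^{-1}=a^n\rangle$, $n\ge 2$, regarded as HNN extensions of $\mathbb Z$ over $\mathbb Z$, are fundamental groups of graphs of groups satisfying Axiom {\bf H}, even though, as noted above, the corresponding total spaces are not hyperbolic.
\end{example}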
\medskip As an application of Remark \ref{rem:linear} or, alternatively, of Corollary \ref{cor:r'}, we obtain: \begin{lemma}\label{lem:L'0} Suppose that ${\mathfrak X}$ is a tree of hyperbolic spaces with the primary parameters $\delta$ and $L$. Then for every edge $e=[u,v]\in E(T)$, the inclusion maps $X_u\to X_{uv}, X_v\to X_{uv}$ are $L'_0= L_{\ref{lem:L'0}}(\delta,L)$-qi embeddings where $L'_0$ is the maximum of $2$ and of the coarse Lipschitz constant for a retraction $X_{uv}\to X_v$ (see Corollary \ref{cor:r'}). \end{lemma} \begin{rem} In this lemma we ensured that $L'_0\ge 2$. This, somewhat artificial, convention will be used in the proof of Lemma \ref{lem:growth-of-flare} below. \end{rem} \medskip Suppose that ${\mathfrak X}'=(\pi: X'\to T)$ is a tree of hyperbolic spaces, $G< \operatorname{Isom}(X')$ is a subgroup acting by automorphisms of ${\mathfrak X}$, such that the quotient graph $T/G$ is finite and for every vertex $v\in V(T)$ (resp. edge $e\in E(T)$) the action of the corresponding stabilizer $G_v< G$ (resp. $G_e< G$) on $X'_v$ (resp. $X_e$) is quasiconvex (see Definition \ref{defn:qc-subgroup}). Thus, the group $G$ also has structure of a graph of finitely generated groups ${\mathcal G}$ (with the underlying graph $T/G$); in particular, $G$ is finitely generated. (Note that we are not assuming hyperbolicity of the space $X$.) Since $G$ acts via automorphisms of ${\mathfrak X}'$, for each edge $e=[v,w]\in E(T)$ the subspace $X'_{ev}\subset X'_v$ is $G_e$-invariant. We will also assume that for each $v\in V(T)$ the $G_v$-orbit of $X'_{ev}$ is {\em locally finite} in $X'_v$ (see Definition \ref{defn:locally finite action}). Note that the local finiteness assumption is automatic for instance if there exists a larger discrete group $G'_v$ (containing $G_v$) acting on $X'_v$ geometrically (Lemma \ref{lem:geo->lf}). \begin{prop}\label{prop:retract-so-subgroup} Under the above assumptions, there exists a coarse Lipschitz retraction $X'\to G x$ for each $G$-orbit in $X'$. In particular, each orbit map $G\to Gx\subset X'$ is a qi embedding. \end{prop} \proof The proof is similar to that of Corollary \ref{cor:projection}. Let ${\mathfrak X}= (\pi: X\to T)$ denote the tree of hyperbolic spaces corresponding to the graph of groups ${\mathcal G}$. The isometric action of $G$ via automorphisms of ${\mathfrak X}'$ defines a morphism of trees of spaces ${\mathfrak X}\to {\mathfrak X}'$. This morphism is relatively retractive in view of the quasiconvexity assumption for the actions $G_v\curvearrowright X'_v, G_e\curvearrowright X'_e$. In view of Proposition \ref{prop:proj-to-qc-action}(4), the local finiteness assumption implies that for $y\in X'_v$, the restriction to $X'_e$ of the nearest-point projection $P_{X'_v,G_vy}$ is within uniformly bounded distance from the projection $P_{X'_{ev},G_e y}$. Thus, Theorem \ref{thm:left-inverse} applies and the coarse Lipschitz retractions $X'_v\to X_v, X'_e\to X_e$ together give rise to a coarse Lipschitz retraction $X'\to X$. Since $G$ acts cocompactly on $X$, we, thus, obtain a coarse Lipschitz retraction $X'\to G x$. \qed \begin{cor} Suppose that ${\mathcal G}'$ is a finite, connected graph of hyperbolic groups satisfying Axiom {\bf H}, with $\pi_1({\mathcal G}')=G'$ and let $T$ be the Bass--Serre tree of ${\mathcal G}'$. Let $G< G'$ be a subgroup such that: 1. For every vertex $v$ (resp. edge $e$) of $T$, the $G$-stabilizer $G_v< G$ of $v$ (resp. 
the $G$-stabilizer $G_e< G$ of $e$) is a quasiconvex subgroup of the $G'$-stabilizer $G'_v< G'$ of $v$ (resp. of the $G'$-stabilizer $G'_e< G'$ of $e$). 2. The quotient-graph $T/G$ is finite. \noindent Then there exists a coarse Lipschitz retraction $G'\to G$. In particular, the subgroup $G$ is qi embedded in $G'$. \end{cor} \begin{example} Let $H=\pi_1(S_1)\star \pi_1(S_2)$, where $S_1, S_2$ are closed connected hyperbolic surfaces, and let $\phi_i: \pi_1(S_i)\to \pi_1(S_i), i=1, 2$, be automorphisms. Then $\phi_1, \phi_2$ define an automorphism $\phi: H\to H$ and we obtain subgroups $G_i=\pi_1(S_i) \rtimes_{\phi_i} \mathbb Z$ in $G'=H\rtimes_\phi \mathbb Z$. The subgroups $G_i< G'$ clearly satisfy the assumptions of the corollary (where $T$ is the line), which implies that they are coarse Lipschitz retracts of $G'$. Note, furthermore, that if $\phi_1, \phi_2$ are induced by pseudo-Anosov homeomorphisms of the surfaces $S_1, S_2$, then the group $G'$ is isomorphic to the amalgam of hyperbolic groups $G_1\star_{\mathbb Z} G_2$, where $\mathbb Z$ is a malnormal subgroup of both $G_1, G_2$. Hence, the group $G'$ is hyperbolic (see Corollary \ref{cor:malnormal-amalgam} below) and the subgroups $G_1, G_2$ are quasiconvex in $G'$. \end{example} Below, $H$ is a quasiconvex subgroup of $G'_v$ for some vertex $v$ in $T$. \begin{lemma}\label{lem:qc-stabs} For each edge $e$ and vertex $w$ in $T'$, the $H$-stabilizer of $e$ (resp. $w$) is a quasiconvex subgroup of $H$ and $G'_e$ (resp. $G'_w$). \end{lemma} \proof Consider first an edge $e=[v,w]$. Then $H_e=H\cap G'_e$ is the intersection of two quasiconvex subgroups of $G'_v$, hence, is quasiconvex in $G'_v$, $G'_e$ and $H$ (see Corollary \ref{cor:qc-in}). The general case follows by induction on the edge-path connecting $e$ (resp. $w$) to $v$. \qed \begin{lem}\label{lem:finite stabs} Suppose, in addition, that the $H$-stabilizers of edges incident to $v$ are all finite. Then for each $R\ge 0, x\in X'_v$ and all the edges $e$ incident to $v$, the coarse intersections $Hx \cap N_R(X'_{ev})$ are uniformly bounded, with bound independent of $e$. In other words, the pairs $Hx, X'_e$ are uniformly cobounded. \end{lem} \proof By properness of the action, there exists an edge $e$ incident to $v$ such that the diameter of the intersection $Hx \cap N_R(X'_{ev})$ is maximal. The subset $X'_{ev}$ is Hausdorff-close to the orbit $G'_e x$. According to Proposition \ref{prop:proj-to-qc-action}, the coarse intersection $Hx \cap N_R(X'_{ev})$ is Hausdorff-close to the orbit $H_e x$, where $H_e=H\cap G'_e$. Since, by the hypothesis of the lemma, the subgroup $H_e$ is finite, the coarse intersection $Hx \cap N_R(X'_{ev})$ is bounded. \qed \begin{cor}\label{cor:finite stabs} If the pair $Hx, X'_e$ is not cobounded then the intersection $H\cap G'_e$ is infinite. \end{cor} We will prove in Corollary \ref{cor:finite-tree-hyp} that there exists a function $\delta(n)$ (depending also on the constants $\delta_0$ and $L_0$) such that for each interval $J$ of length $n$, the pull-back space $X'_J$ (with its intrinsic path-metric) is $\delta(n)$-hyperbolic. Thus, we can talk about cobounded pairs of subspaces in vertex-spaces $X'_v, X'_w$, $v, w\in V(J)$. 
\medskip Continuing with the notation of Lemma \ref{lem:finite stabs}, and applying Corollary \ref{cor:finite stabs} inductively (with Lemma \ref{lem:qc-stabs}), we obtain: \begin{lemma}\label{lem:trivial stabs} Suppose that for some vertex $w\in T, x\in X'_v$, the subsets $Hx, X'_w$ are not cobounded in $X'_J$, $J= \llbracket v, w\rrbracket$. Then the $H$-stabilizer of the segment $J$ is an infinite subgroup of $H$. \end{lemma} \medskip Below are a few more easy consequences of Axiom {\bf H} for trees of spaces. \begin{lemma}\label{lem:edge-spaces} Assume that ${\mathfrak X}$ is a tree of hyperbolic spaces. Then for every edge $e=[v_1,v_2]$ of $T$, if $\alpha_{i}= [x_i y_i]_{X_{v_i}} \subset X_{v_i}$ are vertical geodesics such that $$ d_{X_{v_1 v_2}}(x_1, x_2)\le C, \quad d_{X_{v_1 v_2}}(y_1, y_2)\le C, $$ then the Hausdorff distance between these vertical geodesics in $X_{v_1 v_2}$ is at most $C_1=C_{\ref{lem:edge-spaces}}(C)$. \end{lemma} \proof Geodesics $x_1x_2, y_1y_2$ have to cross $X_e$ (separating $X_{v_1 v_2}$) at some points $x, y\in X_e$. Since both $X_{ev_i}\subset X_{v_i}$ are $\la_0$-quasiconvex, it follows that geodesics $\alpha_i$ lie in $N_{\la_0}(X_{ev_i})$, $i=1,2$. Lemma \ref{lem:sub-close} applied to the geodesic $\alpha_i$ and the $L_0$-quasigeodesic $\alpha'_i= f_{ev_i}(\alpha)$, where $\alpha=[xy]_{X_e}$, implies that $$ \operatorname{Hd}_{X_{v_i}}(\alpha_i, f_{ev_i}(\alpha))\le D=D_{\ref{lem:sub-close}}(\delta_0,L_0,C). $$ Since $$ \operatorname{Hd}_{X_{v_1v_2}}(\alpha, f_{ev_i}(\alpha))\le 1, $$ we conclude: $$ \operatorname{Hd}_{X_{v_1v_2}}(\alpha_1, \alpha_2)\le 2(1+ D). \qed $$ \begin{lemma}\label{lem:growth-of-flare} Let $I=\llbracket v,w\rrbracket \subset T$ be a subinterval; we denote its consecutive vertices by $v_0=v, v_1, ..., v_n=w$. Let $\gamma_0, \gamma_1$ be $K$-qi sections over $I$. Then the function $$ \ell(i):= d_{X_{v_i}}(\gamma_{0}(v_i), \gamma_{1}(v_i)), i\in [0, n]\cap \mathbb Z, $$ satisfies $$ \ell(n)\le a^n \ell(0) + \frac{a^n-1}{a-1}b< a^n (\ell(0) + b), $$ where $a= L'_0$, $b=2L'_0 K$. \end{lemma} \proof Consider an edge $e=[v_i, v_{i+1}]\subset I$. The points $$ \gamma_{0}(v_{i+1}), \gamma_{1}(v_{i+1})\in X_{v_{i+1}} $$ are connected by a path of length $\le 2K+ \ell(i)$ in $X_{\llbracket v_i, v_{i+1} \rrbracket}$, obtained by concatenating a vertical geodesic $[\gamma_{0}(v_i)\gamma_{1}(v_i)]_{X_{v_i}}$ with two geodesics of length $\le K$. Since $X_{v_{i+1}}$ is $L'_0$-qi embedded in $X_{\llbracket v_i, v_{i+1} \rrbracket}$, we have $$ \ell(i+1)\le L'_0( 2K+ \ell(i)). $$ Then $$ \ell(n)\le a^n \ell(0) + (a^{n-1}+\ldots +1) b = a^n \ell(0) + \frac{a^n-1}{a-1}b< a^n ( \ell(0) + b). \qed $$ \begin{cor}\label{cor:contraction} If $\ell(0)\ge a (M+b)$, then for all $$ n\in [0, N], N= \left\lfloor \log_a \left( \frac{\ell(0) }{M +b} \right) \right\rfloor, $$ we have $$ \ell(n)>M. $$ \end{cor} \proof We first reverse the roles of $\ell(0)$ and $\ell(n)$ and obtain from the lemma that $$ \ell(n)> a^{-n} \ell(0) - b, n\in \NN. $$ The inequality $\ell(n)> M$ then follows from $$ n\le N \le \log_a \left( \frac{\ell(0) }{M +b} \right). $$ The assumption that $\ell(0)\ge a (M+b)$ ensures that $N\ge 1$. \qed \medskip Another corollary (or, rather, a special case of the lemma) is \begin{cor}\label{bdd-flaring} For every edge $e=[u,v]$ in $T$ and every pair of $K$-qi sections $\gamma_0, \gamma_1$ over the interval $uv$ with $x=\gamma_0(u), y=\gamma_1(u)\in X_u$, we have $$ d_{X_v}(\gamma_0(v), \gamma_1(v))\le D_{\ref{bdd-flaring}}(K, d_{X_u}(x,y))= L'_0( 2K+ d_{X_u}(x,y)). 
$$ \end{cor} \section{Flaring}\label{sec:flare} Geodesics (and, hence, uniform quasigeodesics) in hyperbolic spaces diverge (exponentially fast). Since $k$-qi leaves in hyperbolic trees of spaces $\pi: X\to T$ are uniform quasigeodesics, they should also diverge if $X$ is hyperbolic. In this section we discuss several divergence conditions, called {\em flaring conditions}\footnote{Flaring conditions do not require Axiom {\bf H}.}, which one can impose on qi-leaves in trees of spaces. These conditions involve pairs $\Pi=(\gamma_0, \gamma_1)$ of $k$-sections $\gamma_0, \gamma_1$ over a common geodesic segment $J=\llbracket t_{-n}, t_n\rrbracket \subset T$ of length $2n$ and prescribe the nature of growth of the vertical distances $$ d_{X_{t_i}}(\gamma_0(t_i), \gamma_1(t_i)) $$ for $i>0$ or $i<0$. The {\em girth} $\Pi_0$ of the pair $(\gamma_0,\gamma_1)$ is the vertical distance $$ d_{X_{t_0}}(\gamma_0(t_0), \gamma_1(t_0)). $$ \begin{rem} $\Pi_0$ need not be equal to $$ \min_{v\in V(J)} d_{X_v}(\gamma_0(v), \gamma_1(v)). $$ \end{rem} We will frequently use the notation $\Pi_{\max}$ for the {\em maximal separation of the ends} of the pair $\Pi=(\gamma_0, \gamma_1)$, $$ \Pi_{\max}:= \max \left(d_{X_{t_{-n}}}(\gamma_0(t_{-n}), \gamma_1(t_{-n})), d_{X_{t_{n}}}(\gamma_0(t_{n}), \gamma_1(t_{n})) \right); $$ this quantity will be used to measure the growth of the above vertical distances (in at least one of the two directions). \begin{figure}[tbh] \centering \includegraphics[width=60mm]{fig1.pdf} \caption{Flaring.} \label{flaring.fig} \end{figure} \subsection{Proper and uniform flaring conditions} The {\em proper flaring} condition requires $k$-qi sections over the same geodesic in $T$ to diverge at some uniform rate in at least one direction. More precisely: \begin{defn} [Proper flaring] \index{proper flaring} A tree of spaces ${\mathfrak X}=(\pi: X\to T)$ is said to satisfy the {\em proper $\kappa$-flaring condition} if there exist $m_\kappa\ge 0$ and a positive proper function $\phi_{\kappa}: {\mathbb N}\to {\mathbb R}_+$ such that for every pair $\Pi$ of $\kappa$-qi sections $\gamma_0,\gamma_1$ of girth $> m_\kappa$, over an interval of length $2n$ in $T$, we have $$ \Pi_{\max}\ge \phi_\kappa(n). $$ \end{defn} In other words, $\kappa$-sections have to diverge uniformly fast but the rate of divergence is allowed to be, say, sublinear (unlike the Bestvina--Feighn flaring condition where one has an exponential rate of flaring). \medskip It is clear from the definition that if ${\mathfrak X}$ satisfies the proper $K$-flaring condition, then it also satisfies the proper $\kappa$-flaring condition for all $\kappa\in [1, K]$: We simply take $\phi_\kappa:=\phi_K$ and $m_\kappa:=m_K$. Note also that it would be too much to ask for $$ \Pi_{\min}=\min\left( d_{X_{-n}}(\gamma_0(-n), \gamma_1(-n)), d_{X_{n}}(\gamma_0(n), \gamma_1(n)) \right) \ge \phi_\kappa(n), $$ for some (uniform) proper function $\phi_\kappa$ (see the example below). \begin{defn}\label{defn:signed flare} We will say that a pair $\Pi=(\gamma_0, \gamma_1)$ of sections over an interval $\llbracket -N, N\rrbracket$ in $T$ is {\em flaring in the positive/negative direction} if, respectively, $$ d_{X_{n}}(\gamma_0(n), \gamma_1(n)) \ge \phi_\kappa(n), $$ or $$ d_{X_{-n}}(\gamma_0(-n), \gamma_1(-n)) \ge \phi_\kappa(n) $$ for all $n\in \NN\cap [1, N]$. \end{defn} We will see in Lemma \ref{lem:exp} and Corollary \ref{cor:signed flare} that proper flaring (for all $\kappa\ge 1$) implies proper flaring in the positive or the negative direction (after a possible change of the function $\phi_\kappa$). 
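\begin{example} The following elementary example, included only as an illustration, shows why no lower bound of the above kind on $\Pi_{\min}$ can be expected, and also exhibits a pair which is flaring in the negative but not in the positive direction. Realize the hyperbolic plane, in its upper half-plane model, as a tree of spaces over the simplicial line, where the vertex space over $i\in \mathbb Z$ is the horocycle $\{y=e^i\}$ equipped with its intrinsic metric (isometric to $\RR$) and the incidence maps are the vertical projections between consecutive horocycles (bi-Lipschitz with the constant $e$). For $a, b\in \RR$ with $|a-b|\ge 1$, the vertical geodesics $x=a$ and $x=b$ define uniform qi sections $\gamma_a, \gamma_b$ whose separation in the vertex space over $i$ equals $|a-b|\, e^{-i}$. Over the interval $\llbracket -n, n\rrbracket$ the pair $(\gamma_a,\gamma_b)$ has girth $|a-b|$ (which can be taken arbitrarily large) and is flaring in the negative direction (with, say, $\phi(n)=e^n$), while its separation at the positive end, $|a-b|\, e^{-n}$, tends to $0$ as $n\to \infty$. \end{example}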
\medskip In the book we will be mostly using an alternative form of the proper flaring condition, established in the next proposition. For the ease of notation, in this section we will identify a geodesic, say $\llbracket v,w\rrbracket$, of length $\ell$ in $T$ (where $v,w\in V(T)$) with an interval $[a, b]\subset \RR$ of length $\ell$, where $a, b\in \mathbb Z$, through an implicit isometry $[a,b]\rightarrow \llbracket v,w\rrbracket$; this means, in particular, that integers correspond to the vertices in $\llbracket v, w\rrbracket$. \begin{prop}\label{prop:weak flaring} The following are equivalent: 1. A tree of spaces ${\mathfrak X}=(\pi: X\to T)$ satisfies the proper $\kappa$-flaring condition. 2. There exists $M_\kappa$ such that for all $D\ge 0$, there is {$\tau=\tau_{\ref{prop:weak flaring}}(\kappa,D)$} satisfying the following: For every pair $\Pi$ of $\kappa$-qi sections $\gamma_0, \gamma_1$ over a geodesic interval $[-m, n]\subset T$ ($m, n\in \NN$), if $$ d_{X_i}(\gamma_0(i),\gamma_1(i)) >M_\kappa, \forall i\in [-m+1, n-1],$$ and $$ \Pi_{\max}=\max\left( d_{X_{-m}}(\gamma_0(-m), \gamma_1(-m)), d_{X_{n}}(\gamma_0(n), \gamma_1(n)) \right) \le D, $$ then $$ n+m\leq \tau.$$ \end{prop} \proof First of all, we leave it to the reader to check that if (2) holds for all $m=n$ then it holds for all $m, n\in \NN$. Therefore, in what follows, in (2) we will be always assuming that $n=m$. i. Assume that the proper flaring condition holds. Take $M_\kappa:=m_\kappa$ and consider a pair $\Pi=(\gamma_0, \gamma_1)$ of $\kappa$-qi sections over an interval $[-n, n]\subset T$ as in part (2). In particular, the girth of $(\gamma_0,\gamma_1)$ is $>M_\kappa$. By the proper flaring condition, we have $$ D\ge \Pi_{\max} \ge \phi_\kappa(n). $$ Since $\phi_\kappa(t)$ is proper, the preimage $\phi_\kappa^{-1}([0,D])$ is contained in an interval $[0, t_{\kappa,D}]$. Then we take $$ \tau(\kappa,D):= 2t_{\kappa,D}. $$ ii. Conversely, suppose that (2) holds but proper flaring fails. Then there exist a constant $D>0$ and a sequence $\Pi^m$ of pairs of $\kappa$-qi sections $\gamma^m_{0}, \gamma^m_{1}$ over some intervals $\llbracket s_m, t_m\rrbracket\subset T$ of length $2n_m$ with the midpoint vertex $r_m$ such that $\Pi^m_0\to\infty, n_m\to\infty$, but $$ \Pi^m_{\max}= \max\left( d_{X_{t_m}}(\gamma^m_{0}(t_m), \gamma^m_{1}(t_m)), d_{X_{s_m}}(\gamma^m_{0}(s_m), \gamma^m_{1}(s_m)) \right) \le D. $$ We will isometrically parameterize the geodesic $\llbracket s_m, t_m\rrbracket$ by the interval $[-n_m, n_m]\subset \mathbb Z$ so that $r_m$ corresponds to $0$. Set $\tau:= \tau(\kappa,M)$ where $$ M=\max(D, M_\kappa). $$ Define the function $$ \ell_m(i):= d_{X_i}(\gamma^m_{0}(i), \gamma^m_{1}(i)), i\in [-n_m, n_m]; \ell_m(0)= \Pi^m_0. $$ Then for sufficiently large $m$ we have $$ \Pi^m_0=\ell_m(0)> a(M+ b); a= L'_0, b=2L'_0 \kappa, $$ and, hence, according to Corollary \ref{cor:contraction}, for all $n\in [-N_m+1, N_m-1]$, we have $$ \ell_m(n)>M. $$ Here $$ N_m= \left\lfloor \log_a \left( \frac{\Pi^m_0}{M +b} \right) \right\rfloor. $$ Observe that the right hand side diverges to infinity as $m\to \infty$. Therefore, for sufficiently large $m$, $N_m>\tau/2$. Thus, we obtain a contradiction with (2). \qed \medskip While the proper flaring condition is quite natural, it is the condition (2) in the proposition that we will use throughout the book. 
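\begin{rem} By way of illustration of the quantities appearing in part (2) (a sample computation, not used elsewhere): if the proper $\kappa$-flaring condition happens to hold with the sublinear proper function $\phi_\kappa(n)=\sqrt{n}$, then $\phi_\kappa^{-1}([0,D])\subset [0, D^2]$, so in the first part of the proof one may take $t_{\kappa,D}=D^2$ and, hence, $\tau_{\ref{prop:weak flaring}}(\kappa,D)= 2D^2$. \end{rem}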
\begin{defn}\index{uniform flaring}\label{uniform flaring} We will say that a tree of spaces ${\mathfrak X}$ satisfies the {\em uniform $\kappa$-flaring} condition with the parameter $M_\kappa$ if the condition (2) in the proposition holds. \end{defn} \begin{convention} In what follows, unless indicated otherwise, $\kappa$-flaring always means uniform $\kappa$-flaring. \end{convention} \begin{lemma}\label{lem:hyp->uniform flaring} Suppose that ${\mathfrak X}=(\pi: X\to T)$ is a tree of hyperbolic spaces with $\delta$-hyperbolic total space $X$. Then ${\mathfrak X}$ satisfies the uniform $\kappa$-flaring condition for all $\kappa\ge 1$. \end{lemma} \proof As noted earlier, it suffices to consider the case $n=m$. Since $\gamma_0, \gamma_1$ are $\kappa$-quasigeodesics in $X$, they are within Hausdorff distance $D_{\ref{Morse}}(\delta,\kappa)$ from geodesics $\gamma_i^*$ in $X$ connecting the endpoints of $ga_0, \gamma_1$ respectively. Take $x_0=\gamma_0(0)$ and $x_0^*\in \gamma_0^*$ a point within distance $D_{\ref{Morse}}(\delta,\kappa)$ from $x_0$. The projections to $T$ of the geodesics $[\gamma_0(-n) \gamma_1(-n)]_X$, $[\gamma_0(n) \gamma_1(n)]_X$ each have length $\le D$. Thus, $$ d(x_0^*, [\gamma_0(\pm n) \gamma_1(\pm n)]_X)\ge D. $$ Suppose for a moment that $n-D> 2\delta$. By the slim quadrilateral property, there is a point $x_1^*\in \gamma_1^*$ within distance $2\delta$ from $x_0^*$. (A priori, this could have been a point on one of two other sides of the geodesic quadrilateral with the vertices $\gamma_i(\pm n), i=0,1$, but this possibility is ruled out by our assumption that $n-D> 2\delta$.) Thus, we find a point $x_1\in \gamma_1\cap X_v$, within distance $$ D_0=2(\delta + D_{\ref{Morse}}(\delta,\kappa))+\kappa $$ from $x_0$. While $v$ need not be equal to the vertex $0\in \llbracket -n, n\rrbracket\subset T$, we still have $$ d_T(0, v)\le D_0. $$ In particular, $$ d_{X_0}(\gamma_0(0), \gamma_1(0))=\Pi_0\ge d_X(\gamma_0(0), \gamma_1(0))\le D_1=D_0(\kappa+1). $$ We, therefore, set $$ M_\kappa= D_1 $$ and $\tau(\kappa,D)= \delta+ \frac{1}{2}D$. Since in the uniform $\kappa$-flaring property, it is assumed, in particular, that $$ d_{X_0}(\gamma_0(0), \gamma_1(0))> M_\kappa, $$ we obtain a contradiction with the above estimates, unless the inequality $n-D\ge 2\delta$ is violated, i.e. unless $n\le D+2\delta$, equivalently, the length of the interval $\llbracket -n, n\rrbracket$ is at most $\tau$, as required. \qed \medskip The uniform flaring condition has an immediate consequence that we will use on few occasions: \begin{lemma}[Three flows lemma] \label{lem:3-flows} Suppose that $\pi: X\to T$ satisfies the uniform $K$-flaring condition. Suppose that $\gamma_1, \gamma_2, \gamma_3$ are $K$-qi sections of $\pi: X\to T$ over an interval $\llbracket s,t\rrbracket $ such that for all $r\in \rrbracket s,t\llbracket$, $$d_{X_r}(\gamma_1(r), \gamma_3(r))> M_K$$ while $$ \max_{i,j} d_{X_s}(\gamma_i(s), \gamma_j(s))\le C, \quad \max_{i,j} d_{X_t}(\gamma_i(t), \gamma_j(t))\le C. $$ Then the length of the interval $\llbracket s,t\rrbracket $ is uniformly bounded, i.e is $\le \tau_{\ref{lem:3-flows}}(K,C)$. 
\end{lemma} The property appearing below will also be used quite often in our book: \begin{defn} We say that a tree of spaces ${\mathfrak X}$ {\em satisfies the $R(K,C)$-thin $K$-bigon property} if there is a function $R(K,C)$ such that for every pair $\Pi=(\gamma_1, \gamma_2)$ of $K$-qi sections of $\pi: X\to T$ over any interval $I=\llbracket v,w\rrbracket$, $$ \Pi_{\max}\le C \Rightarrow \quad \forall t\in V(I), ~~ d_{X_t}(\gamma_1(t), \gamma_2(t))\le R(K,C).$$ \end{defn} Here, as before, $$ \Pi_{\max}= \max\left( d_{X_v}(\gamma_1(v), \gamma_2(v)), d_{X_w}(\gamma_1(w), \gamma_2(w)) \right). $$ \begin{cor}\label{cor:super-weak flaring} A tree of spaces ${\mathfrak X}=(X\to T)$ satisfies the uniform $K$-flaring condition if and only if it satisfies the $R(K,C)$-thin $K$-bigon property for some $R(K,C)=R_{\ref{cor:super-weak flaring}}(K,C)$. \end{cor} \proof 1. Assume that ${\mathfrak X}$ satisfies the uniform $K$-flaring condition. Consider a pair of $K$-qi sections over an interval $I\subset T$. If for every vertex $t\in I$, $d_{X_t}(\gamma_1(t), \gamma_2(t))\le M_K$, then we are done. Otherwise, let $I'=\llbracket v', w'\rrbracket \subset \llbracket v,w\rrbracket $ be a maximal subinterval such that for all vertices $t\in I'$ we have $$d_{X_t}(\gamma_1(t), \gamma_2(t))> M_K.$$ Then there are edges $[v'',v'], [w',w'']$ in $I$ (not contained in $I'$) such that $$ d_{X_s}(\gamma_1(s), \gamma_2(s))\le C':=\max(M_K, C, 3\delta_0), s\in \{v'', w''\}. $$ By Lemma \ref{lem:3-flows} applied to $K$-qi sections $\gamma_1, \gamma_2=\gamma_3$, restricted to $I'':= \llbracket v'', w''\rrbracket $, we obtain: $$ d_T(v'', w'')\le \tau:=\tau_{\ref{lem:3-flows}}(K, C'). $$ By Lemma \ref{lem:growth-of-flare}, we get that for all $t\in V(I')$, $$ d_{X_t}(\gamma_1(t), \gamma_2(t))\le R_{\ref{cor:super-weak flaring}}(K,C):=a^{\tau} \left(C' + \frac{b}{a-1}\right), $$ with $a=L'_0, b=2L'_0K$. (Recall that $L'_0\ge 2$.) \medskip 2. We argue as in the proof of Proposition \ref{prop:weak flaring}. Suppose that the proper $K$-flaring condition (equivalently, by Proposition \ref{prop:weak flaring}, the uniform $K$-flaring condition) fails. Then there exist a constant $D>0$ and a sequence $\Pi^m$ of pairs of $K$-qi sections $\gamma^m_{0}, \gamma^m_{1}$ over some intervals $J_m=\llbracket s_m, t_m\rrbracket\subset T$ of length $2n_m$ with the midpoint vertex $r_m$ such that $\Pi^m_0\to\infty, n_m\to\infty$, but $$ \Pi^m_{\max}= \max\left( d_{X_{t_m}}(\gamma^m_{0}(t_m), \gamma^m_{1}(t_m)), d_{X_{s_m}}(\gamma^m_{0}(s_m), \gamma^m_{1}(s_m)) \right) \le D. $$ Setting $C:=D$, the hypothesis of part 2 of the corollary (the thin $K$-bigon property) means that $$d_{X_t}(\gamma^m_{0}(t), \gamma^m_{1}(t))\le R(K,C)$$ for all vertices $t\in J_m$. This contradicts $\Pi^m_0\to\infty$. \qed \subsection{Acylindrical trees of spaces} \label{sec:acyl trees} An easy, and frequently occurring, {\em sufficient} condition for uniform $\kappa$-flaring is {\em acylindricity}: \begin{defn}\label{defn:acylindrical} \index{acylindrical tree of spaces} Fix constants $\kappa\ge 1$ and $\tau\ge 1$. A tree of spaces $(\pi: X\to T)$ is {\em $(M,\kappa,\tau)$-acylindrical} if for every pair of $\kappa$-sections $\gamma_0, \gamma_1$ over an interval $J\subset T$ of length $\ge \tau$, we have $$ d_{X_t}(\gamma_0(t), \gamma_1(t))\le M, \forall t\in V(J). $$ \end{defn} \medskip We give a few geometric examples of acylindrical trees of spaces in Section \ref{sec:qcamalgam}. In order to see that acylindrical trees of spaces satisfy uniform flaring, we take $M_\kappa:=M$ and $\tau(\kappa,D):= \tau+2$. 
Then, regardless of $D$, if $\Pi=(\gamma_0,\gamma_1)$ is a pair of $\kappa$-qi sections over an interval $J=\llbracket u, v\rrbracket\subset T$ and $$ d_{X_i}(\gamma_1(i),\gamma_2(i)) >M, i\in V(\rrbracket u, v\llbracket), $$ then the length of $\rrbracket u, v\llbracket$ is $<\tau$ and, hence, $J$ has length $< \tau(\kappa,D)= \tau+2$. The terminology {\em acylindrical} has its origin in 3-dimensional topology: A compact oriented 3-dimensional manifold with incompressible boundary $M$ is called (homotopically) {\em acylindrical} if every map of an annulus $(A, \partial A)\to (M, \partial M)$ is homotopic (rel. $\partial A$) to a map $A\to \partial M$. Algebraically speaking, this condition means that if two elements of $\pi_1(\partial M,m)$ are conjugate in $\pi_1(M,m)$, then they are conjugate in $\pi_1(\partial M,m)$. If one glues two connected acylindrical 3-manifolds $M_1, M_2$ along their boundary surfaces to form a 3-manifold $M$, then every subgroup of $\pi_1(M)$ isomorphic to $\mathbb Z^2$ is contained (up to conjugation) in $\pi_1(M_1)$ or in $\pi_1(M_2)$. Algebraically speaking, topological acylindricity corresponds to acylindricity in the sense of group actions on trees (Definition \ref{defn:acylindrical-action}) as follows. The decomposition $M=M_1\cup M_2$ yields graph-of-groups decomposition of the fundamental $G= \pi_1(M)$. Let $G\times T\to T$ denote the action of $G$ on the Bass--Serre tree $T$ corresponding to this decomposition of $G$. Then the action of $G$ on $T$ is 1-acylindrical if and only if both manifolds $M_1, M_2$ are acylindrical. Suppose again that $G$ is the fundamental group of a finite graph of finitely generated groups $({\mathcal G}, Y)$; let $G\times T\to T$ be the corresponding $G$-action on the Bass--Serre tree and ${\mathfrak X}=(X\to T)$ the tree of spaces with $X$ equal to the Cayley graph of $G$ as discussed in Example \ref{ex:BS-tree}. We will see in Proposition \ref{example: acylindrical} that in this setting the tree of spaces ${\mathfrak X}$ is $(\kappa,\tau)$-acylindrical provided that the action of $G$ on $T$ is $k$-acylindrical for suitable values of $\kappa, \tau$ and $k$. \subsection{Group-theoretic examples} The following proposition was proved by Ilya Kapo\-vich \cite{ikap-acyl}; below, we give a different proof. \begin{prop}\label{example: acylindrical} Suppose $(\mathcal G, Y)$ is a finite graph of hyperbolic groups satisfying Axiom {\bf H} and $G:=\pi_1(\mathcal G)$. If the $G$-action on the Bass--Serre tree $T$ of $\mathcal G$ is $R$-acylindrical in the sense of Sela \cite{Sela97}, then for all $\kappa\geq 1$ there is a constant $M_{\kappa}$ such that the induced tree of metric spaces ${\mathfrak X}=(\pi: X\rightarrow T)$ is $(M_{\kappa}, \kappa, R)$-acylindrical. In particular, in view Theorem \ref{thm:mainBF}, $G$ is hyperbolic. \end{prop} \proof The first part of the proof follows in the arguments in \cite[Section 3]{ps-limset}. We will need some properties of the tree of spaces ${\mathfrak X}$ listed below. (1) The vertex-spaces of ${\mathfrak X}$ are metric graphs which are isometric copies of various cosets of $G_y$'s in $G$, where $y\in V(Y)$. The map $\pi: X\rightarrow T$ is $G$-equivariant. The $G$-action on $X$ is proper and cocompact, and the stabilizer of each $v\in V(T)$ acts on $V(X_v)$ transitively. (2) Suppose that $\Gamma$ is a Cayley graph of $G$ with respect to a finite generating set. Let $f: G \rightarrow X$ be an orbit map. We know that for each $y\in V(Y)$ and $g\in G$, $gG_y$ is a vertex of $T$. 
We have $\operatorname{Hd}(X_{gV_y}, f(gG_y))\leq D$, where $D$ is a constant independent of $g\in G, y\in V(Y)$. Suppose that the claim of the proposition fails for some $\kappa$. Then there is a sequence of pairs of $\kappa$-qi sections $\gamma_{0,n}, \gamma_{1,n}$ over geodesic intervals $$ \beta_n: [0, R+1]\rightarrow T$$ of length $R+1$, such that for some integer $t\in [0,R+1]$ we have $$ d_{X_{\beta_n(t)}}(\gamma_{0,n}(t), \gamma_{1,n}(t))\geq n, \quad \forall n\in \mathbb N.$$ Note that for all integers $s\in [0,R+1]$ $$ d_X(\gamma_{0,n}(s), \gamma_{1,n}(s))\geq d_X(\gamma_{0,n}(t), \gamma_{1,n}(t))- d_X(\gamma_{0,n}(s), \gamma_{0,n}(t))-d_X(\gamma_{1,n}(s), \gamma_{1,n}(t)). $$ Since $\gamma_{0,n}, \gamma_{1,n}$ are $\kappa$-qi sections, we have \begin{align*} d_X(\gamma_{0,n}(s), \gamma_{1,n}(s))\geq d_X(\gamma_{0,n}(t), \gamma_{1,n}(t))- 2\kappa|s-t|-2\kappa\geq \\ d_X(\gamma_{0,n}(t), \gamma_{1,n}(t))-2(R+1)\kappa-2\kappa. \end{align*} Since vertex-spaces of ${\mathfrak X}$ are uniformly properly embedded in the ambient space $X$ we see that $d_X(\gamma_{0,n}(t), \gamma_{1,n}(t))\rightarrow \infty$ as $n\rightarrow \infty$. Thus, $d_X(\gamma_{0,n}(t), \gamma_{1,n}(t))\rightarrow \infty$, which in turn implies that $d_{X_{\beta_n(s)}}(\gamma_{0,n}(s), \gamma_{1,n}(s))\rightarrow \infty$ for all $s\in [0,R+1]$. Thus, passing to subsequence, if necessary, we may assume that $d_{X_{\beta_n(s)}}(\gamma_{0,n}(s), \gamma_{1,n}(s)) \geq n$ for all $n\in \mathbb N$ and $s\in [0,R+1]$. Also, since the group $G$ acts on $T$ cocompactly, we can assume, by passing to subsequence if necessary, that $\beta_n(0)$ is a fixed vertex $v$ and $\gamma_{0,n}(0)$ is a fixed point $x\in X_v$. Since $X$ is quasiisometric to $G$, by passing to a further subsequence, if necessary, we may assume that $\beta_n$ is a fixed geodesic $vw$ in $T$, where $d_T(v,w)=R+1$. We note that since $d_X(\gamma_{0,n}(v), \gamma_{1,n}(w))\leq \kappa +\kappa R$, by Lemma \ref{lem:edge-spaces} we have $$ \operatorname{Hd}([\gamma_{0,n}(v) \gamma_{1,n}(v)]_{X_v}, [\gamma_{0,n}(w) \gamma_{1,n}(w)]_{X_v})\leq C_{\ref{lem:edge-spaces}}(\kappa+ (R+1)\kappa). $$ Now, by (2) above we have a constant $D_1$ and $y, y'\in V(Y)$, $g, g'\in G$ such that (i) $v=gG_y$, $w=g'G_{y'}$ and (ii) the diameter of $N_{D_1}(gG_{y})\cap g'G_{y'}$ is infinite in $\Gamma$. \medskip The rest of the argument is borrowed from \cite[Theorem 4.6]{mitra-ht}. Let $\{h_n\}\subset gG_{y}$ and $\{h'_n\}\subset g'G_{y'}$ be sequences of distinct elements such that $d_{\Gamma}(h_n, h'_n)\leq D_1$ for all $n\in \mathbb N$. Hence, $d_{\Gamma}(1, h^{-1}_nh'_n)\leq D_1$. But there are only finitely many elements of $G$ inside $B(1; D_1)$. Hence, passing to a subsequence, we may assume that the sequence $\{h^{-1}_nh'_n\}$ is constant. Let $x=h^{-1}_nh'_n$. Consider the equations $x=h^{-1}_mh'_m=h^{-1}_nh'_n$; whence $h_mx=h'_m, h_nx=h'_n$. Thus, we have $$x^{-1}h^{-1}_mh_nx={h'}^{-1}_mh'_n \Rightarrow h^{-1}_mh_n=x{h'}^{-1}_mh'_nx^{-1}$$ $$ \Rightarrow h_m(h^{-1}_mh_n) h^{-1}_m=(h_mx){h'}^{-1}_mh'_n(h_mx)^{-1}=h'_m({h'}^{-1}_mh'_n){h'}^{-1}_m.$$ Clearly, $h^{-1}_mh_n\in G_y$ and, hence, $h_m(h^{-1}_mh_n) h^{-1}_m\in h_mG_yh^{-1}_m=gG_yg^{-1}$, since $h_m\in gG_y$. Similarly, $h'_m({h'}^{-1}_mh'_n){h'}^{-1}_m\in g'G_{y'}g'^{-1}$. This implies that $$ h_m(h^{-1}_mh_n) h^{-1}_m=h'_m({h'}^{-1}_mh'_n){h'}^{-1}_m\in gG_yg^{-1}\cap g'G_{y'}g'^{-1}. $$ However, $gG_yg^{-1}$ is the stabilizer of the vertex $v=gG_y$ and $g'G_{y'}g'^{-1}$ is the stabilizer of $w=g'G_{y'}$. 
Since $\{h_n\}$ and $\{h'_n\}$ are sequences of distinct elements in $gG_y$ and $g'G_{y'}$ respectively, the intersection $G_v\cap G_w$ is infinite. Since $d_T(v,w)=R+1$ this contradicts the $R$-acylindricity of the $G$-action. \qed \begin{rem} 1. The proof of Proposition \ref{example: acylindrical} also works if we only assume that $G_v\cap G_w$ is finite whenever $d_T(v,w)\geq k+1$. 2. In fact, to conclude hyperbolicity of $G$ in the proposition, one does not need the full power of the Combination Theorem, Theorem \ref{thm:mainBF}; one can derive the result from the cobounded quasiconvex chain-amalgamation, Theorem \ref{thm:chain}. \end{rem} For the next corollary, we recall that a subgroup $H$ in a group $G$ is {\em weakly malnormal} if for every $g\in G\setminus H$ the intersection $$ gHg^{-1} \cap H $$ is finite. \begin{cor} \label{cor:malnormal-amalgam} {\em (\cite[Theorem 2]{km-malnormal})} If $G_1, G_2$ are hyperbolic groups and $H$ is a common quasiconvex subgroup which is weakly malnormal either in $G_1$ or in $G_2$, then $G=G_1*_H G_2$ is hyperbolic. \end{cor} \proof We claim that the action of $G$ on the Bass--Serre tree for the given amalgam decomposition is $3$-acylindrical. Without loss of generality let us assume that $H<G_1$ is weakly malnormal. If $T$ is the Bass--Serre tree and $v,w\in V(T)$ with $d_T(v,w)\geq 4$, then there is a sequence of consecutive vertices on $vw$ of the form $v_1=xG_2, v_2=yG_1, v_3=zG_2$, where $x,y,z\in G$. Then $G_{v_1}\cap G_{v_3}$ is equal to the intersection of the stabilizers of the two edges: (i) the edge connecting $v_1,v_2$, and (ii) the edge connecting $v_2,v_3$. However, these are two distinct conjugates of $H$ in the stabilizer of $v_2=yG_1$, i.e. they are of the form $ygHg^{-1}y^{-1}, yg'H{g'}^{-1}y^{-1}$ in $yG_1y^{-1}$ where $g,g'\in G_1$. Since $$ ygHg^{-1}y^{-1}\cap yg'H{g'}^{-1}y^{-1}=y(gHg^{-1}\cap g'H{g'}^{-1})y^{-1},$$ the (weak) malnormality of $H$ in $G_1$ proves our claim. Then the hyperbolicity follows from Proposition \ref{example: acylindrical}. \qed An example analogous to the situation of the corollary above in the context of a tree of spaces is discussed in Section \ref{sec:qcamalgam}. \subsection{Exponential flaring (Bestvina--Feighn flaring condition)} \begin{defn}[Exponential flaring condition]\label{exp-flare}\index{exponential flaring} We say that a tree ${\mathfrak X}$ of metric spaces $\pi:X\rightarrow T$ satisfies the {\em Bestvina--Feighn} $\kappa$-{\em flaring condition} or the {\em exponential} $\kappa$-{\em flaring condition}, if there exist $\lambda_\kappa>1, M_\kappa>0$ and $n_\kappa\in \mathbb N$ such that the following holds: For every pair $\Pi=(\gamma_0,\gamma_1)$ of $\kappa$-qi sections of ${\mathfrak X}$ over a length $2n_\kappa$ geodesic interval $\llbracket -n_\kappa,n_\kappa\rrbracket \subset T$, if the girth $\Pi_0$ of the pair $(\gamma_0, \gamma_1)$ is $\ge M_\kappa$, then $$ \lambda_\kappa\cdot \Pi_0 \le \Pi_{\max}. $$ \end{defn} A form of this flaring condition first appeared in the paper \cite{BF} of Bestvina and Feighn. Actually, the original Bestvina--Feighn flaring condition was a bit different from the exponential flaring condition above as they required not just two qi sections but a 1-parameter family of $\kappa$-qi sections interpolating these two, i.e. a $\kappa$-hallway, see Definition \ref{defn:hallway}. The existence of such a family (with a different but uniform qi constant $\kappa'$) follows from \cite{pranab-mahan}. 
It will also be proven in Lemma \ref{lem:E-ladder-structure}(b). We will see below that the exponential flaring implies proper flaring with an exponential function $\phi_\kappa$ and that if $X$ is hyperbolic, then ${\mathfrak X}$ satisfies the exponential $\kappa$-flaring condition for all $\kappa\ge 1$. Note that while in their first paper \cite{BF} Bestvina and Feighn imposed the exponential flaring condition for all $\kappa\ge 1$, in the addendum \cite{BF-err} to their paper, the flaring condition was required only for some value of $\kappa$, cf. the statement of our main result, Theorem \ref{thm:mainBF}. \begin{lemma}\label{lem:exp} Bestvina--Feighn $\kappa$-flaring implies exponential proper $\kappa$-flaring. Moreover, the proper flaring condition holds either in the negative or in the positive direction (see Definition \ref{defn:signed flare}). \end{lemma} \proof We fix $\kappa$ and set $n:= n_\kappa, \la:=\la_\kappa$. Suppose that $\Pi=(\gamma_0,\gamma_1)$ is a pair of $\kappa$-qi sections over a geodesic interval $I$ of length $N= s n$ and of girth $\Pi_0\ge M_\kappa$. For concreteness, we assume that $$ d_{X_{n}}(\gamma_0(n), \gamma_1(n))\ge \la \Pi_0, \quad \Pi_0= d_{X_{0}}(\gamma_0(0), \gamma_1(0)). $$ Then, applying the flaring inequality to the subinterval in $I$ of length $2n$ centered at $n$, we obtain $$ \max( d_{X_{2n}}(\gamma_0(2n), \gamma_1(2n)), \Pi_0)\ge \la d_{X_{n}}(\gamma_0(n), \gamma_1(n)). $$ Since $\la>1$, the maximum in this inequality is attained by $d_{X_{2n}}(\gamma_0(2n), \gamma_1(2n))$ and, thus, $$ d_{X_{2n}}(\gamma_0(2n), \gamma_1(2n)) \ge \la d_{X_{n}}(\gamma_0(n), \gamma_1(n)). $$ Applying this argument inductively, we obtain: $$ \lambda^s_\kappa\cdot \Pi_0\leq d_{X_{sn}}(\gamma_0(sn), \gamma_1(sn))\le \Pi_{\max}(sn).$$ By reducing $\la$ to $\mu>1$ if necessary and using Lemma \ref{lem:growth-of-flare}, we also get $$ \Pi_{\max}(m)\ge d_{X_{m}}(\gamma_0(m), \gamma_1(m))\ge \mu^m \Pi_0, \forall m\ge n. $$ Since the function $m\mapsto \mu^m, m\in \NN$, is proper, the exponential proper $\kappa$-flaring condition for ${\mathfrak X}$ follows. \qed \begin{prop}\label{hyp to lin flaring} If ${\mathfrak X}$ satisfies the proper $\kappa$-flaring condition for all $\kappa\ge 1$, then ${\mathfrak X}$ also satisfies an exponential $\kappa$-flaring condition for all $\kappa\geq 1$. In particular, if $X$ is hyperbolic, then ${\mathfrak X}$ satisfies an exponential $\kappa$-flaring condition for all $\kappa\geq 1$. \end{prop} \proof By the hypothesis, combined with Proposition \ref{prop:weak flaring} and Corollary \ref{cor:super-weak flaring}, the tree of spaces ${\mathfrak X}=(X\rightarrow T)$ satisfies both the proper $\kappa$-flaring condition and the thin bigon property of Corollary \ref{cor:super-weak flaring} for all $\kappa\ge 1$. We will use both of these in the proof. The proof is inspired by, but is conceptually simpler than \cite[Proposition 5.8]{pranab-mahan}. For each $K\geq 1$, we inductively define $K_0:=K$ and $K_i:=\max\{K_{i-1}, C_{\ref{lem:edge-spaces}}(K_{i-1})\}$, $i\geq 1$. Given $\kappa \geq 1$ we set $$ L:=\eta_{\ref{unif-emb-subtree}}(2\kappa_3), \epsilon=3\eta_{\ref{unif-emb-subtree}}(2\kappa_3), \quad R:=\max\{1, m_{\kappa_3}, L(5\epsilon+4L)\}$$ and $D:=\max\{R, R_{\ref{cor:super-weak flaring}}(\kappa_3,R)\}$. Let $n=n_{\kappa}$ be any integer such that $\phi_{\kappa_3}(n)\geq 12D$; set $\lambda_{\kappa}:=2$ and $M_{\kappa}:=D+1$. 
If $\Pi=(\gamma_0,\gamma_1)$ is a pair of $\kappa$-qi sections over an interval $J=[-n,n]\subset T$, $\Pi_0\geq M_{\kappa}$, then we form a metric bundle ${\mathfrak Y}= (Y\to J)$: The vertex-spaces $Y_i$ of $Y$ are geodesic segments in $X_i$ joining $\gamma_0(i),\gamma_1(i)$. The edge-spaces $Y_e$, $e=[i, i+1]$, of ${\mathfrak Y}$ are geodesic segments in $X_e$ with end-points within distance $\kappa$ from the respective end-points of $Y_i$. The incidence maps of ${\mathfrak Y}$ are obtained by composing the incidence maps of $f_{ev}$, $v\in V(J)$, composed with the nearest-point projections to $Y_v$ (taken in $X_v$). After that, the idea is to first decompose this interval-bundle into a finite number of subbundles by constructing qi sections in $Y$ (cf. \cite[Proposition 3.14]{pranab-mahan}, also Proposition \ref{vertical subdivision}), where the subbundles intersect along the qi sections. We then use proper flaring to prove that the qi sections bounding each of the subbundles flare in at least one direction. Finally, as in the last step of the proof of \cite[Proposition 5.8]{pranab-mahan}, we verify that at least half of these will flare in the same direction to finish the proof. {\bf Step 1: Construction of qi sections in $Y$.} We note that through any point of the metric bundle formed by two $K_{i-1}$-qi sections, there is a $K_i$-qi section, $i\geq 1$. Let $\alpha_i=Y\cap X_i$, $i\in V(J)$. For two consecutive integers $i, j$ we have a map $h_{ij}: \alpha_i\rightarrow \alpha_j$ such that for all $x\in \alpha_i$, $d_{X_{ij}}(h_{ij}(x),x)\leq \kappa_3$. This map is clearly $\eta_{\ref{unif-emb-subtree}}(2\kappa_3)$-coarsely Lipschitz, with a similarly defined $\eta_{\ref{unif-emb-subtree}}(2\kappa_3)$-coarse inverse $h_{ji}:\alpha_j\rightarrow \alpha_i$, which is also an $\eta_{\ref{unif-emb-subtree}}(2\kappa_3)$-coarsely Lipschitz map. Hence, by Lemma \ref{lem: qi from lipschitz}, the maps $h_{ij}, h_{ji}$ are both $(\eta_{\ref{unif-emb-subtree}}(2\kappa_3), 3\eta_{\ref{unif-emb-subtree}}(2\kappa_3))$-quasiisometries. By Lemma \ref{lem:c-mono}, if $x, y, z\in \alpha_i$ and $y$ is between $x,z$ with $d_{X_i}(x,y)\geq L(5\epsilon+4L)$, and $d_{X_i}(y,z)\geq L(5\epsilon+4L)$, then $h_{ij}(y)$ is between $h_{ij}(x)$ and $h_{ij}(z)$. In particular, this is true if $d_{X_i}(x,y)\geq R$ and $d_{X_i}(y,z)\geq R$ by the choice of $R$. Suppose $l(\alpha_0)=l$ and let $\alpha_0$ also denote the parametrization of this geodesic in $X_0$ so that $\alpha_0(0)=\gamma_0(0), \alpha_0(l)=\gamma_1(0)$. Next, we inductively construct a sequence of numbers $s_0=0, \cdots, s_n=l$ and a sequence of $\kappa_1$-qi sections $\gamma_0=\beta_0,\beta_1,\cdots, \beta_n=\gamma_1$ in $Y$ such that each $\beta_{i+1}$ is contained in the metric bundle formed by $\beta_i$ and $\beta_n$, $0\leq i\leq n-2$ as follows. Suppose $s_0,\cdots, s_i$ are chosen and so are $\beta_0,\cdots, \beta_i$ and $s_i<l$. To construct $s_{i+1}$ and $\beta_{i+1}$ consider the subset $S\subset (s_i,l]$ consisting of $s$ such that there is a $\kappa_2$-qi section $\beta$ through $s$ satisfying $$ \min_jd_{X_j}(\beta(j), \beta_i(j))\leq R. $$ If $S=\emptyset$ then define $s_{i+1}=s_n=\gamma_1$. Assume now that $S$ is nonempty. Suppose there is $s\in S$ and a $\kappa_1$-qi section $\beta$ in $Y$ through $\alpha_0(s)$ such that $\min_j d_{X_j}(\beta(j), \beta_i(j))= R$. In this case, if $s\neq l$, then define $s_{i+1}=s$, $\beta_{i+1}=\beta$. Otherwise, if $s=l$, then we define $s_{i+1}=s_n=l$ and $\beta_n=\gamma_1$. 
Suppose there is no such $s\in S$. Then let $s_{i+1}=\min\{l, 1+\sup S\}$. If $s_{i+1}\neq l$, then let $\beta_{i+1}$ be any $\kappa_1$-qi section in $Y$ passing through $s_{i+1}$. Otherwise, define $s_n=s_{i+1}$ and $\beta_{i+1}=\gamma_1$. We note that $s_{i+1}-s_i\geq R$ unless $s_{i+1}=s_n$. \medskip {\bf Step 2: Verification of the properties of the qi sections.} Let $\Pi^i=(\beta_i, \beta_{i+1})$ and let $Y^i$ denote the interval-bundle over $J$ formed by these qi sections. We claim that $Y^i\cap Y^j=\emptyset$, unless $|i-j|\leq 1$, and $Y^i\cap Y^{i+1}=\beta_{i+1}$ for all permissible $i$. Both claims follow from Lemma \ref{lem:c-mono}, cf. Lemma 3.12 of \cite{pranab-mahan}. \medskip {\bf Step 3: Flaring of $\Pi^i=(\beta_i, \beta_{i+1})$.} We know that there is a $\kappa_2$-qi section $\bar{\beta}_i$ through either $s_{i+1}$ or $s_{i+1}-1$ inside the subbundle $Y^i$, such that $$ d_{X_j}(\beta_i(u_i), \bar{\beta}_i(u_i))\leq R$$ for some $u_i\in V(J)$. Without loss of generality, we may assume that $u_i<0$. Now we construct a new set of $\kappa_3$-qi sections inside the bundle formed by $\beta_i$ and $\bar{\beta}_i$ as follows. Let $r=\lfloor (s_{i+1}-s_i-1)/D\rfloor$. Let $\beta'_j$, $0\leq j\leq r$ be arbitrary $\kappa_3$-qi sections in the bundle formed by $\beta_i, \bar{\beta}_i$ such that $\beta'_0=\beta_i$, and for $j\neq 0$ $\beta'_j$ passes through $\alpha_0(s_i+jD)$. It follows from Lemma \ref{cor:super-weak flaring} that for all $j, k\geq 0$ $d_{X_j}(\beta'_k(j), \beta'_{k+1}(j))\geq R$ and, thus, as in step 2, by Lemma \ref{lem:c-mono} we see that $$ \beta_i(j)=\beta'_0(j),\cdots, \beta'_m(0), \beta_{i+1}(j)$$ is a monotonic sequence in the geodesic interval $[\beta_i(j) \beta_{i+1}(j)]_{X_j}$ for all $j\geq 0$. Thus, \begin{align*} d_{X_n}(\beta_i(n), \beta_{i+1}(n))\geq \sum^{r-1}_{j=0} d_{X_n}(\beta'_j(n), \beta'_{j+1}(n)) \geq \\ \sum^{r-1}_{j=0}12D =\sum^{r-1}_{j=0} 12d_{X_0}(\beta'_j(0), \beta'_{j+1}(0)) = 12d_{X_0}(\beta'_0(0), \beta'_r(0))\end{align*} by the proper flaring and by the choice of $n$. However, $$ d_{X_0}(\beta_i(0), \beta_{i+1}(0))=d_{X_0}(\beta'_0(0), \beta'_r(0))+d_{X_0}(\beta'_r(0), \beta_{i+1}(0)),$$ where $d_{X_0}(\beta'_r(0), \beta_{i+1}(0)\leq D+1$. It follows that $$ d_{X_n}(\beta_i(n), \beta_{i+1}(n))\geq \frac{12D}{2D+1}d_{X_0}(\beta_i(0), \beta_{i+1}(0))\geq 4d_{X_0}(\beta_i(0), \beta_{i+1}(0)),$$ since $D\geq 1$. \medskip {\bf Step 4: Exponential flaring of $\gamma_0, \gamma_1$.} We know that each pair $\Pi^i=(\beta_i, \beta_{i+1})$ exponentially flares in at least one direction (say, in the positive direction) by Step 3. Then there is a subset of induces ${\mathcal I}\subset \{1, 2,...\}$, such that $$ \sum_{i\in {\mathcal I}} \Pi^i_0 \ge \frac{1}{2} \Pi_0. $$ It follows that $\Pi$ flares exponentially in the positive direction with $\lambda_{\kappa}=2$. \qed \begin{cor}\label{cor:signed flare} If ${\mathfrak X}$ satisfies the proper $\kappa$-flaring condition for all $\kappa\ge 1$, then (again, for all $\kappa\ge 1$) it satisfies proper flaring either in positive or in the negative direction. \end{cor} \section{Hyperbolicity of trees of hyperbolic spaces} \subsection{The combination theorem} We are now ready to state our version of the combination theorem of Bestvina and Feighn \cite{BF}: \begin{theorem}\label{thm:mainBF} There exist $K_*=K_{\ref{thm:mainBF}}(\delta_0,L_0)$ and $\delta_*=\delta_{\ref{thm:mainBF}}(\delta_0,L_0)$, depending only on $\delta_0$ and $L_0$, such that the following holds. 
Suppose ${\mathfrak X}=(\pi: X\rightarrow T)$ is a tree of hyperbolic spaces (with primary parameters $\delta_0, L_0$) satisfying the uniform $K_*$-flaring condition. Then $X$ is a $\delta_*$-hyperbolic metric space. \end{theorem} The constants $K_*$ and $\delta_*$ are computable. In Remark \ref{rem:K*} we will give a formula for $K_*$, which is inductive in nature, as it relies upon earlier computations of various constants and functions scattered throughout the book. (We will not attempt to write a formula for $\delta_*$.) The reader unwilling to keep track of such computations, can simply assume that ${\mathfrak X}$ satisfies the uniform $\kappa$-flaring condition {\em for all} $\kappa\ge 1$. \subsection{Cobounded quasiconvex chain-amalgamation}\label{sec:qcamalgam} In the book we will be frequently using the following very special case of Theorem \ref{thm:mainBF} which is much easier to prove, see e.g. \cite[Proposition 1.51]{pranab-mahan}. This special case was motivated by a result of Hamenstadt, \cite[Lemma 3.5]{hamenst-word}. Although Hamenstadt used much stronger assumptions, it is clear that the proof of Hamenstadt goes through with the weaker hypothesis as well. We include a proof along the lines of Hamenstadt's arguments for the sake of completeness and also since we want a description of geodesics. We assume that $X$ is a path-metric space that can be represented as a union of a finite chain of rectifiably-connected subsets equipped with induced path-metrics $$ Y= Q_0\cup Q_1\cup ... \cup Q_{n}, $$ such that for some constants $C$ and $\delta$ the following hold: \begin{enumerate} \item Each $Q_i$ is $\delta$-hyperbolic. \item For each $i<n$ the intersection $Q_{i,i+1}= Q_{i}\cap Q_{i+1}$ is rectifiably connected and $L$-qi embedded in $Q_i, Q_{i+1}$. \item Each $Q_{i,i+1}$ separates (in $Y$) $Q_{i}$ from $Q_{i+1}$ in the sense that every path $c$ in $X$ connecting $Q_i$ to $Q_{i+1}$ has to cross $Q_{i,i+1}$. \item Each pair of intersections $Q_{i-1,i}, Q_{i,i+1}$ is $C$-cobounded in $Q_i$. \item $d_{Q_{i}}(Q_{i-1,i}, Q_{i,i+1})\ge 1$. \end{enumerate} \medskip We will say that such $X$ is a {\em cobounded quasiconvex chain-amalgam} of $Q_i$'s. If $n=1$, we will refer to $X=Q_0\cup Q_1$ simply as a {\em quasiconvex amalgam}. \index{cobounded quasiconvex chain-amalgam} Clearly, the collection $Q_i$'s in a cobounded quasiconvex chain-amalgam gives $X$ structure of a tree of hyperbolic spaces with vertex-spaces $Q_i$ and edge-spaces $Q_{i,i+1}$, such that the tree $T$ is isometric to the interval $J$ of length $n+1$ in $\RR$ with integer vertices. Conversely, consider a tree of hyperbolic spaces ${\mathfrak X}$ over an interval $T$ such that for each vertex $v$ with the incident edges $e_\pm$, the corresponding subsets $X_{e_\pm v}$ are $C'$-cobounded in $X_v$. Then ${\mathfrak X}$ yields a cobounded quasiconvex chain-amalgam with subsets $Q_i=Q_v$, $v=v_i$, equal to the unions $$ X_{e_{\scriptstyle -}}\times \left[\frac{1}{2},1 \right] \cup_{f_{e_-v}} X_v \cup_{f_{e_+v}} X_{e_{\scriptstyle +}}\times \left[0,\frac{1}{2}\right], $$ $$ Q_{i-1,i}= X_{e_{\scriptstyle -}} \times \frac{1}{2}, $$ see Section \ref{sec:cylinders} for the definition of mapping cylinders. For each $i$ pick points $$ x_i^-\in N_{C'} (P_{Q_i,Q_{i-1,i}}(Q_{i,i+1})) \cap Q_{i-1,i}, x_{i}^+\in N_{C'} (P_{Q_i,Q_{i,i+1}}(Q_{i-1,i}))\cap Q_{i,i+1},$$ where the $C'$-neighborhoods are taken with respect to the metric of $Q_{i}, Q_{i}$. 
Since both projections of $Q_{i-1,i}$ to $Q_{i,i+1}$ and of $Q_{i,i+1}$ to $Q_{i-1,i}$ have diameters $\le C$, we obtain \begin{equation}\label{eq:min-cobounded} d_{Q_i}(Q_{i-1,i}, Q_{i,i+1})\le d_{Q_i}(x_i^-, x_i^+)\le d_{Q_i}(Q_{i-1,i}, Q_{i,i+1})+2(C+C'), \end{equation} i.e. the pair of points $x_i^-, x_i^+$ ``almost'' realizes the minimal distance in $Q_i$ between the subsets $Q_{i-1,i}$, $Q_{i,i+1}$. We will simultaneously prove hyperbolicity of $X$ and describe uniform quasigeodesics connecting points in $X$. For this description, given points $x\in Q_{i-1}, x'\in Q_{k+1}$, it will be convenient to name their nearest-point projections (in $Q_{i-1}, Q_{k+1}$) to $Q_{i-1,i}, Q_{k,k+1}$ as $\bar{x}, \bar{x}'$, respectively. Suppose, furthermore, that $$ c(x_{i}^+, x_{i+1}^-), $$ are $L'$-quasigeodesic paths in $Q_{i,i+1}$ connecting $x_{i}^+$ to $x_{i+1}^-$ and $$ c(x_i^-, x_{i}^+), c(x,\bar{x}), c(\bar{x}', x') $$ are $L'$-quasigeodesic paths in $Q_{i}$ connecting $x_i^-$ to $x_{i}^+$, etc. We let $c^*(\cdot, \cdot)$ denote the corresponding geodesics paths in $Q_{i,i+1}$, $Q_{i}$, connecting the respective points. \begin{thm}\label{thm:chain}\label{thm:hyp-tree} Under the above assumptions, $X$ is $\delta_{\ref{thm:chain}}(\delta,L,L',C,C')$-hyperbo\-lic. Moreover, the following paths $c(x,x')$ are $K=K_{\ref{thm:chain}}(\delta,L,C,D,L')$-quasigeodesics in $X$ connecting $x\in Q_{i-1}, x'\in Q_{k+1}$, $i\le k$: 1. If $x, x'$ belong to $Q'_i=Q_i\setminus (Q_{i-1,i}\cup Q_{i,i+1})$ for some $i$, then we assume that $c(x,x')$ is an $L'$-quasigeodesic in $Q_i$ connecting $x$ to $x'$. 2. Otherwise, $$ c(x,x')= c(x,\bar{x})\star c(\bar{x}, x_{i}^-) \star c(x_i^-, x_{i}^+) \star c(x_{i}^+, x_{i+1}^-) \star ... \star c(x^+_k,\bar{x}') \star c(\bar{x}', x'). $$ \end{thm} \proof This theorem is proven by verifying the assumptions of Corollary \ref{cor:bowditch}, i.e. axioms of a slim combing. (a1) We will need to estimate the length of $c(x,x')$ in terms of $d(x,x')$. First of all, $$ \operatorname{length}(c(x,x'))\le L' (\operatorname{length} (c^*(x,x')) +1), $$ hence, it suffices to get an estimate for $c^*$. Let $\gamma$ be any geodesic in $X$ connecting $x$ to $x'$. In view of the separation property (3) in the theorem, for each $i\le j\le k$, $\gamma$ will contain subpaths $\gamma_j\subset Q_j$ (necessarily a geodesic in $Q_j$) connecting a point $p_j^-\in Q_{j-1,j}$ to some $p_j^+\in Q_{j,j-1}$. Let $P_-, P_+$ denote the projections $Q_j\to Q_{j-1,j}$, $Q_j\to Q_{j,j+1}$ respectively. According to Lemma \ref{lem:projection-1}, $\gamma_j$ contains points $y_j^\pm$ satisfying $$ d_{Q_j}(y_j^\pm, P_\pm(p_j^\mp))\le 2\delta+\la, $$ hence, $$ d_{Q_j}(y_j^\pm, x_j^\pm)\le D:=C+C'+2\delta+\la, $$ where $\la=\la_{\ref{lem:qi-preserves2}}(\delta,L)$ is the quasiconvexity constant of $Q_{j,j\pm 1}$ in $Q_j$. Thus, $$ \operatorname{length}( [p_j^- x_j^-]_{Q_j} \star [x_j^- x_j^+]_{Q_j} \star [x_j^+ p_j^+]_{Q_j} ) \le \operatorname{length}(\gamma_j) + 4D. $$ Since $Q_{j-1,j}, Q_{j,j+1}$ are $L$-qi embedded in $Q_j$ we also obtain \begin{equation}\label{eq:gammaj} \operatorname{length}(c(p_j^-, p_j^+))\le L \cdot \operatorname{length}(\gamma_j) + 4D (L+1). 
\end{equation} Since $$ \operatorname{length}(\gamma)\ge d(x, p^-_{i}) + \sum_{j=i}^{k} \operatorname{length}(\gamma_j) + d(p_{k}^+, x'), $$ by combining the inequalities \eqref{eq:gammaj}, we get: $$ \operatorname{length}(c^* (x^-_{i}, x_{k}^+)) \le \operatorname{length}(c (p^-_{i}, p_{k}^+)) \le L \sum_{j=i}^{k} \operatorname{length}(\gamma_j) + 4D (L+1)(k-i+1). $$ To estimate the term $4D (L+1)m$, $m=k-i+1$, note that $d(x,y)\ge m$ (in view of the assumption 5 in the theorem). Thus, $$ \operatorname{length}(c^* (x^-_{i}, x_{k}^+)) \le L d(x,x') + 4D (L+1) d(x,x') = (L+ 4D (L+1) )d(x,x'). $$ Lastly, we deal with $d(x, x^-_{i})$ and $d(x', x_{k}^+)$. Recall that the metric space $(Q_i, d_{Q_i})$ is $\eta$-properly embedded in $X$ (Proposition \ref{unif-emb-subtree}). We obtain: \begin{align*} \operatorname{length} c^*(x, x^-_{i}) = d_{Q_i}(x, \bar{x}) + d_{Q_{i-1, i}}(\bar{x}, x^-_{i})\le \\ d_{Q_{i-1}}(x, \bar{x}) + L + L d_{Q_{i-1}}(\bar{x}, x^-_{i+1}) \le L + 2D + L d_{Q_{i-1}}(x, x^-_{i}) \le \\ L + 2D + L \eta( d(x, x^-_{i}) ) \le \\ L + 2D + L \eta( d(x, y^-_{i}) +D) \le L + 2D + L \eta( d(x, x') +D). \end{align*} Similarly, $$ \operatorname{length} c^*(x', x^+_{k}) \le L + 2D + L \eta( d(x, x') +D). $$ Combining the inequalities, we obtain: \begin{align*} \operatorname{length}(c^*(x,x'))= \operatorname{length} c^*(x, x^-_{i}) + \operatorname{length}(c (x^-_{i}, x_{k}^+) + \operatorname{length} c^*(x', x^+_{k}) \le \\ \eta_{\ref{thm:chain}}(d(x,x')) := 2(L + 2D + L \eta( d(x, x') +D)) + (L+ 4D (L+1) )d(x,x'). \end{align*} \medskip (a2) Consider a triple of points $x\in Q_i, y\in Q_j, z\in Q_k$, $i\le j\le k$, and the ``triangle'' $$ \Delta_c=c(x,y)\cup c(y,z)\cup c(z,x). $$ By the definition of the paths $c$ in $X$, the paths $c(x,y), c(y,z)$ coincide away from $Q_j$, the same applies to the pair of paths $c(y,z), c(z,x)$. Therefore, it suffices to consider the case when $i=j=k$. We will use the notation $pq$ for geodesics in $Q_i$. Our goal is to verify that each of the paths $c(p,q)$ connecting points $p, q\in Q_i$ are uniformly Hausdorff-close to a geodesic $pq$: Once we are done with this, then uniform slimness of $\Delta_c$ will follow from $\delta$-hyperbolicity of $Q_i$. If both $p, q$ are in $Q'_i$ or in $Q_{i,i+1}$ or in $Q_{i-1,i}$, there is nothing to prove. Hence, up to permutation of the points $p, q$ and reversing the order in the interval $[0, n]$, there are two cases to consider, depending on the position of the points $p, q$ with respect to the subsets $Q_{i-1,i}, Q_{i,i+1}$: Case 1. Suppose that $p\notin Q_{i-1,i}$ and $q\in Q_{i,i+1}$. We will be using the notation $\bar{p}=P_{Q_i,Q_{i,i+1}}(p)$. Then $$ c(p,q)= c(p, \bar{p}) \cup c(\bar{p}, p) $$ Since $Q_{i,i+1}$ is $L$-qi embedded in $Q_i$, this path is $D_{\ref{stab-qg}}(\delta, LL')$-Hausdorff close to the union $p \bar{p} \cup \bar{p} q$. According to Lemma \ref{rem:lip-proj}, $$ \operatorname{Hd}_{Q_i}(p \bar{p} \cup \bar{p} q, pq)\le \la+2\delta,$$ concluding the proof in this case. Case 2. Suppose that $p\in Q_{i-1,i}$ and $q\in Q_{i,i+1}$. In view of the assumption that $Q_{i-1,i}, Q_{i,i+1}$ are $L$-qi embedded in $Q_i$, we will work with $Q_i$-geodesics connecting pairs of points points in $Q_{i-1,i}$ and pairs of points in $Q_{i,i+1}$ instead of the $c$-paths in $Q_{i-1,i}$ and $Q_{i,i+1}$. Continuing with the notation of Case 1, $$ \operatorname{Hd}_{Q_i}(pq, p \bar{p} \cup \bar{p} q)\le \la+3\delta $$ (see Lemma \ref{lem:projection-1}) and $$ d(\bar{p}, x_i^+)\le C. 
$$ Thus, $$ \operatorname{Hd}_{Q_i}(pq, p x_i^+ \cup x_i^+ q)\le C+\la+4\delta. $$ Similarly, $$ \operatorname{Hd}_{Q_i}(p x_i^+, px_i^- \cup x_i^- x_i^+)\le C+\la+4\delta. $$ Combining the inequalities, we obtain $$ \operatorname{Hd}_{Q_i}(pq, p x_i^- \cup x_i^- x_i^+ \cup x_i^+ q)\le 2(C+\la+4\delta). \qed $$ \begin{rem} The flexibility of working with concatenations of quasigeodesics through points $x_i^\pm$ uniformly close to the nearest-point projections will be critical in several places in the book, e.g. in Chapter \ref{ch:CT}. \end{rem} \begin{cor}\label{cor:chain} Assuming that $X\to T$ is a tree of hyperbolic spaces satisfying the assumptions of Theorem \ref{thm:chain}, for every subinterval $S\subset T$, the inclusion map $$ X_S\to X $$ is an $L_{\ref{cor:chain}}(\delta,L,C)$-qi embedding. \end{cor} \proof This is an immediate consequence of the fact that for any pair of points $x, x'\in X_S$, the path $c_{X_S}(x,x')$ equals the path $c_{X}(x,x')$, where the subscript denotes the space in which we define the combing. \qed \begin{cor}\label{cor:edge-spaces} Suppose that ${\mathfrak X}=(\pi: X\to T)$ is a tree of spaces satisfying Axiom {\bf H}. Then, for every edge $e=[v,w]$ of $T$, the space $X_{vw}$, equipped with its natural path-metric, is $\delta'_0$-hyperbolic, with the hyperbolicity constant $\delta'_0$ depending only on the primary parameters of the tree of hyperbolic spaces ${\mathfrak X}$. \end{cor} We now return to the general quasiconvex chain-amalgamation and relate this class of trees of spaces to acylindrical trees of spaces. Suppose that $\gamma$ is a $\kappa$-qi section of the tree of spaces ${\mathfrak X}=(\pi: X\to J)$, defined on an interval $I\subset J$. Thus, for each integer $i$ we have a point $x_i\in Q_i$ and $d_X(x_i, x_{i+1})\le K$ for some $K$ depending on $\kappa$. If the length of $I$ is $\ge 3$, it follows that for each triple of indices $i-1, i, i+1$ the point $x_i$ is within distance $K$ from both $Q_{i-1,i}$ and $Q_{i,i+1}$: There exist $y^-_i\in Q_{i-1,i}$, $y^+_i\in Q_{i,i+1}$ such that $$ d(x_i, y^\pm_i)\le K. $$ Such a point $x_i$ might not even exist, which would mean that each $\kappa$-qi section $\gamma$ of ${\mathfrak X}$ is defined only on an interval of length $\le 2$, and that would definitely ensure acylindricity of ${\mathfrak X}$. In general, one can say that $$ d(P_{Q_i, Q_{i-1,i}}(y_i^+), x^-_i)\le 2K, \quad d(P_{Q_i, Q_{i,i+1}}(y_i^-), x^+_i)\le 2K, $$ and, hence, $$ d(x_i, x_i^\pm)\le 3K. $$ It follows that any two $\kappa$-qi sections $\gamma_0, \gamma_1$ defined on $I$ satisfy $$ d(\gamma_0(i), \gamma_1(i))\le 6K, i\in V(I), $$ thereby ensuring $(6K,\kappa,3)$-acylindricity of ${\mathfrak X}$. \subsection{Hyperbolicity of finite trees of hyperbolic spaces} We will also need the following version of Theorem \ref{thm:chain} in the situation when the coboundedness condition is dropped: \begin{cor}\label{cor:finite-tree-hyp} Suppose that $T$ is a finite tree, ${\mathfrak X}=(\pi: X\to T)$ is a tree of hyperbolic spaces (satisfying Axiom {\bf H}). Then $X$ is $\delta$-hyperbolic with $$ \delta=\delta_{\ref{cor:finite-tree-hyp}}(\delta_0, L_0, |V(T)|),$$ i.e. $\delta$ depends only on the parameters of ${\mathfrak X}$ and the cardinality $|V(T)|$. \end{cor} \proof The proof is by induction on $|V(T)|$. For $|V(T)|=1$, there is nothing to prove. For $|V(T)|=2$, the corollary is nothing but Theorem \ref{thm:chain} for the quasiconvex amalgam of a pair of vertex-spaces. We, thus, assume that the corollary holds for all trees $S$ with $|V(S)|= n\ge 2$. 
Let ${\mathfrak X}=(\pi: X\to T)$ be a tree of hyperbolic spaces with $|V(T)|= n+1$. Pick a valence $1$ vertex $w$ of $T$ and let $e=[v,w]$ be the incidence edge. Set $Y_v:=X_{vw}$. We then form a new tree of spaces ${\mathfrak Y}=(Y\to S)$, where $S$ is obtained from $T$ by removing $w$ and $e$, hence, $S$ has one less vertex than $T$. For vertices of $S$ which are distinct from $v$ and edges which are not incident to $v$, we use the same incidence maps for ${\mathfrak Y}$ as we had for ${\mathfrak X}$. For edges $e_i=[v_i,v]$ incident to $v$ we use incidence maps $Y_{e_i}=X_{e_i}\to Y_v=X_{vw}$ equal to the corestrictions of the incidence maps $X_{e_i}\to X_v$. The new tree of spaces still satisfies the assumptions of the corollary since $Y_v=X_{vw}$ is $\delta_1=\delta(\delta_0, L_0, 2)$-hyperbolic and incidence maps $$ f_{e_iv}: X_{e_i}\to Y_v= X_{vw} $$ are $L_1=L_0\cdot L'_0$-qi embeddings, where $L'_0=L_{\ref{lem:L'0}}(\delta,L_0)$. Now, $\delta=\delta(\delta_0, L_0, n)$-hyperbolicity of $X$ follows from the induction hypothesis. \qed \subsection{Secondary parameters of trees of hyperbolic spaces} \label{not:edge-space-constants}\label{not:K0}\label{not:secondary} In addition to the primary parameters of trees of hyperbolic spaces ${\mathfrak X}= (X\to T)$, we will be frequently using {\em secondary parameters}, which are functions of the primary parameters. Since these secondary parameters are used so often, we will give them special names. First of all, we recall some constants defined earlier, namely, $\la_0$, the quasiconvexity constant of the images $X_{ev}$ of incidence maps $X_e\to X_v$ and $L'_0\ge 2$, the quasiisometry constant for the inclusion maps $X_v\to X_{vw}$, where $e=[v,w]$ runs over all edges of $T$ (Lemma \ref{lem:L'0}). Also, $\delta'_0$ is the supremum of hyperbolicity constants of the spaces $X_{uv}= X_{\llbracket u, v \rrbracket}$ (Corollary \ref{cor:edge-spaces}). Let $\la'_0$ denote an upper bound on the quasiconvexity constants for the images in $X_{vw}$ of $4\delta_0$-quasiconvex subsets in $X_v, X_e$ (in particular, each $X_v, X_e$ is $\la'_0$-quasiconvex in $X_{vw}$). Explicitly, one can take $$ \la'_0= 92(L'_0)^2(L'_0 + 3\delta'_0).$$ We will also use the notation $L'_1$ for an upper bound of coarse Lipschitz constants of projections $P=P_{X_{uv},X_v}: X_{uv}\to X_v$, $L'_1=(L'_0+1)\cdot D_{\ref{lip-proj}}(\delta'_0,\la'_0)$ (Lemma \ref{lip-proj}). Last, but not least, we define the constant \index{$K_0$} \begin{equation}\label{K0} K_0:= (15(2\la'_0 + 5\delta'_0)L'_0)^3. \end{equation} The importance of this constant will become clear in Chapter \ref{ch:4 classes} during the discussion of flows of quasiconvex subsets of vertex-spaces of ${\mathfrak X}$. This constant will be critical in computing the flaring constant $K_*$ in Theorem \ref{thm:mainBF}. \begin{comment} {\scriptsize As an immediate corollary of Lemma \ref{proj-geod}, we obtain: \begin{cor}\label{cor:key-for-flow} Suppose that a tree of spaces ${\mathfrak X}$ satisfies Axiom {\bf H}. Then there exists a constant $\kappa_0$ such that the following holds for every edge $e=[u,v]$ in $T$. Suppose that $x, y\in X_u$ are such that their nearest-point projections $\bar{x}, \bar{y}$ (with respect to the metric on $X_{uv}$) to $X_{ev}$ satisfy $d_{X_v}(\bar{x}, \bar{y})\ge D_{\ref{proj-geod}}(\delta'_0, \la'_0)$. Then $$ [\bar{x} \bar{y}]_{X_v}\subset N^e_{\kappa_0} ([x y]_{X_u}), $$ where the neighborhood is taken with respect to the metric of $X_{uv}$. 
\end{cor} } \begin{comment} \begin{lemma} Let ${\mathfrak X}=(\pi: X\to T)$ be a (not necessarily hyperbolic) tree of spaces such that $T$ is a tree of finite diameter and incidence maps are $L$-qi embeddings. Then for each subtree $S\subset T$ the inclusion maps $X_S\to X$ are $L'$-quasiisometric embeddings where $L'$ depends only on $L$ and on the diameter of $T$. \end{lemma} \proof Improve Proposition \ref{unif-emb-subtree}. \qed Generalize the notion of $Fl_K(Q_u)$ to $Fl_{K,D}(Q_u)$ ($D\ge D_0$): We stop the flow whenever $(Q_v, X_w)$ is $D$-cobounded in $X_{vw}$. Such spaces are again uniformly hyperbolic and uniform retracts. They can (should?) be used in lieu of our original vertex-spaces. This allows one to treat: We say that a tree of hyperbolic spaces ${\mathfrak X}=(\pi: X\to T)$ is $(K,n)$-acylindrical if for each vertex $u$ and each $K$-leaf $\gamma$ emanating from $X_u$, the projection of $\gamma$ to $T$ has length $\le n$. Such ${\mathfrak X}$ clearly satisfies a uniform $K$-flaring condition. \begin{thm} If ${\mathfrak X}=(\pi: X\to T)$ is $(1,n)$-acylindrical then $X$ is $\delta(\tau)$-hyperbolic. \end{thm} \proof The space ${\mathfrak X}$ satisfies the $1$-flaring condition. However, the proof of our main theorem (with the flow-stopping constant $D_0$) requires $K$-flaring for some large $K=K_*$ (depending on the parameters of ${\mathfrak X}$). To deal with this, we observe that for each uniformly quasiconvex subset $Q_u\subset X_u$ and a vertex $v$ such that $d_T(u,v)\le n$ the fiberwise distance between $Fl_K(Q_u)$ and $Fl_1(Q_u)$ in $X_v$ is $\le C(n,K)$. Thus, we introduce a new stopping constant $D=D(K,n)$ defined by: If $(Fl_1(Q_u)\cap X_v, X_w)$ is $D_0$-cobounded, then $(Fl_K(Q_u)\cap X_v, X_w)$ is $D$-coboun\-ded. We apply the same to carpets, ladders, bundles. Then the same proof of hyperbolicity of $X$ goes through. \qed \end{comment} \section{Flaring for semidirect products of groups}\label{sec:semi-direct-flaring} The purpose of this section is to illustrate the concept of flaring in the case of semidirect products of groups, $G=H\rtimes \mathbb Z$. Suppose $H$ is a nonelementary finitely generated group (which we will eventually assume to be hyperbolic) with a finite generating set $S$ and the corresponding word-metric $d_H$. Recall that the word-length of an element $h\in H$, denoted $|h|_H$ or $|h|_{S}$, when the generating set is to be stressed, is related to $d_H$ by $|h|_H=d_H(1, h)$. Let $f: H\rightarrow H$ be an automorphism and $G=H\rtimes_f \langle t\rangle$ the corresponding semidirect product. Let $S_G= S\cup \{t\}$ be a generating set of $G$, where $t$ is the stable letter corresponding to the infinite cyclic factor in the semidirect product. Let $X$ be the Cayley graph $\Gamma(G,S)$; define the linear tree $T=\Gamma(\mathbb Z, 1)$. Then we have a tree of metric spaces $\pi: X\rightarrow T$, where the vertex spaces are various left cosets $X_i:=t^iH, i\in \mathbb Z$, of $H$ in $G$. (Strictly speaking, $X$ is only quasiisometric to the 2-dimensional complex which is the total space of the abstract tree of spaces whose vertex spaces are isometric copies of the Cayley graph of $H$.) We shall denote the standard metric on $X$ by $d_G$ and the metrics on the left cosets $t^iH\subset G$ by $d_{t^iH}$; the latter are isometric to the word-metric on $H$ corresponding to the generating set $S$. 
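One way to see the last identification explicitly is the following (the displayed map is recorded only for illustration): left multiplication by $t^i$ is a label-preserving automorphism of the Cayley graph $\Gamma(G,S_G)$ which carries the full subgraph spanned by $H$ onto the full subgraph spanned by $t^iH$, and only $S$-labelled edges occur in either subgraph. Hence
$$
H\ni h\ \longmapsto\ t^ih\in t^iH, \qquad d_{t^iH}(t^ia,\, t^ib)=d_H(a,b)\quad (a,b\in H),
$$
so each vertex space, equipped with its metric $d_{t^iH}$, is isometric to $(H,d_H)$, as claimed above.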
Given $m\in \mathbb Z, n\in \mathbb N$, a $\kappa$-qi section over the interval $\llbracket m-n, m+n\rrbracket$ in $T$ is a sequence\footnote{For further computations, we find it notationally convenient to write elements of $t^iH$ as $h_it^i, h_i\in H$, which is possible since $H$ is normal in $G$.} $\{h_i t^i\}$, $m-n\leq i\leq m+n$, such that for each $i\in [m-n, m+n-1]\cap \mathbb Z$, $d_{X_{i,i+1}}(h_it^i, h_{i+1}t^{i+1})\leq \kappa$, where we identify integers $i\in [m-n, m+n]$ with the corresponding vertices of $T$. This inequality is satisfied, in particular, when $d_{X_i}(h_it^i, h_{i+1}t^{i})\leq \kappa-1$. Since the vertex spaces $X_i, X_{i+1}$ are qi embedded in $X_{i,i+1}$, after changing $\kappa$ if necessary, we can (and will) identify $\kappa$-qi sections with sequences $\{h_i t^i\}$ satisfying the inequality $$ d_{X_i}(h_it^i, h_{i+1}t^{i})=d_H(1, t^{-i}h^{-1}_i h_{i+1}t^{i})=d_H(1,f^{-i}(h^{-1}_i h_{i+1}))= |f^{-i}(h^{-1}_i h_{i+1})|_H\le \kappa,$$ equivalently, \begin{equation}\label{eq:fkappa} d_H(f^{-i}(h_i), f^{-i} (h_{i+1}))\le \kappa . \end{equation} Here is an explicit example: \begin{example} Fix $h\in H$. Then $i\mapsto ht^i, i\in \mathbb Z$, is a $1$-qi section over $T$. \end{example} Let us now see what the exponential and the proper flaring conditions mean in this context in group-theoretic terms. Suppose $\gamma,\gamma'$ are two $\kappa$-qi sections over $\llbracket m-n,m+n\rrbracket$, where $m\in \mathbb Z, n\in \mathbb N$, given by maps $i\mapsto a_it^i$ and $i\mapsto b_it^i$. Then for each integer $i\in [m-n, m+n]$, the fiber-distance equals $$ d_{t^iH}(\gamma(i),\gamma'(i))=d_{t^iH}(a_it^i, b_it^i)=d_H(t^{-i}a_it^i, t^{-i}b_it^i)= d_H(1, t^{-i}a^{-1}_ib_it^i)= |f^{-i}(a^{-1}_ib_i)|_H.$$ If we denote the pair of sections $(\gamma, \gamma')$ by $\Pi$, then $$\Pi_{max}=\max \{|f^{-m+n}(a^{-1}_{m-n}b_{m-n})|_H, |f^{-m-n}(a^{-1}_{m+n}b_{m+n})|_H\}.$$ In the special case when $m=0, n=1$, \begin{equation}\label{eq:Pimax-auto} \Pi_{max}=\max \{|f(a^{-1}_{-1}b_{-1})|_H, |f^{-1}(a^{-1}_{1}b_{1})|_H\}. \end{equation} \begin{example}\label{simplest qi section} If $\gamma, \gamma'$ are given by the maps $i\mapsto t^i$ and $i\mapsto ht^i$ respectively, where $h\in H$, then $$\Pi_{max}=\max\{|f^{-m+n}(h)|_H, |f^{-m-n}(h)|_H\}.$$ \end{example} Since $G$ acts on itself isometrically via the left multiplication, in order to formulate flaring conditions, without loss of generality, we may assume that the qi sections $\gamma, \gamma'$ are defined over intervals of the form $\llbracket -n,n\rrbracket$ (i.e. $m=0$) and $\gamma(0)=1$. \medskip One can also reformulate the above conditions and quantities using the notion of {\em pseudo-orbits} coming from the theory of dynamical systems. \begin{defn}\index{pseudo-orbit}\label{defn:pseudo-orbit} Let $(Y,d)$ be a metric space and $\phi: (Y,d)\to (Y,d)$ be a homeomorphism. For a number $K$, a {\em $K$-pseudo-orbit} of $\phi$ in $Y$ is a biinfinite sequence $(y_i)_{i\in \mathbb Z}$ in $Y$ such that for each $i$ $$ d(y_{i+1}, \phi(y_i))\le K. $$ For instance, if $K=0$ then $0$-pseudo-orbits are just orbits of $\phi$ (or, more precisely, of the cyclic group generated by $\phi$) in $Y$. The element $y_i$ is called the $i$-th member of the pseudo-orbit $(y_i)_{i\in \mathbb Z}$. A {\em partial} $K$-pseudo-orbit is the restriction of a $K$-pseudo-orbit sequence to an interval in $\mathbb Z$. \end{defn} Given an automorphism $f$ of the group $H$, set $\phi:= f^{-1}$. We will use $(H, d_H)$ as our metric space $(Y,d)$.
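Before passing to pseudo-orbits, here is a toy illustration of the quantity $\Pi_{max}$ in Example \ref{simplest qi section}; the group and the automorphism below are chosen purely for illustration (in particular, $H=\mathbb Z^2$ is not hyperbolic, so this example lies outside the standing assumptions we will eventually impose on $H$). Let $f$ be the automorphism of $H=\mathbb Z^2$ given by the symmetric matrix $A=\begin{pmatrix} 2&1\\ 1&1\end{pmatrix}$, and let $\gamma: i\mapsto t^i$, $\gamma': i\mapsto vt^i$ for a nonzero $v\in \mathbb Z^2$. Then (taking $m=0$ and comparing the word-norm on $\mathbb Z^2$ with the Euclidean norm, which changes $\Pi_{max}$ only by a bounded multiplicative factor)
$$
\Pi_{max}\ \asymp\ \max\{\|A^{n}v\|,\, \|A^{-n}v\|\}.
$$
Writing $v$ in an orthonormal eigenbasis of $A$, with eigenvalues $\mu^{\pm 1}$, $\mu=\frac{3+\sqrt5}{2}$, we get
$$
\max\{\|Av\|^2, \|A^{-1}v\|^2\}\ \ge\ \frac{\|Av\|^2+\|A^{-1}v\|^2}{2}\ =\ \frac{\mu^{2}+\mu^{-2}}{2}\,\|v\|^2,
$$
so already for $n=1$ the pair $(\gamma,\gamma')$ flares by the definite factor $\sqrt{(\mu^{2}+\mu^{-2})/2}>1$, uniformly over $v\ne 0$; iterating, $\Pi_{max}$ grows exponentially in $n$.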
For a section $\gamma$, $\gamma(i)= h_it^i$, we define $g_i:= \phi^i(h_{i+1})$. In particular, $g_0=h_1$. Then the inequality \eqref{eq:fkappa} is equivalent to $$ d_H(\phi(g_i), g_{i+1})\le \kappa, $$ the $\kappa$-pseudo-orbit condition. In other words, instead of working with $\kappa$-sections, we can work with (partial) $\kappa$-pseudo-orbits. The case of a 1-qi section corresponds to the case when $(g_i)$ is the (partial) $\phi$-orbit of $g_0$. Given two sections $\gamma, \gamma'$, we note that the corresponding partial pseudo-orbit sequences $(g_i), (g'_i)$, satisfy $$ d_{t^iH}(\gamma(i),\gamma'(i))=d_H(\phi(g_i), \phi(g'_i)). $$ In particular, for fixed $\phi$, a uniform bound on $d_{t^iH}(\gamma(i),\gamma'(i))$ is equivalent to a (possibly different) uniform bound on $d_H(\phi(g_i), \phi(g'_i))$. \medskip We can now restate various flaring conditions: \medskip (1) The linear (Bestvina--Feighn) $\kappa$-flaring condition is equivalent to: There exist constants $M_\kappa\ge 0$, $\la_\kappa>1$ and $n_\kappa\in \mathbb N$ such that for every pair of maps $i\mapsto a_i\in H$ and $i\mapsto b_i\in H$, $i\in [-n_\kappa, n_\kappa]\cap \mathbb Z$, satisfying: (a) $a_0=1$, $|b_0|_H\ge M_\kappa$, (b) $d_H(f^{-i}(a_i), f^{-i}(a_{i+1}))\le \kappa$, $d_H(f^{-i}(b_i), f^{-i}(b_{i+1}))\le \kappa$, $i\in [-n_\kappa,n_\kappa-1]$, we have $$ \max \{d_H(f^{n_\kappa}(a_{-n_\kappa}), f^{n_\kappa} (b_{-n_\kappa})), d_H(f^{-n_\kappa}(a_{n_\kappa}), f^{-n_\kappa} (b_{n_\kappa}))\} \ge \la_\kappa |b_0|_H. $$ \medskip (2) The proper $\kappa$-flaring condition is equivalent to: There exist a constant $M_\kappa\ge 0$ and a proper function $\phi_\kappa: \mathbb N\to \mathbb R_+$ such that for every $n\in \mathbb N$ and every pair of maps $i\mapsto a_i\in H$ and $i\mapsto b_i\in H$, $i\in [-n,n]\cap \mathbb Z$, satisfying: (a) $a_0=1$, $|b_0|_H\ge M_\kappa$, (b) $d_H(f^{-i}(a_i), f^{-i}(a_{i+1}))\le \kappa$, $d_H(f^{-i}(b_i), f^{-i}(b_{i+1}))\le \kappa$, $i\in [-n,n-1]$, we have $$ \max \{d_H(f^{n}(a_{-n}), f^{n} (b_{-n})), d_H(f^{-n}(a_{n}), f^{-n} (b_{n}))\} \ge \phi_\kappa(n). $$ It is also useful to spell out the negation of the proper $\kappa$-flaring condition, which is most apparent as the negation of the bigon property from Corollary \ref{cor:super-weak flaring}: There exist elements $g, h\in H$ and pairs of sequences of partial $\kappa$-pseudo-orbits $(g_{i,n})_{n\in \mathbb N}$, $(g'_{i,n})_{n\in \mathbb N}$ of $f$ in $H$ defined for $i\in [0, N_n]$, such that: (a) For all $n$, $g_{0,n}=1$, $g'_{0,n}=g$, $g'_{N_n,n}=h\, g_{N_n,n}$. (b) $\lim_{n\to \infty} \max_{i\in [0, N_n]} d_H(g_{i,n}, g'_{i,n})= \infty$. Note that, by possibly increasing $\kappa$ to $K:=\kappa+C$, where $C=\max\{ |g|, |h|\}$, and working with partial $K$-pseudo-orbits, we can even ensure that $g=1, h=1$ and, hence, $g'_{0,n}=1$, $g_{N_n,n}=g'_{N_n,n}$. \medskip \begin{comment} \medskip (3) The uniform flaring .... \medskip (4) Maybe also the bigon property from Corollary \ref{cor:super-weak flaring}. It also might be convenient to state the negation of the thin bigon property: \end{comment} We next relate flaring to {\em hyperbolicity properties} of the automorphism $f$. \begin{defn}(Bestvina, Feighn, \cite{BF})\label{defn hyp autom} Suppose $H$ is a finitely-generated group and $S$ is a finite generating set for $H$. Suppose $f: H\rightarrow H$ is an automorphism. We say that $f$ is {\em weakly hyperbolic} if there are $m\in \NN$, $\lambda>1$ and a finite subset $E\subset H$, such that for all $h\in H\setminus E$ we have $$\lambda |h|\leq \max \{ |f^m(h)|, |f^{-m}(h)|\}.$$ We say that $E$ is the {\em exceptional} subset.
An automorphism is called {\em hyperbolic} if the above inequality holds with $E=\emptyset$. \end{defn} Some remarks are in order regarding this definition. \begin{rem} \begin{enumerate} \item The notion of {\em hyperbolicity} of an automorphism was introduced by Bestvina and Feighn in \cite{BF} (the exceptional subset $E$ was absent). They also proved hyperbolicity of semidirect products $H\rtimes_f \langle t\rangle$ of hyperbolic automorphisms of hyperbolic groups, see Corollary in \cite[section 5]{BF}. However, the original Bestvina--Feighn definition is too restrictive if the purpose is to conclude hyperbolicity of semidirect products. Lemma \ref{lem:BF-auto} below (which is already present in Gersten's paper \cite[Corollary 6.9]{MR1650363}) shows that hyperbolicity of the semidirect product is equivalent to the weak hyperbolicity of the automorphism. \item If $f: H\to H$ is a weakly hyperbolic automorphism, then for any nontrivial finite group $H_1$, the automorphism $f'=f \times \operatorname{id}$ of $H'=H\times H_1$ is also weakly hyperbolic but fixes the subgroup $H_1$ element-wise and, hence, is not hyperbolic. \item In Corollary \ref{cor:wh-auto} we will prove that for automorphisms of torsion-free hyperbolic groups weak hyperbolicity is equivalent to hyperbolicity. \item The only hyperbolic groups which admit weakly hyperbolic automorphisms are the ones commensurable to free products of surface groups and free groups, as follows for instance from \cite{rips-sela}. \item The hyperbolicity inequality trivially holds for the trivial element $h=1\in H$. Suppose that the exceptional subset $E$ is a ball $B(1, r)\subset H$ and that (with $\la>1$) for $h\notin E$, $|f^m(h)|\ge \la |h|$. Then $f^m(h)\notin E$ and, thus, we can apply the same hyperbolicity inequality to $f^m(h)$. Clearly, $$ |f^{2m}(h)|= \max \{ |f^{2m}(h)|, |f^{-m+m}(h)|\}\ge \la^2|h|. $$ Repeating this argument, we see that for each $i\ge 1$, $$ |f^{im}(h)|\ge \la^i |h|. $$ \end{enumerate} \end{rem} \begin{lemma} \label{lem:hyp-independent} Hyperbolicity and weak hyperbolicity of an automorphism $f: H\rightarrow H$ are independent of the finite generating set of $H$. \end{lemma} \proof We will verify the claim for the weak hyperbolicity property since the proof for hyperbolicity is identical (with $E=\emptyset$). Suppose $f$ is weakly hyperbolic with respect to a finite generating set $S$, i.e. there exist $m\in \NN$, $\lambda>1$ and a finite subset $E\subset H$ such that $$ \lambda |h|_S\leq \max \{ |f^m(h)|_S, |f^{-m}(h)|_S\}$$ for all $h\in H\setminus E$. Suppose $S'$ is another finite generating set of $H$. For any $h\in H$ let $|h|_S$ and $|h|_{S'}$ denote the word-lengths of $h$ with respect to $S$ and $S'$ respectively. Then there is a constant $C>0$ such that $\frac{1}{C}|h|_S\leq |h|_{S'}\leq C|h|_S$ for all $h\in H$. Also, we note that for all $r\in \NN$ we have $\lambda^r |h|_S\leq \max\{|f^{mr}(h)|_S, |f^{-mr}(h)|_S\}$ for all $h\in G\setminus E_r$ where $$E_r=\bigcup_{-(r-1)\leq i\leq r-1} f^{im}(E).$$ Hence, $$\lambda^r |h|_{S'}\leq C\lambda^r |h|_S \leq C\max\{|f^{mr}(h)|_S, |f^{-mr}(h)|_S\} \leq C^2 \max\{|f^{mr}(h)|_{S'}, |f^{-mr}(h)|_{S'}\}$$ for $h\in H\setminus E_r$. It follows that $C^{-2}\lambda^r |h|_{S'}\leq \max\{|f^{mr}(h)|_{S'}, |f^{-mr}(h)|_{S'}\}$ for $h\in H\setminus E_r$. 
Thus, if we choose $r=r_1$ large enough, we get $\lambda_1=C^{-2}\lambda^{r_1}>1$ and, setting $m_1=r_1m$, we obtain $$\lambda_1 |h|_{S'}\leq \max \{ |f^{m_1}(h)|_{S'}, |f^{-m_1}(h)|_{S'}\}$$ for all $h\in H\setminus E_{r_1}$. Hence, $f$ is weakly hyperbolic with respect to the generating set $S'$. \qed \begin{comment} {\scriptsize Thus, one is led to the following: \begin{question} Is the hyperbolicity of $f$ (in the sense of Bestvina--Feighn) equivalent to the weak hyperbolicity for torsion-free hyperbolic groups? \end{question} The following results provide some partial evidence to the positive answer. } \end{comment} \begin{example} \label{lemma: hyp autom} (1) Suppose $H=\pi_1(\Sigma)$ where $\Sigma$ is a closed connected hyperbolic surface. Then an automorphism $f$ of $H$ is hyperbolic if and only if it is induced by a pseudo-Anosov automorphism of $\Sigma$, if and only if it has no nontrivial periodic conjugacy classes, if and only if the semidirect product $H\rtimes_f \mathbb Z$ is hyperbolic. (The equivalence of the last three properties is due to William Thurston, see e.g. \cite{Casson, Otal-book}. The equivalence with hyperbolicity of $f$ can be seen either as a consequence of the pseudo-Anosov property or of the combination of Lemma \ref{lem:BF-auto} and Corollary \ref{cor:wh-auto}.) (2) If $H={\mathbb F}_n$, $n\geq 2$, then any automorphism $f\in Aut(H)$ with no nontrivial periodic conjugacy classes is hyperbolic (in the sense of Bestvina--Feighn). See Theorem 5.1 in \cite{BFH-lam}. \end{example} We refer the reader to \cite{MR1800064, MR3720349, MR4237417} for other results in this direction. \begin{lemma}[\cite{BF, MR1650363}] \label{lem:BF-auto} If $f$ is weakly hyperbolic, then the tree of metric spaces $X=\Gamma(H\rtimes_f \mathbb Z, S_G)\rightarrow T=\Gamma(\mathbb Z,1)$---as constructed in the beginning of this subsection---satisfies the Bestvina--Feighn flaring condition. The converse is also true. \end{lemma} \proof 1. Suppose $f$ is weakly hyperbolic. Let $R=\max\{d_H(1,h): h\in E\}$, where $E\subset H$ is a finite exceptional subset as in Definition \ref{defn hyp autom}. Then for all $x\in H$ with $|x|_H> R$ we have $$\lambda |x|\leq \max\{ |f^m(x)|, |f^{-m}(x)|\}.$$ First of all, since the Bestvina--Feighn flaring condition is equivalent to hyperbolicity of $G$ and the latter is equivalent to the hyperbolicity of the semidirect product $H\rtimes_{f^m} \mathbb Z$ (commensurable to the original group $G$), it suffices to consider the case when $m=1$. We will also assume that in the definition of a weakly hyperbolic automorphism the maximum is attained by $\phi=f^{-1}$ rather than $f$ (otherwise, we replace $f$ with $f^{-1}$). Then the weak hyperbolicity inequality reads \begin{equation}\label{eq:weak} \lambda |x|\leq |\phi(x)| \end{equation} for all $x\in H\setminus E$. Take $\kappa \geq 1$. As we noted earlier, it suffices to verify the Bestvina--Feighn flaring condition for pairs of $\kappa$-qi sections $\Pi=(\gamma, \gamma')$ defined over intervals of the form $\llbracket -1,1\rrbracket$, satisfying $\gamma(0)=1$ and, setting $h:=\gamma'(0)$, such that $\Pi_0=|h|\ge M_\kappa$ for a suitable uniform constant $M_\kappa$. Pick any $\la'$ in the open interval $(1,\la)$; for concreteness, we take $\la'= \frac{1}{2}(\la+1)$. We claim that $$ \la' |h|= \la'\, d_H(\gamma(0), \gamma'(0))\leq d_{tH}(\gamma(1), \gamma'(1)). $$ Set $\gamma(1)=h_{1}t, \gamma'(1)=h'_{1}t$.
Then (as we noted earlier, after changing $\kappa$) the $\kappa$-qi section condition for $\gamma, \gamma'$ over the interval $[0,1]$ is equivalent to the inequalities \begin{equation}\label{eq:kappa1} d_H(\phi(h_{1}),1)\le \kappa, \quad d_H(\phi(h), \phi(h'_{1}))\le \kappa. \end{equation} We will estimate from below the distance (in the fiber $tH$) between $\gamma(1), \gamma'(1)$; according to the computation in \eqref{eq:Pimax-auto}, we need to estimate from below the distance $$ d_H(\phi(h_{1}), \phi(h'_{1})). $$ By the triangle inequality, combined with the inequalities \eqref{eq:weak} and \eqref{eq:kappa1}, we have $$ d_H(\phi(h_{1}), \phi(h'_{1})) \ge d_H(1, \phi(h)) -2\kappa \ge \la |h| -2\kappa. $$ Then the desired inequality $\la' |h| \leq d_{tH}(\gamma(1), \gamma'(1))$ follows from $$ (\la-\la')|h|\ge 2\kappa. $$ By our choice of $\la'$, this amounts to $$ |h| \ge \frac{4\kappa}{\la - 1}. $$ Therefore, taking $M_\kappa$ equal to the maximum of $\frac{4\kappa}{\la - 1}$ and $1+\operatorname{diam}_H(E)$, we ensure the Bestvina--Feighn flaring condition. \medskip 2. For the converse, one applies the flaring condition to $\kappa$-qi sections with $\kappa=1$. More precisely, we work with pairs of sections as in Example \ref{simplest qi section}. We take the finite set in the definition of weak hyperbolicity to be $E=\{h\in H: |h|_H\leq M_1\}$. The rest is straightforward and hence left as an exercise for the reader. \qed \begin{lemma} Suppose that $H$ is a hyperbolic group and $f: H\to H$ is weakly hyperbolic. Then the exceptional subset $E$ can be chosen to contain only finite order elements. \end{lemma} \proof Let $E\subset H$ be an exceptional subset of $f$, i.e. there exist $m\in \NN$, $\lambda>1$ such that $$ \lambda |h|\leq \max \{ |f^m(h)|, |f^{-m}(h)|\}$$ for all $h\in H\setminus E$. After replacing $f$ with $f^m$, we can assume that $\lambda |h|\leq \max \{ |f(h)|, |f^{-1}(h)|\}$ unless $h\in E$. After enlarging $E$ if necessary, we can assume that it equals the ball of a certain radius $r$ in $H$ (centered at $1\in H$). We define $E'\subset E$ to be the subset consisting of the infinite order elements. Suppose that $h\in E$ is such that for infinitely many values of $m\ge 1$, $$ |f^m(h)|\le r. $$ We claim that $h$ has finite order. Indeed, since the ball of radius $r$ in $H$ is finite, there exist two numbers $n> m\ge 1$ such that $$ f^m(h)=f^n(h), \quad f^{n-m}(h)=h. $$ It follows that in the group $G=H\rtimes_f \mathbb Z$ we have $$ t^{n-m} h= h t^{n-m}. $$ Since $f$ is weakly hyperbolic, the semidirect product $G$ is a hyperbolic group, see Lemma \ref{lem:BF-auto}. Hence, the abelian subgroup $A< G$ generated by $t^{n-m}$ and $h$ is virtually cyclic. Since $\langle t^{n-m}\rangle\cong \mathbb Z$ intersects $H$ trivially, an infinite order element $h$ would, together with $t^{n-m}$, generate a copy of $\mathbb Z^2$ in $A$; hence, $h$ has finite order. Thus, as noted above, for each $h\in E'$ there exists a smallest natural number $n=n_h$ such that $|f^n(h)|>r$, which, in particular, implies that $f^n(h)\notin E$. Thus, $$ \max \{|f\circ f^n(h)|, |f^{-1}\circ f^n(h)|\}\ge \la |f^n(h)|\ge \la |h|. $$ Since $n_h$ was chosen to be smallest, we have $|f^{-1}\circ f^{n}(h)|=|f^{n-1}(h)|\le r< \la|f^{n}(h)|$; hence, the maximum above is attained by $|f^{n+1}(h)|$, i.e. $|f^{n+1}(h)|\ge \la |h|$. By the same argument as in the proof of Lemma \ref{lem:exp}, we see that for each $m\ge \max\{n_h: h\in E'\}$, and $h\in E'$, $$ \lambda |h|\leq |f^m(h)|.$$ Since the weak hyperbolicity inequality holds for all $h\in H\setminus E$, we conclude that $f$ satisfies the weak hyperbolicity condition with an exceptional subset $E''\subset E$ consisting of torsion elements. \qed \begin{cor}\label{cor:wh-auto} If the hyperbolic group $H$ is torsion-free, then each weakly hyperbolic automorphism of $H$ is hyperbolic.
\end{cor} \chapter{Flow-spaces, ladders and their retractions} \label{ch:4 classes} In this chapter we introduce and analyze four classes of subtrees of spaces in hyperbolic trees of spaces: \begin{itemize} \item Ladders \item Metric bundles \item Carpets \item Flow-spaces \end{itemize} These spaces play key role in proving hyperbolicity and describing geodesics in trees of spaces $(\pi: X\to T)$: Uniform quasigeodesics in $X$ will be inductively described as concatenations of uniform quasigeodesics in carpets, ladders and flow-spaces. The main result of this and the next chapter is that all ladders, carpets and flow-spaces are hyperbolic and admit coarse Lipchitz retractions from $X$. We note that our definitions of ladders and flow-spaces are inspired by the {\em ladder} construction of Mitra, \cite{mitra-trees}, while the notion of metric bundles is adapted from \cite{pranab-mahan}. \section{Semicontinuous families of spaces} All four classes of spaces discussed in this (and the next) chapter are special cases of {\em semicontinuous families} of spaces, which are certain subtrees of spaces ${\mathfrak Y}\subset {\mathfrak X}$. In what follows, given a subtree of spaces ${\mathfrak Y}= (\pi: Y\to S)\subset {\mathfrak X}=(\pi: X\to T)$, it will be notationally convenient to extend ${\mathfrak Y}$ to a tree of spaces (still denoted ${\mathfrak Y}$) over the entire tree $T$ by declaring $Y_v=\emptyset, Y_e=\emptyset$ for each $v\in V(T)-V(S)$ and $e\in E(T)-E(S)$. \begin{defn}\label{defn:scfamily} \index{semicontinuous family} Suppose that ${\mathfrak X}=(\pi: X\to T)$ is a tree of hyperbolic spaces. Fix constants $\la\in [0,\infty)$, $E, K\in [1, \infty), D\in [0,\infty]$. We will say that a subtree of spaces ${\mathfrak Y}= (\pi: Y\to S), S\subset T$, in ${\mathfrak X}$ forms a {\em $(K,D,E,\la)$-semicontinuous family}, relative to a vertex $u\in V(S)$, called the {\em center} of ${\mathfrak Y}$, if the following conditions hold: 1. Each vertex/edge space $Y_v\subset X_v, Y_e\subset X_e$, $v\in V(S), e\in E(S)$, is $\la$-quasiconvex. 2. Each $y\in \YY$ is connected to $Y_u$ by a $K$-leaf $\gamma_y$ in $Y$. 3. For each edge $e=[v,w]\in E(T)$ we define the (possibly empty!) projection \begin{equation}\label{Yvw} Y^v_w:=P_{X_{vw}, X_w}(Y_v)\subset X_w. \end{equation} We require that whenever $e=[v,w]\in E(S)$ is oriented away from $u$, \begin{equation}\label{eq:E-in} \operatorname{Hd}_{X_{vw}}(Y^v_w, Y_w)\le E \end{equation} and \begin{equation}\label{eq:K-in} \operatorname{Hd}_{X_{vw}}(Y_w, Y_e)\le K \end{equation} 4. For every edge $e=[v,w]$ such that $v\in V(S), w\notin V(S)$, we require the pair of quasiconvex subsets $(Y_v, X_w)$ in $X_{vw}$ to be $D$-cobounded. \end{defn} \begin{figure}[tbh] \centering \includegraphics[width=80mm]{fig2.pdf} \caption{A semicontinuous family} \label{scf.fig} \end{figure} \begin{rem}\label{rem:semico} i. Condition 2 implies that $Y_w\subset N^e_{K}(Y_v)$, hence, $Y_w\subset N_{\eta_0(2K)}(Y^v_w)$ with respect to the metric of $X_w$. (Recall that $\eta_0$ is the distortion function of $X_v$ in $X$, hence, in $X_{uv}$.) ii. Condition 2 in this definition ensures that $Y_w$ cannot be ``much larger'' than $Y_v$, while Condition 3 ensures a certain lower bound on $Y_w$. Thus, as we move away from $X_u$, vertex spaces of ${\mathfrak Y}$ can shrink substantially (even disappear) but they cannot substantially increase. iii. 
In most examples in our book, $\la=4\delta_0$, hence, we will be suppressing the dependence on this parameter and record only the triple of numbers $(K,D,E)$. iv. To ensure uniform coboundedness of the pairs $(Y_v, X_w)$ in Axiom 4, it suffices to get a uniform upper bound $C$ on the diameters of $Y^v_w$: It will then follow that the pair $(Y_v, X_w)$ is $D$-cobounded for some $D=D(\delta'_0,\la'_0, C)$, see Corollary \ref{cor:cob-char}. v. We do not insist on the converse implication in Axiom 4: There will be important situations when we have to consider ${\mathfrak Y}$'s with uniformly bounded fibers over non-boundary vertices of $S$. vi. Axioms 3 and 4 will be needed in order to have a uniform coarse Lipschitz retraction $X\to Y$, see Theorem \ref{thm:mitras-projection}. vii. The edge-spaces $Y_e$ of are largely irrelevant for our discussion. viii. The projections $P_{X_{vw},X_w}$ restricted to $X_v$ are at uniformly bounded distance (as measured in $X_{vw}$) from the projections $P_{X_v,X_{ev}}$. The same, of course, applies to restrictions of the projections $P_{X_{vw},X_v}$. However, we decided to work with the projections $P_{X_{vw},X_w}$ as computations tend to be simpler in this setting. \end{rem} \begin{theorem}\label{thm:mitras-projection} Suppose that ${\mathfrak Y}$ is a $(K,D,E,\la)$-semicontinuous family of spaces with $D<\infty$. Then there exists an $L_{\ref{thm:mitras-projection}}(K,D,E,\la)$-coarse Lipschitz retraction $\rho_{{\mathfrak Y}}: X\to Y$. \end{theorem} \proof We will verify that the subtree of spaces ${\mathfrak Y}$ is (uniformly) retractible; we use Theorem \ref{thm:left-inverse} as follows. (i) We let $h'_v: X_v\to Y_v$ denote the restriction of the nearest-point projections $P_{X_{vw},Y_v}$. According to Theorem \ref{thm:left-inverse}(i), we need to bound the diameter of the image of $X_{ev}$ in $Y_v$ under $h'_v$. However, $X_{ev}$ is contained in the unit neighborhood of $X_w$ (taken in $X_{vw}$); thus, we need to bound the diameter of the projection (in $X_{vw}$) of $X_w$ to $Y_v$, which is the content of Axiom 4 of the definition of a semicontinuous family of spaces: This diameter is $\le D$. \medskip (ii) Assuming that $e=[v,w]$ is an edge of $S$ oriented away from $u$, we need to get a uniform bound $$ \operatorname{dist}_{X'_v}( h'_v \circ f_{ev}, f'_{ev}\circ h'_e)\le Const. $$ The subsets $Y_{ev}, Y_e \subset X_{vw}$ are within unit Hausdorff distance, while $$ \operatorname{Hd}_{X_{vw}}(Y_e,Y_{w})\le E \hbox{~~and~~} \operatorname{Hd}_{X_{vw}}(Y^v_w, Y_w)\le E.$$ Since $d_{X_{vw}}(Y_w, X_v)\le K$, by applying Lemma \ref{lemma0-flow-space} to the subsets $U_1=X_v, U_2=Y_w$ in $X_{vw}$, we conclude that $$ \operatorname{Hd}_{X_{vw}}(P_{X_{vw},Y_v}(X_w), P_{X_{vw},X_w}(Y_v))\le 2\la_0'+3\delta'_0 + K. $$ Taking into account that the projection $P_{X_{vw},Y_v}$ is uniformly coarse Lipschitz, we conclude that $Y_{ev}$ is uniformly close to the image of $X_{e}$ under the nearest-point projection $P_{X_{vw},Y_v}$ and, accordingly, the nearest-point projection $h'_e: X_e\to Y_e$ is uniformly close to the restriction of the nearest-point projection $X_{vw}\to P_{X_v,Y_v}(X_e)$ (see Lemma \ref{lem:two-projections}). Taking also into account that the map $f'_{ev}$ moves points by distance $\le K$ in $X_{vw}$, we can replace $f'_{ev}\circ h'_e$ with the restriction of the projection $X_{vw}\to P_{X_v,Y_v}(X_e)$ to $X_e$. 
Similarly, the map $f_{ev}$ moves points distance $\le 1$ in $X_{vw}$ and, hence (in view of the uniform coarse Lipschitz property of $h'_v$), we can replace the composition $h'_v \circ f_{ev}$ with the restriction of the nearest-point projection $P_{X_{vw},Y_v}$ to $X_e$. But now, the projections $X_{vw}\to P_{X_v,Y_v}(X_e)$ and $P_{X_{vw},Y_v}$ are uniformly close to each other according to Corollary \ref{cor:projection-2} applied to the $\la_0'$-quasiconvex subsets $Y_v$ and $X_e$ in the ambient hyperbolic space $Z=X_{vw}$. \qed \begin{cor}\label{cor:mitras-projection} If ${\mathfrak Y}=(\pi: Y\to S)$ is a $(K,D,E,\la)$-semicontinuous family of spaces with $D<\infty$, then the inclusion map $Y\to X$ is an $L_{\ref{thm:mitras-projection}}(K,D,E,\la)$-qi embedding. \end{cor} \medskip We next describe a class of semicontinuous subtrees of spaces, called {\em metric bundles}. The theory of metric bundles was developed in \cite{pranab-mahan} in a more general setting when the base is allowed to be an arbitrary geodesic metric space but we will not need that in our book. The following definition of {\em metric bundles} is adapted from \cite{pranab-mahan} in a form suitable for our purposes. It is easy to verify that the two definitions (ours and that of \cite{pranab-mahan}) are equivalent when the base is a tree. The reader should also compare this definition with the notion of an abstract metric bundle given in Definition \ref{defn:bundle}: Each metric bundle defined below is also an abstract metric bundle. \begin{defn}[Metric bundles] \label{defn:flow0} \index{metric bundle} A subtree of spaces ${\mathfrak Y} =(\pi: Y\to S) \subset {\mathfrak X}=(\pi: X\to T)$ is called a {\em $K$-metric bundle} if: 1. Every vertex/edge space of ${\mathfrak Y}$ is $\la$-quasiconvex in the respective vertex/edge space of ${\mathfrak X}$. 2. For every vertex $u\in V(S)$ and edge $e=[v,w]\in E(S)$ (directed away from $u$), and $x\in Y_w, x\in Y_e$, there exist $K$-qi sections $\gamma_x$ in ${\mathfrak Y}$ on $\llbracket w, u\rrbracket$, such that $\gamma_x(w)=x$. \end{defn} It follows immediately that each $K$-metric bundle forms a $(K,\infty,E,\la)$-semicontinuous family of spaces in ${\mathfrak X}$ (relative to any vertex $u\in S$), with $E=\eta_0(2K)$. The reader uncomfortable with using $D=\infty$ here can simply restrict ${\mathfrak X}$ to $S$, then one can take $D=0$. \medskip While Theorem \ref{thm:mitras-projection} does not directly apply to metric bundles ${\mathfrak Y}\subset X$ (unless $S=T$), we will see in Corollary \ref{cor:X-to-bundle} that under certain extra assumption weakening condition 4 in Definition \ref{defn:scfamily}, these too admit uniform coarse Lipschitz retraction from ${\mathfrak X}$. \section{Ladders} \label{sec:Ladders} \index{ladder} Ladders are certain semicontinuous subtrees of spaces ${\mathfrak L}=(\pi: L\to S)\subset {\mathfrak X}$ whose fibers are geodesic segments. However, in addition to the properties of a semicontinuous family of spaces, we will impose a certain extra structure on ${\mathfrak L}$. \medskip Each ladder ${\mathfrak L}=(\pi: L\to S)$ comes equipped with certain parameters and two pieces of extra data: An orientation of the fibers (hence, ladders generalize oriented line bundles) and a canonical choice of a maximal $K$-qi section $\Sigma_x\subset L$ through each point $x\in L$. The choice of $\Sigma_\bullet$ can be regarded as a ``connection'' on ${\mathfrak L}$. 
Thus, ladders can be regarded as ``oriented line semi-bundles equipped with connections.'' We will be primarily interested in ladders such that $\LL$ is contained in the $4\delta_0$-fiberwise neighborhood of a $k$-flow space ${\mathcal Fl}_k(Q_u)\subset X$ (these will be defined in Section \ref{sec:flow-spaces}). For the ease of notation, we will be ignoring the flow-spaces for now; formally speaking, one can regard the flow-space ${\mathfrak Fl}_k(Q_u)$ as a tree of hyperbolic spaces satisfying a uniform flaring condition. \medskip We now begin with an axiomatic definition of ladders. Let ${\mathfrak X}=(\pi: X\to T)$ be a tree of hyperbolic spaces satisfying Axiom {\bf H}. Fix positive numbers $K, D, E$ and a vertex $u\in T$; these are the {\em parameters} of a ladder ${\mathfrak L}$. A {\em ladder} with these parameters (a {\em $(K,D,E)$-ladder centered at $u$}) is a subtree of oriented geodesic intervals in ${\mathfrak X}$, ${\mathfrak L}= (\pi: L\to S)$, $S=\pi(L)$ which satisfies further axioms described below. Each fiber $L_v:= L\cap X_v, v\in V(T), L_e= L\cap X_e, e\in E(T)$, of ${\mathfrak L}$ is an {\em oriented} geodesic segment denoted $[x_v y_v]_{X_v}$ or $[x_e y_e]_{X_e}$. Furthermore, we fix once and for all a family $\Sigma_\bullet$ of {\em maximal partial $K$-qi sections} $\Sigma_x$ of $\pi: L\to S$, whose domains $T_x$ are subtrees in $S$ containing the vertex $u$ (and $\pi(x)$, of course). Maximality here is understood in the sense that if $\Sigma'_x$ is another partial $K$-qi section containing $\Sigma_x$, then $\Sigma_x=\Sigma'_x$. The subscript $x$ in $\Sigma_x$ indicates that $x\in \Sigma_x$. We will assume that $x$ belongs to
a vertex-space of ${\mathfrak X}$. \medskip {\bf Axiom L0}. We will require the family of sections $\Sigma_\bullet$ to be {\em consistent} in the sense that whenever $v=\pi(y)$ is between $u$ and $w=\pi(x)$, the sections $\Sigma_y$ and $\Sigma_x$ agree on the interval $\llbracket u, v\rrbracket\subset T$. \begin{defn} Let ${\mathfrak L}$ be a ladder centered at the vertex $u$, $L_u= [x_u y_u]_{X_u}$. We will refer to the subsets $\Sigma_{x_u}=bot({\mathfrak L}), \Sigma_{y_u}=top({\mathfrak L})$ as, respectively, the {\em bottom} and the {\em top} of the ladder ${\mathfrak L}$. \end{defn} Thus, $\Sigma_\bullet$ defines a family of maps $$ \Pi_{w,v}: L_w\to L_v, $$ for every vertex $v\in V(S)$ between $u$ and $w\in V(S)$: $$ \Pi_{w,v}(x)= \Sigma_x\cap L_v, x\in \LL_w. $$ (Note that this intersection is nonempty since $\pi(\Sigma_x)$ is a subtree containing both $w$ and $u$, hence, also containing $v$.) These maps can be regarded analogues of {\em parallel transport maps} in the conventional theory of connections on bundles. Consistency of sections implies the following {\em semigroup property}: $$ \Pi_{w_1,w_3}= \Pi_{w_2,w_3}\circ \Pi_{w_1,w_2} $$ whenever $w_2, w_3$ belong to the interval $\llbracket u,w_1\rrbracket$ and appear there in the following order: $$ u\le w_3\le w_2\le w_1. $$ As we will see below, axioms of a ladder require each map $\Pi_{w,v}$ to be either constant or an orientation-preserving topological embedding. The maps $\Pi_{w,v}$ need not be surjective; for every oriented edge $e=[v,w]$ in $S$ (oriented away from $u$) we have an oriented subsegment $$ L'_v= [x'_v y'_v]_{X_v}:= \Pi_{w,v}(L_w) $$ in $L_v$. Here $x'_v=\Pi_{w,v}(x_w), y'_v:= \Pi_{w,v}(y_w)$. The orientation of the segment $L'_v$ is then consistent with that of $L_v$ (since $\Pi_{w,v}$ is orientation-preserving). Observe also that the mere existence of $K$-qi sections $\Sigma_x$ implies {\em some semicontinuity} of the ladder ${\mathfrak L}$: For every edge $e=[v,w]\subset S$ (oriented away from $u$) \begin{equation}\label{eq:semicontinuity} L_w \subset N^e_K(L'_v)\subset N^e_K(L_v), \end{equation} where the $K$-neighborhood is taken in the subspace $X_{vw}$ (which is what the superscript $e$ indicates). However, $L_w$ can be much smaller than $L_v$. \medskip For ${\mathfrak L}$ (equipped with $\Sigma_\bullet$) to be a ladder, it has to satisfy three further axioms listed below. Note, however, that the assumption that the fibers $L_v, L_e$ of ${\mathfrak L}$ are geodesic segments ensures Property 1 in Definition \ref{defn:scfamily} with $\la=\delta_0$, while Axiom L1 implies Property 2 in that definition, thus making Axiom L3 somewhat redundant. We now fix $K\in [1,\infty), E\ge 1$ and $D\in [0,\infty]$. While all other two parameters in the triple $(K,D,E)$ are real numbers, as with general semicontinuous subtrees of spaces, it is convenient to allow for infinite values of the parameter $D$. \begin{figure}[tbh] \centering \includegraphics[width=60mm]{fig3.pdf} \caption{Ladder} \label{ladder.fig} \end{figure} \medskip {\bf Ladder Axioms}: \begin{itemize} \item{{\bf L1}} Each $x\in L$ belongs to some $\Sigma_x$. \item{{\bf L2}} Each map $\Pi_{w,v}$ is either constant or is an orientation-preserving topological embedding. \item{{\bf L3}} ${\mathfrak L}$ is a $(K,D,E,\delta_0)$-semicontinuous family of spaces. \end{itemize} \begin{rem} In general, $\pi(top({\mathfrak L}))$ and $\pi(bot({\mathfrak L}))$ are smaller than the base $S$. 
If $$\pi(top({\mathfrak L}))=\pi(bot({\mathfrak L}))=S$$ then ${\mathfrak L}$ will be a $K$-metric bundle. This will happen in the important case of {\em carpets}. \index{carpet} \end{rem} \begin{defn} A $(K,D,E)$-{\em ladder} in ${\mathfrak X}$ is a subtree of spaces $${\mathfrak L}=(\pi: L\to S)\subset {\mathfrak X}=(\pi: X\to T)$$ whose vertex and edge-spaces are oriented geodesic segments, equipped with a family of $K$-qi sections $\Sigma_\bullet$ and satisfying Axioms L0---L3. \end{defn} \begin{example} Let $x$ be a point in $X_u$ and let $\Sigma_x$ be a $K$-qi section of $\pi: X\to T$ defined over a subtree $S\subset T$ such that $x\in \Sigma_x$. Then $L=\Sigma_x$ is the total space of a $(K,0, \eta_0(2K))$-ladder. \end{example} \begin{defn}\label{defn:subladder} A {\em subladder} in ${\mathfrak L}$ is a ladder ${\mathfrak L}'={\mathfrak L}(\alpha')\subset {\mathfrak L}={\mathfrak L}(\alpha)$ with the same center $u$ as ${\mathfrak L}$, such that the sections $\Sigma'_\bullet$ of ${\mathfrak L}'$ are restrictions of the sections $\Sigma_\bullet$ of ${\mathfrak L}$. In particular, top and the bottom of ${\mathfrak L}'$ are contained in sections of ${\mathfrak L}$ through the end-points of $\alpha'$. \end{defn} In what follows, given a ladder ${\mathfrak L}={\mathfrak L}_K(\alpha)$, $\alpha\subset X_u$, for each point $x\in \LL$ let $\gamma_x\subset \Sigma_x\subset L$ denote the section over the interval $\llbracket u, \pi(x)\rrbracket$, connecting $x$ to a point in $\alpha$. Similarly, given two points $x, y\in \LL$, if $\Sigma_x\cap \Sigma_y\ne \emptyset$ and the restriction of $\pi$ to $\Sigma_x\cup \Sigma_y$ is 1-1, then there exists a unique $K$-leaf $\gamma_{x,y}$ in $\Sigma_x\cup \Sigma_y\subset {\mathfrak L}$ connecting $x$ to $y$. \medskip We omit a proof of the next lemma as it is straightforward: \begin{lemma}[Bisecting a ladder]\label{bisecting a ladder} Suppose $u\in V(T)$, $\alpha= [xy]_{X_u}\subset X_u$ and we are given a ladder ${\mathfrak L}={\mathfrak L}_{K,D,E}(\alpha)$. Then for every point $z\in [xy]_{X_u}$ the $K$-qi section $\Sigma_z\subset \LL$ decomposes ${\mathfrak L}$ into two $(K,D,E)$-subladders ${\mathfrak L}^+, {\mathfrak L}^-$ such that \begin{enumerate} \item $L^+_u=[zy]_{X_u}\subset \alpha$, $L^-_u=[xz]_{X_u}\subset \alpha$, \item $$ top({\mathfrak L}^-)= \Sigma_z= bot({\mathfrak L}^+). $$ \end{enumerate} \end{lemma} Applying this lemma twice, we obtain: \begin{cor}[Trisecting a ladder]\label{cor:trisecting a ladder} Suppose $u\in V(T)$, $\alpha= [xy]_{X_u}\subset X_u$ and we are given a ladder ${\mathfrak L}={\mathfrak L}_{K,D,E}(\alpha)$. Then for every subsegment $\alpha'=[x'y']_{X_u}\subset \alpha$ there exists a subladder ${\mathfrak L}'={\mathfrak L}_{K,D,E}(\alpha')\subset {\mathfrak L}$ bounded by the $K$-qi sections $\Sigma_{x'}, \Sigma_{y'}\subset {\mathfrak L}$ (its bottom and top respectively). \end{cor} Since a $(K,D,E)$-ladder ${\mathfrak L}=(\pi: L\to S)$ is a $(K,D,E,\delta_0)$-semicontinuous subtree of spaces in ${\mathfrak X}$, as an application of the retraction Theorem \ref{thm:mitras-projection} we obtain: \begin{cor} [Retraction to ladders] \label{cor:ladder-retraction} For every $(K,D,E)$-ladder ${\mathfrak L}=(\pi: L\to S)$ there exists a $L_{\ref{thm:mitras-projection}}(K,D,E,\delta_0)$-coarse Lipschitz retraction $\rho_{{\mathfrak L}}: X\to L$. \end{cor} \medskip We next define {\em carpets} which are both ladders and metric bundles. 
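Before doing so, we record a quick illustration of how the parameters above combine (the notation $\rho_{\Sigma_x}$ is used only here): for the one-section ladder $L=\Sigma_x$ of the Example above, Corollary \ref{cor:ladder-retraction} specializes to a coarse Lipschitz retraction onto a single $K$-qi section,
$$
\rho_{\Sigma_x}: X\to \Sigma_x, \qquad \rho_{\Sigma_x} \hbox{~~is~~} L_{\ref{thm:mitras-projection}}(K,0,\eta_0(2K),\delta_0)\hbox{-coarse Lipschitz,}
$$
and, in view of Corollary \ref{cor:mitras-projection}, every $K$-qi section over a subtree $S\subset T$ is uniformly qi embedded in $X$.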
While in Axiom L3 of a ladder we assume that fibers over {\em all} boundary vertices of $S$ have uniformly bounded diameter when projected to adjacent vertex-spaces $X_w, w\notin V(S)$, in the definition of carpets (where the base $S$ is an oriented interval $\llbracket u, w\rrbracket$) we will allow one of the boundary vertices of $S$ (namely the vertex $u$) to violate this property (which is why $D=\infty$). However, instead, we will add a stronger requirement on the other boundary vertex $w$ and a requirement on the top and the bottom. \begin{defn} A $(K,\infty, \eta_0(2K))$-ladder ${\mathfrak A}=(\pi: A\to S)\subset {\mathfrak X}$ is called a $(K,C)$-{\em narrow carpet} or just a {\em $(K,C)$-carpet} if: 1. $S$ is an interval $\llbracket u,w\rrbracket$ and, furthermore, the top and the bottom of ${\mathfrak A}$ connect the two end-points of $A_u$ to that of $A_w$, i.e. $$ \pi(top({\mathfrak A}))=\pi(bot({\mathfrak A}))=S. $$ In this case, we will say that ${\mathfrak A}$ is bounded by the $K$-qi sections $\gamma_1=bot({\mathfrak L}), \gamma_2=top({\mathfrak L})$ of the carpet. We will refer to $\beta=A_w$ as the {\em (narrow) end} of the carpet and will say that ${\mathfrak A}$ is {\em from $\alpha=A_u$ to $\beta=A_w$}. 2. $A_w$ has length $\le C$. \noindent We will use the notation ${\mathfrak A}= {\mathfrak A}_K(\alpha)$ for such carpets to indicate the two key parameters. \end{defn} \begin{defn}\index{hallway} \label{defn:hallway} A $(K,\infty)$-carpet is called a $K$-hallway. \end{defn} \begin{rem} 1. Every $K$-hallway is a $K$-metric bundle. 2. The pair of sections $\gamma_1, \gamma_2$ determines a hallway ${\mathfrak A}$ ``coarsely uniquely'': The ambiguity in the definition comes from the choice of the vertical geodesics $A_t, t\in V(S)$, and, hence, is uniformly controlled. Therefore, in what follows, we will ignore this ambiguity. \end{rem} \medskip The next lemma establishes existence of ladder and hallway structures on subsets in $X$ which are unions of vertical geodesic segments. \begin{lemma}\label{lem:E-ladder-structure} Suppose that ${\mathfrak X}$ is a tree of hyperbolic spaces. There exists a function $K'=K'_{\ref{lem:E-ladder-structure}}(K)$ such that the following holds: a0. Suppose that $\LL\subset \XX$ is a subset whose projection to $T$ is the vertex-set of a subtree $S\subset T$ containing a vertex $u$ satisfying: a1. Every fiber $L_v=\LL\cap X_v, v\in V(S)$, is an oriented geodesic segment $[x_v y_v]_{X_v}$ in $X_v$. a2. $\LL$ satisfies Property 4 of a semicontinuous family of spaces with the parameter $D$. Furthermore, in line with Property 3, for every oriented away from $u$ edge $e=[v,w]\in E(S)$, $\operatorname{Hd}_{X_{vw}}(L^v_w,L_w)\le E$, where, as before, $$ L^v_w= P_{X_{vw},X_w}(L_v)\subset X_w. $$ a3. Points $x_w, y_w$ are within distance $K$ (in $X_{vw}$) from points $x'_v, y'_v\in L_v$ respectively, so that $$ x_v\le x'_v\le y'_v\le y_v $$ on the oriented segment $L_v$. Then $\LL\subset X$ is the union of vertex-spaces of a $(K',D,E)$-ladder ${\mathfrak L}\subset {\mathfrak X}$ centered at $u$. b0. Suppose that $\mathcal A\subset \XX$ is a subset whose projection to $T$ is the vertex-set of an interval $S=\llbracket u,w\rrbracket\subset T$ such that: b1. Every fiber $A_v, v\in V(S),$ of $\mathcal A$, is an oriented geodesic segment $[x_v y_v]_{X_v}$ in $X_v$. b2. For every edge $[v_1,v_2]$ in $S$, $d_{X_{v_1v_2}}(x_{v_1}, x_{v_2})\le K$, $d_{X_{v_1v_2}}(y_{v_1}, y_{v_2})\le K$. 
Then $\mathcal A$ is the union of vertex-spaces of a $K'$-hallway ${\mathfrak A}\subset X$. \end{lemma} \proof Our first goal is to define the function $K'$. We let $r':=D_{\ref{lem:sub-close}}(\delta_0', L'_0, K)$ be given by Lemma \ref{lem:sub-close}. For $k=K_{\ref{lem:close->qi}}(r', L'_0)$ given by Lemma \ref{lem:close->qi}, we let $\la'={k}_{\ref{lem:approximation}}(k)$ be given by Lemma \ref{lem:approximation}. Lastly, set $$ K':= r'+ D_{\ref{lem:approximation}}(k). $$ \noindent We now prove the lemma. a. We define inductively the projections $\Pi_{v_1,v_2}$ (where $e=[v_1,v_2]$ is an edge in $S$ oriented away from $u$), as well as the edge-spaces $L_e$. Suppose that for the subtree $B_n\subset S$, which is the closed $n$-ball centered at $u$, we defined (partial) $K$-qi sections $\Sigma$ and maps $\Pi$ satisfying all the requirements of a ladder with respect to the parameter $K'$. We extend the definitions of these sections and maps to the vertices in the ball $B_{n+1}\subset S$ as follows. Let $e=[v_1,v_2]$ be an edge of $S$ with $v_1\in B_n, v_2\notin B_n$. Let $L'_{v_1}$ denote the oriented subsegment of $L_{v_1}$ bounded by $x'_{v_1}, y'_{v_1}$ respectively. Similarly, we define the edge-space $L_e$ as the oriented geodesic segment in $X_e$ spanned by the nearest-point projections of the end-points $x_{v_2}, y_{v_2}$ of $L_{v_2}$. According to Lemma \ref{lem:sub-close}, we have $$ \operatorname{Hd}_{X_{v_1v_2}}(L'_{v_1}, L_{v_2})\le r'=D_{\ref{lem:sub-close}}(\delta_0', L'_0, K)\le K', $$ $$ \operatorname{Hd}_{X_{v_1v_2}}(L_{e}, L_{v_2})\le r'. $$ These conditions ensure Property 3 of Definition \ref{defn:scfamily}, i.e. Axiom L3 of a ladder. By Lemma \ref{lem:close->qi}, we extend the map $x_{v_2}\mapsto x'_{v_1}, y_{v_2}\mapsto y_{v_1}'$ to a $k=K_{\ref{lem:close->qi}}(r',L'_0)$-quasiisometry of geodesic segments $q: L_{v_2}\to L'_{v_1}$, which moves each point distance $\le r'$ (with respect to the metric of $X_{v_1v_2}$). Applying Lemma \ref{lem:approximation}, we can replace the quasiisometry $q$ by an increasing homeomorphism $\tilde{q}$ (or a constant function) within distance $D_{\ref{lem:approximation}}(k)$ from $q$, such that $\tilde{q}$ is a ${k}_{\ref{lem:approximation}}(k)$-quasiisometry. Since $q$ was moving every point of $L_{v_2}$ at most distance $r'$, it follows that $\tilde{q}$ moves every point within distance $K'=r'+D_{\ref{lem:approximation}}(k)$ in $X_{v_1v_2}$. We set $$ \Pi_{v_2,v_1}:= \tilde{q}. $$ Thus, we obtain maps $\Pi_{w,v}: L_{v_2}\to L_{v_1}$ for oriented edges $[v_1,v_2]$ of the tree $S=\pi(\LL)$, such that $d_{X_{v_1v_2}}(x, \Pi_{v_2,v_1}(x))\le K', x\in L_{v_2}$. For vertices $v_1, v_2$ of $S$ such that $v_1$ is between $u$ and $v_2$ we define the map $\Pi_{v_2,v_1}: L_{v_2}\to L_{v_1}$ as the composition of maps defined for the sequence of edges connecting $v_2$ to $v_1$. If $\Pi_{v_2,v_1}$ is injective, then for $z\in L'_{v_1}$ we define the section $\Sigma_z\cap L_{v_2}$ as $$ \Pi_{v_2,v_1}^{-1}(z). $$ If the map is not injective, i.e. constant, we choose an arbitrary point in $L_{v_2}$ as $\Sigma_z\cap L_{v_2}$. b. The proof of this part is exactly the same as of Part a, except that we use $x'_{v_1}=x_{v_1}, y'_{v_1}=y_{v_1}$. \qed \section{Flow-spaces}\label{sec:flow-spaces} \subsection{$K$-flow spaces and Mitra's retraction} \label{sec:def-of-flow-spaces} Suppose that ${\mathfrak X}=(\pi: X\to T)$ is a tree of hyperbolic spaces. 
We fix a vertex $u\in T$, the {\em center of the flow}, and orient all edges $e=[v, w]$ of $T$ so that $v$ is closer to $u$ than $w$. For each $4\delta_0$-quasiconvex subset $Q_u\subset X_u$ we will define the $K$-{\em flow-space} $$ {\mathfrak Fl}_K(Q_u)= (\pi: {Fl}_K(Q_u) \to S) \subset {\mathfrak X}, $$ which, unlike ladders and carpets, depends only on $Q_u$ and on $K$, and which will be a $(K,D,E,4\delta_0)$-semicontinuous family of spaces (relative to the vertex $u$), with the parameter $E$ depending only on $K$ and $D=D_0$, where \begin{equation}\label{eq:D0} D_0=D_{\ref{cobdd-cor}}(\delta'_0, \la'_0) \end{equation} is independent of $K$. However, for the construction to work, the parameter $K$ has to be sufficiently large, specifically, $K\ge K_0$, where $K_0$ (which depends only on the parameters of the tree of spaces ${\mathfrak X}$) is given by the equation \eqref{K0}. As before, we will use ${\mathcal Fl}_K(Q_u)$ to denote the union of vertex-spaces of ${\mathfrak Fl}_K(Q_u)$. We first compute the auxiliary parameter $E$ and a certain parameter $R$ (depending on $K$) which will be used to define the $K$-flow. Suppose that $\la\ge \frac{3}{2}\delta_0$. Recall (Lemma \ref{lem:triple1}) that if a subset $Q$ of $X_v$ is $\la$-quasiconvex in $X_{vw}$, then $Q$ is $\hat{\la}$-quasiconvex in $X_v$ with \begin{equation}\label{hatla} \hat{\la}= 1500 (L'_0\la)^3. \end{equation} Take \begin{equation}\label{eq:R-ineq} R\ge R_0:= \max(2(\la'_0+ \delta'_0), R_{\ref{cobdd-cor1}}(\delta'_0, \la'_0))= 2\la'_0+ 5\delta'_0. \end{equation} Set (cf. \eqref{hatla}) \begin{equation}\label{eq:lambda'} \la':= 1500 (L'_0(R+2\delta'_0))^3, \end{equation} \begin{equation}\label{eq:E-R-flow} E:=2(2\la'_0+3\delta'_0 +R)+ (\la'+\delta_0). \end{equation} While our proofs will work whenever \begin{equation}\label{eq:K-R-ineq} K\ge R+\la'+\delta_0, \end{equation} concretely, we will use \begin{equation}\label{eq:vee} K = R^\wedge:=(15 L'_0 R)^3, \hbox{~~~i.e.~~~} R= K^\vee:= \frac{1}{15L'_0} K^{1/3}. \end{equation} (The reader will verify that this $K$ satisfies the inequality \eqref{eq:K-R-ineq}.) Thus, the inequality \eqref{eq:R-ineq} translates into the inequality \begin{equation}\label{eq:K-ineq} K\ge K_0= 15^3 (2\la'_0+ 5\delta'_0)^3 (L'_0)^3. \end{equation} Note also that \eqref{eq:E-R-flow} makes $E$ a function of $K$, \begin{equation}\label{eq:E-K-flow} E=E_{\ref{eq:E-K-flow}}(K), \end{equation} while $R$ also becomes a function of $K$. We inductively define $4\delta_0$-quasiconvex subsets $Q_v\subset X_v, Q_e\subset X_e$, $v\in V(T), e\in E(T)$, and, at the same time, verify the conditions of Definition \ref{defn:scfamily} for the collection of subsets $Q_v, Q_e$, aiming eventually to prove Theorem \ref{thm:semicontinuity-of-flows}. Assuming that for $v\in V(T)$ a $4\delta_0$-quasiconvex subset $Q_v\subset X_v$ is defined, for the oriented edge $e=[v,w]$ of $T$ (oriented away from $u$) we set $$ Q^v_w:= P_{X_{vw},X_w}(Q_v), \quad Q'_w:= N^e_R(Q_v)\cap X_w. $$ According to Corollary \ref{cor:0-flow-space}, $$ \operatorname{Hd}_{X_{vw}}(Q^v_w, Q'_w)\le 2(2\la'_0+3\delta'_0 +R). $$ Note that both $X_w, Q_v$ are $\la'_0$-quasiconvex in $X_{vw}$. Furthermore, by Lemma \ref{lem:coarse-intersections-are-qc-2}, since $R\ge R_0\ge 2\la'_0+ 2\delta'_0$, the intersection $Q'_w= N^e_R(Q_v)\cap X_w$ is $\la_{\ref{lem:coarse-intersections-are-qc-2}}= (R+2\delta'_0)$-quasiconvex in $X_{vw}$.
Hence, $Q'_w$ is $\la'=\widehat{R+2\delta'_0}$-quasiconvex in $X_w$, where $$ \la'= 1500 (L_0' (R+ 2\delta'_0))^3, $$ see Lemma \ref{lem:triple1}. Therefore, by \eqref{eq:qc-nbd}, the $\delta_0$-hull, taken in $X_w$, $$ Q_w:=\operatorname{Hull}_{\delta_0}(Q'_w) $$ is $(\la'+\delta_0)$-Hausdorff close to $Q'_w$, thus, $$ \operatorname{Hd}_{X_{vw}}(Q^v_w, Q_w)\le E=2(2\la'_0+3\delta'_0 +R)+ (\la'+\delta_0), $$ verifying the condition \eqref{eq:E-in} in Part 3 of the definition of a semicontinuous family of spaces (in the case when $Q_w'\ne \emptyset$, equivalently, $Q_w\ne \emptyset$). We define the edge-space $Q_e$ as the $\delta_0$-hull (in $X_e$) of the projection $$ P_{X_{vw},X_e}(Q_w). $$ Thus, $$ \operatorname{Hd}_{X_{vw}}(Q^v_w, Q_e)\le \delta_0+1. $$ At the same time, since each point of $Q'_w$ is within distance $R$ from $Q_v$, each point of $Q_w$ is within distance $$ R+\la'+\delta_0$$ from $Q_v$, where both distances are computed in $X_{vw}$. Since $$ K= (15 L'_0 R)^3\ge R+\la'+\delta_0, $$ we conclude that each point of $Q_w$ is within distance $K$ from $Q_v$. From this, since $Q_e$ was defined as (the $\delta_0$-hull of) the projection of $Q_w$ to $X_e$, it also follows that $\operatorname{Hd}_{X_{vw}}(Q_w, Q_e)\le K$. Thus, we verified Part 3 of Definition \ref{defn:scfamily} (for the edge $e$). Since the subsets $Q_w, Q_e$ were defined as $\delta_0$-hulls in $\delta_0$-hyperbolic spaces, we conclude that $Q_e\subset X_e, Q_w\subset X_w$ are $4\delta_0$-quasiconvex, verifying Part 1 of Definition \ref{defn:scfamily}. Lastly, we turn to Part 4 of Definition \ref{defn:scfamily}. As we noted earlier, $Q_w=\emptyset$ if and only if $Q'_w=N^e_R(Q_v)\cap X_w=\emptyset$. In other words, if $Q_w=\emptyset$, then the $\la'_0$-quasiconvex subsets $Q_v, X_w\subset X_{vw}$ are $R$-separated. Since $R$ was chosen to satisfy $$ R\ge R_0=R_{\ref{cobdd-cor}}(\delta'_0, \la'_0)=2\la'_0+5\delta'_0,$$ Corollary \ref{cobdd-cor} implies that the subsets $Q_v, X_w\subset X_{vw}$ are $D=D_{\ref{cobdd-cor}}(\delta'_0, \la'_0)$-coboun\-ded. This verifies Part 4 of Definition \ref{defn:scfamily}. \begin{defn}\label{defn:flow-space} \index{flow-space $Fl_K(Q)$} We define the {\em $K$-flow space ${\mathfrak Fl}_K(Q_u)$} of $Q_u$ as the following subtree of spaces in ${\mathfrak X}$. The nonempty subsets $Q_v, Q_e$ defined by the inductive procedure above will be the vertex/edge spaces of ${\mathfrak Fl}_K(Q_u)$. The incidence maps $g_{ev}$ of ${\mathfrak Fl}_K(Q_u)$ are the compositions of the incidence maps $f_{ev}$ with fiberwise nearest-point projections in $X_v$ to $Q_{v}$. The vertex and edge-spaces of ${\mathfrak Fl}_K(Q_u)$ are equipped with path-metrics induced from the ambient path-metrics on vertex and edge-spaces of ${\mathfrak X}$. We let $Fl_K(Q_u)\subset X$ denote the total space of ${\mathfrak Fl}_K(Q_u)$, set $S:= \pi(Fl_K(Q_u))$; we will use the notation ${\mathcal Fl}_K(Q_u)$ for the disjoint union $$ \coprod_{v\in V(S)} Q_v, $$ which is the union of vertex-spaces of ${\mathfrak Fl}_K(Q_u)$. We will equip $Fl_K(Q_u)$ with the standard path-metric of a tree of spaces. Sometimes it will be convenient to restrict flow-spaces to subtrees $T'\subset T$. We will denote such ``subflows'' by $$ {\mathfrak Fl}^{T'}_K(Q_u). $$ \end{defn} \begin{rem}\label{rem:flows} 1. The $\delta_0$-hulls in the definition of flow-spaces are taken in order to ensure that each inclusion map $Q_w\to X_w$ is a $(1,C_{\ref{lem:delta-hull}}(\delta_0))$-quasiisometric embedding, where $Q_w$ is equipped with the path-metric induced from $X_w$, see Lemma \ref{lem:delta-hull}. 2.
In general, it is not true that for Hausdorff-close subsets $A, B\subset X_u$, the $K$-flow spaces are Hausdorff-close to each other. However, if $$ B\subset N^{fib}_{r}(Q_u)\subset X_{u}$$ then (by the very definition of a flow-space) $$ Fl_K(B)\subset Fl_{K+ r}(Q_u). $$ Similarly, $$ N^{fib}_r(Fl_K(Q_u)) \setminus N^{fib}_r(Q_u) \subset Fl_{K+ r}(Q_u). $$ \end{rem} The discussion preceding the definition of flow-spaces proves: \begin{theorem}\label{thm:semicontinuity-of-flows} For every $K\ge K_0$, the flow-space ${\mathfrak Fl}_K(Q_u)$ is a $(K,D,E)$-semicontinuous family of spaces in ${\mathfrak X}$, where $D=D_0=D_{\ref{cobdd-cor}}(\delta'_0, \la'_0)$ and $E=E_{\ref{thm:semicontinuity-of-flows}}(K)$ is given by the equation \eqref{eq:E-R-flow}. In particular, every $x\in Fl_K(Q_u)$ belongs to a $K$-leaf $\gamma_x$ in $Fl_K(Q_u)$ connecting $x$ to $Q_u$. \end{theorem} Combining this with the existence of uniform coarse Lipschitz retractions to semicontinuous subtrees of spaces (Theorem \ref{thm:mitras-projection}), we conclude: \begin{theorem}[Mitra's Retraction] \label{mjproj}\index{Mitra's retraction $\rho$} Suppose that ${\mathfrak X}$ is a tree of hyperbolic spaces. Then for each $K\ge K_0$, there exists an $L_{\ref{mjproj}}(K)$-
Omega_{\theta}, \end{equation*} with $w(\gamma)$ the weight in~\eqref{weight} and $\Gamma_{\theta}$ the set in~\eqref{Gamma}. It is easy to see that this condition fails to hold almost surely. Indeed, for any $s\in \Omega_{\theta}$ with $X(s)=x$, there exist $(\theta(x)-1)!$ cycles passing through $s$ that have zero weight, and hence $\beta(\alpha,s)\geq (\theta(X(s))-1)!$. Taking the supremum over $s\in \Omega_{\theta}$ we conclude that $\beta(\alpha)=\infty$ almost surely in $\{\theta(x)\}_{x\in {\mathbb Z}^d}$, for any $\alpha>0$. Let $\Lambda \subset {\mathbb Z}^d$ be a finite set. A finite cycle permutation $\sigma \in S_{\theta}^F$ can be represented as a configuration $\eta\in \{0,1\}^{\Gamma_{\theta}}$ with $\eta(\gamma)= {\mathbf 1}\{\gamma\in \sigma\}$. We say that $\eta$ is the {\it gas of cycles representation of} $\sigma$ and write $\gamma\in\eta$ iff $\eta(\gamma)=1$. Notice that if $\eta(\gamma)=1$, then $\eta(\gamma')=0$ for any cycle $\gamma'\neq \gamma$ with $\{\gamma\}\cap \{\gamma'\}\neq \emptyset$. Given $\xi \in S_{\theta}^{F}$ and ${\Lambda} \Subset {\mathbb Z}^d$ let \begin{equation*} B(\xi, {\Lambda})= \{\gamma \in \xi \colon \{ \gamma\} \cap {\Lambda} \neq \emptyset, \{\gamma\} \cap {\Lambda}^c \neq \emptyset\}, \end{equation*} the set of cycles from $\xi$ that intersect ${\Lambda}$ and ${\Lambda}^c$. The set of permutations $S_{\theta,{\Lambda}}^{\xi}$ that are compatible with $\xi$ at volume ${\Lambda}$ introduced in \eqref{compatible} can now be described as \begin{align*} S_{\theta,{\Lambda}}^{\xi} = \Big\{\eta \in \{0,1\}^{\Gamma_{\theta}} \colon & \eta(\gamma)=1 \text{ for all } \gamma\in B(\xi, {\Lambda}), \nonumber \\ & \eta(\gamma)=0 \text{ if } \gamma\in \Gamma_{\theta,{\Lambda}} \text{ and there exists } \gamma'\in B(\xi, {\Lambda}) \text{ with } \{\gamma\}\cap \{\gamma'\} \neq \emptyset, \nonumber \\ & \eta(\gamma)\eta(\gamma')=0 \text{ if } \{\gamma\}\cap\{\gamma'\}\neq \emptyset \text{ for all } \gamma, \gamma'\in \Gamma_{\theta,{\Lambda}}, \notag \\ & \eta(\gamma)=\xi(\gamma) \text{ if }\{\gamma\}\subset {\Lambda}^c \Big \}, \end{align*} and the specification at finite volume ${\Lambda}$ with boundary condition $\xi$ \eqref{especificaciones} becomes \begin{equation} \label{especificacion-gas-ciclos} G_{\theta, {\Lambda}}^{\xi}(\eta) = \frac{1}{Z_{\theta, {\Lambda}}^{\xi}} \prod\limits_{\gamma \in \Gamma_{\theta,{\Lambda}}} w(\gamma)^{\eta(\gamma)}\, {\mathbf 1} \{ \eta\in S_{\theta,{\Lambda}}^{\xi}\} \,. \end{equation} From this viewpoint the specification $G_{\theta, {\Lambda}}^{\xi}$ is a distribution over the space of gases of cycles, which interact by exclusion: if a cycle $\gamma$ is in the gas, then any other cycle that uses a point visited by $\gamma$ cannot be in the gas. For the rest of the article we denote configurations interchangeably by $\sigma$ or by the associated gas of cycles $\eta$. \paragraph{The free process.} Let $\mathcal{N}$ be a Poisson process on $\Gamma_{\theta} \times \mathbb{R} \times \mathbb{R}_+$ with intensity measure $w(\gamma) \times dt \times e^{-r} dr$. Given $\xi$ and $\Lambda$, we define the {\it free process $(\eta_t^{o,\xi,{\Lambda}} \colon t\in {\mathbb R})$ on $\mathbb{N}_0^{\Gamma_{\theta}}$ associated to $\xi,\, \Lambda$} as \begin{equation} \label{xi-BD-process} \eta_t^{o,\xi,{\Lambda}}(\gamma) = {\mathbf 1}\{\gamma \in B(\xi,{\Lambda})\} + \sum_{(\gamma,t',r')\in \mathcal{N}} {\mathbf 1}\{t'\leq t < t'+r'\}\,.
\end{equation} each marginal process $(\eta_t^{o,\xi,{\Lambda}}(\gamma) \colon t\in {\mathbb R})$ is a birth and death process of cycles of type $\gamma$, shifted by 1 when $\gamma\in B(\xi, {\Lambda})$ to account for the boundary condition. A new copy of cycle $\gamma$ appears at rate $w(\gamma)$ and is removed at rate $1$, independently of other copies of the same cycle, and of other cycles. The generator of this process is given by \begin{multline} \label{generador-free-xi} \mathcal{L}^{o,\xi,{\Lambda}} f(\eta) = \sum_{\gamma \in \Gamma_{\theta}} w(\gamma) \left[ f(\eta + \delta_{\gamma}) - f(\eta)\right] + \sum_{\gamma\in B(\xi, {\Lambda})} \eta(\gamma){\mathbf 1}\{\eta(\gamma)\geq 2\} \left[ f(\eta - \delta_{\gamma}) - f(\eta)\right] \\ + \sum_{\gamma\notin B(\xi, {\Lambda})} \eta(\gamma) \left[ f(\eta - \delta_{\gamma}) - f(\eta)\right] \,, \end{multline} $f:\mathbb{N}_0^{\Gamma_{\theta}}\to {\mathbb R}$ a test function. Consider the product measure $\nu_{\theta,{\Lambda}}^{\xi}$ on $\mathbb{N}_0^{\Gamma_{\theta}}$ such that, independently for $\gamma \in \Gamma_\theta,$ the marginal distribution $\nu_{\theta,{\Lambda}}^{\xi}(\gamma)$ satisfies \begin{align*} \nu_{\theta,{\Lambda}}^{\xi}(\gamma)&\sim \text{Poisson}\big(w(\gamma)\big)\quad \text{if } \gamma \not\in B(\xi, {\Lambda})\notag\\ \nu_{\theta,{\Lambda}}^{\xi}(\gamma)&\sim 1+\text{Poisson}\big(w(\gamma)\big)\quad \text{if } \gamma \in B(\xi, {\Lambda}). \end{align*} When $\xi=\text{id}$, the set $B(\text{id},\Lambda)=\emptyset$ for any $\Lambda\Subset {\mathbb Z}^d$, and $\nu_{\theta,{\Lambda}}^{\text{id}}$ does not depend on $\Lambda$; in this case we write $\nu_{\theta}$ instead of $\nu_{\theta,{\Lambda}}^{\text{id}}$. Each marginal $\nu_{\theta,\Lambda}^{\xi}(\gamma)$ is reversible for the birth and death dynamics of cycles of type $\gamma$, hence the product measure $\nu_{\theta,{\Lambda}}^{\xi}$ is reversible for $\mathcal{L}^{o,\xi,{\Lambda}}$. The family of processes $\big\{(\eta_t^{o,\xi,\Lambda} \colon t\in {\mathbb R}),\, \xi\in S_\theta^F\big\}$ can be simultaneously built using the same driving Poisson process $\mathcal{N}$. The coupled construction yields $\eta_t^{o,\xi,\Lambda} \geq \eta_t^{o,\text{id},\Lambda}$, that is, \[ \eta_t^{o,\xi,\Lambda} (\gamma)\geq \eta_t^{o,\text{id},\Lambda}(\gamma)\quad \text{ for all }\gamma\in \Gamma_{\theta},\, t\in {\mathbb R}. \] In fact, these processes only differ if $\gamma \in B(\xi, {\Lambda})$, and then $\eta_t^{o,\xi,\Lambda} (\gamma)= \eta_t^{o,\text{id},\Lambda}(\gamma)+1$. Note that $\nu_{\theta,{\Lambda}}^{\xi}$ assigns positive probability to $S_{\theta,{\Lambda}}^{\xi}$, hence the conditional measure $\nu_{\theta,{\Lambda}}^{\xi}(\,\cdot \,|S_{\theta, {\Lambda}}^{\xi})$ is well-defined, and it is a simple computation to verify that $\nu_{\theta,{\Lambda}}^{\xi}(\,\cdot \,|S_{\theta, {\Lambda}}^{\xi})= G_{\theta,{\Lambda}}^{\xi}(\cdot)$. \paragraph{The loss network.} Our goal now is to define a process that can be easily compared to the free process, and which has $G_{\theta, {\Lambda}}^{\xi}$ as invariant measure. We will realize it as a thinning of the free process. As before, let $\xi \in S_\theta^F$ and ${\Lambda} \Subset {\mathbb Z}^d$ be fixed. We say that two cycles $\gamma$ and $\gamma'$ are compatible if $\{\gamma\}\cap \{\gamma'\}= \emptyset$. Otherwise, they are incompatible. 
A cycle $\gamma$ is compatible with the gas of cycles $\eta \in \{0,1\}^{\Gamma_{\theta}}$, which we denote by $\gamma\sim\eta$, when $\gamma$ is compatible with all cycles $\gamma'$ such that $\eta(\gamma')=1$. The {\it loss network associated to $\xi,\, \Lambda$} is the Markov process in $S^\xi_{\theta,{\Lambda}}$ with generator \begin{equation} \label{xi-generador-loss} \mathcal{L}^{\xi,{\Lambda}} f(\eta) = \sum_{\gamma \in \Gamma_{\theta, {\Lambda}}} w(\gamma) {\mathbf 1}\{\gamma \sim \eta\}\left[ f(\eta + \delta_{\gamma}) - f(\eta)\right] + \sum_{\gamma\in \Gamma_{\theta,{\Lambda}}} \eta(\gamma) \left[ f(\eta - \delta_{\gamma}) - f(\eta)\right]\,, \end{equation} where $f\colon S_{\theta, {\Lambda}}^{\xi} \to {\mathbb R}$ is a test function. Informally, the loss network follows the dynamics of the free process but it is subject to an exclusion rule: a cycle $\gamma\in\Gamma_{\theta,{\Lambda}}$ attempts to be born at rate $w(\gamma)$, but the attempt is effective only when $\gamma$ is compatible with the cycles already present in the configuration at the time; each cycle is removed at rate 1 independently of others; and a copy of each cycle in $B(\xi, {\Lambda})$ is present at all times. The loss network is an irreducible Markov process in a finite state space with a unique invariant measure. \begin{lema} Let ${\Lambda}\Subset {\mathbb Z}^d$ and let $\xi$ be a finite cycle permutation. The measure $G_{\theta, {\Lambda}}^{\xi}$ defined in \eqref{especificacion-gas-ciclos} is the unique invariant distribution for the generator $\mathcal{L}^{\xi,{\Lambda}}$. \end{lema} Since we only need to verify the detailed balance equations, we omit the proof. Denote by $\eta_t^{\xi,{\Lambda}}$ the loss network process related to $\xi$ and ${\Lambda}$ at time $t$. We want to construct $\eta_t^{\xi,{\Lambda}}$ using a convenient thinning of the free process $(\eta_t^{o,\xi,{\Lambda}})$ to obtain $\eta_t^{\xi,{\Lambda}} \leq \eta_t^{o,\xi,{\Lambda}}$ for all $t$. The algorithm to delete cycles needs to know whether the birth attempt of a cycle is allowed or not. To this end, we consider the clan of ancestors of $\zeta=(\gamma,t,r) \in \Gamma_{\theta, {\Lambda}}\times {\mathbb R} \times {\mathbb R}^+$ as follows. The first generation of ancestors supported on ${\Lambda}$ is defined by: \begin{equation*} A_1^{\zeta, {\Lambda}} = \{(\gamma',t',r')\in \mathcal{N} \colon \gamma'\in \Gamma_{\theta, {\Lambda}},\, \gamma' \nsim \gamma, \, t'<t<t'+r'\}\,. \end{equation*} Inductively, if $A_{n-1}^{\zeta, {\Lambda}}$ is determined, for the $n$-th generation we set: \begin{equation*} A_{n}^{\zeta, {\Lambda}} = \bigcup\limits_{\upsilon \in A_{n-1}^{\zeta, {\Lambda}}} A_{1}^{\upsilon, {\Lambda}}\,. \end{equation*} The clan of ancestors of the mark $\zeta$ supported in ${\Lambda}$ is defined by \begin{equation*} A^{\zeta,{\Lambda}} = \bigcup\limits_{n \geq 1} A_n^{\zeta,{\Lambda}}. \end{equation*} Suppose that $A^{\zeta,{\Lambda}}$ is finite for all $\zeta \in \Gamma_{\theta, {\Lambda}}\times {\mathbb R} \times {\mathbb R}^+$ and for almost all realizations of $\mathcal{N}$. To describe the thinning of $\eta^{o,\xi,\Lambda}$ we decide at each step whether a cycle is kept or deleted using its clan of ancestors.
Let $\mathcal{D}_0^{\xi,{\Lambda}}= \{(\gamma,t,r) \in \mathcal{N} \colon \gamma\nsim \gamma' \text{ for some } \gamma'\in B(\xi,{\Lambda})\}$ and for $n\geq 1$ set \begin{equation}\label{kept-deleted-paso-n} \mathcal{K}_{n}^{\xi,{\Lambda}} = \{\zeta \in \mathcal{N} \colon A_1^{\zeta, {\Lambda}} \setminus \mathcal{D}_{n-1}^{\xi, {\Lambda}} = \emptyset \}, \hspace{1cm} \mathcal{D}_{n}^{\xi,{\Lambda}} = \{\zeta \in \mathcal{N} \colon A_1^{\zeta, {\Lambda}} \cap \mathcal{K}_{n}^{\xi,{\Lambda}} \neq \emptyset \} \,. \end{equation} Let $\mathcal{K}^{\xi,{\Lambda}}= \cup_{n\geq 1} \mathcal{K}_n^{\xi,{\Lambda}}$ be the set of kept cycles and $\mathcal{D}^{\xi, {\Lambda}}= \cup_{n \geq 1} \mathcal{D}_n^{\xi,{\Lambda}}$ be the set of deleted cycles. Note that in the initial step any cycle that is incompatible with a cycle from $B(\xi,\Lambda)$ is deleted. Under the assumption that all the clans of ancestors supported in ${\Lambda}$ are finite, every mark $\zeta \in \Gamma_{\theta, {\Lambda}}\times {\mathbb R} \times {\mathbb R}^+$ is kept or deleted. Now, using kept cycles we give a graphical construction for the loss network related to $\xi$ at volume ${\Lambda}$ by the formula \begin{equation} \label{loss-network} \eta_t^{\xi,{\Lambda}}(\gamma) = \sum_{(\gamma,t',r')\in \mathcal{N}} {\mathbf 1}\{t'\leq t < t'+r'\} \, {\mathbf 1}\{(\gamma,t',r') \in \mathcal{K}^{\xi,{\Lambda}}\} \, {\mathbf 1}\{\gamma \in \Gamma_{\theta, {\Lambda}}\}\,. \end{equation} To show that~\eqref{loss-network} is well-defined we need to check that the clan of ancestors of any mark $(\gamma,t',r')$ is finite almost surely. The next lemma proves this when ${\Lambda}$ is finite; unfortunately, the argument does not work when ${\Lambda}$ is infinite, as we saw at the beginning of this section. \begin{lema} \label{xi-especificacion-invariante} If ${\Lambda}\Subset {\mathbb Z}^d$, the process $(\eta_t^{\xi,{\Lambda}} \colon t\in {\mathbb R})$ is well-defined. It is a Markov process with generator given by~\eqref{xi-generador-loss}. The construction~\eqref{loss-network} is stationary, so $\eta_{t}^{\xi,{\Lambda}}$ is distributed according to $G_{\theta, {\Lambda}}^{\xi}$ for all $t$. \end{lema} \begin{proof} Since $\Gamma_{\theta, {\Lambda}}$ is a finite set, for almost every realization of the process $\mathcal{N}$ there exists a sequence of times $\{t_j \colon j\in {\mathbb Z}\}$ with $t_{j} {\to} \pm\infty$ as $j \to \pm\infty$ such that $\eta_{t_j}^{o,\xi,\Lambda}(\gamma)=0$ for all $\gamma \in \Gamma_{\theta, {\Lambda}}$. Therefore, $A^{\zeta, {\Lambda}}$ must be finite almost surely for all $\zeta \in \Gamma_{\theta, {\Lambda}}\times {\mathbb R} \times {\mathbb R}^+$. If the process $(\eta_t^{o,\xi,{\Lambda}} \colon t\in {\mathbb R})$ is restricted to cycles in $\Gamma_{\theta, {\Lambda}}$, marks can be sorted by their birth time (the second coordinate of the mark), and so, the algorithm described in~\eqref{kept-deleted-paso-n} works. The construction~\eqref{loss-network} is stationary and, since $G_{\theta,\Lambda}^{\xi}$ is the unique invariant distribution for $\mathcal{L}^{\xi,{\Lambda}}$, it follows that $\eta_t^{\xi, {\Lambda}}$ has distribution $G_{\theta,\Lambda}^{\xi}$ for all $t$. \end{proof} \begin{lema} \label{dominacion-de-especificaciones} Let $\xi$ be a finite cycle permutation. For almost every realization of the environment $\theta$, we have that $G_{\theta, {\Lambda}}^{\xi}$ is stochastically dominated by $\nu_{\theta,{\Lambda}}^{\xi}$ for all ${\Lambda}\Subset {\mathbb Z}^d$.
\end{lema} \begin{proof} Since $\nu_{\theta,{\Lambda}}^{\xi}$ and $G_{\theta, {\Lambda}}^{\xi}$ are invariant measures for the free process and the loss network process respectively, it is enough to give a coupling such that $\eta_t^{\xi,{\Lambda}} \leq \eta_t^{o,\xi,{\Lambda}}$ for all $t$. The coupling consists of using the same $\mathcal{N}$ for both graphical representations defined in~\eqref{xi-BD-process} and~\eqref{loss-network}. From these constructions it follows that \[\eta_t^{\xi,{\Lambda}}(\gamma) \leq \eta_t^{o,\xi,{\Lambda}}(\gamma) \text{ for all } t,\] and as both processes are stationary, $\nu_{\theta,{\Lambda}}^{\xi}$ dominates $G_{\theta, {\Lambda}}^{\xi}$. \end{proof} \section{Existence of Gibbs measures}\label{section-existence} In this section we prove that for $\rho\in(0,1/2)$ the family of specifications with identity boundary condition is tight in the large temperature regime for almost every realization of $\theta$. The tightness also holds considering specifications with a boundary condition given by a finite cycle permutation. In the next section we prove that for the large temperature regime there exists a unique Gibbs measure that concentrates on finite cycle permutations, so weak limits of specifications with a general finite cycle boundary condition coincide with those for the identity case. For this reason, we focus on the identity boundary condition case. All proofs are written for the quadratic potential case but work in the general case with slight modifications; we explain the differences in the next subsection. Note that there are cycles that are different but use the same sites and have an identical value of the Hamiltonian. See for instance $\gamma_1^2$ and $\gamma_2^2$ in Figure~\ref{Figura}. We want to study the number of cycles that project to a fixed and ordered set. The ordered support $[\gamma]$ of a cycle $\gamma=(s_1,\,s_2,\dots, s_n)$ is the vector in $({\mathbb Z}^d)^m,\, m\le n$, given by \begin{align} \label{ord-supp} &[\gamma]=(x_1, x_2, \dots, x_m) \quad \text{with} \quad x_i=X(s_{\pi(i)}) \end{align} where $\pi(1)=1$, and inductively, $\pi(i)=\inf\{k>\pi(i-1),\, X(s_k)\neq X(s_{\pi(i-1)})\},\,\, i>1$. In other words, $[\gamma]$ is the projection of $\gamma$ to ${\mathbb Z}^d$ erasing consecutive repetitions of sites. Note that both in the representation of $\gamma$ as a vector and in the definition of its ordered support $[\gamma],$ due to the cyclic property of $\gamma$, the choice of initial point is arbitrary. Starting from any other point $s \in \{\gamma\}$ for the former, or from its spatial coordinate $X(s)$ for the latter, leads to alternative representations of the cycle and its ordered support. The following refers to Figure 1. The supports of the cycles in $\sigma_1$ are $\{\gamma_1^1\}=\{(3,1);(3,2)\}$ and $\{\gamma_2^1\}=\{(6,1);(6,2);(6,3);(7,1)\}$, and the ordered supports are $[\gamma_1^1]=(3)$ and $[\gamma_2^1]=(6,7)$. Also, $\{\gamma_2^2\}=\{(6,1);(6,2);(7,1)\}\neq \{\gamma_2^1\}$, but they share the ordered support, $[\gamma_2^1]=[\gamma_2^2]$. Let $\bar{y} \in ({\mathbb Z}^d)^m$ be a vector such that $\bar{y}_i\neq \bar{y}_{i+1}$. It will represent an ordered support, so it may have repetitions in different (but non-consecutive) coordinates. Write $N_{\theta}(\bar{y})$ for the number of cycles $\gamma$ such that $[\gamma]= \bar{y}$. In the appendix, using basic facts from combinatorics, we compute an upper bound for $N_{\theta}(\bar{y})$ and we call it $M_{\theta}(\bar{y})$.
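To make the definition of the ordered support concrete, the following minimal sketch (illustrative only, not part of the original construction; the function name and the Python setting are ours) computes $[\gamma]$ from a cycle given as a list of points $(x,i)$ and reproduces the examples from Figure 1 discussed above.
\begin{verbatim}
def ordered_support(cycle):
    """Ordered support [gamma] of a cycle given as a list of points (x, i),
    where x is the site and i the tag at that site: project to the sites
    and erase consecutive repetitions.  The result depends on the chosen
    starting point of the representation, which is arbitrary."""
    sites = [x for (x, i) in cycle]        # projection X(s) of each point
    support = [sites[0]]
    for x in sites[1:]:
        if x != support[-1]:               # keep only the changes of site
            support.append(x)
    return tuple(support)

# The cycles of Figure 1 (sites as integers, tags as second coordinates):
ordered_support([(3, 1), (3, 2)])                  # (3,)    = [gamma_1^1]
ordered_support([(6, 1), (6, 2), (6, 3), (7, 1)])  # (6, 7)  = [gamma_2^1]
ordered_support([(6, 1), (6, 2), (7, 1)])          # (6, 7)  = [gamma_2^2]
\end{verbatim}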
The explicit definition of $M_{\theta}(\bar{y})$ is in~\eqref{cotaM(y)}. In the following we use that the expectation of $M_{\theta}(\bar{y})$ under ${\mathbb P}$ is bounded for $\rho\in(0,1/2)$ by \begin{align} {\mathbb E}[M_{\theta}(\bar{y})] \leq \left(\frac{\rho e^{-\rho+\frac{1}{2}}}{1-2\rho}\right)^{|\bar{y}|} \label{cota-expectation-M(y)}\,, \end{align} where $|\bar{y}|$ is the number of coordinates of $\bar{y}$. Note that the weight of a cycle is a function of its ordered support. So, for a cycle $\gamma$ with $[\gamma]=\bar{y}$ we have that $w(\gamma)=w(\bar{y}):= \exp\{-\alpha \sum_{i=1}^m \|\bar{y}_{i+1} - \bar{y}_i\|^2 \}$, with the convention $\bar{y}_{m+1}=\bar{y}_1$. In certain situations it will be useful to sum the weights of all finite cycles that contain a site $x$. Using the bound $M_{\theta}(\bar{y})$, we will instead sum over the ordered supports, for which the sums are easier to compute. In fact, the sum of the weights of ordered supports that contain site $x$ and have length $m$ is \begin{equation} \label{suma-pesos-largo-m} \sum\limits_{\substack{\bar{y} \colon x\in \bar{y}\\|\bar{y}|=m \\ \bar{y}_i\neq \bar{y}_{i+1}}} w(\bar{y}) = \sum\limits_{\substack{\bar{y} \colon x\in \bar{y}\\|\bar{y}|=m \\ \bar{y}_i\neq \bar{y}_{i+1}}} \prod\limits_{i=1}^{m} e^{-\alpha \|\bar{y}_i-\bar{y}_{i+1}\|^2} = \sum\limits_{\substack{t_1\dots t_m\\t_i \neq 0_d}} \prod\limits_{i=1}^{m} e^{-\alpha \|t_i\|^2} = \varphi(\alpha)^m, \end{equation} where $\varphi(\alpha)= \sum_{t \in {\mathbb Z}^d,\, t \neq 0_d} e^{-\alpha \|t\|^2}$. Observe that $\varphi$ is the function defined in~\eqref{funcion-varphi-general} for the quadratic potential. It is easy to compute that $\varphi(\alpha) < (1+ \sqrt{\frac{\pi}{\alpha}})^d -1$. So, $\varphi$ is a decreasing function of $\alpha$ that tends to 0 when $\alpha \to +\infty$. Now we start with a series of lemmas to prove the tightness of $\{G_{\theta, {\Lambda}}^{\text{id}}\}_{{\Lambda} \Subset {\mathbb Z}^d}$. For $f: {\mathbb Z}^d \to \mathbb{N}$ we define the set $\widehat{K}_f = \bigcap_{x\in {\mathbb Z}^d} \widehat{K}_f(x),$ where \begin{equation*} \widehat{K}_{f}(x)= \{\eta\in \mathbb{N}_0^{\Gamma_{\theta}} \colon \forall \,\gamma\in\eta \text{ such that } x\in \gamma \text{ we have } H(\gamma) \leq f(x)\} \,. \end{equation*} Denote by $\widehat{K}_{f}^c(x)$ the complement of $\widehat{K}_{f}(x)$. \begin{lema} \label{lema-cota-annealed} Let $\rho\in (0,1/2)$ and $\alpha>0$ be such that \begin{equation} \label{condicion-existencia-rho-alpha} C_{\rho}\,\varphi(\alpha/2)<1\,, \end{equation} where $\varphi$ is the function defined in~\eqref{funcion-varphi-general} and $C_{\rho}=\frac{\rho e^{-\rho+\frac{1}{2}}}{1-2\rho}$. Then, \begin{equation} \label{cota-annealed} {\mathbb E} \left[\nu_{\theta}(\widehat{K}_{f}^c(x)) \right] \leq C(\rho,\alpha) e^{-\frac{\alpha}{2} f(x)}\,. \end{equation} \end{lema} In the quadratic potential case we know that $\varphi(\alpha/2) \to 0$ when $\alpha \to +\infty$, so, for any $\rho \in (0,1/2)$ we can choose $\alpha$ large enough such that $C_{\rho}\,\varphi(\alpha/2)<1$. For a general potential $V$ the condition~\eqref{condicion-existencia-rho-alpha} becomes $C_{\rho}\varphi_V(\alpha/2)<1$, where $\varphi_V$ is the analogue of $\varphi$ defined in~\eqref{funcion-varphi-general}. Observe that now $\varphi_V(\alpha)$ does not necessarily tend to 0.
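For completeness, the elementary bound $\varphi(\alpha) < (1+\sqrt{\pi/\alpha})^d-1$ quoted above for the quadratic potential (not needed elsewhere, but convenient to have on record) follows by factorizing over coordinates and comparing the one-dimensional sum with a Gaussian integral:
$$
\varphi(\alpha)=\sum_{t\in{\mathbb Z}^d,\, t\neq 0_d} e^{-\alpha\|t\|^2}
=\Big(\sum_{k\in{\mathbb Z}} e^{-\alpha k^2}\Big)^d-1,
\qquad
\sum_{k\in{\mathbb Z}} e^{-\alpha k^2}
=1+2\sum_{k\geq 1} e^{-\alpha k^2}
<1+2\int_0^{\infty} e^{-\alpha u^2}\,du
=1+\sqrt{\tfrac{\pi}{\alpha}}.
$$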
\begin{proof}[Proof of Lemma~\ref{lema-cota-annealed}] By the cycle gas representation and using the marginal distributions of $\nu_{\theta}$ we have \begin{equation*} {\mathbb E} \left[\nu_{\theta}(\widehat{K}_{f}^c(x)) \right] \leq {\mathbb E} \big[ \sum\limits_{\substack{\gamma \colon \gamma \ni x \\ H(\gamma)> f(x)}} \nu_{\theta}( {\mathbf 1}\{\gamma \in \eta \} ) \big] = {\mathbb E}\big[ \sum\limits_{\substack{\gamma \colon \gamma \ni x \\ H(\gamma)> f(x)}} (1-e^{-w(\gamma)})\big] \,. \end{equation*} We want to sum over ordered supports instead of cycles. Write the weight of each cycle as a function of its ordered support, and recall that for each ordered support $\bar{y}$, the number of cycles that have ordered support $\bar{y}$ is bounded above by $M_{\theta}(\bar{y})$, where $M_{\theta}(\bar{y})$ was defined in~\eqref{cotaM(y)}. Then, using the linearity of ${\mathbb E}$, the bound~\eqref{cota-expectation-M(y)} and $1-e^{-t} \leq t$, we obtain \begin{equation*} {\mathbb E} \big[\nu_{\theta}(\widehat{K}_{f}^c(x)) \big] \leq {\mathbb E}\big[ \sum_{m\geq 2} \sum\limits_{\substack{\bar{y} \colon \bar{y} \ni x \\ \bar{y}_i \neq \bar{y}_{i+1} \\ |\bar{y}|=m \\ H(\bar{y})> f(x)}} M_{\theta}(\bar{y}) (1-e^{-w(\bar{y})}) \big] \leq \sum_{m\geq 2} \sum\limits_{\substack{\bar{y} \colon \bar{y} \ni x \\ \bar{y}_i \neq \bar{y}_{i+1} \\ |\bar{y}|=m \\ H(\bar{y})> f(x)}} C_{\rho}^m w(\bar{y})\,. \end{equation*} Now, using that $H(\bar{y})> f(x)$ and the definition of $\varphi$ (see~\eqref{suma-pesos-largo-m}) we have \begin{equation*} {\mathbb E} \big[\nu_{\theta}(\widehat{K}_{f}^c(x)) \big] \leq e^{-\frac{\alpha}{2} f(x)} \sum_{m\geq 2} C_{\rho}^m \sum\limits_{\substack{\bar{y} \colon \bar{y} \ni x \\ \bar{y}_i \neq \bar{y}_{i+1} \\ |\bar{y}|=m}} e^{-\frac{\alpha}{2} H(\bar{y})} = e^{-\frac{\alpha}{2} f(x)} \sum_{m\geq 2} C_{\rho}^m \, \varphi\left(\alpha/2\right)^{m}. \end{equation*} Finally, the lemma follows from the fact that $C_{\rho} \varphi\left(\alpha/2\right)<1$; summing the geometric series, one can take $C(\rho,\alpha)= \frac{\left(C_{\rho} \varphi(\alpha/2)\right)^2}{1-C_{\rho} \varphi(\alpha/2)}$. \end{proof} \begin{lema} \label{lema-cota-quenched} Let $\rho$ and $\alpha$ satisfy the same conditions as in Lemma~\ref{lema-cota-annealed}. Given $\epsilon >0$, for almost every realization of $\theta$ there exists a function $f$ that depends on $\theta$, such that $\nu_{\theta}(\widehat{K}_{f}^c)<\epsilon$. \end{lema} \begin{proof} For each $n$, using~\eqref{cota-annealed} we pick $f_n(x)$ large enough such that ${\mathbb E} \big[\nu_{\theta}(\widehat{K}_{f_n}^c(x)) \big] \leq 1/(n^2 2^{\|x\|})$. If we define $f_n: {\mathbb Z}^d \to \mathbb{N}$ in the obvious way, we have: \begin{equation*} {\mathbb E} \big[\nu_{\theta}(\widehat{K}_{f_n}^c) \big] \leq {\mathbb E} \big[\sum_{x\in {\mathbb Z}^d}\nu_{\theta}(\widehat{K}_{f_n}^c(x)) \big] \leq \sum_{x\in {\mathbb Z}^d} \frac{1}{n^2} \frac{1}{2^{\|x\|}} = \frac{C}{n^2}\,. \end{equation*} So, we have a sequence of functions $\{f_n\}_{n \geq 1}$ such that ${\mathbb E} \big[ \sum_{n\geq 1} \nu_{\theta}(\widehat{K}_{f_n}^c) \big] <\infty \,.$ Therefore $\sum_{n\geq 1} \nu_{\theta}(\widehat{K}_{f_n}^c) <\infty$ for almost every realization of $\theta$. So, for almost every realization of $\theta$ there exists $n_0(\theta, \epsilon)$ such that $\nu_{\theta}(\widehat{K}_{f_n}^c) <\epsilon$ for all $n\geq n_0$. \end{proof} The following lemma is stated for the quadratic potential case; later we discuss the proof in the case of a general potential.
\begin{lema} \label{lema-Kf-compacto} Consider a function $f: {\mathbb Z}^d \to \mathbb{N}$ and the set $K_f:= \widehat{K}_f \cap S_{\theta}^F$, where $S_{\theta}^F$ is the finite cycle permutation space. Then, $K_f$ is a non-empty compact set. \end{lema} \begin{proof} The gas of cycles representation for the identity is the null configuration, thus $\text{id} \in \widehat{K}_f(x)$ for all $x$ and any choice of $f(x)$. Recall that a sequence of permutations $\{\sigma_n\}$ converges to $\sigma$ if and only if $\sigma_n(s)\to \sigma(s)$ for all $s\in \Omega_{\theta}$. Let $\{\sigma_n\}_{n\geq 1}$ be a sequence in $K_f$ and fix $s\in \Omega_{\theta}$. The definition of $K_f$ implies that \[ \|\sigma_n(s)-s\|^2 \leq \|X(\sigma_n(s))- X(s)\|^2 + R(s) \leq f(X(s)) + R(s),\] where $R(s)= \max\{\theta(x) \colon \|x-X(s)\|^2 \leq f(X(s))\}$. So, $\{\sigma_n(s)\}_{n\geq 1}$ is bounded and it has a convergent subsequence. Since $\Omega_{\theta}$ is countable and $S_{\theta}$ is a complete metric space, a Cantor diagonal argument shows that $\{\sigma_n\}$ has a subsequential limit $\sigma$. We also write $\{\sigma_n\}$ for the convergent subsequence. It only remains to prove that $\sigma$ is a finite cycle permutation. Suppose, by contradiction, that some point $s\in \Omega_{\theta}$ is contained in an infinite cycle, i.e., $\sigma^j(s)\neq s$ for all $j\in {\mathbb Z}\setminus\{0\}$. Choose a subsequence $\{{j_k}\}$ such that $X(\sigma^{j_k}(s))\neq X(\sigma^{j_{k-1}}(s))$ for all $k$. Since $\sigma_n \to \sigma$ we can pick $n$ large enough such that $\sigma_n^{j_k}(s) = \sigma^{j_k}(s)$ for all $k=1, \dots, k_0$, where $k_0=f(X(s))+1$. Then, writing $\gamma'$ for the cycle of $\sigma_n$ that contains $s$: \[H(\gamma')\geq \sum_{k=1}^{k_0} \|X(\sigma_n^{j_k}(s))-X(\sigma_n^{j_k-1}(s))\|^2 = \sum_{k=1}^{k_0} \|X(\sigma^{j_k}(s))-X(\sigma^{j_k-1}(s))\|^2 \geq f(X(s))+1\,,\] which contradicts that $\sigma_n\in K_f$. Hence, $\sigma$ is a finite cycle permutation. \end{proof} Recall that $S_{\theta}^F$ has a natural partial order, i.e., $\eta \leq \eta'$ if $\eta(\gamma)\leq \eta'(\gamma)$ for all $\gamma \in \Gamma_{\theta}$. So, an event $A\subset S_{\theta}^F$ is increasing when ${\mathbf 1}_A$ is an increasing function with respect to the partial order. \begin{lema} \label{lema-Kf-creciente} Set $K_f^c = \widehat{K}_f^c \cap S_{\theta}^F$. Then $K_f^c$ is an increasing event. \end{lema} \begin{proof} Fix $\eta \in \widehat{K}_f^c(x)$ and let $\eta'$ be such that $\eta \leq \eta'$. By definition of $\widehat{K}_f^c(x)$ there exists $\gamma\in \eta$ such that $x\in\gamma$ and $H(\gamma)>f(x)$. The fact that $\gamma\in \eta$ implies that $\gamma\in \eta'$, and so, using the same cycle $\gamma$, one proves that $\eta'\in \widehat{K}_f^c(x)$. \end{proof} The next result is a general fact from the theory of Gibbs measures, so we omit its proof. \begin{lema} \label{lim-debil-es-gibbs} Let $\{{\Lambda}_n\}_{n\geq 1} \Subset {\mathbb Z}^d$ be an increasing sequence such that ${\Lambda}_n \uparrow {\mathbb Z}^d$ and $\{G_{\theta, {\Lambda}_n}^{\text{id}}\}_{n\geq 1}$ converges weakly to a probability measure $\mu$. Then, $\mu$ is a Gibbs measure. \end{lema} \begin{lema} Let $\rho$ and $\alpha$ satisfy the conditions of Lemma~\ref{lema-cota-annealed}. Then for almost every realization of $\theta$ there exists a Gibbs measure $\mu_{\theta}$ related to temperature $\alpha$ and specifications defined in~\eqref{especificaciones}.
\end{lema} \begin{proof} First we prove that the family of specifications $\{G_{\theta, {\Lambda}}^{\text{id}}\}_{{\Lambda}\Subset {\mathbb Z}^d}$ is tight. By Lemma~\ref{dominacion-de-especificaciones} we know that $\nu_{\theta}$ stochastically dominates $G_{\theta,{\Lambda}}^{\text{id}}$ for all ${\Lambda}\Subset {\mathbb Z}^d$ and so, as $K_f^c$ is an increasing event, we obtain: \[\sup_{{\Lambda} \Subset {\mathbb Z}^d} G_{\theta, {\Lambda}}^{\text{id}}(K_f^c) \leq \nu_{\theta}(K_f^c)\,, \hspace{0.8cm} \theta\text{-a.s.} \] Given $\epsilon>0$, by Lemma~\ref{lema-cota-quenched} for almost every realization of $\theta$ there exists a function $f$ such that $\nu_{\theta}(K_f^c)<\epsilon$. Combining this with the previous inequality and the compactness of $K_f$, tightness follows. Now, we have a subsequential limit $\mu_{\theta}$ and Lemma~\ref{lim-debil-es-gibbs} shows that $\mu_{\theta}$ is a Gibbs measure. \end{proof} \subsection{Remarks for the general potential case} The previous results also hold when the Hamiltonian is given by a potential $V$ satisfying~\eqref{funcion-varphi-general} instead of the quadratic potential. The only relevant difference is in Lemma~\ref{lema-Kf-compacto}, where we prove that $K_f$ is compact; concretely, in the proof that the limit permutation $\sigma$ has only finite cycles in its decomposition (the other steps can be adapted). The other proofs work line by line, using $\varphi_V$ with $\alpha \geq \alpha_0$ in the statements instead of $\varphi$. To show that $\sigma$ is a finite cycle permutation we need to ensure that any infinite cycle has infinite energy under the Hamiltonian. In the case when $V$ is strictly positive on $[1,+\infty)$ the previous proof works, since each jump between points located at different sites makes a non-zero contribution to $H_V$. Otherwise, if $L_V\geq 1$, where $L_V= \sup\{\|x\| \colon V(\|x\|)=0\}$, there can exist infinite cycles with finite energy. To avoid this problem we take the density $\rho$ small enough to ensure that the event that there exists an infinite sequence of points $\{s_i\}_{i\in \mathbb{N}} \subset \Omega_{\theta}$ such that $\|X(s_i) - X(s_{i+1})\|\geq L_V$ has zero probability with respect to the environment. This is exactly our definition in~\eqref{L_V} of $\rho$ being a good density for the potential $V$, and it is the condition that appears in Theorem~\ref{Teo-V-general}. \section{Uniqueness of Gibbs measures}\label{section-uniqueness} Having proved the existence of a Gibbs measure, we now want to prove that it is unique. Specifically, we want to show that if $\mu$ and $\mu'$ are Gibbs measures supported on the finite cycle permutations then $\mu=\mu'$. For that we consider the product measure $\mu\otimes \mu'$, which is a Gibbs measure with respect to the product specifications $G_{\theta, {\Lambda}_1}^{\xi_1}\otimes G_{\theta, {\Lambda}_2}^{\xi_2}$ with $\xi_1$, $\xi_2$ finite cycle permutations and ${\Lambda}_1,{\Lambda}_2\Subset {\mathbb Z}^d$. As in the previous section we focus on the quadratic potential case. For a general potential $V$ the only difference is to use $\varphi_V$ instead of $\varphi$ in each statement. For the rest of this section $\mu$ and $\mu'$ will be Gibbs measures supported on finite cycle permutations. \begin{df} We say that $\Delta \subset {\mathbb Z}^d$ separates $\eta \in \mathbb{N}_0^{\Gamma_{\theta}}$ when for all $\gamma\in \eta$ we have $\{\gamma\} \subset \Delta$ or $\{\gamma\} \subset \Delta^c$.
For a pair $(\eta,\eta')$ we say that the pair is separated by $\Delta \subset {\mathbb Z}^d$ when both coordinates are separated by $\Delta$. \end{df} The separating set property is closed under unions. Indeed, let $\Delta_1$ and $\Delta_2$ be separating sets for $(\eta,\eta')$. If $\gamma\in \eta$ is such that $\{\gamma\}\subset \Delta_1$ or $\{\gamma\}\subset \Delta_2$, we have $\{\gamma\}\subset \Delta_1 \cup \Delta_2$. Otherwise, as $\Delta_1$ and $\Delta_2$ are separating sets, we have $\{\gamma\}\subset \Delta_1^c \cap \Delta_2^c$. So, $\Delta_1 \cup \Delta_2$ is a separating set for $\eta$ and the same holds for $\eta'$. Denote by $\Lambda_l$ the box $[-l,l]^d\cap {\mathbb Z}^d$. Let $A_n$ be the event that there exists $\Delta\Subset {\mathbb Z}^d$ such that $\Delta$ separates $(\eta,\eta')$ and $\Delta \supset \Lambda_n$. Note that $A_{n+1}\subset A_{n}$ and define $A=\cap_{n\geq 1} A_n$. Observe also that $A$ and $A_n$ are decreasing events with respect to the partial order of $\mathbb{N}_0^{\Gamma_{\theta}}$. Our goal is to prove that for sufficiently large $\alpha$ the event $A$ has full measure with respect to the product measure $\mu\otimes \mu'$. Then we use the existence of arbitrarily large separating sets to prove that $\mu$ and $\mu'$ are equal to the weak limit of specifications with identity boundary condition. We say that cycles $\gamma, \gamma' \in \Gamma_{\theta}$ are neighbors, and we write $\gamma\Join\gamma'$, if there exist $s$, $s'\in \Omega_{\theta}$ such that $s\in \gamma$, $s'\in \gamma'$ and $X(s)=X(s')$. In terms of ordered supports $\gamma$ and $\gamma'$ are neighbors when $\{[\gamma]\}\cap\{[\gamma']\}\neq \emptyset$. A path of cycles of length $n$ is a sequence of $n$ different cycles $\gamma_1, \dots,\gamma_n$ such that $\gamma_i \Join \gamma_{i+1}$ for $i=1,\dots,n-1$. The idea is to consider a random subgraph of $(\Gamma_{\theta}, \Join)$ and ask about percolation on it, i.e., the existence of an infinite path of cycles with positive probability. Fix $(\eta,\eta')\in {\mathbb N}^{\Gamma_{\theta}} \times {\mathbb N}^{\Gamma_{\theta}}$. We declare that $\gamma$ is open when $\eta(\gamma)+ \eta'(\gamma)\geq 1$. A path is open when it is composed of open cycles. A cycle is a trivial cycle if it only uses points located at the same site. We are interested in open paths that use non-trivial cycles. Fix $x_0\in {\mathbb Z}^d$ with $\theta(x_0)\neq 0$. Let $D(n)$ be the event that there exists an open path of length $n$ formed only by non-trivial cycles and for which the first cycle contains $x_0$. Of course $D(n+1)\subset D(n)$ for all $n$. There are paths of length $n$ that are not included in $D(n)$ since we only allow non-trivial cycles. However, an infinite path of cycles exists if and only if there exists an infinite path using only non-trivial cycles. To state the next lemma, recall that $r_0$ is the unique solution in $[0,1]$ of the equation $\frac{r}{(1-r)^2}-r=\frac{1}{2}$. Note that $r<r_0$ implies $\sum_{m\ge 2} m r^m = \frac{r}{(1-r)^2}-r<\frac{1}{2}$. \begin{lema} \label{no-percolacion-nu-times-nu} Suppose that $\rho\in(0,1/2)$ and $\alpha>0$ satisfy $C_{\rho} \varphi(\alpha)<r_0$, where $C_{\rho}$ is the constant that appears in Lemma~\ref{lema-cota-annealed} and $\varphi$ is the function defined in~\eqref{funcion-varphi-general}. Let $\xi$, $\xi'$ be finite cycle permutations and $\Lambda \Subset {\mathbb Z}^
d$. Consider the pair $(\eta,\eta')$ independently sampled according to $\nu_{\theta,\Lambda}^{\xi} \otimes \nu_{\theta,\Lambda}^{\xi'}$ and the graph structure induced by it on $\Gamma_{\theta}$. Then for almost every realization of $\theta$, the event that there exists an infinite open path of cycles has zero probability with respect to $\nu_{\theta,\Lambda}^{\xi} \otimes \nu_{\theta,\Lambda}^{\xi'}$. \end{lema} \begin{proof} It is sufficient to show that $\lim_{n\to +\infty} \nu_{\theta}\otimes \nu_{\theta}(D(n))=0$. Indeed, there is a natural coupling between $\nu_{\theta}\otimes \nu_{\theta}$ and $\nu_{\theta,\Lambda}^{\xi} \otimes \nu_{\theta,\Lambda}^{\xi'}$ such that the former is stochastically dominated by the latter and the two realizations differ only in a finite number of coordinates (see the remark below~\eqref{generador-free-xi}). So, the graph induced by the realization of $\nu_{\theta,\Lambda}^{\xi} \otimes \nu_{\theta,\Lambda}^{\xi'}$ has more open cycles, but only finitely many additional ones. So, the existence of an infinite open path under one measure is equivalent to the same event for the other. Now we bound the annealed probability of $D(n)$ with respect to $\nu_{\theta} \otimes \nu_{\theta}$. Using the marginal distributions and independence, we obtain \begin{equation*} {\mathbb E}[\nu_{\theta}\otimes \nu_{\theta}(D(n))] \leq {\mathbb E}[\sum_{\substack{ \gamma_1, \dots,\gamma_n}} \prod_{i=1}^n(1-e^{-2w(\gamma_i)})] \leq {\mathbb E} [\sum_{\substack{ \gamma_1, \dots,\gamma_n}} \prod_{i=1}^n 2w(\gamma_i)]\,, \end{equation*} where the sum denoted by $\sum_{\gamma_1, \dots,\gamma_n}$ is over all paths of length $n$ formed by non-trivial cycles and for which $x_0\in \gamma_1$. Any non-trivial cycle has an ordered support, so we will sum over sequences of ordered supports instead of cycles. Concretely, we sum over all sequences of $n$ ordered supports $\bar{y}_1, \dots, \bar{y}_n$ such that $x_0\in\{\bar{y}_1\}$ and $\{\bar{y}_i\}\cap \{\bar{y}_{i+1}\} \neq \emptyset$ for all $i$. This sum is denoted by $\sum_{\substack{\bar{y}_1, \dots, \bar{y}_n}}$. Recall that $M_{\theta}(\bar{y}_1 \dots \bar{y}_n)$ is an upper bound for the number of $n$-tuples of cycles $\gamma_1,\dots,\gamma_n$ that have ordered supports $\bar{y}_1,\dots,\bar{y}_n$ respectively (see Remark~\ref{cota-indep-orden}). Combining these with the bound obtained in~\eqref{cota-expectation-M(y)} we have \begin{equation*} {\mathbb E}[\nu_{\theta}\otimes \nu_{\theta}(D(n))] \leq {\mathbb E}[\sum_{\substack{\bar{y}_1, \dots, \bar{y}_n}} M_{\theta}(\bar{y}_1 \dots \bar{y}_n) \prod_{i=1}^n 2w(\bar{y}_i)] \leq \sum_{\substack{\bar{y}_1, \dots, \bar{y}_n}} \prod_{i=1}^n 2C_{\rho}^{|\bar{y}_i|}w(\bar{y}_i)\,. \end{equation*} To estimate the last sum we need to consider all possibilities for which $\bar{y}_n$ shares a site with $\bar{y}_{n-1}$, and after this, all possibilities such that $\bar{y}_{n-1}$ shares a site with $\bar{y}_{n-2}$ and so on. Using that $C_{\rho}\varphi(\alpha)<r_0<1$, we obtain \begin{equation*} {\mathbb E}[\nu_{\theta}\otimes \nu_{\theta}(D(n))] \leq 2^n \Big[\sum_{m\ge 2} (C_{\rho} \varphi(\alpha))^m \Big] \Big[\sum_{m\ge 2} m(C_{\rho} \varphi(\alpha))^m \Big]^{n-1}. \end{equation*} The choice of $r_0$ implies that $\sum_{m\ge 2} m(C_{\rho} \varphi(\alpha))^m<1/2$ and it follows that \begin{equation*} {\mathbb E}[\sum\limits_{n\geq 1} \nu_{\theta}\otimes \nu_{\theta}(D(n))] = \sum\limits_{n\geq 1} {\mathbb E} [\nu_{\theta}\otimes \nu_{\theta} (D(n))] < \infty\,.
\end{equation*} Finally, for almost every realization of $\theta$ we have $\nu_{\theta}\otimes \nu_{\theta}(D(n)) \to 0$ when $n\to +\infty$. \end{proof} \begin{lema} \label{nu-times-nu(A)=1} Let $\rho$ and $\alpha$ be as in Lemma \ref{no-percolacion-nu-times-nu}. Recall that $A_n$ is the event that there exists a separating set $\Delta$ that contains $\Lambda_n$, and $A = \cap_{n\ge 1} A_n$. Let $\xi$, $\xi'$ be finite cycle permutations and $\Lambda\Subset {\mathbb Z}^d$. Then for almost every realization of $\theta$ we have $\nu_{\theta,\Lambda}^{\xi} \otimes\nu_{\theta,\Lambda}^{\xi'}(A)=1$. \end{lema} \begin{proof} Define $\Delta_0(\eta,\eta')$ as \[\Delta_0(\eta,\eta')= \sup \{\Delta \Subset {\mathbb Z}^d \colon 0 \in \Delta,\, \Delta \text{ separates } (\eta,\eta')\}\,.\] Such $\Delta_0$ exists because the separating set property is closed under finite unions. By the event $\{\Delta_0 = {\mathbb Z}^d\}$ we mean that every site with non-zero multiplicity is in $\Delta_0$. Note that the events $A$ and $\{\Delta_0 = {\mathbb Z}^d\}$ are equivalent. In fact, if $A$ does not hold, there exists $n$ such that no $\Delta \supset \Lambda_n$ separates $(\eta, \eta')$. So, $\Delta_0$ cannot contain $\Lambda_n$. Conversely, suppose that $x \notin \Delta_0$ with $\theta(x)\neq 0$ and the event $A$ holds. So, there exists a finite set $\Delta$ such that $\Delta$ separates $(\eta, \eta')$ and $\Delta \supset \Lambda_n$ where $n$ is fixed such that $x\in {\Lambda}_n$. Then $\Delta_0\cup \Delta$ is a separating set that contradicts the maximality of $\Delta_0$. Assume that there exists $x\in\Delta_0^c$ with $\theta(x)\neq 0$. In such case, there exists $s\in \Omega_{\theta}$ with $X(s)=x$ such that $(\eta(s),\eta'(s)) \neq (s,s)$. Otherwise, $\Delta_0\cup \{x\}$ contains $0$ and it is a separating set for $(\eta,\eta')$ larger than $\Delta_0$. Call $\gamma_1$ the cycle from $\eta$ or $\eta'$ such that $\gamma_1(s)\neq s$. Clearly $\gamma_1$ is open and as $\Delta_0$ is a separating set we have $\{\gamma_1\}\subset \Delta_0^c$. Consider $\Delta_1=\Delta_0 \cup \{\gamma_1\}$. It cannot be a separating set for $(\eta,\eta')$ by the maximality of $\Delta_0$. So, there exists $\gamma_2$ in $\eta$ or $\eta'$ such that $\Delta_1 \cap \{\gamma_2\} \neq \emptyset$ and $\Delta_1^c \cap \{\gamma_2\}\neq \emptyset$. From the second intersection, we deduce $\Delta_0^c \cap \{\gamma_2\} \neq \emptyset$ but as $\Delta_0$ is a separating set we have $\{\gamma_2\}\subset \Delta_0^c$. We have also $\{\gamma_2\}\cap\{\gamma_1\}^c\neq \emptyset$, so $\gamma_2\neq\gamma_1$. From the first intersection we obtain that $\gamma_2\Join \gamma_1$, because $\{\gamma_2\}\cap\{\gamma_1\}\neq \emptyset$ and $\gamma_2\neq\gamma_1$. Note that $\gamma_2$ is open. Now, suppose that there are $n$ different open cycles $\gamma_1, \dots, \gamma_n$ such that $\{\gamma_i\}\subset \Delta_0^c$ for all $i=1,\dots,n$ and for each $\gamma_i$ there exists $j\in \{1,\dots, i-1\}$ such that $\gamma_j\Join \gamma_i$. Denote by $\Delta_n$ the set $\Delta_0 \cup \left(\cup_{i=1}^n \{\gamma_i\}\right)$. Since $\Delta_n$ cannot separate $(\eta,\eta')$ there exists a cycle $\gamma_{n+1}$ in $\eta$ or $\eta'$ (so, $\gamma_{n+1}$ is open) such that $\Delta_n \cap \{\gamma_{n+1}\} \neq \emptyset$ and $\Delta_n^c \cap \{\gamma_{n+1}\} \neq \emptyset$. As $\Delta_0$ is a separating set, the second condition implies that $\{\gamma_{n+1}\} \subset \Delta_0^c$ and $\gamma_{n+1}\neq \gamma_{j}$ for all $j=1,\dots,n$.
Then the condition $\Delta_n \cap \{\gamma_{n+1}\} \neq \emptyset$ says that $\gamma_{n+1} \Join \gamma_j$ for some $j=1,\dots, n$. Thus, there is a sequence of different open cycles $\{\gamma_i\}_{i\in \mathbb{N}}$ such that each $\gamma_n$ is the neighbor of some $\gamma_j$ with $j<n$. Then all cycles from the sequence are in the same connected component, so it is infinite. Hence, we have proved that $A^c \subset \{\text{there exists an infinite open path of cycles containing }x\}\,,$ and by Lemma~\ref{no-percolacion-nu-times-nu} it follows that $\nu_{\theta,{\Lambda}}^{\xi}\otimes \nu_{\theta,{\Lambda}}^{\xi'}(A^c)=0$ for almost every realization of $\theta$. \end{proof} \begin{lema} \label{mu-times-mu(A)=1} Let $\rho$ and $\alpha$ be as in Lemma~\ref{no-percolacion-nu-times-nu}. Let $\mu$ and $\mu'$ be Gibbs measures that concentrate on finite cycle permutations. Then $\mu\otimes \mu'(A)=1$. \end{lema} \begin{proof} As $A_{n+1}\subset A_{n}$ it is sufficient to show that $\lim_{n\to \infty} \mu\otimes \mu'(A_n)=1$. Using that $A_n$ is a decreasing event together with the definition of Gibbs measures, we obtain for $\Lambda\Subset {\mathbb Z}^d$: \begin{equation*} \mu\otimes \mu'(A_n) = \int G_{\theta, \Lambda}^{\xi} \otimes G_{\theta, \Lambda}^{\xi'}(A_n) \text{\rm d}\mkern0.5mu \mu \otimes \mu'(\xi,\xi') \geq \int \nu_{\theta, \Lambda}^{\xi} \otimes \nu_{\theta, \Lambda}^{\xi'}(A_n) \text{\rm d}\mkern0.5mu \mu \otimes \mu'(\xi,\xi') \,. \end{equation*} To complete the proof, we take the limit as $n$ tends to $\infty$ in the last term and use Lemma~\ref{nu-times-nu(A)=1}. \end{proof} \begin{lema} \label{unicidad-discreto} Consider $\rho\in(0,1/2)$ and $\alpha>0$ such that $C_{\rho}\varphi(\alpha)<r_0$, where $r_0$ is the constant defined in~\eqref{r_0}. If $\mu$ and $\mu'$ are Gibbs measures supported on finite cycle permutations, then $\mu=\mu'$. \end{lema} \begin{proof} It is sufficient to prove that $\mu(B)=\mu'(B)$ for a local event $B$. Let $J_{{\Lambda}}(\Delta)$ be the event that $\Delta\Subset {\mathbb Z}^d$ is the first separating set that contains ${\Lambda}$. The existence of $\Delta$ is guaranteed by Lemma~\ref{mu-times-mu(A)=1} and so, $\sum_{\Delta\supset {\Lambda}, \, \Delta \Subset {\mathbb Z}^d} \mu \otimes \mu'( J_{{\Lambda}}(\Delta))=1\,.$ Now, suppose that for any finite cycle boundary conditions $\xi$ and $\xi'$, we have \begin{equation} \label{igualdad-especificaciones-producto} G_{\theta,\Delta}^{\xi}\otimes G_{\theta,\Delta}^{\xi'}\left((B\times S_{\theta}^F)\cap J_{{\Lambda}}(\Delta)\right) = G_{\theta,\Delta}^{\xi}\otimes G_{\theta,\Delta}^{\xi'}\left((S_{\theta}^F \times B)\cap J_{{\Lambda}}(\Delta) \right)\,. \end{equation} Then, integrate with respect to $\mu\otimes \mu'$ and use that $\mu$ and $\mu'$ are supported on finite cycle permutations to obtain \[\mu \otimes \mu' \left((B\times S_{\theta}^F)\cap J_{{\Lambda}}(\Delta)\right) = \mu \otimes \mu'\left((S_{\theta}^F \times B)\cap J_{{\Lambda}}(\Delta)\right)\,,\] and summing over all choices of $\Delta$, we show $\mu(B) = \mu'(B)$. So, it remains to prove~\eqref{igualdad-especificaciones-producto}. Observe that if $\Delta$ does not separate $(\xi,\xi')$, both sides of~\eqref{igualdad-especificaciones-producto} are 0. If $\Delta$ separates, the boundary conditions $(\xi,\xi')$ have the same effect as the identity boundary conditions. Hence, we can replace $\xi$ and $\xi'$ by $\text{id}$.
Since the event $J_{{\Lambda}}(\Delta)$ and the measure $G_{\theta,\Delta}^{\text{id}}\otimes G_{\theta,\Delta}^{\text{id}}$ are invariant under the map $(\sigma,\sigma') \mapsto (\sigma',\sigma)$, equation~\eqref{igualdad-especificaciones-producto} holds. \end{proof} \section{Existence and uniqueness on the continuum} \label{section-continuum} In this section we study the existence of a Gibbs measure when the set of points is a realization of a homogeneous Poisson point process on ${\mathbb R}^d$ with low intensity. The Hamiltonian $H$ is given by the quadratic potential as in~\eqref{Hamiltonian-intro-general}. We understand the finite volume $\Lambda$ as a compact subset of ${\mathbb R}^d$ and we write $\Lambda\Subset {\mathbb R}^d$ for it. The thermodynamic formalism has definitions analogous to those in the random lattice case. Let $\Omega \subset {\mathbb R}^d$ be the realization of a Poisson point process with density $\rho$. The notation is the same as in the previous sections, except that we write $\Omega$ instead of $\Omega_{\theta}$ or $\theta$. So, $S_{\Omega}$ is the permutation space, $S_{\Omega}^F$ is the set of finite cycle permutations and $\Gamma_{\Omega}$ is the space of finite cycles. The support of a cycle has the same definition as before. However, the notion of ordered support does not make sense in the continuous setting, since each jump makes a non-zero contribution to the Hamiltonian. Section~\ref{section-domination} works as in the previous case because we did not use anything specific to the environment there. So, the free process and the loss network of cycles have the same definitions and properties as in the discrete case. In particular, the specification at finite volume $\Lambda$ corresponding to a finite cycle boundary condition is stochastically dominated by the corresponding invariant measure of the free process. So, to show the existence we will prove tightness of the family of specifications $\{G_{\Omega,\Lambda}^{\text{id}}\}_{\Lambda\Subset {\mathbb R}^d}$. To prove uniqueness we will apply the existence of separating sets as in Lemma~\ref{mu-times-mu(A)=1}. We want to construct a coupling between the free process in the continuum setup and the free process of a certain discrete model on ${\mathbb Z}^d$ with Poisson multiplicities, in such a way that the former is dominated by the latter. Then we are able to apply the results of the previous sections. If $z=(z_1,\dots,z_d) \in {\mathbb R}^d$ we write $\floor{z}=(\floor{z_1},\dots,\floor{z_d})\in {\mathbb Z}^d$. Let $\Omega$ be a homogeneous Poisson process on ${\mathbb R}^d$ with intensity $\rho$. For $x\in {\mathbb Z}^d$, let $\theta(x)$ be the number of points in $\Omega \cap I_x$, where $I_x = x + [0,1)^d$. Then $\theta=\{\theta(x)\}_{x\in {\mathbb Z}^d}$ is an i.i.d. sequence of Poisson($\rho$) random variables. For each $x$ such that $\theta(x) \neq 0$ we tag points of $I_x$ from $1$ to $\theta(x)$ with some rule. For example, using the relative order of the distance to $x$. So, there is a bijection between $\Omega$ and $\Omega_{\theta}$, where $\Omega_{\theta}\subset {\mathbb Z}^d\times \mathbb{N}$ is the set associated to $\theta$, such that each $z\in\Omega$ is mapped to some $(\floor{z},i)$ with $i\in\{1,\dots,\theta(\floor{z})\}$. This bijection also induces a bijective map between the finite cycle spaces. We denote this bijection by $\Psi$. The following lemma summarizes these claims. \begin{lema} The map $\Psi \colon S_{\Omega} \to S_{\theta}$ is a homeomorphism.
Further, it induces a homeomorphism between $\mathbb{N}_0^{\Gamma_{\Omega}}$ and $\mathbb{N}_0^{\Gamma_{\theta}}$ by the relation $\eta \mapsto \varsigma$ where $\varsigma(\gamma)= \eta(\Psi^{-1}(\gamma))$ for all $\gamma\in \Gamma_{\theta}$. \end{lema} The following lemma tells us what potential we can choose to compare the discrete and continuum models. \begin{lema} For $x, z\in{\mathbb R}^d$ we have: $\|x-z\|^2 \geq V(\floor{x}-\floor{z})$, where $V$ is defined by $V(x)=\max\{\|x\|^2-2\sqrt{d}\|x\|,0\}$. \end{lema} \begin{proof} For $x\in {\mathbb R}^d$ write $x= \floor{x} + \tilde{x}$ with $\tilde{x}\in [0,1)^d$. So, using Cauchy-Schwarz we have \[ \|x-z\|^2 \geq \|\floor{x} - \floor{z}\|^2 - 2 \|\floor{x} - \floor{z}\| \|\tilde{x} - \tilde{z}\| \geq \|\floor{x} - \floor{z}\|^2 - 2 \sqrt{d}\|\floor{x} - \floor{z}\|.\qedhere\] \end{proof} For the rest of this section, $V$ will denote the potential defined in the previous lemma and $H_V$ its associated Hamiltonian. Note that $H(\gamma)\geq H_V(\Psi(\gamma))$ for all $\gamma\in \Gamma_{\Omega}$, so the respective weights satisfy $w(\gamma)\leq w_V(\Psi(\gamma))$ for all $\gamma \in \Gamma_{\Omega}$. Denote by $(\eta_t^o \colon t\in {\mathbb R})$ and $(\varsigma_t^o \colon t\in {\mathbb R})$ the stationary constructions of free processes for the continuum model with quadratic potential and for the discrete model related to $V$ respectively. By the relation between the weights, we can give a coupling between the processes such that $\Psi(\eta_t^o) \leq \varsigma_t^o$ for all $t$. Each free process is constructed as a function of a Poisson process in the cycle spaces, so it is sufficient to couple both Poisson processes. Denote by $\mathcal{N}$ and $\mathcal{N}_V$ the Poisson processes corresponding to $(\eta_t^o \colon t\in {\mathbb R})$ and $(\varsigma_t^o \colon t\in {\mathbb R})$ respectively. It is sufficient to construct $\mathcal{N}$ as an independent thinning of $\mathcal{N}_V$ as follows: a mark $(\Psi(\gamma),t,s) \in\mathcal{N}_V$ induces a mark $(\gamma,t,s)$ in the process $\mathcal{N}$ with probability $w(\gamma)/w_V(\Psi(\gamma))$, independently for each mark. Let $K\subset S_{\Omega}^F \subset \{0,1\}^{\Gamma_{\Omega}}$ be a decreasing event. Since Lemma~\ref{dominacion-de-especificaciones} also holds in the continuum setting, the specifications are dominated by the Poisson measure $\nu_{\Omega}$. Using this fact together with the coupling between the two free processes, we have that for every compact set $\Lambda \subset {\mathbb R}^d$: \begin{equation}\label{discreto-domina-continuo} G_{\Omega,\Lambda}^{\text{id}} (K^c)\leq \nu_{\Omega} (K^c)\leq \nu_{\theta}(\Psi(K^c)). \end{equation} For the discrete model we define $\widehat{K}_f = \bigcap\limits_{x\in {\mathbb Z}^d} \widehat{K}_f(x) \subset \mathbb{N}_0^{\Gamma_{\theta}}$ where \[\widehat{K}_{f}(x)= \{\eta\in \mathbb{N}_0^{\Gamma_{\theta}} \colon \forall \,\gamma\in\eta \text{ such that } x\in \gamma \text{ we have } H_V(\gamma) \leq f(x)\} \,.\] It is the same set as in the previous section, but now associated to $H_V$. The set $K_f= \widehat{K}_f \cap S_{\theta}^{F}$ is a decreasing event by Lemma~\ref{lema-Kf-creciente} and if the density $\rho$ is good for $V$, $K_f$ is a non-empty compact set for almost every realization of $\theta$. For the rest of the section we assume that $\rho$ is good for $V$. Note that $\Psi^{-1}(K_f)$ is also a decreasing event and a non-empty compact set.
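As an aside, the following minimal sketch (illustrative only; the function name and the Python setting are ours, and tagging by the distance to the corner of the cube is just one admissible rule, as noted above) implements the discretization $z\mapsto(\lfloor z\rfloor,i)$ that underlies the bijection $\Psi$ introduced above.
\begin{verbatim}
from collections import defaultdict
from math import floor, dist

def discretize(points):
    """Map each point z of a finite configuration in R^d to (floor(z), tag),
    tagging the points inside each unit cube I_x = x + [0,1)^d by increasing
    distance to the corner x.  Returns the multiplicities theta and the map."""
    cells = defaultdict(list)
    for z in points:
        cells[tuple(floor(c) for c in z)].append(z)    # group points by site
    theta = {x: len(pts) for x, pts in cells.items()}  # theta(x) = #(Omega cap I_x)
    psi = {}
    for x, pts in cells.items():
        for i, z in enumerate(sorted(pts, key=lambda p: dist(p, x)), start=1):
            psi[z] = (x, i)                            # the bijection z -> (x, i)
    return theta, psi

# Example in R^2: two of the three points share the cube of site (0, 0)
theta, psi = discretize([(0.2, 0.7), (0.9, 0.1), (2.5, -1.3)])
# theta == {(0, 0): 2, (2, -2): 1}
\end{verbatim}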
Hence, equation~\eqref{discreto-domina-continuo} implies that for almost every realization of $\Omega$ we have \[G_{\Omega,\Lambda}^{\text{id}} (\Psi^{-1}(K_f^c))\leq \nu_{\Omega} (\Psi^{-1}(K_f^c))\leq \nu_{\theta}(K_f^c)\,.\] So, given $\epsilon>0$, Lemma~\ref{lema-cota-quenched} provides a function $f$ such that $\nu_{\theta}(K_f^c)<\epsilon$. \begin{lema} Let $\rho \in (0,1/2)$ be such that $\rho$ is good for $V$. Suppose that $\rho$ and $\alpha>0$ satisfy $C_{\rho}\varphi_V(\alpha/2)<1$, where $C_{\rho}= \frac{\rho e^{-\rho+\frac{1}{2}}}{1-2\rho}$ and $\varphi_V$ was defined in~\eqref{funcion-varphi-general}. Then the family $\{ G_{\Omega,\Lambda}^{\text{id}} \}_{\Lambda \Subset {\mathbb R}^d}$ is tight and there exists a Gibbs measure $\mu$ that concentrates on finite cycle permutations. \end{lema} Now, we want to establish the uniqueness among Gibbs measures that concentrate on finite cycle permutations. To prove it, we use the existence of arbitrarily large separating sets for the discrete model with potential $V$. We say that a compact set $\Delta \subset {\mathbb R}^d$ is a separating set for $\eta\in \mathbb{N}_0^{\Gamma_{\Omega}}$ if $\partial \Delta \cap \Omega = \emptyset$ and for any $\gamma\in\eta$ we have $\{\gamma\}\subset \Delta$ or $\{\gamma\}\subset \Delta^c$. We say that $\Delta$ is a separating set for the pair $(\eta,\eta')$ if it is a separating set for $\eta$ and $\eta'$. Given $z_1,\dots, z_n \in \Omega$, we can pick a compact set $\Lambda\subset {\mathbb R}^d$ such that $\partial \Lambda \cap \Omega = \emptyset$ and $\Lambda \cap \Omega = \{z_1,\dots, z_n\}$. There are many ways to choose $\Lambda$; fix one of them. Now, given $\Delta \Subset {\mathbb Z}^d$ define $\Psi^{-1}(\Delta)$ as the previously fixed compact set that contains $\cup_{x\in \Delta} (\Omega \cap I_x)$. Note that, if $\Delta \Subset {\mathbb Z}^d$ is a separating set for $\varsigma \in \mathbb{N}_0^{\Gamma_{\theta}}$, the compact set $\Psi^{-1}(\Delta) \subset {\mathbb R}^d$ is also a separating set for $\eta=\Psi^{-1}(\varsigma)\in \mathbb{N}_0^{\Gamma_{\Omega}}$. This fact can be extended to pairs of configurations. Let $A$ be the event that there exists a sequence of compact sets increasing to ${\mathbb R}^d$, each of which is a separating set for the pair $(\eta,\eta')\in {\mathbb N}_0^{\Gamma_{\Omega}}\times{\mathbb N}_0^{\Gamma_{\Omega}}$. By the coupling between continuum and discrete free processes and Lemma~\ref{nu-times-nu(A)=1} about the existence of arbitrarily large separating sets for the discrete model, one shows that, almost surely with respect to the environment, $\nu_{\Omega, \Lambda}^{\xi} \otimes \nu_{\Omega, \Lambda}^{\xi'}(A)=1$ for any finite $\Lambda$ and any pair $(\xi, \xi')$ of finite cycle permutations in ${\mathcal{S}}_{\Omega}$. Lemma~\ref{mu-times-mu(A)=1} also holds in the continuum context, since it only uses the definition of Gibbs measures and the domination of specifications by the corresponding free process. So, if $\mu$ and $\mu'$ are Gibbs measures in the continuous model that concentrate on the finite cycle permutations, we have \begin{equation*} \mu\otimes\mu'( \exists \{\Delta_j\}_{j\in {\mathbb N}} \uparrow {\mathbb R}^d \text{ such that } \Delta_j \text{ is a separating set}) = 1. \end{equation*} Hence, the existence of an increasing sequence of separating sets in the continuum model is proved. \begin{lema} Let $\rho \in (0,1/2)$ be such that $\rho$ is good for $V$.
Suppose that $\rho$ and $\alpha$ satisfy $C_{\rho}\varphi_V(\alpha)<r_0$, where $r_0$ is the solution of the equation~\eqref{r_0}. Then, if $\mu$ and $\mu'$ are Gibbs measures supported on finite cycle permutations we have $\mu=\mu'$. \end{lema} \begin{proof} The proof of Lemma~\ref{unicidad-discreto} applies without changes. \end{proof} \section*{Appendix} In this appendix we prove a bound for the number of cycles that have the same ordered support. Recall that the ordered support of a cycle $\gamma$, defined in~\eqref{ord-supp}, is the projection of $\gamma$ to ${\mathbb Z}^d$ erasing consecutive repetitions of sites. Let $\bar{y}\in ({\mathbb Z}^d)^m$ be such that $\bar{y}_i\neq \bar{y}_{i+1}$; it is a possible ordered support. Recall that $N_{\theta}(\bar{y})$ is the number of cycles whose ordered support is $\bar{y}$. Write $\{\bar{y}\}$ for the set of coordinates of $\bar{y}$ without repetitions. For $z\in \{\bar{y}\}$ let $k_z(\bar{y})= \# \{i \colon \bar{y}_i=z\}$ be the number of times that $z$ appears in the ordered support $\bar{y}$, i.e., the number of effective visits to site $z$, in the sense that each of them contributes non-trivially to the Hamiltonian. If $\bar{y}$ is such that $k_z>\theta(z)$ for some $z\in\{\bar{y}\}$, then the number of cycles that have $\bar{y}$ as ordered support is zero. The same happens if $\theta(z)=0$ for some $z\in \{\bar{y}\}$. Therefore, suppose that $\bar{y}$ is such that $0<k_z\leq \theta(z)$ for all $z\in \{\bar{y}\}$. A cycle $\gamma$ with $[\gamma]=\bar{y}$ can use the site $z$ at least $k_z$ times and at most $\theta(z)$ times. Let $a\in\{k_z,\dots,\theta(z)\}$ be the number of times that $\gamma$ uses $z$. There are $\binom{\theta(z)}{a} a!$ ways to choose, in order, $a$ different points of $\Omega_{\theta}$ located at $z$. Then the $a$ different points have to be distributed among the $k_z$ coordinates of $\bar{y}=[\gamma]$ corresponding to the site $z$, and all of the $k_z$ coordinates must be used. Thus, there are $\binom{a-1}{k_z-1}$ ways to do it (this is the classical problem of placing $a$ balls into $k_z$ boxes with at least one ball per box). So, for the number of cycles that have $\bar{y}$ as ordered support we have \begin{equation} \label{cotaM(y)} N_{\theta}(\bar{y}) \leq \sum\limits_{\substack{k_1 \leq a_1 \leq \theta(z_1)\\ \vdots \\ k_{l} \leq a_{l} \leq \theta(z_{l})}} \prod_{j=1}^{l} \binom{\theta(z_j)}{a_j} a_j! \binom{a_j-1}{k_j-1} \leq \prod_{j=1}^{l} \left( \frac{e^{\frac{1}{2}}}{2} \, \theta(z_j)! \, 2^{\theta(z_j)} {\mathbf 1}_{\{\theta(z_j)\neq 0\}} \right) := M_{\theta}(\bar{y})\,, \end{equation} where $\{\bar{y}\}=\{z_1,\dots,z_l\}$ and $k_j=k_{z_j}$. For density $\rho\in(0,1/2)$, the bound~\eqref{cota-expectation-M(y)} on the expectation of $M_{\theta}(\bar{y})$ under ${\mathbb P}$ can be obtained using the independence of the multiplicities and their distribution. Indeed, \begin{align*} {\mathbb E}[M_{\theta}(\bar{y})] = {\mathbb E}\left[\frac{e^{\frac{1}{2}}}{2} \theta(z)!\,2^{\theta(z)} {\mathbf 1}_{\{\theta(z)\neq 0 \}} \right]^{l} = \left(\sum\limits_{i\geq 1} \frac{e^{\frac{1}{2}}}{2} 2^{i} \, e^{-\rho}\, \rho^i \right)^l \leq \left(\frac{\rho e^{-\rho+\frac{1}{2}}}{1-2\rho}\right)^{|\bar{y}|} \,. \end{align*} \begin{obs} \label{cota-indep-orden} The upper bound $M_{\theta}(\bar{y})$ does not depend on the relative order of the coordinates of $\bar{y}$.
Hence, if we need an upper bound for the number of pairs of cycles $(\gamma,\gamma')$ such that $[\gamma]= \bar{y}$ and $[\gamma']=\bar{y}'$, we can use the upper bound $M_{\theta}(\bar{y}\bar{y}')$, where $\bar{y}\bar{y}'$ denotes the concatenation of the vectors $\bar{y}$ and $\bar{y}'$, treated as a single ordered support. \end{obs} \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} The notion of a thermal equilibrium state of a macroscopic system (say, one with $N>10^{20}$ degrees of freedom) is basic to thermodynamics. Its existence is Postulate 1 in the Tisza--Callen formulation of thermodynamics \cite{Ca}. Informally, one can use Onsager's description: \begin{quotation} These ``thermodynamic" states are typically defined as states of ``equilibrium" under specified restraints in composition, energy, and external boundary conditions, in that no spontaneous change can occur in the system as long as the constraints remain fixed. \cite{Ons} (quotation marks in original) \end{quotation} One would, of course, also like to have a microscopic description of what it means for a system to be in thermal equilibrium in terms of the micro-state considered in statistical mechanics, i.e., in terms of the phase space $\Gamma$ of a classical system or the Hilbert space $\mathscr{H}$ of a quantum system. When speaking of thermal equilibrium, one often refers to a \emph{thermodynamic ensemble}, which corresponds classically to a probability distribution over $\Gamma$ and quantum-mechanically to a density operator on $\mathscr{H}$. For example, the canonical ensemble at inverse temperature $\beta$ has, classically, the density function \begin{equation}\label{rhobetadef} \rho^{(\beta)}(X) = \frac{1}{Z} e^{-\beta H(X)} \end{equation} for any $X\in \Gamma$, with normalizing constant $Z$ and Hamiltonian function $H:\Gamma\to\mathbb{R}$. In quantum mechanics, the canonical ensemble corresponds to the density operator \begin{equation}\label{hatrhobetadef} \hat\rho^{(\beta)} = \frac{1}{Z} e^{-\beta \hat H} \end{equation} with a different normalizing constant $Z$ and Hamiltonian operator $\hat H$ on $\mathscr{H}$. Likewise, the micro-canonical ensemble is, classically, the uniform density $\rho^{\mathrm{mc}}$ over a micro-canonical energy shell \begin{equation}\label{Gammamc} \Gamma_{\mathrm{mc}} = \Bigl\{X\in\Gamma: E-\Delta E< H(X)\leq E\Bigr\} \end{equation} whose width $\Delta E$ represents the macroscopic resolution of energy. In quantum mechanics, the micro-canonical ensemble corresponds to the density operator \begin{equation}\label{hatrhomcdef} \hat\rho^{\mathrm{mc}}= \frac{1}{\dim \mathscr{H}_{\mathrm{mc}}} \hat{P}_{\mathrm{mc}}\,, \end{equation} where $\mathscr{H}_{\mathrm{mc}}$, also called the micro-canonical energy shell, is the subspace of $\mathscr{H}$ spanned by the eigenvectors of $\hat{H}$ with eigenvalue between $E-\Delta E$ and $E$, and $\hat{P}_{\mathrm{mc}}$ is the projection to $\mathscr{H}_{\mathrm{mc}}$. However, such an ensemble does not answer the need for a definition of ``thermal equilibrium,'' as one often wants to consider an individual closed, macroscopic system in thermal equilibrium. For example, we want to know whether \emph{this particular} thermos bottle of coffee is in thermal equilibrium. Put differently, we assume an ``individualist'' attitude, as opposed to the ``ensemblist'' attitude \cite{GLTZ10}. An individual system corresponds classically to a point in phase space, rather than to a distribution over phase space. Also in quantum mechanics, one often wants to regard a system in a pure state $\ket{\psi}$ as being in thermal equilibrium, while its density matrix $\hat\rho=\pr{\psi}$ is far away from the $\hat\rho^{(\beta)}$ of \eqref{hatrhobetadef} and the $\hat\rho^{\mathrm{mc}}$ of \eqref{hatrhomcdef}. 
This is certainly possible; in fact, it has been an active field of research for a number of years now to study how a closed quantum system in a pure state can display thermal equilibrium behavior; see, e.g., \cite{GMM04,PSW06,GLTZ06,Sug07,Rei07,Rei08,RDO08,LPSW08,GLMTZ09b,Rei10,RS12,Rei15,GE15}, after some pioneering work even earlier \cite{vN29,JS,Deu91,Sre94}. In Section~\ref{sec:pure} we elaborate on the reasons for considering systems in pure states. In this paper, which elaborates on ideas introduced in \cite{GHLT15a}, we explain how the idea of thermal equilibrium of a system in a pure state can be defined for a macroscopic quantum system. Of particular interest in this context are systems featuring \emph{many-body Anderson localization (MBL)} \cite{And58,BAA06b,OH07}. These are quantum systems whose Hamiltonian $\hat H$ has eigenfunctions that are in a certain sense spatially localized, which can be an obstacle to reaching thermal equilibrium (in whatever sense). A natural definition of thermal equilibrium is to say that a system with (pure or mixed) density matrix $\hat\rho$ is in thermal equilibrium when all macro observables assume rather sharp values in $\hat\rho$ that agree with their thermodynamic equilibrium values; we call this notion \emph{macroscopic thermal equilibrium (MATE)}. As we discuss below, the pure states $\psi$ in MATE in a given micro-canonical energy shell are all close to a certain subspace of Hilbert space, the thermal equilibrium subspace $\mathscr{H}_{\mathrm{eq}}$ for this energy shell. For many systems including those with MBL, $\mathscr{H}_{\mathrm{eq}}$ has the overwhelming majority of dimensions in the energy shell, and most pure states in the energy shell are in MATE, as are most mixed states. Here and throughout this paper, ``most'' means ``the overwhelming majority of'' (or ``all except a small set'') relative to the relevant uniform distribution; for example, ``most pure states in the energy shell'' means the overwhelming majority relative to the uniform distribution on the unit sphere in $\mathscr{H}_{\mathrm{mc}}$ (see Remark~\ref{rem:mostMATE} in Section~\ref{sec:MATE1}). Now, for generic macroscopic systems, with or without MBL, most $\psi$'s have a stronger property: That micro observables (i.e., any observable referring to a small subsystem $S$) have a probability distribution in $\psi$ that coincides with their thermal probability distribution; we say that a system with such a $\psi$ (or, in fact, such a $\hat\rho$) is in \emph{microscopic thermal equilibrium (MITE)}. This property is a sign of a high degree of entanglement in $\psi$ between $S$ and its complement. A dynamical aspect of our theme is the \emph{approach to thermal equilibrium}, by which we mean (for either MATE or MITE) that a system starting out away from thermal equilibrium sooner or later reaches thermal equilibrium and spends most of the time in the long run in thermal equilibrium. This behavior is connected to the \emph{eigenstate thermalization hypothesis} (ETH), which asserts that the energy eigenfunctions are in thermal equilibrium, and which therefore can be considered in two variants, as MATE-ETH or MITE-ETH. If \emph{all} energy eigenstates are in MATE, then it can be shown \cite{GLMTZ09b} that \emph{all} pure states approach MATE; for MITE, the situation is a bit more complicated, as discussed in Section~\ref{sec:overview}. 
For MBL systems, some pure states fail to approach either MATE or MITE, which is related to the failure of MATE for some eigenstates and MITE for all eigenstates for such systems (again, see Section~\ref{sec:overview}). In the remainder of this paper we explore the two notions, MATE and MITE, their properties and relations to MBL. The remaining sections are organized as follows. In Section~\ref{sec:pure}, we describe our motivation for considering an individual system, possibly even in a pure state. In Section~\ref{sec:classical}, we take a look at the classical situation of thermal equilibrium. In Section~\ref{sec:quantum}, we give a detailed description of the concepts of MATE and MITE. In Section~\ref{sec:approach} we focus on the dynamical approach to MATE or MITE and the eigenstate thermalization hypothesis (ETH). In Section~\ref{sec:mbl}, we illustrate MATE and MITE for specific simple MBL systems. In Section~\ref{sec:overview}, we explore further aspects of MATE and MITE. In Section~\ref{sec:exceptional}, we address cases in which no dominant macro-state exists. In Section~\ref{sec:otherdefs}, we review a couple of other proposed definitions of thermal equilibrium. We conclude in Section~\ref{sec:conclusions}. \section{Why Include Pure States?} \label{sec:pure} Readers familiar and comfortable with the individualist attitude may want to skip this section. \subsection{Classical Mechanics} In the ensemblist attitude, one would say that thermal equilibrium occurs when, for a classical system, the probability density is close to that of a suitable thermodynamic ensemble---say, to $\rho^{(\beta)}$ or $\rho^{\mathrm{mc}}$. Thus, thermal equilibrium would seem to require a ``mixed state'' (i.e., a probability distribution over phase space), and not a ``pure state'' (i.e., a point in phase space). So why do we insist on considering pure states? The reason is that an \emph{individual system} has a unique phase point $X$ (a pure state), and it seems meaningful and necessary to talk about whether this system is in thermal equilibrium. For example, we can talk about this particular thermos bottle of coffee, how the energy is spatially distributed in it, in particular whether the local temperature is constant throughout the coffee. To be sure, \emph{our knowledge} of the system can be represented by some probability density function $\rho_{\mathrm{know}}$ over the phase space, and since our knowledge is usually very limited, as we do not know the exact position and momentum of every molecule in this bottle of coffee, $\rho_{\mathrm{know}}$ is usually \emph{very} spread-out (not at all a pure state). However, when we ask whether the coffee in this particular bottle is in thermal equilibrium, we are not asking whether $\rho_{\mathrm{know}}$ is close to $\rho^{(\beta)}$ or $\rho^{\mathrm{mc}}$; instead, we are asking about how the energy is spatially distributed, and whether the local temperature is constant. We are asking about properties of the phase point $X$, not of our knowledge $\rho_{\mathrm{know}}$ (a point made particularly in \cite{LM03}). In fact, if we do not have the relevant knowledge about $X$, if we do not know the spatial distribution of energy in this particular bottle, we have to answer that we do not know whether the content of the bottle is in thermal equilibrium, and we need to make measurements on the system to find out whether it is in thermal equilibrium. 
We do not want to say that the system is not in thermal equilibrium just because we do not know its phase point---or because we do. So, we say that a phase point $X$ is in thermal equilibrium if it has all the properties of thermal equilibrium, such as a uniform spatial distribution of energy over the volume of the bottle (see Section~\ref{sec:classical} below for more detail). By $\Gamma_{\mathrm{eq}}$ we denote the set of those $X$. Should our knowledge correspond to $\rho_{\mathrm{know}}=\rho^{\mathrm{mc}}$, then we are $>99.99\%$ confident that $X$ is in thermal equilibrium, as $\Gamma_{\mathrm{eq}}$ has most of the phase space volume of $\Gamma_{\mathrm{mc}}$. \subsection{Quantum Mechanics} In quantum mechanics, the situation is a bit more complicated and richer than in the classical case. That is mainly because a mixed state, i.e., a density matrix $\hat\rho$ on $\mathscr{H}$, can arise in two ways: either as representing our lack of knowledge (analogously to probability distributions in the classical case), or as a consequence of entanglement, i.e., as a reduced density matrix obtained by tracing out another system with which our system is entangled. For that reason, we do not insist that the system be in a pure state, but we insist that a system in a pure state can be in thermal equilibrium! As in the classical case, we regard the experimenter's lack of knowledge as irrelevant to the question of whether the system is in thermal equilibrium. This attitude already suggests using a definition of thermal equilibrium that allows also systems in pure states to be in thermal equilibrium: Since classically a single $X$ could be in thermal equilibrium, why not a single $\psi$? Likewise, since knowing $X$ did not matter, why would knowing $\psi$ matter? As in the classical case, when we ask whether a system is in thermal equilibrium, we do not ask a question about our limited knowledge but one about the factual state of affairs. For that reason, we admit the possibility that the system may have a pure state $\psi$ that we do not know. Moreover, if by thermal equilibrium we mean that (e.g.)\ energy is uniformly distributed (within suitable tolerances) over the volume, then that can very well be the case also for a pure quantum state $\psi$. (For MITE, it is very relevant that small subsystems have thermal (highly mixed) density matrices, but the whole system may well be in a pure state.) Finally, the concepts of MATE and MITE show that thermal equilibrium \emph{can} be defined in a way that allows a system in a pure state to be in thermal equilibrium. At the same time, they also allow a system in a mixed state $\hat\rho$ to be in thermal equilibrium, without requiring that $\hat\rho$ be close to $\hat\rho^{(\beta)}$ or $\hat\rho^{\mathrm{mc}}$. For example, even if the system is entangled with another system, and its state $\hat\rho$ is not pure, it could be much less mixed than $\hat\rho^{\mathrm{mc}}$; e.g., $\hat\rho$ could have rank 2 (i.e., could be a mixture of 2 pure states). Another subtlety in the quantum case arises from superpositions of macroscopically different states, such as Schr\"odinger's cat states. Here, our investigation touches upon the foundations of quantum mechanics. For the purposes of this paper, however, we can leave this problem aside. 
\section{Thermal Equilibrium in Classical Mechanics} \label{sec:classical} A definition of thermal equilibrium for a classical system in a pure state amounts to the specification of a set of phase points that we regard as being in thermal equilibrium; that is, a subset $\Gamma_{\mathrm{eq}}$ of phase space $\Gamma$. Such a set $\Gamma_{\mathrm{eq}}$ has been defined by Boltzmann \cite{B96,Gol99} as follows. Consider a collection of macro variables $M_j$, $j=1,\ldots, K$; each of them can be regarded as a function on phase space, $M_j:\Gamma\to\mathbb{R}$. Since macro measurements have limited accuracy (say, $\Delta M_j>0$), we want to think of the $M_j$ as suitably coarse-grained with a discrete set of values, say, $\{k\Delta M_j: k\in \mathbb{Z}\}$. Then two phase points $X_1,X_2\in\Gamma$ will look macroscopically the same if $M_j(X_1)=M_j(X_2)$ for all $j=1,\ldots, K$. In this way, the collection of functions $\{M_1,\ldots,M_K\}$ defines a partition of phase space $\Gamma$ into equivalence classes \begin{equation}\label{GammaMdef} \Gamma_\nu = \Bigl\{X\in \Gamma: M_j(X)=\nu_j \:\forall j\Bigr\}\,, \end{equation} one for every macro-state $\nu=(\nu_1,\ldots,\nu_K)$ described by the list of values of all $M_j$; we call $\Gamma_\nu$ a \emph{macro-state}. Some of the $\Gamma_\nu$ represent thermal equilibrium. \begin{figure}[h] \begin{center} \begin{minipage}{50mm} \includegraphics[width=60mm]{2equi-fig2.pdf} \end{minipage} \end{center} \caption{Coarse graining function $f$ with $\Delta E = 0.1$} \label{fig:coarse} \end{figure} More specifically, since a coarse-grained version of the energy is usually among the macro variables, say $M_1(X) = f\bigl(H(X)\bigr)$ with coarse-graining function $f(E) = [E/\Delta E]\, \Delta E$ and $[x]$ denoting the nearest integer to $x\in\mathbb{R}$ (see Figure~\ref{fig:coarse}), every macro-state $\Gamma_\nu$ belongs to a particular micro-canonical energy shell $\Gamma_{\mathrm{mc}}$, so that $\Gamma_{\mathrm{mc}}$ is partitioned into macro-states $\Gamma_\nu$ (see Figure~\ref{fig:phasespace}). In most macroscopic systems (see Section~\ref{sec:exceptional} for a discussion of exceptions), there is, for every energy shell $\Gamma_{\mathrm{mc}}$, one macro-state $\Gamma_\nu=\Gamma_{\mathrm{eq}}$ that contains most of the phase space volume of $\Gamma_{\mathrm{mc}}$; see, e.g., \cite{Lan,Gol99,GL,L07}. A realistic value of the size of $\Gamma_{\mathrm{eq}}$ is \begin{equation}\label{Gammaeqsize} \frac{\vol\Gamma_{\mathrm{eq}}}{\vol\Gamma_{\mathrm{mc}}} \approx 1-\exp(-10^{-15}N)\,, \end{equation} where $\vol$ denotes the $6N$-dimensional phase space volume and $N$ is the number of degrees of freedom (or of particles) of the system; this estimate is derived in Section~\ref{sec:MATE} (having in mind a system that is macroscopically large). Since the phase point $X(t)$ cannot leave the energy shell, and since phase space volume is conserved by Liouville's theorem, most $X\in\Gamma_{\mathrm{eq}}$ stay during their time evolution in $\Gamma_{\mathrm{eq}}$ for a long time (in fact, usually for an \emph{extraordinarily} long time), though not forever. Then, this set $\Gamma_{\mathrm{eq}}$ is the \emph{thermal equilibrium subset} for energy $E$, and the system \emph{is in thermal equilibrium} whenever $X(t)\in \Gamma_{\mathrm{eq}}$. 
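\bigskip \noindent{\bf Numerical aside (sketch).} To get a quantitative feeling for how dominant $\Gamma_{\mathrm{eq}}$ is, the following toy computation (a sketch only, assuming Python with NumPy/SciPy; the parameters $N$, $m$, $\Delta$ are deliberately modest hypothetical values, far from the realistic ones behind \eqref{Gammaeqsize} and Section~\ref{sec:MATE}) considers $N$ particles placed independently and uniformly into $m$ equal cells and bounds, under the uniform distribution, the probability that some cell occupancy fraction deviates from $1/m$ by more than $\Delta$. As $N$ grows at fixed $m$ and $\Delta$, this probability decays like $\exp(-\mathrm{const}\cdot N)$, in line with \eqref{Gammaeqsize}.
\begin{verbatim}
# Toy bound on vol(Gamma_eq)/vol(Gamma_mc): N labelled particles placed
# independently and uniformly into m equal cells; "equilibrium" means that
# every cell occupancy N_j/N is within Delta of 1/m.
import numpy as np
from scipy import stats

N, m, Delta = 10**6, 10, 1e-3        # hypothetical toy parameters

# Exact binomial tail for one cell: P(|N_j - N/m| > N*Delta)
p_single = 1.0 - (stats.binom.cdf(N/m + N*Delta, N, 1/m)
                  - stats.binom.cdf(N/m - N*Delta - 1, N, 1/m))

# Union bound over the m cells (valid despite the dependence of the N_j)
p_noneq = m * p_single
print("P(some cell out of tolerance) <=", p_noneq)
print("vol(Gamma_eq)/vol(Gamma_mc)   >=", 1.0 - p_noneq)

# Allowed deviation measured in standard deviations: sqrt(m*N)*Delta,
# so the bound improves like exp(-const*N) as N grows at fixed m, Delta.
print("deviation in standard deviations:", N*Delta / np.sqrt(N/m))
\end{verbatim}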
\begin{figure}[h] \begin{center} \begin{minipage}{50mm} \includegraphics[width=60mm]{phasespace1.pdf} \end{minipage} \end{center} \caption{Schematic representation of the partition of an energy shell $\Gamma_{\mathrm{mc}}$ in the phase space of a macroscopic classical system into subsets $\Gamma_\nu$ corresponding to different macro-states $\nu$. One of the subsets, $\Gamma_{\mathrm{eq}}$, contains more than 99.99\%\ of the volume (not drawn to scale) and corresponds to thermal equilibrium.} \label{fig:phasespace} \end{figure} There is some arbitrariness in the choice of the functions $M_j$. As a consequence, there is also some arbitrariness about which set exactly $\Gamma_{\mathrm{eq}}$ is. The attitude of Boltzmann's followers (including the authors) is that this arbitrariness is unproblematical, as any reasonable choice of $\Gamma_{\mathrm{eq}}$ will take up most of the volume of $\Gamma_{\mathrm{mc}}$. Rather, this arbitrariness makes it evident that there is no reason to expect a unique criterion for exactly which phase points are in thermal equilibrium, just as there is no unique criterion for exactly which strings of 0's and 1's should count as ``purely random-looking.'' \section{Thermal Equilibrium in Quantum Mechanics} \label{sec:quantum} As already mentioned, unlike in classical mechanics, in quantum mechanics we need to consider two different notions of thermal equilibrium, which we describe in turn in the following two subsections. \subsection{Macroscopic Thermal Equilibrium} \label{sec:MATE1} For quantum mechanics, a construction analogous to the subdivision of $\Gamma_{\mathrm{mc}}$ into $\Gamma_\nu$'s (Figure~\ref{fig:phasespace}) goes back to von Neumann \cite{vN29,GLTZ10} and, in a preliminary form, to Einstein \cite{Ein14}. Let \begin{equation} \mathbb{S}(\mathscr{H})=\bigl\{\psi\in\mathscr{H}: \|\psi\|=1\bigr\} \end{equation} denote the unit sphere in Hilbert space. Consider a collection of macro observables, corresponding to self-adjoint operators $\hat{M}_j$, $j=1,\ldots,K$, on $\mathscr{H}$. These can be based on a partition of the system's available volume $\Lambda\subset \mathbb{R}^3$ into cells $\Lambda_i$ that are small on the macro scale but still large enough to each contain a large number of degrees of freedom. Examples of natural choices of $\hat M$'s are, for each cell, the number of particles of each type, the energy of the cell, its momentum, and/or its magnetization. Again, we think of each $\hat{M}_j$ as suitably coarse-grained, so that its eigenvalues are separated by gaps whose magnitude corresponds to the inaccuracy of macro measurements. For example, the Hamiltonian $\hat{H}$ of a macroscopic system usually has eigenvalues separated by gaps much, much smaller than the macro energy inaccuracy $\Delta E$, so coarse graining at coarseness $\Delta E$, as in $\hat{M}_1 = f(\hat{H})$ with $f(E) = [E/\Delta E]\, \Delta E$ as before (see Figure~\ref{fig:coarse}), leads to a high degree of degeneracy of each eigenvalue. As von Neumann \cite{vN29} argued, the $\hat{M}_j$ can be taken to commute with each other,\footnote{For a different but closely related notion of thermal equilibrium, proposed by Tasaki, see Section~\ref{sec:TMATE}. In this approach one avoids the necessity of rounding the macro observables to make them commute. This approach adds support to there always being a dominant macrostate. It is however not so convenient for discussing the joint values of the macro observables, especially the nonequilibrium values. 
} by changing them if necessary, in addition to the coarse-graining, in a way that is negligible on macro scales \cite{GLMTZ09b,GLTZ10,Y}. Then the simultaneous diagonalization of the $\hat{M}_j$ provides a decomposition of Hilbert space into a sum of orthogonal subspaces $\mathscr{H}_\nu$, \begin{equation}\label{decomp} \mathscr{H}= \bigoplus_{\nu} \mathscr{H}_\nu\,, \end{equation} where $\nu=(\nu_1,\ldots,\nu_K)$, and $\mathscr{H}_\nu$ is the joint eigenspace of the $\hat{M}_j$ with eigenvalues $\nu_j$. We call the $\mathscr{H}_\nu$ \emph{macro-spaces}, as they are the analogs of the $\Gamma_\nu$ and correspond to different macro-states. If $\hat{M}_1$ is again the coarse-grained energy, then its eigenspaces are the micro-canonical energy shells $\mathscr{H}_{\mathrm{mc}}$, which are also decomposed by a subcollection of $\mathscr{H}_\nu$'s. In general, one macro-space in each $\mathscr{H}_{\mathrm{mc}}$ has most of the dimension of $\mathscr{H}_{\mathrm{mc}}$, and this is the \emph{thermal equilibrium subspace} $\mathscr{H}_{\mathrm{eq}}$. In analogy to the classical case, a realistic value of the ratio of dimensions is (see Section~\ref{sec:MATE}) \begin{equation}\label{Hilberteqsize} \frac{\dim\mathscr{H}_{\mathrm{eq}}}{\dim \mathscr{H}_{\mathrm{mc}}} \approx 1-\exp(-10^{-15}N)\,. \end{equation} We choose a suitably small tolerance $\delta>0$ and say that a system with state $\hat\rho$ is in \emph{macroscopic thermal equilibrium (MATE)} if and only if \begin{equation}\label{MATEdef} \tr(\hat\rho \hat P_{\mathrm{eq}}) > 1-\delta\,. \end{equation} We also write MATE for the set of all $\hat\rho$'s in $\mathscr{H}_{\mathrm{mc}}$ satisfying this condition, as well as for the set of all pure states $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ such that $\hat\rho=\pr{\psi}$ satisfies \eqref{MATEdef}. A definition of thermal equilibrium along these lines was used, e.g., in \cite{Gri,Pen04,RDO08,GLMTZ09b,Tas10,GHT13b,GHT14a,GHT14b}. We look at realistic values of $\delta$ in Section~\ref{sec:MATE}. \bigskip \noindent{\bf Remarks.} \begin{enumerate} \setcounter{enumi}{\theremarks} \item\label{rem:mostMATE} \emph{Most pure states in the energy shell are in MATE.} We note that this statement is also true of MBL systems. As a precise version of the statement, suppose that one of the macro-spaces, $\mathscr{H}_{\mathrm{eq}}$, is dominant, \begin{equation}\label{dominant} \frac{\dim \mathscr{H}_{\mathrm{eq}}}{\dim \mathscr{H}_{\mathrm{mc}}} > 1-\varepsilon \end{equation} with $0< \varepsilon \ll \delta$. Then \begin{equation}\label{sizeMATE} u_{\mathrm{mc}}(\mathrm{MATE})> 1-\frac{\varepsilon}{\delta} \approx 1\,, \end{equation} where $u_{\mathrm{mc}}$ is the normalized uniform (surface area) measure on $\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$. Indeed, \begin{align} \int\limits_{\mathbb{S}(\mathscr{H}_{\mathrm{mc}})} \!\!\! u_{\mathrm{mc}}(d\psi)\, \scp{\psi}{\hat P_{\mathrm{eq}}|\psi} &=\tr (\hat\rho^{\mathrm{mc}} \hat P_{\mathrm{eq}})\label{avgpsieqpsi}\\ &=\frac{\dim \mathscr{H}_{\mathrm{eq}}}{\dim \mathscr{H}_{\mathrm{mc}}} > 1-\varepsilon\,, \end{align} but the average of $f(\psi)=\scp{\psi}{\hat P_{\mathrm{eq}}|\psi}$ could not be that high if no more than $1-\varepsilon/\delta$ of all $\psi$'s had $f(\psi)>1-\delta$. 
In practice, as an order of magnitude, \begin{equation}\label{eta} \varepsilon< 10^{-10^5}\,, \end{equation} (as follows from \eqref{Hilberteqsize} for $N>3\times 10^{20}$) and $\delta$ can be taken to be $\sqrt{\varepsilon}$, which is still comparable to $10^{-10^5}$; then, according to \eqref{sizeMATE}, the fraction of states outside of MATE is also $\leq\sqrt{\varepsilon}$. \item\label{rem:mosteigenMATE} \emph{Most eigenstates of $\hat{H}$ are in MATE.} In fact, for any orthonormal basis $\{b_1,\ldots,b_D\}$ of $\mathscr{H}_{\mathrm{mc}}$, at least the fraction $1-\varepsilon/\delta$ (close to 1 since $\varepsilon \ll \delta$) of all basis vectors are in $\mathrm{MATE}$, since \begin{equation} \frac{1}{D}\sum_{i=1}^D \scp{b_i}{\hat P_{\mathrm{eq}}|b_i} = \frac{1}{D} \tr(\hat P_{\mathrm{eq}}) > 1-\varepsilon\,. \end{equation} Thus, for example, also for Hamiltonians exhibiting many-body localization, most eigenstates are in MATE. \item \emph{Most mixed states in the energy shell are in MATE.} In fact, this is the case relative to \emph{any} unitarily invariant distribution, uniform or not, over the density matrices in $\mathscr{H}_{\mathrm{mc}}$. In other words, suppose that $\hat\rho=\sum_\alpha p_\alpha |b_\alpha\rangle \langle b_\alpha|$ is chosen randomly, with the eigenbasis $\{b_1,\ldots,b_D\}$ uniformly distributed over all orthonormal bases of $\mathscr{H}_{\mathrm{mc}}$ (corresponding to the Haar measure over the unitary group) and the eigenvalues $p_1,\ldots,p_D$ independent of $b_1,\ldots,b_D$ with any joint distribution on the set defined by the conditions $0\leq p_\alpha\leq 1$ and $\sum_\alpha p_\alpha =1$; then $\hat\rho\in \mathrm{MATE}$ with probability near 1. Indeed, \begin{equation} \tr(\hat\rho \hat P_{\mathrm{eq}}) = \sum_{i=1}^D p_i \scp{b_i}{\hat P_{\mathrm{eq}}|b_i}\,, \end{equation} which always lies between 0 and 1. If we average this quantity over the eigenbasis, we obtain $\sum_i p_i (\dim \mathscr{H}_{\mathrm{eq}}/\dim \mathscr{H}_{\mathrm{mc}})> 1-\varepsilon$ by \eqref{avgpsieqpsi}. Since a quantity between 0 and 1 whose average is close to 1 must be close to 1 with high probability, we find that, in fact, even in the subset of $\hat\rho$'s with fixed eigenvalues, most $\hat\rho$'s are in MATE, and a fortiori so if the eigenvalues are randomized. \end{enumerate} \setcounter{remarks}{\theenumi} \subsection{Microscopic Thermal Equilibrium} \label{sec:MITE1} The concept of microscopic thermal equilibrium (MITE) is inspired by \emph{canonical typicality}, the observation \cite{GMM04,PSW05,PSW06,GLTZ06,Sug07} that for any not-too-large subsystem $S$ and most wave functions $\psi$ in the energy shell $\mathscr{H}_{\mathrm{mc}}$, \begin{equation}\label{rhopsirhomc} \hat\rho^\psi_S \approx \hat\rho^{\mathrm{mc}}_S\,, \end{equation} where \begin{equation}\label{rhoSdef} \hat\rho^\psi_S= \tr_{S^c} \pr{\psi} \end{equation} is the reduced density matrix of $S$ obtained by tracing out the complement $S^c$ of $S$, and \begin{equation} \hat\rho^{\mathrm{mc}}_S= \tr_{S^c} \hat\rho^{\mathrm{mc}} \end{equation} with $\hat\rho^{\mathrm{mc}}$ the micro-canonical density matrix as in \eqref{hatrhomcdef}. If $S$ is small enough then \begin{equation}\label{mcS=betaS} \hat\rho^{\mathrm{mc}}_S \approx \hat\rho^{(\beta)}_S \end{equation} for suitable $\beta>0$, where the right-hand side is the partial trace, $\hat\rho_S^{(\beta)}= \tr_{S^c} \, \hat\rho^{(\beta)}$, of the canonical density matrix $\hat\rho^{(\beta)}$ of the whole system as in \eqref{hatrhobetadef}. 
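\bigskip \noindent{\bf Numerical aside (sketch).} Both typicality statements made so far---that most $\psi$ are in MATE (Remark~\ref{rem:mostMATE}) and that $\hat\rho^\psi_S\approx\hat\rho^{\mathrm{mc}}_S$ for most $\psi$---are easy to observe numerically in a toy setting. The sketch below (Python/NumPy assumed; the choices of $n=12$ qubits, $\hat H=0$ so that $\mathscr{H}_{\mathrm{mc}}$ is the full Hilbert space with $\hat\rho^{\mathrm{mc}}=2^{-n}\hat I$, and an arbitrary subspace standing in for $\mathscr{H}_{\mathrm{eq}}$ are hypothetical) draws one Haar-random pure state and evaluates $\scp{\psi}{\hat P_{\mathrm{eq}}|\psi}$ as well as the trace distance between $\hat\rho^\psi_S$ and $\hat\rho^{\mathrm{mc}}_S$ for a two-qubit subsystem $S$.
\begin{verbatim}
# Toy check of typicality, for the trivial shell H_mc = full space of n
# qubits (so rho_mc = I/2^n); all names and parameters are ad hoc.
import numpy as np
rng = np.random.default_rng(0)

n = 12
D = 2**n
# Haar-random pure state: normalized complex Gaussian vector
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)

# "MATE": overlap with a subspace carrying 99% of the dimensions
d_eq = int(0.99 * D)                      # plays the role of dim H_eq
print("<psi|P_eq|psi> =", np.sum(np.abs(psi[:d_eq])**2))   # close to 0.99

# "MITE": reduced state of the first k qubits vs. the thermal one
k = 2
M = psi.reshape(2**k, -1)                 # split into subsystem / rest
rho_S = M @ M.conj().T                    # reduced density matrix of S
thermal = np.eye(2**k) / 2**k             # rho_mc_S for this trivial shell
dist = np.sum(np.abs(np.linalg.eigvalsh(rho_S - thermal)))  # trace norm
print("||rho_S - rho_mc_S|| =", dist)     # small, roughly of order 2^k/sqrt(D)
\end{verbatim}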
Let $\ell_0$ be the largest length small enough so that \eqref{mcS=betaS} holds for every subsystem $S$ with diameter $\leq \ell_0$. As a consequence of \eqref{mcS=betaS}, for small $S$, $\hat\rho^\psi_S \approx \hat\rho^{(\beta)}_S$. Hence, it does not matter whether one starts from $\hat\rho^{\mathrm{mc}}$ or $\hat\rho^{(\beta)}$ (this fact is a version of equivalence of ensembles), and we will call either one the canonical or thermal density matrix for $S$.\footnote{The density matrix $Z^{-1}_S\exp(-\beta \hat{H}_S)$ with $\hat H_S$ the Hamiltonian of $S$ is sometimes called the canonical or thermal density matrix for $S$; it agrees with $\hat\rho_S^{(\beta)}$ if the interaction between $S$ and its complement can be neglected. If the interaction cannot be neglected, then $\hat\rho_S^{(\beta)}$ is the correct density matrix to use.} As a consequence, also a micro observable $\hat{A}$ concerning a small subsystem $S$ behaves ``thermally'' in the sense that if we were to make a quantum measurement of $\hat{A}$ then the probability distribution over its eigenvalues would agree with the thermal distribution, defined by $\hat\rho^{\mathrm{mc}}_S$ (or, equivalently, by $\hat\rho^{\mathrm{mc}}$ or $\hat\rho^{(\beta)}$). For a system in a mixed state $\hat\rho$, we write $\hat\rho_S=\tr_{S^c} \hat\rho$ for the reduced state of $S$. If $\hat\rho$ is such that \begin{equation}\label{MITEdef} \hat\rho_S\approx \hat\rho^{\mathrm{mc}}_S \end{equation} for every subsystem $S$ corresponding to a spatial region of diameter $\leq \ell_0$ (as defined after \eqref{mcS=betaS}), we say that the system is in \emph{microscopic thermal equilibrium (MITE)}. We also use the name $\mathrm{MITE}$ for the set of $\hat\rho$'s in $\mathscr{H}_{\mathrm{mc}}$ that fulfill this condition, as well as for the set of pure states $\psi\in \mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ that fulfill this condition. A concept along these lines was used, e.g., in \cite{Rei08,LPSW08,RS12,Lych,NH15}. As a precise version of \eqref{MITEdef}, we may take the condition \begin{equation} \|\hat\rho_S- \hat\rho^{\mathrm{mc}}_S\|< \varepsilon\,, \end{equation} where $\varepsilon\ll 1$ is a chosen tolerance and $\|\cdot\|$ means the trace norm, defined by \begin{equation}\label{tracenormdef} \|M\| = \tr |M| = \tr \sqrt{M^* M}\,. \end{equation} \bigskip \noindent{\bf Remarks.} \begin{enumerate} \setcounter{enumi}{\theremarks} \item \emph{In classical mechanics there is no analog of MITE for pure states.} Indeed, a classical system in a pure state is represented by a point $X$ in phase space, that is, by a list of the positions and momenta of all particles. For a subsystem $S$, be it defined as consisting of the particles numbered 1 through 100 or as the particles in a certain region $\mathscr{R}$ of the available volume in $\mathbb{R}^3$, its state is then given by the list of positions and momenta of the particles in $S$, i.e., by a point $X_S$ in the phase space of $S$ that is determined by $X$. Thus, the state of $S$ is itself pure and never close to $\rho^{(\beta)}$. While a notion of MITE is not available for pure states in classical mechanics, a notion of MATE is, as described in Section~\ref{sec:classical} above. \item \label{rem:subsub} \emph{Subsubsystem property.} If $\hat\rho_S \approx \hat\rho^{\mathrm{mc}}_S$ for some subsystem $S$ then the same is true for every smaller subsystem $S'$ contained in $S$, just by taking another partial trace on both sides of the approximate equation $\hat\rho_S \approx \hat\rho^{\mathrm{mc}}_S$. 
As a consequence, for a system to be in MITE it suffices that $\hat\rho_S\approx \hat\rho^{\mathrm{mc}}_S$ for a few subsystems $S=S_i$ corresponding to spatial regions (possibly of diameter $>\ell_0$) such that every region of diameter $\ell_0$ is contained in one of these regions. \item\label{rem:can} \emph{Most pure states in the energy shell are in MITE} (even for MBL systems). The basis of this fact is canonical typicality \cite{GMM04,PSW05,PSW06,GLTZ06,Sug07}, which can be understood as an instance of the following mathematical proposition \cite{Sug07}: Let $\mathscr{H}_R$ be any subspace of $\mathscr{H}$ of dimension $d_R$ (we will later set $\mathscr{H}_R=\mathscr{H}_{\mathrm{mc}}$), let $\hat P_R$ be the projection to $\mathscr{H}_R$ and $\hat\rho_R=\hat P_R/d_R$; let $\Psi$ be drawn randomly according to $u_R$, the uniform distribution over $\mathbb{S}(\mathscr{H}_R)$. Then, for any operator $\hat A:\mathscr{H}\to\mathscr{H}$, \begin{equation} \mathbb{E} \scp{\Psi}{\hat A|\Psi}= \tr(\hat A\hat\rho_R) \end{equation} and \begin{equation} \Var \scp{\Psi}{\hat A|\Psi} \leq \frac{V_{\hat A}(\hat\rho_R)}{d_R+1}\,, \end{equation} where \begin{align} V_{\hat A} (\hat\rho) &:= \tr\Bigl[ \Bigl(\hat A-\tr(\hat A\hat \rho) \Bigr)^* \Bigl(\hat A-\tr(\hat A\hat \rho) \Bigr) \hat\rho \Bigr]\\ &= \tr(\hat A^*\hat A\hat \rho)-|\tr(\hat A\hat \rho)|^2\,. \end{align} This proposition follows by a little calculation from the fact \cite{vN29,Ull64,GMM04,GLMTZ15} that the coefficients $c_\alpha$ relative to any orthonormal basis $\{\phi_\alpha\}$ of a random vector $\Psi=\sum_\alpha c_\alpha \phi_\alpha$ that is uniformly distributed over the unit sphere in some Hilbert space $\mathscr{H}$ of dimension $d$ have the following moments: The first and third moments vanish, the second moments are \begin{equation} \mathbb{E}(c_\alpha^* c_\beta) = \frac{\delta_{\alpha\beta}}{d}\,, \end{equation} and the only non-vanishing fourth moments are \begin{equation} \mathbb{E}\Bigl( |c_\alpha|^2 |c_\beta|^2\Bigr) = \frac{1+\delta_{\alpha\beta}}{d(d+1)}\,. \end{equation} The above proposition yields together with the Chebyshev inequality that for any operator $\hat A$ and any $\varepsilon>0$, \begin{equation} u_R \Bigl\{\psi\in\mathbb{S}(\mathscr{H}_R): \bigl|\scp{\psi}{\hat A|\psi}-\tr(\hat A\hat\rho_R)\bigr| \leq \varepsilon \Bigr\} \geq 1-\frac{V_{\hat A}(\hat\rho_R)}{\varepsilon^2(d_R+1)}\,. \end{equation} Now suppose that $\mathscr{H}=\mathscr{H}_1 \otimes \mathscr{H}_2$ with $\dim \mathscr{H}_1=d_1$. By considering $\hat A$'s that act only on $\mathscr{H}_1$, one can further conclude through a little computation that \begin{equation} u_R \Bigl\{\psi\in\mathbb{S}(\mathscr{H}_R): \bigl\| \hat\rho_1^\psi-\tr_2 \hat\rho_R\bigr\| \leq \varepsilon \Bigr\} \geq 1-\frac{d_1^4}{\varepsilon^2 d_R}\,. \end{equation} That is, when \begin{equation}\label{mostMITEcond2} d_1 \ll d_R^{1/4} \end{equation} then $\hat\rho_1^\psi\approx \tr_2 \hat\rho_R$ for most $\psi\in\mathbb{S}(\mathscr{H}_R)$. [In fact, the tighter error estimate of Popescu et al.~\cite{PSW05,PSW06} (see Section~\ref{sec:MITE} below) yields that $d_R^{1/4}$ in \eqref{mostMITEcond2} can be replaced by $d_R^{1/2}$ (but not any larger exponent).] Now for $\mathscr{H}_R=\mathscr{H}_{\mathrm{mc}}$ and $\mathscr{H}_1$ the Hilbert space of a subsystem $S$, this amounts to canonical typicality. What if $d_1=\infty$? 
If we are considering a system of $N_1+N_2$ spins (finitely many), then we do not encounter this problem, as the Hilbert spaces $\mathscr{H}_1$ and $\mathscr{H}_2$ have finite dimension $2^{N_1}$ and $2^{N_2}$. But if we are considering particles in a region $\Lambda=\Lambda_1\cup \Lambda_2\subset \mathbb{R}^3$, then both $\mathscr{H}_1$ and $\mathscr{H}_2$ have infinite dimension, although $\mathscr{H}_{\mathrm{mc}}$ has finite dimension, provided that $\Lambda$ has finite volume (as then there are only finitely many energy levels below $E$). That is why $d_1=\infty$ can occur. So, what if $d_1=\infty$? Effectively, only finitely many dimensions of $\mathscr{H}_1$ are relevant to $\mathscr{H}_{\mathrm{mc}}$: Let $\tilde\mathscr{H}_1$ be the span of the eigenvectors of $\tr_2 \rho_{\mathrm{mc}}$ with the largest $n$ eigenvalues; take $n$ large enough so that the sum of these eigenvalues is close to 1. Then $\tilde\mathscr{H}_1$ and an analogously constructed $\tilde\mathscr{H}_2$ can play the roles of $\mathscr{H}_1$ and $\mathscr{H}_2$ in the above reasoning. Concerning the size of $S$, it follows from \eqref{mostMITEcond2} that canonical typicality still holds if the size of $S$ is almost one quarter of the size of the whole; in fact, by the tighter estimate $d_R^{1/2}$, almost one half (see Section~\ref{sec:MITE} for elaboration). If the diameter of the whole is greater than $4\ell_0$, then a moderate number (such as 8 for a cube) of nearly-half-size subsystems will contain any spatial region of diameter $\leq \ell_0$. By the subsubsystem property, we obtain that most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ simultaneously satisfy $\hat\rho_S^\psi\approx \hat\rho_S^{\mathrm{mc}}$ for every region $S$ of diameter $\leq \ell_0$. That is, most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ are in MITE. \end{enumerate} \setcounter{remarks}{\theenumi} \subsection{Relation Between MATE and MITE} \subsubsection{General Framework of MATE and MITE as Referring to Different Observables} \label{sec:framework} MITE and MATE are special cases of the following scheme: Given a set $\mathscr{A}$ of observables, a state $\hat\rho$ in $\mathscr{H}_{\mathrm{mc}}$ is in \emph{thermal equilibrium relative to $\mathscr{A}$} if and only if for every $\hat A\in\mathscr{A}$, the probability distribution over the spectrum of $\hat A$ defined by $\hat\rho$ is approximately equal to that defined by $\hat\rho^{\mathrm{mc}}$. For $\mathscr{A}=\mathscr{A}_{\mathrm{MATE}}=\{\hat M_1,\ldots,\hat M_K\}$, one obtains MATE, and MITE is obtained for $\mathscr{A}=\mathscr{A}_{\mathrm{MITE}}=\cup_S\mathscr{A}_S$ with the union taken over all spatial regions $S$ of diameter $\leq \ell_0$ and $\mathscr{A}_S$ the set of all self-adjoint operators on $\mathscr{H}_S$, more precisely \begin{equation}\label{sASdef} \mathscr{A}_S = \Bigl\{ \hat A_0\otimes \hat I_{S^c}: \hat A_0 \text{ self-adjoint on }\mathscr{H}_{S} \Bigr\} \end{equation} (with $\hat I$ the identity operator and $S^c$ again the complement of $S$). Indeed, the condition $\hat\rho_S\approx \hat\rho^{\mathrm{mc}}_S$ is equivalent to $\tr(\hat\rho\hat P) \approx \tr(\hat\rho^{\mathrm{mc}} \, \hat P)$ for every projection of the form $\hat P = \hat P_0 \otimes \hat I_{S^c}$ with $\hat P_0$ a projection in $\mathscr{H}_S$. In this sense, MATE means thermal equilibrium relative to the macro observables, whereas MITE is thermal equilibrium relative to all observables concerning any $S$ of diameter $\leq \ell_0$. 
The latter observables include those of a more microscopic and local nature. Yet another choice of $\mathscr{A}$ has been considered by Reimann \cite{Rei15b}, who took $\mathscr{A}$ to contain one or a few \emph{typical} observables (instead of macroscopic or local ones). \subsubsection{MITE Implies MATE for Macroscopic Systems} \label{sec:MITEimpliesMATE} Since most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ are in both MATE and MITE (see Remarks~\ref{rem:mostMATE} and \ref{rem:can} above), it follows further that most states in $\mathrm{MITE}$ lie also in $\mathrm{MATE}$ and vice versa. (Indeed, if $99\%$ of all states lie in MITE, and $99\%$ of all states lie in MATE, then at least the fraction $1-1/99$ of all states in MITE lie in MATE, and at least the fraction $1-1/99$ of all states in MATE lie in MITE.) In fact, more is true: \emph{All} states in MITE lie also in MATE \cite{GHLT15a}. Indeed, since macro observables are sums or averages of local observables over spatial cells (say, of length $L$), it follows from Section~\ref{sec:framework}, as soon as $L\leq \ell_0$, that states $\psi$ that display thermal behavior for micro observables (i.e., lead to the same probability distribution over the spectrum of the observable as $\hat\rho^{\mathrm{mc}}$) also display thermal behavior for macro observables. And these $\psi$ include those in MITE. The condition $L\leq \ell_0$ means that $\hat\rho^\psi_S \approx \hat\rho^{\mathrm{mc}}_S$ at least up to the length scale of the macro observables, which is commonly the case; e.g., for a cubic meter of gas at room conditions, we can realistically take $L \approx 10^{-4}$ m and $\ell_0 \approx 10^{-3}$ m. \bigskip \noindent {\bf Example 1.} \emph{A simple example of a state in MATE that is not in MITE.} Consider a system of $N\gg 1$ non-interacting spins-1/2, $\mathscr{H}=(\mathbb{C}^2)^{\otimes N}$, with $\hat H=0$ so that $\mathscr{H}_{\mathrm{mc}}=\mathscr{H}$ and $\hat\rho^{\mathrm{mc}}=2^{-N}\hat I$, in a pure product state $\psi=\otimes_i \psi_i$. Divide the $N$ spins into $m$ groups (``cells'') $\Lambda_j$ of $n\gg 1$ spins, so that $nm=N$, and take $\hat M_j$ to be a coarse-grained version of $\sum_{i\in\Lambda_j} \hat\sigma_i^z$, the total magnetization of $\Lambda_j$ in the $z$-direction. Then the thermal equilibrium value of $\hat M_j$ is $\tr(\hat\rho^{\mathrm{mc}}\hat M_j)=0$, so $\mathscr{H}_{\mathrm{eq}}=\bigcap_j \,\mathrm{kernel}(\hat M_j)$ (where kernel means the eigenspace with eigenvalue 0), and a typical pure product state $\psi$ lies in MATE. To see that $\psi$ does not lie in MITE, note that for a single spin at site $i$, $S=\{i\}$, \begin{equation}\label{H=0} \hat\rho^{\mathrm{mc}}_S=\tfrac 12 \hat I_i\text{ whereas }\hat\rho_S^\psi= |\psi_i\rangle\langle\psi_i|\,, \end{equation} so the two density matrices are not close to each other. \section{Dynamical Approach to Thermal Equilibrium} \label{sec:approach} We say that $\hat\rho$ \emph{approaches} MITE/MATE if $\hat\rho_t= e^{-i\hat{H}t} \,\hat\rho\, e^{i\hat{H}t}$ sooner or later reaches MITE/MATE and spends there most of the time in the long run. In many systems, all states in the energy shell approach thermal equilibrium in this sense, but there are some exceptional macroscopic classical and quantum systems for which many states do \emph{not} come to thermal equilibrium in any sense as time goes on. 
This is obviously the case for Example 1 above, but in fact there are more physically relevant systems (exhibiting MBL) which also have this property, as we explain below. A condition relevant to whether approach to thermal equilibrium occurs is the \emph{eigenstate thermalization hypothesis (ETH)} \cite{Sre94,RDO08,GLMTZ09b,RS12}. The ETH can be formulated as the condition on $\hat H$ that all eigenstates of $\hat{H}$ are in thermal equilibrium. Corresponding to two kinds of thermal equilibrium, MITE and MATE, we have two versions of the ETH. Let us focus first on MATE-ETH. \subsection{Approach to MATE} \label{sec:approachMATE} \emph{Under the MATE-ETH, all $\psi$ in the energy shell approach MATE.} Indeed \cite{GLMTZ09b}, writing $\overline{f(t)}=\lim_{T\to\infty} \frac{1}{T}\int_0^T f(t)\, dt$ for time averages, $\ket{\alpha}$ for the energy eigenstate with eigenvalue $E_\alpha$, and $\psi_t=e^{-i\hat Ht}\psi$, \begin{align} \overline{\scp{\psi_t}{\hat P_{\mathrm{eq}}|\psi_t}} &= \sum_{\alpha,\beta} \scp{\psi}{\alpha} \: \overline{e^{iE_\alpha t} \scp{\alpha}{\hat P_{\mathrm{eq}}|\beta} e^{-iE_{\beta}t}} \: \scp{\beta}{\psi} \label{MATE-ETH-first}\\ &= \sum_{\alpha} \bigl| \scp{\psi}{\alpha} \bigr|^2 \scp{\alpha}{\hat P_{\mathrm{eq}}|\alpha} \geq \sum_{\alpha} \bigl| \scp{\psi}{\alpha} \bigr|^2 (1-\delta) =1-\delta\,,\label{MATE-ETH-last} \end{align} provided $\hat H$ is non-degenerate, i.e., $E_\alpha\neq E_{\beta}$ for $\alpha\neq \beta$ (using $\overline{e^{iEt}}=1$ if $E= 0$ and $=0$ otherwise).\footnote{In fact, the assumption of non-degeneracy can be dropped: If we number the eigenvalues as $E_\alpha$ with $E_\alpha\neq E_{\beta}$ for $\alpha\neq \beta$ and let $\ket{\alpha}$ denote the normalized projection of $\psi$ to the eigenspace of $E_\alpha$, then the calculation \eqref{MATE-ETH-first}--\eqref{MATE-ETH-last} still applies.} Since its time average is close to 1, $\scp{\psi_t}{\hat P_{\mathrm{eq}}|\psi_t}$ must be close to 1 for most $t$ in the long run. Conversely, if the MATE-ETH is violated, then it is mathematically possible that no state outside MATE ever approaches MATE. For example, choose $\hat H$ so that every eigenstate is either in $\mathscr{H}_{\mathrm{eq}}$ or orthogonal to it. As we will see in Section~\ref{sec:mbl}, this happens in some MBL systems. \subsection{Approach to MITE} \label{sec:approachMITE} The ideal gas provides an example of a system for which some states do not approach MITE. We now ask, Under which conditions will all or most $\psi$ approach MITE? There are several results \cite{RDO08,Rei08,LPSW08,Rei10}, all of which assume the MITE-ETH, and that the Hamiltonian is non-degenerate and has non-degenerate energy gaps, i.e., \begin{equation}\label{noresonance} E_{\alpha}-E_{\beta} \neq E_{\alpha'}-E_{\beta'} \text{ unless } \begin{cases}\text{either } \alpha= \alpha' \text { and } \beta= \beta' \\ \text{or }\alpha=\beta \text{ and }\alpha'=\beta'\,,\end{cases} \end{equation} a condition that is generically fulfilled. We note here two results, the first of which \cite{Rei08,LPSW08} asserts that if all energy eigenstates in $\mathscr{H}_{\mathrm{mc}}$ are in MITE, then most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ will sooner or later reach MITE and spend most of the time in MITE in the long run. More precisely, those $\psi$ will behave this way for which the effective number of significantly participating energy eigenstates is much larger than $\dim \mathscr{H}_S$ for any small $S$. 
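\bigskip \noindent{\bf Numerical aside (sketch).} The time-averaging argument \eqref{MATE-ETH-first}--\eqref{MATE-ETH-last} is easy to visualize in a toy model. In the sketch below (Python/NumPy assumed; all parameters are hypothetical), a GUE random matrix stands in for a Hamiltonian satisfying MATE-ETH, since its eigenbasis is Haar-random and normal typicality then puts every eigenstate in MATE; an initial state chosen orthogonal to $\mathscr{H}_{\mathrm{eq}}$ is seen to relax to $\scp{\psi_t}{\hat P_{\mathrm{eq}}|\psi_t}\approx\dim\mathscr{H}_{\mathrm{eq}}/\dim\mathscr{H}_{\mathrm{mc}}$ and to stay there.
\begin{verbatim}
# Toy illustration of approach to MATE under MATE-ETH: a Hamiltonian with
# Haar-random eigenbasis (GUE-like) on a D-dimensional "shell", with H_eq
# the span of the first d_eq basis vectors and psi0 orthogonal to H_eq.
import numpy as np
rng = np.random.default_rng(1)

D, d_eq = 1000, 950                       # toy dimensions, dim H_eq/D = 0.95
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (A + A.conj().T) / (2 * np.sqrt(D))   # Hermitian, spectral width O(1)
E, U = np.linalg.eigh(H)

psi0 = np.zeros(D, dtype=complex)
psi0[-1] = 1.0                            # starts orthogonal to H_eq
c = U.conj().T @ psi0                     # expansion in energy eigenstates

for t in [0.0, 1.0, 5.0, 20.0, 100.0]:
    psi_t = U @ (np.exp(-1j * E * t) * c)
    overlap = np.sum(np.abs(psi_t[:d_eq])**2)   # <psi_t|P_eq|psi_t>
    print(t, overlap)     # rises from 0 to about d_eq/D and stays there
\end{verbatim}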
The second result \cite{RDO08} shows that \emph{all} (rather than most) $\psi$ will ultimately reach MITE and stay there most of the time, under two assumptions, first again that all energy eigenstates $|\alpha\rangle$ are in MITE, and second Srednicki's \cite{Sre96,Sre99} extension of the ETH to off-diagonal elements, i.e., that for $\hat A\in\mathscr{A}_{\mathrm{MITE}}$ (as in Section~\ref{sec:framework}), \begin{equation} \scp{\alpha}{\hat A|\beta}\approx 0\text{ for }\alpha\neq \beta \end{equation} (see also \cite{Rei15}). Indeed, if $\hat H$ is non-degenerate and all $\ket{\alpha}$ are in MITE, then \begin{align} \overline{\scp{\psi_t}{\hat A|\psi_t}} &= \sum_{\alpha,\beta} \scp{\psi}{\alpha} \overline{e^{iE_\alpha t} \scp{\alpha}{\hat A |\beta} e^{-iE_\beta t}} \scp{\beta}{\psi}\\ &= \sum_\alpha \scp{\psi}{\alpha} \scp{\alpha}{\hat A|\alpha} \scp{\alpha}{\psi}\\ &\approx \tr(\hat\rho^{\mathrm{mc}} \hat A)\,. \end{align} Furthermore, a calculation using \eqref{noresonance} shows that \begin{equation}\label{timevariance} \overline{\biggl( \scp{\psi_t}{\hat A|\psi_t} - \overline{\scp{\psi_t}{\hat A|\psi_t}} \biggr)^{\!2}} =\sum_{\alpha\neq \beta} \bigl|\scp{\psi}{\alpha}\bigr|^2 \; \bigl|\scp{\alpha}{\hat A|\beta}\bigr|^2 \; \bigl|\scp{\beta}{\psi}\bigr|^2\,, \end{equation} and if $\bigl|\scp{\alpha}{\hat A|\beta}\bigr|<\varepsilon\ll 1$ for all $\alpha\neq \beta$, then the time variance \eqref{timevariance} is smaller than $\varepsilon^2$. It follows that, for most $t$ in the long run, $\scp{\psi_t}{\hat A|\psi_t} \approx \tr(\hat\rho^{\mathrm{mc}}\, \hat A)$ for any $\hat A\in\mathscr{A}_{\mathrm{MITE}}$ (in particular projections), which yields that $\psi_t\in \mathrm{MITE}$ for most $t$ in the long run. \section{Many-Body Localized Systems} \label{sec:mbl} There is no consensus on the definition of many-body localization \cite{GE15}. For the purposes of this paper we will adopt the following definition: A system with Hamiltonian $\hat H$ is many-body localized if all the eigenstates of $\hat H$ fail to be in MITE, this remains true under generic small local (in real space) changes to $\hat H$, and in each eigenstate almost all subsystems $S$ are ``localized'' with $\hat\rho_S$ having substantially lower entropy than at thermal equilibrium. Many-body localized (MBL) systems are the one known example of many-body quantum systems that fail to thermalize under their own dynamics where this failure to thermalize remains under small generic local perturbations to the system's Hamiltonian. Since the approach to thermal equilibrium is connected to the properties of the energy eigenstates $\phi_\alpha$, it is of particular interest whether the $\phi_\alpha$ lie in MITE or MATE or neither. \bigskip \noindent{\bf Example 2.} As a simple, and essentially trivial, example, consider a chain of $N$ non-interacting spins-1/2, each subject to a local random field: \begin{equation}\label{exMBL1} \hat{H_2}=\sum_i h_i \hat\sigma_i^z ~, \end{equation} where $i$ labels the spin, and $\hat\sigma_i^z$ is the Pauli operator for the $z$ component of spin $i$. For specificity, let the local static random fields $h_i$ be independent and identically distributed, drawn from the uniform distribution on $-W<h_i<W$ with $W>0$. Let us consider an energy shell containing $E=0$, which in a sense corresponds to infinite temperature. The eigenstates of $\hat H_2$ are simply the simultaneous eigenstates of each $\hat\sigma_i^z$, all of which mutually commute and thus also commute with $\hat H_2$. 
In this sense, we have a (trivially) integrable system, with a complete set of local conserved operators, the $\{\hat\sigma_i^z\}$. We take as macro observables the $\hat M_j$ of Example 1 above, i.e., the coarse-grained total $z$-magnetization in each macro cell, along with the coarse-grained energy $\hat M_0=f(\hat H_2)$. So, $\mathscr{H}_{\mathrm{eq}}=\mathscr{H}_{\mathrm{mc}}\cap \bigcap_j \, \mathrm{kernel}(\hat M_j)$. Then, most of the eigenstates of $\hat H_2$ in $\mathscr{H}_{\mathrm{mc}}$ are in MATE (in fact, even in $\mathscr{H}_{\mathrm{eq}}$), with an energy near zero (for this energy shell) in all large subregions of the spin chain. But there are also a few eigenstates where some large subregions have energies that deviate substantially from the thermal equilibrium value 0, and these are the eigenstates that are not in MATE. In fact, these eigenstates are orthogonal to $\mathscr{H}_{\mathrm{eq}}$ (i.e., as far from MATE as possible). So, the situation can be summarized by the statement that \begin{equation}\label{ineqperpeq} \text{for every $\alpha$, either }\phi_\alpha\in \mathscr{H}_{\mathrm{eq}}\text{ or }\phi_\alpha \perp \mathscr{H}_{\mathrm{eq}}\,. \end{equation} Thus, MATE-ETH is violated as strongly as consistent with the mathematical fact (Remark~\ref{rem:mosteigenMATE} above) that always most eigenstates are in MATE. As a consequence of \eqref{ineqperpeq}, all states out of MATE stay out of MATE forever (no MATE-thermalization and, since MITE implies MATE, also no MITE-thermalization, the heat transport coefficients vanish), whereas all states in MATE stay in MATE forever (no fluctuations away from thermal equilibrium). It is moreover the case for $\hat H_2$ (but not relevant to thermalization) that every eigenstate $\phi_\alpha$ lies in some $\mathscr{H}_\nu$. In contrast, a typical Hamiltonian, say one whose eigenbasis in $\mathscr{H}_{\mathrm{mc}}$ was drawn uniformly among all orthonormal bases in $\mathscr{H}_{\mathrm{mc}}$, has eigenvectors $\phi_\alpha$ that are typical vectors relative to the uniform distribution over $\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, and by the phenomenon of \emph{normal typicality} \cite{vN29,GLMTZ10,GLTZ10}, $\|\hat P_\nu\phi_\alpha\|^2\approx \dim \mathscr{H}_\nu/\dim \mathscr{H}_{\mathrm{mc}}$, where $\hat P_\nu$ is the projection to $\mathscr{H}_{\nu}$; that is, $\phi_\alpha$ is spread out over \emph{all} $\mathscr{H}_{\nu}$; in fact, this is the case for \emph{all} $\alpha$ simultaneously \cite{vN29,GLMTZ10,GLTZ10}. Correspondingly, \cite{GLMTZ09b,GLMTZ10}, for a typical Hamiltonian every $\phi_\alpha$ has a component of size $1-\varepsilon = \dim\mathscr{H}_{\mathrm{eq}}/\dim\mathscr{H}_{\mathrm{mc}}$ in $\mathscr{H}_{\mathrm{eq}}$ and a component of size $\varepsilon$ orthogonal to $\mathscr{H}_{\mathrm{eq}}$; as a consequence, all $\phi_\alpha$ lie in MATE \cite{GLMTZ09b}. It is ironical that, although MATE is nearly the same as $\mathscr{H}_{\mathrm{eq}}$, almost none of the eigenstates can lie in $\mathscr{H}_{\mathrm{eq}}$ if all of them lie in MATE (and not all can lie in MATE if as many as possible of them lie in $\mathscr{H}_{\mathrm{eq}}$). Several traits of the eigenstates of $\hat H_2$ are quite typical of MBL systems: The energy eigenstates $\phi_\alpha$ of MBL systems tend to have a short range of entanglement. That is, while they are not exactly product states, they are less entangled between neighboring lattice sites than random states $\psi$ (and thus less than for Hamiltonians with a random eigenbasis). 
They can be approximated as unentangled between different cells $\Lambda_j$. That is, the $\phi_\alpha$ of an MBL system can be approximated as a product of eigenstates of local energy, a situation of which Example 2 is a strict case. As a consequence, some eigenstates have a profile of cell energy that is very non-uniform, and they will not be in MATE, but will be approximately orthogonal to $\mathscr{H}_{\mathrm{eq}}$. In addition, for a generic interacting MBL system there are presumably also a small number of eigenstates that contain substantial components both in $\mathscr{H}_{\mathrm{eq}}$ and orthogonal to $\mathscr{H}_{\mathrm{eq}}$; this should happen when the profile of cell energy lies near the borderline of what should be considered uniform. Let us have a look at MITE in Example 2. In our energy shell $E=0$, the thermal density matrix of a single spin at site $i$, $S=\{i\}$, is \begin{equation} \hat\rho^{\mathrm{mc}}_S= \tfrac12 |\!\uparrow\rangle\langle \uparrow\!| + \tfrac12 |\!\downarrow\rangle\langle \downarrow\!|\,, \end{equation} analogously to the even more trivial Example 1 in Section~\ref{sec:MITE1}. However, also analogously to \eqref{H=0}, for every eigenstate $\phi$ of $\hat H_2$, due to the product structure of $\phi$, $\hat\rho_S^\phi=\hat\rho_i^\phi$ is either $|\!\!\uparrow\rangle\langle \uparrow\!\!|$ or $|\!\!\downarrow\rangle\langle \downarrow\!\!|$, so $\hat\rho_S^\phi$ is far from $\hat\rho^{\mathrm{mc}}_S$. Thus, for this system, the MITE-ETH is false, in fact none of the eigenstates are in MITE. [In the full spectrum, there are two exceptional eigenstates, namely the ground states of $\hat H_2$ and of $-\hat H_2$. These two states are (trivially) in both MITE and MATE, as is always the case for non-degenerate ground states. But we are not interested here in ground states.] Also this situation is typical of MBL systems: It has been shown analytically \cite{Imbrie}, numerically \cite{PH10}, or perturbatively \cite{BAA06a,BAA06b,ros} for various MBL models that none, or almost none, of the eigenstates of $\hat H$ are in MITE---although most pure states, when consideration is not restricted to the energy eigenstates, are necessarily in MITE. {This is only to be expected considering that entanglement has short range in $\phi_\alpha$ of MBL systems, and entanglement is the mechanism behind MITE. So, a typical MBL system has most energy eigenstates in MATE but none in MITE; it is thus as far from the ETH as possible, in view of the mathematical fact (Remark~\ref{rem:mosteigenMATE}) that most energy eigenstates have to be in MATE. Correspondingly to the failure of MITE-ETH, typical $\psi$'s in Example 2 do not approach MITE. For example (though not a typical one), if $\psi$ is initially a product state then it will forever remain one due to the form of \eqref{exMBL1}, and product states lack the entanglement needed for MITE, so $\psi$ never reaches MITE. Now consider a pure state $\psi\in\mathscr{H}_{\mathrm{mc}}$ built out of the eigenstates $\phi_\alpha$ in MATE. Since it lies in a subspace in which the MATE-ETH is true, $\psi$ approaches MATE (see Section~\ref{sec:approachMATE}). Can $\psi$ be at all out of MATE (so that it is a non-thermal state that thermalizes)? 
For an ETH system, it is clear that the answer is yes, i.e., a non-MATE $\psi$ can be superposed out of MATE eigenstates, as all eigenstates $\phi_\alpha$ from $\mathscr{H}_{\mathrm{mc}}$ lie in MATE, and surely some $\psi\in\mathscr{H}_{\mathrm{mc}}$ are not in MATE, so they must be built of $\phi_\alpha$'s in MATE. It is equally clear that for Example 2 the answer is no, as the $\phi_\alpha$ in MATE lie in $\mathscr{H}_{\mathrm{eq}}$, so any superposition also lies in $\mathscr{H}_{\mathrm{eq}}$ and thus in MATE. So what about other MBL systems? This will be addressed by the next example. \bigskip \noindent{\bf Example 3.} Take the same Hilbert space and Hamiltonian as in Example 2, but now take the macro observables $\hat M_j$ to refer only to $x$-spin and not to $z$-spin, which leads to a different choice of $\mathscr{H}_{\mathrm{eq}}$. (This example is less serious because serious examples should have cell energies among their macro variables, and this example does not; but we consider it anyway.) It is useful to consider the basis $\{b_\alpha\}$ of $\mathscr{H}$ consisting of products of $|\!\!\rightarrow\rangle$'s and $|\!\!\leftarrow\rangle$'s; those $b_\alpha$ that have approximately equally many $|\!\!\rightarrow\rangle$'s as $|\!\!\leftarrow\rangle$'s in every cell lie in $\mathscr{H}_{\mathrm{eq}}$, and the others are orthogonal to $\mathscr{H}_{\mathrm{eq}}$. It follows that every energy eigenstate $\phi_\alpha$ lies in MATE. As a consequence, every $\psi\in\mathscr{H}$ approaches MATE for this choice of macro variables. For example, $\psi=|\!\!\rightarrow\rangle^{\otimes N}$ is orthogonal to $\mathscr{H}_{\mathrm{eq}}$, and in particular not in MATE (and hence not in MITE). Since all spins precess at different frequencies due to the local random fields $h_i$ in \eqref{exMBL1}, the macroscopic $x$-magnetization relaxes to zero, and this $\psi$ approaches MATE, as it should. In view of this example of a dynamical relaxation of the $x$-magnetization, we may ask whether there are also pure states of macroscopically non-uniform cell energy distribution (i.e., non-uniform temperature profile, such as a temperature gradient) that relax to a uniform cell energy distribution. The answer is no, as it is clear from \eqref{exMBL1} that the spins do not interact and thus energy cannot be transported from one site to another. In fact, it follows that a $\psi$ with a non-uniform cell energy distribution must consist exclusively of energy eigenstates with the same cell energy distribution. With respect to MITE, Example 3 behaves like Example 2 because MITE does not depend on the choice of the $\hat M_j$, which was the only difference between the two examples. That is, none of the $\phi_\alpha$ lie in MITE, and approach to MITE does not occur. \bigskip \noindent{\bf Example 4.} Take again the same Hilbert space and Hamiltonian as before, but now let us include among the $\hat M_j$ \emph{both} the $x$-spin and the $z$-spin on the macro level. That is, we include coarse-grained magnetization operators for each cell in the $x$- and the $z$-direction (a little adjusted so as to make them all commute). This is a natural choice that reflects better what can macroscopically be measured. Then, again, none of the energy eigenstates lie in MITE, and approach to MITE does not occur. 
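\bigskip \noindent{\bf Numerical aside (sketch).} The mechanisms at work in Examples 2--4 can be checked in a few lines of code. The sketch below (Python/NumPy assumed; the chain length $N=10$ and field strength $W=1$ are hypothetical toy values) writes down $\hat H_2$, whose eigenstates are $z$-product states with pure one-spin reduced density matrices and hence never in MITE, and follows the $x$-magnetization of $|\!\!\rightarrow\rangle^{\otimes N}$, which dephases toward zero while the $z$-configuration, and with it any profile of cell energies, stays frozen.
\begin{verbatim}
# Toy version of Examples 2-4: N non-interacting spins in random fields,
# H_2 = sum_i h_i sigma_i^z, which is diagonal in the z product basis.
import numpy as np
rng = np.random.default_rng(2)

N, W = 10, 1.0
h = rng.uniform(-W, W, size=N)

# sigma_i^z eigenvalue of each computational basis state (bit 0 -> +1)
bits = (np.arange(2**N)[:, None] >> np.arange(N)[None, :]) & 1
sz = 1 - 2 * bits                          # shape (2^N, N), entries +/-1
energies = sz @ h                          # diagonal of H_2

# (i) For generic h, every eigenstate is a z product basis state, so its
#     one-spin reduced density matrix is a pure projector, far from
#     rho_mc_S = I/2: none of the eigenstates is in MITE.

# (ii) The x-magnetization of |-> >^{otimes N} dephases under H_2, while
#      the z-configuration (hence any cell energy profile) is conserved.
psi0 = np.full(2**N, 2**(-N/2), dtype=complex)      # |-> >^{otimes N}
for t in [0.0, 2.0, 10.0, 50.0]:
    psi_t = np.exp(-1j * energies * t) * psi0
    # <sigma_i^x> = sum_b conj(psi[b]) * psi[b with bit i flipped]
    mx = [np.real(np.vdot(psi_t, psi_t[np.arange(2**N) ^ (1 << i)]))
          for i in range(N)]
    print(t, np.mean(mx))   # 1 at t=0, then small values of order 1/sqrt(N)
\end{verbatim}
We now return to Example~4.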
Some $\phi_\alpha$ (those with a macroscopically non-uniform density of $|\!\!\uparrow\rangle$ factors) will be (approximately) orthogonal to $\mathscr{H}_{\mathrm{eq}}$ and thus clearly out of MATE; $|\!\!\rightarrow\rangle^{\otimes N}$ (or rather, its normalized projection to $\mathscr{H}_{\mathrm{mc}}$) will again be an example of a state out of MATE that approaches MATE. So, some non-equilibrium states thermalize, but a non-zero temperature gradient cannot relax. \bigskip \noindent{\bf Example 5.} For a system that is less trivially localized, let us now add some nearest neighbor interactions to this spin chain model, as well as possibly a transverse field. For example, Imbrie \cite{Imbrie} adds non-random Ising interactions and a transverse field: \begin{equation} \hat{H_5}=\sum_i (J\hat\sigma_i^z\hat\sigma_{i+1}^z + \Gamma\hat\sigma_i^x + h_i \hat\sigma_i^z) ~. \end{equation} For $W>0$ and $\Gamma$ small enough, he shows \cite{Imbrie} under plausible assumptions that this system remains fully many-body localized (although the precise definition of MBL that he uses differs from ours in ways that we expect are not important for the present discussion). In this regime, for any small local perturbation of $\hat H_2$ one can define localized conserved operators $\hat\tau_i^z$ that all mutually commute and also commute with the resulting Hamiltonian $\hat H_5$ \cite{hno,ros}. These operators $\hat\tau_i^z$ are made by ``dressing'' each $\hat\sigma_i^z$ with multi-spin operators that are localized near site $i$. This means that the norm of any such dressing typically falls off exponentially with the distance of the farthest spin used in the dressing and the probability of having strong long-range dressing also falls off exponentially with the distance. In terms of these $\{\hat\tau_i^z\}$, the Hamiltonian of this more generic system can be written as \cite{hno} \begin{equation} \hat{H_5}=\sum_i \tilde h_i \hat\tau_i^z + \sum_{i<j} J_{ij}\hat\tau_i^z\hat\tau_j^z + \sum_{i<j<k}K_{ijk}\hat\tau_i^z\hat\tau_j^z\hat\tau_k^z+ \ldots ~, \end{equation} where $\tilde h_i$ is the local effective random field, and the interactions $J_{ij}$, $K_{ijk}$, etc. typically fall off exponentially with the distance between the two farthest operators involved, as does the probability of such a coupling being strong. Although $\hat H_5$ has more interactions than $\hat H_2$, it is similarly integrable with a complete set of localized conserved operators, the $\{\hat\tau_i^z\}$. And $\hat H_5$ has all the properties outlined above for $\hat H_2$, including some eigenstates that fail to be in MATE, and having all highly-excited eigenstates fail to be in MITE. It remains an open question whether or not all systems that are MBL have this structure, with a complete set of localized conserved operators. No detailed description of how MBL would work otherwise has yet been proposed. \section{Further Aspects of MITE and MATE} \label{sec:overview} \subsection{Quantitative MATE} \label{sec:MATE} In this section, we focus on the practical size of $\varepsilon$ and $\delta$ in \eqref{MATEdef} and \eqref{dominant}; that is, of $\varepsilon=1-\dim\mathscr{H}_{\mathrm{eq}}/\dim\mathscr{H}_{\mathrm{mc}}$ (or, classically, $\varepsilon=1-\vol\Gamma_{\mathrm{eq}}/\vol\Gamma_{\mathrm{mc}}$), and of the $\delta$ that quantifies how far $\tr(\hat\rho \hat P_{\mathrm{eq}})$ can deviate from 1 in MATE. 
First, we stated in \eqref{Gammaeqsize} and \eqref{Hilberteqsize} that \begin{equation}\label{MATEepsilon} \varepsilon \approx \exp(-10^{-15}N)\,. \end{equation} The estimate is similar in the classical and in the quantum case. To obtain it classically, we partition the available 3-volume $\Lambda\subset \mathbb{R}^3$ into $m$ (say, $10^9$) cells $\Lambda_j$ of equal volume, consider simply the configuration space $\Lambda^N$ instead of phase space, use the uniform distribution over $\Lambda^N$, and take the macro variables to be $M_j= [N_j/N\, \Delta M_j] \,\Delta M_j$, where $N_j$ is the number of particles in $\Lambda_j$ (and, say, $\Delta M_j=10^{-12}$). Then $M_j$ has equilibrium value $\nu_j^{\mathrm{eq}}=1/m$ (and the relative resolution is $\Delta M_j/\nu_j^{\mathrm{eq}}=10^{-3}$); the distribution of $N_j$ is binomial with parameters $N$ and $m^{-1}$ and thus, if $N$ is large, approximately Gaussian with parameters $\mu=N/m$ and $\sigma^2=m^{-1}(1-m^{-1})N\approx N/m$. For $M_j$ to deviate from its equilibrium value requires that $N_j$ deviates from $\mu$ by more than $N \, \Delta M_j$, i.e., by more than $\sqrt{mN}\, \Delta M_j$ standard deviations, which has probability less than $p:=\exp(-mN\, \Delta M_j^2)$. Since the $N_j$ are approximately independent, the probability that \emph{any} of the $M_j$ deviates from its equilibrium value is approximately $mp$, which here is still of rough order of magnitude $\exp(-10^{-15}N)$. In this example we have chosen numbers appropriate for a truly macroscopic system (say, $N\geq 10^{20}$) and require equilibrium values to a rather high resolution in all of a rather large number of cells. The numbers can reasonably be changed by many orders of magnitude to consider much smaller systems, to demand equilibrium values to different levels of precision, and to divide the system into different numbers of cells. At some point $N$ becomes too small to allow room for a reasonable definition of MATE. We now turn to the question of how big $\delta$ should reasonably be chosen. Not too small, or else $\mathrm{MATE}$ will not contain the majority of $\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, and not too large, or else $\psi\in\mathrm{MATE}$ will have a significant component orthogonal to $\mathscr{H}_{\mathrm{eq}}$ and will not mean much. That is, \begin{equation} \varepsilon \ll \delta \ll 1\,. \end{equation} Since a realistic value of $\varepsilon$ is $10^{-10^5}$ or smaller (taking $N\geq 10^{20}$), there is a wide range of possibilities for $\delta$. Since $\delta$ represents the maximal probability, in an ideal quantum measurement of $\hat P_{\mathrm{eq}}$ on $\psi\in\mathrm{MATE}$, of obtaining the outcome 0 and projecting $\psi$ to a subspace orthogonal to $\mathscr{H}_{\mathrm{eq}}$, we may want to choose this probability so small that we can expect never to observe such an outcome. Borel \cite[Chap.~6]{Borel} has argued that events with probability $<10^{-200}$ can be assumed to never occur in our universe, so we may want to choose $\delta<10^{-200}$. A natural choice is $\delta = \sqrt{\varepsilon\,}$. \subsection{Quantitative MITE} \label{sec:MITE} As already mentioned, the statement that most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ are in $\mathrm{MITE}$ is based on canonical typicality.
A tighter estimate of canonical typicality than the one in Remark~\ref{rem:can} is provided by a theorem due to Popescu, Short, and Winter \cite{PSW05,PSW06}, which asserts that, for any Hilbert spaces $\mathscr{H}_1$, $\mathscr{H}_2$ of dimensions $d_1$, $d_2$, any subspace $\mathscr{H}_R\subseteq \mathscr{H}_1 \otimes \mathscr{H}_2$ of dimension $d_R$, and any $\tilde\varepsilon>0$, \begin{equation}\label{cantyp} u_R \left\{ \psi \in \mathbb{S}(\mathscr{H}_R): \Bigl\|\hat\rho_1^\psi - \tr_2 \hat\rho_R \Bigr\| \geq \tilde\varepsilon + \frac{d_1}{\sqrt{d_R}} \right\} \leq 4 \exp\Bigl(-\frac{d_R\tilde\varepsilon^2}{18\pi^3}\Bigr)\,. \end{equation} Let us explain how this estimate can be applied. We can immediately consider several systems $S_1,\ldots,S_r$ simultaneously and ask, Under which conditions does the set \begin{equation}\label{MITElistdef} M=\bigcap_{i=1}^r\Bigl\{ \psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}}): \hat\rho_{S_i}^\psi \approx \hat\rho^{\mathrm{mc}}_{S_i} \Bigr\} \end{equation} contain most wave functions? Here, we take the relation $\hat\rho_{S_i}^\psi\approx \hat\rho_{S_i}^{\mathrm{mc}}$ to mean \begin{equation}\label{normepsilon} \bigl\|\hat\rho_{S_i}^\psi-\hat\rho_{S_i}^{\mathrm{mc}} \bigr\|<\varepsilon \end{equation} for some fixed $0<\varepsilon\ll 1$. (This $\varepsilon$ is independent of the quantity called $\varepsilon$ for MATE in \eqref{MATEepsilon} and \eqref{dominant}.) Let $d_i=\dim\mathscr{H}_{S_i}$, $d_{\mathrm{mc}}=\dim\mathscr{H}_{\mathrm{mc}}$, and let $u_{\mathrm{mc}}$ denote the uniform probability distribution over $\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$. From the theorem \eqref{cantyp} we obtain: If \begin{equation}\label{mostMITEcondition} d_i< \tfrac{1}{2} \varepsilon \sqrt{d_{\mathrm{mc}}} \text{ for all $i$}\,, \end{equation} then \begin{equation}\label{mostMITE} u_{\mathrm{mc}}(M) \geq 1- 4r \exp\Bigl(-\frac{d_{\mathrm{mc}}\varepsilon^2}{72\pi^3}\Bigr)\,. \end{equation} Indeed, this follows by setting $\tilde\varepsilon=\varepsilon/2$, $\mathscr{H}_R=\mathscr{H}_{\mathrm{mc}}$, $\mathscr{H}_1=\mathscr{H}_{S_i}$, and $\mathscr{H}_2=\mathscr{H}_{S_i^c}$. By assumption \eqref{mostMITEcondition}, the total error $\tilde\varepsilon + d_i/\sqrt{d_{\mathrm{mc}}}$ is less than $\varepsilon$, so the probability that, for a particular $i$, $\|\hat\rho_{S_i}^\psi-\hat\rho_{S_i}^{\mathrm{mc}}\|$ exceeds $\varepsilon$ is at most \begin{equation} 4 \exp\Bigl(-\frac{d_{\mathrm{mc}}\varepsilon^2}{72\pi^3}\Bigr)\,. \end{equation} The probability that this happens for any $i=1,\ldots,r$ is at most $r$ times this quantity, which completes the proof of \eqref{mostMITE}. It may be surprising that the subsystems $S_i$ do not have to be very small for canonical typicality to hold but can, in fact, take up almost half of the whole system. For example, suppose that the system consists of a lattice of $N\gg 1$ spins, so $\dim \mathscr{H} =2^N$; suppose further that the energy shell arises from partitioning the energy axis into $10^{60} \approx 2^{200}$ intervals, so that, roughly, $d_{\mathrm{mc}} = 2^{N-200}$. If a subsystem $S_i$ consists of some subset of the $N$ spins comprising $49\%$ of the lattice sites, then \begin{equation} d_i=2^{0.49N} \ll 2^{0.5N-100} = \sqrt{d_{\mathrm{mc}}}\,, \end{equation} so \eqref{mostMITEcondition} is satisfied. In fact, if we consider $r=10$ such subsystems of equal size and $\varepsilon = 10^{-12} \approx 2^{-40}$, then \eqref{mostMITEcondition} is satisfied for $N>14100$. (This leads to the question of how large $r$ can be in \eqref{MITElistdef}.
Continuing with the numbers just mentioned but dropping the assumption $r=10$, we obtain from \eqref{mostMITE} that $u_{\mathrm{mc}}(\mathrm{MITE}_{S_1,\ldots,S_r})\geq 1-10^{-30}$ for $r<\exp(2^{N-292}-71)$, which for large $N$ allows us to include \emph{all} sets of lattice sites comprising no more than $49\%$ of all sites. However, the definition of MITE in Section~\ref{sec:MITE1} required the appropriate behavior only for spatial regions of diameter $\leq \ell_0$, and as mentioned in Remark~\ref{rem:subsub}, a rather small number $r$ of regions of near-half volume, say $r=8$ for a system in a cube-shaped volume, will contain all regions of small diameter.) So, for a subsystem $S$ comprising $49\%$ of the lattice sites, we have that for most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, \begin{equation} \hat\rho^\psi_S \approx \hat\rho^{\mathrm{mc}}_S \not\approx \hat\rho^{(\beta)}_S\,. \end{equation}
That is, while the density matrix obtained from $\psi$ is close to that from the micro-canonical ensemble, the latter is not necessarily close to that obtained from the canonical ensemble for any $\beta$. In fact, the canonical density matrix arises from $\hat\rho^{\mathrm{mc}}$ for \emph{small} subsystems $S$ (if the interaction between $S$ and $S^c$ is not too large), and $49\%$ of the lattice sites is not small enough for this effect to occur. What about subsystems $S$ greater than half of the whole system (say, comprising $51\%$ of the lattice sites, so $S^c$ is still a macroscopic system)? Is $\hat\rho^\psi_S\approx \hat\rho^{\mathrm{mc}}_S$ still true for most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$? The condition \eqref{mostMITEcondition} is then not fulfilled, but that may have been a merely sufficient condition. So here is an argument showing that canonical typicality will usually fail for subsystems greater than half of the whole. Suppose $\mathscr{H}_{\mathrm{mc}} = \mathscr{H} =\mathscr{H}_S\otimes \mathscr{H}_{S^c}$ with $d=\dim \mathscr{H}=2^N$ and $d_S=\dim\mathscr{H}_S = 2^{0.51N}$. For typical $\psi\in\mathbb{S}(\mathscr{H})$, by canonical typicality $\hat\rho^{\psi}_{S^c}\approx \hat\rho^{\mathrm{mc}}_{S^c} = d_{S^c}^{-1} \hat I_{S^c} = (d_S/d) \hat I_{S^c}$. By the Schmidt decomposition, $\hat\rho^\psi_S$ has the same nonzero eigenvalues as $\hat\rho^\psi_{S^c}$, which are $d/d_S=2^{0.49N}$ nonzero eigenvalues of size $d_S/d=2^{-0.49N}$, whereas $\hat\rho^{\mathrm{mc}}_S = d_S^{-1} \hat I_S$ has $d_S=2^{0.51N}$ nonzero eigenvalues of size $2^{-0.51N}$, so $\hat\rho^\psi_S\not\approx \hat\rho^{\mathrm{mc}}_S$. Realistic values for $d_{\mathrm{mc}}$ are \begin{equation}\label{dmcvalues} \text{between }d_{\mathrm{mc}} = 10^{N/10} \quad \text{and} \quad d_{\mathrm{mc}} = 10^{30 N} \end{equation} (and thus something like $d_{\mathrm{mc}}=10^{10^{20}}$ or larger). Here are simple reasonings leading to these values. First, consider $N$ spins, so $\dim \mathscr{H}=2^N=10^{0.3 \, N}$, and suppose $d_{\mathrm{mc}}=(\dim\mathscr{H})^{1/2}$. Second, for a single particle of mass $m$ in 1 dimension enclosed in a box of length $L$, the energy levels are $E_n = \hbar^2 \pi^2 n^2/ 2m L^2$. Thus, the energy levels of $N$ non-interacting particles in a 3-dimensional cubic box of side length $L$ are $(\hbar^2 \pi^2/2m L^2)\sum_{a=1}^3 \sum_{i=1}^N n_{ia}^2$, and the number $n$ of levels up to energy $E$ is approximately equal to the volume of the part with positive coordinates of a $3N$-dimensional ball of radius $R=L\sqrt{2mE}/\hbar\pi$ around the origin; this volume is $\approx 2^{-3N}\pi^{3N/2}R^{3N}/(3N/2)!\approx (e\pi R^2/6N)^{3N/2}$. For $E=\tfrac{3}{2}NkT$ with $T$ the temperature and $k$ Boltzmann's constant, we obtain $n\approx (3eL^2mkT/2\pi\hbar^2)^{3N/2}$. Thus, the number of levels in an energy interval of size $\Delta E = \tfrac{3}{2}Nk\Delta T$ is $n_{\Delta T}\approx (3N/2)(3eL^2mk/2\pi\hbar^2)^{3N/2} \, T^{3N/2-1}\, \Delta T$. For $\Delta T = 10^{-2}\,\mathrm{K}$, $T=300\,\mathrm{K}$, $L=1\,\mathrm{m}$, and $m=5\times 10^{-26}\,\mathrm{kg}$ (the mass of a nitrogen molecule), we obtain that $n_{\Delta T}\approx N 10^{33.6 N - 4.3}$; for a cubic meter of air, $N=2\times 10^{25}$, so $n_{\Delta T} \approx 10^{10^{25}}$.
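As a quick plausibility check of the last estimate, the following short Python sketch (ours, not part of the original argument) evaluates the per-particle exponent and the prefactor of $n_{\Delta T}$ from the formula printed above, using the stated values of $T$, $\Delta T$, $L$, and $m$; it reproduces the quoted constants $33.6$ and $-4.3$.
\begin{verbatim}
# Sketch (ours): reproduce the order-of-magnitude constants in
# n_{Delta T} ~ N * 10^(33.6 N - 4.3) from the formula given above,
# with T = 300 K, Delta T = 10^-2 K, L = 1 m, m = 5e-26 kg.
import math

hbar = 1.054571817e-34   # J s
k    = 1.380649e-23      # J / K
T, dT = 300.0, 1e-2      # K
L, m  = 1.0, 5e-26       # m, kg

# n(E) ~ (3 e L^2 m k T / (2 pi hbar^2))^(3N/2), as printed above
base = 3 * math.e * L**2 * m * k * T / (2 * math.pi * hbar**2)
per_particle = 1.5 * math.log10(base)   # coefficient of N, ~ 33.6
offset = math.log10(1.5 * dT / T)       # remaining prefactor, ~ -4.3
                                        # (up to an additional log10 N)
print(f"log10 n_DT ~ {per_particle:.1f} * N {offset:+.1f} + log10(N)")
\end{verbatim}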
\subsection{MITE for Abstract Subsystems} A natural mathematical generalization that is often interesting to consider is based on dropping the idea that $S$ corresponds to a region in 3-space and regarding $S$ as an abstract subsystem defined by any splitting of Hilbert space into a tensor product, \begin{equation} \mathscr{H}_{\mathrm{mc}} \subseteq \mathscr{H}_S \otimes \mathscr{H}_{S^c}\,, \end{equation} where $S$ and $S^c$ can be thought of as just labels for the two factor spaces. For example, $S$ may comprise the spin degrees of freedom and $S^c$ the position degrees of freedom, or $S$ may comprise the oxygen atoms and $S^c$ all other atoms in the system. Then, canonical typicality as described in Remark~\ref{rem:can} or in \eqref{cantyp} still applies: if $r$ is not too large and each $S_i$ is not too large ($\dim\mathscr{H}_{S_i} \ll \sqrt{\dim \mathscr{H}_{\mathrm{mc}}}$), then most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ are ``in MITE relative to $S_1,\ldots,S_r$,'' i.e., lie in the set \eqref{MITElistdef}. \subsection{MITE for Most Abstract Subsystems} One can also consider the set $\mathrm{MITE}_{\text{most}}$ comprising those $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ for which $\hat{\rho}_S^\psi \approx \hat\rho^{\mathrm{mc}}_S$ holds for \emph{most} abstract subsystems $S$ with $\dim \mathscr{H}_S \leq d_0$. That is, instead of demanding $\psi\in\mathrm{MITE}_{S_i}$ for $r$ \emph{particular} subsystems $S_i$, we demand that $\psi\in\mathrm{MITE}_S$ for \emph{most} $S$. The key fact is that if $d_0\ll \sqrt{\dim\mathscr{H}_{\mathrm{mc}}}$, then \begin{equation} \text{most }\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})\text{ lie in }\mathrm{MITE}_{\text{most}}\,. \end{equation} This claim follows from canonical typicality. Indeed, let $\mathscr{S}$ be the set of all abstract subsystems $S$ of dimension $\leq d_0$, and let $\mu$ be the normalized uniform distribution over $\mathscr{S}$. Since for every $S\in\mathscr{S}$, most $\psi$ lie in $\mathrm{MITE}_S$ by canonical typicality, it follows from Fubini's theorem that under the product measure $\mu\times u_{\mathrm{mc}}$ on $\mathscr{S}\times\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, the set of pairs $(S,\psi)$ such that $\psi\in\mathrm{MITE}_S$ has measure close to 1, and further that, for most $\psi$, $\mu\{S\in\mathscr{S}: \psi\in\mathrm{MITE}_S\}\approx 1$. On the other hand, a pure state $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ cannot simultaneously lie in $\mathrm{MITE}_{S}$ for \emph{every} abstract subsystem $S$ of dimension $\leq d_0$. Put differently, for any given $\psi$ we can construct a subsystem $S$ for which $\psi$ is atypical. The simplest way of seeing this is to start with any given subsystem $S'$; then to find a $\psi'\in \mathscr{H}_{\mathrm{mc}}$ that is atypical for $S'$ in that $\hat\rho^{\psi'}_{S'}$ is far from $\hat\rho^{\mathrm{mc}}_{S'}$, for example $\psi'\approx \varphi\otimes \chi$ with $\varphi\in\mathscr{H}_{S'}$ and $\chi\in\mathscr{H}_{(S')^c}$; then to find a unitary operator $\hat U$ on $\mathscr{H}_{\mathrm{mc}}$ so that $\hat U \psi' = \psi$; and finally to define $S$ by applying $\hat U$ to $S'$. Another counterexample is described in \cite{Lych}. \subsection{Remarks} \begin{enumerate} \setcounter{enumi}{\theremarks} \item \emph{Superpositions of contributions from different energy shells.} Of course, some vectors in $\mathscr{H}$ have significant contributions from $\mathscr{H}_{\mathrm{mc}}$ for several macroscopically different energies $E$. 
In this paper, we focus on vectors in a single energy shell, as the implications for such superpositions are straightforward. \item \emph{Local thermal equilibrium.} One often considers situations of local thermal equilibrium, in which for example the temperature is not constant throughout the volume occupied by the system, but varies slowly in space and time, and small regions can be regarded as being in thermal equilibrium. For such situations, there are then two different notions of local thermal equilibrium, corresponding to MITE and MATE. \item \emph{Macro values are almost constant in the micro-canonical ensemble, micro values are random.} For every observable $\hat O$, $\hat\rho^{\mathrm{mc}}$ defines a probability distribution over its eigenvalues, the micro-canonical distribution; viz., the probability of eigenvalue $\alpha$ being \begin{equation}\label{probalphamc} p^{\mathrm{mc}}(\alpha) = \tr(\hat\rho^{\mathrm{mc}}\hat{P}_\alpha) \end{equation} with $\hat{P}_\alpha$ the eigenprojection for eigenvalue $\alpha$ of $\hat O$. For a (coarse-grained) macro observable $\hat M$, this distribution is almost constant, i.e., one value $\alpha_0$ has probability close to 1, and this value $\alpha_0$ is the thermal equilibrium value. For micro observables, in contrast, the distribution is not predominantly concentrated on a single value. For a macro observable $\hat O = \hat M$ again, when considering a pure state $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, the distribution over the eigenvalues, $p^\psi(\alpha) = \scp{\psi}{\hat{P}_\alpha|\psi}$, may be very different from $p^{\mathrm{mc}}(\alpha)$ for exceptional $\psi$, but for most $\psi$ it must again be predominantly concentrated on $\alpha_0$ because the micro-canonical distribution \eqref{probalphamc} equals the average of the $p^\psi$ over all $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$. \item \emph{Non-macroscopic systems.} While the thermodynamic ensembles $\hat\rho^{(\beta)}$ and $\hat\rho^{\mathrm{mc}}$ (or classically $\rho^{(\beta)}$ and $\rho^{\mathrm{mc}}$) can also be considered for a system that is non-macroscopic to begin with (say, that comprises only few particles), MATE (and its classical analog) are not defined for such a system because it does not have macro variables.\footnote{If we make an arbitrary choice of variables $\hat M_j$ instead, then these variables will usually not commute, not even approximately; and if they do commute, so that they define an orthogonal decomposition $\mathscr{H}=\oplus_\nu \mathscr{H}_\nu$, then the $\mathscr{H}_\nu$ will not feature the drastic differences in dimension (or the $\Gamma_\nu$ defined by an arbitrary choice of classical variables $M_j$ will not feature the drastic differences in volume) typical of macro-states, and there will usually not be a single macro-state that has 99.99\%\ of the size of the energy shell.} That is, the notion of MATE cannot be applied. Concerning MITE, if the system is too small then canonical typicality will not apply (since canonical typicality requires that the ``bath,'' i.e., the complement of the subsystem, be large), and the set $\mathrm{MITE}$ may well be empty. However, MITE is well approximated in surprisingly small systems, such as for example the six atom, six site Bose--Hubbard chain studied experimentally in \cite{KTL16}. 
\item{\it Other Measures of Typicality Than Micro-Canonical.} We have mentioned that most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ are in both MITE and MATE; put differently, MITE and MATE are typical properties relative to $u_{\mathrm{mc}}$, the uniform distribution over $\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$. This distribution can be called the \emph{micro-canonical distribution} of wave functions, as it plays a role analogous to the micro-canonical distribution of phase points in classical mechanics. This brings us to the question whether, instead of starting out from $u_{\mathrm{mc}}$, we could have started from another distribution. Is there a distribution of wave function analogous to the \emph{canonical} distribution of phase points in classical mechanics? And are MITE and MATE typical relative to that distribution? We conjecture that the answers are yes and yes. The natural candidate for the canonical distribution of wave functions is the measure known as $GAP(\hat\rho^{(\beta)})$ (``$G$aussian $A$djusted $P$rojected measure''). For any density operator $\hat\rho$ on $\mathscr{H}$, the measure $GAP(\hat\rho)$ \cite{JRW94,GLTZ06b,Rei08b,GLMTZ15}, called the ``Scrooge measure'' in \cite{JRW94}, is the most spread-out distribution on $\mathbb{S}(\mathscr{H})$ that has density operator $\hat\rho$. For comparison, the least spread-out distribution would be concentrated on an eigenbasis of $\hat\rho$ with weights given by the eigenvalues of $\hat\rho$. When $\hat\rho$ is proportional to a projection, then $GAP(\hat\rho)$ is uniform over the sphere in the range of that projection; thus, $GAP(\hat\rho^{\mathrm{mc}})=u_{\mathrm{mc}}$. It turns out \cite{GLTZ06b,GLMTZ15} that for most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$, the conditional wave function of a small subsystem $S$ is approximately $GAP(\hat\rho_S^{\mathrm{mc}})$-distributed; in this way, this distribution is a quantum analog of the canonical distribution of phase points in classical mechanics, and one can say that $GAP(\hat\rho^{(\beta)})$ is the thermal equilibrium distribution of the wave function. We conjecture that most $\psi$ relative to $GAP(\hat\rho^{(\beta)})$ have $\hat\rho^\psi_S \approx \hat\rho^{(\beta)}_S$ for small subsystems $S$. This parallel between the canonical and the micro-canonical distribution of wave functions would be some kind of equivalence of ensembles. However, we note that a $u_{\mathrm{mc}}$-typical $\psi^{\mathrm{mc}}$ looks quite different from a $GAP(\hat\rho^{(\beta)})$-typical $\psi^{(\beta)}$: While $\psi^{\mathrm{mc}}$ lies in $\mathscr{H}_{\mathrm{mc}}$, $\psi^{(\beta)}$ does not; while the coefficients of $\psi^{\mathrm{mc}}$ in the energy eigenbasis $\{\phi_\alpha\}$ are (with high probability) all of roughly equal magnitude (or zero), the coefficients $\scp{\phi_\alpha}{\psi^{(\beta)}}$ have rather different magnitudes, whose squares are roughly proportional to $e^{-\beta E_\alpha}$; as a consequence, more coefficients are nonzero, and more are significantly nonzero than for $\psi^{\mathrm{mc}}$. In fact, the energy uncertainty of $\psi^{\mathrm{mc}}$ is of order $1/\beta$ (independently of $N$ if we keep $\beta$ fixed), while the energy uncertainty of $\psi^{(\beta)}$ is proportional to $\sqrt{N}$; both are much smaller than the size $\Delta E$ of the energy window, which is proportional to $N$. 
\end{enumerate} \setcounter{remarks}{\theenumi} \section{Exceptional Cases} \label{sec:exceptional} There are at least two exceptional situations in which a dominant macro-state $\Gamma_{\mathrm{eq}}$ or $\mathscr{H}_{\mathrm{eq}}$ does not exist. First, at a first-order phase transition, such as in the ferromagnetic Ising model in a vanishing external magnetic field, some $\Gamma_\nu$ (or $\mathscr{H}_{\nu}$) has the appropriate majority of spins up and some $\Gamma_{\nu'}$ (or $\mathscr{H}_{\nu'}$) has the appropriate majority of spins down, each having nearly 50\%\ of the volume of $\Gamma_{\mathrm{mc}}$ (of the dimension of $\mathscr{H}_{\mathrm{mc}}$) for a suitable energy interval. Second, if the size of the system is exorbitant, say its volume is greater than $10^{10^{10}}$ cubic meters\footnote{Of course, already at much smaller sizes than that, another phenomenon that we are neglecting in this paper becomes very relevant: gravity. It was for this reason that Onsager wrote \cite{Ons}: ``[T]he common concept of a homogeneous volume phase implies dimensions that are large compared to the molecules and small compared to the moon.''} (which is about $10^{10^{10}}$ times the volume of the known universe, which is $10^{80}$ cubic meters), while we keep the size of the cells $\Lambda_j$ small on the macro scale, then the number of cells will be correspondingly large, and it is to be expected by chance alone that a uniformly-randomly selected phase point in $\Gamma_{\mathrm{mc}}$ will possess a cell $\Lambda_j$ somewhere in which a macroscopic observable $M_j$ deviates significantly from its average value. As a consequence, the set where \emph{every} $M_j$ assumes its average value will not have most of the volume. Likewise, for a randomly selected $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ in such an exorbitantly large system, the joint probability distribution that $\psi$ defines over the eigenvalues $\nu_j$ of the macroscopic observables $\hat M_j$ will not be overwhelmingly concentrated on a single $(\nu_1,\ldots,\nu_K)$. To obtain the estimate that $10^{10^{10}}$ cubic meters is the relevant volume (say, in the classical case), we subdivide the volume into $m$ cells of (say) cubic millimeter size, consider the volume filled with air at room conditions, which has $n\approx 2.5\times 10^{16}$ particles (i.e., $N_2$ molecules) per cubic millimeter, and ask whether the number of particles in any cell will be less than $0.999n$ or more than $1.001n$. Since for a random phase point, the particles will be essentially uniformly distributed over the volume, the number $N_i$ of particles in cell $i$ has a binomial distribution with parameters $nm$ and $m^{-1}$, which for large $n$ and $m$ is approximately Gaussian with mean $n$ and variance $n$. The probability that $N_i<0.999n$ or $N_i>1.001n$ is of order $e^{-(0.001n)^2/(2n)}=e^{-n/(2\times 10^6)}$, so for an appreciable probability that this happens for any cell anywhere, we need that $m \gtrsim e^{n/(2\times 10^6)} \approx 10^{10^{10}}$ (a short numerical check of this estimate is sketched below). This effect, that for exorbitantly large systems none of the $\Gamma_\nu$ or $\mathscr{H}_\nu$ is dominant, can be problematical when we want to take the thermodynamic limit and let the volume tend to infinity.
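The following minimal Python sketch (our own illustration, not the authors') carries out the Gaussian-tail arithmetic just described for the stated numbers.
\begin{verbatim}
# Sketch (ours): order-of-magnitude check that m >~ e^{n/(2x10^6)},
# i.e. roughly 10^{10^{10}} cubic-millimeter cells, are needed before
# a 0.1% density fluctuation becomes likely in some cell.
import math

n   = 2.5e16     # N2 molecules per cubic millimeter at room conditions
tol = 1e-3       # relative tolerance on the cell occupation number

# Gaussian tail: P(|N_i - n| > tol*n) ~ exp(-(tol*n)^2 / (2n))
log10_p = -(tol * tol * n / 2) / math.log(10)   # ~ -5.4e9 per cell
log10_m = -log10_p                              # need m * p ~ 1

print(f"log10 of per-cell probability ~ {log10_p:.2e}")
print(f"log10 of required number of cells m ~ {log10_m:.2e}")
# m ~ 10^(5.4e9), i.e. of the rough double-exponential order 10^{10^{10}}.
\end{verbatim}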
This problem can easily be dealt with, either by increasing the cell size and the tolerances $\Delta M_j$ as we take the limit, or by defining $\Gamma_{\mathrm{eq}}$ differently as the set of those $X\in\Gamma_{\mathrm{mc}}$ at which \emph{most}, but not \emph{all}, macro observables $M_j$ assume their thermal equilibrium values (and $\mathscr{H}_{\mathrm{eq}}$ as the subspace of $\mathscr{H}_{\mathrm{mc}}$ on which \emph{most}, but not \emph{all}, macro observables $\hat M_j$ assume their thermal equilibrium values). This effect also entails that the notion of MATE becomes meaningless for exorbitantly large systems (unless we increase cell size and tolerances or redefine $\mathscr{H}_{\mathrm{eq}}$), while MITE remains unaffected by this situation. Indeed, by virtue of the theorem of Popescu et al.~\cite{PSW05,PSW06} about canonical typicality (see Section~\ref{sec:MITE} above), the probability that for a (say, cubic-millimeter-sized) 3-cell $\Lambda_i$, $\|\hat\rho^\psi_{\Lambda_i} - \hat\rho^{\mathrm{mc}}_{\Lambda_i}\| > \varepsilon$ is, for fixed small $\varepsilon>0$, of order $\exp(-\varepsilon^2 d_{\mathrm{mc}})$ as $d_{\mathrm{mc}}=\dim\mathscr{H}_{\mathrm{mc}} \to\infty$. Thus, if we consider $m$ cells, the probability that any of them will be subject to a deviation $\|\hat\rho^\psi_{\Lambda_i} - \hat\rho^{\mathrm{mc}}_{\Lambda_i}\| > \varepsilon$ is at most $m \exp(-\varepsilon^2 d_{\mathrm{mc}})$, and since $d_{\mathrm{mc}}$ is of order $m^\lambda e^{\kappa m}$ with $\kappa,\lambda>0$ as we keep the cell size fixed while increasing the number of cells (and thus the system size), that probability gets small as $m\to \infty$. Thus, as $m\to\infty$ it holds with probability close to 1 for random $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ that \emph{all} cells will simultaneously be close to thermal equilibrium in the sense $\hat\rho^\psi_{\Lambda_i} \approx \hat\rho^{\mathrm{mc}}_{\Lambda_i}$. So this is another difference between MITE and MATE: MATE becomes meaningless for exorbitantly large systems (unless we change the cell size and tolerances, or the definition of $\mathscr{H}_{\mathrm{eq}}$) and MITE does not. As a consequence, since MATE but not MITE exists in classical mechanics for pure states, it is also a difference between the quantum and the classical case: for an exorbitantly large system, the notion of thermal equilibrium for pure states becomes problematical in classical mechanics but not (in the sense of MITE) in quantum mechanics. \section{Other Proposed Definitions of Thermal Equilibrium} \label{sec:otherdefs} \subsection{Tasaki's Version of MATE} \label{sec:TMATE} Tasaki \cite{Tas15,Tas15b} noted that there can be substantial practical difficulty in finding, for a specific example of a physical system, a realistic orthogonal decomposition \eqref{decomp} and proving that one of the macro-spaces $\mathscr{H}_\nu$ in $\mathscr{H}_{\mathrm{mc}}$ has $>99\%$ of the dimensions.
He suggested the following alternative definition (see \cite{DRMN06,Sug07} for earlier work in this direction), which is not strictly but approximately equivalent to MATE and which we call TMATE: For any collection $\hat M_1,\ldots, \hat M_K$ of self-adjoint operators (thought of as representing macro observables but not necessarily commuting), we say that a system with state $\hat\rho$ is in TMATE if and only if \begin{equation}\label{TMATEdef} \tr(\hat\rho \hat P_j) > 1-\delta \quad \forall j=1,\ldots,K, \end{equation} where \begin{equation} \hat P_j = 1_{[V_j-\Delta M_j,V_j+\Delta M_j]}(\hat M_j)\,, \end{equation} \begin{equation} V_j = \tr(\hat\rho^{\mathrm{mc}}\, \hat M_j) \end{equation} is the thermal equilibrium value of $\hat M_j$, and $1_A$ denotes the characteristic function (indicator function) of the set $A$. Note that $\hat P_j$ is the projection to the subspace spanned by the eigenspaces of $\hat M_j$ with eigenvalues within $V_j\pm \Delta M_j$; thus, $\tr(\hat\rho \hat P_j)$ is the probability of finding, in a quantum measurement of $\hat M_j$ on a system in state $\hat\rho$, a value within $V_j \pm \Delta M_j$. In particular, the set of pure states in TMATE is given by \begin{equation}\label{TMATEpuredef} \mathrm{TMATE} = \bigcap_{j=1}^K \Bigl\{ \psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}}): \scp{\psi}{\hat P_j|\psi}>1-\delta \Bigr\}\,. \end{equation} If, for each $j$, the range of $\hat P_j$ has almost full dimension (as did $\mathscr{H}_{\mathrm{eq}}$ in our previous considerations, and as should be the case for a macro observable $\hat M_j$ and a macroscopic tolerance $\Delta M_j$), then most $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ lie in \eqref{TMATEpuredef}. That is, quantitatively, if the dimension of the range of $\hat P_j$ is greater than $(1-\varepsilon/\delta) \dim \mathscr{H}_{\mathrm{mc}}$ for each $j$, then $u_{\mathrm{mc}}(\mathrm{TMATE}) > 1-K\varepsilon/\delta$, which is close to 1 if $\varepsilon \ll \delta/K$. The basic point of TMATE is that the procedures involved in the choice of the subspace $\mathscr{H}_{\mathrm{eq}}$, such as rounding off the $\hat M_j$ to make them commute, are not crucial for obtaining a workable version of MATE, so that TMATE is simpler than MATE as defined in \eqref{MATEdef} from the perspective of practical computation, while keeping the essence of the concept of MATE. \subsection{Von Neumann's Proposed Definition} Von Neumann \cite{vN29} proposed a further definition of thermal equilibrium, inequivalent to MITE and MATE, that is also based on an orthogonal decomposition $\mathscr{H}_{\mathrm{mc}}=\oplus_\nu \mathscr{H}_\nu$ into the simultaneous eigenspaces of a commuting family $\{\hat M_1,\ldots, \hat M_K\}$ of macro observables. According to this definition, a system with pure state $\psi\in\mathbb{S}(\mathscr{H}_{\mathrm{mc}})$ is in thermal equilibrium if and only if \begin{equation}\label{normal} \|\hat P_\nu\psi\|^2 \approx \frac{\dim \mathscr{H}_\nu}{\dim \mathscr{H}_{\mathrm{mc}}} \quad \text{for all }\nu, \end{equation} where $\hat P_{\nu}$ is the projection to $\mathscr{H}_\nu$ and $a\approx b$ can be taken to mean (say) $0.99<a/b<1.01$. See \cite{GLMTZ10,GLTZ10} for discussion of the property \eqref{normal}, called there \emph{normality}. Suppose that among the $\mathscr{H}_\nu$ there is a dominant subspace $\mathscr{H}_{\mathrm{eq}}$.
Then von Neumann's equilibrium states all lie in $\mathrm{MATE}$, and their macroscopic behavior is practically indistinguishable from other states in $\mathrm{MATE}$, which is why $\mathrm{MATE}$ then seems like the more natural definition. Von Neumann considered only the case in which there is no dominant $\mathscr{H}_\nu$, which occurs if one takes the inaccuracies $\Delta M_j$ in the coarse-graining involved in the construction of the macro observables $\hat M_j$ smaller than the typical size of fluctuations in thermal equilibrium. That is, a smaller choice of $\Delta M_j$ corresponds to a finer partition of $\Gamma$ into $\Gamma_\nu$ or of $\mathscr{H}$ into $\mathscr{H}_\nu$, and for sufficiently small $\Delta M_j$, none of the $\Gamma_\nu$ or $\mathscr{H}_\nu$ will have 99\%\ of the size of the energy shell. According to the estimate \begin{equation} \varepsilon=e^{-mN\Delta M_j^2} \end{equation} of Section~\ref{sec:MATE}, this may happen if \begin{equation} \Delta M_j \lesssim \frac{1}{\sqrt{mN}}\,, \end{equation} i.e., if \begin{equation} \text{relative error} = \frac{\Delta M_j}{\nu_j^{\mathrm{eq}}} = m\Delta M_j \lesssim \sqrt{\frac{m}{N}} \end{equation} with $\nu_j^{\mathrm{eq}}$ the eigenvalue of $\hat M_j$ on $\mathscr{H}_{\mathrm{eq}}$; this means a relative error of $3\times 10^{-6}$ or less for $m=10^9$ (number of 3-cells) and $N=10^{20}$ (number of particles). That a macroscopic measurement could determine the number of particles in a given cubic millimeter of a macroscopic system (or the amount of energy, or charge, or magnetization in that volume) with an accuracy of 6 digits seems not realistically feasible, so the assumption of such a small $\Delta M_j$ is perhaps overly stretching the idea of ``macroscopic.'' This leads us to another difference between MITE and MATE: If the $\Delta M_j$ are chosen so small (as von Neumann had in mind) that none of the macro-spaces $\mathscr{H}_\nu$ becomes dominant, then MATE cannot be applied any more, while MITE still can. This situation is parallel to that discussed in Section~\ref{sec:exceptional} above. It seems that Reimann's \cite{Rei15b} recent approach using a typical observable $\hat A$ is closely related to von Neumann's if we consider an orthogonal decomposition $\oplus_\nu \mathscr{H}_\nu$ that arises not from commuting macro observables but instead as the eigenspaces of the single observable $\hat A$. \section{Conclusions} \label{sec:conclusions} Arguably, MATE is the more immediate concept of thermal equilibrium. After all, thermal equilibrium is a notion of thermodynamics, and its meaning there is that the macro appearance of the system is stationary, and that temperature and chemical potential are spatially uniform (understood in terms of the spatial distribution of energy). This meaning corresponds to MATE, not to MITE. Moreover, the notion of thermal equilibrium is not exclusive to quantum mechanics, as thermal equilibrium is equally possible in classical mechanics, and in fact the concept originated in classical mechanics; so the definition of thermal equilibrium may be expected to be a general one that applies to both classical and quantum mechanics. This would be so for MATE but not for MITE (which does not exist in classical mechanics for pure states). 
On the other hand, since MITE is the stronger property, and since it is usually true that macroscopic quantum systems approach MITE (MBL systems being an exception), it is natural to consider MITE, and it would seem artificial not to regard it as a new kind of thermal equilibrium property emerging from quantum entanglement. For MBL systems, most energy eigenstates $\phi_\alpha$ have a short range of entanglement. Usually, some $\phi_\alpha$'s of MBL systems are not in MATE (so states with significant contribution from them will not thermalize), and in fact some $\phi_\alpha$'s are even approximately orthogonal to $\mathscr{H}_{\mathrm{eq}}$. Since the $\phi_\alpha$'s are more or less product states of eigenstates of local (cell) energy, they lack the long-range entanglement relevant to MITE, and thus almost all fail to satisfy MITE. Yet typical wave functions $\psi$ from an energy shell (as opposed to energy eigenstates) do feature long-range entanglement and thus are in MITE, and a fortiori in MATE. We note finally that while our analysis has focused exclusively on macroscopic systems, there is strong numerical and even experimental evidence~\cite{JS,KTL16} that MITE can be a very good approximation for surprisingly small quantum systems of just a few spins or a few atoms, even in pure states. For such systems MATE is not defined at all in either classical or quantum mechanics. It is also not clear whether (and if so how) the concepts of ``thermodynamics,'' of Boltzmann entropy, and of the second law can be applied to such an isolated microscopic quantum system. \bigskip \noindent\textit{Acknowledgments.} D.~A.~Huse is the Addie and Harold Broitman Member at IAS. J.~L.~Lebowitz was supported in part by the National Science Foundation [grant DMR1104501] and AFOSR [grant F49620-01-0154].
\section{Introduction} When quantum objects (e.g., particles) have the capacity to interact, they have the capacity to become entangled. Any complex quantum system will have many interacting quantum elements, which makes a full understanding of these larger-scale systems challenging. Among other aspects, fully understanding these multi-partite systems requires fully understanding the forms of multi-partite (and multi-particle) entanglement \footnote{A state of $N$ parties is genuinely $N$-partite entangled if and only if the state cannot be obtained using entangled states of $N-1$ or fewer parties.} that can exist between these parties, a problem that has been intensely studied over the last two decades. With the advent of applied quantum information, device-independent secure quantum communication, and quantum computation, understanding and characterizing all forms of entanglement in real media is also an essential capability that must be mastered to make these technologies scalable toward quantum supremacy. Moreover, multipartite entanglement has been shown to give advantages in quantum networking protocols (e.g., quantum secret sharing \cite{PhysRevA.59.1829} and network key distribution \cite{Epping_2017}) as well as in quantum metrology \cite{toth2014quantum}. However, scalably characterizing multipartite entanglement for arbitrary states remains an open challenge. The study of entanglement between two parties has matured to the point that it is possible to experimentally quantify much of the entanglement present \cite{schneeloch2018quantifying,schneeloch2019record} without performing unscalable calculations (NP-hard in general) or an unfeasible determination of the complete quantum state (through tomography). The study of entanglement between three or more parties is still relatively underdeveloped. Though there exist multipartite entanglement monotones for pure states \cite{coffman2000distributed,ou2007monogamy,eisert2001schmidt, SzalayMultiEntMeasures}, adapting these measures for mixed states is challenging for both fundamental and practical reasons. Fundamentally, the main barriers to quantifying (genuine) multipartite entanglement are twofold. The first challenge comes from local operations and classical communication (LOCC) being the primary way to tell whether one state is more entangled than another \footnote{If quantum state $\hat{\rho}$ can be converted into state $\hat{\sigma}$ by LOCC operations, but not the other way around, then $\hat{\rho}$ is more entangled than $\hat{\sigma}$.}. While this works perfectly for ordering bipartite-entangled states, there are pairs of tripartite entangled states that cannot be converted into each other in either direction via LOCC \footnote{In \cite{Contreras2019}, it was shown that in spite of the incomparability via LOCC between different multi-partite entangled states, one can identify the three-party gebit (i.e., the GHZ state) as the maximally multi-partite entangled state of three qubits. This is accomplished by enlarging the definition of a multi-partite entanglement measure to be monotonic not just under LOCC operations, but under all operations that map the set of biseparable states onto itself.}. The second challenge is in developing a unit of multi-partite entanglement. While all bipartite entangled states can be synthesized using copies of two-qubit Bell states (i.e., ebits), the minimum set of entangled states needed to be able to synthesize all multi-partite states is unknown \cite{walter2016multipartite} (and remains an open problem).
It is because of these issues that most multi-partite entanglement monotones are purely geometric, and that resource-based measures of multi-partite entanglement describing the number of elementary multi-partite states required or produced in quantum information protocols need further development. In this paper, we develop a conservative lower bound on the tripartite entanglement of formation (denoted here as $E_{3\text{F}}$) discussed in \cite{SzalayMultiEntMeasures} using quantum entropies obtained from experimental correlations, as this measure has, with further theoretical development, the greatest potential to become a resource-based measure. We show how this strategy works for three systems entangled in either discrete or continuous-variable degrees of freedom (e.g., photon triplets generated in third-order parametric down-conversion \cite{CoronaTripletSPDCPRA2011,Chang3OrdSPDCSupercond_PRX_2020}). As a tripartite entanglement monotone, $E_{3\text{F}}$ is nonzero if and only if the state is genuinely tripartite entangled; it is invariant under local unitary transformations, and is non-increasing under LOCC. What distinguishes $E_{3\text{F}}$ from other tripartite entanglement measures is that it is additive under tensor products of pure states, so that $E_{3\text{F}}$ for $N$ copies of a pure state is $N$ times larger than for a single copy. This additivity is essential for a resource-based measure of entanglement. The ordinary entanglement of formation, $E_{\text{F}}$, quantifies the number of ebits needed to synthesize a bipartite entangled state with LOCC, but it remains to be shown what protocol exemplifies $E_{3\text{F}}$. Even so, $E_{3\text{F}}$ has as its principal unit the three-qubit GHZ state (here called the three-party \emph{gebit} for three-party GHZ-entangled bit). Without an exemplary protocol, we can still say whether some high-dimensional tripartite state contains more tripartite entanglement than a number of gebits (see Appendix E for an example), but identifying the exemplary protocol is an important subject for future research. Whereas multi-partite states can require multiple varieties of maximally entangled states to synthesize through LOCC, any single-valued resource measure of multi-partite entanglement is defined with respect to just one maximally multi-partite entangled state. Such measures may describe the number of copies necessary, but not sufficient, to carry out a quantum information protocol. \section{Foundations and Motivation: The Tripartite Entanglement of Formation} Entanglement is defined by departure from separability. If the joint quantum state of two parties $A$ and $B$ factors out into a product of states (one for $A$ and one for $B$), or is a mixture of such products, then that state is separable. Any state not separable is by definition entangled. The question of entanglement becomes more complex for three or more parties, because there are multiple ways a quantum state of three parties $A$, $B$, and $C$ can factor out into products of states containing fewer parties. As such, any meaningful discussion of entanglement in three or more parties requires pointing out from which classes of separable (or partially-separable) states the entanglement is being defined.
While the joint quantum state of $ABC$ factoring out into a triple product (one for $A$, one for $B$ and one for $C$) is fully separable (and so are mixtures of these products), a joint quantum state $ABC$ factoring out into only two products (say, one for $AB$ and one for $C$) is only bi-separable (as are mixtures of these states). Bi-separable states of $ABC$ can contain entanglement between two parties, but not between three. In the left-hand side of Fig.~1, we show a diagram of the different classes of tri-partite states, and the convex sets pertaining to different classes of separable and bi-separable states. To begin, each class of biseparable states forms a convex set (depicted as one of the three circles in the left diagram) because any mixture of two states in a given class produces a state in that same class. As an example, the class listed as $A\otimes BC$ represents the set of biseparable states that factor into a product of a state of $A$ and a joint state of $BC$. These sets intersect one another, and remarkably, there are states existing in the joint intersection of all three biseparability classes (the intersection given by the Reuleaux (curved) triangle) that are nonetheless not fully separable (the set depicted by the equilateral triangle of the same vertices). While all non-fully-separable states are entangled in some fashion, non-bi-separable states of $ABC$ may or may not be genuinely tri-partite entangled. Measures of tripartite entanglement rule out not just all forms of separability in $ABC$, but also the possibility that the state of $ABC$ can be made out of a mixture of bi-separable states from different classes (here called biseparably-derived). Merely ruling out all forms of biseparability demonstrates full inseparability, but not genuine tri-partite entanglement. As an example of a fully inseparable state that is not tri-partite entangled, consider the three-way mixture $\hat{\rho}_{\text{insep}}$: \begin{align}\label{gogo} \hat{\rho}_{\text{insep}}\!&=\!\frac{1}{3}\Big(\!|\Phi^{+}_{AB}\rangle\langle\Phi^{+}_{AB}|\otimes|0_{C}\rangle\langle0_{C}|\nonumber\\ &\qquad+|\Phi^{+}_{BC}\rangle\langle\Phi^{+}_{BC}|\otimes|0_{A}\rangle\langle0_{A}|\nonumber\\ &\qquad+|\Phi^{+}_{AC}\rangle\langle\Phi^{+}_{AC}|\otimes|0_{B}\rangle\langle0_{B}|\Big)\nonumber\\ &|\Phi^{+}_{AB}\rangle=\frac{1}{\sqrt{2}}\big(|0,0\rangle + |1,1\rangle\big) \end{align} This state is biseparably derived (by construction), while each party is entangled with the other two. Surprisingly, the reduced two-party state (obtained by tracing out any one of the three parties) is also separable, which makes this a state that is entangled, but where no two out of three parties are entangled, and which has no genuine tripartite entanglement. In the right-hand side of the diagram, we show the corresponding classes of states that are separable when one traces out a third party. For example, the set $A\otimes B$ is the set of states $ABC$ that are separable when tracing over $C$ (or when only considering the reduced state $AB$). Here, it is clear that there exist genuinely tri-partite entangled states that are nonetheless separable when considering only two out of three parties (such as the GHZ state shown in \eqref{GHZstate}). For a carefully, concisely, and clearly laid out discussion of the hierarchy of multi-partite entanglement and partial separability, see \cite{SzalayMultiEntMeasures}.
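The separability of the two-party reductions of $\hat{\rho}_{\text{insep}}$ in \eqref{gogo} can be checked directly. The following short Python sketch (our illustration, not from the reference) builds the state with numpy, traces out each party in turn, and applies the Peres--Horodecki (PPT) test, which is necessary and sufficient for separability of two-qubit states; all partial-transpose eigenvalues come out non-negative.
\begin{verbatim}
# Sketch (ours): every two-party reduction of rho_insep is PPT,
# hence separable (two-qubit case).  Requires only numpy.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

def proj(v):
    return np.outer(v, v.conj())

# Bell pairs embedded in the three-qubit space (ordering A x B x C)
phi_AB = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket0)) / np.sqrt(2)
phi_BC = (kron(ket0, ket0, ket0) + kron(ket0, ket1, ket1)) / np.sqrt(2)
phi_AC = (kron(ket0, ket0, ket0) + kron(ket1, ket0, ket1)) / np.sqrt(2)

rho = (proj(phi_AB) + proj(phi_BC) + proj(phi_AC)) / 3.0

def reduce_out(rho, party):           # trace out one qubit (0=A, 1=B, 2=C)
    t = rho.reshape([2] * 6)          # indices (a, b, c, a', b', c')
    return np.trace(t, axis1=party, axis2=party + 3).reshape(4, 4)

def min_ppt_eigenvalue(rho4):         # partial transpose on second qubit
    t = rho4.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(t).min()

for party, name in enumerate("ABC"):
    val = min_ppt_eigenvalue(reduce_out(rho, party))
    print(f"trace out {name}: min partial-transpose eigenvalue = {val:+.4f}")
# All minima are >= 0 (up to round-off), so each two-party reduction
# is PPT and therefore separable.
\end{verbatim}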
\begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{3TangleDiagram}} \caption{Left: Diagram of the tripartite separability classes of quantum states contained within the set of all tri-partite states. The three classes of bi-separable states contain the class of fully-separable states (straight-edged triangle), but there are states that are bi-separable all three ways (curved (Reuleaux) triangle) that are not fully separable. Conversely, there are mixtures of bi-separable states from different classes that are fully inseparable (not within any class), but which are still not genuinely tripartite-entangled. Right: Diagram of the bipartite separability classes within the set of tripartite quantum states. There are tripartite-entangled states whose bipartite subsystems are separable, and any bipartite separability class encompasses the convex hull of two tripartite separability classes.} \end{figure} \subsection{The Tripartite Entanglement of Formation} The entanglement of formation $E_{\text{F}}(\hat{\rho})$ is one of the most ubiquitous measures of two-party entanglement \cite{WootersEntForm1998} because of its desirable properties. Like all entanglement measures, it satisfies the necessary axioms of entanglement monotonicity: it is invariant under local unitary transformations, it decreases monotonically under local operations and classical communication (LOCC), and it is zero for separable states. On top of this, $E_{\text{F}}$ is a faithful measure (being zero if and only if the state is separable), and it is additive under tensor products of pure states. As a resource measure, this means that $E_{\text{F}}$ for a product of different pure states is the sum of $E_{\text{F}}$ for each state individually. Moreover, $E_{\text{F}}$ is a popular measure because it has a simple physical interpretation in an exemplary protocol. With an ebit defined as a maximally entangled two-qubit state (i.e., a Bell state), $E_{\text{F}}(\hat{\rho})$ gives the number of ebits required on average to synthesize copies of $\hat{\rho}$ using LOCC, since LOCC alone cannot create entanglement. In 2015, Szalay \cite{SzalayMultiEntMeasures} showed how one can generalize the entanglement of formation to be a measure defined with respect to any set of classes of separable states within the hierarchy of multipartite states. For a quantum state $\hat{\rho}_{ABC}$ of parties $A$, $B$, and $C$, the tripartite entanglement of formation $E_{3\text{F}}(ABC)$ is defined as: \begin{equation} E_{3\text{F}}(ABC)\equiv \min_{|\psi_{i}\rangle}\Big(\sum_{i}p_{i}\min\{S_{i}(A),S_{i}(B),S_{i}(C)\}\Big), \end{equation} where the first minimum is taken over all pure-state decompositions of $\hat{\rho}_{ABC}$. If the joint state of $ABC$ is already pure, $E_{3\text{F}}$ simplifies to the minimum between $S(A)$, $S(B)$, and $S(C)$, where for example $S(A)$ is the von Neumann entropy of subsystem $A$. For general $N$-partite entanglement, the second minimum would be taken over the entropies of all subsystems formed in a bipartite split. The tripartite entanglement of formation $E_{3\text{F}}$ is invariant under local unitary transformations, is non-increasing under LOCC, and is nonzero if and only if $\hat{\rho}_{ABC}$ is genuinely tripartite entangled (i.e., cannot be derived from mixing product states with kets containing two or fewer parties).
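To make the pure-state form of $E_{3\text{F}}$ concrete, the short Python sketch below (our illustration, not from \cite{SzalayMultiEntMeasures}) evaluates $\min\{S(A),S(B),S(C)\}$ for the standard three-qubit GHZ and W states, which are written out explicitly later in the text; it returns $1$ gebit for $|\operatorname{GHZ}_{3}\rangle$ and $\approx 0.9183$ gebits for $|W_{3}\rangle$, the value quoted in the discussion of Test 1.
\begin{verbatim}
# Sketch (ours): pure-state E_3F = min{S(A), S(B), S(C)} in bits,
# for the three-qubit GHZ and W states.
import numpy as np

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
GHZ = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)
W = (kron(ket0, ket0, ket1) + kron(ket0, ket1, ket0)
     + kron(ket1, ket0, ket0)) / np.sqrt(3)

def single_party_entropy(psi, party):
    """von Neumann entropy (base 2) of one qubit of a 3-qubit pure state."""
    t = psi.reshape(2, 2, 2)                     # indices (a, b, c)
    t = np.moveaxis(t, party, 0).reshape(2, 4)   # selected qubit first
    lam = np.linalg.svd(t, compute_uv=False)**2  # Schmidt coefficients
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def E3F_pure(psi):
    return min(single_party_entropy(psi, p) for p in range(3))

print(f"E_3F(GHZ) = {E3F_pure(GHZ):.4f} gebits")   # 1.0000
print(f"E_3F(W)   = {E3F_pure(W):.4f} gebits")     # 0.9183
\end{verbatim}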
What gives $E_{3\text{F}}$ potential as a resource-based measure is that like the ordinary entanglement of formation $E_{\text{F}}$, it is also additive under tensor products of pure states. This means that when using the three-party gebit as a principal unit of this aspect of three-party entanglement, $E_{3\text{F}}(\hat{\rho})$ gives the number of gebits that together would have the same amount of tripartite entanglement as $\hat{\rho}$. In contrast, the most ubiquitous measure of tripartite entanglement, the three-tangle \cite{coffman2000distributed} is not additive on products of pure states, and can be zero for some tripartite-entangled states, due to its being a measure of residual entanglement \footnote{Residual entanglement is the difference between the entanglement shared between a party $A$ and a group of parties $(B_{1},...,B_{N})$, and the entanglements $A$ shares with each subsystem of $B$ individually. As a concept for measuring tri-partite entanglement, it is only well-defined for pure states, as there are biseparably-derived mixed states (e.g., \eqref{gogo}) whose residual entanglement can be above zero.}. \section{Bounding Tripartite Entanglement} To obtain a conservative bound to $E_{3F}$, our approach has two steps. We first develop a relation between $E_{3F}$ and quantum conditional entropies, and then show how experimental correlations can bound these quantum entropies, bounding $E_{3F}$ in turn. To begin, we point out that all quantum states of three parties $A$, $B$, and $C$ that are not genuinely tri-partite entangled must obey the relation: \begin{equation}\label{QuantEntWitness} S(A|BC) + S(B|AC) + S(C|AB) \geq -2\log(D_{\mathrm{max}}), \end{equation} where $D_{\text{max}}=\max\{D_{A},D_{B},D_{C}\}$ is the maximum dimension of parties $A$, $B$, and $C$, and $S(A|BC)=S(ABC)-S(BC)$ is the Von Neumann conditional entropy of subsystem $A$ on joint subsystem $BC$, of the tripartite state given by density matrix $\hat{\rho}_{ABC}$. See Appendix for proof. We prove that the amount of violation of this inequality both witnesses genuine tripartite entanglement, and provides a lower limit to $E_{3\text{F}}$. To prove this relation bounds $E_{3\text{F}}$ from below, we begin with the amount of violation $V$ of \eqref{QuantEntWitness}: \begin{equation}\label{E3min} V\equiv -S(A|BC) - S(B|AC) - S(C|AB) -2\log(D_{\text{max}})\leq 0. \end{equation} Next, we use the fact that the quantum entropy (including conditional entropy) is concave, so that $V$ cannot decrease for any pure-state decomposition: \begin{equation} V\!\!\leq\!\sum_{|\psi\rangle_{i}}\!p_{i}\Big(\!-S_{i}(\!A|BC) - S_{i}(\!B|AC) - S_{i}(\!C|AB)\!\Big) -2\log(\!D_{\text{max}}) \end{equation} and has no smaller value even for the minimal pure state decomposition: \begin{equation} V\leq \min_{|\psi\rangle_{i}}\Big\{\sum_{i}p_{i}\Big(S_{i}(A) + S_{i}(B) + S_{i}(C)\Big)\Big\} -2\log(D_{\text{max}}), \end{equation} where for a pure state $|\psi\rangle_{ABC}$, $-S(A|BC) = S(A)$. Between $S_{i}(A)$, $S_{i}(B)$, and $S_{i}(C)$, the two largest of these three quantities are still each less than or equal to $\log(D_{\text{max}})$. The factors of $\log(D_{\text{max}})$ cancel out in $V$ to give us the result \begin{equation} V\!\!\leq \!\min_{|\psi\rangle_{i}}\!\Big\{\!\!\sum_{i}\!p_{i}\!\Big(\!\!\min\{\!S_{i}(A),\!S_{i}(B),\!S_{i}(C)\!\}\!\!\Big)\!\!\Big\}=E_{3\text{F}}(ABC). 
\end{equation} As large as the difference between $(S_{i}(A),S_{i}(B),S_{i}(C))$ and $\log(D_{\text{max}})$ might be, this inequality is tight, in that there are bi-separable entangled states for which the two largest of $(S_{i}(A),S_{i}(B),S_{i}(C))$ actually equal $\log(D_{\text{max}})$, such as when two parties are in a maximally-entangled Bell state, while the third stands alone. Having shown how quantum conditional entropies provide a lower bound to $E_{3\text{F}}$, we next show how experimental correlations can bound these quantum entropies, which in turn, bound $E_{3\text{F}}$. \subsection{Discrete observables} Given a $D$-level quantum system, any pair of observables $\hat{Q}$ and $\hat{R}$ with respective sets of eigenstates $\{|q_{i}\rangle\}_{i=1}^{D}$ and $\{|r_{j}\rangle\}_{j=1}^{D}$ obeys the entropic uncertainty relation \footnote{For a full review of entropic uncertainty relations and their applications, see \cite{ColesRMP}.}: \begin{align}\label{entUnvRel} H(Q) + H(R) &\geq \log(\Omega) + S(\hat\rho),\\ \Omega &\equiv \min_{i,j}\Bigg(\frac{1}{|\langle q_{i}|r_{j}\rangle|^{2}}\Bigg),\nonumber \end{align} where $H(Q)$ is the Shannon entropy of the measurement probabilities of the outcomes of $\hat{Q}$ given by $\text{Tr}\big[\hat{\rho}|q_{i}\rangle\langle q_{i}|\big]$: \begin{equation} H(Q)=-\sum_{i}\text{Tr}\big[\hat{\rho}|q_{i}\rangle\langle q_{i}|\big]\log\Big(\text{Tr}\big[\hat{\rho}|q_{i}\rangle\langle q_{i}|\big]\Big), \end{equation} and $S(\hat{\rho})$ is the von Neumann entropy of the quantum state $\hat{\rho}$. The value $\Omega$ ranges between $1$ and $D$, where the minimum value is obtained when $\hat{Q}$ and $\hat{R}$ commute (and there is a simultaneously determined pair of eigenstates where the inner product $\langle q_{i}|r_{j}\rangle=1$). The maximum value is attained when $\hat{Q}$ and $\hat{R}$ are maximally uncertain with respect to one another (and $|\langle q_{i}|r_{j}\rangle|\rightarrow1/\sqrt{D}$ for all $i$ and $j$). Here and throughout this paper, all logarithms are taken to be base-$2$, since we measure information in bits. The entropic uncertainty principle can be used to place an upper limit on the quantum entropy of that system. To bound quantum \emph{conditional} entropy, we can use the uncertainty principle in the presence of quantum memory \cite{Berta2010} adapted for classical entropies and for three parties: \begin{equation}\label{DiscEntCorr} H(Q_{A}|Q_{B},\!Q_{C}) + H(R_{A}|R_{B},\!R_{C})\!\geq\! \log(\Omega) \!+\! S(\!A|BC). \end{equation} Expressed differently, this inequality reads: \begin{equation} -S(\!A|BC)\!\geq\! \log(\Omega)\!-\!\Big(\!\!H(Q_{A}|Q_{B},\!Q_{C}) \!+\! H(R_{A}|R_{B},\!R_{C})\!\!\Big). \end{equation} Thus, using the experimental correlations between $(Q_{A},Q_{B},Q_{C})$ and $(R_{A},R_{B},R_{C})$, one can find minimum values for $-S(A|BC)$, $-S(B|AC)$ and $-S(C|AB)$, which, when added together with $-2\log(D_{\text{max}})$, gives a minimum value for $E_{3\text{F}}$. For pure states, one can bound $E_{3\text{F}}$ more tightly using just the minimum value between $-S(A|BC)$, $-S(B|AC)$ and $-S(C|AB)$. As long as the correlations are strong enough that the lower bounds to each of these quantities are greater than zero, the minimum of these bounds is a lower bound to $E_{3\text{F}}$. \subsection{Continuous observables} For continuous observables, we can bound the quantum conditional entropies like $S(A|BC)$ in a fashion similar to the discrete case.
Using the methods in \cite{schneeloch2018quantifying}, one can derive a corresponding entropy constraint for continuous observables that are Fourier conjugates of one another (e.g., position/momentum, energy/time, field quadratures, etc). For position $x$ and momentum $k=p/\hbar$, we have the relation \begin{equation} h(x_{A}|x_{B},x_{C}) + h(k_{A}|k_{B},k_{C})\geq \log(2\pi) + S(\!A|BC), \end{equation} where for example, $h(x_{A}|x_{B},x_{C})$ is the continuous Shannon entropy of probability density $\rho(x_{A},x_{B},x_{C})$ conditioned on variables $x_{B}$ and $x_{C}$. To use a relation like this experimentally, one can use the discrete probabilities that come from coarse-graining $x$ and $k$ into bins of size $\Delta x$ and $\Delta k$, respectively. Because coarse-graining cannot decrease entropy, we can use these discrete probabilities in the relation: \begin{equation} H\!(\!X_{\!A}|X_{\!B},\!X_{\!C}\!) + H(\!K_{\!A}|K_{\!B},\!K_{\!C})\!\geq\! \log\!\!\Big(\!\frac{2\pi}{\Delta x\Delta k}\!\Big) + S(\!A|BC) \end{equation} to establish a minimum value to $-S(A|BC)$. However, because the relation \eqref{QuantEntWitness} between these quantum entropies and $E_{3\text{F}}$ explicitly includes the dimension $D$, we can only bound $E_{3\text{F}}$ from below for continuous-variable systems if the underlying state is known to be pure. It may be that tighter bounds exist if $S(ABC)$ is known, since a small deviation from a pure state should result in a small change in the threshold for tripartite entanglement, but more research is needed. This relation is still useful in theoretical tests where the various entanglements of formation of an infinite-dimensional pure tripartite state cannot yet be computed (which is almost every case). In Section III.D, we show how we can estimate $E_{3\text{F}}$ for the triple-Gaussian wavefunction of photon-triplets produced in third-order spontaneous parametric down-conversion. \subsection{Test1: The GHZ-Werner state and W state} In order to test the strength of our bound on $E_{3\text{F}}$, we evaluated it for a GHZ-Werner state of $3$-qubits defined as: \begin{equation} \hat{\rho}_{\text{GW}}\equiv p\;|\operatorname{GHZ}_{3}\rangle\langle \operatorname{GHZ}_{3}| + (1-p) \hat{\rho}_{\text{MM}} \end{equation} where \begin{equation}\label{GHZstate} |\operatorname
{GHZ}_{3}\rangle\equiv \frac{1}{\sqrt{2}}\Big(|0\rangle^{\otimes 3} + |1\rangle^{\otimes 3}\Big), \end{equation} $p$ is the GHZ mixing fraction between zero and one, and $\hat{\rho}_{\text{MM}}$ is the maximally mixed state of three qubits. Because the GHZ-Werner state has such a simple form, we were able to plot as a function of $p$ both the exact value of $V$ (the lower bound to $E_{3\text{F}}$ from \eqref{E3min}) and the lower bound to $V$ using \eqref{DiscEntCorr} with observables $\hat{Q}$ and $\hat{R}$ given as the Pauli $\hat{\sigma}_{x}$ and $\hat{\sigma}_{z}$. \begin{figure}[t] \centerline{\includegraphics[width=0.85\columnwidth]{E3GHZ}} \caption{Plot of $V$, the minimum value of $E_{3\text{F}}$ of the pure state fraction $p$. The red and blue dashed curves give the exact values of $V$ for the W-Werner state and the GHZ-Werner state, respectively, while the purple and green solid curves give the respective lower bounds to $V$ using the probabilities obtained from $\sigma_{x}$ and $\sigma_{z}$ correlations. The green shaded region where $p>0.9406$ is where three-party entanglement is quantified with the GHZ-Werner state.} \end{figure} Here, we see for $p$ greater than $0.94$, we reach the threshold for quantifying tripartite entanglement, where the largest lower bound to $E_{3\text{F}}$ is unity at $p=1$, which is also the exact value of $E_{3\text{F}}$ at that value of $p$. Though there are smaller values of $p$ sufficient to witness tripartite entanglement (e.g., using the fidelity to the GHZ state shows tripartite entanglement for $p>\frac{3}{7}$ \cite{guhne2009entanglement}), these fidelity-based witnesses did not (at the time) give any information about how much multipartite entanglement exists in the system (though we show one such strategy in Section IV.A.). Actually quantifying tripartite entanglement gives us more information about the resources present. In addition to the GHZ-Werner state, we also examined how our lower bound to $E_{3\text{F}}$ behaves for the three-qubit W-Werner state: \begin{equation} \hat{\rho}_{\text{WW}}\equiv p\;|W_{3}\rangle\langle W_{3}| + (1-p) \hat{\rho}_{\text{MM}} \end{equation} where \begin{equation} |W_{3}\rangle\equiv \frac{1}{\sqrt{3}}\Big(|0,0,1\rangle +|0,1,0\rangle + |1,0,0\rangle\Big), \end{equation} $p$ is the $W$ mixing fraction between zero and one, and $\hat{\rho}_{\text{MM}}$ is the maximally mixed state of three qubits. For all values of $p$, we found no violation. However, if we instead evaluate the bound directly by calculating the quantum conditional entropies in \eqref{E3min} (that may be determined through quantum state tomography), we can still quantify as much $E_{3\text{F}}$ as $0.7549$ gebits (where explicit calculation of $E_{3\text{F}}$ for $|W\rangle$ gives $\approx0.9183$ gebits). \subsection{Test 2: High-dimensional tri-partite entanglement of photon triplets} \subsubsection{Spatial entanglement} For photon triplets generated in third-order degenerate collinear spontaneous parametric down-conversion (TODC-SPDC), the momentum wavefunction in one spatial dimension is given by: \begin{align} \psi&(k_{1},k_{2},k_{3})=N \psi_{\text{pump}}(k_{1}+k_{2}+k_{3})\times\nonumber\\ &\times \operatorname{sinc}\Big(\frac{3 L_{z}}{4 k_{p}}\big((k_{1}+k_{2})^{2} + (k_{2}+k_{3})^{2} + (k_{3} + k_{1})^{2}\big)\Big). \end{align} where here, we simplify $(\vec{k}_{1})_{x}$ as $k_{1}$ to simplify notation. For a full derivation, see Appendix B. 
The sum of squares of sums of momentum inside the $\operatorname{sinc}$ function is a quadratic form, and can be expressed in terms of rotated orthogonal momentum coordinates: \begin{align} (k_{1}&+k_{2})^{2} + (k_{2}+k_{3})^{2} + (k_{3} + k_{1})^{2}\nonumber\\ &= k_{u}^{2} + k_{v}^{2} + 4 k_{w}^{2}, \end{align} where $k_{u}=\frac{1}{\sqrt{6}}(2 k_{1}-k_{2}-k_{3})$. $k_{v}=\frac{1}{\sqrt{2}}(k_{2}-k_{3})$, and $k_{w}=\frac{1}{\sqrt{3}}(k_{1}+k_{2}+k_{3})$. Since this $\operatorname{sinc}$ function approximately factors into a product of $\operatorname{sinc}$ functions for each orthogonal component, and because we can approximate the $\operatorname{sinc}$ function with a Gaussian function \cite{Schneeloch_SPDC_Reference_2016} (as well as the pump), we eventually arrive at the following form for the triple-Gaussian wavefunction for third-order SPDC: \begin{align} \psi(k_{u},k_{v},k_{w})=N &\operatorname{exp}[-\frac{8a}{9}k_{u}^{2}]\times \operatorname{exp}[-\frac{8a}{9}k_{v}^{2}]\nonumber\\ &\times \operatorname{exp}[-(3 \sigma_{p}^{2} + \frac{32 a}{9})k_{w}^{2}] \end{align} where $a\equiv\frac{3 L_{z}\lambda_{p}}{8\pi n_{p}}$. Here: $L_{z}$ is the length of the nonlinear medium; $\lambda_{p}$ is the wavelength of the pump light; $n_{p}$ is the index of refraction of the nonlinear medium at the pump wavelength; $\sigma_{p}$ is one quarter of the $1/e^{2}$ beam diameter of the pump beam, and $N$ is a normalization constant. The spatial multipartite entanglement in third-order SPDC is a consequence of strong momentum correlations due to momentum conservation with the pump, along with near-perfect position correlations due to the triphoton being created at one point in space. Because the correlations are between three parties, the uncertainty principle actually forbids near-perfect correlations \footnote{Near-perfect correlations in this context is where knowledge of one variable allows the determination of all others to a high degree of precision. The probability density of three nearly-perfectly-correlated variables approximately traces out a curve in that three-dimensional space.} in both position and momentum (See Appendix D for details) even though this is not the case for two parties. \begin{figure*}[t] \centerline{\includegraphics[width=0.9\textwidth]{E3CV5}} \caption{a) Plot of the minimum value of the tripartite entanglement of formation $E_{3\text{F}}$ (obtained from conditional position and momentum entropies of the Triple Gaussian approximation) of an entangled photon triplet generated in degenerate, collinear third-order SPDC. While the solid curve gives the exact lower bound to $E_{3\text{F}}$, the dashed curve is the approximation $E_{3\text{F}}\geq \log(\frac{e}{2}\frac{\sigma(x_{A}-x_{B})}{\sigma(x_{A})})+0.2075$, where the constant is independent of $a$ and $\sigma_{p}$ (see Appendix D1 for derivation); $\sigma(x_{A})=\sqrt{\sigma_{p}^{2}+\frac{16a}{9}}$, and $\sigma(x_{A}-x_{B})=\sqrt{\frac{16 a}{9}}$. Vertical guidelines indicate the pump widths of the zero intercepts of $E_{3\text{F}}$ and its approximation, followed by the pump widths $0.0370$ mm corresponding to $E_{3\text{F}}=1$ gebit, and $1.0$ mm corresponding to $E_{3\text{F}}=5.6000$ gebits, respectively. b) Plot of $E_{3\text{F}}$ in the frequency-time degree of freedom for the photon triplet source as a function of the pump bandwidth parameter $\sigma_{\omega_{p}}$. 
The dashed trendline (see Appendix D for derivation) is the bound $E_{3\text{F}}\geq -\log(e\sigma(t_{A}-t_{B})\sigma(\omega_{A}+\omega_{B}+\omega_{C}))+0.207519$, where the constant is independent of $\sigma_{\omega_{p}}$ and $b$; $\sigma(t_{A}-t_{B})=\sqrt{\frac{16b}{9}}$; $\sigma(\omega_{A}+\omega_{B}+\omega_{C})=1/\sqrt{4(\frac{8b}{27}+\frac{1}{4\sigma_{\omega_{p}}^{2}})}$.Vertical guidelines indicate $\sigma_{\omega_{p}}=1.94$GHz, and $27.3$THz for pump bandwidth of a He-Cd gas laser, and the marginal bandwidth (nearly equal to the zero intercept), respectively. The horizontal guideline gives the value $E_{3\text{F}}$ of $13.37$ gebits.} \end{figure*} Using the triple-Gaussian wavefunction, and its Fourier transform for both position and momentum statistics, we are able to evaluate all of the entropies necessary to place a lower limit to $E_{3\text{F}}$. As an example, we consider a 10 mm-long crystal of Aluminum Nitride \cite{LuChi32018}, a nonlinear material with a bandgap of $\approx6$eV (transparent from about $200$nm to $>\!15\mu$m), and a third-order nonlinear susceptibility ($\chi^{(3)}$) of approximately $160\text{pm}^{2}/V^{2}$ (or nonlinear index of about $2.3\times10^{-19}\text{m}^{2}/W$). If we assume a $325$nm pump (with index $n_{p}=2.247$) and beam radius $\sigma_{p}=1.0$mm, then we find $E_{3\text{F}}$ is no less than approximately $5.6000$ gebits, or more tripartite entanglement than can be supported on a 16-qubit state space (See Fig.~3a for plot). \subsubsection{Energy-time entanglement} In the limit of a narrowband pump, there is even more tripartite entanglement to be had in third-order SPDC. This is especially fortunate, as single-mode nonlinear optical waveguides can greatly enhance the efficiency of photon triplet generation by maintaining a high intensity over the entire length of the nonlinear medium. The joint frequency amplitude is given by the integral: \begin{equation} \psi(\omega_{1}\!,\omega_{2},\omega_{3})\!=\!N\!\!\int \!d\omega_{p}\psi_{\text{pump}}(\omega_{p})\delta(\Delta\omega) \operatorname{sinc}\Big(\frac{L_{z}\Delta k_{z}}{2}\Big). \end{equation} Expanding, $\Delta k_{z}$ to second order in frequency, and simplifying in the same fashion as was used to obtain the spatial wavefunction, we obtain: \begin{align} \psi(\omega_{1},\omega_{2},\omega_{3})&=N\psi_{\text{pump}}(\omega_{w}\sqrt{3}) \times\nonumber\\ &\times \operatorname{sinc}\Big(\frac{L_{z}\kappa}{4}\big(\omega_{u}^{2}+\omega_{v}^{2} + \omega_{w}^{2}\big)\Big). \end{align} where $\omega_{u}=\frac{1}{\sqrt{6}}(2 \omega_{1}-\omega_{2}-\omega_{3})$. $\omega_{v}=\frac{1}{\sqrt{2}}(\omega_{2}-\omega_{3})$, $\omega_{w}=\frac{1}{\sqrt{3}}(\omega_{1}+\omega_{2}+\omega_{3}-\omega_{p0})$, and $\omega_{p0}$ is a constant equal to the center frequency of the pump light. In addition, $b\equiv L_{z}\kappa/4$ and $\kappa$ is the group velocity dispersion $(d^{2}k/d\omega^{2})$ at one third of the pump frequency. 
As with the case in spatial entanglement, we use these orthogonal frequency coordinates and approximate the joint frequency amplitude as a product of three Gaussian functions, one in each coordinate: \begin{align} \psi(\omega_{u},\omega_{v},\omega_{w})=N &\operatorname{exp}[-\frac{8b}{9}\omega_{u}^{2}]\times \operatorname{exp}[-\frac{8b}{9}\omega_{v}^{2}]\nonumber\\ &\times \operatorname{exp}[-(\frac{3}{4\sigma_{\omega_{p}}^{2}} + \frac{8 b}{9})\omega_{w}^{2}] \end{align} Using this joint frequency amplitude to describe the light in the same setup where we previously quantified spatial entanglement, we find (see Fig.~3b) that there is just as much potential for very large amounts of three-party entanglement as has been shown for two-party entanglement in the energy-time degree of freedom of ordinary second-order SPDC \cite{schneeloch2018quantifying}. Using the same experimental example as we did for spatial entanglement, and assuming the relatively broad bandwidth of a 325nm He-Cd gas laser $\sigma(\omega_{p})\approx 1.9$ GHz (and $\kappa=1.01\times10^{25}s^{2}/m$), we predict approximately $13.37$ gebits, or more tripartite entanglement than a $40$-qubit state can support. The amount is only limited by how narrow one can make the bandwidth of the pump light, and pump sources with MHz- or even kHz-scale bandwidths are readily available with current technology. As with many other photon triplet sources, increasing the triplet generation rate to make these sources more practical is a subject of ongoing research \cite{corona2011experimental,huang2013generation}. Knowing how much tripartite entanglement can be present in these degrees of freedom, we see that there is much to be excited about in the resources contained in these states. \section{Discussion: Challenges in quantifying multipartite entanglement} Properly quantifying tripartite entanglement for mixed states is a formidable challenge for a number of reasons. For two parties, entanglement can be quantified in terms of ebits (or Bell pairs), which, with LOCC, can be used to synthesize any two-party state. For three parties, however, there is no known Minimum Reversible Entanglement Generating Set (MREGS) of entangled states from which all three-party states can be synthesized via LOCC. Even including Bell pairs for every bipartite subsystem and for every bipartite split of the tripartite system (i.e., the set of maximally entangled states with respect to every separability class) is still not sufficient to synthesize every tripartite state \cite{acin2003structure}. While this set of states can generate any tripartite state within any bi-separable class via LOCC, there are fully inseparable states that cannot be synthesized in this way. In addition, the practical aspects of scalably quantifying multi-partite entanglement present a greater challenge than for bipartite entanglement. The difficulty of computing multi-partite entanglement measures is at least NP-hard (the difficulty of determining entanglement in general), and efficient methods to quantify multi-partite entanglement of mixed states in high-dimensional (continuous-variable) degrees of freedom remain to be developed. That said, we have shown how experimental correlations can be used to bound the amount of tripartite entanglement (as measured by $E_{3\text{F}}$) from below. We have shown its effectiveness both for low-dimensional systems, and for high-dimensional continuous-variable systems.
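As a concrete illustration of the continuous-variable procedure, the following sketch (an added numerical cross-check, not taken from an existing code base) builds the covariance matrices implied by the triple-Gaussian spatial approximation with the aluminum-nitride parameters quoted above, computes the Gaussian conditional position and momentum entropies, and evaluates the pure-state lower bound on $E_{3\text{F}}$:
\begin{verbatim}
import numpy as np

# Crystal and pump parameters from the aluminum-nitride example (SI units).
L_z, lam_p, n_p, sigma_p = 10e-3, 325e-9, 2.247, 1.0e-3
a = 3.0 * L_z * lam_p / (8.0 * np.pi * n_p)

# psi(k_u, k_v, k_w) ~ exp(-alpha_m k_m^2): Var(k_m) = 1/(4 alpha_m) and, for a
# pure unchirped Gaussian, Var(x_m) = alpha_m.
alpha = np.array([8.0 * a / 9.0, 8.0 * a / 9.0, 3.0 * sigma_p**2 + 32.0 * a / 9.0])

# Orthogonal rotation (k_u, k_v, k_w) = R (k_1, k_2, k_3).
R = np.vstack([np.array([2.0, -1.0, -1.0]) / np.sqrt(6.0),
               np.array([0.0,  1.0, -1.0]) / np.sqrt(2.0),
               np.array([1.0,  1.0,  1.0]) / np.sqrt(3.0)])
Sigma_x = R.T @ np.diag(alpha) @ R
Sigma_k = R.T @ np.diag(1.0 / (4.0 * alpha)) @ R

def cond_entropy(Sigma, i):
    # h(u_i | rest) = 0.5 log2(2 pi e sigma^2_{i|rest}) for a Gaussian density,
    # with sigma^2_{i|rest} = 1 / (Sigma^{-1})_{ii}.
    return 0.5 * np.log2(2.0 * np.pi * np.e / np.linalg.inv(Sigma)[i, i])

bounds = [np.log2(2.0 * np.pi) - cond_entropy(Sigma_x, i) - cond_entropy(Sigma_k, i)
          for i in range(3)]
print("pure-state lower bound on E_3F (gebits):", min(bounds))
\end{verbatim}
With these parameters the bound comes out close to the $\approx 5.6$ gebit figure quoted for the spatial degree of freedom above.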
An intriguing open question is whether similar entropic bounds exist for $N$-partite generalizations of the entanglement of formation ($E_{N\text{F}}$ for $N>3$). Tripartite entanglement is special because there are only three bipartite splits of three parties. For $N$ parties, there are $(2^{N-1}-1)$ bipartite splits, making the number of measurements grow exponentially with $N$. However, one can use entropic correlations to witness multi-partite entanglement in a scalable way (though the fidelity to the $N$-partite GHZ state remains optimal \cite{guhne2009entanglement}) if the correlations are strong enough. For $N$ parties correlated in observables $\hat{Q}$ and $\hat{R}$, violating the inequality \begin{equation}\label{StrongMultiRel} \sum_{i=1}^{N}\!\!\Big(\!\!H(Q_{i}|Q_{i+1}) \!+\! H(R_{i}|R_{i+1},...,R_{i+(N-1)})\!\Big)\!\geq\! 2\log(\Omega), \end{equation} (where the sum wraps around $N$ to $1$ again) will witness genuine $N$-partite entanglement, and is maximally violated with an $N$-partite GHZ state (See Appendix C for details). Better still, if the state to be measured is sufficiently close to a target state (e.g., the $N$-partite GHZ state), one can measure a small number of density matrix elements to efficiently quantify as well as qualify its multi-partite entanglement. \subsection{Quantifying multi-partite entanglement close to target states with individual density matrix elements} Although our paper shows how experimental correlations can quantify $E_{3F}$ with minimal prior knowledge of the state, we have recently found in the literature that there exist highly-efficient solutions to quantify entanglement when the state being measured is close to a known state. While a sufficiently high fidelity to a particular multi-partite-entangled target state witnesses multi-partite entanglement \cite{guhne2009entanglement}, one can use similar custom-tailored bounds to quantify the multi-partite entanglement as well. In \cite{WuMultiEnt}, Wu \emph{et~al.} showed how specific elements of a density matrix can be used to efficiently bound from below a measure of genuine $N$-partite entanglement based on the linear entropy $S_{\text{L}}(\hat{\rho})=2[1-\text{Tr}[\hat{\rho}^{2}]]$. Their measure, given here as $E_{N\text{L}}$, is defined as: \begin{equation} E_{N\text{L}}(\hat{\rho})=\min_{|\psi\rangle}\sum_{i}p_{i}\min_{\gamma}\sqrt{S_{L}(\hat{\rho}_{\gamma i})}, \end{equation} where $\gamma$ is an index running over all possible subsystems formed by tracing over one side of a bipartite split, and $\hat{\rho}_{\gamma i}$ is the density operator of the $\gamma$ subsystem of the $i$-th pure state in the decomposition of the full $N$-party state. They also point out how $E_{N\text{L}}(\hat{\rho})$ is related to $E_{N\text{F}}(\hat{\rho})$ through convexity arguments and relations between different entropy functions. The function $-\log(1-x^{2}/2)$ is a convex function of $x$. The quantum collision entropy $S_{C}(\hat{\rho})$ is $-\log_{2}(\text{Tr}[\hat{\rho}^{2}])$, and is a direct function of the linear entropy: \begin{equation} S_{C}(\hat{\rho})=-\log_{2}\Big(1-\frac{\sqrt{S_{L}(\hat{\rho})}^{2}}{2}\Big). \end{equation} Because of this convexity, and the fact that $E_{N\text{L}}(\hat{\rho})$ is a mean value of $\sqrt{S_{\text{L}}}$, we can say the following.
Given a bound $\mathcal{B}\leq E_{N\text{L}}$, it follows that: \begin{equation} -\log_{2}\Big(1-\frac{\mathcal{B}^{2}}{2}\Big)\leq E_{N\!C}(\hat{\rho})\leq E_{N\text{F}}(\hat{\rho}), \end{equation} where \begin{equation} E_{N\!C}(\hat{\rho})=\min_{|\psi\rangle}\sum_{i}p_{i}\min_{\gamma}S_{C}(\hat{\rho}_{\gamma i}), \end{equation} and the last inequality bounding $E_{N\text{F}}$ comes from $S_{C}(\hat{\rho})$ being less than or equal to the von Neumann entropy of the same density matrix. To construct this bound $\mathcal{B}$, we refer the reader to \cite{WuMultiEnt} and \cite{MaEntBound}, but here we give the specific example well-adapted to the $N$-qubit GHZ state: \begin{align} \mathcal{B}&=2|\langle 0^{\otimes N}|\hat{\rho}|1^{\otimes N}\rangle|\nonumber\\ &-\!\!\sum_{q=1}^{2^{N}-2}\!\!\!\sqrt{\langle q|\hat{\rho}|q\rangle\langle 2^{N}\!-\!1-q|\hat{\rho}|2^{N}\!-\!1-q\rangle}.\label{BoundB} \end{align} To explain the compact notation with an example, $q=5$ expressed in binary is $101$, and the four-qubit ket $|q\rangle$ for $q=5$ is given by $|0,1,0,1\rangle$. This bound $\mathcal{B}$ is as tight as the fidelity-based witnesses in \cite{guhne2009entanglement}, having an equal tolerance for noise in the GHZ-Werner state for $N$ qubits (e.g., $p>3/7$ implies $\mathcal{B}>0$ and $E_{3\text{F}}>0$ for the three-qubit $\hat{\rho}_{\text{GW}}$.), while providing the additional information of a minimum nonzero value for the entanglement measure. However, this bound works well only for states close to the $N$-qubit GHZ state. If one instead inputs the W state, the bound $\mathcal{B}<0$ and no multi-partite entanglement could be quantified (though bounds constructed especially for states close to the W state work well). Interestingly, the bound $\mathcal{B}$ adapted to states close to the $N$-qubit GHZ state can be further improved without losing its robustness to noise. The second term in \eqref{BoundB} is expressible as an inner product between two vectors. Using the Cauchy-Schwarz inequality, and the fact that summing over all probabilities gives unity leaves us with the result: \begin{align} \mathcal{B}&\geq2|\langle 0^{\otimes N}|\hat{\rho}|1^{\otimes N}\rangle|\nonumber\\ &\,\,\,+|\langle 0^{\otimes N}|\hat{\rho}|0^{\otimes N}\rangle| +|\langle 1^{\otimes N}|\hat{\rho}|1^{\otimes N}\rangle| -1 \end{align} In other words, by summing the magnitudes of just the four corner-elements of the $N$-qubit density matrix, any sum greater than unity witnesses genuine $N$-partite entanglement, and the amount by which it is greater than unity is a lower bound for $\mathcal{B}$, which in turn gives a lower bound for $E_{N\text{F}}$. This restricted bound is equally robust to noise, and is equally optimized for states close to the GHZ state (and even maintains the critical value $p\geq3/7$ for the $\text{GHZ}_{3}$ - Werner state). For high-dimensional quantum systems, where maximally correlated systems still have a comparatively large number of significant elements of the density matrix, research into efficiently quantifying multipartite entanglement is ongoing. \section{Conclusion} In our investigations, we have developed a relation that allows us to bound from below the tripartite entanglement of formation $E_{3F}$, both for general 3-qudit systems (where each system can have independent dimension), and for pure-state continuous-variable systems. 
In doing so, we found that photon triplets generated in third-order spontaneous parametric down-conversion have several three-party gebits (i.e., 3-qubit GHZ states) worth of tripartite entanglement both in their spatial and especially in their energy-time degrees of freedom. In the process, we have come across fundamental challenges in quantifying multi-partite entanglement, where a single number is insufficient to characterize all forms of entanglement present in a multi-partite state. However, this may yet be resolved. While gebits and all bipartite ebits are not sufficient to synthesize every tripartite state, they also cannot be converted into each other by LOCC, and represent resources in their own right. Further developing $E_{3F}$ and other resource measures of multi-partite entanglement may shed much light on this issue. \begin{acknowledgments} We gratefully acknowledge support from the Air Force Office of Scientific Research LRIR 18RICOR028 and LRIR 18RICOR079, as well as insightful discussions with Ms. Laura Wessing, Dr. A. Matthew Smith, and Dr. Dimitri Uskov. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of AFRL. \end{acknowledgments}
\section{Introduction} \begin{figure}[tb!] \centering \includegraphics[scale=0.8]{Others/Loss} \caption{Comparison between the 0-1 loss, LSE and PD-Rank loss. On the horizontal axis is the value difference between two items, according to the observed label. If this value is positive, the current ranking respects this label; otherwise, the order is incorrect according to the label. While the 0-1 loss just distinguishes between correct and incorrect ordering, both approximations penalize according to the ``degree of correctness''. However, we note that for values closer to and less than 0, the proposed PD-Rank loss is a tighter approximation to the 0-1 loss.}\label{fig:loss} \end{figure} If early applications of the ranking problem were focused on voting scenarios \cite{article:Borda_1781,article:Condorcet_1785} and later on ranking of a small set of items where experts consider weighted ratios between pairs of items \cite{article:PCM_EM}, modern needs call for the ranking of a large number of items with noisy labels and missing information, as it is not feasible to retrieve all possible pairwise comparisons for these large scale scenarios. We can find a wide variety of applications, ranging from ranking players in different kinds of competitions \cite{article:PCM_incomplete_tennis}, document retrieval \cite{article:Doc_retrieval}, recommender systems \cite{article:recommender_sys}, and selection of patients \cite{article:kidney_transplant}, to mapping urban safety perception \cite{article:Mapping_Urban_Perception}. In order to rank a set of items, we need predefined labels ranking subsets of the whole set. Those labels are incomplete, possibly noisy orderings between groups of elements or just pairs. Even if the latter contains less information and requires more observations, it is appealing as it is easier and faster to obtain, and leads to fewer human assessment errors \cite{article:human_pair_class}. Obtaining labels from pairwise comparisons is thus an interesting data collection mechanism for the case where we need to obtain labels from humans, or the number of items to rank is too large. In this sense, our work is distinguished from others considering access to a ranking of larger subsets of orderings \cite{article:subset_Topk,article:subset_fotakis2021aggregating}. Furthermore, the labels can be obtained from one single assessor or from several annotators. The latter is useful since in many cases it is impractical or even impossible to get all comparisons from a single annotator. This motivates tools such as the Amazon Mechanical Turk, where large sets of pairwise comparisons from different annotators can be obtained at a low cost. This, however, brings new challenges: different annotators will classify the same pair with different labels (either because it is a subjective comparison or there are inevitable assessment errors), meaning that the labels are subject to noise. This gives rise to the so-called \textit{rank aggregation} problem, where we are presented with several rankings of the same items and aim at building a unique one. When considering noisy labels, a question arises in how to include a noise model in the ranking method. A widespread view is to parametrically model the relationship between noise and the true scores of items, e.g., by using a random utility model (RUM). In particular, we say that if two items are more distant in their scores, their labels will be subject to less noise, and vice-versa.
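To make the distinction concrete, the following toy sketch (an added illustration; the scores and the toggle probability are arbitrary) contrasts a score-dependent, logistic random-utility label with the score-independent toggle noise adopted in this work:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=8)            # hypothetical latent scores of 8 items

def label_rum(i, j):
    # Score-dependent noise (random-utility flavour): the closer the scores,
    # the closer the probability of either label is to 1/2.
    p_i_wins = 1.0 / (1.0 + np.exp(-(s[i] - s[j])))
    return 1 if rng.random() < p_i_wins else -1

def label_toggle(i, j, delta=0.1):
    # Score-independent toggle noise: the correct label sign(s_i - s_j) is
    # flipped with a fixed probability delta, whatever the score gap.
    flip = -1 if rng.random() < delta else 1
    return int(np.sign(s[i] - s[j])) * flip

print([label_rum(0, 1) for _ in range(10)])
print([label_toggle(0, 1) for _ in range(10)])
\end{verbatim}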
Popular models are Thurstone's model \cite{article:Thurstone1927} and the Bradley-Terry (BT) model \cite{article:BradleyTerry}, and it is possible to find a large body of pairwise rank aggregation methods under this parametric assumption \cite{article:Spectral_MLE,article:BT_model,article:subset_Topk}. However, this assumption does not necessarily hold for all applications, and sometimes there is the need for a more general modeling of the noise \cite{article:SVM_RankAggregation_2014, article:NoisySorting_2007, article:EffRank_2013,article:Minimax} --- the approach we take in this work. This model considers that noise is the same for any comparison, regardless of the ranking of the items being compared\footnote{This formulation can also be understood as the noisy sorting problem with resampling.}. Another non-parametric approach is the Borda Count, recently analyzed in \cite{article:SimpleRobust_Shah_2018}. Here the items are simply ranked by the number of pairwise comparison probabilities won, and this simple approach leads to a much lower computational load. Nonetheless, a disadvantage of the non-parametric model is that one obtains only a ranking (ordered items) and not a rating (scores for each item), so our method is aimed exclusively at the ranking of items. We note that there are also approaches focusing on robustness of parametric models in the presence of adversarial corruption \cite{article:BTL_Adversarial}, but this is not our setting. Finally, a major focus of the current work is to obtain a method suitable for datasets with a large collection of items, as this is a current need of emerging applications. In the ranking context, we can find two main trends: either to carefully sample the observations so that a smaller number of them provides more information \cite{article:Crowd_BT_2013,article:Hybrid_MST_2018,article:Hodge-active_2017,article:ActiveRank_Heckel_2016}, or to use suitable optimization procedures, able to handle large data. The latter is mostly found for the solution of pairwise comparison matrices in general \cite{article:PCM_iter_LargeScale_Sparse}, which is a slightly different problem: it is assumed that we have a ratio between each pair of items, instead of a given number of annotators providing labels --- this, for instance, makes it difficult to include annotator behavior. The former employs active learning techniques to sequentially pick informative pairs. Except for the work in \cite{article:ActiveRank_Heckel_2016}, where the number of comparisons won at one step is used to select the next pairs to be compared, the other authors employ parametric assumptions. Both Crowd-BT \cite{article:Crowd_BT_2013} and Hybrid-MST \cite{article:Hybrid_MST_2018} are based on the Bradley-Terry model, with the latter having smaller time complexity and the former including modeling of annotator behavior. Hodge-active \cite{article:Hodge-active_2017} employs the HodgeRank model as well as Bayesian information maximization to actively select the pair. Apart from the latter \cite{article:Hodge-active_2017}, which presents an unsupervised approach, the previous methods all assume that the pairs can be selected in an active manner, which may not always be the case. This is a motivation to look for other approaches, considering also that they are not mutually exclusive. However, active selection is not the focus of the current work. An important distinction when proposing ranking aggregation methods for large scale lies in their goal.
While there has been a growing interest in methods tailored for Top-K rank \cite{article:subset_Topk,article:rating_topk,article:adversarial_topk}, i.e., the retrieval of the K highest-ranked items of a set, we focus on retrieving the full rank. Therefore, compared to these methods, our proposal may be unnecessarily complex if the application demands only a choice of highest preferences, for instance. However, when one wishes to obtain a full ranking, the assumptions employed for Top-K are no longer applicable. It is also relevant to point out that there is extensive research on the related problem of learning to rank \cite{article:LargeScale_LearningRank,article:LargeScale_LearningRank_2016}. However, this is a different problem as it aims at predicting the ranking of unseen items given a set of features, while we just wish to rank the observed items. To summarize, in this work we focus on developing a principled model and an optimization method suitable for large data, for the ranking problem from possibly noisy pairwise comparisons, made by multiple annotators. In the following subsection, we review in more detail the closely related approaches. \subsection{Further considerations on related work} Looking in more detail at the non-parametric line of work \cite{article:SVM_RankAggregation_2014, article:NoisySorting_2007, article:EffRank_2013,article:Minimax}, where the data model is similar to ours, the approaches diverge in the kind of underlying assumptions. One can consider that all possible comparisons are available (or not) and that we have access to repeated observations (or not). For modern applications, it seems more reasonable to assume that we have access to repeated observations (as argued above, the case of multiple annotators), but do not necessarily have access to all possible comparisons (given the large number of items being compared). In \cite{article:NoisySorting_2007} the authors assume all possible pairs of comparisons are known, without resampling, while in \cite{article:EffRank_2013} they do not assume access to all comparisons, but still do not consider resampling. In \cite{article:Minimax}, the data model is the same as ours, as they allow for resampling and consider only partial access to the observations, thus making it appealing for large scale item datasets. However, their formulation with a pairwise matrix aggregates all comparisons for the same pair, while ours allows for a specification of a different level of confidence on each pair. While this contributes to ranking accuracy, it makes our approach dependent on the number of available comparisons, while \cite{article:Minimax} only depends on the number of items. Finally, we look into SVM-RankAggregation \cite{article:SVM_RankAggregation_2014}, as the closest approach both in terms of data model and problem formulation. In SVM-RankAggregation \cite{article:SVM_RankAggregation_2014} the authors provide conditions on the pairwise comparison matrix, and determine whether some common methods converge to the optimal ranking when the observed data match those conditions. They propose an SVM algorithm, which is proved to converge to the optimal ranking under their most general condition, the so-called generalized low-noise (GNL) condition. If the pairwise matrix satisfies this condition, then the induced dataset is linearly separable and a hard-margin SVM is used, otherwise a soft-margin one with a suitable regularization parameter. However, the experiments are performed for a small number of items and comparisons.
Besides, we test our method in a more general setting than the GNL. \subsection{Contributions} Our main contributions are: (1) we propose a new formulation for the ranking problem, based on a well-defined data model. Furthermore, the formulation is generic enough to allow for personalized annotator behavior (Section~\ref{sec:ProbStatement}); (2) we propose a sum of quasi-
convex functions as an approximation, thus avoiding the drawbacks of the original optimization; (3) an algorithm that scales well with the number of items: an increase from 1000 to 8000 items only leads to an increase of one order of magnitude in time (seconds), while the benchmark methods increase by two (Section~\ref{sec:OptimizationProb}); (4) we provide a measure of confidence on each pairwise comparison, obtained from the weights in the iteratively reweighted optimization (Section~\ref{sec:Approximation}); (5) when compared with state-of-the-art active learning methods, we can achieve a lower computational time by at least one order of magnitude in seconds, when aiming at Kendall tau higher than $0.85$; compared with the closely related model of SVM-Rank Aggregation, PD-Rank shows higher Kendall tau up to at least 10 standard trials and lower computational time (Section~\ref{sec:Experiments}). \section{The PD-Rank Model} \label{sec:ProbStatement} \subsection{Model with toggle noise independent of scores} Given $M$ elements and $N$ pairwise comparisons between them, the goal is to retrieve their true unknown ranking encoded by $x \in \mathbb{R}^M$. Note that elements are given in an arbitrary order, which must not be confused with their ranking, i.e., $x[i]$ is the ranking of element $i$. Comparison $n$ between item $i$ and item $j$, for $i>j$, is encoded in a sparse vector $c_n \in \mathbb{R}^M$, with \begin{equation*} c_n^{(i)} = 1, \qquad c_n^{(j)} = -1, \end{equation*} and the remaining elements set to zero. Each vector $c_n$ corresponds to one comparison, with a total of $N$ vectors. Multiple observations between the same elements are allowed. The observed label we have access to is only an ordering of the two items, that is, \begin{equation*} y_n^{TRUE} = \mathrm{sign}(c_n^Tx), \end{equation*} where $\mathrm{sign}(x)$ is the sign function. This means that if item $i$ has a larger ranking than $j$, then $y_n=1$; if there is a tie, $y_n = 0$; and if $i$ ranks lower than $j$, then $y_n=-1$. However, we do not have access to this label but rather to noisy ones, according to the assumptions expressed in the previous section. Therefore, we model the noise $z_n \in \{-1,1\}$ as independent and identically distributed random variables, where the probability of a toggle error is $\delta_n$, such that \begin{equation} \mathbb{P}(z_n = -1) = \delta_n, \qquad \mathbb{P}(z_n=1) = 1 - \delta_n. \label{eq:toggle_error}\end{equation} Therefore, the noisy observed labels are given as $\label{eq:noise_model} y_n = \mathrm{sign}(c_n^Tx)z_n$. \subsection{Maximum likelihood estimation} Given the previous model, we formulate the problem as finding the maximum likelihood estimator for the ranking $x$, given a set of $N$ observations $y_n$. Under the assumption that the comparisons are i.i.d.\ and stacking them in a vector $y = (y_1, \cdots, y_N)$, the likelihood is given as \begin{equation*} p(y|x) = \prod_{n=1}^N p(y_n|x),\label{eq:ML} \end{equation*} where the likelihood for each observation is \begin{equation*} \begin{split} p(y_n|x) &= p(y_n |x,z_n=1)\mathbb{P}(z_n = 1) + \\ &p(y_n |x,z_n=-1)\mathbb{P}(z_n = -1). \end{split} \end{equation*} The conditional probabilities translate to the probability of observing a given label if an error occurred or not, given the ranking knowledge. This corresponds to either $0$ or $1$, depending on whether $y_n$ corresponds to the true comparison $c_n^Tx$ or not.
We express this with an indicator function \begin{equation*} \mathbb{I}_S(u) = \begin{cases} 1 & \mathrm{ if }\quad u \in S\\ 0 & \quad \mathrm{otherwise}. \end{cases} \end{equation*} Therefore, and considering the error probabilities as given in \eqref{eq:toggle_error}, we write the likelihood as \begin{equation*} \begin{split} p(y_n|x) = \mathbb{I}_{\mathrm{sign} (c_n^Tx)}(y_n)(1-\delta_n) + \mathbb{I}_{-\mathrm{sign} (c_n^Tx)}(y_n)\delta_n \end{split} \label{eq:likelihood_obs}\end{equation*} and the maximum likelihood estimator of \eqref{eq:ML} is given as \begin{equation} \hat{x}_{ML}\in \mathrm{argmax}_x \sum_{n=1}^N \log \left \{ p(y_n|x) \right \}. \label{eq:Prob_MLE} \end{equation} Our goal is to find an approximate solution for this optimization problem. \section{Solving the PD-Rank Problem} \label{sec:OptimizationProb} \subsection{Reformulation: Equivalence to 0-1 loss} In this section we describe our proposed algorithm: PD-Rank. We first note that our problem can be reformulated into the well-known 0-1 loss. Problem \eqref{eq:Prob_MLE} can be reformulated (see Appendix~\ref{sec:Appendix_Equiv_01} for further details) as \begin{equation*} \operatornamewithlimits{minimize }_x \sum_{n=1}^N\log \Big( \frac{1-\delta_n}{\delta_n} \Big) [1-\mathbb{I}_{\geq 0}(y_nc_n^Tx)].\end{equation*} We will introduce a variable $w_n = \log \Big( \frac{1-\delta_n}{\delta_n} \Big)$, now representing the confidence on each observation, instead of the noise associated to it. We will also introduce $a_n = y_nc_n$, corresponding to the noisy data received from annotators. Therefore, we will obtain \begin{equation} \operatornamewithlimits{minimize }_x \sum_{n=1}^Nw_n [1-\mathbb{I}_{\geq 0}(a_n^Tx)], \label{eq:likelihood_01loss}\end{equation} which is the formulation of the well-known 0-1 loss. \subsection{Approximation}\label{sec:Approximation} Problem~\eqref{eq:likelihood_01loss} is nonconvex and discontinuous, and thus difficult to optimize. One way to approach this problem is to use a convex approximation, desirably as tight to the nonconvex one as possible. A common convex surrogate of the 0-1 loss is the hinge loss \cite{article:hingeloss}, given as \begin{equation*} \operatornamewithlimits{minimize }_x \sum_{n=1}^Nw_n \max\{0,1-a_n^Tx\}. \end{equation*} However, given the nature of our problem, where $a_n^Tx$ is always the difference between two entries of $x$, this cost will favor smaller values of the entries of $x$ instead of the correct rank. That is, in the presence of opposite labels for the same pair, one of them will necessarily lead to $a_n^Tx<0$, regardless of the values in $x$. On the other hand, the minimum of each term is $0$, so we strongly penalize large differences in wrong ranks but equally benefit from any difference in correct rank. Given that this surrogate is not appropriate for our setting, we propose a different approximation using the logarithm of the Log-Sum-Exp (LSE) function ($LSE(t) = \log (1+e^t)$), moving away from convexity but attaining continuity and differentiability. We notice that using the LSE would lead to a similar model as the one found in BT, but in order to stay closer to our noise assumptions we take the logarithm of LSE, which is a tighter approximation to the 0-1 loss (we refer the reader to Figure~\ref{fig:loss} for a more detailed comparison between the original cost and the two approximations).
Consequently, we can overcome the problems with the hinge loss in a manner related to BT while staying closer to our noise model (and, as we will see in Section~\ref{sec:Experiments}, this will have a positive effect on the ranking accuracy). At this point, we leave $w_n$ temporarily aside (i.e., we consider $w_n$ constant and equal for all terms) and propose the following approximation for the second factor of \eqref{eq:likelihood_01loss} \begin{equation} \begin{split} \operatornamewithlimits{minimize }_x & \sum_{n=1}^N \log \Big[ \log\big(1+e^{1-a_n^Tx}\big) + \epsilon\Big] + \gamma\|x\|^2_2\\ \operatornamewithlimits{\text{subject to }} & \quad \textbf{1}^Tx = 0,\label{eq:NewCost} \end{split}\end{equation} where $\epsilon>0$ is small and the constraint is added to anchor the solution (without it, adding any constant to $x$ still leads to a solution of the problem). The regularization term $\gamma\|x\|^2_2$ is also added to prevent the unbounded increase of distance between the elements in $x$, with the regularization weight $\gamma$ set to a small value. To solve the approximation we will use an iterative reweighting scheme.
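As a minimal illustration of the surrogate cost \eqref{eq:NewCost} (an added sketch, not the authors' implementation: it uses uniform weights $w_n$, an off-the-shelf quasi-Newton solver in place of the reweighting scheme, and synthetic data generated from the toggle-noise model), the following recovers a ranking from noisy pairwise labels:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Synthetic data from the toggle-noise model: M items with true ranking x_true,
# N noisy pairwise labels y_n = sign(c_n^T x) z_n with flip probability delta.
M, N, delta = 50, 2000, 0.15
x_true = rng.permutation(M).astype(float)
i, j = rng.integers(0, M, N), rng.integers(0, M, N)
i, j = i[i != j], j[i != j]
C = np.zeros((i.size, M))
C[np.arange(i.size), i], C[np.arange(i.size), j] = 1.0, -1.0
z = np.where(rng.random(i.size) < delta, -1.0, 1.0)
A = (np.sign(C @ x_true) * z)[:, None] * C          # rows are a_n = y_n c_n

def cost_and_grad(x, eps=1e-3, gamma=1e-3):
    # sum_n log(log(1 + e^{1 - a_n^T x}) + eps) + gamma ||x||^2, uniform weights.
    t = 1.0 - A @ x
    softplus = np.maximum(t, 0.0) + np.log1p(np.exp(-np.abs(t)))   # stable log(1+e^t)
    dcost_dt = (1.0 / (1.0 + np.exp(-t))) / (softplus + eps)
    cost = np.sum(np.log(softplus + eps)) + gamma * x @ x
    grad = -A.T @ dcost_dt + 2.0 * gamma * x
    return cost, grad

res = minimize(cost_and_grad, np.zeros(M), jac=True, method="L-BFGS-B")
x_hat = res.x - res.x.mean()     # the data term is shift-invariant: anchor 1^T x = 0
tau, _ = kendalltau(x_hat, x_true)
print("Kendall tau between recovered and true ranking:", tau)
\end{verbatim}
Only the ordering of the recovered scores matters, so the anchoring constraint is enforced here by a final mean shift rather than inside the solver.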
$x^*$ of $(\mathcal{P}_f^\epsilon)$ in this case satisfies a bound of the form \begin{align*} \norm{x-x^*}_2 \leq \kappa_1 \dist_{\norm{\cdot}_1} (x_0, \mathcal{S}_k) + \kappa_2 \epsilon, \end{align*} where $\kappa_1$ and $\kappa_2$ depend on the $RIP$-constants of $A$. In this paper, we will formulate and investigate a geometrical criterion for general convex programs $(\mathcal{P}_f^\epsilon)$ to satisfy such a bound, at least when $f$ is a norm. We will call such programs \emph{robust} (with respect to noise) and \emph{stable} (with respect to distance of $x_0$ to $\mathcal{C}$). The criterion, which we will call the \emph{Angular Separation Criterion} or $ASC$, will only depend on the relative positions of the kernel of $A$ and the descent cone $\mathcal{D}(f, x_0)$. More specifically, we will prove that if the mentioned sets have a positive angular separation, $(\mathcal{P}_f^\epsilon)$ will be robust and, in the case of $f$ being a norm, stable. We will furthermore relate the $ASC$ to known criteria for stability and robustness from the literature. We will also prove the somewhat remarkable fact that for a very large class of norms, the $ASC$ is in fact \emph{implied} by the ability of $(\mathcal{P}_f)$ to recover signals exactly from \emph{noiseless} measurements. The idea originates from a part of the Master Thesis \cite{Flinth2015Master} of the author. \subsection*{Related Work and Contributions of This Paper} Some research towards a geometrical understanding of the stability of compressed sensing has already been conducted. Here we list a few of the approaches that have been considered. First, we would like to mention the so called $RIP$-$NSP$-condition from the recent paper \cite{cahill2015gap}. This paper only deals with the classical compressed sensing setting, i.e., that the signal of interest is approximately sparse and $\ell_1$-minimization is used to recover it. A matrix $A$ is said to satisfy the $RIP$-$NSP$-condition if there exists another matrix $B$, which has the $RIP$-property, such that $\ker A = \ker B$. The authors of \cite{cahill2015gap} prove that this is enough to secure stability and robustness of $(\mathcal{P}_1^\epsilon)$. This is intriguing, as it shows that stability and robustness can be secured by only considering the kernel of the measuring matrix. Another line of research is the so called \emph{Robust Width Property}, which was developed in \cite{cahill2014robwidth}. The authors of said article define \emph{compressed sensing spaces}, a general framework that covers both different types of sparsity in $\mathbb{R}^d$ as well as the case of low rank matrices. The main part of the definition of a compressed sensing space is a norm decomposability condition; if $\norm{\cdot}$ is the norm induced by the inner product of the Hilbert space $\mathcal{H}$, and $\norm{\cdot}_*$ is another norm on $\mathcal{H}$, $(\mathcal{H}, \mathcal{C},\norm{\cdot}_{*})$ is said to be a compressed sensing space with bound $L$ if for every $a$ in the subset $\mathcal{C}\subseteq \mathcal{H}$ and $z \in \mathcal{H}$, there exists a decomposition $z=z_1 +z_2$ such that \begin{align*} \norm{a+z}_{*} = \norm{a}_{*} + \norm{z_1}_{*} \ , \ \norm{z_2}_{*} \leq L \norm{z}. \end{align*} A matrix is then said to satisfy the $(\rho, \alpha)$-\emph{robust width property} if for every $z \in \mathcal{H}$ with $\norm{Az}\leq \alpha \norm{z}$, we have $\norm{z}\leq \rho \norm{z}_{*}$. 
This robust width property is in fact equivalent to the stability and robustness of $(\mathcal{P}_f^\epsilon)$ with $f(x) = \norm{x}_{*}$. One large difference between this approach and ours is that we do not require any norm decomposability conditions. For instance, as was pointed out in \cite{cahill2015gap}, the $\ell_\infty$-norm in $\mathbb{R}^d$ is not well suited to be included in this framework. As for the problem of stability of recovering almost sparse vectors, we want to mention the paper \cite{xu2011precise}. The authors of that article carry out an asymptotic analysis of the threshold amount of Gaussian measurements needed for the classical technique of $\ell_1$-minimization for sparse recovery to be stable. The analysis heavily relies on the theory of so-called \emph{Grassmannian angles} of polytopes, which is a purely geometrical concept. This approach has connections to, but is still relatively far away from, the one in this work. In particular, it is by no means straight-forward, if at all possible, to generalize it to problems other than $\ell_1$-minimization. The condition presented in this paper is highly related to several other geometric stability measures: the so-called \emph{restricted singular values} \cite{amelunxen2014gordon} of matrices on cones, \emph{Renegar's condition number} \cite{BelloniFreund2007,renegar1992}, and the \emph{Grassmannian condition number} \cite{amelunxenBurgisser2012} (see Section \ref{sec:ASCvsConds}). In fact, already in the previously mentioned paper \cite{chandrasekaran2012convex}, it is proven that if the smallest singular value of $A$ restricted to the descent cone of the functional $f$ does not vanish, we will have robustness. During the final review of this paper, the author was made aware of the fact that the connection between Renegar's condition number and robustness of compressed sensing has also recently been investigated in \cite{Roulet2015Renegar}. This work provides a new, more elementary perspective on the above-mentioned notions, and in particular establishes their relations to classical criteria for stability in compressed sensing. Another contribution of this paper is the observation that if $f$ is a norm, the criterion also implies stability, an observation which, to the best of the knowledge of the author, has not been made before. \subsection*{Notation} Throughout the whole paper, $\mathcal{H}$ will denote a general, finite-dimensional Hilbert space. The corresponding inner product will be denoted $\sprod{\cdot, \cdot}$, and the induced norm by $\norm{\cdot}$. For a subspace $U \subseteq \mathcal{H}$, we will write \begin{align*} \norm{x +U} = \inf_{u \in U} \norm{x+u}. \end{align*} Note that due to the finite-dimensionality of $\mathcal{H}$, this infimum is in fact attained. In the final part of the paper, we will deal with Gaussian vectors and Gaussian matrices. A random vector, or linear map, is said to be Gaussian if its representation in an orthonormal basis, or in a pair of such, has i.i.d. standard normally distributed entries. The entries of a vector $x$ in $\mathbb{R}^d$ will be denoted $x(1), x(2), \dots, x(d)$. $\mathbb{R}^d$ is equipped with the standard scalar product whose induced norm is the $\ell_2$-norm: \begin{align*} \sprod{x,y} = \sum_{i=1}^d x(i)y(i), \quad \norm{x}_2 = \sqrt{\sum_{i=1}^d x(i)^2}.
\end{align*} \section{A Geometrical Robustness Condition} \label{sec:geomStab} Let us begin by considering the most classical compressed sensing setting: that is, the problem of retrieving a signal $x_0 \in \mathbb{R}^d$ with few non-zero entries from exact measurements $b = Ax_0$ using $\ell_1$-minimization (i.e. $(\mathcal{P}_1)$). One of the most well known criteria for $(\mathcal{P}_1)$ to be successful is the so-called Null Space Property, $NSP$. A matrix $A$ satisfies the $NSP$ with respect to the index set $S_0 \subseteq \set{1, \dots d}$ if for every $\eta \in \ker A \backslash \set{0}$, we have \begin{align} \label{eq:NSP} \sum_{i \in S_0} \abs{\eta(i)} < \sum_{i \notin S_0} \abs{\eta(i)}. \end{align} The $NSP$ with respect to $S_0$ is in fact equivalent to $(\mathcal{P}_1)$ recovering every signal $x_0$ with support $S_0$ \cite{MathIntroToCS}. The $NSP$ has a geometrical meaning. In order to explain it, let us first define \emph{descent cones} of a function $f$. \begin{defi} \label{def:descentCone} Let $\mathcal{H}$ be a finite-dimensional Hilbert space, and $f: \mathcal{H} \to \mathbb{R}$. The \emph{descent cone} $\mathcal{D}(f,x_0)$ of $f$ at the point $x_0 \in \mathcal{H}$ is the cone generated by the descent directions of $f$ at $x_0$, i.e. \begin{align*} \mathcal{D}(f,x_0) = \set{\eta \ \vert \ \exists \tau>0 : f(x_0 + \tau \eta) \leq f(x_0)}. \end{align*} \end{defi} \begin{example} \label{ex:l1desc} The descent cone of the $\ell_1$-norm at a vector $x_0$ supported on the set $S_0$ is given by \begin{align*} \mathcal{D}(\norm{\cdot}_1,x_0)= \Big\{\eta \ \vert \ \sum_{i \notin S_0} \abs{\eta(i)} \leq -\sum_{i \in S_0} \sgn(x_0(i)) \eta(i) \Big\}. \end{align*} \underline{Proof:} Since the conditions for $\eta$ to belong to the left-hand and the right-hand set, respectively, are invariant under scaling, we may assume that $\eta$ has small norm. Then we have \begin{align*} \abs{x_0(i)+\eta(i)} = \begin{cases} \abs{x_0(i)}+\sgn(x_0(i))\eta(i) & \ i \in S_0 \\ \abs{\eta(i)} & \ i \notin S_0 \end{cases}, \end{align*} which implies $\norm{x_0+\eta}_1 - \norm{x_0}_1 = \sum_{i \notin S_0} \abs{\eta(i)} + \sum_{i \in S_0} \sgn(x_0(i)) \eta(i)$. This is smaller than or equal to zero exactly when $\sum_{i \notin S_0} \abs{\eta(i)} \leq - \sum_{i \in S_0} \sgn(x_0(i)) \eta(i)$. \end{example} With the last example in mind, it is not hard to convince oneself that Equation \eqref{eq:NSP} exactly states that the vector $\eta$ does not lie in the descent cone of any signal supported on the set $S_0$. I.e., the $NSP$ actually reads \begin{align*} \forall x_0: \supp x_0 = S_0: \mathcal{D}(\norm{\cdot}_1, x_0) \cap \ker A = \set{0}. \end{align*} This observation can be extended to more general situations, as the following well-known lemma shows. \begin{lem}\label{lem:descConeKernel} (E.g. \cite[Proposition 2.1]{chandrasekaran2012convex}.) Let $f: \mathcal{H} \to \mathbb{R}$ be convex, $A : \mathcal{H} \to \mathbb{R}^m$ be linear and consider the program $(\mathcal{P}_f)$, with $b=Ax_0$ for noiseless recovery of the signal $x_0$. The solution of $(\mathcal{P}_f)$ is equal to $x_0$ if and only if \begin{align} \mathcal{D}(f, x_0) \cap \ker A = \set{0}. \label{eq:geomExactCond} \end{align} \end{lem} \begin{figure} \centering \includegraphics[scale=.3]{tubNeigh.eps} \caption{ The impact of an angular separation of $\mathcal{D}(f,x_0)$ and $\ker A$. Note that locally, the sets $x_0 +\mathcal{D}(f,x_0)$ and $\set{x \vert f(x) \leq f(x_0)}$ have the same structure.
\label{fig:angsep}} \end{figure} Can we use the previous lemma to develop a geometrical intuition of what we have to assume in order to prove stability and robustness of the recovery using $(\mathcal{P}_f^\epsilon)$? The main difference between the program $(\mathcal{P}_f)$ and $(\mathcal{P}_f^\epsilon)$ is that the former is only allowed to search for a solution in the set $x_0 +\ker A$, while the latter can search in a tubular neighborhood of the same set. Figure \ref{fig:angsep} suggests that if the descent cone $\mathcal{D}(f, x_0)$ and $\ker A$ not only intersect each other trivially, but also have an angular separation, it should be possible to prove that the intersection of the mentioned tubular neighborhood and the set $\set{x \vert f(x) \leq f(x_0)}$ is not large. This should in turn imply robustness. (In fact, this intuition was used already in \cite{CandesRombergTao2006} when proving robustness in the original compressed sensing setting.) To provide a precise formulation of angular separation, we first define the \emph{$\theta$-expansion of a cone}. \begin{defi} \label{def:muExp} Let $\mathcal{H}$ be a finite-dimensional Hilbert space with norm $\norm{\cdot}$ and scalar product $\sprod{\cdot,\cdot}$. Let further $C\subseteq \mathcal{H}$ be a convex cone, i.e. a convex set with $\tau C=C$ for every $\tau>0$. Then for $\theta \in [0, \pi]$, we define the $\theta$-expansion $C^{\wedge \theta}$ as the set \begin{align*} C^{\wedge \theta} = \set{x \in \mathcal{H} \ \big\vert \ \exists y \in C : \sprod{x,y} \geq \cos(\theta) \norm{x} \norm{y}}. \end{align*} \end{defi} For an illustration of the relation between a cone $C$ and its $\theta$-expansion $C^{\wedge \theta}$, see Figure \ref{fig:thetaExp}. Before moving on, let us make some remarks. \begin{figure} \centering \includegraphics[scale=.45]{thetaExpansion.eps} \caption{A convex cone $C$ and its $\theta$-expansion $C^{\wedge \theta}$. \label{fig:thetaExp}} \end{figure} \begin{rem} \begin{enumerate}[(i)] \item $C = C^{\wedge 0} \subseteq C^{\wedge \theta}$ for every $\theta >0$. \item If $C$ is closed, $C^{\wedge \theta}$ can alternatively be defined as the set \begin{align*} C^{\wedge \theta}=\cone (\{x \in \mathcal{H} \ \big\vert \ \norm{x}=1, \sup_{y \in C, \norm{y}=1} \sprod{x,y} \geq \cos(\theta)\} ). \end{align*} \item $C^{\wedge \theta}$ is always a cone, but not always convex. As a concrete counterexample, consider the closed, convex cone $K \subseteq \mathbb{R}^3$ \begin{align*} K= \cone \left(\set{x \in \mathbb{R}^3 \ \big\vert \ x(1)=1, x(2)=0, \abs{x(3)} \leq 1}\right). \end{align*} Using the previous remark, we can calculate $K^{\wedge \theta}$ exactly. We have for $x\in \mathbb{R}^3$ \begin{align*} \sup_{y \in K, \norm{y}=1} \sprod{x,y} = \sup_{t\in [-1,1]} \frac{x(1) + t x(3)}{\sqrt{1+t^2}} = \begin{cases} \sqrt{x(1)^2 + x(3)^2} &\text{if } \abs{x(3)}\leq x(1) \\ \frac{x(1) + \abs{x(3)}}{\sqrt{2}}& \text{else,} \end{cases} \end{align*} where the last equality can be proven using elementary calculus. The last remark now tells us that $K^{\wedge \frac{\pi}{6}}$ is given by \begin{align*} \set{ y\in \mathbb{R}^3 \ \big\vert \ \frac{\sqrt{3}\norm{y}_2}{2} \leq \begin{cases} \sqrt{y(1)^2 + y(3)^2} &\text{if } \abs{y(3)}\leq y(1) \\ \frac{y(1) + \abs{y(3)}}{\sqrt{2}}& \text{else.} \end{cases}}.
\end{align*} This set is not convex; for instance, the points $y_1=\left(1,\sqrt{\frac{2}{3}},1\right)$ and $y_2=\left(1,\sqrt{\frac{2}{3}},-1\right)$ are both contained in the set, whereas $\frac{1}{2}y_1 + \frac{1}{2}y_2 = \left(1,\sqrt{\frac{2}{3}},0\right)$ is not. See also Figure \ref{fig:nonConvExt}. \end{enumerate} \end{rem} \begin{rem} \label{rem:angleMetric} It is not hard to convince oneself that \begin{align*} \delta(x,y) = \arccos(\sprod{x,y}) \end{align*} defines a metric on $\mathbb{S}^{d-1}$. In particular, the triangle inequality holds: \begin{align*} \arccos(\sprod{x,y})\leq\arccos(\sprod{x,z})+\arccos(\sprod{z,y}). \end{align*} \end{rem} After these preparations, we may state our robustness condition. \begin{prop} \label{lem:geomConvStab} Let $x_0 \in \mathcal{H}$ and $ A : \mathcal{H} \to \mathbb{R}^m$ be linear. Consider the program $(\mathcal{P}_f^\epsilon)$, where $f: \mathcal{H} \to \mathbb{R}$ is convex and $b= Ax_0 +n$ with $\norm{n}\leq \epsilon$. If there exists a $\theta>0$ such that \begin{align} \label{eq:geomConvStab} \mathcal{D}^{\wedge \theta}(f,x_0) \cap \ker A = \set{0}, \end{align} then there exists a constant $\kappa>0$ so that the solution $\hat{x}$ of $(\mathcal{P}^\epsilon_f)$ obeys \begin{align*} \norm{ \hat{x}-x_0} \leq \kappa \epsilon. \end{align*} $\kappa$ depends on $\theta$ and the smallest non-zero singular value $\sigma_{\min}(A)$ of $A$. \end{prop} For simplicity, we will call \eqref{eq:geomConvStab} the \emph{$\theta$-angular separation condition}, or $\theta$-$ASC$. We will in fact not prove this proposition directly. Instead, we first establish a connection to so-called \emph{restricted singular values} of the matrix $A$, which then implies Proposition \ref{lem:geomConvStab} as a corollary. \begin{figure} \begin{centering} \includegraphics[scale=.3]{Kwedge.eps} \caption{The convex cone $K$ and its non-convex $\frac{\pi}{6}$-expansion $K^{\wedge \frac{\pi}{6}}$. \label{fig:nonConvExt}} \end{centering} \end{figure} The concept of restricted singular values was extensively studied in the paper \cite{amelunxen2014gordon}. There, the singular value of a linear map $A: \mathcal{H} \to \mathbb{R}^m$ restricted to the cones $C \subseteq \mathcal{H}$ and $D \subseteq \mathbb{R}^m$ was defined as \begin{align*} \sigma_{C \to D} (A) = \min_{x \in C, \norm{x}=1} \norm{\Pi_D Ax}_2, \end{align*} where $\Pi_D$ denotes the \emph{Euclidean} (\emph{metric}, \emph{orthogonal}) \emph{projection} (or \emph{nearest point map}) onto the convex set $D$: $$ \Pi_D(x) = \argmin_{y \in D} \norm{x-y}_2.$$ (See also, for instance, \cite{AmelunxLotzMcCoyTropp2014}.) If $D= \mathbb{R}^m$, one also speaks of the \emph{minimal gain} \cite{chandrasekaran2012convex}. What is interesting for us is that if $\sigma_{\mathcal{D}(f,x_0) \to \mathbb{R}^m} (A)>0$, $(\mathcal{P}_f^\epsilon)$ will be robust. \begin{lem} \label{lem:singValMeansStab} (See \cite{amelunxen2014gordon}, \cite[Proposition 2.2]{chandrasekaran2012convex}.) Let $f: \mathcal{H} \to \mathbb{R}$ be convex and $A: \mathcal{H} \to \mathbb{R}^m$ be linear with $m< \dim \mathcal{H}$. If $\sigma_{\mathcal{D}(f,x_0) \to \mathbb{R}^m} (A)>0$, any solution $x^*$ of $(\mathcal{P}_f^\epsilon)$ obeys \begin{align*} \norm{x^* - x_0}_\mathcal{H} \leq \frac{2\epsilon}{\sigma_{\mathcal{D}(f,x_0) \to \mathbb{R}^m}(A)}. \end{align*} \end{lem} \begin{rem} \begin{enumerate}[(i)] \item Note that in the references we cited, the lemma is proved under slightly less general conditions (e.g., the function $f$ is assumed to be a norm).
The proof does however work, line for line, also in our case, and we therefore omit it, for the sake of brevity. \item As is indicated by the formulation of Lemma \ref{lem:singValMeansStab}, the solution vector $x^*$ by no means has to be unique, even for arbitrarily small $\epsilon >0$. We can construct an example of such a situation as follows: Consider a sparse vector $x_0 \in \mathbb{R}^d$ supported on the set $S$ and a matrix $A \in \mathbb{R}^{m,d}$ such that $\sigma_{\mathcal{D}(\norm{\cdot}_1, x_0) \to \mathbb{R}^m}>0$ - i.e. in particular that $x_0$ is recovered from its exact measurements by $\mathcal{P}_{1}$. Suppose that the solution $x^*$ of $\mathcal{P}_{1}^\epsilon$ is unique for some $\epsilon >0$ and is supported on a set $S'$ which is larger than $S$ (this is arguably the most common situation). For any $i \in S' \backslash S$, consider the matrix $\widetilde{A} \in \mathbb{R}^{m,d+1}$ formed by concatenating $A$ with a copy of the $i$:th column of $A$, and the vectors $\tilde{x}^*$ and $\tilde{x}_0$ formed by concatenating $x^*$ and $x_0$ with a zero, respectively. It is then clear that $\tilde{x}_0$ still is recovered from the exact measurements $b= \widetilde{A}\tilde{x}_0$ via the exact program $\mathcal{P}_{1}$, and hence (see Section \ref{sec:ExactImpliesStable}) $\sigma_{\mathcal{D}(\norm{\cdot}, \tilde{x}_0) \to \mathbb{R}^m}( \tilde{A})>0$, but that any vector \begin{align*} \tilde{x}^*_\theta = \theta \tilde{x}^* + (1-\theta) e_{d+1} \end{align*} for $\theta \in [0,1]$ solves $\mathcal{P}_{1}^\epsilon$. Hence, the solution of the relaxed problem $\mathcal{P}_{1}^\epsilon$ for $\widetilde{A}$ is not unique. \end{enumerate} \end{rem} We now prove that the $ASC$ to some extent is equivalent to $\sigma_{\mathcal{D}(f,x_0) \to \mathbb{R}^m} (A)>0$. \begin{lem} \label{lem:singValAndGeomConv} Let $C \subseteq \mathcal{H}$ be a non-empty convex cone and $A : \mathcal{H} \to \mathbb{R}^d$ be a linear map. Then the following are equivalent \begin{enumerate}[(1)] \item There exists a $\theta>0$ such that $C^{\wedge \theta} \cap \ker A = \set{0}$. \item $\sigma_{C \to \mathbb{R}^m}(A) >0$. \end{enumerate} In particular, if $\sigma_{\min/\max}(A)$ denotes the smallest/largest non-vanishing singular value of $A$, respectively, we have for every $\theta$ with $C^{\wedge \theta} \cap \ker A = \set{0}$ that \begin{align} \label{eq:singValAndGeomConv} \sin(\theta) \sigma_{\min}(A) \leq \sigma_{C \to \mathbb{R}^m}(A) \leq \sin\left( \theta \right) \sigma_{\max}(A). \end{align} \end{lem} \begin{proof} $(1) \Rightarrow (2)$. Suppose that $C^{\wedge \theta} \cap \ker A = \set{0}$ and let $x \in C$ have unit norm. Then we have for every $y \in \ker A$ \begin{align*} \norm{x-y}_2^2 = 1 + \norm{y}^2 - 2 \sprod{x,y} \geq 1+ \norm{y}^2-2\cos(\theta)\norm{y} = 1 - \cos^2(\theta) + ( \norm{y}-\cos(\theta))^2 \geq \sin^2(\theta), \end{align*} since due to $C^{\wedge \theta} \cap \ker A = \set{0}$, there must be $\sprod{x,y} \leq \cos(\theta)\norm{y}$. Since $y \in \ker A$ was arbitrary, we obtain $\norm{x + \ker A} \geq \sin(\theta)$. This has the consequence \begin{align*} \norm{Ax} \geq \sigma_{\min}(A) \norm{x + \ker A} \geq \sigma_{\min}(A)\sin(\theta). \end{align*} It follows that $\sigma_{C \to \mathbb{R}^m}(A) \geq \sigma_{\min}(A)\sin(\theta)>0$, since $\sigma_{\min}(A)>0$ due to $\ker A \neq \mathcal{H}$, which follows from $ C^{\wedge \theta} \cap \ker A = \set{0}$, and $\theta>0$. 
$(2) \Rightarrow (1)$ Suppose that $\sigma_{C \to \mathbb{R}^m}(A)>0$ and let $x \in C$ and $y \in \ker A$ have unit norm. Define $\theta$ through $\sprod{x,y} = \cos(\theta)$. Our goal is to prove that $\theta$ has to be larger than some number $\theta_0>0$. Since $x \in C$, we have $\norm{Ax}_2 \geq \sigma_{C \to \mathbb{R}^m}(A)$. We also have for every $t \in \mathbb{R}$, due to $y \in \ker A$, \begin{align*} \sigma_{C \to \mathbb{R}^m}(A)\leq\norm{Ax}_2 = \norm{A(x-ty)}_2 \leq \sigma_{\max}(A) \norm{x-ty}_2 = \sigma_{\max}(A) \sqrt{1+t^2 -2t\cos(\theta)}. \end{align*} Choosing $t =\cos(\theta)$ yields $$\sin\left(\theta \right)=\sqrt{1-\cos^2(\theta)}\geq \frac{\sigma_{C \to \mathbb{R}^m}(A)}{\sigma_{\max}(A)}>0 ,$$ where we in the last step used $\sigma_{C \to \mathbb{R}^m}(A)>0$. This proves the claim. \end{proof} Now Proposition \ref{lem:geomConvStab} easily follows from combining Lemma \ref{lem:singValMeansStab} with Lemma \ref{lem:singValAndGeomConv}. We will return to the connection between restricted singular values and the $ASC$ in the next section. \begin{rem} \label{rem:stableL2NoAngle} The $ASC$ is not necessary for robust recovery. Consider for example $\ell_2$-minimization in $\mathcal{H} = \mathbb{R}^d$: \begin{align} \tag{$\mathcal{P}_2^\epsilon$} \min \norm{x}_2 \text{ subject to } \norm{Ax -b}\leq \epsilon. \end{align} In order for $\ell_2$-minimization to exactly recover a signal $x_0$ (which of course is necessary for robust recovery, choose $\epsilon=0)$, we need to have $x_0 \perp \ker A$, since the solution of $(\mathcal{P}_2)$ is given by $\Pi_{\ker A^\perp} x_0$. Furthermore, $\mathcal{D}(\norm{\cdot}_2, x_0)$ is given by $\set{ v \in \mathbb{R}^d \ \vert \ \sprod{x_0,v} < 0}$. We claim that this implies that for each $\theta>0$, $\mathcal{D}^{\wedge \theta}(\norm{\cdot}_2, x_0) \cap \ker A \neq \set{0}$. To see why, let $v \in \mathcal{D}(\norm{\cdot}_2,x_0)$ and $ \eta$ be a nonzero element of $\ker A$. Such an element necessarily exists as soon as $d>m$. For any $\lambda >0$, $v + \lambda \eta \in \mathcal{D}(\norm{\cdot}_2, x_0)$. This since we have $\sprod{x_0,\eta}=0$ due to $x_0 \perp \ker A \ni \eta$. Consequently, $\sprod{v + \lambda \eta, x_0} = \sprod{v,x_0}<0$, i.e. $v + \lambda \eta \in \mathcal{D}(\norm{\cdot}_2, x_0)$. Now consider the quotient \begin{align*} \frac{\sprod{v+ \lambda \eta, \eta} }{\norm{v+\lambda \eta}_2\norm{\eta}_2} \geq \frac{\sprod{v, \eta}+ \lambda\norm{ \eta}_2^2 }{(\norm{v}_2+\lambda \norm{\eta}_2)\norm{\eta}_2}. \end{align*} By letting $\lambda \to \infty$, this quotient can be made arbitrarily close to $1$. Since $v + \lambda \eta \in \mathcal{D}(\norm{\cdot}_2, x_0)$, this means that $\ker A \ni \eta~\in~\mathcal{D}^{\wedge \theta}(\norm{\cdot}_2, x_0)$ for every $\theta >0$. Hence, we have a non-trivial intersection between the $\theta$-expansion of the descent cone and the kernel for every $\theta > 0$. It is, however, not hard to convince oneself that the solution $\hat{x}$ of $(\mathcal{P}_2^\epsilon)$ necessarily lies in $\ker A^\perp$ (any part in $\ker A$ can be removed without affecting $\norm{Ax -b}_2$, and at the same time making $\norm{x}_2$ smaller). We already argued that $x_0$ also has this property. This has the immediate consequence that \begin{align*} \norm{\hat{x}-x_0}_2 \leq \sigma_{\min}(A)^{-1} \norm{A\hat{x} - A x_0}_2 \leq 2 \sigma_{\min}(A)^{-1} \epsilon, \end{align*} i.e. we have robustness. 
\end{rem} Now we prove that under the assumption that $f$ is a norm on $\mathcal{H}$, the $ASC$ in fact also implies stability. \begin{theo} \label{prop:robustCrit} Let $\norm{\cdot}_*$ be a norm on $\mathcal{H}$ and consider the convex program \begin{align} \label{eq:normProg} \min \norm{x}_* \text{ subject to } \norm{Ax-b} \leq \epsilon, \end{align} where $b=A\check{x}_0 +n$ with $\norm{n} \leq \epsilon$. Suppose that there exists a $\theta>0$ so that $A$ fulfills the $\theta$-$ASC$ for every $x_0$ in some subset $\mathcal{C} \subseteq \mathcal{H}$. Then the following is true for \eqref{eq:normProg}: There exist constants $\kappa_1$ and $\kappa_2$ such that any solution $x^*$ fulfills \begin{align} \label{eq:robustEq} \norm{x^*- \check{x}_0} \leq \kappa_1 \epsilon + \kappa_2 \dist_{\norm{\cdot}_*}(\check{x}_0, \cone(\mathcal{C})) . \end{align} Here, $\cone(\mathcal{C})$ denotes the cone generated by $\mathcal{C}$, i.e., the set $\set{\lambda x_0, \lambda >0 \text{ and } x_0 \in \mathcal{C}}$. The first constant $\kappa_1$ depends on $\sigma_{\min}(A)$, $\norm{\cdot}_*$ and $\theta$. The second constant $\kappa_2$ depends on $\theta$, $\norm{\cdot}_*$ and the condition number $\sigma_{\max}(A)/\sigma_{\min}(A)$ of $A$. \end{theo} \begin{proof} (See Fig. \ref{fig:rob} for a graphical depiction of the proof.) Let $x_0$ be a, not necessarily unique, vector in $\cone(\mathcal{C})$ with $\norm{\check{x}_0-x_0}_{*} = \delta:= \dist_{\norm{\cdot}_{*}}(\check{x}_0, \cone(\mathcal{C}))$. Due to the homogenity of $\norm{\cdot}_{*}$, we have $\mathcal{D}(\norm{\cdot}_*,x_0) = \mathcal{D}(\norm{\cdot}_*, \lambda x_0)$ for every $\lambda >0$. Therefore we may without loss of generality scale the problem such that $\norm{x_0}_*=1$. Also, since all norms on the finite dimensional space $\mathcal{H}$ are equivalent, there exists $\gamma >0$ so that for each $x\in \mathcal{H}$, $\gamma^{-1} \norm{x}_{*} \leq \norm{x} \leq \gamma \norm{x}_{*}$. These two facts have the consequence that \begin{align*} \norm{A\check{x}_0 - Ax_0} \leq \sigma_{\max}(A) \norm{\check{x}_0-x_0}\leq \gamma \sigma_{\max}(A) \delta. \end{align*} Since $\norm{Ax^*-A\check{x}_0} \leq \norm{Ax^*-b} + \norm{b-A\check{x}_0} \leq 2 \epsilon$, this implies that \begin{align*} \norm{Ax^* - Ax_0}\leq \norm{Ax^* - A \check{x}_0} + \norm{A \check{x}_0 - Ax_0} \leq 2\epsilon + \gamma \sigma_{\max}(A) \delta, \end{align*} i.e. $\norm{x_0 - \hat{x} + \ker A} \leq \sigma_{\min}(A)^{-1}(2\epsilon + \gamma \sigma_{\max}(A) \delta)$. Let $h$ be the vector in $\ker A$ so that $\norm{x_0 + h - \hat{x}} = \norm{x_0 - \hat{x} + \ker A}$. \begin{figure} \centering \includegraphics[scale=.3]{robustCritProof.eps} \caption{The proof of Proposition \ref{prop:robustCrit}. \label{fig:rob}} \end{figure} Now since $x^*$ is a solution of \eqref{eq:normProg}, there must be $\norm{x^*}_*\leq \norm{\check{x}_0}_* \leq \norm{x_0}_* +\delta= \norm{(1+\delta)x_0}_*$, i.e. $\tilde{h} := x^* - (1+\delta)x_0 \in \mathcal{D}(\norm{\cdot}_*,(1+\delta) x_0)=\mathcal{D}(\norm{\cdot}_*,x_0)$, where the latter is due to the homogeneity of $\norm{\cdot}_*$. Now we have \begin{align*} \norm{x^*-x_0} \leq \norm{x^* - (1+\delta)x_0} + \norm{\delta x_0} \leq \Vert \tilde{h}\Vert + \gamma \delta. \end{align*} Due to $h \in \ker A$ and $\tilde{h} \in \mathcal{D}(\norm{\cdot},x_0)$, \eqref{eq:geomConvStab} implies that $\langle h, \tilde{h} \rangle \leq \cos(\theta) \norm{h}\Vert{\tilde{h}}\Vert$. 
This has the consequence that \begin{align*} \Vert h - \tilde{h} \Vert^2 \geq \norm{h}^2 + \Vert \tilde{h} \Vert^2 - 2 \cos(\theta) \norm{h} \Vert \tilde{h} \Vert = ( \norm{h} - \cos(\theta) \Vert \tilde{h} \Vert )^2 + (1 - \cos^2(\theta)) \Vert \tilde{h} \Vert^2 \geq \sin^2(\theta) \Vert \tilde{h} \Vert^2, \end{align*} which implies that $\Vert \tilde{h}\Vert_2 \leq \sin^{-1}(\theta)\Vert h - \tilde{h}\Vert$. Since \begin{align*} \Vert h-\tilde{h} \Vert = \norm{h- x^* + (1+\delta)x_0} \leq \norm{x_0 + h - x^*} + \delta \norm{x_0} \leq \sigma_{\min}(A)^{-1}(2\epsilon + \gamma \sigma_{\max}(A) \delta) + \gamma \delta, \end{align*} we have $\Vert \tilde{h}\Vert \leq \left(\sin(\theta)\sigma_{\min}(A)\right)^{-1}( 2\epsilon + \gamma \sigma_{\max}(A) \delta) + \sin^{-1}(\theta)\gamma \delta $. Finally, we estimate \begin{align*} \norm{x^*-\check{x}_0}_* &\leq \norm{x^* - x_0}_* + \norm{x_0 - \check{x}_0}_* \leq \gamma \norm{x^*-x_0} + \delta \leq \gamma(\Vert \tilde{h}\Vert + \gamma \delta) + \delta \\ &\leq \sin^{-1}(\theta) \left(\gamma\sigma_{\min}(A)^{-1}( 2\epsilon + \gamma \sigma_{\max}(A) \delta) + \delta \right) + 2 \delta =: \kappa_1 \epsilon + \kappa_2 \dist_{\norm{\cdot}}(\check{x}_0, \cone (\mathcal{C}) ), \end{align*} where $\kappa_1 =2\gamma (\sin(\theta)\sigma_{\min}(A))^{-1}$ and $\kappa_2 = \sin\theta^{-1}(\gamma^2\sigma_{\max}(A)/\sigma_{\min}(A) +1)+2$, which is what we wanted to prove. \end{proof} \subsection{$ASC$ and Two Other Geometrical Notions of Stability.} \label{sec:ASCvsConds} Let us end this section by briefly discussing the connection between the $ASC$-condition and two other measures for stability of a linear embedding, namely the so-called \emph{Renegar's condition number}\cite{BelloniFreund2007,renegar1992} $\mathcal{C}_\mathcal{R}(A)$ and the \emph{Grassmannian condition number} \cite{amelunxenBurgisser2012} $\mathcal{C}(A)$ of a matrix. They were originally introduced to study the stability of the \emph{homogeneous convex feasibility problem}: Given a closed convex cone $K \subseteq \mathbb{R}^m$ with non-empty interior not containing a subspace (a \emph{regular} cone), for which $A \in \mathbb{R}^{m,d}$ does there exist a $z$ with \begin{align*} Az \in \text{int } K ? \end{align*} Let us call matrices such \emph{feasible}. The connection to our problem follows from duality: given a cone $C$ whose polar $C^*= \set{ x \in \mathbb{R}^d \ \vert \ \forall y \in C : \sprod{x,y}\leq 0 } $ is regular, it is well known that if the range of the transposed matrix $A^*$ intersects the $\text{int } C^*$ , the kernel of a matrix $A$ can't intersect $C$ non-trivially (see for instance \cite{BelloniFreund2007}). Since the range of $A^*$ always is equal to the orthogonal complement $\ker A^\perp$ of $\ker A$, it makes sense to define the following sets of subspaces : \begin{align*} P_m(K) = \set{U \in \mathbb{G}(d,m) \ \vert \ U^\perp \cap K^* \neq \set{0}}, D_m(C) = \set{U \in \mathbb{G}(d,m) \ \vert \ U \cap K \neq \set{0}} \end{align*} $\mathbb{G}(d,m)$ denotes the \emph{Grassmannian manifold} of $m$-dimensional subspaces of $\mathbb{R}^d$. It can be proven that both $P_m(K)$ and $D_m(K)$ are closed in $\mathbb{G}(d,m)$ and that they share a common boundary $\Sigma_m(K)$. The Grassmannian condition number of a matrix $A$ with respect to a regular cone $C$ is defined as the inverse of the distance of $\ker A$ to $\Sigma_m(C)$, i.e. \begin{align*} \mathcal{C}(A) = \frac{1}{\dist(\ker A, \Sigma_m(C))}. 
\end{align*} The distance is thereby calculated with respect to the canonical metric on $\mathbb{G}(d,m)$: if $U$ and $V$ are $m$-dimensional subspaces of $\mathbb{R}^d$ and $\Pi_U$ and $\Pi_V$ are the orthogonal projections onto them, we define \begin{align*} d(U,V)= \norm{\Pi_U - \Pi_V}. \end{align*} Here, $\norm{\cdot}$ denotes the operator norm. $\mathcal{C}(A)$ is very closely related to the $ASC$-condition: if $A$ is feasible, the largest angle $\theta^*$ so that $A$ satisfies the $\theta$-$ASC$ satisfies \cite[Proposition 1.6]{amelunxenBurgisser2012} \begin{align} \label{eq:grassmannCond} \sin(\theta^*) = \frac{1}{\mathcal{C}(A)}. \end{align} The Grassmannian condition number is itself closely related to the so-called Renegar's condition number $\mathcal{C}_\mathcal{R}(A)$. Given a feasible matrix $A$ (in the same sense as above), it is defined as the distance from $A$ to the set of infeasible matrices, i.e. \begin{align*} \mathcal{C}_\mathcal{R}(A) = \min \set{ \norm{\Delta A} \ \vert \ A + \Delta A \text{ is infeasible} }. \end{align*} In our setting, it can be proven that \cite[Lemma 2.2]{Roulet2015Renegar} \begin{align} \label{eq:renegar} \mathcal{C}_\mathcal{R}(A) = \frac{\sigma_{\max}(A)}{\sigma_{C \to \mathbb{R}^m}(A)}. \end{align} The article \cite{Roulet2015Renegar} contains some more details and interesting results concerning the connection between $\mathcal{C}_\mathcal{R}(A)$ and the robustness properties of compressed sensing problems. In particular, they prove the following version of Lemma \ref{lem:singValMeansStab} of this article: if we assume that the noise level is below $\norm{A} \epsilon$ (which in particular is interesting when the measurement matrix $A$ only is known up to some error $\Delta A$), every solution $x^*$ of the program $\mathcal{P}_f^\epsilon$ obeys \begin{align*} \norm{x^*-x_0}_2 \leq 2 \mathcal{C}_\mathcal{R}(A) \epsilon. \end{align*} Let us end this section by noting that one can prove Lemma \ref{lem:singValAndGeomConv} by using the following inequality from \cite[Theorem 1.4]{amelunxenBurgisser2012} : \begin{align*} \mathcal{C}(A) \leq \mathcal{C}_\mathcal{R}(A) \leq \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}\mathcal{C}(A). \end{align*} Using \eqref{eq:grassmannCond} and \eqref{eq:renegar}, this can be rewritten as \begin{align*} \sin(\theta^*)\sigma_{\min}(A) \leq \sigma_{C \to \mathbb{R}^m}(A) \leq \sin(\theta^*) \sigma_{\max}(A), \end{align*} which is exactly the inequality \eqref{eq:singValAndGeomConv}. \section{When is the $ASC$ satisfied?} Having established that the $ASC$ implies stability and robustness for signal recovery using the convex program \eqref{eq:normProg}, it is of course interesting to ask for which matrices this condition is satisfied. In this section, we will first prove the maybe somewhat remarkable fact that, for many reasonable norms, the weak $NSP$-like condition \eqref{eq:geomExactCond} in fact \emph{implies} that the $ASC$ is satisfied for some $\theta>0$. However, the above reasoning only yields the \emph{existence} of a $\theta>0$ with \eqref{eq:geomConvStab}, and does not give any control of the size of $\theta$. Therefore, we will also briefly discuss the relation between already known stability conditions for compressed sensing, and that the $ASC$ can be secured with high probability using random Gaussian matrices. 
Using the concept of $\emph{Gaussian widths}$, we will argue that if one needs $m_0$ measurements to secure that $(\mathcal{P}_f)$ recovers a signal $x_0$ with high probability from noiseless measurements, we need $m_0 + O\left(\sin\left(\frac{\theta}{2}\right)d\right)$ to secure the $ASC$ for $\theta>0$. \subsection{Exact Recovery Implies Some Stability and Some Robustness.} \label{sec:ExactImpliesStable} The first result of this subsection was essentially already proven in \cite{amelunxen2014gordon}. Let us state it, and for completeness also give a proof, and then discuss its implications and limitations. \begin{theo} Let $C \subseteq \mathcal{H}$ be a \emph{closed} convex cone, and $A: \mathcal{H} \to \mathbb{R}^m$ be linear. Then if $ C \cap \ker A= \set{0}$, there exists a $\theta>0$ such that $C^{\wedge \theta} \cap \ker A = \set{0}$, i.e., the $\theta$-$ASC$ holds. \end{theo} \begin{proof} Under the assumption that $C$ and $D$ are closed, the restricted singular value $\sigma_{C \to D}(A)$ vanishes if and only if either $A C \cap D^* \neq \set{0}$ or $C \cap \ker A \neq \set{0}$ \cite[Proposition 2.2]{amelunxen2014gordon}. Since $D=\mathbb{R}^m$ in our case, $D^* = \set{0}$, and we hence by contraposition have the equivalence \begin{align*} \sigma_{C \to \mathbb{R}^m}(A) >0 \Leftrightarrow C \cap \ker A = 0. \end{align*} Since by Lemma \ref{lem:singValAndGeomConv}, $\sigma_{C \to \mathbb{R}^m}(A) >0$ is equivalent to the existence of a $\theta>0$ such that $C^{\wedge \theta} \cap \ker A = \set{0}$, the claim is proven. \end{proof} On a theoretical level, the last proposition implies that as soon as the recovery of some class of signals $\mathcal{C}$ from exact measurements with the help of a convex program is guaranteed, we also have stability and robustness for the recovery of signals close to $\mathcal{C}$ from noisy measurements. As simple and beautiful the result is, it has its flaws. In particular, we have no control whatsoever over the size of the parameter $\theta$, which in turn implies that we have no control over the constants in \eqref{eq:robustEq}. In the case that the norm $\norm{\cdot}_*$ has a unit ball which is a polytope, we can do a bit better. Although we still cannot provide any general bound on the size of $\theta$, we can prove that it will have the same size for all points lying in the same face of the unit ball. Before stating and proving the result, let us note that the assumption that the unit ball of $\norm{\cdot}_*$ is a polytope is not far-fetched. In particular, it is true for both $\ell_1$-minimization (and its many variants, i.e., also for weighted norms etc.), and for $\ell_\infty$-minimization -- or in general any atomic norm generated by a finite set of atoms $\calA$. Let us now formulate the main part of the argument in the following lemma. \begin{lem} \label{lem:boundingAngle} Let $P \subseteq \mathcal{H}$ be a closed polytope and $\mathcal{C} \subseteq P$ be a union of faces of $P$. Suppose that the linear subspace $U \subseteq \mathcal{H}$ has the property that for each $x_0 \in \cal C$, $x_0 + U$ intersects $P$ only in $x_0$. Then for each $x_0 \in \mathcal{C}$, there exists a $\mu<1$ such that \begin{align*} \forall x \in P, z \in U : \sprod{x -x_0 , z} \leq \mu \norm{x-x_0}_2\norm{z}_2. \end{align*} The size of $\mu$ is only dependent on which face $x_0$ lies in. \end{lem} Although the proof of this lemma is elementary, it is relatively long. Therefore, we postpone it to Appendix \ref{app:A}. 
Instead, we use it to prove the aforementioned result about stability and robustness for recovery using convex programs involving norms with polytope unit balls. \begin{cor} Let $A: \mathcal{H} \to \mathbb{R}^m$ be given. Suppose that $\norm{\cdot}_{*}$ is a norm whose unit ball is a polytope, and $\mathcal{C}$ be a union of faces of that polytope. If the program $(\mathcal{P}_{\norm{\cdot}_{*}})$ recovers $x_0$ from the noiseless measurements $Ax_0$ for every $x_0 \in \mathcal{C}$, all signals $\check{x}_0$ close to the cone generated by $\mathcal{C}$ will be stably and robustly recovered by $(\mathcal{P}_{\norm{\cdot}_{\ast}}^{\epsilon})$ in the sense of \eqref{eq:robustEq}. The constants $\kappa_1$ and $\kappa_2$ will only depend on which face the normalized version $\check{x}_0/ \norm{\check{x}_0}_*$ of $\check{x}_0$ lies closest to. \end{cor} \begin{proof} Since each $x_0$ in $\mathcal{C}$ is recovered exactly by $(\mathcal{P}_{\norm{\cdot}_*})$ by noiseless measurements, we will by Lemma \ref{lem:descConeKernel} have $\mathcal{D}( \norm{\cdot}_*,x_0)\cap \ker A = \set{0}$ for each $x_0 \in \cal C$. Since the descent cone of $\norm{\cdot}_*$ at $x_0$ is generated by the vectors $x -x_0$, where $x \in P = \set{y \vert \norm{y}_*\leq 1}$, the conditions of Lemma \ref{lem:boundingAngle} are satisfied. Said lemma therefore implies that $\mathcal{D}^{\wedge \theta}( \norm{\cdot}_*,x_0)\cap \ker A = \set{0}$ for $x_0 \in \mathcal{C}$, where $\theta>0$ only depends on which face $x_0$ lies in. This together with Theorem \ref{prop:robustCrit} implies the claim. \end{proof} \subsection{$ASC$ Compared to Classical Stability and Robustness Conditions in Compressed Sensing.} In the following, we will relate the $ASC$ to two well-known criteria for stability and robustness of $\ell_1$-minimization from the literature: the $RIP$ and the $RNSP$. We will begin by considering the $RIP$. It is a well-known fact that if the restricted isometry constant $\delta_{2s}$ is small, the program $(\mathcal{P}_{\norm{\cdot}_1}^\epsilon)$ will recover any $s$-sparse vector in a robust and stable manner. E.g. in \cite[Theorem 6.12]{MathIntroToCS}, it is proved that if $\delta_{2s} < 4/ \sqrt{41}$, \eqref{eq:robustEq} will be satisfied for some constants $\kappa_1$, $\kappa_2$ only dependent on $\delta_{2s}$. Having this in mind, it is of course interesting to ask oneself if it is possible to directly prove that a small $\delta_{2s}$ will imply the $ASC$ for some $\theta>0$. The next proposition gives a positive answer to that question, and it furthermore provides the control of the size of $\theta>0$ we lacked in the previous section. \begin{prop} \label{prop:RIPASC} Suppose that $\delta_s$ and $\delta_{2s}$ of the matrix $A$ satisfies \begin{align*} \frac{1}{1-\delta_s} \left(\delta_s + \sqrt{5}\sqrt{\frac{d}{s}+1} \frac{\delta_{2s}}{1+\delta_s} \right) \leq \cos(\theta). \end{align*} Then $\mathcal{D}^{\wedge \theta}(\norm{\cdot}_1, x_0) \cap \ker A = \set{0}$ for every $s$-sparse $x_0$. \end{prop} For the proof of this claim, we need two lemmata. The first one, we cite from the book \cite{MathIntroToCS}. \begin{lem} \label{lem:RIPROC} (See \cite[Proposition 6.3]{MathIntroToCS}). Let $u$ and $v$ be $s$-sparse vectors with disjoint support. Then we have \begin{align*} \abs{\sprod{Au,Av}} \leq \delta_{2s}\norm{u}_2 \norm{v}_2, \end{align*} where $\delta_{2s}$ is the $(2s)$-th restricted isometry constant of $A$. 
\end{lem} The next one is about the structure of vectors in the descent cone of the $\ell_1$-norm at a sparse vector. \begin{lem} \label{lem:normIneqL1Desc} Let $x_0 \in \mathbb{R}^d$ be supported on the set $S_0$ with $\abs{S_0}=s$, and $
u \in \mathcal{D}(\norm{\cdot}_1, x_0)$. Let furthermore $(S_i)_{i=1}^n$ be a partition of the set $\set{1, \dots, d} \backslash S_0$ with the properties \begin{align} \label{eq:monCrit} \min_{i \in S_k} \abs{u(i)} & \geq \max_{i \in S_{k+1}} \abs{u(i)} \text{ and } \abs{S_k} \leq s \end{align} for every $k=1, \dots n-1$. Then we have \begin{align*} \sum_{i=0}^n \norm{u_{S_i}}_2 < \sqrt{5}\norm{u}_2. \end{align*} \end{lem} \begin{proof} First, \eqref{eq:monCrit} implies that $\norm{u_{S_{k+1}}}_2 \leq \frac{1}{\sqrt{s}} \norm{u_{S_k}}_1$ for $k=1 , \dots n$ \cite[Lemma 6.10]{MathIntroToCS}. Therefore, we have the following estimate due to $\norm{v}_2 \leq \norm{v}_1$ for $v \in \mathbb{R}^d$: \begin{align*} \sum_{k=0}^n \norm{u_{S_k}}_2 \leq \norm{u_{S_0}}_2+ \norm{u_{S_1}}_2 +\frac{1}{\sqrt{s}}\sum_{k=2}^n \norm{u_{S_{k-1}}}_1 \leq \norm{u_{S_0}}_2 + \Vert u_{S_0^c} \Vert_2 + \frac{1}{\sqrt{s}}\Vert u_{S_0^c} \Vert_1 \end{align*} Now, since $u \in \mathcal{D}(\norm{\cdot}_1, x_0)$, we have (see Example \ref{ex:l1desc}) \begin{align*} \Vert u_{S_0^c}\Vert_1 \leq -\sum_{i \in S_0} \sgn(x_0(i)) u_i \leq \norm{u_{S_0}}_1 \leq \sqrt{s}\norm{u_{S_0}}_2, \end{align*} where the last inequality is due to the fact that $u_{S_0}$ is supported on $S_0$. This implies \begin{align*} \norm{u_{S_0}}_2 + \Vert u_{S_0^c} \Vert_2 + \frac{1}{\sqrt{s}}\Vert u_{S_0^c} \Vert_1 &\leq 2 \norm{u_{S_0}}_2 + \Vert u_{S_0^c} \Vert_2 = (2,1) \cdot ( \norm{u_{S_0}}_2 , \Vert u_{S_0^c} \Vert_2 ) \\ &\leq \sqrt{5} \sqrt{ \Vert u_{S_0} \Vert_2^2 + \Vert u_{S_0^c} \Vert_2^2}= \sqrt{5} \norm{u}_2, \end{align*} where we used the Cauchy-Schwarz inequality in the third step. \end{proof} With these two lemmas, we may prove Proposition \ref{prop:RIPASC}. \begin{proof}[Proof of Proposition \ref{prop:RIPASC}] Let $u \in \mathcal{D}(\norm{\cdot}_1, x_0)$ and $v \in \ker A$, where without loss of generality $\norm{u}_2=\norm{v}_2=1$. Our goal is to prove that necessarily $\sprod{u,v} < \cos(\theta)$. Let us begin by partitioning the set $\set{1, \dots, d} \backslash S_0$ into $n$ sets $S_1, \dots S_n$ so that $S_i \cap S_j = \emptyset$ for $i \neq j$, $\abs{S_i}= s$ for $i=1, \dots n-1$, $\abs{S_n}\leq s$, and the monotonicity criterion \eqref{eq:monCrit} is met for $u$. Then $n \leq \frac{d}{s}$, and we have \begin{align*} \sprod{u,v} = \sum_{i=0}^n \sprod{u_{S_i}, v_{S_i}} &= \frac{1}{4} \left(\sum_{i=0}^n \norm{u_{S_i} + v_{S_i}}_2^2 - \norm{u_{S_i} - v_{S_i} }_2^2\right) \\ &\leq \frac{1}{4} \left( \sum_{i=0}^n\frac{1}{1- \delta_s}\norm{Au_{S_i} + Av_{S_i}}_2^2 -\frac{1}{1+ \delta_s}\norm{Au_{S_i} - Av_{S_i}}_2^2 \right) \\ &= \frac{1}{4} \left( \sum_{i=0}^n\frac{2\delta_s}{1- \delta_s^2}(\norm{Au_{S_i}}_2^2 + \norm{Av_{S_i}}_2^2) + \frac{4}{1-\delta_s^2}\sprod{Au_{S_i},Av_{S_i}}\right) , \end{align*} where we in the second to last step used that $u_{S_i} \pm v_{S_i}$ for each $i$ is $s$-sparse, and the definition of $\delta_s$. Now since $v \in \ker A$, we have $Av_{S_i} = - \sum_{\substack{k=1, k \neq i}}^n Av_{S_k}$, and consequently, \begin{align*} \sprod{Au_{S_i},Av_{S_i}} = -\sum_{\substack{k=1 \\ k \neq i}}^n \sprod{Au_{S_i},Av_{S_k}} \leq \sum_{\substack{k=1 \\ k \neq i}}^n \delta_{2s} \norm{u_{S_i}}_2 \norm{v_{S_k}}_2 \leq \sum_{\substack{k=1}}^n \delta_{2s} \norm{u_{S_i}}_2 \norm{v_{S_k}}_2, \end{align*} where we in the second to last step applied Lemma \ref{lem:RIPROC}. 
Again using the definition of $\delta_s$, we conclude $\norm{Au_{S_i}}_2^2 \leq (1+\delta_s)\norm{u_{S_i}}_2^2$ and $\norm{Av_{S_i}}_2^2 \leq (1+\delta_s)\norm{v_{S_i}}_2^2$. Combining all of the previous estimates, we obtain \begin{align*} \sprod{u,v} \leq \sum_{i=0}^n\frac{\delta_s}{2(1- \delta_s)}(\norm{u_{S_i}}_2^2 + \norm{v_{S_i}}_2^2) + \frac{\delta_{2s}}{1-\delta_s^2}\left(\sum_{i=0}^n \norm{u_{S_i}}_2\right) \left(\sum_{i=0}^n \norm{v_{S_i}}_2\right). \end{align*} Now we use $\norm{u}_2=\norm{v}_2=1$, Lemma \ref{lem:normIneqL1Desc}, the inequality $\sum_{i=0}^n x_i \leq \sqrt{n+1}\sqrt{ \sum_{i=0}^n x_i^2}$ and $n \leq \frac{d}{s}$ to conclude \begin{align*} \sprod{u,v} < \frac{\delta_s}{1- \delta_s} +\sqrt{5}\frac{\delta_{2s}}{1-\delta_s^2}\sqrt{\frac{d}{s}+1}\leq \cos(\theta), \end{align*} which is what we wanted to prove. \end{proof} Now let us turn to another criterion for robust and stable recovery using $\ell_1$-minimization: the \emph{Robust Null-Space Property}, or $RNSP$. Let $0\leq\gamma<1$ and $\tau>0$. A matrix $A \in \mathbb{R}^{m,d}$ satisfies the $(\gamma,\tau)$-$RNSP$ with respect to the index set $T \subseteq \set{1,2, \dots, d}$ if for every $x \in \mathbb{R}^d$, we have \begin{align} \norm{x_T}_1 \leq \gamma \norm{x_{T^c}}_1 + \tau \norm{Ax}_2. \label{eq:RNSP} \end{align} In fact, it turns out that the $RNSP$ with respect to an index set $T$ is equivalent to the $ASC$ being fulfilled for any $x_0$ supported on $T$, in the sense specified by the following theorem. \begin{theo} \label{theo:RNSPvsASC} Let $A\in \mathbb{R}^{m,d}$. \begin{enumerate}[(1)] \item If the $(\gamma,\tau)$-$RNSP$ is satisfied, then the $ASC$-condition is satisfied uniformly for all $x_0$ supported on $T$, i.e. there exists a $\theta>0$ with \begin{align} \mathcal{D}^{\wedge \theta}(\norm{\cdot}_1, x_0) \cap \ker A = \set{0} \label{eq:RNSPvsASCEq} \end{align} for every $x_0$ supported on $T$. \item If there exists a $\theta>0$ such that \eqref{eq:RNSPvsASCEq} holds for every $x_0$ supported on $T$, there exist $0\leq\gamma<1$ and $\tau>0$ such that the $(\gamma,\tau)$-$RNSP$ holds with respect to $T$. \end{enumerate} \end{theo} \begin{proof} $(1)$. Due to Lemma \ref{lem:singValAndGeomConv}, it suffices to prove that $\sigma_{\mathcal{D}(\norm{\cdot}_1, x_0) \to \mathbb{R}^m}(A) \geq \sigma_0>0$ for some $\sigma_0$ that is independent of the choice of $x_0$ supported on $T$. To this end, note that the $RNSP$ implies that \begin{align*} \min_{\substack{x \in \mathcal{D}(\norm{\cdot}_1, x_0) \\ \norm{x}_2=1}} \norm{Ax}_2 \geq \frac{1}{\tau}\min_{\substack{x \in \mathcal{D}(\norm{\cdot}_1, x_0) \\ \norm{x}_2=1}} \left(\norm{x_T}_1 - \gamma \Vert x_{T^c} \Vert_1 \right) &\geq \frac{1-\gamma}{\tau} \min_{\substack{x \in \mathcal{D}(\norm{\cdot}_1, x_0) \\ \norm{x}_2=1}} \norm{x_T}_1 \\ &\geq \frac{1-\gamma}{\tau} \min_{\substack{x \in \mathcal{D}(\norm{\cdot}_1, x_0) \\ \norm{x}_2=1}} \frac{1}{2}\norm{x}_1 \geq \frac{(1-\gamma)}{2\tau}. \end{align*} We used that since $x \in \mathcal{D}(\norm{\cdot}_1,x_0)$, $\norm{x_T}_1 \geq \Vert x_{T^c} \Vert_1$, which in particular implies that $\norm{x_T}_1 \geq \frac{1}{2}\norm{x}_1$. The claim has been proven. $(2)$. Suppose that there exists a $\theta$ with \eqref{eq:RNSPvsASCEq} for every $x_0$ supported on $T$. Let us begin by arguing that this implies that there exists a $0\leq\gamma<1$ with the following property: If $\norm{x_T}_1 \geq \gamma \Vert x_{T^c} \Vert_1$, there exists an $x_0$ supported on $T$ with $x \in \mathcal{D}^{\wedge \frac{\theta}{2}}( \norm{\cdot}_1, x_0)$.
Consider an $x$ with $\norm{x_T}_1 \geq \gamma \Vert x_{T^c} \Vert_1$ (where the value of $\gamma$ is yet to be determined) and consider $x_0 := -x_T$. Then we have $\tilde{x} \in \mathcal{D}( \norm{\cdot}_1, x_0)$, where \begin{align*} \tilde{x}(i) = \begin{cases} x(i) & i \in T \\ \gamma x(i) & i \in T^c, \end{cases} \end{align*} since \begin{align*} \sum_{i \in T^c} \vert \tilde{x}(i)\vert = \gamma\sum_{i \in T^c} \vert {x}(i)\vert \leq \sum_{i \in T} \vert {x}(i)\vert = -\sum_{i \in T} \sgn x_0(i) \tilde{x}(i), \end{align*} where the inequality follows from $\Vert x_T \Vert_1 \geq \gamma\Vert x_{T^c} \Vert_1$ (see also Example \ref{ex:l1desc}). Now we have \begin{align*} \frac{\sprod{x, \tilde{x}}}{\norm{x}_2 \norm{\tilde{x}}_2} = \frac{\sum_{i \in T} x(i)^2 + \gamma \sum_{i \in T^c} x(i)^2}{\norm{x}_2 \norm{\tilde{x}}_2} = \frac{\norm{x_T}^2 + \gamma (1-\norm{x_T}^2)}{\sqrt{\norm{x_T}^2 + \gamma^2(1 - \norm{x_T}^2)}} \geq \inf_{0 \leq a \leq 1 } \frac{a + \gamma (1-a)}{\sqrt{a + \gamma^2 (1-a)}}, \end{align*} where we in the second to last step without loss of generality assumed that $\norm{x}_2=1$. Since $0 \leq a, \gamma \leq 1$, we have $\sqrt{a + \gamma^2(1-a)} \leq \sqrt{a+ \gamma(1-a)}$, and consequently $\frac{a + \gamma (1-a)}{\sqrt{a + \gamma^2 (1-a)}} \geq \sqrt{a+ \gamma(1-a)}$. Hence, we have $$\inf_{0 \leq a \leq 1 } \frac{a + \gamma (1-a)}{\sqrt{a + \gamma^2 (1-a)}} \geq \inf_{0 \leq a \leq 1 }\sqrt{a+ \gamma(1-a)} = \sqrt{\gamma} \geq \cos\left(\frac{\theta}{2}\right),$$ if we choose $\gamma$ close enough to $1$. This proves that $x \in \mathcal{D}^{\wedge \frac{\theta}{2}}( \norm{\cdot}_1, x_0)$. Now we claim that there exists a $\tau>0$ such that the $(\gamma, \tau)$-$RNSP$ with respect to $T$ is satisfied. To see this, first note that \eqref{eq:RNSP} is trivial for $x$ with $\norm{x_T}_1 \leq \gamma \Vert x_{T^c} \Vert_1$. For the other $x$, which we without loss of generality may assume to be $\ell_2$-normalized, we know by the above argument that $x \in \mathcal{D}^{\wedge \frac{\theta}{2}}( \norm{\cdot}_1, x_0)$ for some $x_0$ supported on $T$. This implies that $x \notin (\ker A)^{\wedge \frac{\theta}{2}}$, since if indeed $x \in (\ker A)^{\wedge \frac{\theta}{2}}$, there would exist unit norm vectors $y \in \ker A $ and $z \in \mathcal{D}(\norm{\cdot}_1, x_0)$ such that $\arccos \sprod{x,y} \leq \frac{\theta}{2}$ and $\arccos\sprod{x,z} \leq \frac{\theta}{2}$. Remark \ref{rem:angleMetric} would then imply that $\arccos\sprod{y, z} \leq \theta$, i.e., $\sprod{y,z} \geq \cos(\theta)$, which is a contradiction to \eqref{eq:RNSPvsASCEq}. But $x \notin (\ker A)^{\wedge \frac{\theta}{2}}$ implies that $\norm{x + \ker A}_2 \geq \sin \left(\frac{\theta}{2}\right)$, which is seen as in the proof of Theorem \ref{prop:robustCrit}; for $y \in \ker A$ arbitrary, we have \begin{align*} \norm{x-y}_2^2 \geq 1 + \norm{y}_2^2 - 2 \cos\left(\frac{\theta}{2}\right)\norm{y} = \left( \norm{y}_2 - \cos \left(\frac{\theta}{2}\right)\right)^2 +1 - \cos^2\left(\frac{\theta}{2}\right)\geq \sin^2 \left(\frac{\theta}{2}\right). \end{align*} Therefore, we have for normalized $x$ with $\norm{x_T}_1 \geq \gamma \Vert x_{T^c} \Vert_1$ \begin{align*} \norm{Ax}_2 \geq \sigma_{\min}(A) \norm{x + \ker A}_2 \geq \sigma_{\min}(A) \sin \left(\frac{\theta}{2}\right).
\end{align*} Therefore, we can choose $\tau = \sqrt{\abs{T}}\left(\sigma_{\min}(A) \sin \left(\frac{\theta}{2}\right)\right)^{-1}$, since then \begin{align*} \norm{x_{T}}_1 \leq \sqrt{\abs{T}} \norm{x}_2 \leq \sqrt{\abs{T}}\left(\sigma_{\min}(A) \sin \left(\frac{\theta}{2}\right)\right)^{-1} \norm{Ax}_2 \leq \gamma \Vert x_{T^c} \Vert_1 + \tau \norm{Ax}_2, \end{align*} i.e. \eqref{eq:RNSP} is satisfied for any $x \in \mathbb{R}^d$. \end{proof} \subsection{How Many Gaussian Measurements Are Needed to Secure the $ASC$ for a Given $\theta$?} \label{sec:GaussMeas} In this final section we will assume that the linear map $A : \mathcal{H} \to \mathbb{R}^m$ is Gaussian, and ask how large $m$ has to be such that the $\theta$-$ASC$ for some given $\theta$ is satisfied with high probability. Typically in compressed sensing, one can prove that as long as the parameter $m$ is larger than a certain threshold, depending on the dimension of $\mathcal{H}$ and the type of structured signals, a program of the form $(\mathcal{P}^\epsilon_f)$ will exhibit stability and robustness. One way of proving such results is to use Gordon's ``Escape through a mesh''-lemma \cite{Gordon1988}. This lemma relates the so-called \emph{Gaussian width} $w(T)$ to the dimension that a uniformly distributed random subspace has to have in order to miss a set $T$ contained in the unit sphere $\mathbb{S}(\mathcal{H})$ of $\cal H$ (which clearly is equivalent to missing $\cone(T)$). Let us make this idea more precise. If $g$ is a Gaussian vector in $\cal H$, the Gaussian width of $T$ is defined by \begin{align*} w(T) = \mathbb{E} \bigg( \sup_{x \in T} \sprod{x,g} \bigg). \end{align*} There are several ways to state the ``Escape through a mesh''-lemma. The following one from \cite[Th. 9.21]{MathIntroToCS} is probably the most convenient for us: let $\ell_m$ denote the expected length of a Gaussian vector in $\mathbb{R}^m$, i.e. $\ell_m = \erw{\norm{g}_2}$ for $g \in \mathbb{R}^m$ Gaussian, and $T$ a subset of $\mathbb{S}(\mathcal{H})$. Then we have \begin{align} \label{eq:meshThickEscape} \prb{ \inf_{x \in T} \norm{Ax}_2 \leq \ell_m - w(T) -t } \leq \exp\left(\frac{-t^2}{2}\right). \end{align} This particularly implies that if $\ell_m> w(T)+ 2\sqrt{\eta^{-1}}$, \begin{align} \label{eq:meshEscape} \prb{ \ker A \cap T = \emptyset} > \prb{\inf_{x \in T} \norm{Ax}_2 \geq \ell_m - w(T) -2\sqrt{\eta^{-1}} }>1-\eta. \end{align} Since $\ell_m \approx \sqrt{m}$, the estimate \eqref{eq:meshEscape} qualitatively tells us that if $m > w(T)^2$, $\ker A$ will probably miss the set $\cone(T)$. If we choose $T= \mathcal{D}(f, x_0) \cap \mathbb{S}(\mathcal{H})$, so that $\cone(T) = \mathcal{D}(f,x_0)$, we can relate this to exact recovery through the program $\mathcal{P}_f$. \eqref{eq:meshThickEscape} suggests that it is possible to directly relate the Gaussian width of the cone $C$ to the threshold number of measurements needed to establish $\sigma_{C \to \mathbb{R}^m}(A)>0$, or equivalently the $ASC$ for some $\theta>0$, with high probability. This was discussed in detail in \cite{amelunxen2014gordon,chandrasekaran2012convex}, and we refer to those articles for more information. We can however also compare the Gaussian width of $C^{\wedge \theta} \cap \mathbb{S}(\mathcal{H})$ to the one of $C \cap \mathbb{S}(\mathcal{H})$ for a convex cone $C$. This is also interesting in its own right, since Gaussian widths in general are hard to estimate. We have the following result. \begin{prop} \label{prop:gaussWidthExt} Let $C \subseteq \mathcal{H}$ be a convex cone, and $\theta \in [0, \frac{\pi}{2}]$.
Then we have \begin{align*} w(C \cap \mathbb{S}(\mathcal{H})) \leq w\left(C^{\wedge \theta} \cap \mathbb{S}(\mathcal{H}) \right) \leq \cos(\theta)w(C \cap \mathbb{S}(\mathcal{H})) + \sin(\theta) \ell_\mathcal{H} \end{align*} where $\ell_\mathcal{H}:= \erw{\norm{g}}$ is the expected length of a Gaussian vector in $\mathcal{H}$. In particular, \begin{align*} w(C\cap \mathbb{S}(\mathcal{H}))^2 \leq w\left(C^{\wedge \theta}\cap \mathbb{S}(\mathcal{H})\right)^2 \leq w(C\cap \mathbb{S}(\mathcal{H}))^2 + O\left(\sin(\theta)\dim \mathcal{H} \right). \end{align*} \end{prop} \begin{proof} Since $C \subseteq C^{\wedge \theta}$, we trivially have $w(C\cap \mathbb{S}(\mathcal{H})) \leq w( C^{\wedge \theta}\cap \mathbb{S}(\mathcal{H}))$. To prove the upper bound, notice that if a unit vector $x$ lies in $C^{\wedge \theta}$, there exists a unit vector $y \in C$ with $\sprod{x,y} \geq \cos(\theta)$, or (see Remark \ref{rem:angleMetric}) $\delta(x,y) \leq \theta$. The triangle inequality for the sphere metric $\delta$ now implies for every $g \in \mathcal{H}$ \begin{align*} \delta\left( y, \tfrac{g}{\norm{g}} \right)= \leq \delta\left( x, \tfrac{g}{\norm{g}} \right) + \delta(x,y) \leq \delta\left( x, \tfrac{g}{\norm{g}} \right)+ \theta. \end{align*} Using the monotonicity properties of $\cos$, this implies \begin{align*} \sprod{x, \frac{g}{\norm{g}}}= \cos\left(\delta\left( x, \tfrac{g}{\norm{g}} \right)\right) \leq \cos\left ( \pos\left(\delta\left( y, \tfrac{g}{\norm{g}} \right) - \theta \right)\right), \end{align*} where $\pos(t) = \max(t,0)$ denotes the positive part of a real number $t$. Using the cosine addition formula, this yields \begin{align*} \sprod{x, \tfrac{g}{\norm{g}}} &\leq \begin{cases} 1 &\text{ if } \delta\left( y, \tfrac{g}{\norm{g}} \right)<\theta \\ \sprod{y, \tfrac{g}{\norm{g}}} \cos(\theta) + \sin\left(\delta\left( y, \tfrac{g}{\norm{g}} \right)\right) \sin(\theta) & \text{ else} \end{cases} \\ &\leq \cos(\theta) \sprod{y, \tfrac{g}{\norm{g}}} + \sin(\theta), \end{align*} where the last step follows from $\sin\left(\delta\left( y, \frac{g}{\norm{g}} \right)\right)\leq 1$, $\sin(\theta) \in [0,1]$, $\cos(\theta) \geq 0$ (since $\theta \in [0, \frac{\pi}{2}]$) and \begin{align*} \cos(\theta) \sprod{y, \tfrac{g}{\norm{g}}} + \sin(\theta) \geq \cos^2(\theta) + \sin^2(\theta) =1 \end{align*} for $y$ with $\delta\left( y, \tfrac{g}{\norm{g}} \right)<\theta$. Consequently \begin{align*} w\left(C^{\wedge \theta}\cap \mathbb{S}(\mathcal{H})\right) = \erw{ \sup_{x \in C^{\wedge \theta}, \norm{x}=1} \sprod{x,g}} & \leq \erw{ \sup_{y \in C, \norm{y}=1} \cos(\theta) \sprod{y,g} + \norm{g} \sin(\theta)} \\ &= \cos(\theta)w(C\cap \mathbb{S}(\mathcal{H})) + \sin(\theta)\ell_\mathcal{H}. \end{align*} For the second inequality, we simply need to square the first one: \begin{align*} w\left(C^{\wedge \theta} \cap \mathbb{S}(\mathcal{H}) \right)^2 &\leq \cos^2(\theta)w(C \cap \mathbb{S}(\mathcal{H}))^2 + 2\cos(\theta)\sin(\theta)w(C \cap \mathbb{S}(\mathcal{H})\ell_\mathcal{H} + \sin^2(\theta) \ell_\mathcal{H}^2 \\ &\leq w(C \cap \mathbb{S}(\mathcal{H}))^2 + \left(2\cos(\theta) + \sin(\theta)\right)\sin(\theta) \ell_\mathcal{H}^2 \leq w(C \cap \mathbb{S}(\mathcal{H}))^2 + \sqrt{5}\sin(\theta)\ell_\mathcal{H}^2, \end{align*} where we utilized that $w(C\cap \mathbb{S}(\mathcal{H})) \leq \erw{\norm{g}} =\ell_\mathcal{H} = O( \sqrt{\dim \mathcal{H}} )$. 
\end{proof} We have seen that if the $\theta$-$ASC$ $\mathcal{D}^{\wedge \theta}(f,x_0) \cap \ker A = \set{0}$ holds, the program $(\mathcal{P}_f^\epsilon)$ will stably recover $x_0$. According to the above discussion, this will be the case provided $m$ is larger than $w(\mathcal{D}^{\wedge\theta}(f,x_{0}) \cap \mathbb{S}(\mathcal{H}))^{2}$. The last proposition therefore shows that in order to achieve the $ASC$ for a given $\theta$, we need $O(\sin\left(\theta\right)\dim \mathcal{H})$ more Gaussian measurements than we would need to ensure \eqref{eq:geomExactCond}. Let us end by considering an example which shows that this claim in general cannot be substantially improved. For this, we will use an alternative tool to calculate thresholds as described above: the \emph{statistical dimension $\delta(C)$ of a convex cone $C$}. It was introduced in \cite{AmelunxLotzMcCoyTropp2014}, where the authors also proved that if $m> \delta(C)$, the probability that $\ker A \cap C = \set{0}$ is very high. $\delta(C)$ is always close to $w(C \cap \mathbb{S}(\mathcal{H}))$ -- in fact, we have \cite[Prop.~10.2]{AmelunxLotzMcCoyTropp2014} \begin{align*} w(C \cap \mathbb{S}(\mathcal{H}))^2 \leq \delta(C) \leq w(C \cap \mathbb{S}(\mathcal{H}))^2 +1. \end{align*} \begin{example} Consider the circular cone $\text{Circ}_d(\alpha) \subseteq \mathbb{R}^d$, \begin{align*} \text{Circ}_d(\alpha) = \set{ x \in \mathbb{R}^d \vert x(1) \geq \cos(\alpha) \norm{x}_2}. \end{align*} According to \cite[Proposition 3.4]{AmelunxLotzMcCoyTropp2014}, the statistical dimension of $\text{Circ}_d(\alpha)$ is equal to $d\sin^2(\alpha) +O(1)$. It is furthermore clear that $\text{Circ}^{\wedge \theta}_d(\alpha) = \text{Circ}_d(\alpha+\theta)$ (in particular again a convex cone, provided $\alpha + \theta \leq \frac{\pi}{2}$). Hence, \begin{align*} \delta\left(\text{Circ}^{\wedge \theta}_d(\alpha)\right) &= d \sin^2(\alpha+\theta) + O(1) = d (\sin^2(\alpha)\cos^2(\theta) + \frac{1}{2}\sin(2\alpha)\sin(2\theta) + \cos^2(\alpha) \sin^2(\theta)) +O(1). \end{align*} If we assume that $\theta$ is small, we have $\cos^2(\theta) \approx 1$, $\sin^2(\theta) \approx 0$ and $\sin(2\theta) \approx 2\sin\left(\theta \right)$. Hence, we obtain for such $\theta$ \begin{align*} \delta\left(\text{Circ}^{\wedge \theta}_d(\alpha)\right) \approx \delta( \text{Circ}_d(\alpha)) + O\left(d\sin\left(\theta\right)\right) + O(1). \end{align*} \end{example} \subsection*{Acknowledgement} The author acknowledges support by Deutsche Forschungsgemeinschaft (DFG) Grant KU 1446/18~-~1 and Grant SPP 1798, as well as the Deutscher Akademischer Austausch Dienst (DAAD), which supported him with a scholarship for his master studies, during which the research was completed. He would also like to thank Gitta Kutyniok for the supervision of the Master Thesis leading up to this article, Martin Genzel for careful proofreading and many useful suggestions for increasing the readability of the paper, and Dae Gwan Lee for pointing out a few errors and suggesting ways to improve some statements in the paper. He also wishes to thank the anonymous reviewers, whose comments, criticisms and suggestions greatly improved the final version of this article, in particular regarding Sections \ref{sec:ASCvsConds} and \ref{sec:GaussMeas}. \bibliographystyle{abbrv}
\section{Introduction} \label{intro} Machine ethics is a newly evolving field aiming at creating machines able to compute and choose the best moral action. However, the overall aim is not only important for equipping machines with capabilities of moral reasoning, but also for helping us to better understand morality through creating and testing computational models of ethical machines that follow a set of ideal ethical principles. Since the beginning of this century there have been several attempts to implement ethical decision making in intelligent autonomous agents using different approaches. However, no fully descriptive and widely accepted model of moral judgment and decision-making exists. In this work we propose a hybrid logic-based approach for modeling ethical machines, particularly ethical chatbots. As a matter of fact, the potential of logic programming (LP) to model moral machines was envisioned by Pereira and Saptawijaya \cite{PereiraS16}. In their work the authors investigated the potential of LP for modeling different aspects of morality that appear to be amenable to computational modeling by exploiting LP features. A chatbot, or virtual assistant, is a computer program or artificially intelligent software which can simulate a conversation with a user in natural language via auditory or textual methods. Chatbots are typically used in dialogue systems for various practical purposes, including customer service or information acquisition. From a technological point of view, a chatbot only represents the natural evolution of a question answering system leveraging Natural Language Processing. Businesses are rapidly moving towards the need for chatbots and other self-service technology. Many banks and insurers, media and e-commerce companies, airlines and hotel chains, retailers, health care providers, government entities and restaurant chains have used chatbots to answer simple questions, increase customer engagement, promote their brands, and offer additional ways to order from them. However, chatbots raise many ethical questions, from privacy and data ownership to abuse and transparency. Ethics forms the foundation of how a chatbot is built and, more importantly, dictates how a bot interacts with users. How a bot behaves has the potential to influence how an organization is perceived, and unethical behavior can lead to consumer mistrust and litigation issues. Ethical chatbots can promote brand loyalty and help boost profit margins. The behavior of these machines should be guided by the company's codes of ethics and conduct. However, building such ethical chatbots is not an easy task. Codes of ethics and conduct are mostly abstract rules that lack clear directions for decision making. Customer service codes such as confidentiality, accountability, empathy, fidelity, honesty, etc. are quite difficult to apply in real-world situations, since they cover a wide range of specific cases. They are subject to interpretation and carry different meanings in different situations. It is extremely difficult to use deductive logics alone to build such ethical machines, because it is impossible for experts to define intermediate rules to cover all possible situations. For the sake of transparency and for future ethical decisions, we need detailed ethical principles in place to guide the behavior of our chatbot. To achieve this, we need to incorporate a learning technique in the design of our agent.
The approach proposed in this work combines deductive (rule-based) logic programming and inductive (learning) logic programming approaches in one framework for building our ethical agent. We use Answer Set Programming (ASP) for knowledge representation and reasoning, and Inductive Logic Programming (ILP) as a machine learning technique for learning from cases and generating the missing detailed ethical rules needed for reasoning about future similar cases. The newly learned rules are then added to the agent's knowledge base. ASP, a purely declarative non-monotonic reasoning paradigm, was chosen because ethical rules are said to be default rules, which means that they tolerate exceptions. This in fact makes non-monotonic logics, which simulate common-sense reasoning, natural candidates for formalizing different ethical conceptions. In addition, ASP has many advantages, including its expressiveness, flexibility, extensibility, ease of maintenance, and the readability of its code. Moreover, the existence of solvers that derive the consequences of different ethical principles automatically can help in the precise comparison of ethical theories, and makes it easy to validate our models in different situations. ILP was chosen as a machine learning approach because, as a logic-based machine learning technique, it supports two very important and desired aspects of implementing machine ethics in artificial agents, namely explainability and accountability. ILP is known for its explanatory power: the clauses of the generated rules can be used to formulate an explanation for the choice of certain decisions over others. Moreover, ILP also seems better suited than statistical methods to domains in which training examples are scarce, as is the case in the ethical domain. This paper is organized as follows: in Section \ref{pre} we give a short background; in Section \ref{state} we briefly review the state of the art; in Section \ref{approach} we present our approach with examples; and we conclude with future directions in Section \ref{conclude}. \section{Preliminaries} \label{pre} \subsection{Answer Set Programming} ASP is a logic programming paradigm under the answer set (or ``stable model'') semantics \cite{GelLif88}, which applies ideas of autoepistemic logic and default logic. In ASP, search problems are reduced to computing answer sets, and an answer set solver (i.e., a program for generating stable models) is used to find solutions. An answer set program is a collection of rules of the form $H\leftarrow A_{1} , \ldots , A_m, not \, A_{m+1}, \ldots, not \, A_n$, where each $A_i$ is a literal in the sense of classical logic. Intuitively, the above rule means that if $A_1, \ldots , A_m$ are true and if $A_{m+1}, \ldots , A_n$ can be safely assumed to be false, then $H$ must be true. The left-hand side and right-hand side of a rule are called its \emph{head} and \emph{body}, respectively. A rule with an empty body ($n = 0$) is called a \emph{fact}. A rule with an empty head is a \emph{constraint}, and states that the literals of the body cannot be simultaneously true in any answer set. Unlike under other semantics, a program may have several answer sets or may have no answer set. So, differently from traditional logic programming, the solutions of a problem are not obtained through substitutions of variable values in answer to a query. Rather, a program $\Pi$ describes a problem, and its answer sets represent the possible solutions.
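To make this concrete, the following minimal sketch encodes one fact and one default rule of the kind used later in Section \ref{approach} and computes the answer sets with the \texttt{clingo} Python interface; the use of \texttt{clingo} and all predicate and constant names are illustrative assumptions on our part rather than part of the system described in this paper.
\begin{verbatim}
# Minimal sketch (illustrative only): one fact and one default rule,
# solved with the clingo Python interface (https://potassco.org).
import clingo

program = """
answer(product_claim).
% default rule: an answer is unethical unless evidence for it is known
unethical(X) :- answer(X), not supported(X).
"""

ctl = clingo.Control()
ctl.add("base", [], program)      # register the program under the name "base"
ctl.ground([("base", [])])        # ground the rules
ctl.solve(on_model=lambda m: print("Answer set:", m))
\end{verbatim}
The single answer set of this program contains \texttt{unethical(product\_claim)}; adding the fact \texttt{supported(product\_claim)} would defeat the default rule and remove that conclusion, which is exactly the exception-tolerating behavior of default rules mentioned above.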
For more information about ASP and its applications the reader can refer, among many, \cite{DyoubCG18} and the references therein). \subsection{Inductive Logic Programming} ILP \cite{muggleton1991inductive} is a branch of artificial intelligence (AI) which investigates the inductive construction of logical theories from examples and background knowledge. In the general settings, we assume a set of Examples \textit{E}, positive $E^+$ and negative $E^-$, and some background knowledge \textit{B}. An ILP algorithm finds the hypothesis \textit{H} such that $B \bigcup H \models E^+$ and $B \bigcup H \not\models E^-$. The possible hypothesis space is often restricted with a language bias that is specified by a series of mode declarations \textit{M}. A mode declaration is either a head declaration \textit{modeh(r, s)} or a body declaration \textit{modeb(r, s)}, where \textit{s} is a ground literal, this scheme serves as a template for literals in the head or body of a hypothesis clause, where \textit{r} is an integer, the recall, which limits how often the scheme can be used. A scheme can contain special \textit{placemarker} terms of the form \textit{$\sharp$ type}, \textit{+type} and \textit{-type}, which stand, respectively, for ground terms, input terms and output terms of a predicate \textit{type}. Finally, it is important to mention that ILP has found applications in many areas. For more information on ILP and applications, refer, among many to \cite{MuggletonR94} and references therein. \section{State of The Art} \label{state} Moral decision-making and judgment is a complicated process involving many aspects: it is considered as a mixture of reasoning and emotions. In addition moral decision making is highly flexible, contextual and culturally diverse. Until now it is agreed upon that there is a lack of general theory to guide ethical decision making. Below we briefly review the research work done to model ethical machines using ASP and others using ILP. LP particularly ASP were used to formalize different ethical conceptions, logical representations help to make ideas clear and highlight differences between different ethical systems. In \cite{ganascia2007modelling}, the authors formalized three ethical conceptions (the Aristotelian rules, Kantian categorical imperative, and Constant's objection) using nonmonotonic logic, particularly Answer Set Programming. In \cite{BerrebyBG17}, authors modeled many ethical theories of the right, and implemented them in ASP. However, their framework assesses the permissibility of an action or a set of actions using different theories of Good and Right separately, i.e. it only permits to judge an action with respect to a single ethical principle. It doesn't handle the conflicting decisions given by different theories, i.e. it doesn't provide a final decision for the agent about what it should do as a result. Pereira and Saptawijaya have proposed the use of different logic-based features for representing diverse issues of moral facets \cite{PereiraS16}. In their formalization, the relationship between the action and its consequences is stated by the programmer rather than inferred, i.e. they are not dynamically linked. Thus, it fails to account for causality and ethical responsibility. Furthermore, because they automatically specify the ethical character of the situation outcome, one needs to write different programs for each case. This is redundant and can lead to inconsistencies. 
In \cite{CointeBB16}, authors introduced a model that can be used by the agent in order to judge the ethical dimensions of its own behavior and the behavior of others. Their model was implemented in ASP. However, the model is still based on a qualitative approach. Whereas it can define several moral valuations, there is neither a degree of desires, nor a degree of capability, nor a degree of rightfulness. Moreover, ethical principles need to be more precisely defined to capture various sets of theories suggested by philosophers. Sergot in \cite{sergot2016engineering}, provides an alternative representation to the argumentative representation of a moral dilemma case concerning a group of diabetic persons, presented in \cite{AtkinsonB06}, where the authors used value-based argumentation to solve this dilemma. According to Sergot, the argumentation framework representation doesn't work well and doesn't scale. Sergot proposal for handling this kind of dilemmas is based on Defeasible Conditional Imperatives. The proposed solution was implemented in ASP. Ethics is more complicated than following a single ethical principle. According to Ross (\cite{ross2002right}), ethical decision making involves considering several Prima Facie duties, and any single-principled ethical theory like Act Utilitarianism is sentenced to fail. ILP was used by researchers to model ethical decision making in MedEthEx \cite{AndersonAA05}, and EthEl \cite{AndersonA08}. These two systems are based on a more specific theory of prima facie duties viz., the principle of Biomedical ethics of Beauchamp and Childress \cite{beauchamp1991principles}. In these systems, the strength of each duty is measured by assigning it a weight, capturing the view that a duty may take precedence over another. Then computes, for each possible action, the weighted sum of duty satisfaction, and the right action is the one with the greatest sum. The three systems use ILP to learn the relation \textit{supersedes(A1,A2)} which says that action \textit{A1} is preferred over action \textit{A2} in an ethical dilemma involving these choices. MedEthEx is designed to give advice for dilemmas in biomedical fields, while EthEl is applied to the domain of elder
care with the main purpose of reminding a patient to take her medication, taking ethical duties into consideration. GenEth \cite{AndersonA14} is another system that makes use of ILP. GenEth has been used to codify principles in a number of domains relevant to the behavior of autonomous systems.
\begin{table}
\begin{center}
\footnotesize
\begin{tabular}{l l}
\hline
\textbf{window} w1 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
\textit{ask(customer,infoabout(productX)).} & \textit{unethical(healthy-way-to-lose-weight).}\\
\textit{answer(healthy-way-to-lose-weight).} & \\
\textit{not\_supportEvidence(healthy-way-to-lose-weight).} & \\
\textbf{Kernel Set} & \textbf{Variabilized Kernel Set} \\
unethical(healthy-way-to-lose-weight) $\leftarrow$ & K1 = unethical(V) $\leftarrow$ \\
\qquad answer(healthy-way-to-lose-weight), & \qquad answer(V),\\
\qquad not\_supportEvidence(healthy-way-to-lose-weight). & \qquad not\_supportEvidence(V).\\
\textbf{Running Hypothesis} & \textbf{Support Set} \\
$H1$ = unethical(V) $\leftarrow$ answer(V). & $H1.supp=\{K1\}$\\
\hline
\textbf{window} w2 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
ask(customer,infoabout(productY)). & not\_unethical(xxx).\\
answer(xxx). supportEvidence(xxx). & \\
\textbf{Revised Hypothesis} & \textbf{Support Set} \\
$H2$ = unethical(X1) $\leftarrow$ answer(X1), not\_supportEvidence(X1). & $H2.supp=\{K1\}$\\
\hline
\textbf{window} w3 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
ask(customer,infoabout(productZ)). & unethical(WithoutOurProductYouBecomeFat).\\
answer(WithoutOurProductYouBecomeFat). & \\
exploitEmotions(WithoutOurProductYouBecomeFat). & \\
spreadFalseBelief(WithoutOurProductYouBecomeFat). & \\
\textbf{Kernel Set} & \textbf{Variabilized Kernel Set} \\
unethical(WithoutOurProductYouBecomeFat) $\leftarrow$ & K2 = unethical(X1) $\leftarrow$ \\
\qquad answer(WithoutOurProductYouBecomeFat), & \qquad answer(X1),\\
\qquad exploitEmotions(WithoutOurProductYouBecomeFat), & \qquad exploitEmotions(X1),\\
\qquad spreadFalseBelief(WithoutOurProductYouBecomeFat). & \qquad spreadFalseBelief(X1).\\
\textbf{Running Hypothesis:} remains unchanged & \textbf{Support Set:} $H2.supp=\{K1,K2\}$ \\
\hline
\textbf{window} w4 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
ask(customer,infoabout(productW)). answer(www). & not\_unethical(www).\\
exploitEmotions(www). not\_spreadFalseBelief(www). & \\
\textbf{Revised Hypothesis} & \textbf{Support Set} \\
$H31$ = unethical(X1) $\leftarrow$ answer(X1), not\_supportEvidence(X1). & $H3.supp=\{K1,K2\}$\\
$H32$ = unethical(X1) $\leftarrow$ answer(X1), spreadFalseBelief(X1). & \\
\hline
\textbf{window} w5 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
ask(customer,infoabout(productR)). & not\_unethical(rrr).\\
answer(rrr). & \\
not\_exploitEmotions(rrr). not\_spreadFalseBelief(rrr). & \\
\textbf{Running Hypothesis:} remains unchanged & \textbf{Support Set:} $H3.supp=\{K1,K2\}$ \\
\hline
\textbf{window} w6 & \\
\hline
\textbf{Facts} & \textbf{Conclusion} \\
ask(customer,infoabout(productS)). answer(sss). & unethical(sss).\\
exploitEmotions(sss). spreadFalseBelief(sss). & \\
\textbf{Revised Hypothesis} & \textbf{Support Set} \\
$H31$ = unethical(X1) $\leftarrow$ answer(X1), not\_supportEvidence(X1). & $H3.supp=\{K1,K2\}$\\
$H32$ = unethical(X1) $\leftarrow$ answer(X1), spreadFalseBelief(X1), & \\
\qquad exploitEmotions(X1). & \\
\hline
\end{tabular}
\caption{Example: input examples and output theory.}
\end{center}
\end{table}

\section{Building the Ethical Agent: Our Approach}
\label{approach}
In this section we present our approach; the application we are considering is an online customer service chatbot. In this work we are concerned only with the ethical reasoning capabilities of our agent; other details related to the complete design of a chatbot are not handled here. The behavior of an ethical online customer service chatbot should be dictated by the codes of ethics and conduct of its company. Codes of ethics in domains such as customer service are abstract general principles that apply to a wide range of situations. They are subject to interpretation and may have different meanings in different contexts, and there are no intermediate rules that elaborate these abstract principles or explain how they apply in concrete situations. We propose an approach to generate such intermediate rules from interactions with clients through a simplified dialogue. The newly generated rules are added to our agent's knowledge base, to be used for the ethical reasoning of future cases.

Initially our agent has a very small ethical background knowledge, limited to a few ethical rules represented in ASP, such as $rule1 = \{unethical(V) \leftarrow not\_correct(V), answer(V).\}$, which says that it is unethical to give incorrect information to the customers. The missing ethical rules are learned by our agent incrementally over time through interactions with clients. During the training phase, the trainer enters a series of sentences in the form of requests and responses through the keyboard, simulating a customer service chat conversation, along with the ethical evaluation of the responses in each scenario. The first step is to convert the natural language sentences to the syntax of ASP (see e.g. \cite{PendharkarG19}). The system remembers the facts about the narratives given by the trainer and learns to form ethical evaluation rules according to the facts given in the story context (\textit{C}) and the background knowledge (\textit{B}). For learning the ethical rules (\textit{H}) needed to dictate the ethical behavior of our agent, we use the state-of-the-art ILP tool ILED \cite{KatzourisAP15}. In the test phase, the agent uses both \textit{B} and \textit{H} to respond to the client's request while avoiding unethical practices. The goal is to recognize unethical responses from combinations of the case facts.

To illustrate our approach, let us consider the following scenarios, where we want to teach our customer service chatbot that it is natural to highlight or exaggerate the best features of a product or a service, but that this practice crosses the ethical line when it comes to messaging that misleads the customer, such as marketing a product as a healthy way to lose weight when there is no significant evidence to support such a claim. Likewise, appealing to emotions is an effective way to reach customers, but it is unethical to intentionally evoke emotions like rage, fear or sadness in order to manipulate customers, and it is unethical to spread the false belief that only a certain product or service can save them. These practices violate honesty and truthfulness. To this end, the trainer provides the system with different positive and negative examples; Table 1 demonstrates the learning process. The system starts constructing a hypothesis from the first available case (c1). The generated hypothesis (rule) is added to the agent's knowledge base.
When a new case (c2) arrives, the system checks whether the new case is covered by the running hypothesis. If not, it starts the revision process to update the running hypothesis (rule) to a new rule that covers the new case (see Table 1). We consider the following mode declarations, which serve as patterns restricting the hypothesis search space: $M=\langle M_h,M_b\rangle$ with head declarations $M_h= \{unethical(V)\}$ and body declarations $M_b=\{not\_supportEvidence(V), \allowbreak spreadFalseBelief(V), exploitEmotions(V)\}$, where $V$ denotes that the arguments of the predicates are variables. In our preliminary experiment we used settings similar to those of \cite{KatzourisAP15}: we created a small set of examples, stored them in a MongoDB database, and then ran ILED to learn incrementally from this dataset.

\section{Conclusions And Future Directions}
\label{conclude}
Combining ASP with ILP for modeling ethical agents provides many advantages: it increases the reasoning capability of our agent; it promotes the adoption of a hybrid strategy for modeling ethical agents; and it allows the generation of rules with valuable expressive and explanatory power, which equips our agent with the capacity to explain the reasons behind its actions. In other words, our method supports the transparency and accountability of such models, which facilitates instilling confidence and trust in our agent. Furthermore, in our opinion and for the sake of transparency, machines' ethical behavior should be guided by explicit ethical rules determined by competent judges or ethicists, or through a consensus of ethicists. Our approach provides support for developing these ethical rules. ILP algorithms, unlike neural networks, output rules which are easily understood by people. The lack of intuitive descriptions in statistical models makes it hard for users to understand and verify the underlying rules that govern the model; moreover, statistical methods cannot produce a justification for a prediction they compute, and if the background knowledge is extended, then the entire model needs to be re-learned. ILP is particularly appropriate for tasks in which the comprehensibility of the generated knowledge is essential. Moreover, in an ill-defined domain like ethics, it is infeasible to define abstract codes in terms precise and complete enough to be able to use deductive problem solvers to apply them correctly; a combination of deductive (rule-based) and inductive (case-based) learning is needed. As future work, we would like to test our agent in a real chat scenario. Finally, as another future direction, we would like to investigate the possibility of judging ethical behavior from a series of related chat sessions.

\bibliographystyle{eptcs}
\section{Introduction}
The evaluation of multivariate risks under model uncertainty has become a central issue in several applications, ranging from hydrology and engineering to mathematical finance. In mathematical finance, this has been in part driven by the changing regulations requiring the quantification of model uncertainty in risk management; see \textit{e.g.} the latest Basel Accord. Measuring risk under uncertainty often relates to the computation of bounds on probabilities of the form $\mathbb{P}(\varphi(\mathbf X)\le \cdot)$, where $\mathbf X = (X_1,\dots,X_d)$ is an $\mathbb{R}^d$-valued random vector and $\varphi\colon\mathbb{R}^d\to\mathbb{R}$ an aggregation function. Here $\mathbf X$ can be thought of as a vector modeling $d$ risks in a portfolio and $\varphi$ as a function to aggregate these risks.

In this paper we focus on risk measurement under \textit{dependence uncertainty}, hence we assume that the marginal distributions of the constituents $X_i\sim F_i$ for $i=1,\dots,d$ are known, while the dependence structure between the components of $\mathbf{X}$ is unknown or only partially known. We then derive bounds on the distribution function of $\varphi(\mathbf X)$ using the available information on the distribution of $\mathbf X$. By inversion, the bounds on the distribution of $\varphi(\mathbf X)$ can be translated immediately into bounds on the Value-at-Risk (VaR) of $\varphi(\mathbf X)$.

A significant part of the related literature focuses on the situation where only the marginals $F_1,\dots,F_d$ are known and no information on the dependence structure of $\mathbf{X}$ is available. In this case, explicit bounds on the distribution function of the sum of two random variables, \textit{i.e.} $\varphi(\mathbf{X}) = X_1+X_2$, were derived by \citet*{makarov1981} and, for more general functions $\varphi$, by \citet*{rueschendorf1981} in the early 1980s. These results were later generalized to functions of more than two random variables, for instance by \citet*{denuit1999} for the sum and by \citet*{embrechts2003} and \citet*{embrechts2006} for more general aggregation functions; see also \citet{Cheung_Lo_2013}. These bounds, however, may fail to be sharp. Therefore, numerical schemes to compute sharp distributional bounds have become increasingly popular. The rearrangement algorithm, which was introduced by \citet*{puccetti2012} and \citet*{embrechts2013}, represents an efficient method to approximate sharp bounds on the VaR of the sum $X_1+\cdots+X_d$ under additional requirements on the marginal distributions $F_1,\dots,F_d$. However, the complete absence of information on the dependence structure typically leads to very wide bounds that are not sufficiently informative for practical applications. This calls for methods to account for additional information on the dependence structure in the computation of risk bounds.

Several analytical and numerical approaches to derive risk bounds including additional dependence information have recently been developed. Analytical bounds were derived by \citet*{embrechts2003} and \citet*{embrechts2006} for the case where a lower bound on the copula of $\mathbf{X}$ is given. Moreover, \citet*{embrechts2010} and \citet*{puccetti2012b} established bounds when the laws of some lower dimensional marginals of $\mathbf{X}$ are known. Analytical bounds that account for positive or negative dependence assumptions were presented in \citet*{embrechts2003} and \citet*{rueschendorf2005}.
\citet*{bernard2015b} derived risk bounds when an upper bound on the variance of $\varphi(\mathbf X)$ is prescribed, and presented a numerical scheme to efficiently compute these bounds. In addition, \citet*{bernard2015} considered the case where the distribution of $\mathbf X$ is known only on a subset of its domain and established a version of the rearrangement algorithm to account for this type of dependence information. A detailed account of this literature can be found in \citet*{rueschendorf2016}.

In this paper we develop alternative approaches to compute VaR bounds for aggregations of multiple risks in the presence of dependence uncertainty. After recalling several definitions and useful results in Section \ref{setting}, in Section \ref{boundsOnVar} we revisit the standard and improved standard bounds on VaR and provide a direct derivation of the improved standard bounds when $\varphi = \max$ or $\varphi = \min$. In Section \ref{prescribedMax} we develop a reduction principle to account for extreme value information, such as the distribution of partial minima or maxima of the risk vector $\mathbf X$, in the computation of risk bounds for the sum $X_1+\cdots+X_d$. The term partial maxima hereby refers to the maximum of lower dimensional marginals of $\mathbf X$, \textit{i.e.} $\max\{X_{i_1},\dots,X_{i_n}\}$ for $1\leq i_1<\cdots<i_n\leq d$, and analogously for the minimum. We thereby interpolate between the marginals-only case and the situation where the distributions of the lower-dimensional marginals of $\mathbf X$ are completely specified; \textit{cf.} \citep{embrechts2010, puccetti2012b}.

In Section \ref{boundsOnCopula} we present an approach to compute VaR bounds for general aggregation functions $\varphi$ including two different types of dependence information. First, we consider the situation where the copula $C$ of the risk vector $\mathbf X$ coincides with a reference model on a subset $\ensuremath{\mathcal{S}}$ of its domain, \textit{i.e.} it holds that $C(\ensuremath{\mathbf{x}}) = C^*(\ensuremath{\mathbf{x}})$ for all $\ensuremath{\mathbf{x}}\in\ensuremath{\mathcal{S}}$ and a reference copula $C^*$. Applying results from \citet{lux2016} and the improved standard bounds of \citet{embrechts2003} and \citet{embrechts2006}, we derive bounds on VaR using the available information on the subset $\ensuremath{\mathcal{S}}$. This relates to the \textit{trusted region} in \citet*{bernard2015}, although the methods are different. The second type of dependence information corresponds to $C$ lying in the vicinity of a reference copula $C^*$ as measured by a statistical distance $\ensuremath{\mathcal{D}}$. In this case we establish improved Fr\'echet--Hoeffding\xspace bounds on the set of all (quasi-)copulas $C$ in the $\delta$-neighborhood of the reference model $C^*$, \textit{i.e.} for all $C$ such that $\ensuremath{\mathcal{D}}(C,C^*)\leq\delta$. Our method applies to a large class of statistical distances such as the Kolmogorov--Smirnov or the Cram\'er--von Mises distances. We then use the improved standard bounds of \citep{embrechts2003, embrechts2006} in order to translate the improved Fr\'echet--Hoeffding\xspace bounds into bounds on the VaR of $\varphi(\mathbf X)$. Finally, in Section \ref{numerics} we present several applications of our results in risk measurement. The computational results show that the additional dependence information typically leads to a significant improvement of the VaR bounds when compared to the marginals-only case.
Moreover, the VaR bounds using information on the partial maxima become tighter as the confidence level increases, which is in contrast to related results in the literature, and constitutes an advantage of this methodology.

\section{Notation and preliminary results}
\label{setting}
In this section we introduce the notation and some basic results that will be used throughout this work. Let $d\geq2$ be an integer. In the following $\mathbb{I}$ denotes the unit interval $[0,1]$, while boldface letters, \textit{e.g.} $\mathbf{u}$, $\mathbf{v}$ or $\mathbf{x}$, denote vectors in $\mathbb{I}^d$ or $\mathbb{R}^d$. Moreover, $\mathbf 1$ denotes the $d$-dimensional vector with all entries equal to one, \textit{i.e.} $\mathbf 1=(1,\dots,1)$. The finite difference operator $\Delta$ for a function $f\colon\mathbb{R}^d\to\mathbb{R}$ and $a\le b\in\mathbb{R}$ is defined via
$$\Delta_{a,b}^i\ f(x_1,\dots,x_d) := f(x_1,\dots,x_{i-1},b,x_{i+1},\dots,x_d) - f(x_1,\dots,x_{i-1},a,x_{i+1},\dots,x_d).$$
\begin{definition}
A function $f\colon\mathbb{R}^d\to\mathbb{R}$ is called $d$-\textit{increasing} if for all rectangular subsets $H = (a_1,b_1]\times\cdots\times(a_d,b_d]\subset\mathbb{R}^d$ it holds that
\begin{align}\label{volume}
V_f(H) := \Delta^d_{a_d,b_d}\circ\cdots\circ\Delta^1_{a_1,b_1}\ f \geq 0.
\end{align}
We call $V_f(H)$ the $f$-\textit{volume} of $H$.
\end{definition}
\begin{definition}
A function $Q\colon\mathbb{I}^d\to\mathbb{I}$ is a $d$-\textit{quasi-copula} if the following properties hold:
\begin{enumerate}[label={$(\mathbf{QC1})$},leftmargin=!,labelwidth=\widthof{\bfseries XXXX}]
\item $Q$ satisfies, for all $i\in\{1,\dots,d\}$, the boundary conditions \label{cond:QC1}
$$Q(u_1,\dots,u_i = 0,\dots,u_d)=0 \quad\text{ and }\quad Q(1,\dots,1,u_i,1,\dots,1) = u_i.$$
\end{enumerate}
\begin{enumerate}[label={$(\mathbf{QC2})$},leftmargin=!,labelwidth=\widthof{\bfseries XXXX}]
\item $Q$ is non-decreasing in each argument. \label{cond:QC2}
\end{enumerate}
\begin{enumerate}[label={$(\mathbf{QC3})$},leftmargin=!,labelwidth=\widthof{\bfseries XXXX}]
\item $Q$ is Lipschitz continuous, \textit{i.e.} for all $\mathbf{u},\mathbf{v}\in\mathbb{I}^d$\label{cond:QC3}
$$|Q(u_1,\dots,u_d)-Q(v_1,\dots,v_d)|\leq\sum_{i=1}^d |u_i-v_i|.$$
\end{enumerate}
Moreover, $Q$ is a $d$-\textit{copula} if
\begin{enumerate}[label={$(\mathbf{QC4})$},leftmargin=!,labelwidth=\widthof{\bfseries XXXX}]
\item $Q$ is $d$-increasing. \label{cond:QC4}
\end{enumerate}
\end{definition}
We denote the set of all $d$-quasi-copulas by $\mathcal{Q}^d$ and the set of all $d$-copulas by $\mathcal{C}^d$. Obviously $\mathcal{C}^d\subset\mathcal{Q}^d$. Henceforth we will simply refer to a $d$-(quasi-)copula as a (quasi-)copula if the dimension is clear from the context.

Let $C$ be a $d$-copula and consider $d$ univariate probability distribution functions $F_1,\dots,F_d$. Then $F(x_1,\dots,x_d):=C(F_1(x_1),\dots,F_d(x_d))$, for all $\mathbf{x}\in\mathbb{R}^d$, defines a $d$-dimensional distribution function with univariate margins $F_1,\dots,F_d$. The converse also holds by Sklar's Theorem, see \citet{sklar1959}, \textit{i.e.} for each $d$-dimensional distribution function $F$ with univariate marginals $F_1,\dots,F_d$, there exists a copula $C$ such that $F(x_1,\dots,x_d) = C(F_1(x_1),\dots,F_d(x_d))$ for all $\mathbf{x}\in\mathbb{R}^d$. In this case, the copula $C$ is unique if the marginals are continuous. A simple and elegant proof of Sklar's Theorem based on the distributional transform can be found in \citet{rueschendorf2009}.
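To make the $f$-volume in \eqref{volume} and property $(\mathbf{QC4})$ concrete, the following short Python sketch (our own illustration, not part of the original exposition; all function names and numerical values are chosen freely) evaluates $V_f(H)$ by inclusion--exclusion over the vertices of $H$ and checks that the independence copula $\Pi(\mathbf u)=\prod_{i=1}^d u_i$ assigns non-negative volume to a rectangle.
\begin{verbatim}
import itertools
import numpy as np

def f_volume(f, a, b):
    # V_f(H) for H = (a_1, b_1] x ... x (a_d, b_d], computed by applying the
    # iterated finite-difference operator, i.e. inclusion-exclusion over the
    # 2^d vertices of H
    d = len(a)
    total = 0.0
    for eps in itertools.product((0, 1), repeat=d):
        vertex = [b[i] if e else a[i] for i, e in enumerate(eps)]
        sign = (-1) ** (d - sum(eps))
        total += sign * f(vertex)
    return total

# the independence copula Pi(u) = u_1 * ... * u_d is d-increasing, so every
# rectangle receives non-negative volume; here V = 0.4 * 0.6 * 0.3 = 0.072
pi = lambda u: float(np.prod(u))
print(f_volume(pi, a=[0.2, 0.3, 0.1], b=[0.6, 0.9, 0.4]))
\end{verbatim}
For a copula $C$, $V_C(H)$ is exactly the probability mass that the associated distribution with uniform marginals assigns to $H$, which is also how the survival function introduced below is obtained.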
Sklar's Theorem establishes a fundamental link between copulas and multivariate distribution functions. Thus, given a random vector we will refer to its copula, \textit{i.e.} the copula corresponding to the distribution function of this random vector. The \textit{survival function} of a $d$-copula $C$ is defined as follows:
$$\widehat{C}(u_1,\dots,u_d) := V_C([u_1,1]\times\cdots\times[u_d,1]),\quad \mathbf{u} \in \mathbb{I}^d.$$
This is illustrated for $d=3$ below:
\begin{align*}
\widehat{C}(u_1,u_2,u_3) &= 1 - C(u_1,1,1) - C(1,u_2,1) - C(1,1,u_3)\\
&\quad + C(u_1,u_2,1) + C(u_1,1,u_3) + C(1,u_2,u_3) - C(u_1,u_2,u_3).
\end{align*}
The function $\widehat{C}(1-u_1,\dots,1-u_d)$, for $\mathbf{u} \in \mathbb{I}^d$, is again a copula, namely the \textit{survival copula} of $C$; see \textit{e.g.} \citet*{georges2001}. Note that for a distribution function $F$ of a random vector $(X_1,\dots,X_d)$ with marginals $F_1,\dots,F_d$ and a corresponding copula $C$ such that $F(x_1,\dots,x_d) = C(F_1(x_1),\dots,F_d(x_d))$ it holds that
\begin{align}
\label{survivalCopulaProbability}
\mathbb{P}(X_1>x_1,\dots,X_d>x_d) = \widehat{C}(F_1(x_1),\dots,F_d(x_d)).
\end{align}
The map $\widehat{Q}$ could be defined analogously for quasi-copulas $Q$; however, the function $\widehat{Q}(1-u_1,\dots,1-u_d)$ is not necessarily again a quasi-copula. Therefore, we introduce the term \textit{quasi-survival functions} to refer to functions $\widehat{Q}:\mathbb{I}^d\to\mathbb{I}$ such that $(u_1,\dots,u_d)\mapsto\widehat{Q}(1-u_1,\dots,1-u_d)$ is again a quasi-copula. The set of $d$-quasi-survival functions is denoted by $\widehat{\mathcal{Q}}^d$.
\begin{definition}
Let $Q,Q'$ be $d$-quasi-copulas. $Q'$ is greater than $Q$ in the \textit{lower orthant order}, denoted by $Q \preceq Q'$, if $Q(\mathbf{u})\leq Q'(\mathbf{u})$ for all $\mathbf{u}\in\mathbb{I}^d$.
\end{definition}
The well-known Fr\'{e}chet--Hoeffding theorem establishes the minimal and maximal bounds on the set of quasi-copulas in the lower orthant order. In particular, for each $Q\in\mathcal{Q}^d$, it holds that
$$W_d(\mathbf{u}) := \max\Big\{0,\sum_{i=1}^d u_i - d + 1\Big\} \leq Q(\mathbf{u}) \leq \min\{u_1,\dots,u_d\} =: M_d(\mathbf{u}),$$
for all $\mathbf{u}\in\mathbb{I}^d$, \textit{i.e.} $W_d\preceq Q\preceq M_d$, where $W_d$ and $M_d$ are the lower and upper Fr\'{e}chet--Hoeffding bounds respectively. The properties of the Fr\'echet--Hoeffding\xspace bounds carry over to the set of survival copulas in a straightforward way, hence one obtains similarly for any $C\in\mathcal{C}^d$ the following bounds:
$$W_d(\mathbf 1 - \ensuremath{\mathbf{u}}) \le \widehat{C}(\ensuremath{\mathbf{u}}) \le M_d(\mathbf 1 - \ensuremath{\mathbf{u}}), \qquad\text{for all }\mathbf{u}\in\mathbb{I}^d.$$

\section{Bounds on Value-at-Risk under partial dependence information: an overview and some new results}
\label{boundsOnVar}
In this section we consider a vector of risks $\mathbf{X}=(X_1,\dots,X_d)$ and an aggregation function $\varphi\colon\mathbb{R}^d\to\mathbb{R}$, and want to quantify the risk of $\varphi(\mathbf{X})$ by means of Value-at-Risk.
This corresponds to the quantile function of $\varphi(\mathbf{X})$, \textit{i.e.} when $\varphi(\mathbf{X}) \sim F_\varphi$ then the VaR of $\varphi(\mathbf{X})$ for a certain confidence level $\alpha\in(0,1)$ is given by the quantity
$$\mathrm{VaR}_\alpha(\varphi(\mathbf{X})) = F_\varphi^{-1}(\alpha) = \inf\{x\in\mathbb{R}\colon F_\varphi(x)>\alpha\}.$$
Typical levels of $\alpha$ are close to 1 in practice, and the most commonly considered risk aggregation function $\varphi$ is the sum of the individual risks $X_1+\cdots+X_d$, while the maximum and minimum of the risks, $\max\{X_1,\dots,X_d\}$ and $\min\{X_1,\dots,X_d\}$, are also relevant choices for applications.

Once the distribution of $\varphi(\mathbf X)$ is specified, the determination of VaR amounts to a simple computation of the quantile function. If the distribution $F_\varphi$ is not known, but instead the joint law of $\mathbf{X}$ is known, then the problem reduces to the computation of the distribution of $\varphi(\mathbf{X})$ from the joint law of $\mathbf{X}$. In order to solve this problem, one can resort to numerical integration techniques or Monte Carlo methods. For the important case $\varphi(\mathbf{X}) = X_1+\cdots+X_d$, two efficient algorithms to determine the law of the aggregated risk given the joint distribution of $\mathbf{X}$ were presented by \citet{arbenz2011,arbenz2012}.

However, in many situations the distribution of $\mathbf{X}$ is not fully specified or cannot be determined with sufficient precision. In particular when $d$ is large, the limited amount of data available in most applications makes it difficult to estimate the joint law of $\mathbf{X}$ accurately. Therefore, we consider the situation of \textit{model uncertainty}, where the distribution of $\mathbf{X}$ is not fully specified. In particular, we focus on \textit{dependence uncertainty}, where one assumes that the marginal distributions $X_i\sim F_i$ are known for $i=1,\dots,d$, but the dependence structure between the constituents of $\mathbf{X}$ is either unknown or only partially known. Using Sklar's Theorem, every distribution of $\mathbf{X}$ that is consistent with the marginals $F_1,\dots,F_d$ can be expressed by means of a copula $C$ and the marginals, \textit{i.e.} if $\mathbf{X}\sim F$ then $F(x_1,\dots,x_d) = C(F_1(x_1),\dots,F_d(x_d))$. This implies that dependence uncertainty is in fact uncertainty about the copula of $\mathbf{X}$. In this case, for most functionals $\varphi$ of interest, neither the distribution of $\varphi(\mathbf{X})$ can be determined completely, nor can its risk be calculated exactly. Indeed, each model for $\mathbf{X}$ that is consistent with the available information can produce a different risk estimate. Therefore, one is interested in deriving upper and lower bounds on the risk of $\varphi(\mathbf{X})$ over the set of distributions that comply with the given information. These bounds are then considered best or worst case estimates for the VaR of $\varphi(\mathbf{X})$, given the available information about the distribution of $\mathbf{X}$. This problem has a long history and many approaches to its solution for different types of dependence uncertainty have emerged.
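As a concrete baseline for the fully specified case just described, the following Python sketch (our own illustration; the Gaussian copula, the correlation value and the Pareto margins are assumptions made only for this example) computes $\mathrm{VaR}_\alpha(X_1+\cdots+X_d)$ by Monte Carlo, i.e. as the empirical $\alpha$-quantile of simulated aggregate losses.
\begin{verbatim}
import numpy as np
from scipy import stats

def var_alpha(samples, alpha):
    # empirical VaR_alpha: the alpha-quantile of the simulated aggregate losses
    return np.quantile(samples, alpha)

# toy model with a fully specified joint law: Gaussian copula with pairwise
# correlation rho and Pareto(2) margins, F_i(x) = 1 - (1 + x)^(-2) for x >= 0
rng = np.random.default_rng(0)
d, rho, n = 3, 0.5, 10**5
cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)
z = rng.multivariate_normal(np.zeros(d), cov, size=n)
u = stats.norm.cdf(z)              # sample from the Gaussian copula
x = (1.0 - u) ** (-0.5) - 1.0      # quantile transform to Pareto(2) margins
print(var_alpha(x.sum(axis=1), 0.95))
\end{verbatim}
Under dependence uncertainty, the single number produced by such a computation is replaced, in what follows, by an interval of VaR values that are compatible with the partial information at hand.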
In the situation of complete dependence uncertainty, where only the marginals $F_1,\dots,F_d$ are known and one has no information about the copula of $\mathbf{X}$, bounds for the quantiles of the sum $X_1+\cdots+X_d$ were derived in a series of papers, starting with the results by \citet{makarov1981} and \citet{rueschendorf1982} for $d=2$, and their extensions for $d>2$ by \citet{frank1987}, \citet{denuit1999} and \citet{embrechts2003}. These bounds are in the literature referred to as \textit{standard bounds} and they are given by
\begin{align}\label{standardBounds}
\begin{split}
\max\Big\{\sup_{\mathcal{U}(s)}\Big(F^-_1(u_1)+\sum_{i=2}^d F_i(u_i)\Big)-d+1,0\Big\} &\leq \ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d<s)\\
&\qquad \leq\min\Big\{\inf_{\mathcal{U}(s)}\sum_{i=1}^d F^-_i(u_i),1\Big\},
\end{split}
\end{align}
where $\mathcal{U}(s) = \{(u_1,\dots,u_d)\in\mathbb{R}^d\colon u_1+\cdots+u_d=s\}$ and $F_i^-$ denotes the left-continuous version of $F_i$. These bounds hold for all random vectors $\mathbf{X}$ with margins $F_1,\dots,F_d$, and the corresponding bounds for the VaR of the sum $X_1+\cdots+X_d$ are given by the respective inverse functions. It was shown independently in \citep{makarov1981} and \citep{rueschendorf1982} that the bounds are sharp for $d=2$, in the sense that there exists a distribution for $\mathbf{X}$ such that the sum of its constituents attains the upper and lower bound. The standard bounds may however fail to be sharp in higher dimensions.

It turns out that the absence of dependence information mostly leads to very wide bounds on the VaR of the aggregated risks; see \textit{e.g.} \citet*{bernard2015}. Hence, there is a large spread between the upper and the lower bound, such that they are not actually informative for practical applications. In addition, a complete lack of information about the dependence structure of $\mathbf{X}$ is often unrealistic, since quantities such as correlations or the values of the distribution function of $\mathbf{X}$ at certain points can be estimated with a sufficient degree of accuracy. Therefore, the quest for methods to improve the standard bounds by including additional dependence information has turned into a thriving area of mathematical research in recent years.

\citet{embrechts2003} and \citet{embrechts2006} derived an improvement of the standard bounds that accounts for a lower bound on the copula of $\mathbf{X}$ or its survival function. This improvement is essential for the results in the present work, since it relates the problem of computing improved VaR bounds in the presence of additional dependence information to the task of improving the Fr\'echet--Hoeffding\xspace bounds on copulas. The improvement of the `classical' Fr\'echet--Hoeffding\xspace bounds by using additional, partial information on the dependence structure has attracted some attention in the literature lately, see \textit{e.g.} \citet{nelsen2006}, \citet{tankov2011} and \citet{lux2016}.

Let $\ensuremath{\mathbf{X}}\xspace$ be a random vector with marginals $F_1,\dots,F_d$ and copula $C$, let $\varphi\colon\mathbb{R}^d\to\mathbb{R}$ be non-decreasing in each coordinate, and define the functional
\[
\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s) := \int_{\mathbb{R}^d} \mathds{1}_{\{\varphi(x_1,\dots,x_d)<s\}}\ \mathrm{d} C(F_1(x_1),\dots,F_d(x_d)).
\] Let $C_0,C_1$ be copulas and consider the following quantities \begin{align*} m_{C_0,\varphi}(s) &:= \inf\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s) \colon C\in\mathcal C^d, C_0\preceq C\big\}, \\ M_{\widehat C_1,\varphi}(s) &:= \sup\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s) \colon C\in\mathcal C^d, \widehat{C}_1\preceq \widehat{C}\big\}. \end{align*} The following bounds on $m_{C_0,\varphi}, M_{\widehat C_1,\varphi}$ are known in the literature as \textit{improved standard bounds} and read as follows: \begin{align}\label{eq:ISB} \begin{split} m_{C_0,\varphi}(s) & \geq \sup_{\mathcal V^<_\varphi(s)}\ C_0\big(F_1(x_1),\dots,F_{d}(x_{d})\big) =: \underline{m}_{C_0,\varphi}(s),\\ M_{\widehat C_1,\varphi}(s) & \leq \inf_{\mathcal V^>_\varphi(s)}\ 1-\widehat{C}_1\big(F_1(x_1),\dots,F_{d}(x_{d})\big) =: \overline{M}_{\widehat C_1,\varphi}(s), \end{split} \end{align} where $\mathcal V^<_\varphi(s) = \{ (x_1,\dots,x_d)\in\mathbb{R}^d: \varphi(\ensuremath{\mathbf{x}})< s \}$ and $\mathcal V^>_\varphi(s) = \{ (x_1,\dots,x_d)\in\mathbb{R}^d: \varphi(\ensuremath{\mathbf{x}})> s \}$; see \cite{embrechts2003,embrechts2006}. A careful examination of the proof of Theorem 3.1 in \citet{embrechts2006} reveals that these results hold also when $C_0$, resp. $\widehat{C}_1$, is just increasing, resp. decreasing, in each coordinate. Hence, they hold in particular when $C_0$ is a quasi-copula and $\widehat{C}_1$ a quasi-survival function. The above bounds relate to the VaR of $\varphi(\mathbf{X})$ in the following way. \begin{remark}\label{varBoundsRemark} Let $\varphi\colon\mathbb{R}^d\to\mathbb{R}$ be increasing in each component and the copula $C$ of $\mathbf{X}$ be such that $Q_0\preceq C$ and $\widehat{Q}_1\preceq\widehat{C}$, for a quasi-copula $Q_0$ and a quasi-survival function $\widehat{Q}_1$. Then $$ \overline{M}^{-1}_{\widehat Q_1,\varphi}(\alpha) \leq \mathrm{VaR}_\alpha(\varphi(\mathbf{X})) \leq \underline{m}^{-1}_{Q_0,\varphi}(\alpha). $$ \end{remark} Besides the aggregation function $\varphi(x_1,\dots,x_d) = x_1+\cdots+x_d$, the operations $\varphi(x_1,\dots,x_d) = \max\{x_1,\dots,x_d\}$ and $\varphi(x_1,\dots,x_d) = \min\{x_1,\dots,x_d\}$ are also of particular interest in risk management, however fewer methods to handle dependence uncertainty for these operations exist; \textit{cf.} \citet{embrechts2014}. The following result establishes bounds for the minimum and maximum operations in the presence of additional information on the copula using straightforward computations, and further shows that these bounds coincide with the improved standard bounds \eqref{eq:ISB}. Analogous statements for $d=2$ in the absence of additional information on the copula $C$ can be found in \citet[Theorem~5.1]{frank1987}. \begin{proposition} \label{varBoundsMax} Let $\mathbf X$ be a random vector with copula $C$ and marginals $F_1,\dots,F_d$, and let $\underline{Q},\overline{Q}$ be quasi-copulas. Then, for $\varphi(x_1,\dots,x_d)=\max\{x_1,\dots,x_d\}$, we have that \begin{align*} m_{\underline{Q},\max}(s) &:= \inf\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s)\colon \underline{Q}\preceq C\big\} \geq \underline{Q}(F_1(s),\dots,F_d(s)) = \underline{m}_{\underline{Q},\max}(s)\\ M_{\overline{Q},\max}(s) &:= \sup\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s)\colon C\preceq \overline{Q}\big\} \leq \overline{Q}(F_1(s),\dots,F_d(s)). 
\end{align*}
Analogously, if $\widehat{\underline{Q}}$ and $\widehat{\overline{Q}}$ are quasi-survival functions then, for $\varphi(x_1,\dots,x_d)=\min\{x_1,\dots,x_d\}$, we have that
\begin{align*}
m_{\widehat{\overline{Q}},\min}(s) &:= \inf\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s)\colon \widehat{C}\preceq \widehat{\overline{Q}}\big\} \geq 1-\widehat{\overline{Q}}(F_1(s),\dots,F_d(s))\\
M_{\widehat{\underline{Q}},\min}(s) &:= \sup\big\{\ensuremath{\mathbb{P}}\xspace_C(\varphi(\mathbf{X})<s)\colon \widehat{\underline{Q}}\preceq \widehat{C}\big\} \leq 1-\widehat{\underline{Q}}(F_1(s),\dots,F_d(s)) = \overline{M}_{\widehat{\underline{Q}},\min}(s).
\end{align*}
\end{proposition}
\begin{proof}
Let $\varphi(x_1,\dots,x_d) = \max\{x_1,\dots,x_d\}$, then for any copula $C$ we have that
\begin{align*}
\ensuremath{\mathbb{P}}\xspace_C(\max\{X_1,\dots,X_d\}<s) = \ensuremath{\mathbb{P}}\xspace_C(X_1<s,\dots,X_d<s) = C(F_1(s),\dots,F_d(s)),
\end{align*}
using Sklar's Theorem for the last equality. Hence, it follows immediately that
\begin{align*}
m_{\underline{Q},\max}(s) &= \inf\big\{C(F_1(s),\dots,F_d(s))\colon \underline{Q}\preceq C\big\} \geq \underline{Q}(F_1(s),\dots,F_d(s)) \\
\text{and} \qquad\qquad \qquad M_{\overline{Q},\max}(s) &= \sup\big\{C(F_1(s),\dots,F_d(s))\colon C\preceq \overline{Q}\big\} \leq \overline{Q}(F_1(s),\dots,F_d(s)).
\end{align*}
Moreover, since
\begin{align*}
\mathcal V^<_{\max} (s) &= \{ (x_1,\dots,x_d)\in\mathbb{R}^d: \max\{x_1,\dots,x_d\}<s\}\\
&= \{ (x_1,\dots,x_d)\in\mathbb{R}^d: x_1<s,\dots,x_d<s\},
\end{align*}
we get from the improved standard bounds \eqref{eq:ISB} that
\begin{align*}
\underline{m}_{\underline{Q},\max}(s) &= \sup_{\mathcal V^<_{\max} (s)}\ \underline{Q}\big(F_1(x_1),\dots,F_{d}(x_{d})\big) = \underline{Q}(F_1(s),\dots,F_d(s)),
\end{align*}
where the last equality follows from the fact that $\underline{Q}$ is a quasi-copula, hence it is increasing in each component such that the supremum is attained at $(F_1(s),\dots,F_d(s))$.

Similarly, we have for $\varphi(x_1,\dots,x_d) = \min\{x_1,\dots,x_d\}$ and any copula $C$ that
\begin{align*}
\ensuremath{\mathbb{P}}\xspace_C(\min\{X_1,\dots,X_d\}<s) = 1-\ensuremath{\mathbb{P}}\xspace_C(\min\{X_1,\dots,X_d\}\geq s) = 1-\widehat{C}(F_1(s),\dots,F_d(s)),
\end{align*}
where the last equality follows from the definition of $\widehat{C}$. Hence it follows that
\begin{align*}
m_{\widehat{\overline{Q}},\min}(s) &= \inf\big\{1-\widehat{C}(F_1(s),\dots,F_d(s))\colon \widehat{C}\preceq \widehat{\overline{Q}}\big\} \geq 1-\widehat{\overline{Q}}(F_1(s),\dots,F_d(s))
\end{align*}
and analogously $M_{\widehat{\underline{Q}},\min}(s)\leq 1-\widehat{\underline{Q}}(F_1(s),\dots,F_d(s))$. Once again, it follows from the improved standard bounds \eqref{eq:ISB} that
\begin{equation*}
\overline M_{\widehat{\underline{Q}},\min}(s) = \inf_{\mathcal V^>_{\min}(s)}\ \big\{ 1-\widehat{\underline{Q}}\big(F_1(x_1),\dots,F_{d}(x_{d})\big) \big\} = 1-\widehat{\underline{Q}}\big(F_1(s),\dots,F_d(s)\big),
\end{equation*}
which concludes the proof.
\end{proof}

In the absence of (partial) dependence information on $\mathbf X$, the best-possible bounds on its copula or its survival function are given by the Fr\'echet--Hoeffding\xspace bounds. In this case, the improved standard bounds in \eqref{eq:ISB} reduce to the standard bounds in \eqref{standardBounds}. These will serve as a proxy in order to measure the quality of the improved VaR bounds presented in the remainder of this paper.
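To see what Proposition \ref{varBoundsMax} yields in the marginals-only case, the short Python sketch below (again our own illustration; the exponential margins are an arbitrary choice) evaluates its bounds with $\underline{Q}=W_d$ and $\overline{Q}=M_d$, i.e. with the Fr\'echet--Hoeffding\xspace bounds, giving the range of values that $\ensuremath{\mathbb{P}}\xspace(\max\{X_1,\dots,X_d\}<s)$ can attain when only the marginals are known.
\begin{verbatim}
import numpy as np
from scipy import stats

W = lambda u: max(sum(u) - len(u) + 1.0, 0.0)   # lower Frechet-Hoeffding bound W_d
M = lambda u: min(u)                            # upper Frechet-Hoeffding bound M_d

def max_cdf_bounds(marginal_cdfs, s):
    # marginals-only bounds on P(max{X_1,...,X_d} < s):
    # W_d(F_1(s),...,F_d(s)) <= P(max < s) <= M_d(F_1(s),...,F_d(s))
    u = [F(s) for F in marginal_cdfs]
    return W(u), M(u)

F = [stats.expon.cdf] * 3    # three Exp(1) margins (illustrative choice)
for s in (1.0, 2.0, 4.0):
    lo, hi = max_cdf_bounds(F, s)
    print(f"s = {s}: {lo:.4f} <= P(max < s) <= {hi:.4f}")
\end{verbatim}
By Remark \ref{varBoundsRemark}, inverting these two monotone functions of $s$ yields, in turn, an upper and a lower bound on $\mathrm{VaR}_\alpha(\max\{X_1,\dots,X_d\})$.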
Inspired by the properties of the standard bounds, two interesting lines of research have emerged. First, the fact that the standard bounds for the sum of risks are not sharp in general for $d\geq3$ calls for a different approach to obtain sharp bounds on VaR in the absence of dependence information. While generally the quest for sharp analytical bounds in this situation remains an open problem, some prominent approaches have been developed to provide solutions under additional requirements on the marginals $F_1,\dots,F_d$. For instance, sharp explicit bounds were obtained by \citet{rueschendorf1982} using a dual approach in the case of uniform marginals. This approach was later refined to obtain bounds for more general marginals in the homogeneous case, \textit{i.e.} when $F_1=\cdots=F_d$; see \citet{embrechts2006}, \citet{puccetti2012b} and the references therein for details on the so-called dual approach. Alternatively, \citet{wang2013} used the concept of joint and complete mixability to derive sharp bounds in the homogeneous case under some monotonicity requirements on the marginals. Moreover, the rearrangement algorithm of \citet{embrechts2013} provides an efficient way to approximate sharp VaR bounds in the absence of dependence information for general marginals. Although the favorable numerical properties of this algorithm have been demonstrated in many practical examples, a proof of convergence and other theoretical properties remain an open challenge.

Besides the derivation of sharp VaR bounds in the absence of dependence information, the second question of interest is how to account for additional dependence information in the computation of the bounds. Such additional information could be the knowledge of the correlation matrix of $\mathbf{X}$, or the knowledge of its copula on a subset of $\mathbb{I}^d$. In such cases, one can make use of the additional information in order to improve the standard VaR bounds. For instance, \citet{embrechts2003}, \citet{rueschendorf2005} and \citet{puccetti2012b} computed improved bounds that account for the information that $\mathbf{X}$ is positively upper or lower orthant dependent, meaning that its copula or survival copula is bounded from below by the independence copula. Moreover, in a series of papers, analytical improvements of the standard bounds were developed when some higher-dimensional marginals of $\mathbf{X}$ are known; see \citet{rueschendorf1991}, \citet{embrechts2010} and \citet{puccetti2012b}. Beyond analytical improvements, numerical bounds that include additional dependence information were established based on the rearrangement algorithm. \citet{bernard2015} refined the rearrangement algorithm in order to account for given values of the distribution of $\mathbf{X}$ on a subset of its domain. A similar algorithm was presented by \citet{bernard2015b} in order to include information about the variance of the sum $X_1+\cdots+X_d$, while numerical and analytical methods to compute risk bounds in factor models were presented by \citet{bernard2016}.

Continuing this line of research, in the following we will improve the standard bounds and account for additional types of dependence information. In the next section we derive improved VaR bounds for the sum of risks when, besides the marginals, the distributions of the minima or maxima of some subsets of the risks are known.
Such information can typically be inferred, with an appropriate degree of accuracy, using tools from extreme value theory, and is thus available in many practical applications. In Section \ref{boundsOnCopula}, we take a different approach to derive improved VaR bounds for general functionals $\varphi$ of $\mathbf{X}$ when the copula of the risk vector lies in the vicinity of a reference copula, as measured by some distance on the set of copulas. We therefore derive improved Fr\'echet--Hoeffding\xspace bounds on the copula of $\mathbf{X}$ that account for the given information. The improved Fr\'echet--Hoeffding\xspace bounds are then translated into bounds on VaR via the improved standard bounds.

\section{Improved bounds on the Value-at-Risk of the sum with known distributions of some minima and maxima}
\label{prescribedMax}
In this section we improve the standard bounds on the VaR of the sum $X_1+\cdots+X_d$ in the situation where, besides the marginal distributions, the laws of the minima and maxima of some subsets of the risks $X_1,\dots,X_d$ are known. In particular, we assume that for a system $J_1,\dots,J_m\subset\{1,\dots,d\}$ the distributions of $\max_{j\in J_n} X_j$ or $\min_{j\in J_n} X_j$ for $n=1,\dots,m$ are given. This setting can be viewed as an interpolation between the marginals-only case and the situation where the lower-dimensional marginals of the vectors $(X_j)_{j\in J_n}$ are completely specified; see also \citet{rueschendorf2017} for another work in the same spirit. The latter setting has been studied extensively in the literature, and risk bounds for aggregations of $\mathbf{X}$ given some of its lower-dimensional marginals were obtained, for instance, by \citet{rueschendorf1991}, \citet{embrechts2010} and \citet{puccetti2012b}. These bounds are based on a reduction principle that transforms the optimization problem involving higher-dimensional marginals into a standard Fr\'echet problem (\textit{i.e.} marginals-only), utilizing the extra information about the distribution of the subvector $(X_j)_{j\in J_n}$.

In practice however, it is often difficult to determine the distributions of the lower-dimensional vectors $(X_j)_{j\in J_n}$. In particular for large dimensions of the subsets, a vast amount of data is required to estimate the distribution of $(X_j)_{j\in J_n}$ with an adequate degree of accuracy. Thus, having complete information about lower-dimensional marginals of $(X_1,\dots,X_d)$ turns out to be a rather strong assumption. Therefore, methods that interpolate between this scenario and the marginals-only case are of practical interest. Based on the reduction principle of \citep{puccetti2012b}, we develop in this section a method to improve the standard bounds when instead of the distribution of $(X_j)_{j\in J_n}$, only the distribution of its maximum $\max_{j\in J_n} X_j$ or minimum $\min_{j\in J_n} X_j$ is known.

\begin{remark}
Let us point out that obtaining information about the maximum or minimum of a sequence of random variables is the central theme of extreme value theory, which provides a rich collection of methods for their estimation; see \textit{e.g.} \citet[Chapter 9]{Beirlant_etal_2004}.
\end{remark}

Let us denote by $\mathcal{I}:=\{1,\dots,d\}$ and by $\mathcal{J}:=\{1,\dots,m\}$.
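Before stating the formal result, the following Python sketch illustrates the reduction idea numerically. It is our own illustration, not the numerical procedure of Section \ref{numerics}: the choices $d=4$, $J_1=\{1,2\}$, $J_2=\{3,4\}$, the weights $\alpha_n=|J_n|=2$ and the assumption that each partial maximum $Y_n$ is the maximum of two i.i.d. Exp(1) risks (so that $G_n(y)=(1-\mathrm{e}^{-y})^2$) are made purely for the example. For this fixed admissible weight vector, a valid, though possibly non-sharp, lower bound on $\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)$ is obtained by applying the two-dimensional standard bound \eqref{standardBounds} to the reduced sum $\alpha_1 Y_1+\alpha_2 Y_2$.
\begin{verbatim}
import numpy as np

# assumed known distribution of each partial maximum Y_n = max of two
# i.i.d. Exp(1) risks: G(y) = (1 - exp(-y))^2 for y >= 0
G = lambda y: (1.0 - np.exp(-np.maximum(y, 0.0))) ** 2

def reduced_lower_bound(s, alphas=(2.0, 2.0), grid=10**4):
    # lower bound on P(X_1 + ... + X_4 <= s):
    # (i)  replace the sum by alpha_1*Y_1 + alpha_2*Y_2 with alpha_n = |J_n| = 2,
    #      which dominates X_1 + ... + X_4 pointwise;
    # (ii) apply the d = 2 standard (Makarov) lower bound to the reduced sum,
    #      where alpha_n*Y_n has distribution function u -> G(u / alpha_n)
    a1, a2 = alphas
    t = np.linspace(0.0, s, grid)          # split s = u_1 + u_2 over a grid
    vals = G(t / a1) + G((s - t) / a2) - 1.0
    return max(float(vals.max()), 0.0)

for s in (4.0, 6.0, 8.0):
    print(f"s = {s}: P(X_1 + ... + X_4 <= s) >= {reduced_lower_bound(s):.4f}")
\end{verbatim}
Taking the supremum over all admissible weight vectors, as in the theorem below, can only improve this bound; for $m>2$ the reduced problem can be treated with the standard bounds or with the rearrangement algorithm, as discussed after Theorem \ref{boundMin}.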
\begin{theorem} \label{boundMax}
Let $(X_1,\dots,X_d)$ be a random vector with marginals $F_1,\dots,F_d$, and consider a collection $\mathcal{E} = \{J_1,\dots,J_m\}$ of subsets $J_n\subset\mathcal{I}$ for $n\in\mathcal{J}$ with $\bigcup_{n\in\mathcal{J}} J_{n} = \mathcal{I}$. Denote by $G_n$ the distribution of $Y_n=\max_{j\in J_n} X_j$. Then it follows that
\begin{multline*}
\inf\big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\big\}\\
\geq \sup_{(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}} \inf\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n \in\mathcal{J}\big\} =: \underline{m}_{\mathcal{E},\max}(s),
\end{multline*}
where
$$\underline{\mathcal{A}} = \Big\{(\alpha_1,\dots,\alpha_m)\in\mathbb{R}_+^m\colon \sum_{n=1}^m \alpha_n\max_{j\in J_n} x_j \geq \sum_{i=1}^d x_i,\text{ for all } (x_1,\dots,x_d)\in\mathbb{R}^d\Big\}\neq\emptyset.$$
Moreover if $(X_1,\dots,X_d)$ is $\mathbb{R}_+^d$-valued, then
\begin{multline*}
\sup\big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\big\}\\
\leq \inf_{(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}} \sup\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n\in\mathcal{J}\big\} =: \overline{M}_{\mathcal{E},\max}(s),
\end{multline*}
where
$$\overline{\mathcal{A}} = \Big\{(\alpha_1,\dots,\alpha_m)\in\mathbb{R}_+^m\colon \sum_{n=1}^m \alpha_n\max_{j\in J_n} x_j \leq \sum_{i=1}^d x_i,\text{ for all } (x_1,\dots,x_d)\in\mathbb{R}_+^d\Big\}\neq\emptyset.$$
\end{theorem}
\begin{proof}
We first show that the lower bound $\underline{m}_{\mathcal{E},\max}$ is valid. It follows from $\bigcup_{n=1}^m J_n = \{1,\dots,d\}$ that $\underline{\mathcal{A}}\neq\emptyset$. Indeed, choosing for instance $\alpha_n=|J_n|$ we get that $\sum_{j\in J_n} x_j \leq \alpha_n \max_{j\in J_n} x_j$, for all $(x_1,\dots,x_d)\in\mathbb{R}^d$ and $n=1,\dots,m$. Hence
$$\sum_{n=1}^m \alpha_n\max_{j\in J_n} x_j \geq \sum_{n=1}^m\sum_{j\in J_n} x_j \geq \sum_{i=1}^d x_i\quad\text{for all }(x_1,\dots,x_d)\in\mathbb{R}^d.$$
Then, it follows for arbitrary $(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}$ that
$$\bigg\{\sum_{n=1}^m \alpha_n\max_{j\in J_n} X_j\leq s\bigg\} \subseteq \bigg\{\sum_{i=1}^d X_i\leq s\bigg\},$$
hence
\begin{align*}
\inf & \big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\big\} \\
&\geq \inf\bigg\{\ensuremath{\mathbb{P}}\xspace\bigg(\sum_{n=1}^m \alpha_n\max_{j\in J_n} X_j\leq s\bigg)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\bigg\}\\
&= \inf\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n\in\mathcal{J}\big\}.
\end{align*}
Now, since $(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}$ was arbitrary, it follows that the lower bound holds by taking the supremum over all elements in $\underline{\mathcal{A}}$. Likewise for the upper bound, we note that since $(X_1,\dots,X_d)$ is $\mathbb{R}^d_+$-valued, $(0,\dots,0)$ and $(1,\dots,1)$ belong to $\overline{\mathcal{A}}$, hence it is not empty.
Moreover, for arbitrary $(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}$, it follows that
$$ \bigg\{\sum_{n=1}^m \alpha_n\max_{j\in J_n} X_j\leq s\bigg\} \supseteq \bigg\{\sum_{i=1}^d X_i\leq s\bigg\}, $$
due to the fact that $(X_1,\dots,X_d)$ is non-negative and $\sum_{n=1}^m \alpha_n\max_{j\in J_n} x_j \leq \sum_{i=1}^d x_i$ for all $(x_1,\dots,x_d)\in\mathbb{R}_+^d$. Hence, we get that
\begin{align*}
\sup\big\{ & \ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\big\} \\
&\leq\sup\bigg\{\ensuremath{\mathbb{P}}\xspace\bigg(\sum_{n=1}^m \alpha_n\max_{j\in J_n} X_j\leq s\bigg)\colon X_i\sim F_i, i\in\mathcal{I}, \max_{j\in J_n} X_j \sim G_n, n\in\mathcal{J}\bigg\}\\
&=\sup\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1 Y_1+\cdots+\alpha_m Y_m\leq s)\colon Y_n\sim G_n, n\in\mathcal{J}\big\}.
\end{align*}
Since $(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}$ was arbitrary, it follows that the upper bound holds indeed.
\end{proof}
\begin{remark}\label{rem:calE}
The assumption $\bigcup_{n=1}^m J_{n} = \{1,\dots,d\}$ can always be met by adding singletons to $\mathcal{E}$, \textit{i.e.} $J_n = \{i_n\}$ for $i_n\in\{1,\dots,d\}$, since the marginal distributions of $(X_1,\dots,X_d)$ are known. However, the bounds are valid even when the marginal distributions are not known.
\end{remark}

By the same token, the following result establishes bounds on the distribution of the sum of the components of $\mathbf{X}$ when the distributions of some minima are known. The proof follows along the same lines of argumentation as the proof of Theorem \ref{boundMax}, and is therefore omitted.

\begin{theorem} \label{boundMin}
Consider the setting of Theorem \ref{boundMax} and denote by $H_n$ the distribution of $Z_n=\min_{j\in J_n} X_j$. Then it follows that
\begin{multline*}
\sup\big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \min_{j\in J_n} X_j \sim H_n, n\in\mathcal{J}\big\}\\
\leq \inf_{(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{B}}} \sup\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Z_1+\cdots+\alpha_mZ_m\leq s)\colon Z_n\sim H_n, n\in\mathcal{J}\big\} =: \overline{M}_{\mathcal{E},\min}(s),
\end{multline*}
where
$$\overline{\mathcal{B}} = \Big\{(\alpha_1,\dots,\alpha_m)\in\mathbb{R}_+^m\colon \sum_{n=1}^m \alpha_n\min_{j\in J_n} x_j \leq \sum_{i=1}^d x_i,\text{ for all } (x_1,\dots,x_d)\in\mathbb{R}^d\Big\}\neq\emptyset.$$
Moreover if $(X_1,\dots,X_d)$ is $\mathbb{R}_-^d$-valued, then
\begin{multline*}
\inf\big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_d\leq s)\colon X_i\sim F_i, i\in\mathcal{I}, \min_{j\in J_n} X_j \sim H_n, n\in\mathcal{J}\big\}\\
\geq \sup_{(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{B}}}\inf\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1 Z_1+\cdots+\alpha_m Z_m\leq s)\colon Z_n\sim H_n, n\in\mathcal{J}\big\} =: \underline{m}_{\mathcal{E},\min}(s),
\end{multline*}
where
$$\underline{\mathcal{B}} = \Big\{(\alpha_1,\dots,\alpha_m)\in\mathbb{R}_+^m\colon \sum_{n=1}^m \alpha_n\min_{j\in J_n} x_j \geq \sum_{i=1}^d x_i,\text{ for all } (x_1,\dots,x_d)\in\mathbb{R}_-^d\Big\}\neq\emptyset.$$
\end{theorem}

The computation of the bounds presented in Theorems \ref{boundMax} and \ref{boundMin} can be cumbersome for two reasons. Firstly, for fixed $(\alpha_1,\dots,\alpha_m)$ there does not exist a method to compute sharp analytical bounds on the set $\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n = 1,\dots,m\big\}$, except when $m=2$.
This problem can however be circumvented either by using the standard bounds in \eqref{standardBounds}, or numerically, by an application of the rearrangement algorithm of \citet{embrechts2013}. Using the rearrangement algorithm, we are able to approximate upper and lower bounds on the set in an efficient way. Secondly, the determination of the sets $\underline{\mathcal{A}}$, $\overline{\mathcal{A}}$ and $\underline{\mathcal{B}}$, $\overline{\mathcal{B}}$ depends on the system $J_1,\dots,J_m$ and is, in general, not straightforward. However, in Section \ref{numerics} we will demonstrate that, even for possibly non-optimal elements in $\underline{\mathcal{A}}$, $\overline{\mathcal{A}}$ and $\underline{\mathcal{B}}$, $\overline{\mathcal{B}}$, the bounds in Theorems \ref{boundMax} and \ref{boundMin} yield a significant improvement over the standard bounds.

\section{Improved Fr\'echet--Hoeffding\xspace bounds using a reference copula}
\label{boundsOnCopula}
In this section we present an alternative approach for improving the standard bounds on the Value-at-Risk, for general aggregation functions $\varphi$ and different types of additional dependence information on the risk vector $\mathbf X$. Our approach is based on improvements of the Fr\'echet--Hoeffding\xspace bounds on copulas that account for additional dependence information. The improved Fr\'echet--Hoeffding\xspace bounds are then used to derive sharper bounds on VaR in conjunction with the improved standard bounds \eqref{eq:ISB}.

We focus on two types of additional dependence information. Firstly, we consider the situation where the copula $C$ of the risk vector $\mathbf X$ coincides with a reference model on a compact subset $\ensuremath{\mathcal{S}}$ of its domain, \textit{i.e.} it holds that $C(\ensuremath{\mathbf{x}}) = C^*(\ensuremath{\mathbf{x}})$ for all $\ensuremath{\mathbf{x}}\in\ensuremath{\mathcal{S}}$ and a reference copula $C^*$. In practice, the set $\ensuremath{\mathcal{S}}$ may correspond to a region in $\mathbb{I}^d$ that contains enough observations to estimate the copula $C$ with sufficient accuracy, so that we can assume that $C$ is known on $\ensuremath{\mathcal{S}}$. \citet{bernard2015} call such a subset a \textit{trusted region} and present several techniques and criteria to select such regions when estimating copulas. If $\ensuremath{\mathcal{S}}$ is not equal to the entire domain of the copula, then dependence uncertainty stems from the fact that $C$ remains unknown on $\mathbb{I}^d\setminus\ensuremath{\mathcal{S}}$. In order to obtain VaR bounds in this situation, we use results from \citet{lux2016} who established improved Fr\'echet--Hoeffding\xspace bounds on the set of copulas with prescribed values on a compact set.

Secondly, we present a new improvement of the Fr\'echet--Hoeffding\xspace bounds when the copula $C$ is assumed to lie in the vicinity of a reference model as measured by a statistical distance. More formally, we establish bounds on the set of all (quasi-)copulas $C$ in the $\delta$-neighborhood of the reference copula $C^*$, \textit{i.e.} such that $\ensuremath{\mathcal{D}}(C,C^*)\leq\delta$ for a distance $\ensuremath{\mathcal{D}}$. Our method applies to a large class of statistical distances such as the Cram\'er--von Mises or the $L^p$ distances. Such situations arise naturally in practice when one tries to estimate a copula from, or calibrate it to, empirical data.
The estimation typically involves the minimization of a distance to the empirical copula over a parametric family of copulas, \textit{i.e.} $\ensuremath{\mathcal{D}}(C_\theta,C^*)\to \min_\theta$ where $C^*$ is an empirical copula and $(C_\theta)_\theta$ is a family of parametric copulas. This is in the literature often referred to as \textit{minimal distance} or \textit{minimal contrast} estimation. \citet{kole2007}, for instance, present several distance-based techniques for selecting copulas in risk management. These estimation procedures lend themselves immediately to the methodology we propose, as typically one arrives at $\delta:=\min_\theta\ensuremath{\mathcal{D}}(C_\theta,C^*)>0$, due to the fact that the family of models $(C_\theta)_\theta$ is not able to match the empirical observations exactly, so that dependence uncertainty remains. In this case, $\delta$ can be viewed as the inevitable model risk due to the choice of the parametric family $(C_\theta)_\theta$. Our method can then be used to account for such types of dependence uncertainty in the computation of VaR.

Approaches to compute robust risk estimates over a class of models that lie in the proximity of a reference model have been proposed earlier in the literature. \citet{glasserman2013} derive robust bounds on the portfolio variance, the conditional VaR and the CVA over the class of models within a relative entropy distance of a reference model. \citet{barrieu2015} establish bounds on the VaR of a univariate random variable given that its distribution is close to a reference distribution in the sense of the Kolmogorov--Smirnov or L\'evy distance. In a multivariate setting, \citet{blanchet2016} use an optimal transport approach to derive robust bounds on risk estimates, such as ruin probabilities, over models that are in a neighborhood of a reference model in terms of the Wasserstein distance. This brief overview is, of course, incomplete and we refer the reader to the references in each of the aforementioned articles for a more detailed review of the associated literature.

Let us now consider the setting where, apart from the marginal distributions, partial information on the dependence structure of the random vector $\mathbf X$ is available. In particular, assume that the copula is known on some subset $\mathcal S$ of $\mathbb{I}^d$. Theorem 3.1 in \citep{lux2016} establishes sharp bounds on the set
\begin{align*}
\mathcal{Q}^{\ensuremath{\mathcal{S}},Q^*} := \big\{Q\in\mathcal{Q}^d\colon Q(\mathbf{x}) = Q^*(\mathbf{x}) \text{ for all } \mathbf{x}\in \ensuremath{\mathcal{S}}\big\},
\end{align*}
where $\ensuremath{\mathcal{S}}\subset\mathbb{I}^d$ is compact and $Q^*$ is a $d$-quasi-copula.
The bounds are provided by
\begin{align} \label{bounds}
\begin{split}
\underline{Q}^{\ensuremath{\mathcal{S}},Q^*}(\mathbf{u}) :=& \min\big\{Q(\ensuremath{\mathbf{u}})\colon Q(\ensuremath{\mathbf{x}}) = Q^*(\ensuremath{\mathbf{x}}) \text{ for all } \ensuremath{\mathbf{x}}\in\ensuremath{\mathcal{S}}\big\}\\
=& \max\Big(0, \sum_{i=1}^d u_i-d+1,\max_{\mathbf{x}\in \ensuremath{\mathcal{S}}} \Big\{Q^*(\mathbf{x})-\sum_{i=1}^d (x_i-u_i)^+\Big\}\Big),\\
\overline{Q}^{\ensuremath{\mathcal{S}},Q^*}(\mathbf{u}) :=& \max\big\{Q(\ensuremath{\mathbf{u}})\colon Q(\ensuremath{\mathbf{x}}) = Q^*(\ensuremath{\mathbf{x}}) \text{ for all } \ensuremath{\mathbf{x}}\in\ensuremath{\mathcal{S}}\big\}\\
=& \min\Big(u_1,\dots,u_d,\min_{\mathbf{x}\in \ensuremath{\mathcal{S}}} \Big\{Q^*(\mathbf{x})+\sum_{i=1}^d (u_i-x_i)^+\Big\}\Big),
\end{split}
\end{align}
for all $\ensuremath{\mathbf{u}}\in\mathbb{I}^d$, and they are quasi-copulas, hence they belong to $\mathcal{Q}^{\ensuremath{\mathcal{S}},Q^*}$. Let us point out that a similar version of these bounds was presented recently by \citet{puccetti2016}; they were derived independently in the master thesis of the third-named author.
\begin{remark}
By slightly abusing notation, we will sometimes write $\underline{Q}^{\{\ensuremath{\mathbf{u}}\},\alpha}$ and $\overline{Q}^{\{\ensuremath{\mathbf{u}}\},\alpha}$ with $\alpha\in [W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$ instead of a quasi-copula function $Q^*$, and mean that $Q^*(\ensuremath{\mathbf{u}})=\alpha$.
\end{remark}
The bounds in \eqref{bounds} hold also for sets of copulas, \textit{i.e.} for each copula $C$ in
$$\mathcal{C}^{\ensuremath{\mathcal{S}},Q^*} := \big\{C\in\mathcal{C}^d\colon C(\mathbf{x}) = Q^*(\mathbf{x}) \text{ for all } \mathbf{x}\in \ensuremath{\mathcal{S}}\big\}$$
it holds that $\underline{Q}^{\ensuremath{\mathcal{S}},Q^*}\lo C\lo\overline{Q}^{\ensuremath{\mathcal{S}},Q^*}$, assuming that $\mathcal{C}^{\ensuremath{\mathcal{S}},Q^*}$ is not empty. Moreover, Proposition A.1 in \citep{lux2016} provides analogous bounds on survival functions, \textit{i.e.} for a reference copula $C^*$ and any copula $C$ in
$$\widehat{\mathcal{C}}^{\ensuremath{\mathcal{S}},C^*} := \big\{C\in\mathcal{C}^d\colon \widehat{C}(\mathbf{x}) = \widehat{C}^*(\mathbf{x}) \text{ for all } \mathbf{x}\in\ensuremath{\mathcal{S}}\big\}$$
it holds that $\widehat{\underline{Q}}^{\ensuremath{\mathcal{S}},C^*} \lo \widehat{C}\lo \widehat{\overline{Q}}^{\ensuremath{\mathcal{S}},C^*}$, where
\begin{align} \label{survivalBounds}
\widehat{\underline{Q}}^{\ensuremath{\mathcal{S}},C^*}(\mathbf{u}) := \underline{Q}^{\widehat{\ensuremath{\mathcal{S}}},\widehat{C}^*}(\mathbf 1-\ensuremath{\mathbf{u}}) \quad\text{and}\quad \widehat{\overline{Q}}^{\ensuremath{\mathcal{S}},C^*}(\mathbf{u}) := \overline{Q}^{\widehat{\ensuremath{\mathcal{S}}},\widehat{C}^*}(\mathbf 1-\ensuremath{\mathbf{u}}),
\end{align}
while $\widehat{\ensuremath{\mathcal{S}}} = \{(1-x_1,\dots,1-x_d)\colon (x_1,\dots,x_d)\in \ensuremath{\mathcal{S}}\}$. In the case $d=2$, the above bounds correspond to the improved Fr\'echet--Hoeffding\xspace bounds derived by \citet{tankov2011}. He showed that the bounds are themselves copulas under certain constraints on the set $\ensuremath{\mathcal{S}}$, and these constraints were later relaxed by \citet{bernard2012}.
For instance, if $Q^*$ is a 2-copula and $\ensuremath{\mathcal{S}}$ a rectangle, \textit{i.e.} $\ensuremath{\mathcal{S}} = \{(x_1,x_2)\colon a_1\leq x_1\leq b_1, a_2\leq x_2\leq b_2\}$ then $\underline{Q}^{\ensuremath{\mathcal{S}},Q^*}$ and $\overline{Q}^{\ensuremath{\mathcal{S}},Q^*}$ are 2-copulas. In contrast, \citet{lux2016} showed that for $d>2$ the bounds $\underline{Q}^{\ensuremath{\mathcal{S}},Q^*}$ and $\overline{Q}^{\ensuremath{\mathcal{S}},Q^*}$ are copulas only in degenerate cases, and quasi-copulas otherwise. In the following we will establish improved Fr\'echet--Hoeffding\xspace bounds using a different type of additional dependence information. Namely, we consider the set of copulas that are close to a reference copula in the sense of a statistical distance as defined below. Let us first define the minimal and maximal convolution between two quasi-copulas $Q,Q'$ as the pointwise minimum and maximum between them, \textit{i.e.} $(Q\wedge Q')(\ensuremath{\mathbf{u}}) = Q(\ensuremath{\mathbf{u}}) \wedge Q'(\ensuremath{\mathbf{u}})$ and $(Q\vee Q')(\ensuremath{\mathbf{u}}) = Q(\ensuremath{\mathbf{u}}) \vee Q'(\ensuremath{\mathbf{u}})$. \begin{definition} A function $\ensuremath{\mathcal{D}}\colon\mathcal{Q}^d\times\mathcal{Q}^d\to\mathbb{R}_+$ is called a \emph{statistical distance} if for $Q,Q'\in\mathcal{Q}^d$ $$\ensuremath{\mathcal{D}}(Q,Q') = 0\quad\Longleftrightarrow\quad Q(\ensuremath{\mathbf{u}})=Q'(\ensuremath{\mathbf{u}})\quad \text{for all }\ensuremath{\mathbf{u}}\in\mathbb{I}^d.$$ \end{definition} \begin{definition} A statistical distance $\ensuremath{\mathcal{D}}$ is \emph{monotonic} with respect to the order $\preceq$ on $\mathcal{Q}^d$, if for $Q,Q',Q''\in\mathcal{Q}^d$ it holds \begin{align*} Q\preceq Q'\preceq Q'' \quad\Longrightarrow\quad \ensuremath{\mathcal{D}}(Q',Q'')\leq\ensuremath{\mathcal{D}}(Q,Q'') \ \text{ and } \ \ensuremath{\mathcal{D}}(Q'',Q')\leq\ensuremath{\mathcal{D}}(Q'',Q). \end{align*} A statistical distance $\ensuremath{\mathcal{D}}$ is \emph{min-} resp. \emph{max-stable} if for $Q,Q'\in\mathcal{Q}^d$ it holds \begin{align*} \ensuremath{\mathcal{D}}(Q,Q') & \geq \max\{\ensuremath{\mathcal{D}}({Q\wedge Q'},Q), \ensuremath{\mathcal{D}}(Q,{Q\wedge Q'})\} \\ \ensuremath{\mathcal{D}}(Q,Q') & \geq \max\{\ensuremath{\mathcal{D}}({Q\vee Q'},Q), \ensuremath{\mathcal{D}}(Q,{Q\vee Q'})\}. \end{align*} \end{definition} The following Theorem establishes pointwise bounds on the set of quasi-copulas that are in the $\delta$-vicinity of a reference copula $C^*$ as measured by a statistical distance $\ensuremath{\mathcal{D}}$. \begin{theorem} \label{prescribedDistance} Let $C^*$ be a $d$-copula and $\ensuremath{\mathcal{D}}$ be a statistical distance which is continuous with respect to the pointwise convergence of quasi-copulas, monotonic with respect to the lower orthant order and min/max-stable. Consider the set $$\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta} := \big\{ Q\in\mathcal{Q}^d\colon \ensuremath{\mathcal{D}}(Q,C^*) \leq \delta \big\}$$ for $\delta\in\mathbb{R}_+$. 
Then \begin{align*} \underline{Q}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}}) &:=\min\Big\{\alpha \in \mathbb S(\ensuremath{\mathbf{u}}) \colon \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big) \leq \delta\Big\} = \min\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\},\\ \overline{Q}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}}) &:=\max\Big\{\alpha \in \mathbb S(\ensuremath{\mathbf{u}}) \colon \ensuremath{\mathcal{D}}\Big({\downQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\vee\refC},\refC\Big) \leq \delta\Big\} = \max\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\}, \end{align*} where $\mathbb S(\ensuremath{\mathbf{u}}) := [W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$, and both bounds are quasi-copulas. \end{theorem} \begin{proof} We show that the statement holds for the lower bound, while the proof for the upper bound follows along the same lines. Fix an $\alpha\in[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$ and a $\ensuremath{\mathbf{u}}\in\mathbb I^d$, then the map $v\mapsto\big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC}\big)(v)$ is a quasi-copula; this follows by straightforward calculations using the definition of the minimal convolution, see also \citet[Theorem 2.1]{Rodriguez_Ubeda_2004}. By definition, $\ensuremath{\mathcal{D}}$ is monotonic with respect to the lower orthant order, thus it follows for $\underline{\alpha},\overline{\alpha} \in [W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$ with $\underline{\alpha}<\overline{\alpha}$ that $$\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\overline{\alpha}}\wedge\refC},\refC\Big) \leq \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\underline{\alpha}}\wedge\refC},\refC\Big),$$ due to the fact that $\upQ^{\{\ensuremath{\mathbf{u}}\},\underline{\alpha}} \lo \upQ^{\{\ensuremath{\mathbf{u}}\},\overline{\alpha}}$, which readily implies $$\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\underline{\alpha}}\wedge\refC}\Big)\lo \Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\overline{\alpha}}\wedge\refC}\Big) \lo \refC.$$ Hence, the map $$[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]\ni\alpha\mapsto\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big)$$ is decreasing. Moreover, as a consequence of the Arzel\`{a}--Ascoli Theorem, it follows that for every sequence $(\alpha_n)_n\subset[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$ with $\alpha_n\to\alpha$, $$\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha_n}\wedge\refC}\Big) \xrightarrow[n\to\infty]{} \Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC}\Big)$$ uniformly and, since $\ensuremath{\mathcal{D}}$ is continuous with respect to the pointwise convergence of quasi-copulas, it follows that $\alpha\mapsto\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big)$ is continuous. In addition, we have that \begin{align} \label{prescribedDistanceEq1} \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},M_d}\wedge\refC},\refC\Big) =\ensuremath{\mathcal{D}}\Big({M_d\wedge\refC},\refC\Big) = \ensuremath{\mathcal{D}}\Big(\refC,\refC\Big) =0, \end{align} due to the fact that $C^*\lo M_d$. We now distinguish between two cases: \quad ($i$) Let $\delta\leq\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},W_d}\wedge\refC},\refC\Big)$. 
Then, due to the monotonicity and continuity of the map $[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]\ni\alpha\mapsto\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big)$ and \eqref{prescribedDistanceEq1} it holds that the set $$\mathcal{O} :=\Big\{\alpha\colon \ensuremath{\mathcal{D}}\Big(\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC,\refC\Big) = \delta\Big\}$$ is non-empty and compact. Define $\alpha^* := \min\{\alpha\colon\alpha\in\mathcal{O}\}$. We will show that $\min\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\}=\alpha^*$. On the one hand, it holds that $\min\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\}\leq\alpha^*$. Indeed, consider ${\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha^*}\wedge\refC}$ which is a quasi-copula and belongs to $\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$ since $\alpha^*\in\mathcal{O}$. Then, we have that $$\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha^*}\wedge\refC}\Big)(\ensuremath{\mathbf{u}}) = \min\{\alpha^*,C^*(u)\} = \alpha^*,$$ using again that $\alpha^*\in\mathcal{O}$ and \eqref{prescribedDistanceEq1}. Hence the inequality holds. On the other hand, we will show now that the inequality cannot be strict by contradiction. Assume there exists a quasi-copula $Q'\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$ with $Q'(\ensuremath{\mathbf{u}})<\alpha^*$. Then it follows that \begin{align} \begin{split} \label{prescribedDistanceEq2} \ensuremath{\mathcal{D}}(Q',C^*) & \geq \ensuremath{\mathcal{D}}\big({Q'\wedge\refC},C^*\big) \geq \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},Q'}\wedge\refC},C^*\Big)\\ & \geq \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha^*}\wedge\refC},C^*\Big) = \delta, \end{split} \end{align} where the first inequality follows from the min-stability of $\ensuremath{\mathcal{D}}$, and the second and third ones from its monotonicity properties. However, since $Q'(\ensuremath{\mathbf{u}})\notin\mathcal{O}$ it follows that $ \ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},Q'}\wedge\refC},C^*\Big)\neq\delta$, hence \eqref{prescribedDistanceEq2} yields that $\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},Q'}\wedge\refC},C^*\Big)>\delta$. This contradicts the assumption that $Q'\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$, showing that indeed $\min\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\}=\alpha^*$. Hence, the lower bound holds for $\delta\leq\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},W_d}\wedge\refC},\refC\Big)$. \quad ($ii$) Now, let $\delta>\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},W_d}\wedge\refC},\refC\Big)$, then it follows that $$\min\Big\{\alpha\in[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]\colon\ensuremath{\mathcal{D}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big) \leq \delta\Big\} = W_d(\ensuremath{\mathbf{u}}).$$ Moreover, since $\Big(\upQ^{\{\ensuremath{\mathbf{u}}\},W_d}\wedge\refC\Big)\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$ and every element in $\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$ is bounded from below by $W_d$, it follows that $\min\big\{Q(\ensuremath{\mathbf{u}})\colon Q\in\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}\big\} = W_d(\ensuremath{\mathbf{u}})$. Hence, the lower bound holds in this case as well. 
Finally, it follows again from \citep[Theorem 2.1]{Rodriguez_Ubeda_2004} that the bounds are quasi-copulas, which completes the proof. \end{proof} \begin{remark} Let $C^*$ and $\ensuremath{\mathcal{D}}$ be as in Theorem \ref{prescribedDistance}, and consider $\delta\in\mathbb{R}_+$. Then, the bounds $\underline{Q}^{\ensuremath{\mathcal{D}},\delta}$ and $\overline{Q}^{\ensuremath{\mathcal{D}},\delta}$ also apply to the set of copulas $\mathcal{C}^{\ensuremath{\mathcal{D}},\delta} := \{C\in\mathcal{C}^d\colon \ensuremath{\mathcal{D}}(C,C^*) \leq \delta\}$, assuming that $\mathcal{C}^{\ensuremath{\mathcal{D}},\delta}\neq\emptyset$, that is \begin{align} \label{boundsDeltaCopulas} \underline{Q}^{\ensuremath{\mathcal{D}},\delta} \lo C \lo \overline{Q}^{\ensuremath{\mathcal{D}},\delta}, \end{align} for all $C\in\mathcal{C}^{\ensuremath{\mathcal{D}},\delta}$, due to the fact that $\mathcal{C}^{\ensuremath{\mathcal{D}},\delta}\subseteq\mathcal{Q}^{\ensuremath{\mathcal{D}},\delta}$. \end{remark} \begin{remark} If $\ensuremath{\mathcal{D}}$ is not symmetric, the set $\{Q\in\mathcal{Q}^d\colon \ensuremath{\mathcal{D}}(Q,C^*) \leq \delta\}$ might not coincide with the set $\{Q\in\mathcal{Q}^d\colon \ensuremath{\mathcal{D}}(C^*,Q) \leq \delta\}$. In this case the bounds on $\{Q\in
\mathcal{Q}^d\colon \ensuremath{\mathcal{D}}(C^*,Q) \leq \delta\}$ are provided by \begin{align*} &\underline{Q}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}})=\min\Big\{\alpha\in[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]\colon \ensuremath{\mathcal{D}}\Big(\refC,\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC\Big) \leq \delta\Big\},\\ &\overline{Q}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}})=\max\Big\{\alpha\in[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]\colon \ensuremath{\mathcal{D}}\Big(\refC,\downQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\vee\refC\Big)\leq \delta\Big\}. \end{align*} \end{remark} Many well-known statistical distances satisfy the requirements of Theorem \ref{prescribedDistance}. Typical examples are the Kolmogorov--Smirnov and the Cram\'er--von Mises distances, where \begin{align*} \ensuremath{\mathcal{D}}_{\text{KS}}(Q,Q') := \sup_{\ensuremath{\mathbf{u}}\in\mathbb{I}^d}|Q(\ensuremath{\mathbf{u}})-Q'(\ensuremath{\mathbf{u}})| \quad \text{ and } \quad \ensuremath{\mathcal{D}}_{\text{CM}}(Q,Q') := \int\nolimits_{\mathbb{I}^d} |Q(\ensuremath{\mathbf{u}})-Q'(\ensuremath{\mathbf{u}})|^2 \mathrm{d}\ensuremath{\mathbf{u}}. \end{align*} The same holds for all $L^p$ distances with $p\geq 1$, where $$\ensuremath{\mathcal{D}}_{L^p}(Q,Q') := \Big(\int\nolimits_{\mathbb{I}^d} |Q(\ensuremath{\mathbf{u}})-Q'(\ensuremath{\mathbf{u}})|^p \mathrm{d}\ensuremath{\mathbf{u}}\Big)^{\frac{1}{p}}.$$ Distances with these properties are of particular interest in the theory of minimum distance and minimum contrast estimation, where---as opposed to maximum likelihood methods---parameters of distributions are estimated based on a statistical distance between the empirical and the estimated distribution. These estimators have favorable properties in terms of efficiency and robustness; \textit{cf.} \citet[Chapter 2.8]{spokoiny2015}. The computation of the bounds $\underline{Q}^{\ensuremath{\mathcal{D}},\delta}$ and $\overline{Q}^{\ensuremath{\mathcal{D}},\delta}$ in Theorem \ref{prescribedDistance} involves the solution of optimization problems, which can be computationally intricate depending on the distance $\ensuremath{\mathcal{D}}$. An explicit representation of the bounds is thus highly valuable for applications. The following result shows that in the particular case of the Kolmogorov--Smirnov distance the bounds can be computed explicitly. \begin{lemma} \label{explicitBounds} Let $C^*$ be a $d$-copula, $\delta \in \mathbb{R}_+$, and consider the Kolmogorov--Smirnov distance $\ensuremath{\mathcal{D}}_{\emph{KS}}$. Then \begin{align*} \underline{Q}^{\ensuremath{\mathcal{D}}_{\emph{KS}},\delta}(\ensuremath{\mathbf{u}}) = \max\big\{C^*(\ensuremath{\mathbf{u}})-\delta,W_d(\ensuremath{\mathbf{u}})\big\} \quad \text{ and } \quad \overline{Q}^{\ensuremath{\mathcal{D}}_{\emph{KS}},\delta}(\ensuremath{\mathbf{u}}) = \min\big\{C^*(\ensuremath{\mathbf{u}})+\delta,M_d(\ensuremath{\mathbf{u}})\big\}. \end{align*} \end{lemma} \begin{proof} Let us start with the lower bound $\underline{Q}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}$. 
Due to ${\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC} \preceq \refC$ for all $\alpha\in[W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})]$, it holds that \begin{align*} \ensuremath{\mathcal{D}}_{\text{KS}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big) &= \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big|\big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC}\big)(\ensuremath{\mathbf{x}})-\refC(\ensuremath{\mathbf{x}})\Big| = \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{ \refC(\ensuremath{\mathbf{x}})-\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}(\ensuremath{\mathbf{x}}) \Big\}. \end{align*} Since $\sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \big\{\refC(\ensuremath{\mathbf{x}})-\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}(\ensuremath{\mathbf{x}})\big\} = 0$ when $\alpha> C^*(\ensuremath{\mathbf{u}})$, we can assume w.l.o.g. that the minimum is attained for $\alpha\leq C^*(\ensuremath{\mathbf{u}})$. Hence \begin{align*} \min\Big\{\alpha \in [W_d(\ensuremath{\mathbf{u}}),M_d(\ensuremath{\mathbf{u}})] &\colon \ensuremath{\mathcal{D}}_{\text{KS}}\Big({\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC},\refC\Big) \leq \delta\Big\} \\ &= \min\Big\{\alpha \in [W_d(\ensuremath{\mathbf{u}}),C^*(\ensuremath{\mathbf{u}})] \colon \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{\refC(\ensuremath{\mathbf{x}})-\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}(\ensuremath{\mathbf{x}})\Big\} \leq \delta\Big\}. \end{align*} Then, using the definition of $\upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}$ in \eqref{bounds}, we obtain \begin{align*} \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{\refC(\ensuremath{\mathbf{x}}) - \upQ^{\{\ensuremath{\mathbf{u}}\},\alpha}(\ensuremath{\mathbf{x}})\Big\} &= \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{\refC(\ensuremath{\mathbf{x}}) - \min\Big\{M_d(\ensuremath{\mathbf{x}}),\alpha+\sum_{i=1}^d (x_i-u_i)^+\Big\}\Big\}\\ &= \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{\refC(\ensuremath{\mathbf{x}}) - \alpha-\sum_{i=1}^d (x_i-u_i)^+\Big\}\\ &= \sup_{\ensuremath{\mathbf{x}}\in\mathbb{I}^d} \Big\{\refC(\ensuremath{\mathbf{x}}) -\sum_{i=1}^d (x_i-u_i)^+\Big\} - \alpha = C^*(\ensuremath{\mathbf{u}})-\alpha, \end{align*} where the second equality holds due to the fact that $C^*(\ensuremath{\mathbf{x}})-M_d(\ensuremath{\mathbf{x}})\leq 0$ for all $\ensuremath{\mathbf{x}} \in \mathbb{I}^d$. Hence, we conclude that \begin{align*} \downQ^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}(\ensuremath{\mathbf{u}}) &= \min\big\{\alpha \in [W_d(\ensuremath{\mathbf{u}}),C^*(\ensuremath{\mathbf{u}})] \colon C^*(\ensuremath{\mathbf{u}})-\alpha \leq \delta\big\} \\ &= \min\big\{\alpha \in [W_d(\ensuremath{\mathbf{u}}),C^*(\ensuremath{\mathbf{u}})] \colon C^*(\ensuremath{\mathbf{u}})-\delta \leq \alpha\big\} = \max\big\{C^*(\ensuremath{\mathbf{u}})-\delta,W_d(\ensuremath{\mathbf{u}})\big\}. \end{align*} The proof for the upper bound $\overline{Q}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}$ is analogous, therefore omitted. \end{proof} Analogously to Theorem \ref{prescribedDistance}, one can also consider the situation where information on the survival copula is available.
Note that each statistical distance that measures the discrepancy between quasi-copulas can easily be translated into a distance on quasi-survival functions, \textit{i.e.} if $\ensuremath{\mathcal{D}}$ is a statistical distance on $\mathcal{Q}^d\times\mathcal{Q}^d$, then $(\widehat{Q},\widehat{Q}')\mapsto \ensuremath{\mathcal{D}}\big(\widehat{Q}(\mathbf 1-\cdot),\widehat{Q}'(\mathbf 1-\cdot)\big)$ defines a distance on the set of survival copulas or quasi-survival functions. \begin{corollary} \label{prescribedDistanceSurvival} Let $C^*$ be a $d$-copula and $\ensuremath{\mathcal{D}}$ be a statistical distance which is continuous with respect to the pointwise convergence of quasi-copulas, monotonic with respect to the upper orthant order and min/max-stable. Consider the set $\widehat{\mathcal{Q}}^{\ensuremath{\mathcal{D}},\delta} = \big\{\widehat{Q}\in\widehat{\mathcal{Q}}^d\colon \ensuremath{\mathcal{D}}(\widehat{Q},\widehat{C}^*) \leq \delta\big\}$ for $\delta\in\mathbb{R}_+$. Then \begin{align*} \underline{\widehat{Q}}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}}) :=& \min\Big\{\alpha\in\mathbb S(\ensuremath{\mathbf{u}})\colon \ensuremath{\mathcal{D}}\Big(\widehat{\overline{Q}}^{\{\ensuremath{\mathbf{u}}\},\alpha}\wedge\refC,\refC\Big) \leq \delta\Big\} = \min\big\{\widehat{Q}(\ensuremath{\mathbf{u}})\colon \widehat{Q}\in\widehat{\mathcal{Q}}^{\ensuremath{\mathcal{D}},\delta}\big\} \\ \widehat{\overline{Q}}^{\ensuremath{\mathcal{D}},\delta}(\ensuremath{\mathbf{u}}) :=& \max\Big\{\alpha\in\mathbb S(\ensuremath{\mathbf{u}})\colon \ensuremath{\mathcal{D}}\Big(\widehat{\underline{Q}}^{\{\ensuremath{\mathbf{u}}\},\alpha}\vee\refC,\refC\Big) \leq \delta\Big\} = \max\big\{\widehat{Q}(\ensuremath{\mathbf{u}})\colon \widehat{Q}\in\widehat{\mathcal{Q}}^{\ensuremath{\mathcal{D}},\delta}\big\}. \end{align*} \end{corollary} The proof is analogous to the proof of Theorem \ref{prescribedDistance} and is therefore omitted. \section{Numerical examples and illustrations} \label{numerics} In this section we apply the results derived in the previous sections in order to obtain bounds on the Value-at-Risk that account for additional information on the dependence structure. In particular, we are able to include different types of partial dependence information that are both relevant for practical applications and have not been considered in the literature so far. Let us first recall the setting of Section \ref{prescribedMax}, where we showed that \begin{align*} \underline{m}_{\mathcal{E},\max}(s) \le \ensuremath{\mathbb{P}}\xspace(X_1+\dots+X_d \le s) \le \overline{M}_{\mathcal{E},\max}(s); \end{align*} see Theorem \ref{boundMax}. In order to compute the bounds $\underline{m}_{\mathcal{E},\max}(s)$ and $\overline{M}_{\mathcal{E},\max}(s)$, we first need to choose a method to estimate the probability $\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)$ for fixed $(\alpha_1,\dots,\alpha_m)$ in $\overline{\mathcal{A}}$ or $\underline{\mathcal{A}}$ and $Y_i\sim G_i$, $i=1,\dots,m$. This corresponds to a standard Fr\'echet problem over a class of distributions with fixed marginals. Thus, two approaches lend themselves naturally to this task: an approximation by the standard bounds given in \eqref{standardBounds} or by the rearrangement algorithm.
Indeed, we can use the standard bounds in \eqref{standardBounds} to estimate \begin{align*} \max\Big\{0,\sup_{\mathcal{U}(s)}\sum_{i=1}^m G^-_i\Big(\frac{u_i}{\alpha_i}\Big)-m+1\Big\} &\le \ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s) \\ &\qquad \le \min\Big\{1,\inf_{\mathcal{U}(s)}\sum_{i=1}^m G^-_i\Big(\frac{u_i}{\alpha_i}\Big)\Big\}, \end{align*} where $\mathcal{U}(s) = \{(u_1,\dots,u_m)\in\mathbb{R}^m\colon u_1+\cdots+u_m=s\}$ and $G_i^-$ denotes the left-continuous version of $G_i$. Then, the bounds $\underline{m}_{\mathcal{E},\max}$ and $\overline{M}_{\mathcal{E},\max}$ are estimated by \begin{align} \label{computeBounds} \begin{split} &\underline{m}_{\mathcal{E},\max}(s) \geq \sup_{(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}} \max\Big\{0,\sup_{\mathcal{U}(s)}\sum_{i=1}^m G^-_i\Big(\frac{u_i}{\alpha_i}\Big)-m+1\Big\}, \\ &\overline{M}_{\mathcal{E},\max}(s) \leq \inf_{(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}} \min\Big\{1,\inf_{\mathcal{U}(s)}\sum_{i=1}^m G^-_i\Big(\frac{u_i}{\alpha_i}\Big)\Big\}. \end{split} \end{align} Similarly, for fixed $(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}$, the rearrangement algorithm allows us to approximate the bound \begin{align} \label{lowerRA} \inf\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n \in\mathcal{J}\big\}, \end{align} while for $(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}$ we can approximate \begin{align} \label{upperRA} \sup\big\{\ensuremath{\mathbb{P}}\xspace(\alpha_1Y_1+\cdots+\alpha_mY_m\leq s)\colon Y_n\sim G_n, n \in\mathcal{J}\big\}. \end{align} To this end, we need to suitably discretize the variables $\alpha_1Y_1,\cdots,\alpha_mY_m$ and apply the rearrangement algorithm to the resulting matrix; for further details see \cite*{embrechts2013}. Denoting the lower bound in \eqref{lowerRA} computed by means of the rearrangement algorithm by $\underline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m)$ and analogously the upper bound in \eqref{upperRA} by $\overline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m)$, we thus obtain the following estimates: \begin{align} \label{computeBoundsRA} \begin{split} &\underline{m}_{\mathcal{E},\max}(s) \geq \sup_{(\alpha_1,\dots,\alpha_m)\in\underline{\mathcal{A}}} \underline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m), \\ &\overline{M}_{\mathcal{E},\max}(s) \leq \inf_{(\alpha_1,\dots,\alpha_m)\in\overline{\mathcal{A}}} \overline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m). \end{split} \end{align} Let us stress that the rearrangement algorithm has favorable numerical properties compared to the improved standard bounds. In particular, the bounds $\underline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m)$ and \newline$\overline{RA}(\alpha_1 Y_1,...,\alpha_m Y_m)$ can be computed very quickly for a reasonably fine discretization, thus the subsequent optimization over the set $\underline{\mathcal{A}}$ and $\overline{\mathcal{A}}$ can be performed much faster. The following example illustrates the improvement achieved by the VaR bounds in this setting, that accounts for extreme value information. \begin{example} \label{exExtremeValue} We consider a homogeneous portfolio $\mathbf{X} = (X_1,\dots,X_6)$ where the marginals are Pareto-2 distributed, \textit{i.e.} $X_1,\dots,X_6\sim\text{Pareto}_2$, and analyze the improvement of the VaR bounds when additional information on the dependence structure is taken into account. In particular, we assume that the distributions $G_n$ of the maxima $\max_{j\in J_n} X_j$ are known for $J_1=\{1,2,3\}$ and $J_2=\{4,5,6\}$. 
In this case, it follows from Theorem \ref{boundMax} and equation \eqref{computeBoundsRA}, that \begin{align} \label{exExtremeValueEq1} \begin{split} & \sup_{(\alpha_1,\dots,\alpha_8)\in\underline{\mathcal{A}}} \underline{RA}(\alpha_1 Y_1,\alpha_2 Y_2,\alpha_3 X_1,...,\alpha_{8} X_6) \\ & \quad \leq \inf\Big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_6\leq s)\colon X_1,\dots,X_6\sim\text{Pareto}_2, \max_{j\in J_n} X_j \sim G_n, n = 1,2\Big\} \end{split} \end{align} and analogously \begin{align} \label{exExtremeValueEq2} \begin{split} & \inf_{(\alpha_1,\dots,\alpha_8)\in\overline{\mathcal{A}}} \overline{RA}(\alpha_1 Y_1,\alpha_2 Y_2,\alpha_3 X_1,...,\alpha_{8} X_6)\\ &\quad \geq \sup\Big\{\ensuremath{\mathbb{P}}\xspace(X_1+\cdots+X_6\leq s)\colon X_1,\dots,X_6\sim\text{Pareto}_2, \max_{j\in J_n} X_j \sim G_n, n = 1,2\Big\}. \end{split} \end{align} Note that the marginals $X_1,\dots,X_6$ appear in the optimization since the distribution of the maximum of every individual variable is known and equals the respective marginal distribution, \textit{i.e.} $\max\{X_i\} = X_i\sim F_i$ for $i=1,\dots,d$; see again Remark \ref{rem:calE}. The marginal distributions are thus accounted for in the computation of the bounds. The solution of the optimization problems in \eqref{exExtremeValueEq1} and \eqref{exExtremeValueEq2} yields bounds on the VaR of the sum $X_1+\cdots+X_6$ when the distribution of the partial maxima is taken into account. Table \ref{tab:extremeValue} shows the confidence level $\alpha$ in the first column and the VaR bounds without additional information, \textit{i.e.} the unconstrained bounds, in the second column. The third and fourth columns contain the improved VaR bounds that account for the extreme value information, as well as the improvement over the unconstrained bounds in percentage terms. In order to illustrate our method, we need to know the distribution of the partial maxima. To this end, we assume that the vectors $(X_1,X_2,X_3)$ and $(X_4,X_5,X_6)$ have the same Student-$t$ copula with equicorrelation matrices and two degrees of freedom, and numerically determine the distribution of $\max\{X_1,X_2,X_3\}$ and $\max\{X_4,X_5,X_6\}$. In the third column the pairwise correlations of $(X_1,X_2,X_3)$ and $(X_4,X_5,X_6)$ are assumed to equal 0.9, while in the fourth column they equal 0.7. All bounds in this table, both without and with additional information, are computed using the rearrangement algorithm, although we have also performed the same computations using the standard bounds in \eqref{standardBounds}. In the case with additional information, the standard bounds and the rearrangement algorithm yield the same results. In the case without additional information, the rearrangement algorithm clearly outperforms the standard bounds, a fact which is well documented, hence we do not report these results here.
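Before discussing the results, let us sketch the core of the rearrangement algorithm used for these computations; the listing below is a minimal, generic implementation in the spirit of \cite{embrechts2013} rather than the exact code behind Table \ref{tab:extremeValue}, and the function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def rearrangement_algorithm(X, tol=1e-9, max_iter=200):
    # X: (N, d) matrix whose columns are discretized marginal quantiles.
    # Each column is repeatedly rearranged so that it is oppositely ordered
    # to the sum of the remaining columns, which drives the row sums towards
    # minimal variability (basic RA of Embrechts et al., 2013).
    X = np.array(X, dtype=float)
    N, d = X.shape
    prev_spread = np.inf
    for _ in range(max_iter):
        for j in range(d):
            partial = X.sum(axis=1) - X[:, j]
            ranks = np.argsort(np.argsort(-partial))   # rank 0 = largest partial sum
            X[:, j] = np.sort(X[:, j])[ranks]          # opposite ordering of column j
        spread = X.sum(axis=1).max() - X.sum(axis=1).min()
        if abs(prev_spread - spread) < tol:
            break
        prev_spread = spread
    return X
\end{verbatim}
In the present setting the columns of the input matrix contain the discretized quantiles of $\alpha_1 Y_1,\dots,\alpha_m Y_m$ (restricted to the relevant tail when approximating \eqref{lowerRA} or \eqref{upperRA}), and the optimization over $\underline{\mathcal{A}}$ and $\overline{\mathcal{A}}$ in \eqref{computeBoundsRA} is performed on top of this routine.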
\begin{table}[h] \begin{center} \begin{tabular}{c|cc|ccc|ccc} \hline \hline $\alpha$ & lower & upper & \shortstack{\vspace{0.1cm}\\ lower\\ improved} & \shortstack{\vspace{0.1cm}\\ upper\\ improved} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%} & \shortstack{\vspace{0.1cm}\\ lower\\ improved} & \shortstack{\vspace{0.1cm}\\ upper\\ improved} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%}\\ \hline 95\% & 3.8 & 47.8 & 3.8 & 39.5 & 19.7 & 4.9 & 44.8 & 9.1 \\ 99\% & 4.9 & 114.0 & 11.0 & 96.1 & 22.0 & 12.4 & 107.8 & 12.5 \\ 99.5\% & 5.2 & 163.7 & 16.1 & 138.5 & 22.7 & 18.0 & 155.1 & 13.5 \\ \hline \hline \end{tabular} \caption{Unconstrained and improved VaR bounds for the sum $X_1+\cdots+X_6$ with known distribution of partial maxima for different confidence levels.} \label{tab:extremeValue} \end{center} \end{table} The following observations ensue from this example: (i) The addition of partial dependence information leads to a notable reduction of the spread between the upper and lower bounds. Indeed, the bounds with additional information are finer than the unconstrained bounds resulting from the rearrangement algorithm, which are approximately sharp in this setting. Nevertheless, the model risk is still not negligible. (ii) The level of improvement \textit{increases} with increasing confidence level $\alpha$. This is in contrast to related results in the literature, see \textit{e.g.} \cite{bernard2015,bignozzi2015}, where the improvement typically decreases as the confidence level increases, and is an advantage of the present methodology. (iii) The improvement is more pronounced in the high-correlation scenario, and for the lower bound. These two observations are in accordance with the related literature; \textit{e.g.} \citep{puccetti2016} also report a more pronounced improvement of the VaR bounds in the presence of strong positive dependence (especially in the tails), while \cite{bernard2015} report a more noticeable improvement of the lower relative to the upper VaR bound. \end{example} In the next example we combine the results of Section \ref{boundsOnCopula} with Proposition \ref{varBoundsMax} in order to derive improved bounds on the VaR of the maximum of risks over a class of copulas in the proximity of a reference copula. \begin{example} \label{exMaximumMinimum} Let us consider a homogeneous portfolio of three risks $(X_1,X_2,X_3)$ where the marginals are again Pareto-2 distributed, \textit{i.e.} $X_1,X_2,$ $X_3\sim\text{Pareto}_2$. We assume that the reference copula $C^*$ is a Student-$t$ copula with equicorrelation matrix and two degrees of freedom, and are interested in computing bounds on the VaR over the class of models in the $\delta$-neighborhood of $C^*$ as measured by the Kolmogorov--Smirnov distance. In other words, we consider the class $$\mathcal{C}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}:= \big\{C\in\mathcal{C}^d\colon \ensuremath{\mathcal{D}}_{\text{KS}}(C,C^*) \leq \delta \big\},$$ and using Theorem \ref{prescribedDistance} and Lemma \ref{explicitBounds} we arrive at bounds on the copulas in $\mathcal{C}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}$. Then, we apply Proposition \ref{varBoundsMax} using the bounds $\underline{Q}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}$ and $\overline{Q}^{\ensuremath{\mathcal{D}}_{\text{KS}},\delta}$ obtained above in order to compute bounds on the VaR of the maximum $\max\{X_1,X_2,X_3\}$ over the class of models in the vicinity of $C^*$.
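The bounds of Lemma \ref{explicitBounds} evaluated along the diagonal, combined with the elementary identity $\ensuremath{\mathbb{P}}\xspace(\max\{X_1,X_2,X_3\}\leq s) = C\big(F(s),F(s),F(s)\big)$ for continuous marginals $F$, already allow for a rough numerical cross-check of the figures reported below. The sketch that follows takes this route rather than reproducing the exact implementation behind the tables; the Monte Carlo approximation of $C^*$, the grid sizes and all function names are illustrative choices.
\begin{verbatim}
import numpy as np
from scipy import stats

def sample_t_copula(rho=0.9, nu=2, d=3, n_sim=200000, seed=0):
    # Monte Carlo sample from an equicorrelated Student-t copula.
    rng = np.random.default_rng(seed)
    corr = np.full((d, d), rho)
    np.fill_diagonal(corr, 1.0)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_sim)
    w = rng.chisquare(nu, size=n_sim) / nu
    return stats.t.cdf(z / np.sqrt(w)[:, None], df=nu)

def var_bounds_max(alpha, delta, rho=0.9):
    # Bounds on VaR_alpha(max{X1,X2,X3}) over the KS delta-ball around the
    # reference t-copula, using P(max_i X_i <= s) = C(F(s), F(s), F(s)) and the
    # explicit bounds max{C*(u)-delta, W_3(u)} and min{C*(u)+delta, M_3(u)}.
    v = sample_t_copula(rho=rho)                     # copula sample, shape (n_sim, 3)
    s = np.linspace(0.0, 100.0, 5001)                # grid of thresholds
    u = 1.0 - (1.0 + s) ** (-2.0)                    # Pareto(2) cdf F(s)
    vmax = np.sort(v.max(axis=1))
    c_star = np.searchsorted(vmax, u, side="right") / len(vmax)   # C*(u,u,u)
    lower_df = np.maximum(c_star - delta, np.maximum(3.0 * u - 2.0, 0.0))
    upper_df = np.minimum(c_star + delta, u)
    var_upper = s[np.searchsorted(lower_df, alpha)]  # upper (worst-case) VaR bound
    var_lower = s[np.searchsorted(upper_df, alpha)]  # lower (best-case) VaR bound
    return var_lower, var_upper

# e.g. var_bounds_max(0.95, delta=0.005)
\end{verbatim}
Since the diagonal of $C^*$ is approximated by simulation, the output of such a sketch is only expected to approximate the entries of the tables below.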
Table \ref{tab:distance1} shows the confidence level and the sharp unconstrained (\textit{i.e.} marginals-only) VaR bounds in the first two columns. The third, fourth and fifth column contain the upper and lower VaR bounds which use the information on the distance from $C^*$, for different levels of the threshold $\delta$, as well as the improvement over the unconstrained bounds in percentage terms. In the computation we assume that the pairwise correlation of the $t$-copula $C^*$ equals 0.9. The results are rounded to one decimal digit for the sake of legibility. \begin{table}[h] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{>{\centering}m{0.7cm}|c|cc|cc|cc} \hline \hline $\alpha$ & (lower : upper)& \shortstack{\vspace{0.1cm}\\ $\delta = 0.001$ \\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%} & \shortstack{\vspace{0.1cm}\\ $\delta = 0.005$\\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%} & \shortstack{\vspace{0.1cm}\\ $\delta = 0.01$\\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%}\\ \hline 95\% & (1.4 : 6.8) & (3.6 : 4.6) & 81 & (2.5 : 4.7) & 59 & (2.3 : 5.0) & 50 \\ 97\% & (2.0 : 9.1) & (4.8 : 6.2) & 78 & (3.5 : 6.7) & 55 & (3.2 : 7.7) & 37\\ 99\% & (3.0 : 16.4) & (9.0 : 11.8) & 79 & (6.4 : 15.5) & 32 & (5.2 : 16.2) & 18 \\ \hline \hline \end{tabular} } \caption{Unconstrained and improved VaR bounds for $\max\{X_1,X_2,X_3\}$ given a threshold on the distance from the reference $t$-copula $C^*$ with pairwise correlation equal to 0.9.} \label{tab:distance1} \end{center} \end{table} The next table is analogous to Table \ref{tab:distance1}, but this time weaker dependence is induced by the reference model, assuming that the pairwise correlations in the $t$-copula $C^*$ are equal to 0.6. \begin{table}[h] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{>{\centering}m{0.7cm}|c|cc|cc|cc} \hline \hline $\alpha$ & (lower : upper)& \shortstack{\vspace{0.1cm}\\ $\delta = 0.001$ \\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%} & \shortstack{\vspace{0.1cm}\\ $\delta = 0.005$\\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%} & \shortstack{\vspace{0.1cm}\\ $\delta = 0.01$\\ (lower : upper)} & \shortstack{\vspace{0.1cm}\\ impr.\\ \%}\\ \hline 95\% & (1.4 : 6.8) & (3.5 : 5.3) & 67 & (1.5 : 5.6) & 24 & (1.4 : 5.8) & 19 \\ 97\% & (2.0 : 9.1) & (4.8 : 7.2) & 66 & (2.3 : 7.8) & 23 & (2.0 : 8.8) & 4\\ 99\% & (3.0 : 16.4) & (9 : 14) & 62 & (4.2 : 16.4) & 9 & (3.4 : 16.4) & 3 \\ \hline \hline \end{tabular} } \caption{Unconstrained and improved VaR bounds for $\max\{X_1,X_2,X_3\}$ given a threshold on the distance from the reference $t$-copula $C^*$ with pairwise correlation equal to 0.6.} \label{tab:distance2} \end{center} \end{table} Let us point out that the bounds in Proposition \ref{varBoundsMax}, hence also in the second column of Tables \ref{tab:distance1} and \ref{tab:distance2}, are sharp when no dependence information is available, \textit{i.e.} when $\underline{Q} = W_3$ and $\overline{Q} = M_3$. This is due to the fact that $M_3$ is a copula and $W_3$ is pointwise best-possible. The observations made for the previous example are largely valid also in the present one, namely: (i) The addition of partial information reduces significantly the spread between the upper and lower bounds. This reduction is more pronounced as the threshold $\delta$ decreases; in other words, the more reliable the reference model, the more pronounced the reduction of model risk. 
These results should be compared, qualitatively, with analogous results for the `trusted region' in \cite{bernard2015}. (ii) The level of improvement decreases in this case, sometimes dramatically, with increasing confidence level $\alpha$. In particular, for $\alpha = 99\%$ the improvement is small, especially for large values of $\delta$. (iii) The improvement is more pronounced in the high-dependence scenario, with improvements over the sharp unconstrained bounds of up to 81\%. \end{example} \bibliographystyle{abbrvnat}
\section{Introduction} \noindent The rise of two-dimensional materials such as graphene, phosphorene and silicene has attracted great interest across the fields of photonics and opto-electronics \cite{li2017light}. Graphene exhibits many crucial and significant physical properties due to its unique electronic spectrum of massless Dirac-like particles {\cite{novoselov2004electric,neto2009electronic}}. However, graphene is a zero-gap semiconductor, so it has very limited applications in electronic devices without significant strain-engineering {\cite{ni2008uniaxial1}} or physical modification of its morphology {\cite{han2007energy1}}. The monolayer of black phosphorus (BP) is known as phosphorene, the 2D allotrope of phosphorus \cite{liu2014phosphorene}. Phosphorene is highly anisotropic. Its band structure is quite different from that of other 2D materials: the electronic spectrum of phosphorene has a Dirac-like (linear) band in one direction and a Schr${\ddot{\mbox{o}}}$dinger-like (parabolic) band in the other direction \cite{ezawa2015highly}. In contrast to graphene, phosphorene possesses a finite band gap in its electronic spectrum \cite{guo2015from}. Phosphorene is a direct band gap semiconductor \cite{guo2015from} with a gap lying in the visible region of the electromagnetic spectrum, so it is highly useful for electronic devices operating in the visible region, such as LEDs and solar cells \cite{yang2015optical}. Due to its very high hole mobility, phosphorene can be utilized as a p-type device material \cite{liu2014phosphorene}. The advantage of phosphorene over graphene is the presence of a tunable direct band gap, so the nonlinear optical response of phosphorene can be tuned by applying an external field \cite{li2017tunable}. Specific properties of phosphorene, such as its electrical, thermal and optical anisotropy, can be utilised in device fabrication, for example in transparent saturable absorbers, fast photo-conductive switches and low-noise photodetectors \cite{viti2018photonic}. Optical properties studied in the context of graphene include the optical conductivity, the optical Stark effect and Rabi oscillations \cite{eberly1975optical,haug2009quantum,gerry4introductory}. Many experiments have been performed to study optical properties, such as the optical Stark effect \cite{haug2009quantum}, the optical conductivity, Rabi oscillations \cite{haug2009quantum,gerry4introductory,ni2007graphene,mandel1995optical,boyd2008nonlinear}, the universal optical conductance \cite{lee1993localized,ludwig1994integer,ziegler1998delocalization}, the measurement of the fine structure constant \cite{nair2008fine} and four wave mixing \cite{haug2009quantum}; incoherent optical properties such as optical dephasing \cite{boyd2008nonlinear} and the relaxation of charge carriers, both inter-band and intra-band, in graphene and graphene-based systems on various substrates have been reported experimentally using the pump-probe technique \cite{kumar2009femtosecond,breusing2011ultrafast,dawlaty2008measurement,shang2010femtosecond,george2008ultrafast,ruzicka2010femtosecond}. Owing to the tunable band gap in the electronic spectrum of BP, the electronic and optical properties of BP can be changed drastically \cite{wang2016optical}.
The monolayer of BP exhibits many unique optical properties \cite{wang2015highly}, such as a large third-order nonlinear optical susceptibility of about $10^{-19} \mathrm{m^2/V^2}$ and a measured fast relaxation time of $0.13\,\mathrm{ps}$ \cite{wang2016optical,miao2017ultrafast,Margulis2017coherent}. By controlling the size of BP, its nonlinear optical response can be adjusted, opening a new route to electronic and optoelectronic devices \cite{xu2017size,pedersen2017nonlinear}. Therefore, the linear and nonlinear optical properties of BP and graphene have become a matter of curiosity for materials scientists and motivate research in the field of phosphorene nonlinear optics. When energy is exchanged cyclically between a two-level quantum system and the driving field, the resulting oscillations are known as Rabi oscillations \cite{rabi1937space}. Rabi oscillations in conventional semiconductors have been studied extensively using the rotating wave approximation (RWA) \cite{allen1975optical,lindberg1988effective}, which is valid only in the case of resonance. In the off-resonance case, a new type of oscillation is found by applying Floquet theory \cite{oka2009photovoltaic,lindner2011floquet,kitagawa2011transport,inoue2010j,dora2012optically}. In Floquet theory, the applied light frequency is {\it{off resonant}} for any electronic transition. Therefore, the light does not excite electrons directly but effectively modifies the electronic band structure via virtual photon absorption processes \cite{ezawa2013photoinduced}. Floquet theory thus becomes an alternative to the RWA in the off-resonance case and has recently been applied to Dirac fermionic systems \cite{oka2009photovoltaic,lindner2011floquet,kitagawa2011transport,inoue2010j,dora2012optically}. Floquet theory has also been studied under the name of asymptotic rotating wave approximation (ARWA) by Enam {\it et al.} \cite{kumar2012crossover}. From fig. (1) of Enam {\it et al.} \cite{kumar2012crossover}, it can be seen explicitly that Floquet theory dominates in the low-energy regime. There is a shift in the resonance condition of the RWA, known as the Bloch-Siegert shift (BSS) \cite{bloch1940magnetic}, which arises when the counter-rotating term is taken into account. Such a shift is very important for characterizing the amplitude and homogeneity of the proton-decoupling field and for monitoring probe performance \cite{vierkotter1996applications}. It has also been found in a strongly driven classical two-level system by Beijersbergen {\it{et al.}} \cite{beijersbergen1992multiphoton}. In graphene, the BSS becomes important in the case of next-nearest-neighbour hopping or the inclusion of Rashba spin-orbit interaction \cite{kumar2014band}. In phosphorene, the BSS arises from the puckered crystal structure \cite{fukuoka2015electronic,kumar2019anisotropic}. The BSS has been well studied in the presence of classical \cite{shirley1965solution,bloch1940magnetic} and quantized fields \cite{stenholm1972saturation,hannaford1973analytical,cohen1973quantum}. The motivation of this work is therefore to study the BSS in phosphorene and graphene when the external field is treated in quantized form. When an isolated two-level atom interacts with a single-mode quantized electromagnetic field in a lossless cavity, the Jaynes-Cummings model \cite{jaynes1963comparison,yoo1985dynamical,Cummings} comes into the picture to describe the phenomenon. The Jaynes-Cummings model is exactly solvable in the rotating wave approximation.
The Jaynes-Cummings model has already been used to explain collapse-revival phenomena \cite{eberly1980periodic,narozhny1981coherence,yoo1981non}. The periodic recurrence of the quantum wave function to its original form during the time evolution is known as collapse-revival; it has been predicted theoretically \cite{ficek2014quantum} and observed experimentally \cite{narozhny1981coherence}. The collapse-revival oscillations decay rapidly at short times, but periodically regenerate to large amplitudes on a longer time scale \cite{vela2005coherent,torosov2015mixed}. In this article, the bands of phosphorene are tuned via the application of Floquet theory. Since the phosphorene bands are anisotropic, the role of anisotropy is described for various phenomena such as the Bloch-Siegert shift, the collapse-revival spectra and the Floquet oscillations. A numerical justification of the anisotropy within Floquet theory is also provided. The intrinsic anisotropy of phosphorene therefore has major physical significance. The results for phosphorene are compared with those for graphene wherever required. \section{Collapse-Revival Spectra of Phosphorene and Graphene} The low energy Hamiltonian of phosphorene is $H=\left(u p_{y}^2+m\right)\sigma_{x}+v_{F} p_{x}\sigma_{y}$ \cite{ezawa2015highly}. Here $\sigma$ denotes the Pauli matrices, $v_{F}$ the Fermi velocity, $u$ the effective mass, $m$ the gap acting as the Dirac mass, and $p_{x}$, $p_{y}$ the $x$ and $y$ components of the momentum. The low energy Hamiltonian of phosphorene in the presence of a vector potential ${\bf{A}}(t)$, in second quantized form, is \small \begin{align} H&=&c_A^{\dagger}\left[ \left(up^2_{y}+m\right)-iv_{F} p_{x}\right]c_B + c_B^{\dagger }\left[\left( up^2_{y}+m\right)+iv_{F} p_{x}\right]c_A+e^{ -i \omega t}\left[c_A^{\dagger} \left\{ \left(-i\frac{2up_{y}}{2v_{F}}\lambda b \right)+\frac{i}{2}\lambda b \right\} c_B\right. \nonumber\\&& + \left. c_B^{\dagger}\left\{\left(-i\frac{2up_{y}}{2v_{F}}\lambda b \mbox{ }\right)-\frac{i}{2}\lambda b\right\}c_A \right] +e^{ i \omega t}\left[ c_A^{\dagger}\left\{i\left(\frac{2up_{y}}{2v_{F}}\lambda b^{\dagger} \right)+\frac{i}{2}\lambda b^{\dagger}\right\}c_B \right. \nonumber\\&& + \left. c_B^{\dagger}\left\{\left(i\frac{2up_{y}}{2v_{F}}\lambda b^{\dagger} \mbox{ }\right)-\frac{i}{2}\lambda b^{\dagger}\right\}c_A \right] \label{JCHP} \end{align} \normalsize Here $ A, B$ represent either spin up or spin down and $c\mbox{ }(c^{\dagger})$ is the annihilation (creation) operator. The vector potential is taken as ${\bf{A}}(t)=\mathrm{Re}({\bf{A_{0}}} e^{-i\omega t})$, i.e. circular polarization. The Floquet oscillations are present only in the case of circular polarization \cite{kumar2014quantum}. The coupling constant $\lambda$ is defined through $-\frac{ev}{c}{\bf{A_{0}}}=\lambda b$, where $ [b,b^{\dagger}] = 1 $ are the photon operators. The Hamiltonian (eq.(\ref{JCHP})) becomes analogous to the Jaynes-Cummings model by using the identifications $\sigma_{+}=\sigma_{x}+i\sigma_{y} = c^{\dagger}_{B} c_{A}$, $\sigma_{-}=\sigma_{x}-i\sigma_{y} = c^{\dagger}_{A}c_{B}$ and $\sigma_{z} =(c_A^{\dagger} c_A-c_B^{\dagger}c_B)$. A unitary transformation is applied to the photons, i.e. $ b $ is replaced by $ b e^{ i \omega t }$. To simplify the model (eq.(\ref{JCHP})), only single-electron hopping is considered. Therefore, the transition states with non-zero amplitudes have the form $\langle0,1,n+1|\phi(t)\rangle$ and $\langle1,0,n+1|\phi(t)\rangle$.
$n$ is number of the photon, which is very large, so it can be considered $ n \approx n+1 $. Therefore, the energy-eigenvalue equation $i\hbar\frac{\partial}{\partial t}|\phi(t)\rangle=H |\phi(t)\rangle$ of phosphorene in matrix form (setting $\hbar=1$) \begin{align}&&i\hbar\frac{\partial}{\partial_t}\begin{bmatrix} \langle 0,1,n+1|\phi \rangle \\ \langle 1,0,n+1|\phi \rangle \end{bmatrix}=\begin{bmatrix} 0& \left[\left\{ up^2_{y}+m\right\}+iv_{F} p_{x}\right]\\ \left[ \left\{ up^2_{y}+m\right\}-iv_{F} p_{x}\right] &0 \end{bmatrix}\begin{bmatrix} \langle 0,1,n+1|\phi \rangle \\ \langle 1,0,n+1|\phi \rangle \end{bmatrix} \nonumber \\&& +e^{ -i \omega t}\begin{bmatrix}0& \left[ \left\{-i\frac{2up_{y}}{2v_{F}}\lambda \mbox{ }\right\}-\frac{i}{2}\lambda \right]\;\sqrt{n+1}\\ \left[ \left\{-i\frac{2up_{y}}{2v_{F}}\lambda \right\}+\frac{i}{2}\lambda \right]\;\sqrt{n+1}&0 \end{bmatrix}\begin{bmatrix} \langle 0,1,n+1|\phi \rangle \\ \langle 1,0,n+1|\phi \rangle \end{bmatrix} \nonumber \\&& +e^{ i \omega t}\begin{bmatrix}0& \left[ \left\{i\frac{2up_{y}}{2v_{F}}\lambda \mbox{ }\right\}-\frac{i}{2}\lambda \right]\;\sqrt{n+1}\\ \left[ \left\{i\frac{2up_{y}}{2v_{F}}\lambda \right\}+\frac{i}{2}\lambda \right]\;\sqrt{n+1}&0 \end{bmatrix}\begin{bmatrix} \langle 0,1,n+1|\phi \rangle \\ \langle 1,0,n+1|\phi \rangle \end{bmatrix} \label{matrixP} \end{align} Similarly, the low energy Hamiltonian of graphene $H=v_{F}(\sigma_{x} p_{x}+\sigma_{y} p_{y})$ \cite{neto2009electronic} in the presence of vector potential ${\bf{A}}(t)$ in second quantized form \begin{align} H &=& c^{\dagger}_{A}\mbox{ } v_{F}( p_{x}-i p_{y})\mbox{ } c_{B} + c^{\dagger}_{B}\mbox{ } v_{F}( p_{x}+i p_{y})\mbox{ } c_{A} + \lambda \mbox{ } c_B^{\dagger}c_A\mbox{ } b \mbox{ }e^{ i \omega t } + \lambda\mbox{ }c_A^{\dagger}c_B \mbox{ }b^{\dagger}\; e^{ -i \omega t }. \label{JCHG} \end{align} Therefore, matrix form of the energy-eigenvalue equation $i\hbar\frac{\partial}{\partial t}|\phi(t)\rangle=H |\phi(t)\rangle$ of graphene (setting $\hbar=1$) \begin{eqnarray} i\frac{\partial}{\partial_{t}}\left[ \begin{array}{cc} \langle 0,1,n+1\mbox{ }|\phi(t) \rangle\\ \langle 1,0,n+1\mbox{ }|\phi(t) \rangle\\ \end{array}\right] &=&\left[\begin{array}{ccc} 0& v_{F}(p_{x}+ip_{y})\\ v_{F}(p_{x}-ip_{y})&0\\ \end{array}\right]\left[\begin{array}{cc} \langle 0,1,n+1\mbox{ }|\phi(t) \rangle\\ \langle 1,0,n+1\mbox{ }|\phi(t) \rangle\\ \end{array}\right] \nonumber \\&&+e^{- i \omega t } \left[\begin{array}{cc} 0&0\\ \lambda \sqrt{n+1}&0\\ \end{array}\right] \left[\begin{array}{cc} \langle 0,1,n+1\mbox{ }|\phi(t) \rangle\\ \langle 1,0,n+1\mbox{ }|\phi(t) \rangle\\ \end{array}\right]\nonumber\\&&+e^{ i \omega t }\left[\begin{array}{cc} 0&\lambda \sqrt{n+1}\\ 0&0\\ \end{array}\right] \left[\begin{array}{cc} \langle 0,1,n+1\mbox{ }|\phi(t) \rangle\\ \langle 1,0,n+1\mbox{ }|\phi(t) \rangle\\ \end{array}\right]. \label{matrixG} \end{eqnarray} \subsection{Rotating wave approximation (RWA)}\label{RWA-appox} \subsubsection{Phosphorene}\label{RWAP} With help well known RWA \cite{haug2009quantum}, the eq.(\ref{matrixP}) can be solved. In such approximation, the rapidly oscillating terms of the effective Hamiltonian are neglected. 
First, the first matrix of eq.(\ref{matrixP}) is diagonalize by using unitary transformation \begin{align}\begin{bmatrix} \langle 0,1,n+1|\phi \rangle \\ \langle 1,0,n+1|\phi \rangle \end{bmatrix} =\left( \begin{array}{cc} -\frac{u p_{y}^2+m+i p_{x} v_{F}}{\beta} & \frac{u p_{y}^2+m+i p_{x} v_{F}}{\beta} \\ 1 & 1 \\ \end{array} \right)\begin{bmatrix} \langle0,1,n+1|\phi \rangle_{1} \\ \langle 1,0,n+1|\phi_{1} \rangle_{1} \end{bmatrix} \end{align} Here, ${ \beta =\sqrt{\left(u p_{y}^2+m\right)^2+p_{x}^2 v_{F}^2}}$. Using above transformation, eq.(\ref{matrixP}) becomes \small \begin{align}\hspace{-1.5cm} &&i\hbar\frac{\partial}{\partial_t}\begin{bmatrix} \langle0,1,n+1|\phi \rangle_{1} \\ \langle
1,0,n+1|\phi_{1} \rangle_{1} \end{bmatrix}= \left( \begin{array}{cc} -\beta & 0 \\ 0 & \beta \\ \end{array} \right) \begin{bmatrix} \langle0,1,n+1|\phi \rangle_{1} \\ \langle 1,0,n+1|\phi_{1} \rangle_{1} \end{bmatrix} +e^{ -i \omega t}\sqrt{n+1}\;\lambda\left( \begin{array}{cc} \frac{ \left[p_{x} v_{F}^2+2 i p_{y} u \left(u p_{y}^2+m\right)\right] }{2 v_{F} \beta} & \frac{i \left[m+p_{y} (p_{y}-2 i p_{x}) u\right] }{2 \beta} \\ -\frac{i \left[m+p_{y} (p_{y}-2 i p_{x}) u\right] }{2 \beta} & -\frac{i \left[2 p_{y} u \left(u p_{y}^2+m\right)-i p_{x} v_{F}^2\right] }{2 v_{F} \beta} \\ \end{array} \right)\nonumber \\&& \begin{bmatrix} \langle0,1,n+1|\phi \rangle_{1} \\ \langle 1,0,n+1|\phi_{1} \rangle_{1} \end{bmatrix} +e^{ i \omega t}\sqrt{n+1}\;\lambda\left( \begin{array}{cc} \frac{ \left[p_{x} v_{F}^2-2 i p_{y} u \left(u p_{y}^2+m\right)\right] }{2 v_{F} \beta} & \frac{i \left[m+p_{y} (2 i p_{x}+p_{y}) u\right]) }{2 \beta} \\ -\frac{i \left[m+p_{y} (2 i p_{x}+p_{y}) u\right] }{2 \beta} & \frac{i \left[i p_{x} v_{F}^2+2 p_{y} u \left(u p_{y}^2+m\right)\right] }{2 v_{F} \beta} \\ \end{array} \right) \begin{bmatrix} \langle0,1,n+1|\phi \rangle_{1} \\ \langle 1,0,n+1|\phi_{1} \rangle_{1} \end{bmatrix} \end{align} \normalsize Now again using new transformation $\langle0,1,n+1|\phi \rangle_{1}=e^{i t \beta }\langle0,1,n+1|\phi \rangle_{2}$ and $\langle1,0,n+1|\phi \rangle_{1}=e^{-i t \beta }\langle1,0,n+1|\phi \rangle_{2}$ and leaving the counter-rotating term (rapidly varying term), the final equation of RWA has form \begin{subequations} \begin{align} i\frac{\partial}{\partial_t} \langle0,1,n+1|\phi \rangle_{2}=\frac{i \left[m+p_{y} (2 i p_{x}+p_{y}) u\right] \sqrt{n+1}\;\lambda}{2 \beta}\;e^{ i (\omega-2\beta) t}\langle 1,0,n+1|\phi_{1} \rangle_{2} \label{RWA1} \end{align} \begin{align} i \frac{\partial}{\partial_t}\langle 1,0,n+1|\phi_{1} \rangle_{2}= \;\frac{i \left[m+p_{y} (p_{y}-2 i p_{x}) u\right] \sqrt{n+1}\;\lambda}{2 \beta}e^{ -i (\omega-2\beta) t}\langle0,1,n+1|\phi \rangle_{2} \label{RWA2} \end{align} \end{subequations} Applying initial condition $\langle0,1,n+1|\phi \rangle_{2}=0$ and $\langle 1,0,n+1|\phi_{1} \rangle_{2}=1$ and solving above eq.(\ref{RWA1}) and eq.(\ref{RWA2}), the probability amplitude of wave-function $\langle 1,0,n+1|\phi_{1} \rangle_{2}$ is \begin{align}P_{2P-RWA}(t)=|\langle 1,0,n+1|\phi_{1} \rangle_{2}|^2=\cos^2\left(\frac{t}{2} \Omega_{\mathrm{RWA-P}}(n)\right). \end{align} Where $\Omega_{\mathrm{RWA-P}}(n)=\sqrt{\Delta_{P}^2+(n+1) \lambda^2\gamma}$ is conventional Rabi frequency of phosphorene, $\Delta_{P}=(\omega-2\beta)$ is detuning parameter and $\gamma=\left[(m+p^2 u \sin ^2\theta )^2+p^4 u^2 \sin ^22 \theta \right]/\beta^2$ by considering momentum vector in polar form i.e. $p_x=p \cos \theta $, $p_y=p \sin \theta$ and $\hbar=1$. In RWA, detuning $\Delta_{P}$ becomes zero. \begin{figure}[h] \centering \subfigure[]{\includegraphics[width=70mm,height=65mm]{collapse_revival_RWA_theta_piby4.png}} \subfigure[]{\includegraphics[width=70mm,height=65mm]{collapse_revival_RWA_theta_zero.png}}\\ \caption{\small (Color online) The Collapse and revival phenomenon of Rabi oscillations in phosphorene for different value of wavevector angle {\bf(a)} $\theta=\frac{\pi}{4}$ {\bf(b)} $\theta=0$. The plot between the probability of state $\langle 0,1,n+1\mbox{ }|\phi(t) \rangle$ and time. For plotting, we considered the mean number of photon $\langle n \rangle =20$, $\lambda=1$, $u=1$, $m=1$, $v_{F}=1$ $p=1$ and $\omega=1$. 
Time is in units of $\lambda^{-1}$.} \label{collapse_revival_RWA} \end{figure} \noindent Taking the initial conditions in quantized form, i.e. the probability amplitudes $\langle 0,1,n\mbox{ }|\phi(t) \rangle=\langle n|\alpha\rangle\; \mbox{and} \;\langle 1,0,n\mbox{ }|\phi(t) \rangle=0 $, gives \begin{align}|\langle 0,1,n\mbox{ }|\phi(t) \rangle|^2=\frac{|\alpha|^{2n}}{n!}e^{-|\alpha|^2}. \end{align} Here $\alpha$ is a complex number and $|\alpha|^{2}=\langle n \rangle$ is the mean number of photons in the cavity field. The population $P_{2}(t)$ then takes the form \begin{align} P_{2P-RWA}(t)=\sum_{n}\frac{\langle n \rangle^{n}}{n!}e^{-\langle n \rangle}\cos^2\left(\frac{t}{2} \Omega_{\mathrm{RWA-P}}(n)\right). \label{P2-RWA} \end{align} The conventional Rabi oscillations start to spread once the Poisson distribution of the photon number $n$ comes into the picture. Due to the Poisson distribution, the Rabi oscillations dephase and collapse after some time $t$. A revival of the collapsed oscillations occurs because the oscillation phases of neighbouring terms in eq.(\ref{P2-RWA}) differ by a factor of $2\pi$ \cite{dung1990collapses}. The collapse and revival of the Rabi oscillations can be seen explicitly by plotting $P_{2P-RWA}(t)$ with respect to time $t$ [eq.(\ref{P2-RWA})]. The conventional Rabi frequency is therefore anisotropic, as depicted in fig.(\ref{collapse_revival_RWA}): $P_{2P-RWA}(t)$ has different amplitudes for different values of the wave-vector angle $\theta$ [fig.(\ref{collapse_revival_RWA})]. A detailed analysis of the collapse and revival phenomenon can be found in the book of Scully et al. \cite{scully1999quantum}; the expressions for the collapse and revival times are derived as \cite{kumar2014quantum} \begin{align}t_{\mathrm{col}}(\bar{n})=\sqrt{\frac{2Log(10)}{\bar{n}}}\frac{\bar{n}}{\Omega_{RWA}(\bar{n})},\quad t_{\mathrm{rev}}(\bar{n})=\frac{2\pi\bar{n} }{\Omega_{RWA}(\bar{n})}. \end{align} \begin{figure}[H] \centering \subfigure[]{\includegraphics[width=70mm,height=65mm]{tcollapse_RWA.png}} \subfigure[]{\includegraphics[width=70mm,height=65mm]{trevival_RWA.png}}\\ \caption{\small (Color online) For the Rabi oscillations of phosphorene: {\bf(a)} collapse time for wave vector angles $\theta=\frac{\pi}{2}$ and $0$; {\bf(b)} revival time for wave vector angles $\theta=\frac{\pi}{2}$ and $0$. For plotting, we considered $\lambda=1$ and $\omega=1$. Time is in units of $\lambda^{-1}$.} \label{collapse_revival_time_RWA} \end{figure} \FloatBarrier \noindent Therefore, in the case of phosphorene, the final expressions for the collapse and revival times of the conventional Rabi oscillations are \begin{align}t_{\mathrm{col-RWA}}(\bar{n})=\sqrt{2Log(10)}\frac{1}{\lambda \sqrt{\gamma}},\quad t_{\mathrm{rev-RWA}}(\bar{n})=\frac{2\pi \sqrt{\bar{n}}}{\lambda \sqrt{\gamma}}. \label{colrev-RWA} \end{align} \noindent The collapse and revival times of the conventional Rabi oscillations in phosphorene [eq.(\ref{colrev-RWA})] are plotted in fig.(\ref{collapse_revival_time_RWA}). The collapse time $t_{\mathrm{col}}$ does not change with increasing photon number [fig.\ref{collapse_revival_time_RWA}({\bfseries{a}})], whereas $t_{\mathrm{rev}}$ increases continuously as the number of photons increases [fig.\ref{collapse_revival_time_RWA}({\bfseries{b}})]. Both $t_{\mathrm{col}}$ and $t_{\mathrm{rev}}$ possess an anisotropic nature in phosphorene.
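For the interested reader, eq.(\ref{P2-RWA}) can be evaluated numerically with a few lines of code. The sketch below is a minimal implementation using the parameter values quoted in the caption of fig.(\ref{collapse_revival_RWA}) ($\hbar=1$, zero detuning); the function and variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

# Parameters quoted in fig.(collapse_revival_RWA); hbar = 1 and zero detuning.
lam, u, m, vF, p, nbar = 1.0, 1.0, 1.0, 1.0, 1.0, 20.0

def gamma_aniso(theta):
    # Anisotropy factor gamma entering Omega_RWA-P(n) = lam*sqrt((n+1)*gamma).
    px, py = p * np.cos(theta), p * np.sin(theta)
    beta2 = (u * py**2 + m)**2 + (vF * px)**2
    return ((m + u * p**2 * np.sin(theta)**2)**2
            + u**2 * p**4 * np.sin(2.0 * theta)**2) / beta2

def population(t, theta, n_max=80):
    # Poisson-averaged population P_{2P-RWA}(t) of eq. (P2-RWA).
    n = np.arange(n_max)
    weights = np.exp(n * np.log(nbar) - nbar - gammaln(n + 1))  # Poisson weights
    omega = lam * np.sqrt((n + 1) * gamma_aniso(theta))
    return (weights * np.cos(0.5 * np.outer(t, omega))**2).sum(axis=1)

t = np.linspace(0.0, 60.0, 4000)
P_diag = population(t, np.pi / 4.0)   # theta = pi/4
P_axis = population(t, 0.0)           # theta = 0
\end{verbatim}
Plotting the two arrays against $t$ reproduces the qualitative collapse-revival behaviour and its dependence on the wave-vector angle $\theta$ discussed above.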
\subsubsection{Graphene}\label{RWAG} Following the same procedure as in section (\ref{RWAP}), the final RWA equations for graphene read \begin{subequations} \begin{align} i\frac{\partial}{\partial_t} \langle0,1,n+1|\phi \rangle_{2}=-\frac{\sqrt{n+1}\mbox{ }\lambda\;e^{-i \Delta_{G} t}v_{F}(p_{x}-i\;p_{y} )}{2v_{F} \sqrt{p_{x}^2+p_{y}^2}}\langle 1,0,n+1|\phi_{1} \rangle_{2} \label{RWA3} \end{align} \begin{align} i \frac{\partial}{\partial_t}\langle 1,0,n+1|\phi_{1} \rangle_{2}= - \frac{\sqrt{n+1}\;\lambda e^{i \Delta_{G} t}\mbox{ } v_{F}(p_{x} +i p_{y} )}{2v_{F} \sqrt{p_{x}^2+p_{y}^2}}\langle0,1,n+1|\phi \rangle_{2} \label{RWA4} \end{align} \end{subequations} Here $\Delta_{G}=\omega-2v_{F}\sqrt{p_{x}^2+p_{y}^2}$ is the detuning parameter in the case of graphene. Applying the initial conditions in quantized form [as in section (\ref{RWAP})], the probability of the state $\langle 1,0,n+1|\phi_{1} \rangle_{2}$ is \begin{align} P_{2G-RWA}(t)=\sum_{n}\frac{\langle n \rangle^{n}}{n!}e^{-\langle n \rangle}\cos^2\left(\frac{t}{2} \Omega_{\mathrm{RWA-G}}(n)\right). \label{P2-RWAG} \end{align} Here $\Omega_{\mathrm{RWA-G}}(n)=\sqrt{\Delta_{G}^2+(n+1) \lambda^2}$ is the conventional Rabi frequency of graphene and $\Delta_{G}$ is the detuning parameter, which becomes zero in the RWA. The expressions for the collapse and revival times \cite{kumar2014quantum} for graphene are \begin{align}t_{\mathrm{colRWA}}(\bar{n})=\sqrt{\frac{2Log(10)}{\bar{n}}}\frac{\bar{n}}{\lambda\sqrt{\bar{n}+1}},\quad t_{\mathrm{revRWA}}(\bar{n})=\frac{2\pi\bar{n} }{\lambda\sqrt{\bar{n}+1}}. \label{tcolrevG} \end{align} \begin{figure}[h] \centering\subfigure[]{\includegraphics[width=70mm,height=65mm]{collapse_revival_graphe.png}} \subfigure[]{\includegraphics[width=70mm,height=65mm]{graph_collapse_revival_time.png}}\\ \caption{\small (Color online) {\bf(a)} The isotropic nature of the collapse-revival phenomenon in graphene and {\bf(b)} the collapse and revival times for graphene. Both plots correspond to the conventional Rabi frequency. For plotting, we considered the mean number of photons $\langle n \rangle =20$, $\lambda=1$ and $v_{F}=1$. Time is in units of $\lambda^{-1}$.} \label{collapse_revival_RWAG} \end{figure} \noindent The collapse-revival phenomenon [eq.(\ref{P2-RWAG})] and the collapse and revival times $t_{\mathrm{col}}$ and $t_{\mathrm{rev}}$ [eq.(\ref{tcolrevG})] for graphene are depicted in fig.(\ref{collapse_revival_RWAG}). It can be seen from eq.(\ref{tcolrevG}) and fig.\ref{collapse_revival_RWAG}{\bf(b)} that $t_{\mathrm{colRWA}}$ varies only weakly with the number of photons, whereas $t_{\mathrm{revRWA}}$ grows steadily as the number of photons increases. \subsection{Floquet theory approximation}\label{Floq} \subsubsection{Phosphorene}\label{FloqP} If the external driving frequency $\omega$ is nearly equal to the particle-hole pair frequency $2v|p|$ or the resonant frequency $\omega_R$ of the system, i.e. $\omega \approx \omega_R$ and $\omega \approx 2v|p|$, the energy eigenvalue equation ($H\psi=E\psi$) is solved using the rotating wave approximation (RWA) \cite{haug2009quantum}, as described in section (\ref{RWA-appox}). On the other hand, when $\omega$ is much larger than the Rabi frequency $\omega_R$ and the resonant frequency $2v|p|$ for the creation of particle-hole pairs, i.e. $\omega \gg \omega_R$ and $\omega \gg 2v|p|$ (the off-resonant case), the Floquet approximation is applied to solve the energy eigenvalue equation. In Floquet theory, the Hamiltonian is decomposed into a series of harmonics, i.e.
$H = H_0 + e^{ -i \omega t } \mbox{ }V_{+} + e^{ i \omega t } \mbox{ }V_{-}$. Similarly, the wave-function is written in the harmonic-series form $\psi = \psi_0 + e^{ -i \omega t } \mbox{ }\psi_{+} + e^{ i \omega t } \mbox{ }\psi_{-}$. Here $H_0$ and $\psi_0$ are the slow parts, while $V_{+},V_{-}$ and $\psi_{+},\psi_{-}$ are the (coefficients of the) fast parts of the full Hamiltonian and wave-function, respectively. The Floquet-theory condition is that the external driving frequency $\omega$ is larger than the band gap. Substituting all these expressions into the energy eigenvalue equation $H\psi=E\psi$ and dropping the higher harmonics, i.e. terms of order $\frac{1}{\omega^{2}}$, the Hamiltonian $H$ can eventually be written in terms of the slow part only, \begin{eqnarray} H_{eff}=\left(H_0\mbox{ } +\frac{1}{\omega} \left[ V_{-} , \mbox{ }V_{+}\right]\right). \label{firstorder} \end{eqnarray} The {\it Floquet oscillation frequencies} are the eigenvalues of $H_{eff}$. Therefore, by comparing eq.(\ref{matrixP}) with $H|\phi(t)\rangle =\left( H_0 + e^{ -i \omega t } \mbox{ }V_{+} + e^{ i \omega t } \mbox{ }V_{-}\right)|\phi(t)\rangle$, the values of $H_{0}$, $V_{+}$ and $V_{-}$ can be read off. The Floquet energy eigenvalue equations, i.e. $i\frac{\partial}{\partial_{t}}|\phi(t)\rangle=H_{eff}|\phi(t)\rangle$, then take the form \begin{subequations} \begin{align}i\hbar\frac{\partial}{\partial_t}\langle 0,1,n+1|\phi \rangle&=\left(\frac{\frac{(n+1) (2 p_{y} u-v_{F})^2 \lambda^{2}}{4 v^{2}_{F}}-\frac{(n+1) (2 p_{y} u+v_{F})^2 \lambda^{2}}{4 v^{2}_{F}}}{\omega }\right)\langle 0,1,n+1|\phi \rangle \nonumber\\&\quad+\left(u p_{y}^2+m+i p_{x} v_{F}\right)\langle 1,0,n+1|\phi \rangle \end{align} \begin{align} i\hbar\frac{\partial}{\partial_t}\langle 1,0,n+1|\phi \rangle&=\left(\frac{\frac{(n+1) (2 p_{y} u+v_{F})^2 \lambda^{2}}{4 v^{2}_{F}}-\frac{(n+1) (2 p_{y} u-v_{F})^2 \lambda^{2}}{4 v^{2}_{F}}}{\omega }\right)\langle 1,0,n+1|\phi \rangle \nonumber\\&\quad+\left(u
$. One way is to replace $E_{i_0}$ and $E_0$ with rough estimations calculated by classical computational methods that are less costly. For example, in the case of quantum chemistry, we can use the Hartree-Fock (mean-field) method or the M{\o}ller–Plesset method~\cite{szabo2012modern}, which are much less costly on classical computers than the exact diagonalization method, to estimate $E_{i_0}$ and $E_0$. We call the coefficient $\mu_C$ determined by a classical method $\mu_C^{(\mathrm{ce})}$: \begin{equation}\label{eq:classical_estimate_mu} \mu_C^{(\mr{ce})} \coloneqq \frac{E_{i_0}^{(\mr{ce})} - E_0^{(\mr{ce})}}{C_{\min}^2}, \end{equation} where $E_{i_0}^{(\mr{ce})}, E_0^{(\mr{ce})}$ are classically-estimated values. Another way to estimate $E_{i_0} - E_0$ is to use the rigorous upper bound of it. Let $\hat{H} = \sum_j c_jP_j$ be a decomposition of a given Hamiltonian into Pauli operators. When we perform the VQE/VQD, this decomposition is already obtained~\cite{peruzzo2014variational, mcclean2016theory}. By denoting the spectral norm of operators as $\|\cdot\|$, it follows that $E_{i_0} - E_0 \leqq 2\|\hat{H}\| \leqq 2\sum_j |c_j|$ because $\|P_j\|=1$. In this way we define another choice of $\mu_C$, \begin{equation}\label{eq:loose_estimate_mu} \mu_C^{(\mathrm{rough})} \coloneqq \frac{2}{(C_{\min})^2}\sum_j |c_j|, \end{equation} which is easy and applicable to any system but may be too large for the fast convergence of the constrained VQE/VQD (see Sec.~\ref{sec:simulation}). We note that the classical estimation $\mu_C^\mr{(ce)}$ may suffer from the deviation from the exact value of $\mu_C^{\mr{(simple)}}$. When we overestimate the energy gap $E_{i_0} - E_0$ (and $\mu_C^{\mr{(simple)}}$), the cost function~\eqref{eq:cost_func1} with $\mu_C^{\mr{(ce)}}$ still works properly and we can find the desired eigenstate. When we underestimate $E_{i_0} - E_0$, it is possible that the minimum of the cost function differs from the desired eigenstate. We can nevertheless know whether such cases happen or not because we compute the value of $\braket{\psi(\bm{\theta})|(\hat{C}-c)^2|\psi(\bm{\theta})}$ during the optimization; when the value deviates from 0 drastically even after the optimization, we judge that the optimization fails to obtain the desired eigenstate. We can then adopt slightly larger $\mu_C$ for the optimization. Finally, we discuss several extensions of our result. First, our result applies to a case where we have multiple conserved quantities $\{\hat{C}^{(l)}\}_l$. The cost function in this case is \begin{equation} \bra{\psi(\bm{\theta})}\hat{H}\ket{\psi(\bm{\theta})} + \sum_{l}\mu_{C^{(l)}}\bra{\psi(\bm{\theta})}(\hat{C}^{(l)} - c^{(l)})^2\ket{\psi(\bm{\theta})}. \end{equation} The same discussion deriving Eq.~\eqref{eq:estimate_mu} leads to the condition for the optimized result to yield the eigenstate of $\hat{C}^{(l)}$ with eigenvalue $c^{(l)}$: \begin{equation} \mu_{C^{(l)}} \geqq \frac{E_{i_0} - E_0}{(C^{(l)}_{\min})^2}, \end{equation} where $C^{(l)}_{\min}$ is the smallest gap among distinct eigenvalues of $\hat{C}^{(l)}$. Second, the formula~\eqref{eq:estimate_mu} is also applicable to the other VQE-based algorithms to obtain the excited states, namely, SSVQE~\cite{nakanishi2018subspace} and MCVQE~\cite{Parrish2019}. The result is given by simply replacing $E_{i_0}$ of Eq.~\eqref{eq:estimate_mu} with the largest energy $E_{\mr{ex}}$ that one wants to obtain as a result of the optimization. 
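As a concrete illustration of Eqs.~\eqref{eq:classical_estimate_mu} and \eqref{eq:loose_estimate_mu}, the short sketch below evaluates the two estimates for a toy setting; the Pauli coefficients and classical energies used here are hypothetical placeholders, not values taken from any molecule treated later.
\begin{verbatim}
# Sketch of the two penalty-coefficient estimates (classical and rough).
# All numbers below are illustrative placeholders.

# |c_j| from a (hypothetical) Pauli decomposition H = sum_j c_j P_j
pauli_coeffs = [-0.81, 0.17, 0.17, -0.22, 0.12]

# Smallest gap between distinct eigenvalues of the conserved quantity C;
# e.g. for C = S_z (eigenvalues spaced by 1/2), C_min = 1/2.
C_min = 0.5

# Rough, always-applicable bound: mu_C >= 2 * sum_j |c_j| / C_min^2
mu_rough = 2.0 * sum(abs(c) for c in pauli_coeffs) / C_min**2

# Classical estimate: replace E_{i_0} - E_0 by cheaper classical energies,
# e.g. from a CISD or MP2 calculation (placeholder numbers here).
E_i0_ce, E_0_ce = -0.53, -1.14
mu_ce = (E_i0_ce - E_0_ce) / C_min**2

print(mu_ce, mu_rough)   # mu_ce is typically much smaller than mu_rough
\end{verbatim}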
In summary, we derive the formulas~\eqref{eq:mu_formula} and \eqref{eq:estimate_mu} for $\mu_C$ in $F^{(1)}(\bm{\theta})$ that apply to general quantum systems, and they always guarantee that we can obtain the desired eigenstates/energies as a result of the optimization of $F^{(1)}(\bm{\theta})$. \subsection{\label{subsec:theory_f_2} Analysis of $F^{(2)}$: Failure to obtain the desired eigenstates} In this section, we investigate the cost function $F^{(2)}(\bm{\theta})$ (Eq.~\eqref{eq:cost_func2}) and prove that we cannot obtain the exact desired energy/eigenstate by minimizing this cost function. Concretely, the resulting energy after the minimization of $F^{(2)}(\bm{\theta})$ deviates from the one that we want to obtain by $O(1/\mu_C)$ even in the best cases and can be completely wrong in the worst cases. Substituting Eq.~\eqref{eq: expand ansatz} into Eq.~\eqref{eq:cost_func2} leads to \begin{equation}\label{eq:exp_val_F2} F^{(2)}(\bm{\theta}) = \sum_{i = 0}^{2^n-1} |a_i|^2E_i + \mu_C\left(\sum_{i=0}^{2^n-1}|a_i|^2C_i - c\right)^2. \end{equation} We minimize this cost function with respect to the parameters $\{ |a_i|^2 | \sum_i |a_i|^2 = 1 \}_{i=0}^{2^n-1}$ and see whether $E_{i_0}$ is obtained as a result of the optimization (with the parameters $|a_{i_0}|^2= 1, |a_{i\neq i_0}|^2=0$). To graphically see the way the magnitude of $\mu_C$ affects the global minimum of the cost function, we consider an orthogonal coordinate plane whose vertical axis represents the expectation values of $\hat{C}$ and whose horizontal axis represents the expectation values of $\hat{H}$, as depicted in Fig.~\ref{fig:F2_graph}. For example, a point corresponding to some state $\ket{\phi}$ on this plane is $(\braket{\phi|\hat{C}|\phi}, \braket{\phi|\hat{H}|\phi})$. Hereafter, we call this coordinate plane the $(C,E)$ plane. For the ansatz state $\ket{\psi(\bm{\theta})}$ (Eq.~\eqref{eq: expand ansatz}), the expectation values are \begin{align} \bra{\psi(\bm{\theta})} \hat{C} \ket{\psi(\bm{\theta})} = \sum_{i=0}^{2^n-1} |a_i|^2C_i, \\ \bra{\psi(\bm{\theta})} \hat{H} \ket{\psi(\bm{\theta})} = \sum_{i=0}^{2^n-1} |a_i|^2E_i, \end{align} so all possible points corresponding to $\ket{\psi(\bm{\theta})}$ on the $(C, E)$ plane constitute the convex envelope defined by the $2^n$ points, $\{(C_0,E_0), (C_1,E_1),\ldots,(C_{2^n-1},E_{2^n-1})\}$ (the region colored cyan in Fig.~\ref{fig:F2_graph}). On this $(C,E)$ plane, a contour line of the cost function~\eqref{eq:exp_val_F2} (\textit{i.e.}, a set of points where Eq.~\eqref{eq:exp_val_F2} takes the same value) is a parabola: \begin{equation}\label{eq:quadfunc_F2} E = -\mu_C (C - c)^2 + f, \end{equation} where $f$ is the value of Eq.~\eqref{eq:exp_val_F2}. Therefore, the minimization of the cost function is identical to finding the smallest $f$ such that the parabola~\eqref{eq:quadfunc_F2} and the convex envelope defined by $\{(C_i, E_i)\}_{i=0}^{2^n-1}$ have a non-empty intersection. The smallest $f$ is achieved when the parabola \eqref{eq:quadfunc_F2} touches the polygon at just one point as shown in Fig.~\ref{fig:F2_graph}. Depending on the location of the point $(c, E_{i_0})$ corresponding to the desired eigenstate, we can consider two cases: \begin{enumerate} \item[(A)] The point $(c, E_{i_0})$ is a boundary point of the convex envelope (the blue point in Fig.~\ref{fig:F2_graph}), \item[(B)] The point $(c, E_{i_0})$ is an interior point of the convex envelope (the red point in Fig.~\ref{fig:F2_graph}). 
\end{enumerate} In case (A), unless $i_0 = 0$ (the desired state is the ground state of the Hamiltonian), the global minimum of the cost function is reached at a point slightly different from $(c, E_{i_0})$ (the green and yellow points in Fig.~\ref{fig:F2_graph}). Let us write an edge of the convex envelope that is tangent to the parabola~\eqref{eq:quadfunc_F2} as $E-E_{i_0} = \alpha (C - c)$ with some real number $\alpha$ on the $(C, E)$ plane. It is straightforward to show that \begin{align} \label{eq:tangent_point_f2} (C_t, E_t) &= \left( c - \frac{\alpha}{2\mu_C}, E_{i_0} - \frac{\alpha^2}{2\mu_C} \right), \\ \label{eq:fmin_f2} f_{\min} &= E_{i_0} - \frac{\alpha^2}{4\mu_C}, \end{align} where $(C_t, E_t)$ is a coordinate of the tangent point and $f_{\min}$ is the value of $f$ in Eq.~\eqref{eq:quadfunc_F2} for $(C,E) = (C_t, E_t)$, \textit{i.e.}, the minimal value of the cost function. Those equations mean that the expectation values $(\bra{\psi(\bm{\theta})} \hat{C} \ket{\psi(\bm{\theta})}, \bra{\psi(\bm{\theta})} \hat{H} \ket{\psi(\bm{\theta})})$ at the global minimum of the cost function~\eqref{eq:cost_func2} {\it always} deviate from the target ones $(c, E_{i_0})$ by $O(1/\mu_C)$ for any finite $\mu_C$. Only for infinitely large $\mu_C$, does the tangent point become $(c, E_{i_0})$, and we get the desired eigenstate. On the other hand, in case (B), the desired eigenstate can never be obtained as a result of the optimization even for infinitely large $\mu_C$; the desired point $(c, E_{i_0})$ has no chance to be a tangent point to the parabola~\eqref{eq:quadfunc_F2}. \begin{figure} \centering \includegraphics[width = \linewidth]{figure_F2_ver3.jpg} \caption{A schematic diagram of the location of the global minimum of the cost function $F^{(2)}(\bm{\theta})$. In this diagram, small black points correspond to simultaneous eigenstates for $\hat{H}$ and $\hat{C}$ (Eq.~\eqref{eq:spec_decomp}), and the convex envelope defined by them is colored cyan. The curves colored yellow, green, and pink are the parabolas~\eqref{eq:quadfunc_F2} with small, medium-sized, and large $\mu_C$, respectively. The tangent points to the convex envelope (\textit{i.e.}, the global minimum of the cost function) are also indicated by the points of the same color. The dashed vertical line represents the desired expectation value of the conserved quantity.} \label{fig:F2_graph} \end{figure} Before ending this section, we give another intuitive explanation for the reason why $F^{(1)}(\bm{\theta})$ works properly to choose the desired eigenstate while $F^{(2)}(\bm{\theta})$ does not. In the expression of $F^{(1)}(\bm{\theta})$ (the right hand side of Eq.~\eqref{eq:costfunc_exp}), both the ``energy part" (the first term) and the ``conserved-quantity" part (the second term) are proportional to $|a_i|^2$. On the other hand, in $F^{(2)}(\bm{\theta})$ (the right hand side of Eq.~\eqref{eq:exp_val_F2}), the conserved-quantity part is a quadratic function of $|a_i|^2$ while the energy part is proportional to $|a_i|^2$. Since $|a_i|^2 \leqq 1$ for all $i$, the deviation in the conserved-quantity part from the desired value is less penalized for $F^{(2)}(\bm{\theta})$ than for $F^{(1)}(\bm{\theta})$. This makes the difference between the performance of the cost functions $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$. Indeed, the minimum of $F^{(2)}(\bm{\theta})$ in case (A) is achieved at a point where the value of the conserved quantity deviates from the desired one, and the energy gets smaller than the desired energy. 
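Equations~\eqref{eq:tangent_point_f2} and \eqref{eq:fmin_f2} can be checked mechanically: restricted to the edge $E - E_{i_0} = \alpha(C-c)$, the cost~\eqref{eq:exp_val_F2} reduces to a quadratic function of $C$, and minimizing it reproduces the tangent point and the minimal value. The short symbolic sketch below (a verification aid, not part of the original analysis; $\alpha$ and $\mu_C$ are treated as free symbols) confirms this.
\begin{verbatim}
# Symbolic check of the tangent point and minimal cost of F^(2) on an edge
# E - E_i0 = alpha*(C - c) of the convex envelope.
import sympy as sp

C, c, E0, alpha, mu = sp.symbols('C c E_i0 alpha mu_C', real=True)
E_edge = E0 + alpha*(C - c)                 # edge of the convex envelope
f_edge = E_edge + mu*(C - c)**2             # value of F^(2) along the edge

C_t = sp.solve(sp.diff(f_edge, C), C)[0]    # stationary point in C
E_t = E_edge.subs(C, C_t)
f_min = sp.simplify(f_edge.subs(C, C_t))

print(sp.simplify(C_t - (c - alpha/(2*mu))))        # 0
print(sp.simplify(E_t - (E0 - alpha**2/(2*mu))))    # 0
print(sp.simplify(f_min - (E0 - alpha**2/(4*mu))))  # 0
\end{verbatim}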
In short, we show that the cost function $F^{(2)}(\bm{\theta})$ for the constrained VQE/VQD (Eq.~\eqref{eq:cost_func2}) does not work at all in a rigorous sense. We can only obtain a slightly-deviated desired eigenstate with an error of $O(1/\mu_C)$ even in the best cases while we obtain a totally different eigenstate in the worst case. \subsection{Analysis of noise robustness of $F^{(1)}$ and $F^{(2)}$} Here, we investigate the effects of noise on the performance of the two cost functions of $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$. To make analysis simple and obtain the general tendency of the robustness of $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$ to noise, we consider the $n$-qubit depolarizing channel to represent the noise. The $n$-qubit depolarizing channel outputs the completely mixed state with a certain probability and outputs the original input state otherwise~\cite{nielsen_chuang_2010}. More formally, the $n$-qubit depolarizing channel $\Delta_p$ with probability $p$ is defined as \begin{equation} \Delta_p(\rho) \coloneqq (1-p)\rho + p\frac{I}{2^n} \end{equation} for all $n$-qubit states $\rho$, where $0\leqq p \leqq 1$ and $I$ is the $2^n\times 2^n$ identity matrix. For the analysis of noise, we suppose that the depolarizing channel is applied to the ansatz state $\ket{\psi(\bm{\theta})}$ resulting in the mixed state \begin{equation}~\label{eq:noisy_ansatz_state} \begin{aligned} \rho_p(\bm{\theta}) &\coloneqq \Delta_p(\ket{\psi(\bm{\theta})}\bra{\psi(\bm{\theta})})\\ &=(1-p)\ket{\psi(\bm{\theta})}\bra{\psi(\bm{\theta})} + p\frac{I}{2^n}. \end{aligned} \end{equation} When $p=1$, the resulting state is the completely mixed state $I/2^n$, which does not depend on the parameters. In this case, the cost function is a constant function, and the optimization becomes trivial. Hereafter, we take $0\leqq p < 1$ to avoid this situation. First, we investigate the effects of this noise on $F^{(1)}(\bm{\theta})$. Let $\tilde{F}^{(1)}(\bm{\theta})$ be the modified version of $F^{(1)}(\bm{\theta})$ where the expectation values with respect to $\ket{\psi(\bm{\theta})}$ are replaced with those with respect to $\rho_p(\bm{\theta})$. Then, considering the expansion of $\ket{\psi(\bm{\theta})}$ as in Eq.~\eqref{eq: expand ansatz} and the noisy ansatz state~\eqref{eq:noisy_ansatz_state}, we obtain \begin{equation} \begin{aligned} \tilde{F}^{(1)}(\bm{\theta}) &= (1-p)\sum_{i=0}^{2^n-1} |a_i|^2(E_i + \mu_C (C_i - c)^2) \\ &+ p\mathrm{Tr}(\hat{H}+\mu_C(\hat{C}-c)^2). \end{aligned} \end{equation} Since the second term of $\tilde{F}^{(1)}(\bm{\theta})$ does not depend on the parameters $\bm{\theta}$, the optimization of $\tilde{F}^{(1)}(\bm{\theta})$ with respect to $\bm{\theta}$ can be completed by the optimization of the first term. The first term is identical to $F^{(1)}(\bm{\theta})$ up to the constant coefficient $(1-p)$; therefore, the optimization of $\tilde{F}^{(1)}(\bm{\theta})$ yields the same optimal parameters as that of $F^{(1)}(\bm{\theta})$. Thus, in this case, the noise has no effect and we can obtain the desired eigenstate if we set the penalty coefficient $\mu_C$ appropriately as in Eq.~\eqref{eq:mu_formula}. Next, we analyze the effects of the $n$-qubit depolarizing noise on $F^{(2)}(\bm{\theta})$. Similarly, consider the modified version $\tilde{F}^{(2)}(\bm{\theta})$ where the expectation values are computed with respect to $\rho_p(\bm{\theta})$. 
By using the expansions of $\ket{\psi(\bm{\theta})}$ (Eq.~\eqref{eq: expand ansatz}) and the noisy ansatz state (Eq.~\eqref{eq:noisy_ansatz_state}), we obtain \begin{equation}\label{eq:exp_val_F2_noise} \begin{aligned} \tilde{F}^{(2)}(\bm{\theta}) &= (1-p)\sum_{i = 0}^{2^n-1} |a_i|^2E_i + p\mathrm{Tr}(\hat{H}) \\ &+ \mu_C\left((1-p)\sum_{i=0}^{2^n-1}|a_i|^2C_i - (c - p\mathrm{Tr}(\hat{C}))\right)^2. \end{aligned} \end{equation} A contour line of the cost function $\tilde{F}^{(2)}(\bm{\theta})$ on the $(C,E)$ plane considered in Sec.~\ref{subsec:theory_f_2} becomes a parabola again: \begin{equation}\label{eq:quadfunc_F2_noise} \begin{aligned} (1-p)E = &-\mu_C ((1-p)C - (c - p\mathrm{Tr}(\hat{C})))^2 \\ &+ \tilde{f} - p\mathrm{Tr}(\hat{H}), \end{aligned} \end{equation} where $\tilde{f}$ is the value of Eq.~\eqref{eq:exp_val_F2_noise}. The minimum of $\tilde{F}^{(2)}(\bm{\theta})$ is achieved when the parabola \eqref{eq:quadfunc_F2_noise} touches the convex envelope constructed from the set of points $\{(C_i,E_i)\}_i$ corresponding to the eigenvalues of $\hat{C}$ and $\hat{H}$. Intuitively, observing that the axis of symmetry of the parabola deviates from $C=c$, we expect that the optimization of $\tilde{F}^{(2)}(\bm{\theta})$ does not yield the desired eigenstate whose eigenvalue for $\hat{C}$ is $c$. Indeed, it is straightforward to obtain \begin{align*} \tilde{C}_t &= \frac{1}{1-p}\left((c-p\mathrm{Tr}(\hat{C})) - \frac{\alpha}{2\mu_C}\right) \\ &= C_t + p\left(c-\mathrm{Tr}(\hat{C})-\frac{\alpha}{2\mu_C}\right) + O(p^2),\\ \tilde{E}_t &= E_{i_0} - \frac{\alpha}{1-p}\left(\frac{\alpha}{2\mu_C} + p\mathrm{Tr}(\hat{C})\right)\\ &= E_t -p\left(\alpha\mathrm{Tr}(\hat{C})+\frac{\alpha^2}{2\mu_C}\right) + O(p^2),\\ \tilde{f}_{\min} &= f_{\min} + p(\mathrm{Tr}(\hat{H})-\alpha\mathrm{Tr}(\hat{C})-E_{i_0}), \end{align*} where $(\tilde{C}_t, \tilde{E}_t)$ is a coordinate of the tangent point and $\tilde{f}_{\min}$ is the value of $\tilde{F}^{(2)}(\bm{\theta})$ at that point. We again write an edge of the convex envelope that is tangent to the parabola as $E-E_{i_0} = \alpha (C - c)$. In addition, $(C_t,E_t)$ and $f_{\min}$ are the values in the noiseless case as shown in Eqs.~\eqref{eq:tangent_point_f2} and \eqref{eq:fmin_f2}. The above equations tell us that the optimal parameters that minimize $\tilde{F}^{(2)}(\bm{\theta})$ deviate from those minimizing $F^{(2)}(\bm{\theta})$ due to the non-zero probability $p$ of the depolarizing noise. This is in stark contrast to the case of $F^{(1)}(\bm{\theta})$ and $\tilde{F}^{(1)}(\bm{\theta})$. We have shown that $F^{(1)}(\bm{\theta})$ is robust to the $n$-qubit depolarizing noise while $F^{(2)}(\bm{\theta})$ is not. We expect that this tendency of the noise vulnerability of the two cost functions applies to more general types of noise, although further detailed analysis is needed. \section{\label{sec:simulation} Numerical simulations} In this section, we numerically simulate the constrained VQE/VQD for two molecules, $\ce{H_2}$ and $\ce{H_4}$, to validate our results presented in the previous section. We also compare the performance of the two cost functions $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$ in practical calculations. Our setup for numerical simulations is as follows. We consider a hydrogen molecule $\ce{H_2}$ at bond distance $R=\SI{0.7414}{\AA}$ and a hydrogen chain $\ce{H_4}$ where four hydrogen atoms are aligned in a line with identical bond distance $R=\SI{2.0}{\AA}$.
We adopt the STO-3G minimal basis set to perform the restricted Hartree-Fock calculation for the two molecules. We prepare the fermionic second-quantized Hamiltonian for electrons using PySCF~\cite{Sun2018_pyscf} and Openfermion~\cite{mcclean2017openfermion} and use Jordan-Wigner transformation~\cite{Jordan1928} to map the fermionic Hamiltonians into qubit Hamiltonians~\cite{mcardle2018quantum, Cao2018}. The numbers of qubits to express the Hamiltonian are $4$ and $8$ for $\ce{H_2}$ and $\ce{H_4}$, respectively. As conserved quantities, we consider the total number of electrons $\hat{N}_e$, the total spin-squared operator $\hat{S}^2$, and the total $z$-component of the spin $\hat{S}_z$. Those conserved quantities are also transformed into operators on qubits by Jordan-Wigner transformation. We employ the hardware-efficient type ansatz~\cite{kandala2017hardware} shown in Fig.~\ref{fig:HEA} for $\ce{H_2}$. The depth of the ansatz is $D=4$ in the constrained VQE to compute the $\ce{T_1}$ state (defined later) and $D=12$ in the constrained VQD to compute the $\ce{S_1}$ state (defined later). We adopt the real-valued symmetry-preserving type ansatz of the depth $D=12$ introduced in Refs.~\cite{ibe2020calculating,Gard2019} for $\ce{H_4}$ to perform the constrained VQE/VQD. We do not include any noise for quantum circuit simulations, and exact expectation values are used in numerical simulations. All simulations are performed by the high-speed quantum circuit simulator Qulacs~\cite{qulacs_2018}. \begin{figure} \centering \includegraphics[width = 0.8\linewidth]{HEA_orig.png} \caption{Quantum circuit for the hardware-efficient type ansatz. Each of $R_Y = e^{i\theta Y/2}$ and $R_Z = e^{i\theta Z/2}$ gates has an independent parameter $\theta$, where $Y$ and $Z$ are the Pauli $Y, Z$ operators. $D$ denotes the depth of the ansatz.} \label{fig:HEA} \end{figure} \subsection{\label{subsec:comparison_mu} Numerical demonstration for analysis on $F^{(1)}$} We perform simulations to validate the analysis on $F^{(1)}(\bm{\theta})$ in Sec.~\ref{subsec: construction_mu} and investigate how the magnitude of penalty coefficient $\mu_C$ affects the accuracy and the convergence of the optimization. As for the target state of the constrained VQE/VQD, we consider the $\ce{T_1}$ state (the ground state in the triplet sector) which corresponds to $\hat{S}^2 = 2$ and $\hat{S}_z = -1$ and the $\ce{S_1}$ state (the first excited state in the singlet sector) which corresponds to $\hat{S}^2 = 0$ and $\hat{S}_z = 0$. Note that the ground state is the spin singlet ($\hat{S}^2 = 0$ and $\hat{S}_z = 0$) state for both the $\ce{H_2}$ and $\ce{H_4}$ molecules. We set the penalty terms for some of $\hat{N}, \hat{S}^2$ and, $\hat{S}_z$ depending on the molecule and the target state, which is summarized in TABLE~\ref{table_penalty_summary}. We compute the energy of the $\ce{S_1} (\ce{T_1})$ state as the ground state (first excited-state) energy of the constrained VQE (VQD) with these constraints. \begin{table} \centering \begin{tabular}{c c c c c} \hline && $\ce{H_2}$ && $\ce{H_4}$ \\ \hline triplet $\ce{T_1}$ && $\hat{N}_e=2, \hat{S}^2=2, \hat{S}_z=-1$ && $\hat{S}^2=2, \hat{S}_z=-1$ \\ singlet $\ce{S_1}$ && $\hat{N}_e=2, \hat{S}^2
=0$ && $\hat{S}^2=0$ \\ \hline \end{tabular} \caption{The penalty terms included in the cost functions of the constrained VQE/VQD in numerical simulations. Since the real-valued symmetry-preserving type ansatz~\cite{ibe2020calculating, Gard2019} preserves the number of electrons of the reference state (see Eq.~\eqref{eq: ansatz state}) and we take $\ket{\psi_0} = \ket{00001111}$, we do not set the penalty on $\hat{N}_e$ for $\ce{H_4}$. \label{table_penalty_summary}} \end{table} First, we estimate $\mu_C^{(\mathrm{ce})}$ (Eq.~\eqref{eq:classical_estimate_mu}) and $\mu_C^{(\mathrm{rough})}$ (Eq.~\eqref{eq:loose_estimate_mu}) for the $\ce{H_2}$ and $\ce{H_4}$ molecules. We use the configuration interaction singles and doubles (CISD) method implemented in PySCF~\cite{Sun2018_pyscf} to compute the classical estimations of the energies appearing in $\mu_C^{(\mathrm{ce})}$. Note that we may use other less-costly methods for large molecules. The values of $\mu_C$ are presented in TABLE~\ref{table_mu}. As expected, $\mu_C^{(\mathrm{rough})}$ is much larger than $\mu_C^{(\mathrm{ce})}$ for all coefficients, which may cause the slow convergence of the optimization in the constrained VQE/VQD. Next, by using the values of $\mu_C$ in TABLE~\ref{table_mu}, we numerically simulate the constrained VQE/VQD with the cost function $F^{(1)}(\bm{\theta})$ (Eq.~\eqref{eq:cost_func1}). The optimization is performed by several optimizers (Powell, conjugate gradient (CG), and Broyden-Fletcher-Goldfarb-Shanno (BFGS)) implemented in SciPy, a numerical library of Python~\cite{Virtanen_2020} with default parameters. We simulate the constrained VQE (VQD) to obtain the $\ce{T_1}$ state ($\ce{S_1}$ state) for both the $\ce{H_2}$ and $\ce{H_4}$ molecules. When performing the constrained VQD for the $\ce{S_1}$ state, we take a hyperparameter $\beta_0$ (see Eq.~\eqref{VQD_costfunc}) as $\beta_0 = 2(E_{\ce{S_1}} - E_0)$, where $E_{\ce{S_1}}$ and $E_0$ are energies of the $\ce{S_1}$ state and the ground state, respectively. This choice of $\beta_0$ is based on the original VQD proposal~\cite{higgott2019variational} to ensure that the VQD cost function $L_{\mr{VQD}}(\bm{\theta})$ brings out the excited state properly. In practice we also estimate the value of $\beta_0$ in the same way as we do for $\mu_C$, $ \beta_0^{(\mr{ce})} = 2(E_{\ce{S_1}}^{(\mr{ce})} - E_0^{(\mr{ce})})$ and $\beta_0^{(\mr{rough)}} = 4\sum_i |c_i|$, where the superscript ``(ce)" represents the classical estimation of the energy and $\{c_i\}_i$ are the coefficients appearing in the decomposition of the Hamiltonian into Pauli operators, $\hat{H} = \sum_i c_iP_i$. We adopt $\beta_0^{(\mr{ce})}$ for the simulation with $\mu_C^{(\mr{ce})}$ and $\beta_0^{(\mr{rough})}$ for that with $\mu_C^{(\mr{rough})}$. We run the simulations for ten different initial values of $\bm{\theta}$ in each case and compute the average number of the evaluations of $F^{(1)}(\theta)$ and the average value of the expectation value of the Hamiltonian for the optimized ansatz state. The results for $\ce{H_2}$ and $\ce{H_4}$ are shown in TABLEs~\ref{table_H2_mu} and \ref{table_H4_mu}, respectively. Those results clearly indicate that the penalty coefficient $\mu_C$ determined by our formulas in Sec.~\ref{subsec: construction_mu} is sufficient for the cost function to choose the desired eigenstates. Moreover, we observe that the number of function evaluations is much smaller for $\mu_C^{(\mathrm{ce})}$ than for $\mu_C^{(\mathrm{rough})}$. 
The formula for $\mu_C^{(\mathrm{rough})}$ is convenient and always applicable as long as the Hamiltonian is known, but it may be inappropriate for the fast convergence in practical calculations. We note that the optimization result is slightly worse for the Powell method possibly due to the imperfectness of the optimization, nevertheless the tendency of the small number of function evaluations in $\mu_C^{(\mathrm{ce})}$ is well observed. \begin{table} \centering \begin{tabular}{c c c c c c c c} \hline &\multicolumn{4}{c}{$\mu_C^{(\mathrm{ce})}$} &&\multicolumn{2}{c}{$\mu_C^{(\mathrm{rough})}$}\\\hline &\multicolumn{2}{c}{$\ce{H_2}$} &\multicolumn{2}{c}{$\ce{H_4}$} &&$\ce{H_2}$ &$\ce{H_4}$ \\\hline &Triplet T$_1$& Singlet S$_1$& Triplet &Singlet&\\\hline $\mu_{N}$ &$0.6048$ &$0.9674$ &$0.03421$ &$0.03711$ &&$3.968$ &$12.11$\\ $\mu_{S^2}$ &$1.075$ &$1.720$ &$0.06082$ &$0.06597$ && $7.054$ &$21.52$\\ $\mu_{S_z}$ &$2.419$ &$3.869$ &$0.1368$ &$0.1484$ &&$15.87$ &$48.42$\\ \hline \end{tabular} \caption{Penalty coefficients calculated by our formulas~\eqref{eq:classical_estimate_mu} and \eqref{eq:loose_estimate_mu} for the target eigenstates of $\ce{H_2}$ and $\ce{H_4}$.} \label{table_mu} \end{table} \begin{table*} \centering \begin{tabular}{c c r r r r c r r r r } \hline && \multicolumn{4}{c}{$\ce{H_2}$ Triplet $\ce{T_1}$} && \multicolumn{4}{c}{$\ce{H_2}$ Singlet $\ce{S_1}$} \\\hline && \multicolumn{2}{c}{$\mu_C^{(\mathrm{ce})}$} & \multicolumn{2}{c}{$\mu_C^{(\mathrm{rough})}$} && \multicolumn{2}{c}{$\mu_C^{(\mathrm{ce})}$} & \multicolumn{2}{c}{$\mu_C^{(\mathrm{rough})}$} \\\hline Optimizer && \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} & \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} && \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} & \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals}\\\hline Powell &&$\SI{3569}{}$ & $\SI{0.000011}{}$ & $\SI{5073}{}$ & $\SI{0.000002}{}$ && $\SI{35484}{}$ & $\SI{-0.000219}{}$ & $\SI{48863}{}$ & $\SI{0.064635}{}$ \\\hline CG && $\SI{5055}{}$ &$\SI{0.000000}{}$ & $\SI{7987}{}$ & $\SI{0.000000}{}$ && $\SI{142559}{}$ &$\SI{0.000000}{}$ & $\SI{263750}{}$ &$\SI{0.000000}{}$ \\\hline BFGS && $\SI{1406}{}$ &$\SI{0.000000}{}$ & $\SI{1615}{}$ &$\SI{0.000000}{}$ && $\SI{16128}{}$ & $\SI{0.000000}{}$ & $\SI{19373}{}$ & $\SI{0.000000}{}$ \\\hline \end{tabular} \caption{The results of the numerical simulations for calculating the $\ce{T_1}$ state and the $\ce{S_1}$ state of $\ce{H_2}$ by the constrained VQE/VQD with the cost function $F^{(1)}(\bm{\theta})$ whose penalty coefficient is $\mu^{(\mathrm{ce})}$ or $\mu^{(\mathrm{rough})}$. The optimizer is chosen from the Powell, CG, and BFGS methods. The term ``nfev" is the number of evaluations of the cost function during the optimization, and ``residuals" represents the difference between the expectation value of the Hamiltonian for the resulting optimized state and the theoretical (or desired) energy in Hartree (Ha). The theoretical value for the $\ce{T_1}$ state is $\SI{-0.532479}{Ha}$, and that of the $\ce{S_1}$ state is $\SI{-0.169901}{Ha}$. We perform the constrained VQE/VQD for ten different initial values, and all values shown in this table are the average for the ten trials. 
} \label{table_H2_mu} \end{table*} \begin{table*} \centering \begin{tabular}{c c r r r r c r r r r } \hline && \multicolumn{4}{c}{$\ce{H_4}$ Triplet $\ce{T_1}$} && \multicolumn{4}{c}{$\ce{H_4}$ Singlet $\ce{S_1}$} \\\hline && \multicolumn{2}{c}{$\mu_C^{(\mathrm{ce})}$} & \multicolumn{2}{c}{$\mu_C^{(\mathrm{rough})}$} && \multicolumn{2}{c}{$\mu_C^{(\mathrm{ce})}$} & \multicolumn{2}{c}{$\mu_C^{(\mathrm{rough})}$} \\\hline Optimizer && \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} & \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} && \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals} & \multicolumn{1}{c}{nfev} & \multicolumn{1}{c}{residuals}\\\hline Powell &&$\SI{20405}{}$ &$\SI{0.007091}{}$ & $\SI{110822}{}$ &$\SI{0.082902}{}$ && $\SI{42933}{}$ &$\SI{0.024028}{}$ & $\SI{266387}{}$ & $\SI{0.138162}{}$ \\\hline CG && $\SI{60563}{}$ &$\SI{0.000000}{}$ & $\SI{263041}{}$ &$\SI{0.009657}{}$ && $\SI{115915}{}$ & $\SI{0.000000}{}$ & $\SI{528539}{}$ &$\SI{0.020683}{}$ \\\hline BFGS && $\SI{17604}{}$ &$\SI{0.000000}{}$ & $\SI{147063}{}$ &$\SI{0.001421}{}$ && $\SI{31042}{}$ &$\SI{0.000000}{}$ & $\SI{252330}{}$ & $\SI{0.000000}{}$ \\\hline \end{tabular} \caption{ The results of the numerical simulations for calculating the $\ce{T_1}$ state and the $\ce{S_1}$ state of $\ce{H_4}$ by the constrained VQE/VQD with the cost function $F^{(1)}(\bm{\theta})$ whose penalty coefficient is $\mu^{(\mathrm{ce})}$ or $\mu^{(\mathrm{rough})}$. The optimizer is chosen from the Powell, CG, and BFGS methods. The term ``nfev" is the number of evaluations of the cost function during the optimization, and ``residuals" represents the difference between the expectation value of the Hamiltonian for the resulting optimized state and the theoretical (or desired) energy in Hartree (Ha). The theoretical value for the $\ce{T_1}$ state is $\SI{-1.881876}{Ha}$, and that of the $\ce{S_1}$ state is $\SI{-1.856584}{Ha}$. We perform the constrained VQE/VQD for ten different initial values, and all values shown in this table are the average for the ten trials. } \label{table_H4_mu} \end{table*} \subsection{\label{subsec:comparison_F1F2} Numerical demonstration for analysis on $F^{(2)}$ and comparison of practical performance of two cost functions} We perform the following two simulations to validate our analysis on $F^{(2)}(\bm{\theta})$ in Sec.~\ref{subsec:theory_f_2} and to compare the two cost functions $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$. \begin{itemize} \item[(i)] Computing the energy of the $\ce{T_1}$ state for the $\ce{H_4}$ by the constrained VQE with the two cost functions having a range of penalty coefficients, $\mu_C = \mu_{S^2} = \mu_{S_z} = 0.01, 0.1, 1, 10, 100$. \item[(ii)] Computing the energy of the $\ce{S_1}$ state of $\ce{H_2}$ using the constrained VQD under the constraint $\hat{N}_e = 2$ with $\mu_C = \mu_{N_e} = 0.01, 0.1, 1, 10, 100$. Since we set the constraint only for $\hat{N}_e$ in this case, the $\ce{S_1}$ state is obtained as the fourth excited state of the constrained VQD $(k=4)$. The hyperparameters of the VQD $\{\beta_i\}_{i=0}^3$ (see Eq.~\eqref{VQD_costfunc}) are all set to $3.0$. \end{itemize} For both simulations, we use the BFGS optimizer, which shows the best performance in Sec.~\ref{subsec:comparison_mu}. The results of simulation (i) are shown in TABLE~\ref{table_cost}. We perform the constrained VQE for ten different initial values and calculate the average of the expectation values of the Hamiltonian for the resulting optimized states. 
Unlike in the previous section, we calculate the average number of ``Pauli measurements" to assess the computational cost for the constrained VQE. The number of Pauli measurements is defined as (the number of evaluations of the cost function during the optimization) $\times$ (the number of Pauli operators to be measured to evaluate the cost function once). This number represents the actual computational cost to perform experiments on a real NISQ device~\cite{mcclean2016theory}. From TABLE~\ref{table_cost}, we can observe two facts. First, the accuracy (``residuals") of the optimization tends to improve with $1/\mu_C$, which validates the discussion in Sec.~\ref{subsec:theory_f_2}. Second, more importantly from the viewpoint of practical applications, the number of Pauli measurements for $F^{(2)}(\bm{\theta})$ is smaller than that for $F^{(1)}(\bm{\theta})$. This is mainly because the evaluation of $F^{(1)}(\bm{\theta})$ involves the evaluation of the expectation value of the operator $(\hat{C}-c)^2$ that consists of more Pauli operators than the original $\hat{C}$ does. The evaluation of $F^{(2)}(\bm{\theta})$ necessitates only the expectation value of $\hat{C}$ itself. If we choose the best cases for $F^{(1)}(\bm{\theta}) \: (\mu_C =0.01)$ and $F^{(2)}(\bm{\theta}) \: (\mu_C =100)$, where both cost functions give sufficiently accurate results, the number of Pauli measurements still is smaller for $F^{(2)}(\bm{\theta})$. This interesting observation indicates that even though $F^{(2)}(\bm{\theta})$ cannot theoretically achieve an exact desired energy, it may achieve a sufficiently accurate energy with small computational costs for some cases. Nevertheless, we also find a practical case where $F^{(2)}(\bm{\theta})$ can never achieve a target energy even with infinitely large $\mu_C$ in simulation (ii). The results of simulation (ii) are shown in TABLE~\ref{table_F2_fail}. The optimization of $F^{(1)}(\bm{\theta})$ gives the correct values for sufficiently large $\mu_C$. On the other hand, the optimization of $F^{(2)}(\bm{\theta})$ yields results crucially far from the exact value even for large $\mu_C$. In this case, the target eigenstate is inside the convex envelope explained in Sec.~\ref{subsec:theory_f_2} and we simply cannot obtain that state by optimizing the cost function $F^{(2)}(\bm{\theta})$. To summarize, we numerically validate our theoretical analysis on $F^{(2)}(\bm{\theta})$ in Sec.~\ref{subsec:theory_f_2} in practical quantum chemistry calculations. We find examples for both cases (A) and (B) in Sec.~\ref{subsec:theory_f_2}, where one can obtain the desired energy with an error of $O(1/\mu_C)$ (case (A)) or cannot obtain it even with infinitely large $\mu_C$ (case (B)). Moreover, we find that sometimes the cost function $F^{(2)}(\bm{\theta})$ can find a sufficiently accurate energy with less computational cost than $F^{(1)}(\bm{\theta})$, reflecting the difference in the amount of effort to evaluate the cost functions on NISQ devices. We stress that the use of $F^{(2)}(\bm{\theta})$ may be preferable in some cases to reduce the computational cost, but whether we can obtain the target state as a result of the optimization cannot be guaranteed at all {\it a priori}. 
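The difference in measurement cost can be made concrete with a small counting exercise. The sketch below is a toy example restricted to Z-type Pauli strings (so that products of strings are easy to track); it is meant only to illustrate the counting argument, not to reproduce the exact operator sets used in the simulations. It counts the distinct Pauli terms of a number-type conserved quantity $\hat{C}$ and of the squared penalty operator $(\hat{C}-c)^2$ required by $F^{(1)}(\bm{\theta})$.
\begin{verbatim}
# Toy illustration: F^(1) requires measuring (C - c)^2, which contains many
# more distinct Pauli strings than C itself. Here C is the number operator
# N = sum_i (I - Z_i)/2; Z-type strings are encoded as frozensets of indices
# (Z_i * Z_i = I, so products reduce to symmetric differences).
from itertools import product

def multiply(op1, op2):
    """Product of two Z-type Pauli operators given as {frozenset: coeff}."""
    out = {}
    for (s1, c1), (s2, c2) in product(op1.items(), op2.items()):
        s = s1.symmetric_difference(s2)
        out[s] = out.get(s, 0.0) + c1 * c2
    return {s: c for s, c in out.items() if abs(c) > 1e-12}

n_qubits, c_target = 8, 2.0        # illustrative register size and target c
# N = (n_qubits/2) * I - (1/2) * sum_i Z_i
C_hat = {frozenset(): n_qubits / 2.0}
for i in range(n_qubits):
    C_hat[frozenset({i})] = -0.5

# (C - c)^2, with the scalar c absorbed into the identity term
C_shift = dict(C_hat)
C_shift[frozenset()] -= c_target
penalty = multiply(C_shift, C_shift)

print(len(C_hat), len(penalty))    # 9 Pauli terms for C vs. 37 for (C - c)^2
\end{verbatim}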
\begin{table} \centering \begin{tabular}{c c r r c r r} \hline && \multicolumn{5}{c}{$\ce{H_4}$ Triplet $\ce{T_1}$}\\ \hline &&\multicolumn{2}{c}{$F^{(1)}(\bm{\theta})$} &&\multicolumn{2}{c}{$F^{(2)}(\bm{\theta})$} \\\hline $\mu_C$ && \multicolumn{1}{c}{$N_\mr{meas}$} & \multicolumn{1}{c}{residuals} && \multicolumn{1}{c}{$N_\mr{meas}$} & \multicolumn{1}{c}{residuals} \\\hline $0.01$ && $\SI{12772015}{}$ &$\SI{0.000000}{}$&&$\SI{4446580}{}$ & $\SI{-0.002530}{}$ \\\hline $0.1$ && $\SI{13315505}{}$ &$\SI{0.000000}{}$&&$\SI{4448356}{}$ &$\SI{-0.000253}{}$ \\\hline $1$ && $\SI{26973645}{}$ &$\SI{0.000000}{}$&&$\SI{4583370}{}$ & $\SI{-0.000025}{}$ \\\hline $10$ && $\SI{85835975}{}$ &$\SI{0.000044}{}$&&$\SI{6571274}{}$ &$\SI{-0.000002}{}$\\\hline $100$ && $\SI{118039912}{}$ &$\SI{0.015919}{}$&&$\SI{9781409}{}$ &$\SI{0.000000}{}$ \\\hline \end{tabular} \caption{The results of the numerical simulations for calculating the $\ce{T_1}$ state of the $\ce{H_4}$ by the constrained VQE with various $\mu_C$. $N_\mr{meas}$ represents the number of Pauli measurements necessary to perform the whole optimization process, which is computed as (the number of evaluations of the cost function during the optimization) $\times$ (the number of Pauli operators to be measured to get the value of the cost function once). The term ``residuals" represents the difference between the expectation value of the Hamiltonian for the resulting optimized state and that for the exact (desired) state in Hartree (Ha). The exact value for this case is $\SI{-1.881876}{Ha}$. We perform the constrained VQE for ten different initial values, and all values shown in this table are the average for the ten trials. } \label{table_cost} \end{table} \begin{table} \centering \begin{tabular}{c c r r c r r} \hline && \multicolumn{5}{c}{$\ce{H_2}$ Singlet $\ce{S_1}$}\\ \hline &&\multicolumn{2}{c}{$F^{(1)}(\bm{\theta})$} &&\multicolumn{2}{c}{$F^{(2)}(\bm{\theta})$} \\\hline $\mu_C$ && \multicolumn{1}{c}{$N_\mr{meas}$} & \multicolumn{1}{c}{residuals} && \multicolumn{1}{c}{$N_\mr{meas}$} & \multicolumn{1}{c}{residuals} \\\hline $0.01$ && $\SI{18099}{}$ &$\SI{-0.360693}{}$&&$\SI{1271937}{}$ & $\SI{-0.363527}{}$ \\\hline $0.1$ && $\SI{12794}{}$ &$\SI{-0.268809}{}$&&$\SI{986748}{}$ &$\SI{-0.329921}{}$ \\\hline $1$ && $\SI{12390}{}$ &$\SI{0.000000}{}$&&$\SI{1308839}{}$ & $\SI{-0.323611}{}$ \\\hline $10$ && $\SI{14519}{}$ &$\SI{0.000000}{}$&&$\SI{1264772}{}$ &$\SI{-0.323009}{}$\\\hline $100$ && $\SI{32000}{}$ &$\SI{0.000000}{}$&&$\SI{1293515}{}$ &$\SI{-0.322953}{}$ \\\hline \end{tabular} \caption{The results of the numerical simulations for calculating the energy of the $\ce{S_1}$ state of $\ce{H_2}$ by the constrained VQD under the constraint $\hat{N}_e = 2$ with various $\mu_C$. Note that we only constrain the number of electrons $\hat{N}_e$ in this simulation while we compute the same energy by constraining $\hat{N}_e$ and $\hat{S}^2$ in the simulation shown in TABLE~\ref{table_H2_mu} of Sec.~\ref{subsec:comparison_mu}. $N_\mr{meas}$ represents the number of Pauli measurements necessary to perform the whole optimization process, which is computed as (the number of evaluations of the cost function during the optimization) $\times$ (the number of Pauli operators to be measured to get the value of the cost function once). The term ``residuals" represents the difference between the expectation value of the Hamiltonian for the resulting optimized state and that for the exact (desired) state in Hartree (Ha). 
The exact value for this case is $\SI{-0.169901}{Ha}$. We perform the constrained VQD for ten different initial values, and all values shown in this table are the average for the ten trials.} \label{table_F2_fail} \end{table} \section{\label{sec:conclusion}Conclusion} In this work, we study two cost functions $F^{(1)}(\bm{\theta})$ (Eq.~\eqref{eq:cost_func1}) and $F^{(2)}(\bm{\theta})$ (Eq.~\eqref{eq:cost_func2}) of the constrained VQE/VQD, which can be exploited to compute eigenstates of the Hamiltonian of a given system that reside in the desired symmetry sector. Our theoretical analysis revealed that minimization of the cost function $F^{(1)}(\bm{\theta})$ can yield the desired state/energy when the penalty coefficient $\mu_C$ is larger than a certain threshold (Eq.~\eqref{eq:mu_formula}), and we derived a simple and practical formula to estimate it (Eq.~\eqref{eq:estimate_mu}). On the other hand, we proved that the exact desired state/energy cannot be obtained by minimizing the cost function $F^{(2)}(\bm{\theta})$ and that we obtain completely wrong values in some cases. To validate these theoretical analyses, we performed several numerical simulations of the constrained VQE/VQD for $\ce{H_2}$ and $\ce{H_4}$ molecules. Our simulations validated the formula~\eqref{eq:estimate_mu} for $F^{(1)}(\bm{\theta})$ in practical quantum chemistry calculations and indicated that we should estimate the energy gap in the formula as accurately as possible to achieve the fast convergence of the optimization. Furthermore, we found an explicit example where the desired state/energy is never obtained by using the cost function $F^{(2)}(\bm{\theta})$. Even though $F^{(2)}(\bm{\theta})$ sometimes shows a better performance than $F^{(1)}(\bm{\theta})$ in terms of the total number of Pauli measurements required for the optimization, $F^{(1)}(\bm{\theta})$ still serves as a better cost function of the constrained VQE/VQD because we can ensure that the target state/energy is obtained as a result of the optimization. Our results elucidate the fundamental difference between the performances of the two cost functions $F^{(1)}(\bm{\theta})$ and $F^{(2)}(\bm{\theta})$. The inconsistent and heuristic use of the two cost functions is resolved by our results showing the theoretical superiority of $F^{(1)}(\bm{\theta})$ over $F^{(2)}(\bm{\theta})$ and providing the formula~\eqref{eq:estimate_mu} to determine the appropriate penalty coefficient. The proper choice of the penalty coefficient leads to the fast convergence of the optimization. Since we assume only the discreteness of the system, our findings apply to general quantum systems and lay the theoretical foundation for exploiting NISQ devices with the constrained VQE/VQD to solve large quantum systems that are classically intractable. As future work, it would be intriguing to study the effect of the noise for more general types of noise. Another interesting direction is to numerically investigate other symmetries that are not treated in this work, such as translation symmetry or point group symmetry. \begin{acknowledgments} We thank Ryosuke Imai, Youyang Zhang, and Yohei Ibe for helpful discussions. We appreciate Kosuke Mitarai and Wataru Mizukami for stimulating discussions and reading the draft of the manuscript. KK is supported by QunaSys Inc., Mike and Ophelia Lazaridis, and research grants from the NSERC\@. 
A part of this work was performed for the Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), ``Photonics and Quantum Technology for Society 5.0'' (Funding agency: QST). \end{acknowledgments}
\section{Introduction}\label{sec_introduction} \subsection{The equations of motion in Eulerian coordinates}\label{sec_eulerian_form} In this paper we study traveling wave solutions to the free boundary Navier-Stokes equations, which describe the dynamics of an incompressible, viscous fluid. We posit that the fluid evolves in an infinite layer-like domain in dimension $n \ge 2$. Of course, the physically relevant dimensions are $n=2$ and $n=3$, but our analysis works equally well in all dimensions $n \ge 2$, so we present it in this form for the sake of generality. In order to state the equations of motion and describe the physical features, we must first establish some notation needed to describe the fluid domain and its boundaries. We assume throughout the paper that $2 \le n \in \mathbb{N}$, and we make the standard convention of writing points $x \in \mathbb{R}^n$ as $x = (x',x_n) \in \mathbb{R}^{n-1} \times \mathbb{R}$. The fluid domains of interest to us in this paper are layer-like, with fixed, flat, rigid lower boundaries and moving upper boundaries. We will assume that the moving upper boundary can be described by the graph of a function. Given a function $\zeta : \mathbb{R}^{n-1} \to (0,\infty)$ we define the set \begin{equation}\label{Omega_zeta} \Omega_{\zeta} = \{x = (x',x_n) \in \mathbb{R}^{n} \;\vert\; 0< x_{n}<\zeta(x^{\prime}) \} \subseteq \mathbb{R}^n \end{equation} and we define the $\zeta$ graph surface \begin{equation}\label{Sigma_eta} \Sigma_{\zeta} =\{x\in\mathbb{R}^{n} \;\vert\; x_{n}=\zeta(x^{\prime}) \text{ for some } x^{\prime} \in \mathbb{R}^{n-1}\}. \end{equation} In particular, with this notation we have that if $\zeta$ is continuous, then the upper boundary of $\Omega_\zeta$ is $\Sigma_\zeta$, while the flat lower boundary is $\Sigma_{0} =\{x\in\mathbb{R}^{n} \;\vert\; x_{n}=0\}.$ With this notation established, we now turn to a description of the equations of motion for time $t \ge 0$. We assume that in quiescent equilibrium with all external forces and stresses absent, the fluid occupies the flat equilibrium domain \begin{equation}\label{omega_eq} \Omega_b = \{ x \in \mathbb{R}^n \;\vert\; 0 < x_n < b \} \end{equation} for some equilibrium depth parameter $b \in (0,\infty)$. We further assume that when perturbed from its equilibrium state the fluid occupies the moving domain $\Omega_{b + \zeta(\cdot,t)}$, where $\zeta : \mathbb{R}^{n-1} \times [0,\infty) \to (-b,\infty)$ is the unknown free surface function. We describe the evolution of the fluid for $t \ge 0$ with its velocity field $w(\cdot,t) : \Omega_{b+ \zeta(\cdot,t)} \to \mathbb{R}^n$ and its pressure $P(\cdot,t) : \Omega_{b+ \zeta(\cdot,t)} \to \mathbb{R}$. We posit that the fluid is acted upon by five distinct forces, two in the bulk (i.e. in $\Omega_{b+ \zeta(\cdot,t)}$), and three on the free surface (i.e. on $\Sigma_{b+\zeta(\cdot,t)}$). The first bulk force is a uniform gravitational field pointing down: $-\rho \mathfrak{g} e_n \in \mathbb{R}^n$, where $\rho >0$ is the constant fluid density, $\mathfrak{g} >0$ is the gravitational field strength, and $e_n = (0,\dotsc,1) \in \mathbb{R}^n$ is the vertical unit vector. The second bulk force is a generic force described for each $t \ge 0$ by the vector field $\tilde{\mathfrak{f}}(\cdot,t) : \Omega_{b+\zeta(\cdot,t)} \to \mathbb{R}^n$. The first surface force is a constant (in both space and time) external pressure applied by the fluid above $\Omega_{b+\zeta(\cdot,t)}$, which we write as $P_{ext} \in \mathbb{R}$. 
The second surface force is generated by an externally applied stress tensor, which we describe for each $t \ge 0$ by a map $\tilde{\mathcal{T}}(\cdot,t) : \Sigma_{b+\zeta(\cdot,t)} \to \mathbb{R}^{n \times n}_{\operatorname*{sym}}$, where \begin{equation} \mathbb{R}^{n \times n}_{\operatorname*{sym}} = \{M \in \mathbb{R}^{n \times n} \;\vert\; M = M^{\intercal} \} \end{equation} denotes the set of symmetric $n\times n$ matrices. Note that symmetry is imposed to be consistent with the fact that stresses are typically symmetric in continuum mechanics, but it is not essential in our results and could be dropped. The third surface force is the surface tension generated by the surface itself, which we model in the standard way as $-\sigma \mathcal{H}(\zeta)$, where $\sigma \ge 0$ is the coefficient of surface tension, and (writing $\nabla'$ and $\diverge'$ for the gradient and divergence in $\mathbb{R}^{n-1}$) \begin{equation}\label{MC_def} \mathcal{H}(\zeta) = \diverge'\left( \frac{\nabla' \zeta}{\sqrt{1+\abs{\nabla' \zeta}^2}} \right) \end{equation} is the mean-curvature operator. The equations of motion are then \begin{equation}\label{ns_euler} \begin{cases} \rho(\partial_t w + w \cdot \nabla w) - \mu \Delta w + \nabla P = - \rho \mathfrak{g} e_n + \tilde{\mathfrak{f}} & \text{in } \Omega_{b+\zeta(\cdot,t)} \\ \diverge{w}=0 & \text{in } \Omega_{b+\zeta(\cdot,t)} \\ (P I- \mu \mathbb{D} w) \nu = -\sigma \mathcal{H}(\zeta) \nu + (P_{ext} I + \tilde{\mathcal{T}} ) \nu & \text{on } \Sigma_{b+\zeta(\cdot,t)} \\ \partial_t \zeta = w \cdot \nu \sqrt{1+ \abs{\nabla' \zeta}^2} &\text{on } \Sigma_{b+\zeta(\cdot,t)} \\ w =0 &\text{on } \Sigma_0, \end{cases} \end{equation} where $\rho>0$ is the constant fluid density, $\mu >0$ is the fluid viscosity, \begin{equation}\label{sym_grad_def} \mathbb{D} w = (\nabla w) + (\nabla w)^{\intercal} \in \mathbb{R}^{n \times n}_{\operatorname*{sym}} \end{equation} is the symmetrized gradient of $w$, and \begin{equation} \nu = \frac{(-\nabla'\zeta,1)}{\sqrt{1+\abs{\nabla'\zeta}^2}} \in \mathbb{R}^n \end{equation} denotes the outward pointing unit normal to the surface $\Sigma_{b+\zeta(\cdot,t)}$. The first two equations in \eqref{ns_euler} are the incompressible Navier-Stokes equations: the first is the Newtonian balance of forces, and the second enforces mass conservation. The third equation in \eqref{ns_euler} is called the dynamic boundary condition, and it asserts a balance of the forces acting on the free surface. The fourth equation in \eqref{ns_euler} is called the kinematic boundary condition, as it dictates how the surface evolves with the fluid; note that it may be rewritten as a transport equation in the form \begin{equation} \partial_t \zeta + \nabla' \zeta \cdot w'\vert_{\Sigma_{b+\zeta(\cdot,t)}} = w_n \vert_{\Sigma_{b+\zeta(\cdot,t)}}, \end{equation} which shows that $\zeta$ is transported by the horizontal component of velocity, $w'$, and driven by the vertical component $w_n$. The fifth equation in \eqref{ns_euler} is the usual no-slip condition enforced at rigid, unmoving boundaries. It will be convenient to eliminate three of the physical parameters in \eqref{ns_euler}. This may be accomplished in a standard way by dividing by $\rho$, rescaling in space and time, and renaming $b,$ $\sigma$, and the forcing terms. Doing so, we may assume without loss of generality that $\rho = \mu = \mathfrak{g} =1$. 
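For concreteness, one admissible choice of rescaling (a sketch of the standard argument just alluded to; only the scaling relations matter, and other equivalent choices are possible) is
\begin{equation}
x = L\tilde{x}, \quad t = T \tilde{t}, \quad w = \frac{L}{T}\tilde{w}, \quad P = \frac{\rho L^2}{T^2}\tilde{P}, \quad \text{with } L = \left(\frac{\mu^2}{\rho^2 \mathfrak{g}}\right)^{1/3} \text{ and } T = \frac{\rho L^2}{\mu}.
\end{equation}
With this choice the coefficients of the time derivative, advection, viscous, pressure, and gravity terms in the first equation of \eqref{ns_euler} all coincide, so dividing through produces the same system with $\rho = \mu = \mathfrak{g} = 1$, at the price of renaming $b$, $\sigma$, and the forcing terms.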
Given an open set $\varnothing \neq U \subseteq \mathbb{R}^n$, a scalar $p \in L^2(U)$, and a vector field $u \in H^1(U;\mathbb{R}^n)$, we define the associated stress tensor \begin{equation}\label{stress_def} S(p,u) := p I - \mathbb{D} u \in \mathbb{R}^{n \times n}_{\operatorname*{sym}}, \end{equation} where $I$ denotes the $n \times n$ identity and $\mathbb{D} u$ is defined as in \eqref{sym_grad_def}. The stress tensor is of fundamental physical importance, but it also allows us to compactly rewrite terms in \eqref{ns_euler}. Indeed, the left side of the third equation in \eqref{ns_euler} is $S(P,w) \nu$, and if we extend the divergence to act on tensors in the usual way, then \begin{equation}\label{stress_div} \diverge S(P,w) = \nabla P - \Delta w - \nabla \diverge{w}, \end{equation} so the first equation may be rewritten as \begin{equation} \partial_t w + w \cdot \nabla w + \diverge S(P,w) = -e_n + \tilde{\mathfrak{f}}. \end{equation} Our focus in this paper is the construction of traveling wave solutions to \eqref{ns_euler}, which are solutions that are stationary (i.e. time-independent) when viewed in an inertial coordinate system obtained from the Eulerian coordinates of \eqref{ns_euler} through a Galilean transformation. Clearly, for the stationary condition to hold, the new coordinate system must be moving at a constant velocity parallel to $\Sigma_0$. Up to a single rigid rotation fixing $e_n$, we may assume, without loss of generality, that the moving coordinate system's velocity relative to the Eulerian coordinates is $\gamma e_1$ for $e_1 = (1,0,\dotsc,0) \in \mathbb{R}^n$ and $\gamma \in \mathbb{R} \backslash \{0\}$. Then $\abs{\gamma} >0$ is the speed of the traveling wave and $\sgn(\gamma)$ determines the direction of travel along the $e_1$ axis. In the new coordinates the stationary free surface is described by the unknown $\eta : \mathbb{R}^{n-1} \to (-b,\infty)$, which is related to $\zeta$ via $\zeta(x',t) = \eta(x'-\gamma t e_1 )$. We then posit that \begin{multline} w(x,t) = v(x-\gamma t e_1), \; P(x,t) = q(x-\gamma t e_1) + P_{ext} - (x_n - b), \\ \tilde{\mathfrak{f}}(x,t) = \mathfrak{f}(x - \gamma t e_1), \text{ and } \tilde{\mathcal{T}}(x,t) = \mathcal{T}(x- \gamma t e_1), \end{multline} where $v: \Omega_{b+\eta} \to \mathbb{R}^n$, $q: \Omega_{b+\eta} \to \mathbb{R}$, $\mathfrak{f}: \Omega_{b+\eta} \to \mathbb{R}^n$, and $\mathcal{T} : \Sigma_{b+\eta} \to \mathbb{R}^{n \times n}_{\operatorname*{sym}}$ are the stationary velocity field, (renormalized) pressure, external force, and external stress, respectively. In the traveling coordinate system the equations for the unknowns $(v,q,\eta)$, given the data $\mathfrak{f}$ and $\mathcal{T}$, become \begin{equation}\label{traveling_euler} \begin{cases} (v-\gamma e_1) \cdot \nabla v - \Delta v + \nabla q = \mathfrak{f} & \text{in } \Omega_{b+\eta} \\ \diverge{v}=0 & \text{in } \Omega_{b+ \eta} \\ (q I- \mathbb{D} v) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} + \mathcal{T} \mathcal{N} & \text{on } \Sigma_{b+\eta} \\ - \gamma \partial_1 \eta = v \cdot \mathcal{N} &\text{on } \Sigma_{b+\eta} \\ v =0 &\text{on } \Sigma_0, \end{cases} \end{equation} where here we have written \begin{equation}\label{normal_def} \mathcal{N} = (-\nabla' \eta,1) \in \mathbb{R}^n \end{equation} for the non-unit normal to $\Sigma_{b+\eta}$. 
Note in particular that the renormalization of the pressure has shifted the gravitational force from the bulk, where it manifested as the force vector $-e_n$, to the free surface, where it is manifested as the term $\eta \mathcal{N}$ on the right side of the third equations of \eqref{traveling_euler}. The renormalization has also completely removed $P_{ext}$. To provide some context for our result we now consider some of the basic features of the system \eqref{traveling_euler} under some modest assumptions on the solution. Suppose we have a solution for which $\eta \in H^{5/2}(\mathbb{R}^{n-1})$, $\eta$ is bounded and Lipschitz, and $\inf_{\mathbb{R}^{n-1}} \eta > -b$. Note that when $n \in \{2,3\}$ the latter two conditions can be verified via the Sobolev embeddings and a smallness condition on $\norm{\eta}_{H^{5/2}}$, but for higher dimensions this is an auxiliary assumption that would need to be verified through a higher regularity argument, which we ignore for the purposes of the discussion here. The latter two assumptions on $\eta$ guarantee that $\Omega_{b+\eta}$ is well-defined, open, and connected, and that the surface $\Sigma_{b+\eta}$ is Lipschitz and thus enjoys a trace theory. We further suppose that $v \in H^2(\Omega_{b+\eta};\mathbb{R}^n) \cap L^\infty(\Omega_{b+\eta};\mathbb{R}^n)$, $q \in H^1(\Omega_{b+\eta})$, $\mathfrak{f} \in L^2(\Omega_{b+\eta};\mathbb{R}^n)$, and $\mathcal{T} \in H^{1/2}(\Sigma_{b+\eta};\mathbb{R}^{n \times n}_{\operatorname*{sym}})$; in other words, we posit that we have a strong solution and that $v$ is bounded. Note again that the boundedness of $v$ follows from Sobolev embeddings when $n \in \{2,3\}$ but is an auxiliary assumption for $n \ge 4$. Then an elementary computation, which we record in Proposition \ref{trav_prop} of the appendix, shows that \begin{equation}\label{power_balance} \int_{\Omega_{b+\eta}} \mathfrak{f} \cdot v - \int_{\Sigma_{b+\eta}} \mathcal{T} \nu \cdot v = \int_{\Omega_{b+\eta}} \frac{1}{2} \abs{\mathbb{D} v}^2. \end{equation} This has a clear physical meaning: the right side is the viscous dissipation rate, and the left side is the power supplied by the external surface stress and bulk force. These must be in perfect balance for a traveling wave solution to exist. In particular, if there are no sources of external surface stress and bulk force, $\mathcal{T} =0$ and $\mathfrak{f}=0$, then \eqref{power_balance} requires that $\mathbb{D} v =0$ a.e. in $\Omega_{b+\eta}$. In turn this implies (see, for instance Lemma A.4 of \cite{JTW_2016}) that $v(x) = z + A x$ for $z \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times n}$ such that $A^{\intercal} = -A$, but since $v \in H^1(\Omega_{b+\eta};\mathbb{R}^n)$ this requires that $v =0$. Plugging this into \eqref{traveling_euler} then shows that $\eta =0$ and $q=0$. The upshot of this analysis is that within the functional framework described above, nontrivial stress or forcing is a necessary condition for the existence of nontrivial solutions to \eqref{traveling_euler}. We emphasize, though, that this argument depends crucially on the assumed Sobolev inclusions and thus does not eliminate the possibility of nontrivial solutions to \eqref{traveling_euler} with $\mathcal{T} =0$ and $\mathfrak{f}=0$ in other functional frameworks (e.g. H\"{o}lder spaces). 
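In brief, \eqref{power_balance} follows by dotting the first equation of \eqref{traveling_euler} with $v$ and integrating over $\Omega_{b+\eta}$; only the main cancellations are recorded here as a sketch, the full computation being Proposition \ref{trav_prop}. Since $\diverge(v - \gamma e_1) = 0$, $v = 0$ on $\Sigma_0$, and the kinematic boundary condition gives $(v-\gamma e_1)\cdot \mathcal{N} = -\gamma \partial_1 \eta + \gamma \partial_1 \eta = 0$ on $\Sigma_{b+\eta}$, the advective term contributes nothing:
\begin{equation}
\int_{\Omega_{b+\eta}} \left( (v - \gamma e_1)\cdot \nabla v \right)\cdot v = \frac{1}{2}\int_{\mathbb{R}^{n-1}} \abs{v}^2 \, (v-\gamma e_1)\cdot \mathcal{N} \, dx' = 0.
\end{equation}
Integrating the stress term by parts and using $\diverge v = 0$ and $v=0$ on $\Sigma_0$,
\begin{equation}
\int_{\Omega_{b+\eta}} \diverge S(q,v) \cdot v = \frac{1}{2} \int_{\Omega_{b+\eta}} \abs{\mathbb{D} v}^2 + \int_{\mathbb{R}^{n-1}} S(q,v)\mathcal{N} \cdot v \, dx'.
\end{equation}
Substituting the dynamic boundary condition, $S(q,v)\mathcal{N}\cdot v = (\eta - \sigma \mathcal{H}(\eta))(v\cdot \mathcal{N}) + \mathcal{T}\mathcal{N}\cdot v$, and using $v \cdot \mathcal{N} = -\gamma \partial_1 \eta$, the gravity-capillary contribution $-\gamma (\eta - \sigma \mathcal{H}(\eta))\partial_1 \eta$ integrates to zero over $\mathbb{R}^{n-1}$ (after one further integration by parts it is a perfect $\partial_1$-derivative of $\eta^2/2 + \sigma \sqrt{1 + \abs{\nabla' \eta}^2}$), while $\int_{\mathbb{R}^{n-1}} \mathcal{T}\mathcal{N}\cdot v \, dx' = \int_{\Sigma_{b+\eta}} \mathcal{T}\nu \cdot v$. Collecting terms yields \eqref{power_balance}.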
In this paper we identify a Sobolev-based functional framework appropriate for constructing solutions to \eqref{traveling_euler}, and we prove that for every nontrivial wave speed there exists a nonempty open set of forcing and stress data that generate solutions to \eqref{traveling_euler}. While the existence of traveling wave solutions to the free boundary incompressible Euler equations (the system \eqref{ns_euler} with $\mu=0$ and the no-slip condition replaced with no-penetration) is well known with and without external sources of stress and forcing (see Section \ref{sec_prev_work}), to the best of our knowledge this paper is the first to construct traveling wave solutions to the free boundary incompressible Navier-Stokes equations. It is important to account for the viscous case because, while many fluids have small viscosity (or more precisely, the fluid configuration has large Reynolds number), small does not mean zero, so all fluids experience some viscous effects. Developing the viscous theory also opens the possibility of connecting the viscous and inviscid cases through vanishing viscosity limits, which could potentially yield insight into the zoo of known inviscid solutions. In particular, it could lead to a selection mechanism for physically relevant inviscid solutions. \subsection{Previous work}\label{sec_prev_work} The problems \eqref{ns_euler} and \eqref{traveling_euler} and their variants have attracted enormous attention in the mathematical literature, making a complete review impossible. We shall attempt here only a brief survey of those results most closely related to the present paper, which in particular means that we will focus exclusively on incompressible fluids in single layer geometries and neglect the expansive literature on other geometric configurations and on compressible fluids. For more thorough reviews of the literature we refer to the works of Toland \cite{Toland_1996}, Groves \cite{Groves_2004}, and Strauss \cite{Strauss_2010} for the inviscid case and Zadrzy\'{n}ska \cite{Zadrynska_2004} and Shibata-Shimizu \cite{SS_2007} for the viscous case. The oldest results in this area concern traveling wave solutions to the free boundary Euler equations, the inviscid analogs of \eqref{ns_euler} and \eqref{traveling_euler}. In this case it is possible to posit that the flow is irrotational, a condition that propagates with the flow. The rigorous construction of the first periodic solutions was completed in $2D$ by Nekrasov \cite{Nekrasov_1921} and Levi--Civita \cite{Levi-Civita_1924}. Large amplitude $2D$ periodic solutions, including those with angle $2\pi/3$ satisfying the Stokes conjecture, were constructed later by Krasovski\u{\i} \cite{Krasovskii_1961}, Keady-Norbury \cite{KN_1978}, Toland \cite{Toland_1978}, Amick-Toland \cite{AT_1981}, Amick-Fraenkel-Toland \cite{AFT_1982}, Plotnikov \cite{Plotnikov_2002}, and McLeod \cite{Mcleod_1997}. For more recent work on Stokes waves see Plotnikov-Toland \cite{PT_2004} and Gravina-Leoni \cite{GL_2018,GL_2019} and the references therein. Solitary non-periodic solutions in $2D$ were constructed by Beale \cite{Beale_1977}. Progress on the $2D$ Euler problem with rotation came much more recently, starting with the construction of periodic rotational traveling waves by Constantin-Strauss \cite{CS_2004}. Wahl\'{e}n \cite{Wahlen_2006,Wahlen_2006_2} then constructed periodic solutions with surface tension, and Walsh \cite{Walsh_2009,Walsh_2014,Walsh_2014_2} built solutions with density stratification and with surface tension. 
Hur \cite{Hur_2008}, Groves-Wahl\'{e}n \cite{GW_2008}, and Wheeler \cite{Wheeler_2013} constructed solitary traveling waves, and Chen-Walsh-Wheeler \cite{CWW_2018, CWW_2019} recently constructed infinite depth solitary waves with and without stratification. In these results the only forces are due to gravity and surface tension. Recent work of Walsh-B\"{u}hler-Shatah \cite{WBS_2013} and B\"{u}hler-Shatah-Walsh-Zeng \cite{BSWZ_2016} included effects modeling forcing by wind above the fluid, and Wheeler \cite{Wheeler_2015} studied an applied spatially localized pressure force. In $3D$ much less is known in the inviscid case. Periodic irrotational solutions without surface tension were constructed by Iooss-Plotnikov \cite{IP_2009}. Irrotational solitary waves in $3D$ with surface tension were first constructed by Groves-Sun \cite{GS_2008}, and then by Buffoni-Groves-Sun-Wahl\'{e}n \cite{BGSW_2013} and Buffoni-Groves-Wahl\'{e}n \cite{BGW_2018} with different techniques. There has also been considerable recent progress on the fully dynamic inviscid and irrotational problem. For the infinite depth problem Wu \cite{Wu_1997,Wu_1999} constructed local solutions in $2D$ and $3D$, showed almost global existence in $2D$ \cite{Wu_2009}, and then proved global well-posedness in $3D$ \cite{Wu_2011}. Lannes \cite{Lannes_2005} developed a local well-posedness theory in finite depth in $2D$ and $3D$. In infinite depth Germain-Masmoudi-Shatah \cite{GMS_2012,GMS_2015} proved global well-posedness with gravity only and with surface tension only in $3D$, Deng-Ionescu-Pausader-Pusateri \cite{DIPP_2017} proved global well-posedness with gravity and surface tension in $3D$, and Ionescu-Pusateri \cite{IP_2015,IP_2018} proved global results in $2D$ with and without surface tension. Wang \cite{Wang_2019} produced global solutions in finite depth with gravity but no surface tension. Local existence in arbitrary dimension with surface tension was studied in a series of papers by Alazard-Burq-Zuily \cite{ABZ_2011,ABZ_2014,ABZ_2016}. Alazard-Delort \cite{AD_2015_2,AD_2015} obtained $2D$ global solutions with scattering, while Hunter-Ifrim-Tataru \cite{HIT_2016} and Ifrim-Tataru \cite{IT_2016} obtained $2D$ global solutions in an alternate framework. To the best of our knowledge, the only result for layer geometries without the irrotationality assumption is by Zhang-Zhang \cite{ZZ_2008}, who obtained a local existence result in $3D$. We now turn our attention to the literature associated to the dynamic viscous problem \eqref{ns_euler} in $3D$. In contrast with the inviscid case, irrotationality is not preserved along viscous flow, so the challenges of vorticity are inherent to the viscous problem. Beale \cite{Beale_1981} proved local well-posedness without surface tension and global well-posedness with surface tension \cite{Beale_1983}, and Beale-Nishida \cite{BN_1985} derived algebraic decay estimates for the latter solutions. Solutions in other functional frameworks were produced with surface tension by Tani-Tanaka \cite{TT_1995}, Bae \cite{Bae_2011}, and Shibata-Shimizu \cite{SS_2011} and without surface tension by Abels \cite{Abels_2005_3}. Guo-Tice \cite{GT_2013_inf,GT_2013_lwp} and Wu \cite{Wu_2014} proved global well-posedness without surface tension and derived decay estimates for solutions. Masmoudi-Rousset \cite{MR_2017} proved a local-in-time vanishing viscosity result with infinite depth. 
For related work on the linearized problem and resolvent estimates in various functional settings we refer to Abe-Shibata \cite{AS_2003,AS_2003_2}, Abels \cite{Abels_2005,Abels_2005_2,Abels_2006}, Abels-Wiegner \cite{AW_2005}, and Abe-Yamazaki \cite{AY_2010}. Much is also known about periodic solutions to the viscous problem in $3D$. Nishida-Teramoto-Yoshihara \cite{NTY_2004} constructed global, exponentially decaying solutions with surface tension. Without surface tension, global solutions with a fixed algebraic decay rate were constructed by Hataya \cite{Hataya_2009} and with almost exponential decay by Guo-Tice \cite{GT_2013_per}. Tan-Wang \cite{TW_2014} established the vanishing surface tension limit for global solutions. Remond--Tiedrez-Tice \cite{R-TT_2019} proved global existence of exponentially decaying solutions with generalized bending energies, and Tice \cite{Tice_2018} constructed global decaying solutions with and without surface tension for flows with a gravitational field component parallel to the bottom. Stationary solutions to $3D$ viscous problems, which correspond to traveling waves with zero velocity ($\gamma =0$ in \eqref{traveling_euler}), have been constructed in various settings. Jean \cite{Jean_1980} and Pileckas \cite{Pileckas_1983,Pileckas_1984} constructed solutions with a partially free boundary, corresponding to a reservoir lying above an infinite channel. Gellrich \cite{Gellrich_1993} constructed a solution with a completely free boundary and with an affine external pressure. Nazarov-Pileckas \cite{NP_1999,NP_1999_2}, Pileckas \cite{Pileckas_2002}, and Pileckas-Zaleskis \cite{PL_2003} built solutions in domains that are layer-like at infinity. Bae-Cho \cite{BC_2000} found stationary solutions for incompressible non-Newtonian fluids. To the best of our knowledge, there are no results in the literature establishing the existence of traveling wave solutions to the free boundary problem \eqref{ns_euler} with nonzero velocity. In fixed domains there are a few results for viscous fluids. In full space Chae-Dubovski\u{\i} \cite{CD_1996} constructed a family of traveling wave solutions to Navier-Stokes, and Freist\"{u}hler \cite{Freistuhler_2014} constructed solutions for a Navier-Stokes-Allen-Cahn system. Kagei-Nishida \cite{KN_2019} studied traveling waves bifurcating from Poiseuille flow in rigid channels. We refer also to Escher-Lienstromberg \cite{EL_2018} for traveling wave solutions to a related thin-film problem. Our goal in the present paper is to construct traveling wave solutions to \eqref{ns_euler} by solving \eqref{traveling_euler} in the presence of bulk forces $\mathfrak{f}$ and surface stresses $\mathcal{T}$. A simple version of the forcing occurs when we take $\mathfrak{f} =0$ and $\mathcal{T} = \varphi I$ for a scalar function $\varphi$. In this case $\varphi I$ can be thought of as a spatially localized external pressure source translating in space with velocity $\gamma e_1$ above the fluid. This is a configuration that has been realized in recent experiments in which a tube blowing air onto the surface of a viscous fluid is uniformly translated above the surface, resulting in the observation of traveling waves on the free surface. For details of the experiments, some numerical simulations, and approximate models we refer to Akylas-Cho-Diorio-Duncan \cite{DCDA_2011,CDAD_2011}, Masnadi-Duncan \cite{MD_2017}, and Park-Cho \cite{PC_2016,PC_2018}. 
\subsection{Reformulation} A central difficulty in studying \eqref{traveling_euler} is that the domain $\Omega_{b+\eta}$, on which we seek to construct the unknowns $v$ and $q$, is itself unknown since $\eta$ is unknown. To bypass this difficulty we follow the usual path of reformulating \eqref{traveling_euler} in a fixed domain, which comes at the price of worsening the nonlinearities. To this end we reformulate the problem in the equilibrium domain \eqref{omega_eq}; in the interest of notational concision, throughout the rest of the paper we will typically drop the subscript $b$ and simply write \begin{equation}\label{Omega} \Omega = \Omega_b = \mathbb{R}^{n-1}\times(0,b). \end{equation} Given a continuous function $\eta: \mathbb{R}^{n-1} \to (-b,\infty)$ we define the flattening map $\mathfrak{F}: \bar{\Omega} \to \bar{\Omega}_{b+\eta}$ via \begin{equation}\label{flat_def} \mathfrak{F}(x) = (x', x_n(1+\eta(x')/b)) = x + \frac{x_n \eta(x')}{b}e_n. \end{equation} When we need to emphasize the dependence of this map on $\eta$ we will often write $\mathfrak{F}_\eta$ in place of $\mathfrak{F}$. By construction we have that $\mathfrak{F}(x',0) = (x',0)$ and $\mathfrak{F}(x',b) = (x', b + \eta(x'))$, so $\left. \mathfrak{F} \right\vert_{\Sigma_0} = Id_{\Sigma_0}$ and $\mathfrak{F}(\Sigma_b) = \Sigma_{b+\eta}$. Moreover, $\mathfrak{F}$ is a bijection with inverse given by $\mathfrak{F}^{-1}(y) =(y', y_n b/(b+ \eta(y')))$ for $y \in \bar{\Omega}_{b+\eta}.$ Thus $\mathfrak{F}$ is a homeomorphism that inherits the regularity of $\eta$ in the sense that if $\eta$ is Lipschitz then $\mathfrak{F}$ is a bi-Lipschitz homeomorphism, and if $\eta \in C^k(\mathbb{R}^{n-1})$ then $\mathfrak{F}$ is a $C^k$ diffeomorphism. Provided that $\eta$ is differentiable, we may compute and define the following: \begin{equation} \nabla \mathfrak{F}(x) = \begin{pmatrix} I_{(n-1) \times (n-1)} & 0_{(n-1) \times 1} \\ x_n \nabla'\eta(x') / b & 1 + \eta(x')/b \end{pmatrix}, \end{equation} and so we define the Jacobian and inverse Jacobian $J,K: \Omega \to (0,\infty)$ via \begin{equation}\label{JK_def} J = \det \nabla \mathfrak{F} = 1 + \eta /b \text{ and } K = 1/J = b/(b+\eta), \end{equation} and we define the matrix $\mathcal{A}: \Omega \to \mathbb{R}^{n \times n}$ via \begin{equation}\label{A_def} \mathcal{A}(x) = (\nabla \mathfrak{F}(x))^{-\intercal} = \begin{pmatrix} I_{(n-1) \times (n-1)} & - K x_n \nabla' \eta(x') / b \\ 0_{1 \times (n-1)} & K \end{pmatrix} = \begin{pmatrix} I_{(n-1) \times (n-1)} & - x_n \nabla' \eta(x') / (b + \eta(x')) \\ 0_{1 \times (n-1)} & b/(b+ \eta(x')) \end{pmatrix}. \end{equation} We now have all of the ingredients needed to reformulate \eqref{traveling_euler} in $\Omega$. We assume that $\eta \in C^2(\mathbb{R}^{n-1})$ satisfies $\eta > -b$ and define the functions $u : \Omega \to \mathbb{R}^n$, $p: \Omega \to \mathbb{R}$, $f: \Omega \to \mathbb{R}^n$, and $T : \Sigma_b \to \mathbb{R}^{n\times n}_{\operatorname*{sym}}$ via $u = v \circ \mathfrak{F}$, $p = q \circ \mathfrak{F}$, $f = \mathfrak{f} \circ \mathfrak{F}$, and $T = \mathcal{T} \circ \mathfrak{F}$. 
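The computations above are mechanical but easy to get wrong; as a sanity check (assuming Python with sympy; not part of the paper proper), the following sketch verifies the formulas \eqref{JK_def} and \eqref{A_def} in dimension $n=2$, where $\mathfrak{F}(x_1,x_2) = (x_1, x_2(1+\eta(x_1)/b))$.
\begin{verbatim}
# Sketch (assumes Python + sympy; illustrative only): symbolic verification of
# J = det(grad F) = 1 + eta/b and A = (grad F)^{-T} from (JK_def) and (A_def)
# in dimension n = 2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
b = sp.symbols('b', positive=True)
eta = sp.Function('eta')(x1)
X = (x1, x2)

F = sp.Matrix([x1, x2 * (1 + eta / b)])
gradF = sp.Matrix(2, 2, lambda i, j: sp.diff(F[i], X[j]))   # (grad F)_ij = d_j F_i

assert sp.simplify(gradF.det() - (1 + eta / b)) == 0        # J = 1 + eta/b

A = gradF.inv().T                                           # A = (grad F)^{-T}
A_claimed = sp.Matrix([[1, -x2 * sp.diff(eta, x1) / (b + eta)],
                       [0, b / (b + eta)]])
assert all(sp.simplify(e) == 0 for e in (A - A_claimed))
\end{verbatim}
Of course, in the body of the paper these identities are used for general $n$ and for $\eta$ with only Sobolev regularity; the sketch is only meant to make the conventions concrete.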
Then \eqref{traveling_euler} is equivalent to the following quasilinear system in the fixed domain $\Omega$: \begin{equation}\label{flattened_system} \begin{cases} (u-\gamma e_1) \cdot \nab_{\mathcal{A}} u - \Delta_{\mathcal{A}} u + \nabla_{\mathcal{A}} p = f & \text{in } \Omega \\ \diverge_{\mathcal{A}}{u}=0 & \text{in } \Omega \\ (pI- \mathbb{D}_{\mathcal{A}} u) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} + T \mathcal{N} & \text{on } \Sigma_{b} \\ u\cdot \mathcal{N} + \gamma \partial_1 \eta = 0 &\text{on } \Sigma_{b} \\ u =0 &\text{on } \Sigma_0. \end{cases} \end{equation} Here we introduce the differential operators $\nab_{\mathcal{A}}$, $\diverge_{\mathcal{A}}$, and $\Delta_{\mathcal{A}}$ with their actions given via \begin{equation}\label{A_op_def_1} (\nab_{\mathcal{A}} \psi)_i = \sum_{j=1}^n \mathcal{A}_{ij} \partial_j \psi, \; \diverge_{\mathcal{A}} X = \sum_{i,j=1}^n \mathcal{A}_{ij}\partial_j X_i, \text{ and } (\Delta_{\mathcal{A}} X)_i = \sum_{j=1}^n \sum_{k=1}^n \sum_{m=1}^n \mathcal{A}_{jk}\partial_k \left(\mathcal{A}_{jm} \partial_m X_i \right) \end{equation} for appropriate $\psi$ and $X$. We also write \begin{equation}\label{A_op_def_2} (X \cdot \nab_{\mathcal{A}} u)_i = \sum_{j,k=1}^n X_j \mathcal{A}_{jk} \partial_k u_i, \; (\mathbb{D}_{\mathcal{A}} u)_{ij} =\sum_{k=1}^n \left( \mathcal{A}_{ik} \partial_k u_j + \mathcal{A}_{jk} \partial_k u_i \right), \text{ and } S_\mathcal{A}(p,u) = pI - \mathbb{D}_{\mathcal{A}} u. \end{equation} Allowing $\diverge_{\mathcal{A}}$ to act on symmetric tensors in the usual way, we arrive at the identity \begin{equation}\label{A_op_def_3} \diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) = \nab_{\mathcal{A}} p - \Delta_{\mathcal{A}} u - \nab_{\mathcal{A}} \diverge_{\mathcal{A}} u. \end{equation} This allows us to rewrite \eqref{flattened_system} as \begin{equation} \begin{cases} (u-\gamma e_1) \cdot \nab_{\mathcal{A}} u + \diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) = f & \text{in } \Omega \\ \diverge_{\mathcal{A}}{u}=0 & \text{in } \Omega \\ S_{\mathcal{A}}(p,u) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} + T \mathcal{N} & \text{on } \Sigma_{b} \\ u\cdot \mathcal{N} + \gamma \partial_1 \eta = 0 &\text{on } \Sigma_{b} \\ u =0 &\text{on } \Sigma_0. \end{cases} \end{equation} \subsection{Discussion and statement of main results} We now turn to a discussion of our strategy for producing solutions to \eqref{traveling_euler} by way of \eqref{flattened_system}. First note that solutions to \eqref{traveling_euler} need not be irrotational, so the Bernoulli-based surface reformulations often employed in studying the inviscid irrotational problem are not available, and we are forced to analyze the problem directly in $\Omega$ after the reformulation \eqref{flattened_system}. The domain $\Omega$ is unbounded, has infinite measure, and has a non-compact boundary, which precludes the application of many standard tools in the theory of boundary value problems, including compactness and Fredholm techniques. The problem \eqref{flattened_system} is quasilinear but has no variational structure, so we are left with the option of constructing solutions by way of some sort of fixed point argument built on the linearization of \eqref{flattened_system}. An obvious strategy for attacking \eqref{flattened_system} is to employ a technique used in many of the references on the viscous problem from Section \ref{sec_prev_work}, which proceeds as follows.
First we would develop the well-posedness of the linear Stokes system with Navier boundary conditions: \begin{equation}\label{intro_stokes_navier} \begin{cases} \diverge S(p,u) -\gamma\partial_{1}u=f & \text{in }\Omega\\ \diverge u=g & \text{in }\Omega \\ (S(p,u)e_{n})' =k',\quad u_{n} =h & \text{on }\Sigma_b \\ u=0 & \text{on }\Sigma_{0}, \end{cases} \end{equation} where here we recall that the stress tensor $S(p,u)$ is defined by \eqref{stress_def} and satisfies \eqref{stress_div}. Then we would use this to define a map $(v,q,\zeta) \mapsto (u,p)$ for $(u,p)$ solving \eqref{intro_stokes_navier} with $f,g,h,k'$ determined by $(v,q,\zeta)$, and then we would solve for $\eta$ in terms of $S(p,u) e_n \cdot e_n$ and $(v,q,\zeta)$ via the linearization of the gravity-capillary operator, $I - \sigma \Delta'$ (here $\Delta' = \diverge' \nabla' = \sum_{j=1}^{n-1} \partial_j^2$ is the Laplacian on $\mathbb{R}^{n-1}$). We would then seek to show that the map $(v,q,\zeta) \mapsto (u,p,\eta)$ is contractive on some space. Unfortunately, this strategy encounters a serious technical obstruction: while the elliptic system \eqref{intro_stokes_navier} provides control of $\nabla p$, it fails to provide control of $p$ itself. In a bounded domain this can be easily dealt with by simply forcing $p$ to have zero average, which gives control of $p$ via a Poincar\'{e} inequality, but this technique is unavailable in the unbounded domain $\Omega$. Without control of $p$, the best we can hope for is that the pressure belongs to a homogeneous Sobolev space, in which case solving the elliptic problem \begin{equation}\label{intro_surface_elliptic} \eta - \sigma \Delta' \eta = S(p,u) e_n \cdot e_n + k_n = p- 2 \partial_n u_n + k_n \text{ on } \Sigma_b \end{equation} presents a problem due to the appearance of the trace of $p$ onto $\Sigma_b$. This is indeed a serious problem: in recent work \cite{GT_2019} we extended an earlier $2D$ result due to Strichartz \cite{Strichartz_2016} and proved that the trace space associated to homogeneous Sobolev spaces on $\Omega$ is not a standard Sobolev space, and so not only is the elliptic theory for \eqref{intro_surface_elliptic} unavailable in the literature, it has no hope of producing an $\eta$ amenable to the necessary nonlinear analysis. We are thus forced to abandon this strategy and try something else. Note, though, that as a byproduct of our analysis we can actually characterize the data for which \eqref{intro_stokes_navier} admits solutions with $p$ under control. We present this in Section \ref{sec_navier_bcs}, but the resulting spaces are ill-suited for the subsequent nonlinear analysis. A possible variant of the above strategy, aimed at dealing with the pressure problem, would be to base the linear analysis on the Stokes system with stress boundary conditions: \begin{equation}\label{intro_stokes_stress} \begin{cases} \diverge S(p,u) -\gamma\partial_{1}u=f & \text{in }\Omega\\ \diverge u=g & \text{in }\Omega\\ S(p,u)e_{n} =k & \text{on }\Sigma_b\\ u=0 & \text{on }\Sigma_{0}. \end{cases} \end{equation} As we show in Section \ref{sec_stress_bcs}, this does provide control of $p$, but the problem now is that in the map $(v,q,\zeta) \mapsto (u,p,\eta)$ the free surface function $\eta$ would have to be reconstructed via the equation \begin{equation} \gamma \partial_1 \eta = h - u_n \text{ on } \Sigma_b, \end{equation} and when $n \ge 3$ the operator $\gamma \partial_1$ on $\mathbb{R}^{n-1}$ is not elliptic. 
Thus, this alternate approach cannot work for the most physically relevant case, $n=3$. We are thus led to seek another strategy. This begins with the observation that for $f =0$, $T =0$, and any $\gamma \in \mathbb{R}$, a trivial solution to \eqref{flattened_system} is given by the equilibrium configuration $u=0$, $p=0$, $\eta =0$. Linearizing \eqref{flattened_system} around this solution yields the Stokes system with traveling gravity-capillary boundary conditions: \begin{equation}\label{intro_stokes_full} \begin{cases} \diverge S(p,u) -\gamma\partial_{1}u=f & \text{in }\Omega \\ \diverge u=g & \text{in }\Omega \\ S(p,u)e_{n} -(\eta -\sigma \Delta' \eta) e_n =k,\quad u_{n}+\gamma\partial_{1}\eta=h & \text{on }\Sigma_b \\ u=0 & \text{on }\Sigma_{0}. \end{cases} \end{equation} With this in hand, we can state our strategy for solving \eqref{flattened_system}: prove that \eqref{intro_stokes_full} induces an isomorphism $(u,p,\eta) \mapsto (f,g,h,k)$ between appropriate spaces, and use this in conjunction with the implicit function theorem. The first key to this strategy is the linear problem \eqref{intro_stokes_full}, but at first glance this appears to be susceptible to the same problem that precludes the fixed-point strategies discussed above: the coupling between $\eta$ and $(u,p)$ occurs in two different boundary conditions. As such, there is no clear mechanism for decoupling the problem into one for $(u,p)$ with either Navier or stress boundary conditions, and a second one for $\eta$ (with data possibly involving $(u,p)$). We are thus led to seek a decoupling strategy that synthesizes both boundary conditions simultaneously, and this suggests that as a first step we should understand the over-determined problem \begin{equation} \label{intro_stokes_overdet} \begin{cases} \diverge S(p,u)-\gamma\partial_{1}u=f & \text{in }\Omega\\ \diverge u=g & \text{in }\Omega \\ S(p,u)e_{n}=k,\quad u_{n}=h & \text{on }\Sigma_b \\ u=0 & \text{on }\Sigma_{0}. \end{cases} \end{equation} The problem \eqref{intro_stokes_overdet} is over-determined in the sense that we specify too many, namely $n+1$, boundary conditions on $\Sigma_b$, when only $n$ are needed to uniquely solve the problem. Indeed, as a starting point for understanding \eqref{intro_stokes_overdet} we first analyze the applied stress problem \eqref{intro_stokes_stress} in Section \ref{sec_stress_bcs} and show that it induces an isomorphism $(u,p) \mapsto (f,g,k)$ between appropriate $L^2-$based Sobolev spaces (see Theorem \ref{iso_gamma_stokes} for the precise statement). Consequently, when we specify the extra boundary condition $u_n =h$ on $\Sigma_b$ we should not expect solvability in general. In Section \ref{sec_overdetermined} we endeavor to precisely characterize for which data $(f,g,h,k)$ we can uniquely solve \eqref{intro_stokes_overdet}. If everything were integrable, then a clear necessary compatibility condition would follow from integrating and applying the divergence theorem: \begin{equation}\label{intro_cc_div} \int_{\Omega} g = \int_{\Omega} \diverge{u} = \int_{\Sigma_b} u_n = \int_{\Sigma_b} h. \end{equation} However, since we're working in $L^2-$based spaces in the infinite-measure set $\Omega$, we cannot guarantee integrability, and so this compatibility condition manifests in a more subtle way. 
In Theorem \ref{cc_divergence} we show that the $L^2$ formulation of \eqref{intro_cc_div} is that \begin{equation} h - \int_0^b g(\cdot,x_n) dx_n \in \dot{H}^{-1}(\mathbb{R}^{n-1}), \end{equation} where $\dot{H}^{-1}(\mathbb{R}^{n-1})$ is the homogeneous Sobolev space of order $-1$ (see \eqref{homogeneous_def} for the definition). In order to see the connection to \eqref{intro_cc_div} note that if we formally rewrite this as \begin{equation} 0 = \int_{\Sigma_b} h - \int_{\Omega} g= \int_{\mathbb{R}^{n-1}} \left(h(x') - \int_0^b g(x',x_n) dx_n \right) dx', \end{equation} then this tells us that the Fourier transform of the function $h - \int_0^b g(\cdot,x_n) dx_n$ vanishes at the origin. The inclusion of this function in $\dot{H}^{-1}(\mathbb{R}^{n-1})$ does not require the Fourier transform to vanish at the origin but it does require that the Fourier transform is not too large near the origin, which is a sort of weak form of vanishing at the origin. This behavior has been seen before in the analysis of viscous surface waves: we refer, for example, to \cite{BN_1985,GT_2013_inf,TZ_2019}. The divergence structure $\diverge S(p,u)$ in \eqref{intro_stokes_overdet} and the appearance of $S(p,u)e_n$ on $\Sigma_b$ suggest that another compatibility condition should hold, but it is more subtle since we have no information about $S(p,u)e_n$ on $\Sigma_0$. To get our hands on it we take a cue from the closed range theorem and identify the formal adjoint of the over-determined problem as the under-determined problem \begin{equation}\label{intro_stokes_underdet} \begin{cases} \diverge S(q,v)+\gamma\partial_{1}v= f & \text{in }\Omega \\ \diverge v= g & \text{in }\Omega \\ (S(q,v)e_{n})^{\prime}=k' & \text{on }\Sigma_b \\ v=0 & \text{on }\Sigma_{0}, \end{cases} \end{equation} which only imposes $n-1$ boundary conditions on $\Sigma_b$. The compatibility condition can then be derived by integrating solutions to \eqref{intro_stokes_overdet} against functions in the kernel of \eqref{intro_stokes_underdet}. From our theory of the Stokes problem with stress boundary conditions, developed in Section \ref{sec_stress_bcs}, we know that this kernel can be exactly parameterized by augmenting \eqref{intro_stokes_underdet}, with $f=0,$ $g=0,$ and $k'=0$, with the extra condition \begin{equation}\label{intro_adjoint_param} S(q,v) e_n \cdot e_n = \varphi \end{equation} for $\varphi$ belonging to an appropriate Sobolev space. This leads us to Theorem \ref{cc_over-det}, which shows that the data $(f,g,h,k)$ must satisfy the second compatibility condition \begin{equation}\label{intro_cc_psi} \int_{\Omega}(f\cdot v-gq)-\int_{\Sigma_b}(k\cdot v-h \varphi) =0 \end{equation} for all appropriate $\varphi$, where $(v,q)$ are in the kernel of \eqref{intro_stokes_underdet} and satisfy \eqref{intro_adjoint_param}. Remarkably, the two necessary compatibility conditions identified in Theorems \ref{cc_divergence} and \ref{cc_over-det} are sufficient as well. We prove this in Theorem \ref{iso_overdetermined}, which establishes that \eqref{intro_stokes_overdet} induces an isomorphism into a space of data satisfying the compatibility conditions. The formulation of the second compatibility condition \eqref{intro_cc_psi} is hard to work with directly, so the next step is to reformulate it on the Fourier side and eliminate $\varphi$. We do this, among other things, in Section \ref{sec_fourier} by studying the horizontal Fourier transform of the problem \eqref{intro_stokes_stress}. 
This leads to a second-order boundary-value ODE system on $(0,b)$ with the horizontal spatial frequency $\xi \in \mathbb{R}^{n-1}$ as a parameter. The ODE is not particularly easy to work with, and an interesting feature of our work with it is that we use the solvability of the PDE \eqref{intro_stokes_stress} to deduce some key information about the ODE, which is backward from the usual approach of using the ODE to solve the PDE via Fourier synthesis. In Proposition \ref{proposition_cc_fourier} we reformulate \eqref{intro_cc_psi} as \begin{equation}\label{intro_cc_fourier} \int_{0}^{b}(\hat{f}(\xi,x_{n})\cdot\overline{V(\xi,x_{n}, -\gamma)} - \hat{g}(\xi,x_{n}) \overline{Q(\xi,x_{n}, -\gamma )})dx_{n} -\hat{k}(\xi)\cdot \overline{V(\xi,b,-\gamma)} + \hat{h}(\xi) = 0 \end{equation} for almost every $\xi \in \mathbb{R}^{n-1}$, where $Q$ and $V$ are special solutions to the ODE (see \eqref{QVm_def} for the precise definition), and $\hat{\cdot}$ denotes the horizontal Fourier transform. With the solvability criteria of the over-determined problem and \eqref{intro_cc_fourier} in hand, we return to \eqref{intro_stokes_full}. If a solution $(u,p,\eta)$ exists for given data $(f,g,h,k)$, then \eqref{intro_cc_fourier} requires that \begin{equation}\label{intro_PsiDO} \rho(\xi) \hat{\eta}(\xi) = \psi(\xi) \text{ for }\xi \in \mathbb{R}^{n-1}, \end{equation} where $\psi,\rho : \mathbb{R}^{n-1} \to \mathbb{C}$ are given by \begin{equation} \psi(\xi)= \int_{0}^{b}\left(\hat{f}(\xi,x_{n})\cdot\overline{V(\xi,x_{n},-\gamma)} - \hat{g}(\xi,x_{n}) \overline{Q(\xi,x_{n},-\gamma)} \right ) dx_{n} - \hat{k}(\xi) \cdot \overline{V(\xi,b,-\gamma)}+\hat{h}(\xi), \end{equation} and \begin{equation}\label{intro_rho} \rho(\xi) = 2\pi i \gamma \xi_1 + (1+ 4\pi^2 \sigma \abs{\xi}^2 ) \overline{V_n(\xi,b,-\gamma)}. \end{equation} Here for any $\gamma \in \mathbb{R}$, the function $V_n(\cdot,b,\gamma)$ is the symbol associated to the pseudodifferential operator corresponding to the map \begin{equation}\label{intro_stress_to_dirichlet} H^{s}(\Sigma_b) \ni \varphi \mapsto u_n \vert_{\Sigma_b} \in H^{s+1}(\Sigma_b), \end{equation} where $(u,p) \in H^{s+3/2}(\Omega;\mathbb{R}^{n})\times H^{s+1/2}(\Omega)$ solve \eqref{intro_stokes_stress} with $f=0$, $g=0$, and $k=\varphi e_n$ (see Remark \ref{remark_symbol}). This can be thought of as a Stokes system analog of the Neumann to Dirichlet operator associated to the scalar Laplacian (see Remark \ref{remark_symbol_asymp}), which one might call the normal-stress to normal-Dirichlet operator. This reveals a remarkable fact: the two boundary conditions for $\eta$ combine via the compatibility condition into a single pseudodifferential equation on $\mathbb{R}^{n-1}$, $\rho(\nabla/(2\pi i)) \eta = \check{\psi}$, where the symbol of the operator is a synthesis of the symbols for $\gamma \partial_1$, $I - \sigma \Delta'$, and the symbol of the normal-stress to normal-Dirichlet operator. Clearly, for there to be any hope of solving the pseudodifferential equation \eqref{intro_PsiDO}, we need detailed information about $V$ and $Q$. We obtain this in Section \ref{sec_fourier}, where in addition to deriving \eqref{intro_cc_fourier}, we show that $V_n(\xi,b,-\gamma) =0$ if and only if $\xi =0$, and we obtain asymptotic developments of $V$ and $Q$ as $\xi \to 0$ and $\xi \to \infty$. The latter is particularly tricky as it is predicated on the daunting task of working out closed-form expressions for $V$ and $Q$. 
The asymptotics of $V(\xi,b,-\gamma)$ reveal (see Lemma \ref{rho_lemma} for a precise statement) that for $\gamma \neq 0$ we have that $\rho(\xi)=0$ if and only if $\xi=0$ and that \begin{equation}\label{intro_rho_asymp} \abs{\rho(\xi)}^2 \asymp \begin{cases} \xi_1^2 + \abs{\xi}^4 &\text{for } \abs{\xi} \asymp 0 \\ 1+ \abs{\xi}^2 &\text{for } \abs{\xi} \asymp \infty \end{cases} \text{if }\sigma >0, \text{ while } \abs{\rho(\xi)}^2 \asymp \begin{cases} \abs{\xi}^2 &\text{for } \abs{\xi} \asymp 0 \\ 1+ \abs{\xi}^2 &\text{for } \abs{\xi} \asymp \infty \end{cases} \text{if } \sigma=0 \text{ and }n=2. \end{equation} Here the condition $\gamma \neq 0$ is essential: the asymptotics are worse near $0$ if $\gamma =0$. Having derived detailed information about $V$ and $Q$, we can resume the study of the pseudodifferential equation \eqref{intro_PsiDO}. The first observation is that since $\rho$ vanishes exactly at the origin, $\eta$ is entirely determined via $\hat{\eta} = \psi / \rho$. In particular, this means that in contrast with the previously discussed strategies of determining $(u,p)$ from the data and then determining $\eta$ from $(u,p)$ and the data, the path through \eqref{intro_stokes_full} allows for the determination of $\eta$ first in terms of the data, and then the determination of $(u,p)$ from $\eta$ and the data. The second observation is that the asymptotics \eqref{intro_rho_asymp} dictate the form of the estimates we get for $\hat{\eta}$ when $\gamma \neq 0$: for $\sigma >0$ these read \begin{multline}\label{intro_eta_ests_ST} \int_{B(0,1)} \frac{\xi_1^2 + \abs{\xi}^4}{\abs{\xi}^2} \abs{\hat{\eta}(\xi)}^2 d\xi + \int_{B(0,1)^c} (1+ \abs{\xi}^2)^{s+5/2} \abs{\hat{\eta}(\xi)}^2 d\xi \\ \asymp \int_{B(0,1)} \frac{1}{\abs{\xi}^2} \abs{\psi(\xi)}^2 d\xi + \int_{B(0,1)^c} (1+\abs{\xi}^2)^{s+3/2} \abs{\psi(\xi)}^2 d\xi, \end{multline} while for $\sigma =0$ and $n =2$ these read \begin{multline}\label{intro_eta_ests_no_ST} \int_{B(0,1)} \abs{\hat{\eta}(\xi)}^2 d\xi + \int_{B(0,1)^c} (1+ \abs{\xi}^2)^{s+5/2} \abs{\hat{\eta}(\xi)}^2 d\xi \\ \asymp \int_{B(0,1)} \frac{1}{\abs{\xi}^2} \abs{\psi(\xi)}^2 d\xi + \int_{B(0,1)^c} (1+\abs{\xi}^2)^{s+3/2} \abs{\psi(\xi)}^2 d\xi. \end{multline} Fortunately, the asymptotics of $V$ and $Q$, together with the low frequency bounds provided by \eqref{intro_cc_div}, allow us to control the right-hand sides of these expressions (see Lemma \ref{psi_integral_bounds}). Unfortunately, while in the case $n =2$ the bounds \eqref{intro_eta_ests_ST} and \eqref{intro_eta_ests_no_ST} do provide standard $H^{s+5/2}(\mathbb{R}^{n-1})$ estimates of $\eta$, when $n \ge 3$ and $\sigma >0$ the bound \eqref{intro_eta_ests_ST} does not provide standard Sobolev control due to the poor low frequency control. In this case it's not immediately clear that the resulting $\eta$ even defines a function, much less that it is regular enough to use in the nonlinear analysis of \eqref{flattened_system}. We are thus forced to build specialized Sobolev spaces based on the left side of \eqref{intro_eta_ests_ST} and to study their properties. To the best of our knowledge, the specialized Sobolev spaces defined via \eqref{intro_eta_ests_ST} have not been studied previously in the literature, so we turn our attention to their properties in Section \ref{sec_specialized_sobolev}. In order for these spaces (and in turn the estimate \eqref{intro_eta_ests_ST}) to be useful, they must satisfy three mandates.
The first is that the objects in these spaces must be actual functions and not just tempered distributions or equivalence classes of functions modulo polynomials. The source of this mandate is clear: the $\eta$ determined by the pseudodifferential equation \eqref{intro_PsiDO}, and thus satisfying \eqref{intro_eta_ests_ST}, is meant to serve as the free surface function whose graph determines the fluid domain. The second is that these spaces must have useful properties such as good embedding and mapping properties. In particular, as $s$ is made large we need to guarantee at the very least that the functions in these spaces are continuous and decay at infinity. Third, the spaces have to be well-suited for the nonlinear analysis needed to invoke the implicit function theorem. For this we need good product-type estimates and composition estimates. Remarkably, these spaces, which we call $X^s(\mathbb{R}^{n-1})$ in Section \ref{sec_specialized_sobolev}, satisfy the above three mandates. We show in Proposition \ref{specialized_inclusion} that $X^s(\mathbb{R}) = H^s(\mathbb{R})$, so when $n=2$ these spaces are actually the standard $L^2-$Sobolev spaces. However, when $d \ge 2$ we prove that $H^s(\mathbb{R}^d) \subset X^s(\mathbb{R}^d)$, so the new spaces are strictly bigger than the standard spaces. The Fourier multiplier defining $X^s(\mathbb{R}^d)$ for $d \ge 2$ is anisotropic at low frequencies, with a special role played by the $e_1$ direction, which is the direction of motion of the traveling wave. We prove that this induces a strong anisotropy in the space, which manifests itself in the space not being closed under composition with rigid rotations (see Remark \ref{specialized_aniso_remark}). In addition to the spaces $X^s(\mathbb{R}^{n-1})$, in Section \ref{sec_specialized_sobolev} we also define and derive the basic properties of the spaces $Y^s(\Omega) = H^s(\Omega) + X^s(\mathbb{R}^{n-1})$, where here by abuse of notation we view functions in $X^s(\mathbb{R}^{n-1})$ as being defined in $\Omega$ in the obvious way. We need these spaces due to a complication with the pressure that we will describe below. The importance of $\gamma \neq 0$ here is worth emphasizing. It is precisely this condition that yields the asymptotics \eqref{intro_rho_asymp} and in turn guarantees the inclusion $\eta \in X^s(\mathbb{R}^{n-1})$. Without it we would only get inclusion in a space for which we could not guarantee the three mandates, and in particular in which we could not guarantee the objects in the space were actual functions. This all highlights the interesting fact that our technique is capable of producing genuine traveling wave solutions with $\gamma \neq 0$ but is incapable of producing stationary solutions with $\gamma =0$. Armed with the spaces $X^s(\mathbb{R}^{n-1})$ and $Y^s(\Omega)$ and our analysis of \eqref{intro_stokes_stress}, we characterize the solvability of \eqref{intro_stokes_full} in Section \ref{sec_gravity_capillary}. To do so we first define two Banach spaces for $s \ge 0$. The first, $\mathcal{X}^s$ defined in \eqref{Xs_def}, is built from the specialized spaces $X^s(\mathbb{R}^{n-1})$ and $Y^s(\Omega)$, and is the container space for the solutions: $(u,p,\eta) \in \mathcal{X}^s$. The second, $\mathcal{Y}^s$ defined in \eqref{Ys_def}, is the container space for the data: $(f,g,h,k) \in \mathcal{Y}^s$. This space contains the data space used for the over-determined isomorphism (see Theorem \ref{iso_overdetermined}). 
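To make the low-frequency anisotropy concrete, the following sketch (assuming Python with numpy; we take the weight appearing on the left of \eqref{intro_eta_ests_ST} as a stand-in for the precise definition \eqref{sp_space_def}) evaluates the multiplier $w_X(\xi) = (\xi_1^2 + \abs{\xi}^4)/\abs{\xi}^2$ along the $\xi_1$ and $\xi_2$ axes near the origin in dimension $d=2$.
\begin{verbatim}
# Sketch (assumes Python + numpy; illustrative only): the low-frequency weight
# w_X(xi) = (xi_1^2 + |xi|^4) / |xi|^2 suggested by (intro_eta_ests_ST), d = 2.
# Along e_1 it is comparable to 1 (like the H^s weight near the origin); along
# e_2 it decays like |xi|^2, which is the source of the anisotropy.
import numpy as np

def w_X(xi1, xi2):
    r2 = xi1**2 + xi2**2
    return (xi1**2 + r2**2) / r2

t = np.logspace(-4, -1, 4)                      # frequencies approaching the origin
print("along e_1:", w_X(t, np.zeros_like(t)))   # equal to 1 + t^2
print("along e_2:", w_X(np.zeros_like(t), t))   # equal to t^2
\end{verbatim}
Since $w_X(\xi) \le 1 + \abs{\xi}^2$ on $B(0,1)$, the low-frequency part of the $X^s$ norm is controlled by the corresponding $H^s$ quantity, consistent with the inclusion $H^s(\mathbb{R}^d) \subset X^s(\mathbb{R}^d)$ for $d \ge 2$ noted above; the decay along $\{\xi_1 = 0\}$ singles out the $e_1$ direction, in line with the failure of rotation invariance recorded in Remark \ref{specialized_aniso_remark}.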
We prove that \eqref{intro_stokes_full} induces an isomorphism from $\mathcal{X}^s$ to $\mathcal{Y}^s$ for each $s \ge 0$ when $\gamma \neq 0$. This is proved in Theorem \ref{iso_stokes_capillary} when $\sigma >0$ and in Theorem \ref{iso_stokes_capillary_zero} when $\sigma =0$ and $n=2$. The reason the dimension plays a role without surface tension (i.e. $\sigma=0$) can be seen by examining $\rho$, the symbol of the pseudodifferential operator given in \eqref{intro_rho}. When $n=2$ we can take advantage of the fact that $\gamma \partial_1$ is an elliptic operator with symbol $2\pi i \gamma \xi_1 = 2 \pi i \gamma \xi$ in $\mathbb{R}$ to get the asymptotics listed in \eqref{intro_rho_asymp} for $\abs{\xi} \asymp \infty$. However, when $n \ge 3$ the operator $\gamma \partial_1$ is not elliptic on $\mathbb{R}^{n-1}$, and since $\sigma=0$, the asymptotics of $V_n(\xi,b,-\gamma)$ derived in Theorems \ref{QVm_zero} and \ref{QVm_infty} only yield \begin{equation} \abs{\rho(\xi)}^2 \asymp \begin{cases} \xi_1^2 + \abs{\xi}^4 &\text{for } \abs{\xi} \asymp 0 \\ 1 + \xi_1^2 &\text{for } \abs{\xi} \asymp \infty. \end{cases} \end{equation} This induces a second, high-frequency anisotropy in the analog of \eqref{intro_eta_ests_ST}. Our linear techniques can readily extend to this case through the definition of another further specialized scale of spaces beyond $X^s(\mathbb{R}^{n-1})$. Unfortunately, the spaces defined in this manner do not meet the second or third mandates described above, and we are unable to use them to solve the nonlinear problem \eqref{flattened_system}. As such, we have declined to record this extension of our linear analysis in the present paper. The space $Y^s(\Omega)$ appears in these isomorphisms to handle an issue with the pressure. Indeed, our proofs show that for $(u,p,\eta)$ solving \eqref{intro_stokes_full} for data $(f,g,h,k) \in \mathcal{Y}^s$, we have that $p \in Y^{s+1}(\Omega)$, $\eta \in X^{s+5/2}(\mathbb{R}^{n-1})$, and $p-\eta \in H^{s+1}(\Omega)$. Thus, while the pressure is in the non-standard space $Y^{s+1}(\Omega) = H^{s+1}(\Omega) + X^{s+1}(\mathbb{R}^{n-1})$, we characterize precisely the source of this abnormality: $p = \eta + q$ for $q$ in the standard space $H^{s+1}(\Omega)$. From this we see that the problems with the pressure described above in the discussion of the abandoned fixed-point strategy do not entirely go away. However, the source of low-frequency bad behavior in the pressure is identified as exactly the bad behavior of $\eta$ at low frequencies, and so if it happens that $\eta$ is actually well-behaved at low frequencies, $p$ must be as well. We now arrive at the second key to our strategy: the spaces $\mathcal{X}^s$ and $\mathcal{Y}^s$ are amenable to nonlinear analysis. While the isomorphisms associated to the linearized system \eqref{intro_stokes_full} are interesting in their own right, they are useless in the study of \eqref{flattened_system} if we cannot prove that the nonlinear map from $\mathcal{X}^s$ (or really an open subset thereof) to $\mathcal{Y}^s$ defined by \eqref{flattened_system} is $C^1$. The first difficulty is seen immediately upon examining the requirements of the space $\mathcal{Y}^s$, which in particular require that the linearized compatibility condition \eqref{intro_cc_div} holds. This clearly does not hold for the $g$ and $h$ defined by \eqref{flattened_system}. 
However, in Proposition \ref{nlin_diverge_ident} we identify a nonlinear variant of \eqref{intro_cc_div} that allows us to switch to an equivalent formulation of \eqref{flattened_system} for which the linear compatibility condition holds. This allows us to show that the map defined by this slight reformulation of \eqref{flattened_system} is indeed well-defined from $\mathcal{X}^s$ to $\mathcal{Y}^s$. Then the special nonlinear properties of the spaces $X^s(\mathbb{R}^{n-1})$ and $Y^s(\Omega)$ allow us to prove in Theorem \ref{Xi_well_defd} that this map is indeed $C^1$. We thus arrive at the statement of our first main theorem, which establishes the solvability of \eqref{flattened_system} with surface tension ($\sigma >0$) in dimension $n \ge 2$ and without surface tension ($\sigma =0$) in dimension $n=2$. Before giving the precise statement, a couple of comments on how we treat the bulk forcing and surface stress data are in order. Our ultimate goal is to solve \eqref{traveling_euler} by way of \eqref{flattened_system}, so in the final part of our analysis we will want to have bulk forcing in \eqref{flattened_system} of the form $\mathfrak{f} \circ \mathfrak{F}_\eta$, where $\mathfrak{F}_\eta$ is the flattening map defined in terms of $\eta$ via \eqref{flat_def}, so that when we compose with $\mathfrak{F}_\eta^{-1}$ we have bulk forcing $\mathfrak{f}$ in the first equation of \eqref{traveling_euler}. The minimal assumption on $\mathfrak{f}$ is that it is defined in the domain $\Omega_{b+\eta}$, but this formulation is inconvenient for our analysis because it requires a priori knowledge of $\eta$, which is one of the unknowns we are solving for in terms of $\mathfrak{f}$. We thus assume that $\mathfrak{f}$ is a priori defined in a fixed larger set that we can guarantee always contains $\Omega_{b+\eta}$, which, without loss of generality (thanks to extension operators), we can assume is actually all of $\mathbb{R}^n$. This is consistent with the usual physical understanding that bulk force fields are defined globally, not just within the set currently occupied by a continuum. Since we employ the implicit function theorem in our proofs, we then need to show that the map $(\mathfrak{f},\eta) \mapsto \mathfrak{f}\circ \mathfrak{F}_\eta$ is $C^1$, and it is well known (see \cite{IKT_2013} and references therein) that in the context of standard Sobolev spaces this requires the domain for $\mathfrak{f}$ to enjoy one order of regularity more than the codomain (i.e. $H^{s+1}$ for the domain but $H^s$ for the codomain). We prove in Section \ref{sec_special_nonlinear} that the analogous statement holds in our context as well. In some settings it may be advantageous to maintain the minimal regularity for the bulk force ($H^s$ for domain and codomain), and we have identified a special structural assumption on a bulk force field that allows for this. Indeed, if $f \in H^s(\mathbb{R}^{n-1};\mathbb{R}^n)$ and we define the bounded linear map $L_{\Omega_\zeta} : H^s(\mathbb{R}^{n-1};\mathbb{R}^n) \to H^s(\Omega_\zeta;\mathbb{R}^n)$ via $L_{\Omega_\zeta} f(x) = f(x')$ (see Lemma \ref{sobolev_slice_extension}), then $L_{\Omega_b} f \circ \mathfrak{F}_\eta^{-1}(x) = f(x') = L_{\Omega_{b+\eta}}f(x)$. In other words, bulk force fields with no $x_n$ dependence are invariant under composition with $\mathfrak{F}_\eta^{-1}$ and thus stay the same as we change from \eqref{flattened_system} to \eqref{traveling_euler}.
The map $f \mapsto L_{\Omega_b} f$ is also linear and thus smooth without any augmentation of regularity in its domain. In our formulation of the existence result for \eqref{flattened_system} we have thus chosen to incorporate both types of forces, taking the right side of the first equation in \eqref{flattened_system} to be of the form $\mathfrak{f}\circ \mathfrak{F}_\eta + L_{\Omega_b} f$ for $\mathfrak{f} \in H^{s+1}(\mathbb{R}^n;\mathbb{R}^n)$ and $f \in H^s(\mathbb{R}^{n-1};\mathbb{R}^n)$. A similar analysis applies to the surface stresses, and we have chosen to consider stresses in the third equation of \eqref{flattened_system} of the form $\mathcal{T} \circ \mathfrak{F}_\eta \vert_{\Sigma_b} + S_{b} T$ for $\mathcal{T} \in H^{s+2}(\mathbb{R}^n; \mathbb{R}^{n\times n}_{\operatorname*{sym}})$, $T \in H^{s+1/2}(\mathbb{R}^{n-1}; \mathbb{R}^{n\times n}_{\operatorname*{sym}})$, and $S_b T(x',b) = T(x')$ (see Lemma \ref{sobolev_slice_extension_surface}). Here we need to increase the regularity count to $s+2$ for $\mathcal{T}$ so that the map $(\mathcal{T},\eta) \mapsto \mathcal{T} \circ \mathfrak{F}_\eta$ is $C^1$ with values in $H^{s+1}(\Omega; \mathbb{R}^{n\times n}_{\operatorname*{sym}})$, which then allows us to take a trace to arrive in $H^{s+1/2}(\Sigma_b; \mathbb{R}^{n\times n}_{\operatorname*{sym}})$. Optimal regularity is maintained for $T$, though. Note also that in the following statement we will refer to the spaces $C^k_b$, $C^k_0$, and ${_{0}}H^{s}(\Omega;\mathbb{R}^{n})$, defined later in Section \ref{sec_notation}. \begin{theorem}[Proved later in Section \ref{sec_main_thms_flat}]\label{main_thm_flat} Suppose that either $\sigma >0$ and $n \ge 2$ or $\sigma =0$ and $n =2$. Assume that $n/2 < s \in \mathbb{N}$, let $\mathcal{X}^s$ be as defined by \eqref{Xs_def}, and let $L_\Omega = L_{\Omega_b}$ be as in Lemma \ref{sobolev_slice_extension} and $S_b$ be as defined in Lemma \ref{sobolev_slice_extension_surface}. Then there exist open sets \begin{equation} \mathcal{U}^s \subset (\mathbb{R} \backslash \{0\}) \times H^{s+2}(\mathbb{R}^{n} ; \mathbb{R}^{n\times n}_{\operatorname*{sym}}) \times H^{s+1/2}(\mathbb{R}^{n-1} ; \mathbb{R}^{n\times n}_{\operatorname*{sym}}) \times H^{s+1}(\mathbb{R}^n;\mathbb{R}^n) \times H^s(\mathbb{R}^{n-1};\mathbb{R}^n) \end{equation} and $\mathcal{O}^s\subset \mathcal{X}^s$ such that the following hold. \begin{enumerate} \item $(0,0,0) \in \mathcal{O}^s$, and for every $(u,p,\eta) \in \mathcal{O}^s$ we have that \begin{equation} u \in C^{2 + \lfloor s-n/2 \rfloor}_b(\Omega;\mathbb{R}^n),\; p \in C^{1 + \lfloor s-n/2 \rfloor}_b(\Omega),\; \eta \in C^{3 + \lfloor s-n/2 \rfloor}_0(\mathbb{R}^{n-1}), \end{equation} \begin{equation} \begin{split} \lim_{\abs{x'} \to \infty} \partial^\alpha u(x) &= 0 \text{ for all } \alpha \in \mathbb{N}^n \text{ such that} \abs{\alpha} \le 2 + \lfloor s-n/2 \rfloor, \text{ and }\\ \lim_{\abs{x'} \to \infty} \partial^\alpha p(x) &= 0 \text{ for all } \alpha \in \mathbb{N}^n \text{ such that} \abs{\alpha} \le 1 + \lfloor s-n/2 \rfloor, \end{split} \end{equation} $\max_{\mathbb{R}^{n-1}} \abs{\eta} \le b/2,$ and if $\mathfrak{F}_\eta : \bar{\Omega} \to \bar{\Omega}_{b+\eta}$ denotes the map from \eqref{flat_def}, then $\mathfrak{F}_\eta$ is a bi-Lipschitz homeomorphism and is a $C^{3 + \lfloor s-n/2 \rfloor}$ diffeomorphism. \item We have that $(\mathbb{R} \backslash \{0\}) \times \{0\} \times \{0\} \times \{0\} \times \{0\} \subset \mathcal{U}^s$. 
\item For each $(\gamma,\mathcal{T}, T,\mathfrak{f},f) \in \mathcal{U}^s$ there exists a unique $(u,p,\eta) \in \mathcal{O}^s$ classically solving \begin{equation}\label{main_thm_flat_0} \begin{cases} (u-\gamma e_1) \cdot \nab_{\mathcal{A}} u - \Delta_{\mathcal{A}} u + \nabla_{\mathcal{A}} p = \mathfrak{f} \circ \mathfrak{F}_\eta + L_\Omega f & \text{in } \Omega \\ \diverge_{\mathcal{A}}{u}=0 & \text{in } \Omega \\ (pI- \mathbb{D}_{\mathcal{A}} u) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} + (\mathcal{T} \circ \mathfrak{F}_\eta \vert_{\Sigma_b} + S_b T) \mathcal{N} & \text{on } \Sigma_{b} \\ u\cdot \mathcal{N} + \gamma \partial_1 \eta = 0 &\text{on } \Sigma_{b} \\ u =0 &\text{on } \Sigma_0. \end{cases} \end{equation} \item The map $\mathcal{U}^s \ni (\gamma,\mathcal{T},T,\mathfrak{f},f) \mapsto (u,p,\eta) \in \mathcal{O}^s$ is $C^1$ and locally Lipschitz. \end{enumerate} \end{theorem} Note that if $n=2$ in Theorem \ref{main_thm_flat}, then in fact \begin{equation}\label{intro_Xs_n=2} \mathcal{O}^s \subseteq \mathcal{X}^s = {_{0}}H^{s+2}(\Omega;\mathbb{R}^{2}) \times H^{s+1}(\Omega) \times H^{s+5/2}(\mathbb{R}), \end{equation} and so the solutions belong to standard Sobolev spaces. It is only in dimension $n \ge 3$ that we need the specialized spaces $X^{s+5/2}(\mathbb{R}^{n-1})$ and $Y^{s+1}(\Omega)$, as defined in \eqref{sp_space_def} and \eqref{an_space_def}, respectively. With Theorem \ref{main_thm_flat} in hand, we turn our attention back to the original Eulerian problem \eqref{traveling_euler}. Recall from the discussion at the end of Section \ref{sec_eulerian_form} that Proposition \ref{trav_prop} implies that under some mild Sobolev regularity assumptions on solutions, there cannot exist nontrivial solutions without a nontrivial stress and forcing. When $n=2$, \eqref{intro_Xs_n=2} shows that our functional framework enforces these mild conditions, and we conclude that there cannot exist nontrivial solutions \begin{equation} \eta \in H^{s+5/2}(\mathbb{R}) \text{ with } \inf_{\mathbb{R}^{n-1}} \eta > -b, \; v \in {_{0}}H^{s+2}(\Omega_{b+\eta};\mathbb{R}^{2}), \; q \in H^{s+1}(\Omega_{b+\eta}) \end{equation} to \eqref{traveling_euler} with $1 = n/2 < s\in \mathbb{N}$, $\mathfrak{f} =0$, and $\mathcal{T}=0$. However, when $n\ge 3$ the space $\mathcal{X}^s$ (defined in \eqref{Xs_def}) is built from our specialized Sobolev spaces, and so Proposition \ref{trav_prop} is inapplicable. Our first result on \eqref{traveling_euler} thus addresses the question of whether traveling wave solutions exist within our functional framework without stress and forcing when $n \ge 3$. In the statement we recall that the spaces $Y^{s}(\Omega_\zeta)$ are defined in \eqref{an_space_def}. \begin{theorem}[Proved later in Section \ref{sec_main_thms_euler}]\label{main_thm_no_forcing} Suppose that $\gamma \in \mathbb{R} \backslash \{0\}$, $\sigma >0$, and $n \ge 3$. Let $s = \lfloor n/2 \rfloor + 1 \in \mathbb{N}$. 
There exists $r >0$ such that if $\eta \in X^{s+5/2}(\mathbb{R}^{n-1})$, $v \in {_{0}}H^{s+2}(\Omega_{b+\eta};\mathbb{R}^{n})$, and $q \in Y^{s+1}(\Omega_{b+\eta})$ satisfy $\inf_{\mathbb{R}^{n-1}} \eta > -b$, $q-\eta \in H^{s+1}(\Omega_{b+\eta})$, and \begin{equation} \begin{cases} (v-\gamma e_1) \cdot \nabla v - \Delta v + \nabla q = 0 & \text{in } \Omega_{b+\eta} \\ \diverge{v}=0 & \text{in } \Omega_{b+ \eta} \\ (q I- \mathbb{D} v) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} & \text{on } \Sigma_{b+\eta} \\ - \gamma \partial_1 \eta = v \cdot \mathcal{N} &\text{on } \Sigma_{b+\eta} \\ v =0 &\text{on } \Sigma_0, \end{cases} \end{equation} then either $v=0$, $q=0$, and $\eta =0$, or else \begin{equation} \norm{v}_{{_{0}}H^{s+2}} + \norm{q}_{Y^{s+1}} + \norm{\eta}_{X^{s+5/2}} + \norm{q-\eta}_{H^{s+1}} \ge r. \end{equation} \end{theorem} The upshot of this theorem is that if a nontrivial traveling wave solution $(v,q,\eta)$ exists without forcing (i.e. $\mathfrak{f} =0$ and $\mathcal{T}=0$ in \eqref{traveling_euler}), then either the solution does not belong to the stated function spaces, or else it does but must exist outside a ball of known radius. In particular, we cannot rule out the possible existence of large nontrivial unforced solutions in $\mathcal{X}^s$, though we do not expect them to exist. We emphasize that this result implies nothing about the existence of unforced solutions in other functional frameworks, such as those built from H\"{o}lder spaces. Finally, we turn our attention to the existence of forced solutions to \eqref{traveling_euler}. Note that we continue to consider generalized bulk forces of the form $\mathfrak{f} + L_{\Omega_{b+\eta}} f$ where $L_{\Omega_{b+\eta}}$ is as in Lemma \ref{sobolev_slice_extension}, and we consider generalized surface stresses of the form $\mathcal{T}\vert_{\Sigma_{b+\eta}} + S_{b+\eta}T$, where we write $S_{b+\eta}T(x) = T(x')$. \begin{theorem}[Proved later in Section \ref{sec_main_thms_euler}]\label{main_thm_euler} Suppose that either $\sigma >0$ and $n \ge 2$ or $\sigma =0$ and $n =2$. Assume that $n/2 < s \in \mathbb{N}$, and let \begin{equation} \mathcal{U}^s \subset (\mathbb{R} \backslash \{0\}) \times H^{s+2}(\mathbb{R}^{n} ; \mathbb{R}^{n\times n}_{\operatorname*{sym}}) \times H^{s+1/2}(\mathbb{R}^{n-1} ; \mathbb{R}^{n\times n}_{\operatorname*{sym}}) \times H^{s+1}(\mathbb{R}^n;\mathbb{R}^n) \times H^s(\mathbb{R}^{n-1};\mathbb{R}^n) \end{equation} and $\mathcal{O}^s\subset \mathcal{X}^s$ be the open sets from Theorem \ref{main_thm_flat}. Then for each $(\gamma,\mathcal{T},T,\mathfrak{f},f) \in \mathcal{U}^s$ there exist: \begin{enumerate}[(i)] \item a free surface function $\eta \in X^{s+5/2}(\mathbb{R}^{n-1}) \cap C^{3 + \lfloor s-n/2 \rfloor}_0(\mathbb{R}^{n-1})$ such that $\max_{\mathbb{R}^{n-1}} \abs{\eta} \le b/2$ and $\mathfrak{F}_\eta$, defined by \eqref{flat_def}, is a bi-Lipschitz homeomorphism and $C^{3 + \lfloor s-n/2 \rfloor}$ diffeomorphism, \item a velocity field $v \in {_{0}}H^{s+2}(\Omega_{b+\eta};\mathbb{R}^{n}) \cap C^{2 + \lfloor s-n/2 \rfloor}_b(\Omega_{b+\eta};\mathbb{R}^n)$, \item a pressure $q \in Y^{s+1}(\Omega_{b+\eta}) \cap C^{1 + \lfloor s-n/2 \rfloor}_b(\Omega_{b+\eta})$, \item constants $C, R>0$ \end{enumerate} such that the following hold.
\begin{enumerate} \item $(v,q,\eta)$ classically solve \begin{equation}\label{main_thm_euler_0} \begin{cases} (v-\gamma e_1) \cdot \nabla v - \Delta v + \nabla q = \mathfrak{f} + L_{\Omega_{b+\eta}} f & \text{in } \Omega_{b+\eta} \\ \diverge{v}=0 & \text{in } \Omega_{b+ \eta} \\ (q I- \mathbb{D} v) \mathcal{N} = (\eta -\sigma \mathcal{H}(\eta) )\mathcal{N} + (\mathcal{T}\vert_{\Sigma_{b+\eta}} + S_{b+\eta}T) \mathcal{N} & \text{on } \Sigma_{b+\eta} \\ - \gamma \partial_1 \eta = v \cdot \mathcal{N} &\text{on } \Sigma_{b+\eta} \\ v =0 &\text{on } \Sigma_0. \end{cases} \end{equation} \item $(v\circ \mathfrak{F}_\eta, q \circ \mathfrak{F}_\eta, \eta) \in \mathcal{O}^s \subset \mathcal{X}^s$. \item If $(\gamma_\ast,\mathcal{T}_\ast, T_\ast,\mathfrak{f}_\ast,f_\ast) \in \mathcal{U}^s$ and \begin{equation} \abs{\gamma-\gamma_\ast} +\norm{\mathcal{T}-\mathcal{T}_\ast}_{H^{s+2}} +\norm{T-T_\ast}_{H^{s+1/2}} + \norm{\mathfrak{f}-\mathfrak{f}_\ast}_{H^{s+1}} + \norm{f-f_\ast}_{H^s} < R, \end{equation} then for $(v_\ast,q_\ast,\eta_\ast)$ the corresponding solution triple we have the local Lipschitz estimate \begin{multline} \norm{ (v\circ \mathfrak{F}_\eta, q\circ \mathfrak{F}_\eta, \eta) - (v_\ast\circ \mathfrak{F}_{\eta_\ast}, q_\ast \circ \mathfrak{F}_{\eta_\ast}, \eta_\ast) }_{\mathcal{X}^s} \\ \le C \left(\abs{\gamma-\gamma_\ast} + \norm{\mathcal{T}-\mathcal{T}_\ast}_{H^{s+2}} +\norm{T-T_\ast}_{H^{s+1/2}} + \norm{\mathfrak{f}-\mathfrak{f}_\ast}_{H^{s+1}} + \norm{f-f_\ast}_{H^s} \right). \end{multline} \end{enumerate} \end{theorem} We conclude with a couple of remarks about Theorem \ref{main_thm_euler}. First note that the functional framework requires that $\eta \to 0$, $\mathfrak{F}_\eta \to I$, $v \to 0$, and $q \to 0$ as $\abs{x'} \to \infty$. This means that our traveling wave solutions correspond to what are called solitary waves in the inviscid traveling wave literature. Second, note that solutions with different free surface functions, say $\eta$ and $\eta_\ast$, have velocities and pressures defined in different domains, $\Omega_{b+\eta}$ and $\Omega_{b+\eta_\ast}$ respectively, so there is no natural way to compare the velocities and pressures with Sobolev norms. In the local Lipschitz estimate of the third item we have chosen to measure the difference in velocity and pressure by pulling back to the flattened domain $\Omega$ and using the $\mathcal{X}^s$ norm, which we believe is a reasonable metric given how our solutions are constructed. Third, note that while we have treated the bulk force and surface stress as distinct, in some cases it is possible to shift terms from one to the other in the same way that we shifted the gravitational force from the bulk to the boundary. Indeed, if $\mathfrak{f} = \mathfrak{f}_0 + \nabla \psi$, then the potential gradient term can be shifted to the boundary by redefining the pressure via $q \mapsto q -\psi$ and the stress via $\mathcal{T} \mapsto \mathcal{T} -\psi I$, which leaves $\mathfrak{f}_0$ in place of $\mathfrak{f}$ in the bulk forcing. The regularity requirements for $\psi$ are the same, though: we need $\psi \in H^{s+2}(\mathbb{R}^n)$ to guarantee that the bulk force term satisfies $\nabla \psi \in H^{s+1}(\mathbb{R}^n;\mathbb{R}^n)$ and the stress term satisfies $\psi I \in H^{s+2}(\mathbb{R}^n;\mathbb{R}^{n \times n}_{\operatorname*{sym}})$. \subsection{Notational conventions}\label{sec_notation} Here we record some notational conventions used throughout the paper.
We always write $2 \le n \in \mathbb{N}$ for the dimension of the fluid domain $\Omega$. We will also need to talk about function spaces defined on other sets, and in particular on subsets of $\partial \Omega$. To avoid confusion and tedious appearances of $n-1$, we will often describe these other sets as subsets of $\mathbb{R}^d$ for $1 \le d \in \mathbb{N}$. In other words, $d \ge 1$ is a generic dimensional parameter, and $n \ge 2$ always refers to the dimension of the fluid. We write $\mathscr{S}(\mathbb{R}^d)$ for the usual Schwartz class of complex-valued functions and $\mathscr{S}'(\mathbb{R}^d)$ for the space of tempered distributions. We define the Fourier transform, $\hat{\cdot}$, and inverse Fourier transform, $\check{\cdot}$, on $\mathbb{R}^d$ via \begin{equation}\label{FT_def} \hat{f}(\xi) = \int_{\mathbb{R}^d} f(x) e^{-2\pi i x\cdot \xi} dx \text{ and } \check{f}(x) = \int_{\mathbb{R}^d} f(\xi) e^{2\pi i x\cdot \xi} d\xi. \end{equation} By employing the Parseval and Tonelli-Fubini theorems, we extend \eqref{FT_def} to horizontal Fourier transforms acting on functions defined on $\Omega$ via \begin{equation}\label{FT_hor_def} \hat{f}(\xi,x_n) = \int_{\mathbb{R}^{n-1}} f(x',x_n) e^{-2\pi i x' \cdot \xi} dx' \text{ for } \xi \in \mathbb{R}^{n-1}, \text{ and } \check{f}(x) = \check{f}(x',x_n) = \int_{\mathbb{R}^{n-1}} f(\xi,x_n) e^{2\pi i x' \cdot \xi} d\xi. \end{equation} For $k\in \mathbb{N}$, an open set $\varnothing \neq U \subseteq \mathbb{R}^d$, and a finite dimensional inner-product space $W$ we define the usual $L^2-$Sobolev space \begin{equation} H^k(U;W) = \{f: U \to W \;\vert\; \partial^\alpha f \in L^2(U;W) \text{ for } \abs{\alpha} \le k\}. \end{equation} For $0 \le s \in \mathbb{R}$ we then let $H^s(U;W)$ denote the fractional spaces obtained by interpolation. In the event that $U = \mathbb{R}^d$ we take the norm on these spaces to be the standard one defined on the Fourier side, and we also extend to $s \in (-\infty,0) \subset \mathbb{R}$ in the usual way. When the target is $W = \mathbb{R}$ we will usually drop this in the notation, writing simply $H^s(U)$. For $0 < r \in \mathbb{R}$ we define the real-valued negative homogeneous Sobolev space to be \begin{equation}\label{homogeneous_def} \dot{H}^{-r}(\mathbb{R}^d) = \{f \in \mathscr{S}'(\mathbb{R}^d) \;\vert\; f = \bar{f}, \hat{f} \in L^1_{loc}(\mathbb{R}^d), \text{ and } \snorm{f}_{\dot{H}^{-r}} < \infty \} \end{equation} for \begin{equation} \snorm{f}_{\dot{H}^{-r}}^2 = \int_{\mathbb{R}^d} \frac{1}{\abs{\xi}^{2r}} \abs{\hat{f}(\xi)}^2 d\xi. \end{equation} Suppose now that $\zeta : \mathbb{R}^{n-1} \to \mathbb{R}$ is Lipschitz and satisfies $\inf \zeta >0$. For $1/2 < s \in \mathbb{R}$ we can use trace theory to define \begin{equation} {_{0}}H^{s}(\Omega_{\zeta};\mathbb{R}^n ) = \{u\in H^{s}(\Omega_{\zeta};\mathbb{R}^n) \;\vert\; u=0\text{ on } \Sigma_{0}\}, \end{equation} where the equality $u=0$ on $\Sigma_{0}$ is in the sense of traces. We will mostly employ these spaces in the case $\Omega_\zeta = \Omega$ (i.e. $\zeta = b$), in which case we will need the following extra definitions. Recall that the symmetrized gradient $\mathbb{D}$ is defined by \eqref{sym_grad_def}. We endow ${_{0}}H^{1}(\Omega;\mathbb{R}^n )$ with the inner-product \begin{equation} (u,v)_{{_{0}}H^{1}} = \frac{1}{2} \int_{\Omega}\mathbb{D}{u}:\mathbb{D}{v}, \end{equation} which, thanks to Korn's inequality (see Lemma \ref{korn}), is indeed an inner-product and generates the same topology as the standard $H^1$ norm. 
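For later use we also record how the horizontal Fourier transform \eqref{FT_hor_def} interacts with derivatives. Assuming for the sake of illustration that $f$ is smooth with sufficient horizontal decay, integration by parts in the horizontal variables shows that
\begin{equation}
\widehat{\partial_j f}(\xi,x_n) = 2\pi i \xi_j \hat{f}(\xi,x_n) \text{ for } 1 \le j \le n-1, \text{ while } \widehat{\partial_n f}(\xi,x_n) = \partial_n \hat{f}(\xi,x_n),
\end{equation}
so that, for instance, the horizontal Fourier transform of $-\Delta f$ is $\left( -\partial_n^2 + 4\pi^2 \abs{\xi}^2 \right) \hat{f}(\xi,x_n)$. This elementary computation underlies the ODE systems studied in Section \ref{sec_fourier}. 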
We define the closed subspace of solenoidal vector fields to be \begin{equation}\label{H1 div zero} {_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n}) = \{ u \in {_{0}}H^{1}(\Omega;\mathbb{R}^{n}) \;\vert\; \diverge u=0\}. \end{equation} Then ${_{0}}H^{1}_\sigma(\Omega;\mathbb{R}^n)$ is a Hilbert space with the same inner-product. In what follows we will often use the fact that by the symmetry of $\mathbb{D}{u}$, \begin{equation}\label{symmetrized product} \int_{\Omega }\mathbb{D}u:\nabla v =\frac{1}{2}\int_{\Omega} \mathbb{D}{u}:\mathbb{D}{v} \end{equation} for all $u,v\in H^{1}(\Omega;\mathbb{R}^{n})$. Given $k \in \mathbb{N}$, a real Banach space $V$, and an open set $\varnothing \neq U \subseteq \mathbb{R}^d$, we define the Banach space \begin{equation} C^k_b(U;V) = \{f: U \to V \;\vert\; f \text{ is k-times continuously differentiable, and} \norm{f}_{C^k_b} < \infty\}, \end{equation} where \begin{equation} \norm{f}_{C^k_b} = \sum_{\abs{\alpha} \le k} \sup_{x \in U} \norm{\partial^\alpha f(x)}_{V}. \end{equation} When $V = \mathbb{R}$ we will typically write $C^k_b(U) = C^k_b(U;\mathbb{R})$. We also define $C^k_0(\mathbb{R}^d;V) \subset C^k_b(\mathbb{R}^d;V)$ to be the closed subspace \begin{equation} C^k_0(\mathbb{R}^d;V) = \{ f\in C^k_b(\mathbb{R}^d;V) \;\vert\; \lim_{\abs{x} \to \infty} \partial^\alpha f(x) =0 \text{ for all }\abs{\alpha}\le k\}, \end{equation} which we endow with the norm from $C^k_b(\mathbb{R}^d;V)$. Again we typically write $C^k_0(\mathbb{R}^d) = C^k_0(\mathbb{R}^d;\mathbb{R})$. Finally, we introduce a convenient abuse of notation that we will use throughout the paper. The hyperplane $\Sigma_b = \{x \in \mathbb{R}^n \;\vert\; x_n=b \}$ is canonically diffeomorphic to $\mathbb{R}^{n-1}$ via the map $\Sigma_b \ni (x',b) \mapsto x' \in \mathbb{R}^{n-1}$. Using this, we can identify $H^s(\Sigma_b;W)$ with $H^s(\mathbb{R}^{n-1};W)$ for any finite dimensional inner-product space $W$. This abuse of notation is justified by a gain in brevity, as it allows us to write $f(x')$ in place of $f(x',b)$ for $x' \in \mathbb{R}^{n-1}$, etc. It also allows us to use the Fourier transform on $\Sigma_b$ in a natural way. \subsection{Plan of paper} In Section \ref{sec_stress_bcs} we study the Stokes problem with stress boundary conditions \eqref{intro_stokes_stress} and characterize its solvability in standard $L^2-$based Sobolev spaces. In Section \ref{sec_overdetermined} we study the over-determined problem \eqref{intro_stokes_overdet}, derive its compatibility conditions, and characterize its solvability in Sobolev spaces. In Section \ref{sec_fourier} we turn our attention to an ODE associated to the horizontal Fourier transform of the problem \eqref{intro_stokes_stress}. We study some special solutions to this ODE and derive their asymptotic developments. In Section \ref{sec_specialized_sobolev} we study some specialized Sobolev spaces. Section \ref{sec_gravity_capillary} concerns the analysis of the linearized problem \eqref{intro_stokes_full}. We characterize its solvability in terms of the specialized spaces from Section \ref{sec_specialized_sobolev}. Section \ref{sec_navier_bcs} contains a brief digression on the solvability of the Stokes problem with Navier boundary conditions \eqref{intro_stokes_navier}. In Section \ref{sec_nonlinear_analysis} we employ nonlinear analysis to prove all of the main theorems. Appendix \ref{sec_analysis_tools} contains some analysis tools used throughout the paper. 
\section{The $\gamma-$Stokes equations with stress boundary conditions}\label{sec_stress_bcs} In this section we study the linear problem \begin{equation}\label{problem_gamma_stokes_stress} \begin{cases} \diverge S(p,u) - \gamma \partial_1 u =f & \text{in }\Omega\\ \diverge u=g & \text{in }\Omega\\ S(p,u)e_{n}=k, & \text{on }\Sigma_b \\ u=0 & \text{on }\Sigma_{0}, \end{cases} \end{equation} where $f\in ({_{0}}H^{1}(\Omega;\mathbb{R}^{n}))^{\ast}$, $g\in L^{2}(\Omega)$, $k\in H^{-1/2}(\Sigma_b;\mathbb{R}^{n})$ are given data. A related problem with $\gamma =0$ was studied in \cite{Abels_2006} in $L^p-$Sobolev spaces. Here we work only in $L^2-$based spaces but also go to higher regularity than \cite{Abels_2006}. Of course, the regularity gain is not surprising and can be derived from the general theory of \cite{ADN_1964}. Here we present a self-contained and elementary treatment for the reader's convenience. \subsection{The specified divergence problem and the pressure as Lagrange multiplier} Before addressing \eqref{problem_gamma_stokes_stress} we need to develop a couple auxiliary tools related to the divergence operator. We develop these now. The first allows us to solve the specified divergence problem, which is useful in reducing to the case $g =0$ in \eqref{problem_gamma_stokes_stress} and is essential in dealing with the pressure in the weak formulation. The following proof is adapted from Theorem 2 in \cite{BB_2007}. \begin{proposition}\label{specified_divergence} Let $g\in L^{2}(\Omega)$. Then there exists $v\in {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ such that $\diverge v=g$ in $\Omega$ and \begin{equation}\label{specified_divergence_0} \Vert v\Vert_{ {_{0}}H^{1} }\leq c\Vert g\Vert_{L^{2}} \end{equation} for some constant $c=c(b,n)>0$. \end{proposition} \begin{proof} Let $U=\mathbb{R}^{n-1}\times(-3b,b)$ and define $g_1 \in L^2(U)$ via \begin{equation} g_{1}(x)= \begin{cases} g(x) & \text{in }\Omega\\ 0 & \text{in }U\setminus\Omega. \end{cases} \end{equation} Consider the Dirichlet problem \begin{equation} \begin{cases} \Delta\varphi=g_{1} & \text{in }U\\ \varphi=0 & \text{on }\partial U. \end{cases} \end{equation} The unique weak solution $\varphi \in H^1_0(U)$ to this problem is given by the minimizer of the functional \begin{equation} H^1_0(U)\ni v \mapsto\int_{U} \frac{1}{2}|\nabla v|^{2}+g_{1}v. \end{equation} This functional is coercive thanks to the Poincar\'{e} inequality, Lemma \ref{poincare} (which continues to hold in $H_{0}^{1}(U)$ via a translation and scaling argument), and the Cauchy-Schwarz inequality. Moreover, using $v=0$ as a comparison, we find that \begin{equation} \int_{U} \frac{1}{2}|\nabla\varphi|^{2}+g_{1}\varphi \leq \int_{U} \frac{1}{2} \abs{\nabla 0}^2 + g_1 0 = 0 \end{equation} and so again by Poincar\'{e}'s inequality, \begin{equation} \Vert\nabla\varphi\Vert_{L^{2}(U)}^{2}\leq 2 \Vert\varphi\Vert_{L^2(U)} \Vert g_{1}\Vert_{L^{2}(U)} \leq c(b) \Vert\nabla\varphi\Vert_{L^{2}(U)}\Vert g\Vert_{L^{2}(\Omega)}, \end{equation} which yields the estimate $\Vert\nabla\varphi\Vert_{L^{2}(U)} \leq c(b) \Vert g\Vert_{L^{2}(\Omega)}$. Using standard regularity results (see Theorem \ref{theorem_regularity_linear} below for a sketch) we deduce that $\varphi\in H^{2}(U)$ and \begin{equation}\label{specified_divergence_1} \Vert\varphi\Vert_{H^{2}(U)}\leq c\Vert g\Vert_{L^{2}(\Omega)} \end{equation} for a constant $c = c(n,b) >0$. 
We now define $v : \Omega \to \mathbb{R}^n$ via \begin{equation} \begin{split} v^{\prime}(x) & = \nabla^{\prime}\varphi(x^{\prime},x_{n}) + 3\nabla^{\prime} \varphi(x^{\prime},-x_{n}) - 4\nabla^{\prime}\varphi(x^{\prime},-2x_{n}),\\ v_{n}(x) & = \partial_{n}\varphi(x^{\prime},x_{n}) - 3\partial_{n} \varphi(x^{\prime},-x_{n}) + 2\partial_{n}\varphi(x^{\prime},-2x_{n}). \end{split} \end{equation} Then, using the fact that $g_1 = 0$ in $\mathbb{R}^{n-1} \times (-3b,0)$, we find that \begin{equation} \diverge v(x)=\Delta\varphi(x)+3\Delta\varphi(x^{\prime} ,-x_{n})-4\Delta\varphi(x^{\prime},-2x_{n})=g(x) \text{ for }x \in \Omega. \end{equation} Moreover, $v=0$ on $\Sigma_{0}$ by construction, so $v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$. The estimate \eqref{specified_divergence_0} then follows directly from \eqref{specified_divergence_1} and the definition of $v$. \end{proof} Next we aim to use Proposition \ref{specified_divergence} to perform the usual trick of introducing the pressure as a Lagrange multiplier associated to the divergence free condition. Given $p\in L^{2}(\Omega)$, consider the linear functional $L_{p}:{_{0}}H^{1}(\Omega ; \mathbb{R}^{n})\rightarrow\mathbb{R}$ defined by \begin{equation} L_{p} v = \int_{\Omega}p\diverge v \text{ for } v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n}). \end{equation} Then $\Vert L_{p}\Vert_{({_{0}}H^{1})^{\ast}} \leq c(n,b) \Vert p\Vert_{L^{2}}$, and so the Riesz representation theorem shows that there exists a unique $w_{p}\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ such that $\Vert w_{p}\Vert_{{_{0}}H^{1}}=\Vert L_{p}\Vert_{({_{0}}H^{1})^{\ast}}$ and \begin{equation}\label{operator Q} \int_{\Omega}p\diverge v=(w_{p},v)_{{_{0}}H^{1}(\Omega)} \text{ for all }v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n}). \end{equation} We then use this to define the bounded linear operator $Q:L^{2}(\Omega) \to {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ via $Qp = w_p$. The next result records some essential properties of $Q$. \begin{proposition}\label{orth_decomp} Let $Q:L^{2}(\Omega)\rightarrow{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ be the linear operator defined above. Then $Q$ has closed range, and $(\operatorname*{Ran}Q)^\bot = {_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$, where ${_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$ is defined in \eqref{H1 div zero}. Consequently, we have the orthogonal decomposition \begin{equation} {_{0}}H^{1}(\Omega;\mathbb{R}^{n})={_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n}) \oplus \operatorname*{Ran}Q. \end{equation} \end{proposition} \begin{proof} We divide the proof into two steps. \textbf{Step 1 -- Closed range:} For every $p\in L^{2}(\Omega)$ we have \begin{equation} \Vert Q p \Vert_{{_{0}}H^{1}}= \Vert w_{p}\Vert_{{_{0}}H^{1}} \leq c(n,b) \Vert p\Vert_{L^{2}}. \end{equation} On the other hand, by Proposition \ref{specified_divergence} there exists $v_{0}\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ such that $\diverge v_{0}=p$ and $\Vert v_{0}\Vert_{{_{0}}H^{1} }\leq c\Vert p \Vert_{L^{2}}$. Hence, by \eqref{operator Q}, \begin{equation} \Vert p\Vert_{L^{2}}^{2} =\int_{\Omega}p\diverge v_{0}= (w_{p},v)_{{_{0}}H^{1}} \leq \Vert w_{p}\Vert_{{_{0}}H^{1}} \Vert v_{0} \Vert_{{_{0}}H^{1}} = \Vert Qp \Vert_{{_{0}}H^{1} } \Vert v_{0}\Vert_{{_{0}}H^{1} } \leq c\Vert Qp \Vert_{{_{0}}H^{1} }\Vert p\Vert_{L^{2}}, \end{equation} and so \begin{equation} \Vert p\Vert_{L^{2}}\leq c\Vert Q(p)\Vert_{{_{0}}H^{1}}. 
\end{equation} Hence, we have shown that \begin{equation} \label{Q injective} c^{-1}\Vert p\Vert_{L^{2} }\leq\Vert Q(p)\Vert_{{_{0}}H^{1}}\leq\sqrt{n}\Vert p\Vert_{L^{2}} \end{equation} for all $p\in L^{2}(\Omega)$, which implies that $Q$ has closed range. \textbf{Step 2 -- Orthogonal decomposition:} From the first step we know that $\operatorname*{Ran}Q$ is closed, and so we have the orthogonal decomposition ${_{0}}H^{1}(\Omega;\mathbb{R}^{n}) = \operatorname*{Ran}Q \oplus (\operatorname*{Ran}Q)^\perp$. We now endeavor to identify the subspace $(\operatorname*{Ran}Q)^\perp$. Let $v\in(\operatorname*{Ran}Q)^{\perp}$, that is, \begin{equation} (Qp,v)_{{_{0}}H^{1}(\Omega)}=0 \end{equation} for all $p\in L^{2}(\Omega)$. Then by \eqref{operator Q}, \begin{equation} \int_{\Omega}p\diverge v=0 \end{equation} for all $p\in L^{2}(\Omega)$, which implies that $\diverge v=0$, and so $v\in{_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$. Conversely, if $v\in{_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$, then $\diverge v=0$ and so by \eqref{operator Q}, \begin{equation} (Q(p),v)_{{_{0}}H^{1}(\Omega)}=0 \end{equation} for all $p\in L^{2}(\Omega)$, which implies that $v\in(\operatorname*{Ran} Q)^{\perp}$. This shows that $(\operatorname*{Ran}Q)^{\perp}={_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$, which completes the proof. \end{proof} The following corollary is essential in introducing the pressure in the weak formulation of \eqref{problem_gamma_stokes_stress}. \begin{corollary} \label{pressure_introduction} Let $\Lambda\in({_{0}}H^{1}(\Omega;\mathbb{R}^{n}))^{\ast}$ be such that $\left\langle \Lambda,v\right\rangle=0$ for all $v\in{_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$. Then there exists a unique function $p\in L^{2}(\Omega)$ such that \begin{equation} \left\langle \Lambda,v\right\rangle =\int_{\Omega}p\diverge v\quad\text{for all }v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n}). \end{equation} Moreover, there is a constant $c = c(n,b)>0$ such that \begin{equation} \Vert p\Vert_{L^{2}}\leq c\Vert\Lambda\Vert_{({_{0}}H^{1})^{\ast}}. \end{equation} \end{corollary} \begin{proof} In view of the Riesz representation theorem, there exists $w\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ such that \begin{equation} \left\langle \Lambda,v\right\rangle=(w,v)_{{_{0}}H^{1}}\quad\text{for all }v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n}), \end{equation} and $\Vert w\Vert_{{_{0}}H^{1}}=\Vert\Lambda\Vert_{({_{0}}H^{1})^{\ast}}$. Then by hypothesis, $w$ is orthogonal to ${_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$, and so Proposition \ref{orth_decomp} implies that $w \in \operatorname*{Ran}Q$, which provides us with $p\in L^{2}(\Omega)$ such that $Q p=w$. It follows from \eqref{Q injective} that \begin{equation} \Vert p\Vert_{L^{2}}\leq c\Vert Q(p)\Vert_{{_{0}}H^{1} } = c\Vert w\Vert_{{_{0}}H^{1}} =c\Vert\Lambda\Vert_{({_{0}}H^{1})^{\ast}}. \end{equation} Moreover, $p$ is unique since $Q$ is injective by \eqref{Q injective}. The conclusion now follows from \eqref{operator Q}. \end{proof} \subsection{Solving \eqref{problem_gamma_stokes_stress} } We are now ready to prove the existence of solutions to \eqref{problem_gamma_stokes_stress}. We begin with weak solutions. 
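Before stating the weak formulation, we sketch the computation that produces it, assuming for the moment that $(u,p)$ is a smooth solution of \eqref{problem_gamma_stokes_stress} with $f \in L^2(\Omega;\mathbb{R}^n)$. Testing the first equation of \eqref{problem_gamma_stokes_stress} against $v \in {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and integrating by parts (the boundary contribution on $\Sigma_0$ vanishes since $v = 0$ there) yields
\begin{multline}
\int_{\Omega} f \cdot v = \int_{\Omega} (\diverge S(p,u) - \gamma \partial_1 u) \cdot v = -\int_{\Omega} S(p,u) : \nabla v - \int_{\Omega} \gamma \partial_1 u \cdot v + \int_{\Sigma_b} S(p,u) e_n \cdot v \\
= \int_{\Omega} \left( \frac{1}{2}\mathbb{D}u : \mathbb{D}v - p \diverge v - \gamma \partial_1 u \cdot v \right) + \int_{\Sigma_b} k \cdot v,
\end{multline}
where in the last equality we used $S(p,u) = pI - \mathbb{D}u$, the identity \eqref{symmetrized product}, and the stress boundary condition on $\Sigma_b$. Rearranging gives \eqref{weak_solution} below, and the general data are then handled by interpreting the bulk and boundary terms as dual pairings. 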
Employing the identity \eqref{symmetrized product}, a simple computation reveals that the weak formulation of \eqref{problem_gamma_stokes_stress} is to find a velocity field $u\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and a pressure $p\in L^{2}(\Omega)$ satisfying $\diverge{u}=g$ in $\Omega$ as well as \begin{equation}\label{weak_solution} \int_{\Omega}\frac{1}{2}\mathbb{D}u:\mathbb{D}v-p\diverge v - \gamma\partial_{1}u\cdot v =\left\langle f,v\right\rangle -\left\langle k,v \right\rangle_{\Sigma_b} \end{equation} for all $v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, where here $\br{f,v}$ denotes the dual pairing of $f \in ({_{0}}H^{1}(\Omega;\mathbb{R}^{n})^\ast$ and $v \in {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, and $\br{k,v}_{\Sigma_b}$ denotes the dual pairing of $k\in H^{-1/2}(\Sigma_b;\mathbb{R}^{n}) = (H^{1/2}(\Sigma_b;\mathbb{R}^{n}))^\ast$ and $v \vert_{\Sigma_b} \in H^{1/2}(\Sigma_b;\mathbb{R}^{n})$. \begin{theorem}[Existence of weak solutions]\label{theorem existence linear} Let $f\in({_{0}}H^{1}(\Omega;\mathbb{R}^{n}))^{\ast }$, $g\in L^{2}(\Omega)$, and $k\in H^{-1/2}(\Sigma_b;\mathbb{R}^{n})$. Then there exist unique $u\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and $p\in L^2(\Omega)$ satisfying $\diverge u=g$ in $\Omega$ and \eqref{weak_solution}. Moreover, \begin{equation}\label{bounds u p} \Vert u\Vert_{{_{0}}H^{1} } + \Vert p\Vert_{L^{2}}\leq c\Vert f \Vert_{({_{0}}H^{1})^{\ast}} + c\Vert g\Vert_{L^{2}} +c\Vert k \Vert_{H^{-1/2}} \end{equation} for some constant $c=c(b,n)>0$. \end{theorem} \begin{proof} We divide the proof into three steps. \textbf{Step 1 -- Setup:} Consider the bilinear map $B: {_{0}}H^{1}(\Omega;\mathbb{R}^{n})\times{_{0}}H^{1}(\Omega;\mathbb{R}^{n})\rightarrow\mathbb{R}$ given by \begin{equation} B(u,v) =\int_{\Omega}\frac{1}{2}\mathbb{D}u:\mathbb{D}v-\gamma\partial _{1}u\cdot v. \end{equation} In light of Korn's inequality, Lemma \ref{korn}, $B$ is well-defined and continuous. Note that \begin{equation}\label{p1_annihilate} \int_{\Omega} \partial_1 u \cdot u = \int_{\Omega} \partial_1 \frac{\abs{u}^2}{2} =0, \end{equation} and hence \begin{equation} B(u,u) = \frac{1}{2}\int_{\Omega}|\mathbb{D}u|^{2} = \norm{u}_{{_{0}}H^{1}}^2, \end{equation} which shows that $B$ is coercive. The Hilbert space ${_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$, defined in \eqref{H1 div zero}, is a closed subspace of ${_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, so this analysis also shows that $B$ is well-defined, continuous, and coercive on ${_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$. \textbf{Step 2 -- A special case:} Assume now that $g=0$. Thanks to the first step, we are in a position to apply Lax--Milgram to find a unique $u\in{_{0}}H_{\sigma}^{1}(\Omega ;\mathbb{R}^{n})$ such that \begin{equation} B(u,v)-\left\langle f,v\right\rangle +\left\langle k,v\right\rangle _{\Sigma_b}=0 \end{equation} for all $v\in{_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$. Moreover, \begin{equation}\label{lm1} \Vert u\Vert_{{_{0}}H^{1} }\leq c\Vert f\Vert_{({_{0}}H^{1})^{\ast}} +c\Vert k\Vert_{H^{-1/2}} \end{equation} for some constant $c= c(n,b)>0$. The functional $\Lambda\in({_{0}}H^{1}(\Omega;\mathbb{R}^{n}))^{\ast}$ defined by \begin{equation} \left\langle \Lambda,v\right\rangle:=B(u,v)-\left\langle f,v\right\rangle +\left\langle k,v\right\rangle_{\Sigma_b} \text{ for } v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n}) \end{equation} vanishes on ${_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$. 
Then according to Corollary \ref{pressure_introduction} there exists a unique function $p\in L^{2}(\Omega)$ such that \begin{equation} B(u,v)-\left\langle f,v\right\rangle +\left\langle k,v\right\rangle _{\Sigma_b}=\int_{\Omega}p\diverge v \end{equation} for all $v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, and we have the estimate \begin{equation} \Vert p\Vert_{L^{2}} \leq c\Vert\Lambda\Vert_{({_{0}}H^{1})^{\ast}}\leq c\Vert u\Vert_{{_{0}}H^{1}} +c\Vert f \Vert_{({_{0}}H^{1})^{\ast}} + c\Vert k\Vert_{H^{-1/2} } \leq c\Vert f\Vert_{({_{0}}H^{1} )^{\ast}}+c\Vert k\Vert_{H^{-1/2}}, \end{equation} where in the last inequality we used \eqref{lm1}. \textbf{Step 3 -- The general case:} Finally, given $g\in L^{2}(\Omega)$ we use Proposition \ref{specified_divergence} to find $w\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ such that $\diverge w=g$ and $\Vert w\Vert_{{_{0}}H^{1}}\leq c\Vert g \Vert_{L^{2}}$. We define $f_{1}\in({_{0}}H^{1}(\Omega;\mathbb{R}^{n}))^{\ast}$ via $\left\langle f_{1},v\right\rangle :=\left\langle f,v\right\rangle - B(w,v)$ and apply Step 2 with $f$ replaced by $f_{1}$ to find $u_{0}\in{_{0}}H_{\sigma}^{1}(\Omega;\mathbb{R}^{n})$ and $p\in L^{2}(\Omega)$ such that \begin{equation} \int_{\Omega} \left(\frac{1}{2}\mathbb{D}u_{0}:\mathbb{D}v -\gamma\partial _{1}u_{0}\cdot v \right) -\left\langle f,v\right\rangle +\int_{\Omega}\left( \frac{1}{2}\mathbb{D}w:\mathbb{D}v - \gamma\partial_{1}w\cdot v\right) + \left\langle k,v\right\rangle _{\Sigma_b}=\int_{\Omega} p\diverge v \end{equation} for all $v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, and \begin{equation}\label{bounds u0} \Vert u_{0}\Vert_{{_{0}}H^{1}}+\Vert p\Vert_{L^{2}} \leq c\Vert f_{1}\Vert_{({_{0}}H^{1} )^{\ast}} +c\Vert k\Vert_{H^{-1/2}} \leq c\Vert f\Vert_{({_{0}}H^{1})^{\ast}} +c\Vert g \Vert_{L^{2}} +c\Vert k\Vert_{H^{-1/2}}, \end{equation} where in the last inequality we used the fact that $\Vert w\Vert_{{_{0}}H^{1}}\leq c\Vert g\Vert_{L^{2}}$. Then the function $u:=u_{0}+w \in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ satisfies $\diverge u=g$ in $\Omega$ and \begin{equation} \int_{\Omega} \left( \frac{1}{2}\mathbb{D}u:\mathbb{D}v-\gamma\partial_{1}u\cdot v \right) - \left\langle f,v\right\rangle +\left\langle k,v\right\rangle _{\Sigma_b}=\int_{\Omega}p\diverge v \end{equation} for all $v\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$, which gives \eqref{weak_solution}. In view of \eqref{bounds u0} and again the fact $\Vert w \Vert_{{_{0}}H^{1}}\leq c\Vert g\Vert_{L^{2}}$ we have that the function $u$ satisfies \eqref{bounds u p}. The uniqueness of the pair $(u,p)$ then follows the uniqueness component of Step 2. \end{proof} Next we prove some regularity results. These may be derived from the well known regularity results for elliptic systems proved in \cite{ADN_1964}. We include an elementary proof here for the convenience of the reader. \begin{theorem}[Regularity of weak solutions]\label{theorem_regularity_linear} Let $s \ge 0$, $f\in H^{s}(\Omega;\mathbb{R}^{n})$, $g\in H^{s+1}(\Omega)$, and $k\in H^{s+1/2}(\Sigma_b;\mathbb{R}^{n})$. If $u \in {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and $p\in L(\Omega)$ satisfy $\diverge u=g$ in $\Omega$ and \eqref{weak_solution}, then $u\in {_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$ and $p\in H^{s+1}(\Omega)$. Moreover, we have the estimate \begin{equation}\label{bounds s} \Vert u\Vert_{H^{s+2}}+\Vert p\Vert_{H^{s+1}}\leq c\Vert f \Vert_{H^{s}}+c\Vert g\Vert_{H^{s+1}}+c\Vert k\Vert_{H^{s+1/2}} \end{equation} for a constant $c=c(b,n,s)>0$. \end{theorem} \begin{proof} We divide the proof into two steps. 
\textbf{Step 1 -- The base case:} Assume that $s=0$. Given $h\in\mathbb{R} \setminus \{0\}$, $i=1,\ldots,n-1$, and $w:\Omega\rightarrow \mathbb{R}^m$, we write $\delta_{h}^i w(x):=\frac{w(x+he_{i})-w(x)}{h}$ for the horizontal difference quotient in the direction $e_i$. Given $w \in{_{0}}H^{1} (\Omega;\mathbb{R}^{n})$,\ take $v:=\delta_{-h}^i w\in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ in \eqref{weak_solution}. Then the change of variables $y=x - he_{i}$ shows that we have the identity \begin{equation}\label{pde difference quotient} \int_{\Omega} \frac{1}{2}\mathbb{D}\delta_{h}^i u :\mathbb{D}w - \delta_{h}^i p \diverge w - \gamma\partial_{1}\delta_{h}^i u \cdot w =\int_{\Omega}\delta_{h}^i f \cdot w - \int_{\Sigma_b} \delta_{h}^i k \cdot w , \end{equation} which shows that $\delta_{h}^i u$ and $\delta_{h}^i p$ satisfy \eqref{weak_solution} with $f$, $g$, and $k$ replaced by $\delta_{h}^i f$, $\delta_{h}^i g$, and $\delta_{h}^i k$, respectively. Hence, by Theorem \ref{theorem existence linear}, \begin{equation} \Vert \delta_{h}^i u\Vert_{{_{0}}H^{1}} + \Vert \delta_{h}^i p \Vert_{L^{2}} \leq c\Vert\delta_{h}^i f \Vert_{({_{0}}H^{1})^{\ast}} + c\Vert \delta_{h}^i g \Vert_{L^{2}} + \Vert\delta_{h}^i k\Vert_{H^{-1/2}}. \end{equation} Employing the change of variables $y=x+he_{i}$, the Cauchy-Schwarz inequality, and Corollary \ref{diff_quote_omega}, we may bound \begin{equation} \left\vert \int_{\Omega}\delta_{h}^i f \cdot v\right\vert =\left\vert \int_{\Omega} f\cdot \delta_{-h}^i v \right\vert \leq\Vert f\Vert_{L^{2}}\Vert \delta_{-h}^i v \Vert_{L^{2}} \leq \Vert f\Vert_{L^{2}}\Vert \partial_{i} v \Vert_{L^{2}}. \end{equation} Hence, from Korn's inequality, Lemma \ref{korn}, we have the bound $\Vert\delta_{h}^i f\Vert_{({_{0}}H^{1})^{\ast}} \leq c(n,b) \Vert f \Vert_{L^{2}}$. Similarly, Corollary \ref{diff_quote_omega} tells us that $\Vert \delta_{h}^i g\Vert_{L^{2}} \leq \Vert\partial_{i} g \Vert_{L^{2}}$, while Proposition \ref{diff_quote_fullspace} implies that $\Vert\delta_{h}^i k \Vert_{H^{-1/2}} \leq c\norm{k}_{H^{1/2}}$. We deduce from these that \begin{equation} \Vert \delta_{h}^i u\Vert_{{_{0}}H^{1}} + \Vert \delta_{h}^i p \Vert_{L^{2}} \leq c\Vert f\Vert_{L^{2}}+c\Vert\partial_{i} g \Vert_{L^{2}}+c\Vert k\Vert_{H^{1/2}} \end{equation} for all $h\neq0$ and $1 \le i \le n-1$. In turn, these bounds imply (see, for instance, Section 11.5 of \cite{Leoni_2017}) that $\partial_{i} u \in{_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and that $\partial_{i}p\in L^{2}(\Omega)$, with \begin{equation} \label{bounds 1} \Vert\partial_{i}u\Vert_{{_{0}}H^{1}} + \Vert\partial_{i}p\Vert_{L^{2}}\leq c\Vert f\Vert_{L^{2}} + c\Vert\partial_{i} g \Vert_{L^{2}} + c\Vert k\Vert_{H^{1/2}} \end{equation} for all $i=1,\ldots,n-1$. Differentiating the equation $\diverge u=g$ with respect to $x_{n}$, we find that \begin{equation} \partial_{n}^{2}u_{n}=-\diverge ^{\prime}\partial_{n}u^{\prime}+\partial_{n}g \end{equation} and so by \eqref{bounds 1}, $\partial_n^2 u_n \in L^2(\Omega)$ and \begin{equation} \Vert\partial_{n}^{2}u_{n}\Vert_{L^{2}} \leq \Vert \diverge ^{\prime}\partial_{n}u^{\prime}\Vert_{L^{2}} +\Vert\partial_{n}g\Vert_{L^{2} } \leq c\Vert f\Vert_{L^{2}} + c\Vert\nabla g\Vert_{L^{2} }. 
\end{equation} For $i=1,\ldots,n-1$, taking $v=\varphi e_{i}$ with $\varphi\in C_{c}^{\infty}(\Omega)$, we have that \begin{equation} \frac{1}{2} \mathbb{D} u : \mathbb{D} v = \nabla u_i \cdot \nabla \varphi + \partial_i u \cdot \nabla \varphi, \end{equation} and so upon using $v$ in \eqref{weak_solution} we find that \begin{multline}\label{equation 1} 0 =\int_{\Omega} \nabla u_{i}\cdot\nabla\varphi+\partial_{i}u\cdot \nabla\varphi-p\diverge (\varphi e_i)-\gamma\partial_{1}u\cdot (\varphi e_i) -f \cdot (\varphi e_i) \\ =\int_{\Omega} \nabla u_{i}\cdot\nabla\varphi + (\diverge u) \partial_i \varphi-p \partial_i \varphi - \gamma \partial_{1}u_i \varphi -f_i \varphi , \end{multline} where we integrated the second term by parts. Hence, $u_{i}$ is a distributional solution to the equation \begin{equation} \label{equation i} \Delta u_{i}=\partial_{i}p-\partial_{i}g-\gamma\partial_{1}u_{i}-f_{i}\in L^{2}(\Omega). \end{equation} From the standard weak existence and local regularity theory for Poisson's equation, together with Weyl's lemma, we deduce that $u_{i}\in H_{\operatorname*{loc}}^{2}(\Omega)$ and that the previous equation holds almost everywhere in $\Omega$. In particular, $\partial_n^2 u_i \in L^2(\Omega)$, and we have the estimate \begin{equation} \Vert\partial_{n}^{2}u_{i}\Vert_{L^{2}} \leq\Vert\Delta^{\prime} u_{i}\Vert_{L^{2}} + \Vert\partial_{i} p -\partial_{i} g -\gamma \partial_{1}u_{i}-f_{i}\Vert_{L^{2}} \leq c\Vert f\Vert_{L^{2}}+c\Vert g\Vert_{H^{1}} + c\Vert k\Vert_{H^{1/2}}. \end{equation} It remains to show that $\partial_{n}p$ exists and is in $L^{2}(\Omega)$. Taking $v=\varphi e_{n}$ in \eqref{weak_solution} for some $\varphi\in C_{c}^{\infty}(\Omega)$ and integrating by parts, we see that \begin{equation} \int_{\Omega}p\partial_{n}\varphi =\int_{\Omega} \nabla u_{n} \cdot \nabla \varphi + \partial_{n} u \cdot \nabla \varphi-\gamma\partial_{1} u_{n} \varphi-f_{n}\varphi =\int_{\Omega}(-\Delta u_{n} - \partial_{n}g-\gamma\partial_{1}u_{n} -f_{n})\varphi, \end{equation} which implies that the weak derivative $\partial_n p$ exists and satisfies $\partial_{n}p=\Delta u_{n}+\partial_{n}g+\gamma\partial_{1}u_{n}+f_{n} \in L^2(\Omega)$. In turn, we may combine with the above estimates to arrive at the bound \begin{equation} \norm{\partial_n p}_{L^2} \le c\Vert f\Vert_{L^{2}}+c\Vert g\Vert_{H^{1}} + c\Vert k\Vert_{H^{1/2}}. \end{equation} This completes the proof in the case $s =0$. \textbf{Step 2 -- Induction and interpolation:} The case $s\in\mathbb{N}$ can be obtained through induction by reasoning as in Step 1. Indeed, the base case $s=0$ has been established in Step 1. Assume that the result is true for $s\in \mathbb{N}$. More precisely, assume that if $f\in H^{s}(\Omega;\mathbb{R}^{n})$, $g\in H^{s+1}(\Omega)$, and $k\in H^{s+1/2}(\Sigma_b;\mathbb{R}^{n})$, then $u\in{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$, $p\in H^{s+1}(\Omega)$, and the bound \eqref{bounds s} holds. Let $f\in H^{s+1}(\Omega;\mathbb{R}^{n})$, $g\in H^{s+2}(\Omega)$, and $k\in H^{s+3/2}(\Sigma_b;\mathbb{R}^{n})$. Then by \eqref{pde difference quotient} we have that $\delta_{h}^i u$ and $\delta_{h}^i p$ satisfy \eqref{weak_solution} with $f$, $g$, and $k$ replaced by $\delta_{h}^i f$, $\delta_{h}^i g$, and $\delta_{h}^i k$, respectively. Hence, by the induction hypothesis \begin{equation} \Vert\delta_{h}^i u \Vert_{H^{s+2}} + \Vert\delta_{h}^i p \Vert_{H^{s+1}} \leq c\Vert\delta_{h}^i f \Vert_{H^{s}} +c\Vert\delta_{h}^i g \Vert_{H^{s+1}}+c\Vert\delta_{h}^i k \Vert_{H^{s+1/2}}. 
\end{equation} Reasoning as in Step 1 and again employing Proposition \ref{diff_quote_fullspace} and Corollary \ref{diff_quote_omega}, we can bound the right-hand side from above by $c\Vert f\Vert_{H^{s+1} }+c\Vert g\Vert_{H^{s+2}}+c\Vert k\Vert_{H^{s+3/2}}$ and then in turn conclude that $\partial_{i}u\in H^{s+2}(\Omega;\mathbb{R}^{n})$ and $\partial_{i}p\in H^{s+1}(\Omega)$ for all $i=1,\ldots,n-1$. As in Step 1, we then use the identity $\diverge u=g$ to show that $\partial_{n}^{s+3}u_{n}$ exists in $L^{2}(\Omega)$ with the appropriate bounds. We then use \eqref{equation i} to show that $\partial_{n}^{s+3}u_{i}$ exists in $L^{2}(\Omega)$ for $1 \le i \le n-1$, and then use the equation $-\partial_{n}p=-\Delta u_{n}- \partial_{n}g-\gamma\partial_{1}u_{n}-f_{n}$ to prove that $\partial_{n}^{s+2}p$ exists in $L^{2}(\Omega)$ and obeys the appropriate bounds. The non-integer case $s \in (0,\infty) \backslash \mathbb{N}$ can then be obtained by interpolation. Indeed, we have now shown that the linear operator \begin{equation} T:H^{s}(\Omega;\mathbb{R}^{n})\times H^{s+1}(\Omega)\times H^{s+1/2}(\Sigma_b;\mathbb{R}^{n}) \rightarrow{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n}) \times H^{s+1}(\Omega) \end{equation} defined by $T(f,g,k) = (u,p)$ is continuous for all $s\in\mathbb{N}$. We can now use classical interpolation theory (see, for instance, \cite{BL_1976,Leoni_2017,Triebel_1995}) to prove that $T$ is continuous for all $s>0$. \end{proof} We are now ready to prove the main theorem of this section. \begin{theorem}\label{iso_gamma_stokes} For every $\gamma\in\mathbb{R}$ and every $s \ge 0$, the bounded linear operator \begin{equation} \Phi_\gamma : {_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})\times H^{s+1}(\Omega) \rightarrow H^{s}(\Omega;\mathbb{R}^{n})\times H^{s+1}(\Omega) \times H^{s+1/2}(\Sigma_b;\mathbb{R}^{n}) \end{equation} given by \begin{equation} \Phi_\gamma(u,p) = (\diverge{S(p,u)} - \gamma \partial_1 u, \diverge{u}, \left. S(p,u)e_{n}\right\vert _{\Sigma_b} ) \end{equation} is an isomorphism. \end{theorem} \begin{proof} Theorems \ref{theorem existence linear} and \ref{theorem_regularity_linear} show that the bounded linear operator $\Phi_\gamma$ is surjective. Theorem \ref{theorem existence linear} shows that it is injective. \end{proof} \section{The over-determined $\gamma-$Stokes equations}\label{sec_overdetermined} In this section we study the over-determined problem \begin{equation} \label{problem_gamma_stokes_overdet} \begin{cases} \diverge S(p,u)-\gamma\partial_{1}u=f & \text{in }\Omega \\ \diverge u=g & \text{in }\Omega \\ S(p,u)e_{n}=k,\quad u_{n}=h & \text{on }\Sigma_b \\ u=0 & \text{on }\Sigma_{0}, \end{cases} \end{equation} where, for $s \ge 0$, $f\in H^{s}(\Omega;\mathbb{R}^{n})$, $g\in H^{s+1}(\Omega)$, $k \in H^{s+1/2}(\Sigma_b;\mathbb{R}^{n})$, and $h\in H^{s+3/2}(\Sigma_b)$. In view of Theorem \ref{iso_gamma_stokes}, the value of $u_n$ on $\Sigma_b$ is completely determined by $f$, $g$, and $k$. Hence, in general the problem \eqref{problem_gamma_stokes_overdet} is over-determined and admits no solution. In this section we identify compatibility conditions on the data $(f,g,h,k)$ that are necessary and sufficient for solutions to \eqref{problem_gamma_stokes_overdet} to exist, and we prove a corresponding isomorphism theorem. \subsection{Divergence compatibility } In the over-determined problem \eqref{problem_gamma_stokes_overdet} we seek to specify both $\diverge{u} = g$ in $\Omega$ and the boundary conditions $u_n =0$ on $\Sigma_0$ and $u_n = h$ on $\Sigma_b$. 
If we were to posit integrability of $g$ and $h$, then the divergence theorem would require the compatibility condition \begin{equation} \int_{\Omega} g = \int_{\Sigma_b} h. \end{equation} The functional framework we employ in this paper is built on subspaces of $L^2(\Omega)$, and $\Omega$ has infinite measure, so in general we cannot verify these integrability conditions. As such, the form of compatibility between $g$ and $h$ is somewhat more subtle than the condition stated above. We record this condition now. \begin{theorem}[Divergence-trace compatibility condition]\label{cc_divergence} Suppose that $u\in {_{0}}H^{1}(\Omega;\mathbb{R}^{n})$ and define $g = \diverge{u} \in L^2(\Omega)$ and $h = u_n\vert_{\Sigma_b} \in H^{1/2}(\Sigma_b;\mathbb{R})$. Then \begin{equation}\label{H minus 1} h-\int_{0}^{b}g(\cdot,x_{n}) dx_{n} \in \dot{H}^{-1}(\mathbb{R}^{n-1}) \end{equation} and \begin{equation}\label{H minus 1 alt} \snorm{h-\int_{0}^{b}g(\cdot,x_{n}) dx_{n}}_{\dot{H}^{-1}} \le 2\pi \sqrt{b} \norm{u}_{L^2}. \end{equation} \end{theorem} \begin{proof} Since $u_n \in H^1(\Omega)$ we have that $u_n(x',\cdot)$ is absolutely continuous for almost every $x' \in \mathbb{R}^{n-1}$ (see, for instance, Theorem 11.45 in \cite{Leoni_2017}). Since $u=0$ on $\Sigma_{0}$ and $\diverge u=g$ in $\Omega$, we may then compute \begin{equation} u_{n}(x^{\prime},b)=\int_{0}^{b}\partial_{n}u_{n}(x^{\prime},x_{n} ) dx_{n} =\int_{0}^{b}(g(x^{\prime},x_{n})-\diverge ^{\prime} u^{\prime}(x^{\prime},x_{n})) dx_{n} \end{equation} for almost every $x' \in \mathbb{R}^{n-1}$. Hence, \begin{equation} u_{n}(x^{\prime},b)-\int_{0}^{b}g(x^{\prime},x_{n})dx_{n} =-\diverge ^{\prime}\int_{0}^{b}u^{\prime}(x^{\prime},x_{n}) dx_{n}. \end{equation} Write $R\in H^1(\mathbb{R}^{n-1};\mathbb{R}^{n-1})$ for $R(x') = \int_0^b u'(x',x_n) dx_n$. Then we may use the Cauchy-Schwarz inequality, Parseval's identity, and Tonelli's theorem to bound \begin{multline} \snorm{ \diverge{R}}_{\dot{H}^{-1}}^2 = \int_{\mathbb{R}^{n-1}} \frac{1}{\abs{\xi}^2} \abs{2\pi i \xi \cdot \hat{R}(\xi)}^2 d\xi \le 4\pi^2 \int_{\mathbb{R}^{n-1}} \abs{\hat{R}(\xi)}^2 d\xi = 4\pi^2 \int_{\mathbb{R}^{n-1}} \abs{R(x')}^2 dx' \\ \le 4\pi^2 b \int_{\Omega} \abs{u'(x)}^2 dx = 4\pi^2 b \norm{u'}_{L^2}^2, \end{multline} which proves \eqref{H minus 1} and \eqref{H minus 1 alt}. \end{proof} \subsection{Adjoint problem and compatibility } In the spirit of the closed range theorem, we seek to understand when the over-determined problem \eqref{problem_gamma_stokes_overdet} admits a solution in terms of a corresponding adjoint problem. To motivate the form of the adjoint problem we first present the following calculation. \begin{lemma}\label{adjoint_calc} Suppose that $u,v \in {_{0}}H^{2}(\Omega;\mathbb{R}^n)$ and $p,q \in H^1(\Omega)$. Then \begin{multline} \int_{\Omega} (\diverge S(p,u) - \gamma \partial_1 u) \cdot v - (\diverge{u})q - \int_{\Omega} u\cdot (\diverge S(q,v) + \gamma \partial_1 v) - p \diverge{v} \\ = \int_{\Sigma_b} S(p,u) e_n \cdot v - u \cdot S(q,v) e_n. 
\end{multline} \end{lemma} \begin{proof} We simply integrate by parts to see that \begin{multline} \int_{\Omega} (\diverge S(p,u) - \gamma \partial_1 u) \cdot v - (\diverge{u})q = \int_{\Omega} - S(p,u) : \nabla v + \gamma u \cdot \partial_1 v - (\diverge{u})q + \int_{\Sigma_b} S(p,u) e_n \cdot v \\ = \int_{\Omega} \frac{1}{2} \mathbb{D} u : \mathbb{D} v -p \diverge{v} + \gamma u \cdot \partial_1 v - (\diverge{u})q + \int_{\Sigma_b} S(p,u) e_n \cdot v, \end{multline} and similarly, \begin{equation} \int_{\Omega} u\cdot (\diverge S(q,v) + \gamma \partial_1 v) - p \diverge{v} = \int_{\Omega} \frac{1}{2} \mathbb{D} u : \mathbb{D} v - (\diverge{u}) q + \gamma u \cdot \partial_1 v - p \diverge{v} + \int_{\Sigma_b} u \cdot S(q,v) e_n. \end{equation} The result follows by subtracting these expressions. \end{proof} This lemma shows that the formal adjoint of the over-determined problem \eqref{problem_gamma_stokes_overdet} is the under-determined problem \begin{equation}\label{underdetermined} \begin{cases} \diverge S(q,v)+\gamma\partial_{1}v= f & \text{in }\Omega \\ \diverge v= g & \text{in }\Omega \\ (S(q,v)e_{n})^{\prime}=k' & \text{on }\Sigma_b \\ v=0 & \text{on }\Sigma_{0}. \end{cases} \end{equation} Note that this is under-determined in the sense that on $\Sigma_b$ we only specify $n-1$ boundary conditions instead of the standard $n$. Taking a cue from the closed range theorem, we then examine the space of solutions to the homogeneous under-determined problem, i.e. \eqref{underdetermined} with $f=0$, $g=0$, and $k'=0$. In light of Theorem \ref{iso_gamma_stokes} (with $\gamma$ replaced by $-\gamma$) the solution to this problem is completely determined by the boundary condition $S(q,v)e_{n}=\psi e_{n}$ on $\Sigma_b$. In other words, we may parameterize the space of homogeneous solutions to the under-determined problem \eqref{underdetermined} by $\psi$ by way of the $(-\gamma)-$Stokes problem \begin{equation}\label{problem_adjoint} \begin{cases} \diverge S(q,v)+\gamma\partial_{1}v=0 & \text{in }\Omega \\ \diverge v=0 & \text{in }\Omega \\ S(q,v)e_{n}=\psi e_n & \text{on }\Sigma_b \\ v=0 & \text{on }\Sigma_{0}. \end{cases} \end{equation} Using this parameterization, we arrive at a convenient formulation of the second compatibility condition associated to the over-determined problem. \begin{theorem}[Over-determined compatibility condition]\label{cc_over-det} Let $s \ge 0$ and suppose that $f\in H^{s}(\Omega;\mathbb{R}^{n})$, $g \in H^{s+1}(\Omega)$, $h\in H^{s+3/2}(\Sigma_b)$, and $k\in H^{s+1/2}(\Sigma_b;\mathbb{R}^{n})$. Assume that the problem \eqref{problem_gamma_stokes_overdet} admits a solution $u\in{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$ and $p\in H^{s+1}(\Omega)$. For every $\psi\in H^{s+1/2}(\Sigma_b)$ let $v\in{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$ and $q\in H^{s+1}(\Omega)$ be the unique solution (given by Theorem \ref{iso_gamma_stokes}) to the adjoint problem \eqref{problem_adjoint}. Then the following compatibility condition holds: \begin{equation}\label{compatibility_condition} \int_{\Omega}(f\cdot v-gq)-\int_{\Sigma_b}(k\cdot v-h\psi) =0. \end{equation} \end{theorem} \begin{proof} In light of Lemma \ref{adjoint_calc}, \eqref{problem_gamma_stokes_overdet}, and \eqref{problem_adjoint} we have that \begin{equation} \int_{\Omega} f \cdot v - gq = \int_{\Sigma_b} k \cdot v - u\cdot \psi e_n = \int_{\Sigma_b} k\cdot v - h \psi. \end{equation} Then \eqref{compatibility_condition} follows by rearranging. 
\end{proof} \subsection{Some function spaces and the over-determined isomorphism } With the compatibility conditions of Theorems \ref{cc_divergence} and \ref{cc_over-det} in hand, we may now completely characterize the solvability of the over-determined problem \eqref{problem_gamma_stokes_overdet}. To do so, we first need to introduce a pair of function spaces for the data. For $s \ge 0$ we define the space \begin{equation}\label{Ys_def} \mathcal{Y}^s = \{(f,g,h,k) \in H^s(\Omega; \mathbb{R}^n) \times H^{s+1}(\Omega) \times H^{s+3/2}(\Sigma_b) \times H^{s+1/2}(\Sigma_b;\mathbb{R}^n) \;\vert\; h\text{ and }g \text{ satisfy } \eqref{H minus 1} \}. \end{equation} We endow $\mathcal{Y}^s$ with the norm defined by \begin{equation} \norm{(f,g,h,k)}_{\mathcal{Y}^s}^2 = \norm{f}_{H^s}^2 + \norm{g}_{H^{s+1}}^2 + \norm{h}_{H^{s+3/2}}^2 + \norm{k}_{H^{s+1/2}}^2 + \snorm{h - \int_0^b g(\cdot,x_n) dx_n}_{\dot{H}^{-1}}^2, \end{equation} which clearly makes $\mathcal{Y}^s$ into a Hilbert space (with the obvious inner-product associated to the norm). Similarly, for $s \ge 0$ we define the subspace \begin{equation}\label{Zs_def} \mathcal{Z}^s = \{(f,g,h,k) \in \mathcal{Y}^s \;\vert\; \eqref{compatibility_condition} \text{ holds for every } \psi \in H^{s+1/2}(\Sigma_b) \}. \end{equation} The topology of $\mathcal{Y}^s$ guarantees that $\mathcal{Z}^s$ is a closed subspace, and so $\mathcal{Z}^s$ is a Hilbert space when endowed with the inner-product from $\mathcal{Y}^s$. Next we establish the main result of this section, which shows that a necessary and sufficient condition for the existence of a solution to \eqref{problem_gamma_stokes_overdet} is that the $f$, $g$, $k$, $h$ satisfy the compatibility conditions \eqref{H minus 1} and \eqref{compatibility_condition} for every $\psi\in H^{s+1/2}(\Sigma)$. \begin{theorem}\label{iso_overdetermined} Let $\gamma \in \mathbb{R}$, $s \ge 0$, and $\mathcal{Z}^s$ be the Hilbert space defined in \eqref{Zs_def}. Then the bounded linear operator $\Psi_\gamma : {_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})\times H^{s+1}(\Omega) \rightarrow \mathcal{Z}^{s}$ given by \begin{equation} \Psi_\gamma(u,p) = (\diverge{S(p,u)} - \gamma \partial_1 u, \diverge{u}, \left. u_{n}\right\vert_{\Sigma_b}, \left. S(p,u)e_{n}\right\vert_{\Sigma_b}) \end{equation} is an isomorphism. \end{theorem} \begin{proof} First note that in light of Theorems \ref{cc_divergence} and \ref{cc_over-det}, the map $\Psi_\gamma$ takes values in $\mathcal{Z}^s$ and is thus well-defined. It is clearly a bounded linear operator. The injectivity of $\Psi_\gamma$ follows from Theorem \ref{iso_gamma_stokes}. To prove that $\Psi_\gamma$ is surjective let $(f,g,h,k)\in \mathcal{Z}^{s}$. Using $f$, $g$, and $k$ in Theorem \ref{iso_gamma_stokes}, we find the unique solution $u\in{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$ and $p\in H^{s+1}(\Omega)$ to \eqref{problem_gamma_stokes_stress}. Given $\psi\in H^{s+1/2}(\Sigma)$, let $v\in{_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})$ and $q\in H^{s+1}(\Omega)$ be the unique solution to \eqref{problem_adjoint} (the existence of which is again guaranteed by Theorem \ref{iso_gamma_stokes}). Applying Theorem \ref{cc_over-det} and using the fact that $(f,g,h,k)$ satisfy the compatibility condition \eqref{compatibility_condition}, we then find that \begin{equation} \int_{\Sigma_b}u_{n}\psi=-\int_{\Omega}(f\cdot v-gq) + \int_{\Sigma_b}k\cdot v=\int_{\Sigma_b}h\psi. \end{equation} Then $\int_{\Sigma_b}(u_{n}-h)\psi=0$ for all $\psi\in H^{s+1/2}(\Sigma_b)$, which implies that $u_{n}=h$ on $\Sigma_b$. 
Hence $\Psi_\gamma$ is surjective. \end{proof} \section{Fourier analysis}\label{sec_fourier} In this section we consider the horizontal Fourier transform (as defined in Section \ref{sec_notation}) of the linear problem \eqref{problem_gamma_stokes_stress}, where $f\in H^{s}(\Omega;\mathbb{R}^{n})$, $g\in H^{s+1}(\Omega)$, and $k\in H^{s+1/2}(\Sigma_b;\mathbb{R}^{n})$. Note that the boundary condition $S(p,u)e_{n}=k$ on $\Sigma_b$ may be decomposed into horizontal and vertical components: $-\partial_n u' - \nabla' u_n = k'$ and $p - 2 \partial_n u_n = k_n.$ Applying the horizontal Fourier transform to \eqref{problem_gamma_stokes_stress} then yields the following ODE boundary value problem for $\hat{u}(\xi,\cdot) \in H^2((0,b);\mathbb{C}^n)$ and $\hat{p}(\xi,\cdot) \in H^1((0,b);\mathbb{C})$: \begin{equation}\label{fourier system} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \hat{u}' + 2\pi i\xi\hat{p}- 2\pi i\xi_1 \gamma\hat{u}' =\hat{f}'+2\pi i \xi \hat{g} & \text{in } (0,b) \\ \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \hat{u}_{n}+\partial_{n} \hat{p} - 2 \pi i \xi_1 \gamma \hat{u}_{n} = \hat{f}_{n}+\partial_{n}\hat{g} & \text{in } (0,b)\\ 2\pi i\xi \cdot \hat{u}'+\partial_{n}\hat{u}_{n}=\hat{g} & \text{in } (0,b)\\ -\partial_{n}\hat{u}' -2\pi i\xi \hat{u}_{n} = \hat{k}',\quad \hat{p} - 2\partial_{n}\hat{u}_{n}=\hat{k}_{n} & \text{for }x_{n}=b\\ \hat{u}=0 & \text{for }x_{n}=0. \end{cases} \end{equation} \subsection{Generalities about the ODE system \eqref{fourier system}} We begin our discussion of the ODE system \eqref{fourier system} by deriving an ODE variant of \eqref{weak_solution} and proving uniqueness of solutions. \begin{proposition}\label{ODE_int_unique} Suppose that $F \in L^2((0,b);\mathbb{C}^n)$, $G \in H^1((0,b);\mathbb{C})$, and $K \in \mathbb{C}^n$. Then the following hold. \begin{enumerate} \item If $w \in H^2((0,b);\mathbb{C}^n)$ and $q \in H^1((0,b);\mathbb{C})$ satisfy \begin{equation}\label{ODE_int_unique_01} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) w' + 2\pi i\xi q- 2\pi i\xi_1 \gamma w' = F'+2\pi i \xi G & \text{in } (0,b)\\ \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) w_{n} + \partial_{n}q - 2 \pi i \xi_1 \gamma w_{n} = F_{n}+\partial_{n} G & \text{in } (0,b)\\ 2\pi i\xi \cdot w'+\partial_{n} w_{n}= G & \text{in } (0,b) \\ -\partial_{n} w' -2\pi i\xi w_{n} = K',\quad q-2\partial_{n} w_{n}=K_{n}, & \text{for }x_{n}=b \\ w=0 & \text{for }x_{n}=0, \end{cases} \end{equation} then for $v \in H^1((0,b);\mathbb{C}^n)$ satisfying $v(0)=0$ we have that \begin{multline}\label{ODE_int_unique_02} - K \cdot \overline{v(b)} + \int_0^b F \cdot \overline{v} + q \overline{\left(2\pi i \xi \cdot v' + \partial_n v_n \right)} = \int_0^b -\gamma 2\pi i \xi_1 w \cdot \overline{v} + 2 \partial_n w_n \overline{\partial_n v_n} +(\partial_n w' + 2\pi i \xi w_n) \cdot \overline{(\partial_n v' + 2\pi i \xi v_n) } \\ + \frac{1}{2} \int_0^b (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) : \overline{(2\pi i \xi \otimes v' + v' \otimes 2\pi i \xi)}. \end{multline} \item There exists at most one pair $(w,q) \in H^2((0,b);\mathbb{C}^n) \times H^1((0,b);\mathbb{C})$ solving \eqref{ODE_int_unique_01}. 
\end{enumerate} \end{proposition} \begin{proof} Using the third equation in \eqref{ODE_int_unique_01}, we compute \begin{multline} (-\partial_n^2 + 4 \pi^2 \abs{\xi}^2) w' + 2\pi i \xi q - 2\pi i \xi G = (-\partial_n^2 + 4 \pi^2 \abs{\xi}^2) w' + 2\pi i \xi q - 2\pi i \xi (2\pi i \xi \cdot w' + \partial_n w_n) \\ = 2\pi i \xi q - (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) 2\pi i \xi - \partial_n (\partial_n w' + 2\pi i \xi w_n) \end{multline} and \begin{multline} (-\partial_n^2 + 4 \pi^2 \abs{\xi}^2) w_n + \partial_n q - \partial_n G = (-\partial_n^2 + 4 \pi^2 \abs{\xi}^2) w_n + \partial_n q - \partial_n (2\pi i \xi\cdot w' + \partial_n w_n) \\ = -2\pi i \xi \cdot (\partial_n w' + 2\pi i \xi w_n) + \partial_n(q-2\partial_n w_n). \end{multline} Using these and the first two equations of \eqref{ODE_int_unique_01}, we then find that \begin{equation} \int_0^b F' \cdot \overline{v'} + \gamma 2\pi i \xi_1 w'\cdot \overline{v'} = \int_0^b -q \overline{2\pi i \xi \cdot v'} + (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) : \overline{ v' \otimes 2 \pi i \xi } - \partial_n(\partial_n w' + 2\pi i \xi w_n) \cdot \overline{v'} \end{equation} and \begin{equation} \int_0^b F_n \overline{v_n} + \gamma 2\pi i \xi_1 w_n \overline{v_n} = \int_0^b (\partial_n w' + 2\pi i \xi w_n) \cdot \overline{2 \pi i \xi v_n} + \partial_n (q-2\partial_n w) \overline{v_n}. \end{equation} We then integrate by parts and use the boundary conditions in \eqref{ODE_int_unique_01} to see that \begin{equation} - \int_0^b \partial_n(\partial_n w' + 2\pi i \xi w_n) \cdot \overline{v'} = K' \cdot \overline{v'(b)} + \int_0^b (\partial_n w' + 2\pi i \xi w_n) \cdot \overline{\partial_n v'} \end{equation} and \begin{equation} \int_0^b \partial_n (q-2\partial_n w) \overline{v_n}= K_n \overline{v_n}(b) - \int_0^b (q-2\partial_n w) \overline{\partial_n v_n}. \end{equation} Combining these then shows that \begin{multline} - K \cdot \overline{v(b)} + \int_0^b F \cdot \overline{v} + q \overline{\left(2\pi i \xi \cdot v' + \partial_n v_n \right)} \\ = \int_0^b -\gamma 2\pi i \xi_1 w \cdot \overline{v} + 2 \partial_n w_n \overline{\partial_n v_n} +(\partial_n w' + 2\pi i \xi w_n) \cdot \overline{(\partial_n v' + 2\pi i \xi v_n) } \\ + \int_0^b (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) : \overline{v' \otimes 2\pi i \xi}, \end{multline} and we conclude the proof of the first item by using the symmetry of $(2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi)$ to rewrite \begin{equation} (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) : \overline{v' \otimes 2\pi i \xi} = \frac{1}{2} (2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi) : \overline{(2\pi i \xi \otimes v' + v' \otimes 2\pi i \xi)}. \end{equation} We now prove the second item. If $w^j \in H^2((0,b);\mathbb{C}^n)$ and $q^j \in H^1((0,b);\mathbb{C})$ for $j=1,2$ solve \eqref{ODE_int_unique_01}, then $w = w^1-w^2 \in H^2((0,b);\mathbb{C}^n)$ and $q = q^1 -q^2 \in H^1((0,b);\mathbb{C})$ solve \eqref{ODE_int_unique_01} with $F=0$, $G=0$, $K=0$. The first item with $v=w$ then implies that \begin{equation} \int_0^b -\gamma 2\pi i \xi_1 \abs{w}^2 + 2 \abs{\partial_n w_n}^2 + \abs{\partial_n w' + 2\pi i \xi w_n}^2 + \frac{1}{2} \abs{2\pi i \xi \otimes w' + w' \otimes 2\pi i \xi}^2 =0. \end{equation} Taking the real part of this identity then shows that $\partial_n w_n =0$ and $\partial_n w' + 2\pi i \xi w_n =0$ in $(0,b)$. Due to the boundary condition $w_n(0)=0$, we then have that $w_n=0$, which then implies that $\partial_n w' =0$ and hence that $w'=0$ since $w'(0)=0$. 
The second and fifth equations in \eqref{ODE_int_unique_01} then require that $\partial_n q =0$ and $q(b)=0$, which imply that $q=0$. Hence $w^1=w^2$ and $q^1=q^2$, which proves the second item. \end{proof} In order to analyze the system \eqref{fourier system} it is convenient to decompose it into a pair of decoupled sub-systems. We present this decoupling now. In the following result we suppress the functional dependence on $\xi$ for the sake of brevity, i.e. we write simply $\hat{u}(x_n)$ in place of $\hat{u}(\xi,x_n)$, etc. \begin{proposition}\label{ODE_equivalence_full} Suppose that $\hat{f} \in L^2((0,b);\mathbb{C}^n)$, $\hat{g} \in H^1((0,b);\mathbb{C})$ and $\hat{k} \in \mathbb{C}^n$. Further suppose that $\hat{u} \in H^2((0,b);\mathbb{C}^n)$, $\hat{p} \in H^1((0,b);\mathbb{C})$, $\varphi,\psi \in H^2((0,b);\mathbb{C})$, $q \in H^1((0,b);\mathbb{C})$, and $\vartheta \in H^2((0,b);\mathbb{C}^{n-1})$. Then the following are equivalent for every $\xi \in \mathbb{R}^{n-1} \backslash \{0\}$. \begin{enumerate} \item $\hat{p},\hat{u}$ solve \eqref{fourier system}. \item We have that \begin{equation}\label{ODE_equivalence_0} \hat{p} = q, \; \hat{u}' = -i \varphi \frac{\xi}{\abs{\xi}} + \vartheta, \text{ and } \hat{u}_n = \psi, \end{equation} $\varphi,\psi,q$ solve \begin{equation}\label{phi_psi_system} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \varphi - 2\pi \abs{\xi} q- 2\pi i\xi_1 \gamma \varphi =i \hat{f}'\cdot \xi/\abs{\xi} - 2\pi \abs{\xi} \hat{g} & \text{in } (0,b) \\ \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \psi + \partial_n q - 2 \pi i \xi_1 \gamma \psi = \hat{f}_{n} + \partial_{n} \hat{g} & \text{in } (0,b) \\ 2\pi \abs{\xi} \varphi +\partial_{n} \psi =\hat{g} & \text{in } (0,b) \\ -\partial_{n} \varphi + 2\pi \abs{\xi} \psi = i\hat{k}'\cdot \xi/\abs{\xi} ,\quad q-2\
\partial_{n} \psi =\hat{k}_{n} & \text{for }x_{n}=b \\ \varphi = \psi =0 & \text{for }x_{n}=0, \end{cases} \end{equation} and $\vartheta$ solves \begin{equation}\label{theta_system} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \vartheta - 2\pi i\xi_1 \gamma \vartheta = (1-\xi \otimes \xi/ \abs{\xi}^2) \hat{f}' & \text{in } (0,b) \\ -\partial_{n} \vartheta = (1-\xi \otimes \xi/ \abs{\xi}^2) \hat{k}' & \text{for }x_{n}=b \\ \vartheta = 0 & \text{for }x_{n}=0, \end{cases} \end{equation} which in particular requires that $\vartheta \cdot \xi =0$ on $(0,b)$. \end{enumerate} In either case (and hence both), the solutions are unique. \end{proposition} \begin{proof} First note that if $\vartheta$ solves \eqref{theta_system}, then taking the dot product with $\xi$ reveals that $\chi := \xi \cdot \vartheta \in H^2((0,b);\mathbb{C})$ solves \begin{equation} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \chi - 2\pi i\xi_1 \gamma \chi = 0 & \text{in } (0,b) \\ -\partial_{n} \chi = 0 & \text{for }x_{n}=b \\ \chi = 0 & \text{for }x_{n}=0. \end{cases} \end{equation} We then multiply the first equation by $\bar{\chi}$ and integrate by parts over $(0,b)$ to conclude that \begin{equation} \int_0^b \abs{\partial_n \chi}^2 + (4 \pi^2 \abs{\xi}^2 - 2\pi i \xi_1 \gamma) \abs{\chi}^2 =0. \end{equation} Taking the real part of this equation then shows that $\chi =0$ on $(0,b)$, and hence $\vartheta \cdot \xi =0$. Now suppose $\hat{p},\hat{u}$ solve \eqref{fourier system}. Then we define $q = \hat{p}$, $\varphi = i \hat{u}' \cdot \xi/\abs{\xi}$, $\psi = \hat{u}_n$, and $\vartheta = (1-\xi\otimes \xi/\abs{\xi}^2) \hat{u}'$, which implies \eqref{ODE_equivalence_0}. Then \eqref{phi_psi_system} follows from \eqref{fourier system} by taking the dot product with $i \xi /\abs{\xi}$, and \eqref{theta_system} follows by multiplying by the projector matrix $(1-\xi \otimes \xi/ \abs{\xi}^2)$. On the other hand, if $\varphi,\psi,q$ solve \eqref{phi_psi_system} and $\vartheta$ solves \eqref{theta_system}, then we define $\hat{u}$ and $\hat{p}$ via \eqref{ODE_equivalence_0}. We then multiply the first and fourth equations in \eqref{phi_psi_system} by $-i \xi/\abs{\xi}$ and combine with \eqref{theta_system} and the remaining equations in \eqref{phi_psi_system} to obtain \eqref{fourier system}. The uniqueness claim follows from the uniqueness result of Proposition \ref{ODE_int_unique}. \end{proof} It is also convenient to reformulate the coupled system \eqref{phi_psi_system} as a first-order equation. We present this equivalent formulation now. Note that in this result we present the system with slightly more general data and we allow for $\xi =0$ as well. \begin{proposition}\label{ODE_equivalence_reduced} Suppose that $F \in L^2((0,b);\mathbb{C}^2)$, $G \in H^1((0,b);\mathbb{C})$, and $K \in \mathbb{C}^2$. Further suppose that $y \in H^1((0,b);\mathbb{C}^4)$, $\varphi,\psi \in H^2((0,b);\mathbb{C})$, $q \in H^1((0,b);\mathbb{C})$. Then the following are equivalent for every $\xi \in \mathbb{R}^{n-1}$. 
\begin{enumerate} \item $\varphi,\psi,q$ solve the second-order boundary value problem \begin{equation}\label{general_phi_psi} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \varphi - 2\pi \abs{\xi} q- 2\pi i\xi_1 \gamma \varphi =F_1 - 2\pi \abs{\xi} G & \text{in } (0,b) \\ \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) \psi + \partial_n q - 2 \pi i \xi_1 \gamma \psi = F_2 + \partial_{n} G & \text{in } (0,b) \\ 2\pi \abs{\xi} \varphi +\partial_{n} \psi =G & \text{in }(0,b) \\ -\partial_{n} \varphi + 2\pi \abs{\xi} \psi = K_1 ,\quad q-2\partial_{n} \psi =K_2 & \text{for }x_{n}=b \\ \varphi = \psi =0 & \text{for }x_{n}=0. \end{cases} \end{equation} \item $y=(\varphi,\psi,q,\partial_n \varphi)$ and $y$ solves the first-order two-point boundary value problem \begin{equation}\label{problem two point} \begin{cases} \partial_n y =A y +z \text{ in } (0,b) \\ M y(0)+N y(b)=d, \end{cases} \end{equation} \end{enumerate} where $A \in \mathbb{C}^{4 \times 4}$ is given by \begin{equation}\label{matrix A} A= \begin{pmatrix} 0 & 0 & 0 & 1\\ -2\pi \abs{\xi} & 0 & 0 & 0\\ 0 & -(4\pi^2 \abs{\xi}^2- i 2\pi \xi_1 \gamma ) & 0 & -2\pi \abs{\xi} \\ 4\pi^2 \abs{\xi}^2- i 2\pi \xi_1 \gamma & 0 & -2\pi \abs{\xi} & 0, \end{pmatrix}, \end{equation} $z \in L^2((0,b);\mathbb{C}^4)$ and $d \in \mathbb{C}^4$ are given by \begin{equation} z(x_n)= \begin{pmatrix} 0\\ G(x_n)\\ F_2(x_n)+2\partial_{n}G(x_n)\\ -F_1(x_n) + 2 \pi \abs{\xi} G(x_n), \end{pmatrix} \text{ and } d= \begin{pmatrix} 0\\ 0\\ K_1 \\ K_2 + 2G(b) \end{pmatrix}, \end{equation} and $M,N \in \mathbb{C}^{4 \times 4}$ are given by \begin{equation}\label{matrices M and N} M = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} \text{ and } N= \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 2\pi \abs{\xi} & 0 & -1\\ 4\pi \abs{\xi} & 0 & 1 & 0 \end{pmatrix}. \end{equation} \end{proposition} \begin{proof} Suppose that $\varphi,\psi$ and $q$ solve \eqref{general_phi_psi} and let $y = (\varphi,\psi,q,\partial_n \varphi)$. Note that $y_1,y_2 \in H^2((0,b);\mathbb{C})$. We differentiate the third equation to obtain the equation \begin{equation}\label{ODE_equivalence_reduced_2} \partial_n^2 y_2 = \partial_n^2 \psi = \partial_n G - 2\pi \abs{\xi} \partial_n \varphi = \partial_n G - 2\pi \abs{\xi} y_4. \end{equation} From this we readily deduce that $y$ solves the system \begin{equation}\label{ODE_equivalence_reduced_1} \begin{cases} \partial_n y_{1}=y_{4} & \text{in } (0,b) \\ \partial_n y_{2}=-2\pi \abs{\xi} y_{1}+G & \text{in } (0,b) \\ \partial_n y_{3}=-(4\pi^2 \abs{\xi}^2- 2\pi i \xi_1 \gamma )y_{2}- 2\pi \abs{\xi} y_{4} + F_2 + 2\partial_{n} G & \text{in } (0,b) \\ \partial_n y_{4}=(4\pi^2 \abs{\xi}^2- 2\pi i \xi_1 \gamma )y_{1}- 2\pi \abs{\xi} y_{3}- F_1 +2\pi \abs{\xi} G & \text{in } (0,b) \\ -y_{4} + 2\pi \abs{\xi} y_{2} = K_1 ,\quad y_{3}+4\pi \abs{\xi} y_{1}=K_2 + 2 G & \text{for }x_{n}=b \\ y_{1}=0,\quad y_{2}=0 & \text{for }x_{n}=0, \end{cases} \end{equation} which may be compactly rewritten as \eqref{problem two point}. Now suppose that $y$ solves \eqref{problem two point}, which is equivalent to \eqref{ODE_equivalence_reduced_1}. Define $\varphi = y_1$, $\psi = y_2$, and $q = y_3$, all of which then belong to $H^1((0,b);\mathbb{C})$. However, $\partial_n \varphi = \partial_n y_1 = y_4 \in H^1((0,b);\mathbb{C})$ and $\partial_n \psi = \partial_n y_2 = G -2 \pi \abs{\xi} \varphi \in H^1((0,b);\mathbb{C})$, so $\varphi,\psi \in H^2((0,b);\mathbb{C})$. 
In turn this implies that we may differentiate the second equation in \eqref{ODE_equivalence_reduced_1} to see that \eqref{ODE_equivalence_reduced_2} holds. Then the second equation in \eqref{ODE_equivalence_reduced_1} corresponds to the third in \eqref{general_phi_psi}, the fourth in \eqref{ODE_equivalence_reduced_1} corresponds to the first in \eqref{general_phi_psi}, and the third in \eqref{ODE_equivalence_reduced_1} corresponds to the second in \eqref{general_phi_psi} in light of the identity \eqref{ODE_equivalence_reduced_2}. The equivalence of the boundary conditions follows similarly. \end{proof} Consider the matrix $A\in \mathbb{C}^{4 \times 4}$ given by \eqref{matrix A}. Given $z \in L^2((0,b);\mathbb{C}^4)$, the unique solution $y\in H^1((0,b);\mathbb{C}^4)$ to the ODE \begin{equation} \begin{cases} \partial_n y = A y + z &\text{in }(0,b) \\ y(0) = y_0 \end{cases} \end{equation} is given by \begin{equation}\label{ODE_general_soln} y(x_n) = \exp(x_n A) y_0 + \int_0^{x_n} \exp((x_n-t)A) z(t) dt. \end{equation} Let $M,N \in \mathbb{C}^{4 \times 4}$ be given by \eqref{matrices M and N} and define the boundary matrix \begin{equation}\label{boundary matrix} B:=M+N\exp(bA) \in \mathbb{C}^{4 \times 4}. \end{equation} Thus the solvability of the two-point problem \eqref{problem two point} reduces to solving for $y_0 \in \mathbb{C}^4$ such that $d = M y_0 + N y(b)$, which in light of \eqref{ODE_general_soln} is equivalent to \begin{equation}\label{B y0} By_{0} = M y_{0} + N \exp(b A)y_{0} = d -N\int_{0}^{b}\exp((b-t)A)z(t)dt . \end{equation} Our next result establishes that $B$ is invertible for every $\xi \in \mathbb{R}^{n-1}$, which then allows us to make various conclusions about \eqref{problem two point}. An interesting feature of our approach is that we establish the invertibility of $B$ by using the isomorphism from Theorem \ref{iso_gamma_stokes} rather than through direct computation. We do this because although $\det{B}$ can be computed by hand (and we will do so later in Section \ref{sec_infty_asympt}), the resulting expression is quite cumbersome, and it is rather tricky to prove directly that it never vanishes. \begin{theorem}\label{ODE_B_inversion} Let $\xi \in \mathbb{R}^{n-1}$ and $A,M,N,B \in \mathbb{C}^{4 \times 4}$ be given by \eqref{matrix A}, \eqref{matrices M and N}, and \eqref{boundary matrix}, respectively. Then the following hold. \begin{enumerate} \item The boundary matrix $B$ has the block structure \begin{equation} B= \begin{pmatrix} I_{2\times 2} & 0_{2\times 2}\\ B_{3} & B_{4} \end{pmatrix} \end{equation} where $B_3,B_4 \in \mathbb{C}^{2 \times 2}$ are given by \begin{equation} B_3 = \begin{pmatrix} 2\pi \abs{\xi} \exp(bA)_{21} - \exp(bA)_{41} & 2\pi \abs{\xi} \exp(bA)_{22} - \exp(bA)_{42} \\ 4 \pi \abs{\xi} \exp(bA)_{11} + \exp(bA)_{31} & 4 \pi \abs{\xi} \exp(bA)_{12} + \exp(bA)_{32} \end{pmatrix} \end{equation} and \begin{equation} B_4 = \begin{pmatrix} 2\pi \abs{\xi} \exp(bA)_{23} - \exp(bA)_{43} & 2\pi \abs{\xi} \exp(bA)_{24} - \exp(bA)_{44} \\ 4\pi \abs{\xi} \exp(bA)_{13} + \exp(bA)_{33} & 4\pi \abs{\xi} \exp(bA)_{14} + \exp(bA)_{34} \end{pmatrix}. \end{equation} \item $B_4 \in \mathbb{C}^{2 \times 2}$ is invertible. \item $B$ is invertible, and we have the identities $\det{B} = \det{B_4}$ and \begin{equation} B^{-1} = \begin{pmatrix} I_{2 \times 2} & 0_{2 \times 2} \\ - B_4^{-1} B_3 & B_4^{-1} \end{pmatrix}. 
\end{equation} \item For every $z \in L^2((0,b);\mathbb{C}^4)$ and $d \in \mathbb{C}^4$ there exists a unique solution $y \in H^1((0,b);\mathbb{C}^4)$ to the problem \begin{equation} \begin{cases} \partial_n y = Ay + z &\text{in }(0,b) \\ My(0) + N y(b) = d, \end{cases} \end{equation} which is given by \begin{equation}\label{general solution} y(x_n) = \exp(x_n A)B^{-1} \left(d - N \int_{0}^{b}\exp((b-t)A)z(t)dt \right) + \int_0^{x_n} \exp((x_n-t)A) z(t) dt. \end{equation} \end{enumerate} \end{theorem} \begin{proof} The first item follows from a direct calculation, using the block structure of $M,N$: \begin{equation}\label{MN_block_form} M= \begin{pmatrix} I_{2\times 2} & 0_{2\times 2}\\ 0_{2\times 2} & 0_{2\times 2}% \end{pmatrix}, \text{ and } N= \begin{pmatrix} 0_{2\times 2} & 0_{2\times 2}\\ N_{3} & N_{4} \end{pmatrix} \end{equation} for $N_3,N_4 \in \mathbb{C}^{2 \times 2}$. The third item follows from the second and a simple calculation. The fourth item then follows from the third item, combined with \eqref{ODE_general_soln} and \eqref{B y0}. It remains only to prove the second item. Suppose initially that $\xi =0$. In this case we may readily compute \begin{equation} B_4 = N_4 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \end{equation} to deduce that $B_4$ is invertible. In the case $\xi \in \mathbb{R}^{n-1} \backslash \{0\}$ the value of $\det{B_4}$ can be computed explicitly from the first item, but the resulting expression is rather complicated. To avoid working directly with $\det{B_4}$ we will instead employ Theorem \ref{iso_gamma_stokes} to show that $B_4$ is invertible. Let $m \in \mathbb{N}$ and pick a radial function $\zeta \in C^\infty_c(\mathbb{R}^{n-1})$ such that $\zeta =1$ on $B(0,2^m) \backslash B[0,2^{-m}]$. For $j =1,2$ let $k^1,k^2 \in \mathscr{S}(\mathbb{R}^{n-1};\mathbb{C}^n)$ be given via \begin{equation}\label{ODE_B_inversion_1} \hat{k}^1(\xi) = (-i \zeta(\xi) \xi / \abs{\xi},0) \text{ and } \hat{k}^2(\xi) = \zeta(\xi) e_n. \end{equation} Then by construction $\overline{\hat{k}^j(\xi)} = \hat{k}^j(-\xi)$, and so Lemma \ref{tempered_real_lemma} shows that $k^j$ actually takes values in $\mathbb{R}^n$. We may then use $f=0$, $g =0$, and $k = k^j \in \bigcap\limits_{s>0} H^s(\Sigma_b;\mathbb{R}^n)$ for $j=1,2$ in Theorem \ref{iso_gamma_stokes} to produce $(u^j,p^j) \in \bigcap\limits_{s >0} {_{0}}H^{s+2}(\Omega;\mathbb{R}^{n})\times H^{s+1}(\Omega)$ solving \eqref{problem_gamma_stokes_stress}. For $\xi \in \mathbb{R}^{n-1}\backslash \{0\}$ define $y^j(\xi,\cdot) \in C^\infty([0,b];\mathbb{C}^4)$ via \begin{equation} y^j(\xi,x_n) = (i \hat{u}^j(\xi,x_n) \cdot \xi /\abs{\xi}, \hat{u}^j_n(\xi,x_n), \hat{p}^j(\xi,x_n), i \partial_n \hat{u}^j(\xi,x_n) ). \end{equation} Since $(\hat{u}^j,\hat{p}^j)$ satisfy \eqref{fourier system}, Propositions \ref{ODE_equivalence_full} and \ref{ODE_equivalence_reduced}, together with \eqref{B y0} and \eqref{ODE_B_inversion_1} and the fact that $z=0$, imply that if $2^{-m} < \abs{\xi} < 2^m$ then $B y^j(\xi,0) = e_{2+j}$. Since $y^j(\xi,0) \cdot e_1 = y^j(\xi,0) \cdot e_2 =0$ for all $\xi \neq 0$, we may write $y^j(\xi,0) = (0,0,\nu^j(\xi))$ for $\nu^j(\xi) \in \mathbb{C}^2$. Then the identity $B y^j(\xi,0) = e_{2+j}$ is equivalent to $B_4 \nu^j(\xi) = e_j$ for $j =1,2$, and we deduce that for $2^{-m} < \abs{\xi} < 2^m$ the matrix $B_4 \in \mathbb{C}^{2\times 2}$ has rank two and is thus invertible. 
Since $m \in \mathbb{N}$ was arbitrary we then conclude that $B_4$ is invertible for all $\xi \in \mathbb{R}^{n-1} \backslash \{0\}$, which concludes the proof of the second item. \end{proof} \subsection{Some special functions} With Theorem \ref{ODE_B_inversion} in hand we are now in a position to introduce some functions that will play a fundamental role in our subsequent analysis. For $\xi \in \mathbb{R}^{n-1}$ and $\gamma \in \mathbb{R}$ write $A(\xi,\gamma), B(\xi,\gamma) \in \mathbb{C}^{4 \times 4}$ for the matrices defined by \eqref{matrix A} and \eqref{boundary matrix}, respectively. In light of Theorem \ref{ODE_B_inversion} we may then define $Q: \mathbb{R}^{n-1} \times [0,b] \times \mathbb{R} \to \mathbb{C}$, $V : \mathbb{R}^{n-1} \times [0,b] \times \mathbb{R} \to \mathbb{C}^n$, and $m: \mathbb{R}^{n-1}\times \mathbb{R} \to \mathbb{C}$ via \begin{equation}\label{QVm_def} \begin{split} Q(\xi,x_n,\gamma) &= \exp(x_n A(\xi,\gamma)) B^{-1}(\xi,\gamma) e_4 \cdot e_3 \in \mathbb{C} \\ V'(\xi,x_n,\gamma) &= -i \left( \exp(x_n A(\xi,\gamma)) B^{-1}(\xi,\gamma) e_4 \cdot e_1 \right) \frac{\xi}{\abs{\xi}} \in \mathbb{C}^{n-1} \text{ for } \xi \neq 0 \text{ and } V'(0,x_n,\gamma) = 0 \in \mathbb{C}^{n-1} \\ V_n(\xi,x_n,\gamma) &= \exp(x_n A(\xi,\gamma)) B^{-1}(\xi,\gamma) e_4 \cdot e_2 \in \mathbb{C} \\ m(\xi,\gamma) &= V_n(\xi,b,\gamma) = \exp(b A(\xi,\gamma)) B^{-1}(\xi,\gamma) e_4 \cdot e_2 \in \mathbb{C}. \end{split} \end{equation} The following result records some essential properties of these functions. \begin{theorem}\label{QVm_properties} Let $Q: \mathbb{R}^{n-1} \times [0,b] \times \mathbb{R} \to \mathbb{C}$, $V : \mathbb{R}^{n-1} \times [0,b] \times \mathbb{R} \to \mathbb{C}^n$, and $m: \mathbb{R}^{n-1}\times \mathbb{R} \to \mathbb{C}$ be as defined in \eqref{QVm_def}. Then the following hold. \begin{enumerate} \item $Q$, $V$, and $m$ are continuous, $Q$ and $V$ are smooth on $(\mathbb{R}^{n-1} \backslash \{0\}) \times [0,b] \times \mathbb{R}$, and $m$ is smooth on $(\mathbb{R}^{n-1} \backslash \{0\} )\times \mathbb{R}$. Also, for each $\xi \in \mathbb{R}^{n-1}$ we have that $Q(\xi,\cdot)$ and $V(\xi,\cdot)$ are smooth on $[0,b]$. \item $V(0,x_n,\gamma) =0$, $Q(0,x_n,\gamma) =1$, and $m(0,\gamma) =0$. \item For each $\xi \in \mathbb{R}^{n-1}$, $x_n \in [0,b]$, and $\gamma \in \mathbb{R}$ we have that $\overline{V(\xi,x_n,\gamma)} = V(-\xi,x_n,\gamma)$, $\overline{Q(\xi,x_n,\gamma)} = Q(-\xi,x_n,\gamma)$, and $\overline{m(\xi,\gamma)} = m(-\xi,\gamma)$. \item For each $\xi \in \mathbb{R}^{n-1}$ we have that $Q(\xi,\cdot,\gamma)$, $V(\xi,\cdot,\gamma)$ solve \begin{equation}\label{QVm_properties_0} \begin{cases} \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) V' + 2\pi i \xi Q - 2\pi i\xi_1 \gamma V' =0 & \text{in } (0,b) \\ \left( -\partial_{n}^{2}+4\pi^{2} \abs{\xi}^{2} \right) V_{n} + \partial_{n} Q - 2 \pi i \xi_1 \gamma V_{n} =0 & \text{in } (0,b) \\ 2\pi i\xi \cdot V' + \partial_{n}V_{n}=0 & \text{in } (0,b) \\ -\partial_{n} V' -2\pi i\xi V_{n} = 0,\quad Q - 2\partial_{n} V_{n}=1 & \text{for }x_{n}=b \\ V=0 & \text{for }x_{n}=0. \end{cases} \end{equation} \item If $(u,p) \in {_{0}}H^{2}(\Omega;\mathbb{R}^{n})\times H^{1}(\Omega)$ solve \eqref{problem_gamma_stokes_stress} with $f=0$, $g=0$, and $k = \zeta e_n$ for $\zeta \in H^{1/2}(\mathbb{R}^{n-1})$, then $\hat{u} = \hat{\zeta} V(\cdot,\cdot,\gamma)$ and $\hat{p} = \hat{\zeta} Q(\cdot,\cdot,\gamma)$. 
\item $\RE{m(\xi,\gamma)} \le 0$ for all $\xi \in \mathbb{R}^{n-1}$ and $\gamma \in \mathbb{R}$, and $\RE{m(\xi,\gamma)} =0$ if and only if $\xi =0$. \end{enumerate} \end{theorem} \begin{proof} Define $y : \mathbb{R}^{n-1} \times [0,b] \times \mathbb{R} \to \mathbb{C}^4$ via $y(\xi,x_n,\gamma) = \exp(x_n A(\xi,\gamma)) B^{-1}(\xi,\gamma) e_4$. Theorem \ref{ODE_B_inversion} shows that $y$ is continuous, smooth on $(\mathbb{R}^{n-1} \backslash \{0\}) \times [0,b] \times \mathbb{R}$, and that for $\xi$ fixed $y(\xi,\cdot,\cdot)$ is smooth on $[0,b] \times \mathbb{R}$. We have that $Q= y_3$, $V_n = y_2$, $m = y_2(\cdot,b,\cdot)$, and for $\xi \neq 0$, $V'(\xi,x_n,\gamma) = -i y_1(\xi,x_n,\gamma) \xi / \abs{\xi}$. Thus, to complete the proof of the first two items it suffices to notice that \begin{equation} \lim_{(\xi,t,\gamma) \to (0,x_n,\gamma_0)} y(\xi,t,\gamma) = \exp(x_n A(0,\gamma_0)) B^{-1}(0,\gamma_0) e_4 = \begin{pmatrix} 1 & 0 & -x_n & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 &-1 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \end{equation} and hence $y(\xi,t,\gamma) \to e_3 = y(0,x_n,\gamma_0)$ as $(\xi,t,\gamma) \to (0,x_n,\gamma_0)$. To prove the third item we note that $A(-\xi,\gamma) = \overline{A(\xi,\gamma)}$, and if we write $N(\xi) \in \mathbb{C}^{4 \times 4}$ to emphasize the $\xi$ dependence of the matrix defined in \eqref{matrices M and N}, then $N(-\xi) = N(\xi)$. From this we have that $B(-\xi,\gamma) = M + N(-\xi) \exp(x_n A(-\xi,\gamma)) = M + N(\xi) \exp(x_n \overline{A(\xi,\gamma)}) = \overline{B(\xi,\gamma)}$, and hence that $B^{-1}(-\xi,\gamma) = \overline{B^{-1}(\xi,\gamma)}$. Hence $\overline{y(\xi,x_n,\gamma)} = y(-\xi,x_n,\gamma)$ for all $\xi \in \mathbb{R}^{n-1}$, $x_n \in [0,b]$, and $\gamma \in \mathbb{R}$. The third item then follows directly from this and the definitions of $V,Q,$ and $m$ in terms of $y$. The fourth item follows immediately from Propositions \ref{ODE_equivalence_full} and \ref{ODE_equivalence_reduced} when $\xi \neq 0$ and from the second item and a trivial calculation when $\xi =0$. The fifth item follows from the fourth and Proposition \ref{ODE_equivalence_full}. We now turn to the proof of the sixth item. In light of the fourth item and Proposition \ref{ODE_int_unique} we have the identity \begin{multline} \int_0^b \left( -\gamma 2\pi i \xi_1 \abs{V(\xi,x_n,\gamma)}^2 + 2 \abs{\partial_n V_n(\xi,x_n,\gamma)}^2 + \abs{\partial_n V'(\xi,x_n,\gamma) + 2\pi i \xi V_n(\xi,x_n,\gamma)}^2 \right) dx_n \\ + \frac{1}{2}\int_0^b \abs{2\pi i \xi \otimes V'(\xi,x_n,\gamma) + V'(\xi,x_n,\gamma) \otimes 2\pi i \xi}^2 dx_n= -m(\xi,\gamma). \end{multline} Taking the real part of this identity yields \begin{multline} - \RE{m(\xi,\gamma)} = \int_0^b \left( 2 \abs{\partial_n V_n(\xi,x_n,\gamma)}^2 + \abs{\partial_n V'(\xi,x_n,\gamma) + 2\pi i \xi V_n(\xi,x_n,\gamma)}^2 \right) dx_n \\ + \frac{1}{2}\int_0^
B^{2}}\left(\frac{F'}{F}+2\frac{H'}{H}\right),\\\label{23} \psi_{22}&=&\frac{\ddot{f_R}}{F^{2}}-\frac{f''_R}{G^{2}}-\frac{\dot{f_R}}{F^{2}}\left(\frac{\dot{F}}{F}-\frac{\dot{G}}{G}-\frac{\dot{H}}{H}\right) -\frac{f'_R}{G^{2}}\left(\frac{F'}{F}-\frac{G'}{G}+\frac{H'}{H}\right). \end{eqnarray} Thorne \cite{s5} proposed that the total amount of energy in a cylindrical celestial object can be defined through the gravitational C-energy, which for the case under consideration takes the form \begin{eqnarray}\label{24} m(t,r)&=&\left\{\left(\frac{\dot{H}}{F}\right)^2-\left(\frac{H'}{G}\right)^2\right\}\frac{H}{2}+\frac{l}{8}. \end{eqnarray} Before moving on to further computations, it is worthwhile to fix some notation. The operators $D_T$ and $D_H$ represent the proper time and radial derivatives, respectively, and are defined as \begin{eqnarray}\label{25} &&D_T=\frac{1}{F}\frac{\partial}{\partial t},\quad D_H=\frac{1}{H'}\frac{\partial}{\partial r}, \end{eqnarray} whereas the relativistic velocity of the interior of the collapsing fluid is given by \begin{eqnarray}\label{26} &&U=D_TH=\frac{\dot{H}}{F}<0. \end{eqnarray} From Eq.$(\ref{24})$, we can obtain \begin{eqnarray}\label{27} &&\tilde{E}=\frac{H'}{G}=\sqrt{\frac{l}{4H}+U^2-\frac{2}{H}m(t,r)}. \end{eqnarray} Using the above equation together with Eq.$(\ref{17})$, we obtain \begin{eqnarray}\label{28} \tilde{E}\left(\sqrt{3}\frac{\sigma}{H}-\frac{1}{3}D_H(\Theta-\sqrt{3}\sigma)\right) &=& \frac{1}{2 f_R}\left(-q(1+f_T)+\frac{\psi_{01}}{FG}\right). \end{eqnarray} Eq.$(\ref{24})$ together with Eqs.$(\ref{16})-(\ref{19})$ and $(\ref{25})$ provides \begin{eqnarray}\label{29} D_Tm &=&\frac{H^2}{2f_R}\left\{-(1+f_T)\tilde{E}q-(1+f_T)UP_r+f_T\mu U+\frac{\tilde{E}}{FG}\pi_{01}-U\left(\Psi+\psi_{11}\right)\right\}, \end{eqnarray} whereas the radial derivative of the mass gives \begin{eqnarray}\label{30} D_Hm &=& \frac{H^2}{2f_R}\left\{\mu+\frac{U}{\tilde{E}}(1+f_T)q+\pi+\pi_{00}-\frac{U}{\tilde{E}}\frac{\psi_{01}}{FG}\right\}, \end{eqnarray} which further leads to \begin{eqnarray}\label{31} m &=&\frac{1}{2}\int_0^r \frac{H^2}{f_R}\left\{\mu+\frac{U}{\tilde{E}}(1+f_T)q+\psi+\psi_{00}-\frac{U}{\tilde{E}}\frac{\psi_{01}}{FG}\right\}H'dr. \end{eqnarray} It can also be written as \begin{eqnarray}\label{32} \frac{3m}{H^3} &=&\frac{3}{2H^3}\int_0^r \frac{H^2}{f_R}\left\{\mu+\frac{U}{\tilde{E}}(1+f_T)q+\psi+\psi_{00}-\frac{U}{\tilde{E}}\frac{\psi_{01}}{FG}\right\}H'dr. \end{eqnarray} \section{Weyl Tensor and Structure Scalars} In order to define the structure scalars, we first need the Weyl tensor, which has two parts, i.e., the electric and magnetic parts. The electric part is given by \begin{eqnarray}\label{w} E_{\alpha\beta} &=& C_{\alpha\mu\beta\nu}V^{\mu}V^{\nu}. \end{eqnarray} The non-trivial components of the electric part of the Weyl tensor are \begin{eqnarray}\label{E} E_{11} = \frac{2}{3}G^2\eta,\quad E_{22}= -\frac{1}{3}H^2\eta=E_{33}, \end{eqnarray} where \begin{eqnarray}\nonumber \eta &=& \frac{1}{2F^2}\left\{\frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}-\left(\frac{\dot{H}}{H} -\frac{\dot{G}}{G}\right)\left(\frac{\dot{F}}{F}+\frac{\dot{H}}{H}\right)\right\}\\\label{ee} &&+\frac{1}{2B^2}\left\{\frac{F''}{F}-\frac{H''}{H}+\left(\frac{G'}{G}+\frac{H'}{H}\right) \left(\frac{H'}{H}-\frac{F'}{F}\right)\right\}-\frac{1}{2H^2}.
\end{eqnarray} With the help of Eqs.$(\ref{24})$ and $(\ref{32})$, we can find the following expression for the above scalar \begin{eqnarray}\nonumber \eta &=& \frac{1}{2f_R}\left[\mu-(1+f_T)\Pi+\Psi+\psi_{00}-\psi_{11}+\psi_{22}\right] \\\label{ex}&&-\frac{3}{2H^3}\int_0^r \frac{H^2}{f_R}\left\{\mu+\frac{U}{E}(1+f_T)q-\Psi+\psi_{00} -\frac{U}{E}\frac{\psi_{01}}{FG}\right\}H'dr. \end{eqnarray} In terms of the unit four-velocity and the four-vector $\chi_\alpha$, the electric part $E_{\alpha\beta}$ can be written as \begin{eqnarray} E_{\alpha\beta} &=& \eta(\chi_\alpha\chi_\beta-\frac{1}{3}h_{\alpha\beta}). \end{eqnarray} Following Bel \cite{Bel} and Herrera et al. \cite{H1}-\cite{HRv}, we develop the formalism of structure scalars in $f(R,T)$ gravity and introduce two tensors, $Y_{\alpha\beta}$ and $X_{\alpha\beta}$. For this, we orthogonally decompose the Riemann curvature tensor and find that \begin{eqnarray}\label{xx} X_{\alpha\beta}&=& \frac{1}{3f_R}\left[\mu+\Psi+\psi_{00}\right]h_{\alpha\beta}-\frac{1}{2f_R}\left[(1+f_T)\Pi+\psi_{11}-\psi_{22}\right] \left(\chi_\alpha\chi_\beta-\frac{1}{3}h_{\alpha\beta}\right)-E_{\alpha\beta},\\\nonumber Y_{\alpha\beta}&=& \frac{1}{6f_R}\left[\mu+3f_T\mu+(1+f_T)(3P_r-2\Pi)+\Psi+\psi_{00}+\psi_{11}+2\psi_{22}\right]h_{\alpha\beta}-\frac{1}{2f_R}\left[(1+f_T)\Pi \right.\\\label{yy}&&+\left.\psi_{11}-\psi_{22}\right] \left(\chi_\alpha\chi_\beta-\frac{1}{3}h_{\alpha\beta}\right)+E_{\alpha\beta}. \end{eqnarray} For a detailed discussion of these quantities, one can see \cite{HRv}. These tensors can be written as combinations of the structure scalars ($X_T$, $X_{TF}$, $Y_T$ and $Y_{TF}$): \begin{eqnarray}\label{X} X_{\alpha\beta} &=& \frac{1}{3}X_T h_{\alpha\beta}+X_{TF}\left(\chi_\alpha \chi_\beta-\frac{1}{3}h_{\alpha\beta}\right),\\\label{Y} Y_{\alpha\beta} &=& \frac{1}{3}Y_T h_{\alpha\beta}+Y_{TF}\left(\chi_\alpha \chi_\beta-\frac{1}{3}h_{\alpha\beta}\right). \end{eqnarray} By making use of Eqs.$(\ref{16})$, $(\ref{18})$, $(\ref{19})$, $(\ref{24})$ and $(\ref{ex})$, we have the following expression \begin{eqnarray}\label{Y1} \frac{3}{H^3}\left(m-\frac{l}{8}\right)&=& \frac{1}{2f_R}\left(\mu+\psi-(1+f_T)\Pi+\psi_{00}-\pi_{11}+\psi_{22}\right)-\eta, \end{eqnarray} which makes it possible to obtain the following expression with the help of Eqs.(\ref{32}) and (\ref{Y1}) \begin{eqnarray}\nonumber Y_{TF} &=& \frac{1}{2f_R}\left\{\mu-2(1+f_T)\Pi+\pi+\pi_{00}-2\psi_{11}+2\pi_{22}\right\}-\frac{3}{2H^3}\int_0^r\frac{H^2}{f_R}\left\{ \mu+\psi+\psi_{00}\right.\\\label{Y}&&+\left.\frac{U}{E}q(1+f_T) -\frac{U}{E}\frac{\psi_{01}}{FG}\right\}H'dr+\frac{3l}{8H^3}, \end{eqnarray} whereas $X_{TF}$ takes the form \begin{eqnarray}\label{X} X_{TF} &=& -\frac{1}{2f_R}\left(\mu+\psi+\psi_{00}\right)+\frac{3}{2H^3}\int_0^r\frac{H^2}{f_R}\left\{ \mu+\psi+\psi_{00}+\frac{U}{E}q(1+f_T)-\frac{U}{E}\frac{\psi_{01}}{FG}\right\}H'dr+\frac{3l}{8H^3}. \end{eqnarray} Thus, we are now able to construct a differential equation which shows the relationship between the energy density inhomogeneity and the Weyl tensor.
\begin{eqnarray} \left( X_{TF}+\mu+\frac{1}{2f_{R}}(\mu+\psi+\psi_{00})\right)' &=&-3\frac{H'}{H}X_{TF}+\frac{(\Theta-\sigma)}{2f_R}\left(q(1+f_T)G+\frac{\psi_{01}}{G}\right). \end{eqnarray} If we set $X_{TF}=0$ in the absence of dark source terms and dissipation, then we have \begin{eqnarray}\label{t1} (\mu+\psi+\psi_{00})' &=& 0, \end{eqnarray} however, in the general dissipative case it takes the form \begin{eqnarray}\label{t2} (\mu+\psi+\psi_{00})' &=& \frac{(\Theta-\sigma)}{2f_R}\left(q(1+f_T)G+\frac{\psi_{01}}{G}\right). \end{eqnarray} This shows that $X_{TF}$ controls the energy density homogeneity, together with the dark source terms. \section{The Complexity Factor} The definition of a quantity measuring the complexity of a dynamical system is more general than that for a static one, as it involves two additional factors. In the static case, only the fluid parameters are involved, while in the non-static case the complexity of the structure of the system and of its patterns of evolution also contribute. For the static case, the definition is based on the assumption that a homogeneous energy density and isotropic pressure correspond to the simplest system. However, for the latter case, the simplest possible patterns of evolution are also considered in order to measure the degree of complexity of the evolution. Recently, we have analyzed the definition of complexity for a non-static sphere of anisotropic fluid in $f(R,T)$ gravity. We chose $Y_{TF}$ as the complexity factor, since it covers all the components that contribute to the complexity of a system. In the case under consideration, we again find $Y_{TF}$ to be the most suitable scalar for analyzing the components that trigger complexity in the system. It also incorporates the effects of the dark source terms. We can see in Eq.(\ref{Y}) that it also contains the term involving the length of the cylinder; thus, it also measures geometric variations in the system. \section{The Homologous Evolution and the Homogeneous Expansion Condition} Having chosen $Y_{TF}$ as the complexity factor, our next task is to analyze the complexity of the evolutionary patterns of the system. Such an analysis involves two possibilities: the homologous condition and homogeneous expansion. Homogeneous expansion corresponds to a vanishing radial derivative of the expansion scalar, which measures infinitesimal changes in the fluid distribution, whereas homologous evolution corresponds to self-similar patterns of evolution. \subsection{The Homologous Evolution} We can see that Eq.$(\ref{28})$ can be written as \begin{eqnarray} D_H\left(\frac{U}{H}\right) &=& \frac{1+f_T}{f_R}\frac{q}{\tilde{E}}-\frac{1}{FG f_R \tilde{E}}\psi_{01}+\sqrt{3}\frac{\sigma}{H}, \end{eqnarray} whose integration leads to the equation \begin{eqnarray} \frac{U}{H} &=&\int_0^r\left(\frac{1+f_T}{f_R}\frac{q}{\tilde{E}}-\frac{1}{FG f_R \tilde{E}}\psi_{01}+\sqrt{3}\frac{\sigma}{H}\right)H'dr+h(t), \end{eqnarray} where $h(t)$ is an arbitrary function of integration. Multiplying through by $H$ gives \begin{eqnarray}\label{101} U &=&H\int_0^r\left(\frac{1+f_T}{f_R}\frac{q}{\tilde{E}}-\frac{1}{FG f_R \tilde{E}}\psi_{01}+\sqrt{3}\frac{\sigma}{H}\right)H'dr+Hh(t), \end{eqnarray} which yields \begin{eqnarray}\label{102} U &=& \frac{U_\Sigma}{H_\Sigma}H-H\int_0^r\left(\frac{1+f_T}{f_R}\frac{q}{\tilde{E}}-\frac{1}{FG f_R \tilde{E}}\psi_{01}+\sqrt{3}\frac{\sigma}{H}\right)H'dr. \end{eqnarray} If the integral in Eqs.$(\ref{101})$ and $(\ref{102})$ vanishes, then $U\sim H$, which is characteristic of homologous evolution; this is possible if $\sigma=0$, $q=0$, and $\psi_{01}=0$, or if the terms cancel one another (see the short symbolic check below).
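As a quick, purely illustrative cross-check of the homologous behaviour just described, the following SymPy sketch (ours, not part of the original derivation; the simplifying assumption that the lapse $F$ depends on $t$ only is made solely for this check and is consistent with the geodesic choice $F=1$ adopted later) verifies that a separable $H=H_1(t)H_2(r)$ makes the ratio $U/H$ independent of $r$, i.e., $U\sim H$ at each instant:
\begin{verbatim}
import sympy as sp

t, r = sp.symbols('t r')
F  = sp.Function('F')(t)     # lapse, assumed r-independent for this check only
H1 = sp.Function('H1')(t)
H2 = sp.Function('H2')(r)

H = H1 * H2                  # separable form discussed in the next paragraph
U = sp.diff(H, t) / F        # U = D_T H = dH/dt / F

ratio = sp.simplify(U / H)   # reduces to H1'(t) / (F(t) H1(t))
print(ratio)                 # an expression in t only
print(sp.diff(ratio, r))     # prints 0: U is proportional to H at each time t
\end{verbatim}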
\\ For homologous evolution, $U=h(t) H$ with $h(t)=\frac{U_\Sigma}{H_\Sigma}$, where $U=D_T H$. It follows that $H$ is separable and can be written as \begin{eqnarray}\label{103} H &=& H_1(t)H_2(r). \end{eqnarray} The term with the negative sign in Eq.$(\ref{102})$ shows that dissipation, shear, and dark source entities are responsible for the deviation of the evolution from being homologous. Thus, we have \begin{eqnarray}\label{105} \frac{1+f_T}{f_R}\frac{qG}{H'}-\frac{1}{f_RF H'}\psi_{01}+\sqrt{3}\frac{\sigma}{H} &=& 0. \end{eqnarray} This represents the homologous condition in general. For the non-dissipative case, it takes the form \begin{eqnarray}\label{106} \sqrt{3}\frac{\sigma}{H} &=& \frac{1}{f_RF H'}\psi_{01}. \end{eqnarray} It is obvious that homologous evolution does not correspond to the shear-free condition in general; rather, it depends on the choice of the $f(R,T)$ model. \subsection{The Homogeneous Expansion} Homogeneous expansion also represents a simple pattern of evolution. Under homogeneous expansion, Eq.$(\ref{28})$ assumes the form \begin{eqnarray}\label{106i} f_R\left(\sqrt{3} \frac{\sigma}{H}+\frac{1}{3}D_H\sigma\right)-\frac{1}{FH'}\psi_{01} &=& -\frac{qG}{H'}(1+f_T). \end{eqnarray} If we analyze Eqs.$(\ref{105})$ and $(\ref{106i})$, it can be clearly observed that imposing these two conditions leads to $D_H\sigma=0$. Together with the regularity conditions in the neighborhood of the center, this implies that the shear-free condition entails zero dissipation. \section{The $f(R, T)$ Model} The results clearly depend on the choice of the $f(R,T)$ model, so we need to choose a viable $f(R, T)$ model in order to present our results in a meaningful way. The $f(R, T)$ model we have selected for discussion was developed by Sharif and Zubair \cite{zub} and has the mathematical form \begin{eqnarray}\label{106a} f(R, T) &=& \alpha_1 R^m T^n +\alpha_2T(1+\alpha_3 T^p R^q), \end{eqnarray} where the $\alpha_i$ are positive real numbers, whereas $m, n, p, q$ take fixed non-negative values. We will analyze our results for different cases of the above model and proceed with our discussion for the following three cases:\\ \begin{enumerate} \item $f(R,T)= R+\alpha_2 T$, for $\alpha_1=1, m=1, n=0, \alpha_3=0$ \item $f(R, T)= \alpha_1 R+\alpha_2 T+\alpha_4 T^2$, for $m=1, n=0, \alpha_4=\alpha_2\alpha_3, p=1, q=0$ \item $f(R, T)= \alpha_1 R+\alpha_2 T(1+\alpha_3 TR^2)$, for $m=1, n=0, p=1, q=2$ \end{enumerate} \section{Kinematics and Dynamics of Stellar Systems} \subsection{Case I: $f(R,T)= R+\alpha_2 T$} This form of the model involves direct minimal curvature-matter coupling and has been widely used to explore a number of cosmological phenomena because of its theoretical and cosmological consistency \cite{1*}. Many theoretical cosmological models of the universe have been proposed to analyze the behavior of mysterious components, and their physical and cosmological consequences have been explored \cite{s6, s7}. Galactic structures and their existence have also been discussed, and the results are found to be in agreement with previously established solutions and assumptions \cite{s8}. For this model, the homologous condition and the homogeneous expansion condition given in Eqs.$(\ref{105})$ and $(\ref{106i})$ take the form \begin{eqnarray}\label{1071} (1+\alpha_2)qG &=&\sqrt{3}\frac{\sigma H'}{H},\\\label{1081} \left(\sqrt{3} \frac{\sigma}{H}+\frac{1}{3}D_H\sigma\right)&=& -\frac{qG}{H'}(1+\alpha_2).
\end{eqnarray} Here, Eq.$(\ref{1081})$ clearly shows that the fluid cannot be dissipative under the shear-free condition and homogeneous expansion. However, if Eqs.$(\ref{1071})$ and $(\ref{1081})$ hold at the same time, then we have \begin{eqnarray}\label{1081a} (\Theta-\sqrt{3}\sigma)' &=& 0. \end{eqnarray} By inserting the values of the expansion scalar and the shear scalar given in Eqs.$(\ref{13})$ and $(\ref{sh})$, respectively, we have \begin{eqnarray}\label{1091} (\Theta-\sqrt{3}\sigma)' &=& \left(\frac{3}{F}\frac{\dot{H}}{H}\right)'= 0. \end{eqnarray} Eq.$(\ref{103})$ together with the above equation implies that $F'=0$, which ensures that the fluid is geodesic. Since $F$ then takes an arbitrary constant value, we may choose $F=1$. With this choice, we obtain from Eq.(\ref{1091}) \begin{eqnarray}\label{1101} \Theta-\sqrt{3}\sigma &=& 3\frac{\dot{H}}{H}. \end{eqnarray} Analyzing this expression near the center, we again obtain the condition $(\Theta-\sqrt{3}\sigma)'=0$. Successive derivatives of Eq.$(\ref{1101})$ with respect to $r$ also support our argument and strengthen the point that the fluid is homologous. \\ Analyzing Eq.$(\ref{28})$ again, if we assume $\sigma=0$ (and that the fluid is non-dissipative), then we have \begin{eqnarray}\label{1121} \Theta' &=&0. \end{eqnarray} It can clearly be observed that homologous patterns of evolution imply homogeneity of the expansion scalar. \\ If we assume $\Theta'=0$, then Eq.$(\ref{28})$ takes the form \begin{eqnarray}\label{1131} \left(\frac{\sqrt{3}\sigma}{H}-\frac{1}{\sqrt{3}}D_H(\sigma)\right) &=& 0, \end{eqnarray} which can be rewritten as \begin{eqnarray}\label{1131a} \frac{\sigma'}{\sigma} &=& \frac{3H'}{H}, \end{eqnarray} implying \begin{eqnarray}\label{1131b} \sigma &=& \frac{f_1(t)}{H^3}, \end{eqnarray} where $f_1(t)$ is an arbitrary function of integration. Since $H$ vanishes at the center $r=0$, we must have $f_1(t)=0$ in order to avoid unboundedness of the expression. This leads to the vanishing of the shear scalar. On the other hand, if we set the shear scalar to zero, then Eq.$(\ref{28})$ ensures homogeneous expansion. Hence, homogeneous expansion and the homologous condition imply each other in the non-dissipative case. Further, analyzing homogeneous expansion in the presence of dissipation, we obtain \begin{eqnarray}\label{1131c} \sigma &=& -\frac{\sqrt{3}}{2H^3}\int^r_0 H^3 qG(1+\alpha_2)dr. \end{eqnarray} This expression makes it clear that homogeneous expansion and the homologous condition are not compatible in the presence of dissipation. Now, we consider some dynamical aspects of the system under consideration. Our previous discussion shows that the fluid is geodesic under the homologous condition in both the dissipative and non-dissipative cases. Thus, applying the homologous condition to Eq.(\ref{B4}), we obtain \begin{eqnarray}\label{1141} D_TU &=& -\frac{m}{H^2}-\frac{H}{2}\left\{\alpha_2\mu -(1+\alpha_2)P_r+\frac{\alpha_2 T}{2}\right\}. \end{eqnarray} This equation can be rewritten in terms of $Y_{TF}$ as \begin{eqnarray}\label{1151} \frac{3 D_TU}{H} &=& -\frac{1}{2}\left\{(1-3\alpha_2)\mu-\alpha_2 T-2(1+\alpha_2)\Pi +3(1+\alpha_2)P_r\right\}+Y_{TF}-\frac{3l}{8H^3}.
\end{eqnarray} Now, the manipulation of Eqs.$(\ref{16})$, $(\ref{18})$ and $(\ref{19})$ provides \begin{eqnarray}\label{1151a} -\frac{2\ddot{H}}{H}-\frac{\ddot{G}}{G} &=& \frac{1}{2}\left\{(1-3\alpha_2)\mu-\alpha_2 T-2(1+\alpha_2)\Pi +3(1+\alpha_2)P_r\right\}+Y_{TF}-\frac{3l}{8H^3}, \end{eqnarray} while the definition of the velocity `$U$' of the collapsing star provides \begin{eqnarray}\label{1171} \frac{3 D_TU}{H}&=& \frac{3\ddot{H}}{H}. \end{eqnarray} Inserting the above two equations into Eq.$(\ref{1151a})$ leads to \begin{eqnarray}\label{1181} Y_{TF} &=& \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}-\frac{3l}{8H^3}. \end{eqnarray} If $Y_{TF}=0$, then Eq.$(\ref{1181})$ becomes \begin{eqnarray}\label{118a} \frac{3l}{8H^3} &=& \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}. \end{eqnarray} Since we are working under the assumption that the fluid is homologous, we can write Eq.$(\ref{1151a})$ as \begin{eqnarray}\label{1151b} 3\left(\dot{h}(t)+h(t)\frac{\dot{H}}{H}\right) &=& -\frac{1}{2}\left\{(1-3\alpha_2)\mu-\alpha_2 T-2(1+\alpha_2)\Pi +3(1+\alpha_2)P_r\right\}+Y_{TF}-\frac{3l}{8H^3}. \end{eqnarray} We now consider both cases, i.e., when the fluid is non-dissipative and when it is dissipative. \subsubsection{The Dissipative and Non-dissipative Scenarios} Here, we first assume that the fluid is non-dissipative. Under this assumption, and for the choice of $f(R,T)$ model in Case I, Eq.(\ref{106}) implies the shear-free condition for the fluid configuration. With this implication, Eq.$(\ref{sh})$ leads to \begin{eqnarray}\label{1251} \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G} &=& 0 \quad \Rightarrow\quad Y_{TF}=\frac{3l}{8H^3}. \end{eqnarray} Here, we can see that the complexity factor $Y_{TF}$ is proportional to the ratio of $l$ and $H^3$. Inserting this relation into Eq.$(\ref{Y})$, we find $\Pi=0$, which implies that $\mu'=0$. Thus, $Y_{TF}=\frac{3l}{8H^3}$ represents the simplest mode of evolution in the cylindrical case. Further, Eqs.$(\ref{118a})$ and $(\ref{1251})$ also strengthen our argument. Now, we analyze the situation in the presence of dissipation. In this case, Eqs.$(\ref{1081})$ and $(\ref{sh})$ produce the following expression \begin{eqnarray}\label{1261} \dot{\sigma} &=& \frac{1}{\sqrt{3}}\left\{\left(\frac{\dot{H}}{H}\right)^2- \left(\frac{\dot{G}}{G}\right)^2+\frac{3l}{8H^3}-Y_{TF}\right\}. \end{eqnarray} The time derivative of Eq.$(\ref{1081})$ together with the above equation provides the following expression \begin{eqnarray}\label{1271} Y_{TF}\frac{H'}{H} &=& \frac{1}{2}Bq(1+\alpha_2)\left(\frac{\dot{q}}{q}+\frac{2\dot{B}}{B}+\frac{\dot{H}}{H}\right)+\frac{3l}{8H^3}. \end{eqnarray} If we set the complexity factor to zero, then we get \begin{eqnarray}\label{1281} Bq(1+\alpha_2)\left(\frac{\dot{q}}{q}+\frac{2\dot{B}}{B}+\frac{\dot{H}}{H}\right)+\frac{3l}{4H^3} &=& 0. \end{eqnarray} This differential equation can be solved by suitable numerical or analytical methods of integration. It represents the simplest dissipative regime. \subsection{Case II: $f(R, T)= \alpha_1 R+\alpha_2 T+\alpha_4 T^2$} This form of the selected model comprises linear and quadratic terms in the trace of the energy-momentum tensor (EMT). The squared term of the EMT was first introduced in \cite
{s9}. This particular form of $f(R, T)$ model has been used to explore non-exotic matter wormholes \cite{s10}. This type of choice usually contrast with higher order gravity and the results provide description of the universe that enters from a decelerated phase of expansion to an accelerated one and in agreement with observational data. For this choice of $f(R,T)$ model, conditions obtained against simplest modes of evolution given in Eqs.$(\ref{105})$ and $(\ref{106i})$ take the form as \begin{eqnarray}\label{129} \frac{1+\alpha_2+2\alpha_4 T}{\alpha_1}qG &=&\sqrt{3}\frac{\sigma H'}{H},\\\label{108a} \left(\sqrt{3} \frac{\sigma}{H}+\frac{1}{3}D_H\sigma\right)&=& -\frac{qG}{\alpha_1H'}(1+\alpha_2+2\alpha_4 T) \end{eqnarray} Here, if we assume $\sigma=0$, Eq.(\ref{108a}) ensures the vanishing of dissipative variable. Thus, homogeneous expansion and shear-free condition again ceases the fluid to be dissipative. All the situations that exits in Eqs.(\ref{1131b}-\ref{108a}) are also valid in this case. However, in the presence of dissipation, shear scalar assume the form as \begin{eqnarray}\label{113*c} \sigma &=& -\frac{\sqrt{3}}{2H^3}\int^r_0 \frac{H^3}{\alpha_1} qG(1+\alpha_2+\alpha_4 T)dr. \end{eqnarray} In this case, dissipative variable again affects the homogeneous expansion and homologous condition and these are not found compatible. Now, we consider some dynamical situations for the system under consideration. As our previous discussion shows that fluid is geodesic under homologous condition in both dissipative and non-dissipative cases. Thus, if we apply homologous condition on the Eq.(\ref{B4}), we obtain \begin{eqnarray}\label{114*} D_TU &=& -\frac{m}{H^2}-\frac{H}{2\alpha_1}\left\{(\alpha_2+2\alpha_4 T)\mu -(1+\alpha_2+2\alpha_4 T)P_r+\frac{(\alpha_2+2\alpha_4 T) T}{2}\right\}. \end{eqnarray} This equation can be re-written in the form of $Y_{TF}$ as \begin{eqnarray}\nonumber \frac{3 D_TU}{H} &=& -\frac{1}{2\alpha_1}\left\{(1-3\alpha_2-6\alpha_4 T)\mu-\alpha_2 T-\alpha_4 T^2-2(1+\alpha_2+\alpha_4 T)\Pi +3(1+\alpha_2+\alpha_4 T)P_r\right\}\\\label{115*}&&+Y_{TF}-\frac{3l}{8H^3}. \end{eqnarray} Now, the manipulation of Eqs.$(\ref{16})$, $(\ref{18})$ and $(\ref{19})$ provides \begin{eqnarray}\nonumber -\frac{2\ddot{H}}{H}-\frac{\ddot{G}}{G} &=& \frac{1}{2}\left\{(1-3\alpha_2-6\alpha_4 T)\mu-\alpha_2 T-2\alpha_4 T^2-2(1+\alpha_2+2\alpha_4 T)\Pi +3(1+\alpha_2 +2\alpha_4 T)P_r\right\}\\\label{115**}&&+Y_{TF}-\frac{3l}{8H^3}, \end{eqnarray} while the definition of velocity `U' of collapsing star provides \begin{eqnarray}\label{117*} \frac{3 D_TU}{H}&=& \frac{3\ddot{H}}{H}, \end{eqnarray} Insertion of above two equations into Eq.$(\ref{115**})$ leads to \begin{eqnarray}\label{118*} Y_{TF} &=& \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}-\frac{3l}{8H^3}. \end{eqnarray} If $Y_{TF}=0$, then Eq.$(\ref{118*})$ becomes \begin{eqnarray}\label{118a*} \frac{3l}{8H^3} &=& \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}. \end{eqnarray} Since we are working on the assumption that fluid is homologous, so we can write the Eq.$(\ref{115**})$ as \begin{eqnarray}\nonumber 3\left(\dot{h}(t)+h(t)\frac{\dot{H}}{H}\right) &=& -\frac{1}{2}\left\{(1-3\alpha_2-6\alpha_4 T)\mu-\alpha_2 T-\alpha_4 T^2-2(1+\alpha_2+\alpha_4 T)\Pi +3(1+\alpha_2+\alpha_4 T)P_r\right\}+Y_{TF}\\\label{115w}&&-\frac{3l}{8H^3}, \end{eqnarray} \subsubsection{The Dissipative and Non-dissipative Scenarios} In non-dissipative case, we again observe the same scenario as it is discussed in case I. Thus, we need to discuss only dissipative case. 
Here, we analyze the situation in the presence of dissipation. In this case, Eqs. $(\ref{sh})$ and $(\ref{108a})$ produce the following expression \begin{eqnarray}\label{126p} \dot{\sigma} &=& \frac{1}{\sqrt{3}}\left\{\left(\frac{\dot{H}}{H}\right)^2- \left(\frac{\dot{G}}{G}\right)^2+\frac{3l}{8H^3}-Y_{TF}\right\}. \end{eqnarray} The time derivative of Eq.$(\ref{108a})$ and above equation provide the following mathematical expression \begin{eqnarray}\label{127p} Y_{TF}\frac{H'}{H} &=& \frac{1}{2\alpha_1}Bq(1+\alpha_2+2\alpha_4 T)\left(\frac{\dot{q}}{q} +\frac{2\dot{B}}{B}+\frac{\dot{H}}{H}\right)+\frac{\alpha_4}{\alpha_2}\dot{T}+\frac{3l}{8H^3}. \end{eqnarray} If we assign zero value to complexity factor, then we get \begin{eqnarray}\label{128p} \frac{Bq}{\alpha_1}(1+\alpha_2+2\alpha_4 T)\left(\frac{\dot{q}}{q}+\frac{2\dot{B}}{B} +\frac{\dot{H}}{H}\right)+2\frac{\alpha_4}{\alpha_2}\dot{T}+\frac{3l}{4H^3} &=& 0. \end{eqnarray} This differential equation can further be solved by using some suitable numerical or analytical methods of integration. It actually holds to represent simplest dissipative regime. \subsection{ Case III: $f(R, T)= \alpha_1 R+\alpha_2 T(1+\alpha_3 TR^2)$} This type of models offer the non-minimal coupling of curvature and matter components. The similar type of choice has been recently used to measure the impact of collision matter on the late-time dynamics of $f(R, T)$ gravity \cite{s11}. In this case, homologous and homogeneous conditions will take the form as \begin{eqnarray}\label{lab1} \frac{1+\gamma_2}{\gamma_1}qG+\sqrt{3}\sigma \frac{H'}{H}&=& \frac{1}{\gamma_1}\left(\gamma_5-\frac{\dot{B}}{B}\gamma_4\right), \\\label{lab2} \gamma_1\left(\sqrt{3}\frac{\sigma}{H}+\frac{1}{\sqrt{3}}D_H\sigma\right)&=&\frac{1}{FH'} \left(\gamma_5-\frac{A'}{A}\gamma_3-\frac{\dot{B}}{B}\gamma_4\right), \end{eqnarray} however, Eq.(\ref{106}) takes the form as \begin{eqnarray}\label{dq} \sqrt{3}\frac{\sigma}{H} &=& \frac{1}{\gamma_1FH'} \left(\gamma_5-\frac{A'}{A}\gamma_3-\frac{\dot{B}}{B}\gamma_4\right), \end{eqnarray} where \begin{eqnarray}\nonumber \gamma_1&=&\alpha_1+2\alpha_4T^2 R,\\\nonumber \gamma_2&=& \alpha_2+2\alpha_4TR^2,\\\nonumber \gamma_3&=&\alpha_4(T^2\dot{R}+2 T \dot{T}R),\\\nonumber \gamma_4&=&\alpha_4(T^2R'+2 T T'R),\\\nonumber \gamma_5&=&2\alpha_4(T^2\dot{R'}+2TT'\dot{R}+2T'\dot{T}R+2T\dot{T'}R+2T\dot{T}R'). \end{eqnarray} Here, Eq.(\ref{dq}) clearly shows that homologous evolution does not imply shear free condition in non-dissipative case. Nevertheless, validity of the Eqs.$(\ref{lab1})$ and $(\ref{lab2})$ at the same time again makes us to believe that fluid is homologous. However, if we choose $\Theta'=0$ and consider the Eq.$(\ref{28})$, then we have \begin{eqnarray}\label{lab3} \sigma &=& \frac{\sqrt{3}}{2H^3}\int\frac{H^3}{F\gamma_1}\left(\gamma_5-\frac{F'}{F}\gamma_3 -\frac{\dot{B}}{B}\gamma_4\right)dr. \end{eqnarray} Under the same condition (i.e., $q=0$), if we assume $\sigma=0$, then Eq.$(\ref{28})$ does not imply the homogeneous expansion, rather it takes the form as \begin{eqnarray}\label{lab4} \Theta' &=& -\frac{\sqrt{3}}{F\gamma_1}\left(\gamma_5-\frac{F'}{F}\gamma_3-\frac{\dot{G}}{G}\gamma_4\right). \end{eqnarray} It is obvious from Eqs.$(\ref{lab3})$ and $(\ref{lab4})$ that homologous and homogeneous expansion conditions do not imply each other in non-dissipative case. 
In the presence of dissipation, $\Theta'=0$ produces the following result \begin{eqnarray}\label{lab5} \sigma &=& \frac{\sqrt{3}}{2H^3}\int\frac{H^3}{F\gamma_1}\left\{\left(\gamma_5-\frac{F'}{F}\gamma_3 -\frac{\dot{B}}{B}\gamma_4\right)-FGq(1+\gamma_2)\right\}dr. \end{eqnarray} Again, we take dynamical considerations into account, apply the homologous condition, and obtain from Eq.$(\ref{B4})$ \begin{eqnarray}\label{lab6} D_T U &=& -\frac{m}{H^2}-\frac{H}{2\gamma_1}\left\{\gamma_2 \mu+(1+\gamma_2)P_r+\phi+\phi_{11}\right\}, \end{eqnarray} which further takes the form \begin{eqnarray}\label{lab7} \frac{3D_T U}{H} &=& -\frac{1}{2\gamma_1}\left\{(1+3\gamma_2)\mu-2\phi-2(1+\gamma_2)\Pi+\phi_{11} +2\phi_{22}+3(1+\gamma_2)P_r \right\}+Y_{TF}-\frac{3l}{8H^3}, \end{eqnarray} where $\phi$ and $\phi_{ii}$ arise due to the extra degrees of freedom of $f(R, T)$ gravity involved in the evolution. The above expression can further be rewritten as \begin{eqnarray}\label{lab8} 3\left(\dot{h}(t)+h(t)\frac{\dot{H}}{H}\right) &=& -\frac{1}{2\gamma_1}\left\{(1+3\gamma_2)\mu-2\phi-2(1+\gamma_2)\Pi+\phi_{11} +2\phi_{22}+3(1+\gamma_2)P_r \right\}+Y_{TF}-\frac{3l}{8H^3}. \end{eqnarray} Further, from the field equations we have extracted the following result \begin{eqnarray}\label{lab9} \frac{1}{2\gamma_1}\left\{(1+3\gamma_2)\mu-2\phi-2(1+\gamma_2)\Pi+\phi_{11} +2\phi_{22}+3(1+\gamma_2)P_r \right\}+Y_{TF}-\frac{3l}{8H^3} &=& -\frac{2\ddot{H}}{H}-\frac{\ddot{G}}{G}. \end{eqnarray} Using the definition of the velocity `$U$' of the collapsing star together with the above equation, we again obtain the same result as given in Eq.(\ref{1181}), and the vanishing of the complexity factor $Y_{TF}$ yields the same expression as given in Eq.(\ref{118a}). \subsubsection{The Dissipative and Non-dissipative Scenarios} In this case, the homologous condition does not imply the shear-free condition in the non-dissipative scenario. Thus, under the homologous condition, we have $Y_{TF}= \frac{\ddot{H}}{H}-\frac{\ddot{G}}{G}-\frac{3l}{8H^3}$, which shows that the complexity of the system is increased in the presence of higher-order curvature terms. Even in the simplest modes of evolution, the system retains a nonzero complexity index, and the fluid configuration does not correspond to isotropic pressure and homogeneous energy density. Using Eqs.$(\ref{sh})$ and $(\ref{lab2})$, we find the relation for the dissipative case to be \begin{eqnarray}\label{127pp} Y_{TF}\frac{H'}{H} &=& \frac{1}{2\alpha_1}Gq\frac{1+\gamma_2}{\gamma_1}\left(\frac{\dot{q}}{q} +\frac{2\dot{G}}{G}+\frac{\dot{H}}{H}\right)+\left(\frac{\gamma_2}{\gamma_1}\right)_{,0}qG- \frac{1}{\gamma_1}\left(\gamma_5-\frac{F'}{F}\gamma_3-\frac{\dot{G}}{G}\gamma_4\right) +\frac{3l}{8H^3}. \end{eqnarray} We can see that the dark source terms play an important role, and the vanishing of the complexity factor yields the following expression \begin{eqnarray}\label{127ppp} \frac{1}{2\alpha_1}Gq\frac{1+\gamma_2}{\gamma_1}\left(\frac{\dot{q}}{q} +\frac{2\dot{G}}{G}+\frac{\dot{H}}{H}\right)+\left(\frac{\gamma_2}{\gamma_1}\right)_{,0}qG- \frac{1}{\gamma_1}\left(\gamma_5-\frac{F'}{F}\gamma_3-\frac{\dot{G}}{G}\gamma_4\right) +\frac{3l}{8H^3}&=& 0. \end{eqnarray} \section{Stability of the Vanishing Complexity Factor Condition} In this section, our task is to find and analyze the conditions responsible for maintaining an initial state of vanishing complexity factor under the homologous condition.
For this, we need to develop the evolution equation for the structure scalar $Y_{TF}$ with the help of Eqs.$(\ref{xx})$, $(\ref{yy})$, and (\ref{B2s}), which takes the form \begin{eqnarray}\nonumber &&\dot{Y}_{TF}+(1+\alpha_2)\dot{\Pi}+\frac{3\dot{H}}{H}Y_{TF} +2(1+\alpha_2)\Pi\frac{\dot{H}}{H}+\frac{(1+\alpha_2)}{2} (\mu+P_r)\sigma-\frac{1+\alpha_2}{2}(\mu+P_r)\frac{\dot{G}}{G} -(1-\alpha_2^2)(\mu+P_\perp)\frac{\dot{H}}{H}\\\label{133*}&&+\alpha_2\frac{\dot{G}}{G}(\mu+P_\perp)-\frac{3}{2}\frac{H'}{H}(1+\alpha_2)q \frac{1-\alpha_2^2}{2G^2}(Gq)'+\frac{q(1-\alpha_2^2)}{G^2}\left(\frac{2H'}{H}-\frac{G'}{G}\right)+\alpha_2 q'+\Lambda= 0, \end{eqnarray} where $\Lambda$ contains the dark source entities. Now, we analyze this equation for the dissipative and non-dissipative cases in turn. First, we consider the non-dissipative case at some initial moment where $Y_{TF}=q=\sigma=\Pi=0$, for which the previous equation takes the form \begin{eqnarray}\label{134*} \dot{Y}_{TF}+(1+\alpha_2)\dot{\Pi}-\frac{1-\alpha_2^2}{2}(\mu+P_\perp)\frac{\dot{H}}{H}+ \Lambda&=& 0. \end{eqnarray} In the most general case, when the system is dissipative, we have at the initial moment \begin{eqnarray}\nonumber &&\dot{Y}_{TF}+(1+\alpha_2)\dot{\Pi}+2(1+\alpha_2)\Pi\frac{\dot{H}}{H}+\frac{1+\alpha_2}{2} (\mu+P_r)\sigma-\frac{1+\alpha_2}{2}(\mu+P_r)\frac{\dot{G}}{G} -(1-\alpha_2^2)(\mu+P_\perp)\frac{\dot{H}}{H}\\\label{135*}&&+\alpha_2\frac{\dot{G}}{G}(\mu+P_\perp)-\frac{3}{2}\frac{H'}{H}(1+\alpha_2)q \frac{1-\alpha_2^2}{2G^2}(Gq)'+\frac{q(1-\alpha_2^2)}{G^2}\left(\frac{2H'}{H}-\frac{G'}{G}\right)+\alpha_2 q'+ \Lambda= 0. \end{eqnarray} It is obvious from the above equation that the pressure anisotropy, energy density inhomogeneity, dissipative variable, and dark source entities are all crucial for the complexity of a cylindrical system. The last term $\Lambda$ in the above equation incorporates the effects of the dark source terms, which can be evaluated for any particular model. \section{Conclusion} The study of complexity on astrophysical scales is an interesting concept which helps researchers explore and identify the factors responsible for the emergence of complexity in a system. In pursuit of a complete and comprehensive definition of complexity, Herrera \cite{12b} proposed a new definition for an anisotropic fluid sphere. Following Herrera's work, we have explored the behavior of the complexity factor for a cylindrically symmetric dynamical object in $f(R,T)$ gravity. For this, we have derived the field equations for cylindrical symmetry and constructed the mass function using the C-energy expression. We have constructed the structure scalars for cylindrical geometry in $f(R,T)$ gravity and identified $Y_{TF}$ among these scalars as the complexity factor, since it incorporates the effects of inhomogeneous energy density, anisotropic pressure, and the dissipative variable, together with the dark source terms. The definition of complexity for a dynamical system involves not only the complexity of the structure of the system, but also the degree of complexity of its pattern of evolution. Thus, we have studied the complexity of the patterns of evolution by considering the two simplest modes of evolution, i.e., homologous evolution and homogeneous expansion. We have worked out the conditions representing homogeneous expansion and homologous evolution, and found that the shear-free condition implies zero dissipation if both of these conditions hold at the same time.
In order to present a detailed study in $f(R,T)$ gravity, we have selected a generic non-minimally coupled $f(R,T)$ model and discussed our findings for three different cases of the model under consideration. \begin{itemize} \item In the first case, our $f(R,T)$ model is a linear combination of first-order terms in the curvature `$R$' and the trace of the energy-momentum tensor `$T$'. Here, we have found that homogeneous expansion and the shear-free condition prevent the fluid from being dissipative. We have also found that the shear-free condition and the homologous condition imply each other for this particular choice. In the non-dissipative scenario, the simplest modes of evolution are found to be compatible with each other; however, in the presence of the dissipative variable, they are not. We have discussed the dissipative and non-dissipative scenarios and found that the complexity factor is proportional to the ratio of the length of the cylinder and the cube of the metric function $H$, which represents the minimum value of the complexity factor when the total amount of energy in the cylindrical system is described through the gravitational C-energy. \item In the second case, our model is of the form $f(R,T)=f_1(R)+f_2(T)$, where $f_1(R)$ is a linear function, while $f_2(T)$ is a quadratic one. All the results obtained in the first case remain the same for the second choice of $f(R,T)$ model. However, in the dissipative case, the complexity of the system is increased due to the presence of the quadratic $T$ terms. \item In the third case, our assumptions yield a form which consists of two parts. The first part coincides with Case I, while the second part contains a product of quadratic terms in `$R$' and `$T$'. In this case, the shear-free condition and the homologous condition are not found to be compatible in either the dissipative or the non-dissipative case. It is observed that the complexity of the system is increased in the presence of dark source entities; even in the simplest modes of evolution it does not attain the value that represents the minimum complexity of the system. \end{itemize} Finally, we analyzed the stability of the vanishing complexity factor condition and observed different outcomes in the dissipative and non-dissipative scenarios. In the absence of dissipation, the significance of the pressure anisotropy and dark source entities is obvious from Eq.(\ref{134*}); however, in the general scenario, Eq.(\ref{135*}) shows that multiple factors are involved. We have compared our results with the previous literature \cite{12a, s3} and found that the complexity factor for the cylindrical system contains some extra terms because of the geometrical difference between the self-gravitating systems, which prevents it from attaining a zero value in the simplest modes of evolution. The complexity of static self-gravitating systems has been explored extensively, but dynamical systems require further study. We intend to extend our work to dynamical self-gravitating systems in the presence of charge. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A} The divergence of the energy-momentum tensor is nonzero in $f(R, T)$ gravity and is found to be \begin{eqnarray}\setcounter{equation}{1}\label{B1s} \nabla{^{\alpha}}T_{\alpha\beta}=\frac{f_T}{1-f_T}\left[(\Theta_{\alpha\beta}-T_{\alpha\beta})\nabla{^{\alpha}}\ln f_T-\frac{1}{2}g_{\alpha\beta}\nabla{^{\alpha}}T+\nabla^{\alpha}\Theta_{\alpha\beta}\right].
\end{eqnarray} Its divergence yields the following two equations: \begin{eqnarray}\nonumber &&\dot{\mu}\left(\frac{1-f_T-f_R f_T}{f_R(1-f_T)}\right)-\mu\frac{\dot{f_R}}{f_R^2}+ \frac{\dot{G}}{G}\frac{1}{f_R}(1+f_T)(\mu+P_r)+\frac{\dot{2H}}{H}\frac{1}{f_R}(1+f_T)(\mu+P_\perp) +\left(\frac{\dot{T}}{2}\right)\frac{f_T}{1-f_T }\\\nonumber&&-q'\frac{F}{G}\frac{(1+f_T)}{f_R}-\frac{q}{G^2}\left\{\frac{FG(1+f_T)}{f_R}\right\}_{,1} -\frac{F}{G}\left(\frac{F'}{F}-\frac{G'}{G}+\frac{2H'}{H}\right)(1+f_T)q +\left\{\frac{1}{f_R }\left(\Psi +\psi_{00}\right)\right\}_{,0}\\\nonumber&&-\frac{1}{G^2}\left\{\frac{\psi_{01}}{f_R}\right\}_{,1}+{\frac{\psi_{01}}{G^{2}f_R} }\left(\frac{F'}{F}-\frac{G'}{G}+\frac{2H'}{H}\right)+\frac{\psi_{00}}{f_R}\left(\frac{\dot{G}}{G }+\frac{\dot{2H}}{H}\right)+\frac{\dot{G}}{G}\frac{\psi_{11}}{f_R}+\frac{\dot{2H}}{H}\frac{\psi_{22}}{f_R} \\\label{B2s}&&-\left(\frac{2\dot{G}}{G}(\mu+P_r)+\frac{F}{G}2q'+4q\frac{F'}{G}\right)\frac{f_T}{1-f_T} -\frac{F}{G}q\frac{f'_T}{1-f_T}=0,\\\nonumber &&P'_r\left(\frac{1-f^{2}_T+2f_R f_T}{f_R(1-f_T)}\right)+\frac{P_r}{f_R}\left(f'_T-\frac{f'_R(1+f_T)}{f_R}\right)+\mu'\frac{f_T}{f_R}+ \frac{\mu}{f_R}\left(f'_T-\frac{f_T f'_R}{f_R}\right)+\frac{F'}{F}\frac{(1+f_T)}{f_R}(\mu+P_r)\\\nonumber &&+\frac{2H'}{H}\frac{1}{f_R}(1+f_T)(P_r-P_\perp)+\frac{f'_T}{1-f_T}(\mu+P_r) -\frac{f_T}{1-f_T}\mu'+\frac{G}{F}\frac{(1+f_T)}{f_R}\dot{q}+\frac{q}{F^2}\left\{\frac{FG(1+f_T)}{f_R}\right\}_{,0} \\\nonumber&&+ \left\{\frac{1}{f_R }\left(\Psi+\psi_{11}\right)\right\}_{,1}-\frac{1}{F^2}\left\{\frac{\psi_{01}}{f_R}\right\}_{,0}+{\frac{\psi_{01}}{G^{2}f_R}} \left(\frac{\dot{F}}{F}-\frac{\dot{G}}{G}+\frac{\dot{2H}}{H}\right) +q\frac{G}{F}(1+f_T)\left(\frac{\dot{F}}{F}-\frac{\dot{G}}{G}+\frac{\dot{2H}}{H}\right)\\\nonumber&&+\frac{f_T}{1-f_T}\left(\frac{T'}{2}+\mu'\right) +\frac{F'}{F}\frac{1}{f_R}\left(\psi_{00}+\pi_{11}\right)+\frac{2H'}{H}\frac{1}{f_R}\left(\pi_{11}-\psi_{22}\right)+\frac{2G}{F}q\left(1+\frac{F'}{F}\right)\frac{f_T}{1-f_T} \\\label{B3s}&&+\frac{G}{F}\left(q\frac{\dot{f_T}}{f_R}+\frac{\dot{G}}{F}(\mu_+2P_r)\right)\frac{f_T}{1-f_T} +\frac{f_T}{1-f_T}\frac{2}{A}(Bq)_{,0} =0. \end{eqnarray} The acceleration $D_TU$ of a collapsing star can be obtained by manipulating the Eqs. $(\ref{13})$, $(\ref{17})$, $(\ref{24})$ and $(\ref{27})$: \begin{eqnarray}\label{B4} D_TU &=& \frac{m}{H^2}-\frac{R}{2f_R}\left(1+f_T)P_r+f_T\mu+\pi+\pi_{11}\right)+Ea. \end{eqnarray} \vspace{.5cm} \section*{Acknowledgments} ``Authors thank the Higher Education Commission, Islamabad, Pakistan for its financial support under the NRPU project with grant number $\text{5329/Federal/NRPU/R\&D/HEC/2016}$''.
\section{Introduction} \label{sec:intro} The rectified linear unit (ReLU), $\max\{x,0\}$, is one of the most successful and widely-used activation functions in deep learning \citep{lecun2015deep, ramachandran2017searchingAct, nair2010rectified}. The success of ReLU is based on its superior training performance \citep{glorot2011deep, sun2015deeply} over other activation functions such as the logistic sigmoid and the hyperbolic tangent \citep{glorot2010understanding, lecun1998efficient}. The ReLU has been used in various applications including image classification \citep{krizhevsky2012imagenet, szegedy2015going}, natural language processing \citep{maas2013rectifier}, speech recognition \citep{hinton2012deep}, and game intelligence \citep{silver2016mastering}, to name a few. The use of gradient-based optimization is inevitable in training deep neural networks. It is widely known that the deeper a neural network is, the harder it is to train \citep{srivastava2015training, du2018gradient}. A fundamental difficulty in training deep neural networks is the vanishing and exploding gradient problem \citep{poole2016exponential, hanin2018whichNN, chen2018dynamical}. The dying ReLU is a kind of vanishing gradient problem: it refers to the situation in which ReLU neurons become inactive and output only 0 for any input. It is known as one of the obstacles to training deep ReLU neural networks \citep{trottier2017parametric, agarap2018deep}. To overcome this problem, a number of methods have been proposed. Broadly speaking, these can be categorized into three general approaches. One approach modifies the network architecture. This includes, but is not limited to, changes in the number of layers, the number of neurons, the network connections, and the activation functions. In particular, many activation functions have been proposed to replace the ReLU \citep{maas2013rectifier,he2015delving,clevert2015fast, klambauer2017self}. However, the performance of other activation functions varies across tasks and data sets \citep{ramachandran2017searchingAct}, and they typically require a parameter to be tuned. Thus, the ReLU remains one of the most popular activation functions due to its simplicity and reliability. Another approach introduces additional training steps. This includes several normalization techniques \citep{ioffe2015batch, salimans2016weight, ulyanov2016instance, ba2016layer, wu2018group} and dropout \citep{srivastava2014dropout}. One of the most successful normalization techniques is batch normalization \citep{ioffe2015batch}, which inserts layers into the deep neural network that transform the output of each batch to have zero mean and unit variance. However, batch normalization adds roughly 30\% computational overhead to each iteration \citep{mishkin2015all}. The third approach modifies only the weight and bias initialization procedure, without changing the network architecture or introducing additional training steps \citep{lecun1998efficient, glorot2010understanding, he2015delving, saxe2013exact, mishkin2015all}. This third approach is the topic of the work presented in this paper. The intriguing ability of gradient-based optimization is perhaps one of the major contributors to the success of deep learning. Training deep neural networks using gradient-based optimization is a nonconvex nonsmooth optimization problem. Since a gradient-based method is either a first- or a second-order method, once it has converged, the resulting point is either a local minimum or a saddle point.
The authors of \citep{fukumizu2000local} proved that the existence of local minima poses a serious problem in training neural networks. Many researchers have put immense effort into mathematically understanding the gradient method and its ability to solve nonconvex nonsmooth problems. Under various assumptions, especially on the landscape, many results claim that the gradient method can find a global minimum, can escape saddle points, and can avoid spurious local minima \citep{lee2016gradient, amari2006singularities, ge2015escaping, ge2016matrix, zhou2017critical, wu2018no, yun2018small, du2017gradexp, du2017gradonehidden, du2018gradient, jin2017escape}. However, these assumptions do not always hold and are provably false for deep neural networks \citep{safran2018spurious, kawaguchi2016deep, arora2018convergence}. This further limits our understanding of what contributes to the success of deep neural networks. Often, theoretical conditions are impossible to meet in practice. Where to start the optimization process plays a critical role in training and has a significant effect on the trained result \citep{nesterov2013introductory}. This paper focuses on a particular kind of bad local minimum caused by a bad initialization. Such a bad local minimum causes the dying ReLU. Specifically, we consider the worst case of the dying ReLU, where the entire network dies, i.e., the network becomes a constant function. We refer to this as \textit{the dying ReLU neural network} (NN). This phenomenon is well illustrated by a simple example. Suppose $f(x) = |x|$ is a target function we want to approximate using a ReLU network. Since $|x| = \text{ReLU}(x) + \text{ReLU}(-x)$, a 2-layer ReLU network of width 2 can exactly represent $|x|$. However, when we train a deep ReLU network, we frequently observe that the network collapses. This trained result is shown in Fig.~\ref{fig:absx-intro}. Our 1,000 independent simulations show that there is a high probability (more than 90\%) for the deep ReLU network to collapse to a constant function. In this example, we employ a 10-layer ReLU network of width 2, which can in principle perfectly recover $f(x)=|x|$. \begin{SCfigure}[2][htbp] \centering \includegraphics{fnn2mean_abs1d_die.pdf} \caption{An approximation result for $f(x)=|x|$ using a 10-layer ReLU neural network of width 2. Among 1,000 independent simulations, this trained result is obtained with more than 90\% probability. One of the most popular initialization procedures \citep{he2015delving} is employed.} \label{fig:absx-intro} \end{SCfigure} Almost all common initialization schemes in training deep neural networks use symmetric probability distributions around 0. For example, zero-mean uniform distributions and zero-mean normal distributions were proposed and used in \citep{lecun1998efficient, glorot2010understanding, he2015delving}. We show that when weights and biases are initialized from symmetric probability distributions around 0, the dying ReLU NN occurs with probability tending to 1 as the depth goes to infinity. To the best of our knowledge, this is the first theoretical work on the dying ReLU. This result explains why training extremely deep networks is challenging. Furthermore, it shows that the dying ReLU is inevitable once the network is deep enough. Also, our result implies that it is the network architecture that decides whether an initialization procedure is good or bad. Our analysis reveals that a specific network architecture can avoid the dying ReLU NN with high probability.
That is, for any $\delta > 0$, when a symmetric initialization is used and $N = \Omega(\log_2 (L/\delta))$, where $L$ is the depth and $N$ is the width of each layer, the dying ReLU NN will not happen with probability at least $1-\delta$. Although there are other approaches to avoid the dying ReLU, we aim to overcome it without changing the network architecture or introducing additional training steps such as normalization. Changing the initialization procedure is perhaps one of the simplest remedies. We thus propose a new initialization procedure, namely, a randomized asymmetric initialization (RAI). The new initialization is designed to directly overcome the dying ReLU, while having generalization performance similar to that of the He initialization \citep{he2015delving}. We show that our initialization has a smaller upper bound on the probability of the dying ReLU NN. All parameters used in our initialization are theoretically chosen to avoid the exploding gradient problem. This is done through a second-moment analysis in which we derive the expected length-map relations between layers \citep{he2015delving, hanin2018whichNN, poole2016exponential}. The rest of the paper is organized as follows. After setting up notation and terminology in Section~\ref{sec:set-up}, we present the main theoretical results in Section~\ref{sec:theory}. In Section~\ref{sec:new-init}, upon introducing a randomized asymmetric initialization, we discuss its theoretical properties. Numerical examples are provided in Section~\ref{sec:example}, before the conclusion in Section~\ref{sec:conclusion}. \section{Mathematical Setup} \label{sec:set-up} Let $\mathcal{N}^L: \mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$ be a feed-forward neural network with $L$ layers and $N_\ell$ neurons in the $\ell$-th layer ($N_0 = d_{\text{in}}$, $N_L = d_{\text{out}}$). Let us denote the weight matrix and bias vector in the $\ell$-th layer by $\bm{W}^\ell \in \mathbb{R}^{N_\ell \times N_{\ell-1}}$ and $\bm{b}^\ell \in \mathbb{R}^{N_\ell}$, respectively. Given an activation function $\phi$ which is applied element-wise, the feed-forward neural network is recursively defined as follows: $\mathcal{N}^1(\textbf{x}) = \bm{W}^{1}\textbf{x} + \bm{b}^{1}$ and \begin{align} \mathcal{N}^{\ell}(\textbf{x}) &= \bm{W}^{\ell}\phi(\mathcal{N}^{\ell-1}(\textbf{x})) + \bm{b}^{\ell} \in \mathbb{R}^{N_\ell}, \qquad \text{for} \quad 2 \le \ell \le L. \end{align} Here $\mathcal{N}^L$ is called an $L$-layer neural network or an $(L-1)$-hidden-layer neural network. In this paper, the rectified linear unit (ReLU) activation function is employed, i.e., $$ \phi(\textbf{x}) = \text{ReLU}(\textbf{x}) :=\left(\max\{x_1, 0\}, \cdots, \max\{x_{\text{fan-in}}, 0\} \right), \quad \text{where} \quad \textbf{x} = (x_1,\cdots,x_{\text{fan-in}}). $$ Let $\bm{\theta}_L = \{\bm{W}^\ell, \bm{b}^\ell\}_{1 \le \ell \le L}$ be the set of all weight matrices and bias vectors. Let $\mathcal{T}_m = \{\textbf{x}_i, y_i\}_{1\le i \le m}$ be the set of $m$ training data and let $\mathcal{D} = \{\textbf{x}_i\}_{i=1}^m$ be the training input data. We assume that $\mathcal{D} \subset B_r(0)$ for some $r > 0$.
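For concreteness, the recursion above can be written in a few lines of NumPy. The sketch below is only illustrative: the function names \texttt{relu} and \texttt{forward}, and the particular zero-mean Gaussian initialization used in the example, are our own choices and are not part of the formal setup.
\begin{verbatim}
import numpy as np

def relu(z):
    # phi(z) = max(z, 0), applied element-wise
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    # Evaluate N^L(x): weights[l] is W^{l+1}, biases[l] is b^{l+1}.
    h = weights[0] @ x + biases[0]            # N^1(x) = W^1 x + b^1
    for W, b in zip(weights[1:], biases[1:]):
        h = W @ relu(h) + b                   # N^l(x) = W^l phi(N^{l-1}(x)) + b^l
    return h

# Example: d_in = d_out = 1, a 10-layer network of width 2,
# initialized here (for illustration only) with zero-mean Gaussians.
rng = np.random.default_rng(0)
widths = [1] + [2] * 9 + [1]                  # N_0, N_1, ..., N_L
weights = [rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))
           for n_in, n_out in zip(widths[:-1], widths[1:])]
biases = [np.zeros(n_out) for n_out in widths[1:]]
print(forward(np.array([0.5]), weights, biases))
\end{verbatim}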
Given $\mathcal{T}_m$, in order to train $\bm{\theta}_L$, we consider the standard loss function $\mathcal{L}(\bm{\theta}_L, \mathcal{T}_m)$: \begin{equation} \label{def:loss-fns} \mathcal{L}(\bm{\theta}_L, \mathcal{T}_m) = \frac{1}{m}\sum_{(\textbf{x},y) \in \mathcal{T}_m} \ell(\mathcal{N}^L(\textbf{x}; \bm{\theta}_L),y), \end{equation} where $\ell:\mathbb{R}^{d_{\text{out}}} \times \mathbb{R}^{d_{\text{out}}} \to \mathbb{R}$ is a loss criterion. In training neural networks, gradient-based optimization is typically employed to minimize the loss $\mathcal{L}$. The first step of training is to initialize the weight matrices and bias vectors. Typically, they are initialized according to certain probability distributions. For example, uniform distributions centered at 0 and zero-mean normal distributions are common choices. The dying ReLU refers to the problem in which some ReLU neurons become inactive. In this paper, we focus on the worst case of the dying ReLU, where the entire network dies, i.e., the network becomes a constant function. We refer to this as the dying ReLU neural network. We then distinguish two phases: (1) a network is dead before training, and (2) a network is dead after training. Phase 1 implies phase 2, but not vice versa. When phase 1 happens, we say \textit{the network is born dead} (BD). \section{Theoretical analysis} \label{sec:theory} In this section, we present a theoretical analysis of the dying ReLU neural networks. We show that a deep ReLU network will eventually be BD with probability approaching one as the depth $L$ goes to infinity. \begin{theorem} \label{thm:dying-prob} Let $\mathcal{N}^L(\textbf{x})$ be a ReLU neural network with $L$ layers, each having $N$ neurons. Suppose that all weights and biases are randomly initialized from probability distributions that satisfy \begin{equation} \label{thm1:condition} P\left( \bm{W}^\ell_j \in \mathbb{R}^{N}_{-}, \bm{b}^\ell_j < 0 \right) \ge p > 0, \qquad \forall 1 \le j \le N, \end{equation} for some constant $p > 0$, where $\bm{W}^\ell_j$ is the $j$-th row of the $\ell$-th layer weight matrix and $\bm{b}^\ell_j$ is the $j$-th component of the $\ell$-th layer bias vector. Then \begin{equation*} \begin{split} \lim_{L\to \infty} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) = 1, \end{split} \end{equation*} where $\mathcal{D}$ is the training input data. \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:dying-prob}. \end{proof} We remark that Equation~\ref{thm1:condition} is a very mild condition and is satisfied in many cases. For example, when symmetric probability distributions around 0 are employed, the condition is met with $p = 2^{-N-1}$. Theorem~\ref{thm:dying-prob} implies that the fully connected ReLU network will be dead at initialization as long as the network is deep enough. This explains theoretically why training a very deep network is hard. Theorem~\ref{thm:dying-prob} shows that the ReLU network will asymptotically be dead. We are thus concerned with the convergence behavior of the probability of NNs being BD. Since almost all common initialization procedures use symmetric probability distributions around 0, we derive an upper bound on the born dead probability (BDP) for symmetric initializations. \begin{theorem} \label{thm:main} Let $\mathcal{N}^L(\textbf{x})$ be a ReLU neural network with $L$ layers having $N_1, \cdots, N_L$ neurons, respectively.
Suppose that all weights are independently initialized from symmetric probability distributions around 0 and all biases are either drawn from a symmetric distribution or set to zero. Then \begin{equation} \label{eqn:sym-dying-prob-upp} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) \le 1 - \prod_{\ell=1}^{L-1}(1-(1/2)^{N_\ell}), \end{equation} where $\mathcal{D}$ is the training input data. Furthermore, assuming $N_\ell = N$ for all $\ell$, \begin{equation*} \begin{split} \lim_{L\to \infty} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) = 1, \qquad \lim_{N\to \infty} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) = 0. \end{split} \end{equation*} \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:main}. \end{proof} Theorem~\ref{thm:main} provides an upper bound on the BDP. It shows that at a fixed depth $L$, the probability of the network being BD goes to zero as the width $N$ goes to infinity. In order to understand how this probability behaves with respect to the width and the depth, a lower bound is needed. We thus provide a lower bound on the BDP of ReLU NNs for $d_{\text{in}} = 1$. \begin{theorem} \label{thm:nnwidthN} Let $\mathcal{N}^L(\textbf{x})$ be a bias-free ReLU neural network with $L \ge 2$ layers, each having $N$ neurons, with $d_{\text{in}} = 1$. Suppose that all weights are independently initialized from continuous symmetric probability distributions around 0, which satisfy \begin{align*} P(\langle \bm{W}_j^\ell, \bm{v}_{1}\rangle > 0, \langle \bm{W}_j^\ell, \bm{v}_{2}\rangle < 0 | \bm{v}_{1},\bm{v}_2) \le \frac{1}{4}, \quad \bm{0} \ne \bm{v}_1, \bm{v}_2 \in \mathbb{R}_{+}^N, \quad \forall 1 \le j \le N, \end{align*} where $\bm{W}_j^\ell$ is the $j$-th row of the $\ell$-th layer weight matrix. Then \begin{equation*} p_{\text{low}}(L,N) \le P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) \le 1 - \prod_{\ell=1}^{L-1}(1-(1/2)^{N}), \end{equation*} where $a_1 = 1 - (1/2)^N$, $a_2 = 1-(1/2)^{N-1}-(N-1)(1/4)^N$, and \begin{equation*} p_{\text{low}}(L,N) = 1 - a_1^{L-2} + \frac{(1-2^{-N+1})(1-2^{-N})}{1+(N-1)2^{-N}} (-a_1^{L-2} + a_2^{L-2}). \end{equation*} \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:nnwidthN}. \end{proof} Theorem~\ref{thm:nnwidthN} reveals that the BDP behavior depends on the network architecture. In Fig.~\ref{fig:p_zeroinit_din1}, we plot the BDP as a function of the number of layers for widths ranging from $N=2$ to $N=5$. A bias-free feed-forward ReLU NN with $d_{\text{in}}=1$ is employed, with weights randomly initialized from symmetric distributions. Each probability estimate is computed from one million independent simulations. The numerical estimates are shown as symbols. The upper and lower bounds from Theorem~\ref{thm:nnwidthN} are also plotted with dashed and dash-dotted lines, respectively. We see that the narrower the NN, the faster the probability of being BD grows as the depth increases. Also, at a fixed width $N$, the BDP grows as the number of layers increases, as expected from Theorems~\ref{thm:dying-prob} and \ref{thm:nnwidthN}. \begin{SCfigure}[1][htbp] \centering \includegraphics{p_zeroinit_din1.pdf} \caption{Probability of a ReLU NN being born dead as a function of the number of layers for different widths.
The dashed and dash-dotted lines represent the upper and lower bounds from Theorem~\ref{thm:nnwidthN}. The symbols represent our numerical estimates. Similar colors correspond to the same width. }\label{fig:p_zeroinit_din1} \end{SCfigure} Once the network is BD, there is no hope of training it successfully. Here we provide a formal statement of the consequence of the network being BD. \begin{theorem}\label{thm:nn2mean} Suppose that the feed-forward ReLU neural network is BD. Then, for any loss function $\mathcal{L}$ and for any gradient-based method, the ReLU network is optimized to be a constant function, which minimizes the loss. \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:nn2mean}. \end{proof} Theorem~\ref{thm:nn2mean} implies that no matter which gradient-based optimizer is employed, including stochastic gradient descent (SGD), SGD-Nesterov~\citep{sutskever2013importance}, AdaGrad~\citep{duchi2011adaptive}, AdaDelta~\citep{zeiler2012adadelta}, RMSProp~\citep{hintonlecture6a}, Adam~\citep{kingma2014adam}, BFGS~\citep{nocedal2006nonlinear}, and L-BFGS~\citep{byrd1995limited}, the network is trained to be a constant function that minimizes the loss. If online learning or the stochastic gradient method is employed, where the training data are independently drawn from a probability distribution $P_{\mathcal{D}}$, the optimized network is $$ \mathcal{N}^L(\textbf{x};\bm{\theta}^*) = \bm{c}^* = \argmin_{\bm{c} \in \mathbb{R}^{N_L}} \mathbb{E} \left[ \ell(\bm{c}, f(\textbf{x})) \right], $$ where the expectation $\mathbb{E}$ is taken with respect to $\textbf{x} \sim P_{\mathcal{D}}$. For example, if the $L^2$-loss is employed, i.e., $\ell(\mathcal{N}^L(\textbf{x}),f(\textbf{x})) = (\mathcal{N}^L(\textbf{x})-f(\textbf{x}))^2$, the resulting network is $\mathbb{E}[f(\textbf{x})]$. If the $L^1$-loss is employed, i.e., $\ell(\mathcal{N}^L(\textbf{x}),f(\textbf{x})) = |\mathcal{N}^L(\textbf{x})-f(\textbf{x})|$, the resulting network is the median of $f(\textbf{x})$ with respect to $\textbf{x} \sim P_\mathcal{D}$. Note that the mean absolute error (MAE) and the mean squared error (MSE) used in practice are discrete versions of the $L^1$ and $L^2$ losses, respectively, when the minibatch size is large. When we design a neural network, we want the BDP to be small, say, less than 1\% or 10\%. The upper bound (Equation~\ref{eqn:sym-dying-prob-upp}) of Theorem~\ref{thm:main} can then be used to design a network architecture that has a small probability of being born dead. \begin{corollary}\label{cor:p} Suppose $N_\ell = N$ for all $\ell$. For fixed depth $L$ and $\delta > 0$, if the width satisfies $N = \log_2 (L/\delta)$, then with probability exceeding $1-\delta$, the ReLU neural network will not be initialized to be dead in $\mathcal{D}$. \end{corollary} \begin{proof} This readily follows from \begin{align*} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) \le 1 - (1-2^{-N})^{L-1} \le 1 - (1-(L-1)2^{-N}) \le L2^{-N} = \delta. \end{align*} \end{proof} As a practical guide, we constructed the diagram shown in Fig.~\ref{fig:max_layer}, which includes both theoretical predictions and our numerical tests. We see that as the number of layers increases, the numerical tests match the theoretical results more closely. It is clear from the diagram that a 10-layer NN of width 10 has a probability of dying of less than 1\%, whereas a 10-layer NN of width 5 has a probability of dying greater than 10\%; for a width of three, the probability is about 60\%.
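These empirical estimates can be reproduced, up to Monte Carlo error, with a short simulation. The NumPy sketch below is illustrative only: the function names are ours, the weights are drawn from a standard normal distribution (one admissible symmetric, bias-free choice), and the born-dead check relies on the characterization that, almost surely, the network is born dead exactly when some hidden layer outputs the zero vector for every training input (established in Lemma~\ref{app:lemma:nndying} in the appendix).
\begin{verbatim}
import numpy as np

def is_born_dead(widths, X, rng):
    # widths = [d_in, N_1, ..., N_{L-1}, d_out]; X has shape (d_in, m).
    # With a continuous symmetric, bias-free initialization, the network is
    # (almost surely) born dead iff some hidden layer outputs 0 on all of X.
    H = X
    for n_in, n_out in zip(widths[:-2], widths[1:-1]):
        W = rng.normal(0.0, 1.0, size=(n_out, n_in))   # symmetric weights
        H = np.maximum(W @ H, 0.0)
        if not H.any():
            return True
    return False

def bdp_estimate(depth, width, trials=10000, m=100, seed=0):
    rng = np.random.default_rng(seed)
    widths = [1] + [width] * (depth - 1) + [1]          # d_in = d_out = 1
    X = rng.uniform(-1.0, 1.0, size=(1, m))
    return np.mean([is_born_dead(widths, X, rng) for _ in range(trials)])

def min_width(L, delta):
    # Width suggested by Corollary cor:p to keep the BDP below delta.
    return int(np.ceil(np.log2(L / delta)))

print(bdp_estimate(depth=10, width=3))   # compare with the ~60% quoted above
print(min_width(10, 0.01))               # width 10 for a 10-layer NN and 1% BDP
\end{verbatim}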
Note that the maximum number of layers grows exponentially with the width, as expected from Corollary~\ref{cor:p}. \begin{SCfigure}[2][htbp] \centering \includegraphics{max_layer.pdf} \caption{Diagram indicating safe operating regions for a ReLU NN. The dashed lines represent Corollary~\ref{cor:p}, while the symbols represent our numerical tests. Shown is the maximum number of layers that can be used at different widths to keep the probability of collapse below 1\% or 10\%. The region below the blue line is the safe region when designing a neural network. As the width increases, the theoretical predictions match our numerical simulations more closely.}\label{fig:max_layer} \end{SCfigure} \section{Randomized Asymmetric Initialization} \label{sec:new-init} The so-called `He initialization' \citep{he2015delving} is perhaps one of the most popular initialization schemes in the deep learning community, especially when the ReLU activation function is used. The effectiveness of the He initialization has been shown in many machine learning applications. The He initialization uses zero-mean normal distributions. Thus, as discussed earlier, it suffers from the dying ReLU. We therefore propose a new initialization procedure, namely, a randomized asymmetric initialization. The motivation is twofold. One goal is to mimic the He initialization so that the new scheme produces similar generalization performance. The other is to alleviate the problem of dying ReLU neural networks. For ease of discussion, we introduce some notation. For any vector $\bm{v} \in \mathbb{R}^{n+1}$ and $k \in \{1,\cdots, n+1\}$, we define \begin{equation} \label{def-v_-k} \bm{v}_{-k} = \left(v_1,\cdots, v_{k-1}, v_{k+1}, \cdots, v_{n+1}\right)^T \in \mathbb{R}^n. \end{equation} In order to train an $L$-layer neural network, we need to initialize $\bm{\theta}_L = \{\bm{W}^\ell, \bm{b}^\ell\}_{1\le \ell \le L}$. At each layer, let $\textbf{V}^{\ell} = [\bm{W}^{\ell}, \bm{b}^{\ell}] \in \mathbb{R}^{N_\ell \times (N_{\ell-1}+1)}$. We denote the $j$-th row of $\textbf{V}^{\ell}$ by $\textbf{V}^{\ell}_j = [\bm{W}^{\ell}_{j}, \bm{b}^{\ell}_j] \in \mathbb{R}^{N_{\ell-1}+1}$, $j=1,\cdots,N_\ell$, where $N_0 = d_{\text{in}}$ and $N_L = d_{\text{out}}$. \subsection{Proposed initialization} \label{subsec:asyminit} We propose to initialize $\textbf{V}^{\ell}$ as follows. Let $\text{P}_\ell$ be a probability distribution defined on $[0,M_\ell]$ for some $M_\ell > 0$ or on $[0,\infty)$. Note that $\text{P}_\ell$ is asymmetric around 0. At the first layer, $\ell=1$, we employ the He initialization \citep{he2015delving}, i.e., $\textbf{W}^1_{ij} \sim N(0,2/d_{\text{in}})$ and $\textbf{b}^1 = \textbf{0}$. For $\ell \ge 2$ and each $1\le j \le N_{\ell}$, we initialize $\bm{V}_j^\ell$ as follows: \begin{enumerate} \item Randomly choose $k^\ell_j$ in $\{1,2,\cdots, N_{\ell-1}, N_{\ell-1}+1\}$. \item Initialize $(\textbf{V}^{\ell}_{j})_{-k^\ell_j} \sim \mathcal{N}(0,\sigma_\ell^2\bm{I})$ and $(\textbf{V}^{\ell}_{j})_{k^\ell_j} \sim \text{P}_\ell$. \end{enumerate} Since an index is randomly chosen for each $\ell$ and $j$, and a positive number is randomly drawn from a probability distribution that is asymmetric around 0, we name this new initialization a randomized asymmetric initialization. The He initialization is employed only for the first layer. This is because an input component could be negative; if the weight corresponding to such a component were initialized from $\text{P}_\ell$, this could itself cause the dying ReLU.
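A minimal NumPy sketch of the two steps above is given below. The helper names (\texttt{rai\_layer}, \texttt{rai\_init}) are ours, the choice $\text{P}_\ell = \text{Beta}(2,1)$ anticipates the distribution used later in the paper, and the value of $\sigma_\ell$ is left as a placeholder, since it is determined by the second-moment analysis of Subsection~\ref{subsec:length-map}.
\begin{verbatim}
import numpy as np

def rai_layer(n_in, n_out, sigma, rng):
    # Randomized asymmetric initialization of V^l = [W^l, b^l] for l >= 2.
    # For each row j, one entry (index k chosen uniformly at random) is drawn
    # from an asymmetric distribution P_l on [0, M_l] (here Beta(2,1)); the
    # remaining n_in entries are drawn from N(0, sigma^2).
    V = rng.normal(0.0, sigma, size=(n_out, n_in + 1))
    k = rng.integers(0, n_in + 1, size=n_out)        # k_j^l in {1,...,n_in+1}
    V[np.arange(n_out), k] = rng.beta(2.0, 1.0, size=n_out)
    return V[:, :n_in], V[:, n_in]                   # W^l, b^l

def rai_init(widths, sigma, rng):
    # First layer: He initialization, W^1 ~ N(0, 2/d_in), b^1 = 0.
    params = [(rng.normal(0.0, np.sqrt(2.0 / widths[0]),
                          size=(widths[1], widths[0])),
               np.zeros(widths[1]))]
    # Remaining layers: randomized asymmetric initialization.
    for n_in, n_out in zip(widths[1:-1], widths[2:]):
        params.append(rai_layer(n_in, n_out, sigma, rng))
    return params

# Usage: a 10-layer network of width 2 with d_in = d_out = 1; sigma is a
# placeholder here and is fixed by the second-moment analysis below.
rng = np.random.default_rng(0)
params = rai_init([1] + [2] * 9 + [1], sigma=0.6, rng=rng)
\end{verbatim}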
We note that the new initialization requires us to choose $\sigma_\ell^2$ and $\text{P}_\ell$. In Subsection~\ref{subsec:length-map}, these will be determined theoretically. One could choose multiple indices in Step 1 of the new initialization. However, for simplicity, we restrict ourselves to the single-index case. We first show that this new initialization procedure results in a smaller upper bound on the BDP. \begin{theorem} \label{thm:asym-prob} If a feed-forward ReLU neural network $\mathcal{N}^L$ with $L$ layers of widths $N_1, \cdots, N_L$ is initialized by the randomized asymmetric initialization, then \begin{equation*} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) \le 1 - \prod_{\ell=1}^{L-1} \left(1 - \left(1/2 - \gamma_{\ell}\right)^{N_{\ell}}\right), \end{equation*} where $\gamma_1 = 0$ and the $\gamma_j$'s are constants in $(0,0.5]$ that depend on $\{N_\ell\}_{\ell=1}^{L-1}$ and the training input data $\mathcal{D}$. \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:asym-prob}. \end{proof} When a symmetric initialization is employed, $\gamma_j = 0$ for all $1\le j \le L-1$, which recovers Equation~\ref{eqn:sym-dying-prob-upp} of Theorem~\ref{thm:main}. Although the new initialization has a smaller upper bound than symmetric initializations, as Theorem~\ref{thm:dying-prob} suggests, it also suffers from the dying ReLU asymptotically. \begin{corollary} Assume the same conditions as in Theorem~\ref{thm:asym-prob} and that $N_\ell = N$ for all $\ell$. Then there exists $\gamma > 2$, which depends on $N$, $L$ and the training input data $\mathcal{D}$, such that \begin{equation*} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) \le 1 - \prod_{\ell=1}^{L-1} \left(1 - (1/\gamma)^{N}\right). \end{equation*} For fixed depth $L$ and $\delta > 0$, if the width satisfies $N = \log_\gamma (L/\delta)$, then with probability exceeding $1-\delta$, the ReLU neural network will not be initialized to be dead. Furthermore, \begin{equation*} \begin{split} \lim_{L\to \infty} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) = 1, \qquad \lim_{N\to \infty} P\left( \mathcal{N}^L(\textbf{x}; \bm{\theta}_L) \text{ is born dead in } \mathcal{D} \right) = 0. \end{split} \end{equation*} \end{corollary} \begin{proof} The proof follows readily from Theorems~\ref{thm:dying-prob} and \ref{thm:asym-prob} and Corollary~\ref{cor:p}. \end{proof} \subsection{Second moment analysis} \label{subsec:length-map} The proposed randomized asymmetric initialization described in Subsection~\ref{subsec:asyminit} requires us to determine $\sigma_\ell^2$ and $\text{P}_\ell$. Similar to the He initialization \citep{he2015delving}, we aim to properly choose the initialization parameters through a length-map analysis. Following the work of~\citet{poole2016exponential}, we present an analysis of a single input propagating through the deep ReLU network. To be more precise, we track the expectation of the normalized squared length of the input vector at each layer, $ \mathbb{E}[q^\ell(\textbf{x})]$, where $q^\ell(\textbf{x}) = \frac{\|\mathcal{N}^\ell(\textbf{x})\|^2}{N_\ell}$. The expectation $\mathbb{E}$ is taken with respect to all weights and biases. \begin{theorem} \label{thm:2ndmo-asyminit} Let $\text{P}_\ell$ be a probability distribution whose support is $[0, M_\ell] \subset \mathbb{R}^+$.
Let $X_\ell \sim \text{P}_\ell$ have finite first and second moments, i.e., $\mu_{\ell,i}'=E[X_\ell^i] < \infty$ for $i=1,2$, and $\mu_{\ell,1}' \ge M_\ell/2$. Suppose the $\ell$-th layer weights and biases are initialized by the randomized asymmetric initialization described in Subsection~\ref{subsec:asyminit}. Then for any input $\textbf{x} \in \mathbb{R}^{d_{\text{in}}}$, we have \begin{align*} \frac{\mathcal{A}_{low, \ell}}{2}\mathbb{E}[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2 \le \mathbb{E}[q^{\ell+1}(\textbf{x})] \le \frac{\mathcal{A}_{upp, \ell}}{2} \mathbb{E}[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2, \end{align*} where $\sigma_{b,\ell}^2 = \frac{\mu_{\ell+1,2}'+ \sigma_{\ell+1}^2N_{\ell+1}}{N_{\ell+1}+1}$, $\mathcal{A}_{low, \ell} = \frac{\sigma_{b,\ell+1}^2}{\sigma_{b,\ell}^2} \left(\frac{N_\ell\mu_{\ell,2}'+N_{\ell-1}\sigma_w^2}{N_{\ell-1}+1}\right)$, and $$\mathcal{A}_{upp, \ell} = \frac{\sigma_{b,\ell+1}^2}{\sigma_{b,\ell}^2}\left( \frac{N_{\ell-1}\sigma_w^2 +2\sqrt{2/\pi}N_{\ell}\mu_{\ell,1}'\sigma_w+ 2N_{\ell}\mu_{\ell,2}'}{N_{\ell-1}+1}\right).$$ \end{theorem} \begin{proof} The proof can be found in Appendix~\ref{app:thm:2ndmo-asyminit}. \end{proof} \begin{corollary} \label{cor:qell} Under the same conditions as in Theorem~\ref{thm:2ndmo-asyminit}, if $N_\ell = N$, $M_\ell = M$, $E[X_{\ell}^i] = \mu_i'$, $i=1,2$, for all $\ell$, and $\mu_1' \ge M/2$, we have \begin{align*} \frac{\mathcal{A}_{low, \ell}}{2}E[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2 \le E[q^{\ell+1}(\textbf{x})] \le \frac{\mathcal{A}_{upp, \ell}}{2} E[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2, \end{align*} where $ \sigma_{b,\ell}^2 = \frac{\mu_{2}' +\sigma_{w}^2}{N+1}$, $\mathcal{A}_{low, \ell} = \frac{N(\mu_{2}'+\sigma_w^2)}{N+1}$, and $\mathcal{A}_{upp, \ell} = \frac{N(\sigma_w^2 + 2\sqrt{2/\pi}\mu_1'\sigma_w + 2\mu_2')}{N+1}$. \end{corollary} Since $\sigma_{b,\ell}^2 > 0$, $\lim_{\ell \to \infty} E[q^{\ell}(\textbf{x})]$ cannot be zero. In order for $\lim_{\ell \to \infty} E[q^{\ell}(\textbf{x})]$ to be finite, the initialization parameters $(\sigma_w^2, \mu_{\ell,1}', \mu_{\ell,2}')$ have to be chosen so that $\mathcal{A}_{upp, \ell} < 2$. Assuming $\mu_2' < 1$, if $\sigma_w$ is chosen to be \begin{equation} \label{def:sigmW} \sigma_w = \sqrt{2}\left(-\frac{\mu_1'}{\sqrt{\pi}} + \sqrt{\frac{\mu_1'^2}{\pi}+1-\mu_2'}\right), \end{equation} we have $\frac{\mathcal{A}_{upp, \ell}}{2}= \frac{N}{N+1}$, which satisfies the condition. \begin{figure}[htbp] \centering \includegraphics[height=5cm, width=7cm]{DyingProbN2_all5.pdf} \includegraphics[height=5cm, width=7cm]{DyingProbN3_all5.pdf} \includegraphics[height=5cm, width=7cm]{DyingProbN4_all5.pdf} \includegraphics[height=5cm, width=7cm]{DyingProbN5_all5.pdf} \caption{The BDPs are plotted as functions of the depth $L$ for widths $N=2$ (top left), $N=3$ (top right), $N=4$ (bottom left) and $N=5$ (bottom right). ReLU neural networks with $d_{\text{in}}=1$ are employed. The square, diamond and circle symbols correspond to the He initialization \citep{he2015delving} with constant bias 0, 10 and 100, respectively. The inverted triangle symbols correspond to the orthogonal initialization \citep{saxe2013exact}. The asterisk symbols correspond to the proposed randomized asymmetric initialization (RAI). }\label{fig:orthogonal} \end{figure} \subsection{Comparison against other initialization procedures} In Fig.~\ref{fig:orthogonal}, we compare the probability that the network is BD under the proposed randomized asymmetric initialization (RAI) and under several existing procedures.
Here we employ $\text{P}=\text{Beta}(2,1)$ and $\sigma_w=-\frac{2\sqrt{2}}{3\sqrt{\pi}} + \sqrt{1+\frac{8}{9\pi}} \approx 0.6007$ from Equation~\ref{def:sigmW}. To compare against other procedures, we present the results of the He initialization \citep{he2015delving}. We also present the results of existing asymmetric initialization procedures: the orthogonal~\citep{saxe2013exact} and the layer-sequential unit-variance (LSUV)~\citep{mishkin2015all} initializations. The LSUV is the orthogonal initialization combined with a rescaling of the weights such that the output of each layer has unit variance. Because weight rescaling cannot make the output escape from the negative part of the ReLU, it is sufficient to consider the orthogonal initialization. We see that the BDPs of the orthogonal initialization are very close to, and a little lower than, those of the He initialization. This implies that the orthogonal initialization cannot prevent the dying ReLU network either. Furthermore, we show the results of the He initialization with a positive constant bias of 10 and of 100. Naively speaking, a large positive bias helps prevent dying ReLU neurons, as the input of each layer is pushed to be positive, although it might cause the exploding gradient problem in training. We see that the BDPs of the He initialization with bias 10 and 100 are lower than those of the He initialization with bias 0 and of the orthogonal initialization. However, it is clearly observed that our proposed initialization (RAI) drastically reduces the BDPs compared to all others, as implied by Theorem~\ref{thm:asym-prob}. \section{Numerical examples} \label{sec:example} We demonstrate the effectiveness of the proposed randomized asymmetric initialization (RAI) in training deep ReLU networks. The test functions include one- and two-dimensional functions of different regularities. The following test functions are employed as unknown target functions. For the one-dimensional cases, \begin{equation} \label{test-func} \begin{split} f_1(x) = |x|, \qquad f_2(x) = x\sin(5x), \qquad f_3(x) = 1_{\{x>0\}}(x) + 0.2\sin(5x). \end{split} \end{equation} For the two-dimensional case, \begin{equation} \label{test-func-2d} f_4(x_1,x_2) = \begin{bmatrix} |{x}_1 + {x}_2 | \\ |{x}_1 - {x}_2 | \end{bmatrix}. \end{equation} We employ a network architecture with a width of $d_{\text{in}} + d_{\text{out}}$ at all layers. Here $d_{\text{in}}$ and $d_{\text{out}}$ are the dimensions of the input and output, respectively. It was shown in \citep{hanin2017approximating} that the minimum width required for universal approximation is less than or equal to $d_{\text{in}} + d_{\text{out}}$. We thus choose this specific network architecture, as it theoretically guarantees the ability to approximate any continuous function. In all numerical examples, we employ one of the most popular first-order gradient-based optimizers, \texttt{Adam} \citep{kingma2014adam}, with its default parameters. The minibatch size is chosen to be either 64 or 128. The standard $L^2$-loss $\ell(\mathcal{N}^L(\textbf{x};\bm{\theta}),y) = (\mathcal{N}^L(\textbf{x};\bm{\theta}) - y)^2$ is used on 3,000 training data points. The training inputs are drawn uniformly at random from $[-\sqrt{3},\sqrt{3}]^{d_{\text{in}}}$. Without changing any of the setups described above, we present the approximation results for different initialization procedures. The results of our proposed randomized asymmetric initialization are referred to as `Rand. Asymmetric' or `RAI'.
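As a quick sanity check on the constant $\sigma_w \approx 0.6007$ used in this section, Equation~\ref{def:sigmW} can be evaluated directly for $\text{P}=\text{Beta}(2,1)$, whose first two moments are $\mu_1' = 2/3$ and $\mu_2' = 1/2$. The snippet below is only a verification and is not part of the training pipeline.
\begin{verbatim}
import numpy as np

# Moments of Beta(2,1): mu1' = 2/3, mu2' = 1/2.
mu1, mu2 = 2.0 / 3.0, 1.0 / 2.0

# Equation (def:sigmW): sigma_w = sqrt(2)(-mu1/sqrt(pi) + sqrt(mu1^2/pi + 1 - mu2)).
sigma_w = np.sqrt(2.0) * (-mu1 / np.sqrt(np.pi)
                          + np.sqrt(mu1**2 / np.pi + 1.0 - mu2))
print(sigma_w)   # ~0.6007, matching the value quoted above
\end{verbatim}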
Specifically, we use $\text{P} = \text{Beta}(2,1)$ with $\sigma_w$ defined in Equation~\ref{def:sigmW}. To compare against other methods, we also show the results of the He initialization \citep{he2015delving}. We present ensembles of 1,000 independent training simulations. In the one-dimensional examples, we employ a 10-layer ReLU network of width 2. It follows from Fig.~\ref{fig:orthogonal} that we expect at least 88\% of the training results by the symmetric initialization and 22\% of those by the RAI to be collapsed. In the two-dimensional example, we employ a 20-layer ReLU network of width 4. According to Fig.~\ref{fig:orthogonal}, we expect at least 63\% of the training results by the symmetric initialization and 3.7\% of those by the RAI to be collapsed. Fig.~\ref{fig:abs} shows all training outcomes of our 1,000 simulations for approximating $f_1(x)$ and the corresponding empirical probabilities for the different initialization schemes. For this specific test function, we observe only the three trained results shown in \text{A, B, C}. In fact, $f_1(x)$ can be represented exactly by a 2-layer ReLU network of width 2, $f_1(x) = |x| = \text{ReLU}(x)+\text{ReLU}(-x)$. It can clearly be seen that the He initialization results in a collapse with probability of more than 90\%. However, this probability is drastically reduced to 40\% by the RAI. These probabilities differ from the probability that the network is BD, which implies that there are cases where the network dies after training even though it was not BD. In this example, 5.6\% and 18.3\% of the results by the symmetric initialization and our method, respectively, are not dead at initialization but end up collapsing after training. Moreover, 37.3\% of the training results by the RAI perfectly recover the target function $f_1(x)$, whereas only 2.2\% of the results by the He initialization achieve this. Also, 22.4\% of the RAI results and 4.2\% of the He initialization results are half-trained, corresponding to Fig.~\ref{fig:abs} (B). We remark that the only difference in training is the initialization. This implies that our new initialization not only prevents the dying ReLU network but also improves the quality of training in this case. \begin{figure}[htbp] \centering \includegraphics{fnn2mean_abs1d.pdf} \begin{tabular}{|>{\centering}m{4.0cm}|>{\centering}m{2.55cm}|>{\centering}m{2.6cm}|>{\centering}m{2.55cm}|@{}m{0pt}@{}} \hline $ \small \bm{f_1(x)}$ & \small \textbf{Collapse (A)} & \small \textbf{Half-Trained (B)} & \small \textbf{Success (C)} &\\[10pt] \hline \small Symmetric (He init.) & 93.6\% & 4.2\% & 2.2\% &\\ \small Rand. Asymmetric & \textbf{40.3\%} & \textbf{22.4\%} & \textbf{37.3\%} &\\ \hline \end{tabular} \caption{The approximation results for $f_1(x)$ using a 10-layer ReLU network of width 2. For this specific test function, we observe only the three trained results shown in \text{A, B, C}. The table shows the corresponding empirical probabilities from 1,000 independent simulations. The only difference is the initialization. } \label{fig:abs} \end{figure} The approximation results for $f_2(x)$ are shown in Fig.~\ref{fig:xsin5x}. Note that $f_2$ is a $C^{\infty}$ function. It can be seen that 91.9\% of the training results by the symmetric initialization and 29.2\% of those by the RAI are collapsed, corresponding to Fig.~\ref{fig:xsin5x} (A). This indicates that the RAI can effectively alleviate the dying ReLU.
In this example, 3.9\% and 7.2\% of the results by the symmetric initialization and our method, respectively, are not dead at initialization but end up collapsing after training. Apart from the collapse, the other training outcomes are not easy to classify. Fig.~\ref{fig:xsin5x} (B,C,D) shows three training results among many others. We observe that the behavior and result of training are not easily predictable in general. However, we consistently observe partially collapsed results after training; such partial collapses are also visible in Fig.~\ref{fig:xsin5x} (B,C,D). We believe that this deserves more attention and postpone the study of this partial collapse to future work. \begin{figure}[htbp] \centering \includegraphics{fnn2mean_xsin5x.pdf} \begin{tabular}{|>{\centering}m{4.0cm}|>{\centering}m{4.05cm}|>{\centering}m{4.05cm}|@{}m{0pt}@{}} \hline $ \small \bm{f_2(x)}$ & \small \textbf{Collapsed (A)} & \small \textbf{Not collapsed (B,C,D)} &\\ \hline \small Symmetric (He init.) & 91.9\% & 8.1\% &\\ \small Rand. Asymmetric & \textbf{29.2\%} & \textbf{70.8\%} &\\ \hline \end{tabular} \caption{ The approximation results for $f_2(x)$ using a 10-layer ReLU network of width 2. Among many trained results, four are shown. The table shows the corresponding empirical probabilities from 1,000 independent simulations. The only difference is the initialization. } \label{fig:xsin5x} \end{figure} \begin{figure}[htbp] \centering \includegraphics{fnn2mean_stepsin.pdf} \begin{tabular}{|>{\centering}m{4.0cm}|>{\centering}m{4.05cm}|>{\centering}m{4.05cm}|@{}m{0pt}@{}} \hline $\small \bm{f_3(x)}$ & \small \textbf{Collapsed (A)} & \small \textbf{Not collapsed (B,C,D)} &\\ \hline \small Symmetric (He init.) & 93.8\% & 6.2\% &\\ \small Rand. Asymmetric & \textbf{32.6\%} & \textbf{67.4\%} &\\ \hline \end{tabular} \caption{ The approximation results for $f_3(x)$ using a 10-layer ReLU network of width 2. Among many trained results, four are shown. The table shows the corresponding empirical probabilities from 1,000 independent simulations. The only difference is the initialization. } \label{fig:stepsin} \end{figure} Similar behavior is observed when approximating the discontinuous function $f_3(x)$. The approximation results for $f_3(x)$ and the corresponding empirical probabilities are shown in Fig.~\ref{fig:stepsin}. We see that 93.8\% of the training results by the He initialization and 32.6\% of those by the RAI are collapsed, corresponding to Fig.~\ref{fig:stepsin} (A). In this example, the RAI reduces the probability of collapse by 61.2 percentage points. Again, this implies that the RAI can effectively avoid the dying ReLU, especially when deep and narrow ReLU networks are employed. Fig.~\ref{fig:stepsin} (B,C,D) shows three trained results among many others. Again, we observe partially collapsed results. Next, we show the approximation results for the function $f_4(\textbf{x})$ with multi-dimensional input and output defined in Equation~\ref{test-func-2d}. We observe similar behavior. Fig.~\ref{fig:abs2d} shows some of the approximation results for $f_4$ and the corresponding probabilities. Among 1,000 independent simulations, the collapsed results are obtained by the He initialization with 76.8\% probability and by the RAI with 9.6\% probability. From Fig.~\ref{fig:orthogonal}, we expect at least 63\% and 3.7\% of the results by the symmetric initialization and the RAI, respectively, to be collapsed.
Thus, in this example, 13.8\% and 5.9\% of the results by the symmetric initialization and our method, respectively, are not dead at initialization but end up as in Fig.~\ref{fig:abs2d} (A) after training. This indicates that the RAI can also effectively overcome the dying ReLU in tasks with multi-dimensional inputs and outputs. \begin{figure}[htbp] \centering \includegraphics{fnn2mean_abs2d.pdf} \begin{tabular}{|>{\centering}m{4.0cm}|>{\centering}m{4.05cm}|>{\centering}m{4.05cm}|@{}m{0pt}@{}} \hline $\small \bm{f_4(x)}$ &\small \textbf{Collapsed (A)} &\small \textbf{Not collapsed (B)} &\\ \hline \small Symmetric (He init.) & 76.8\% & 23.2\% &\\ \small Rand. Asymmetric & \textbf{9.6\%} & \textbf{90.4\%} &\\ \hline \end{tabular} \caption{ The approximation results for $f_4(\textbf{x})$ using a 20-layer ReLU network of width 4. Among many trained results, two are shown. The table shows the corresponding empirical probabilities from 1,000 independent simulations. The only difference is the initialization. } \label{fig:abs2d} \end{figure} As a last example, we demonstrate the performance of the RAI on the MNIST dataset. For training, we employ the cross-entropy loss and a minibatch size of 100. The networks are trained using \texttt{Adam} \citep{kingma2014adam} with its default values. In Fig.~\ref{fig:mnist}, the convergence of the test accuracy is shown with respect to the number of epochs. On the left and right, we employ ReLU networks of depth 2 and width 1024 and of depth 50 and width 10, respectively. We see that when the shallow and wide network is employed, the RAI and the He initialization show similar generalization performance (test accuracy). However, when the deep and narrow network is employed, the RAI performs better than the He initialization. This indicates that the proposed RAI not only reduces the BDP of deep networks, but also has good generalization performance. \begin{figure}[htbp] \centering \includegraphics{mnist.pdf} \caption{ The test accuracies on MNIST of five independent simulations are shown with respect to the number of epochs for the He initialization and the RAI. (Left) A shallow (depth 2, width 1024) ReLU network is employed. The He initialization and the RAI have similar performance. (Right) A deep (depth 50, width 10) ReLU network is employed. The RAI results in higher test accuracy than the He initialization.} \label{fig:mnist} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we establish, to the best of our knowledge, the first theoretical analysis of the dying ReLU. By focusing on the worst case of the dying ReLU, we define `the dying ReLU network', which refers to the case where the entire ReLU network is dead. We categorize the dying process into two phases. One phase is the event where the ReLU network is initialized to be a constant function. We refer to this event as `the network is born dead'. The other phase is the event where the ReLU network collapses after training. Certainly, the first phase implies the second, but not vice versa. We show that the probability that the network is born dead goes to 1 as the depth goes to infinity. Also, we provide upper and lower bounds on the dying probability for $d_{\text{in}}=1$ when the standard symmetric initialization is used. Furthermore, in order to overcome the dying ReLU network, we propose a new initialization procedure, namely, a randomized asymmetric initialization (RAI). We show that the RAI has a smaller upper bound on the probability of NNs being born dead.
By establishing the expected length-map relation (second-moment analysis), all parameters needed for the new method are chosen theoretically. Numerical examples are provided to demonstrate the performance of our method. We observe that the RAI not only overcomes the dying ReLU but also improves the training and generalization performance. \section{Proof of Theorem~\ref{thm:dying-prob}} \label{app:thm:dying-prob} The proof starts with the following lemma. \begin{lemma} \label{app:lemma:nndying} Let $\mathcal{N}^L(\textbf{x})$ be an $L$-layer ReLU neural network with $N_\ell$ neurons in the $\ell$-th layer. Suppose all weights are randomly and independently generated from probability distributions satisfying $P(\bm{W}_j^\ell \bm{z} = 0) = 0$ for any nonzero vector $\bm{z} \in \mathbb{R}^{N_{\ell-1}}$ and any row $\bm{W}_j^\ell$ of $\bm{W}^\ell$. Then \begin{equation*} P(\mathcal{N}^L(\textbf{x}) \text{ is born dead in } \mathcal{D}) = P(\exists \hspace{0.1cm} \ell \in \{1,\dots,L-1\} \text{ such that } \phi(\mathcal{N}^\ell(\textbf{x})) = \bm{0} \hspace{0.1cm} \forall \textbf{x} \in \mathcal{D}), \end{equation*} where $\mathcal{D} \subset B_r(\bm{0})=\{\textbf{x} \in \mathbb{R}^{d_{\text{in}}} | \|\textbf{x}\| < r\}$ for any $r>0$. \end{lemma} \begin{proof} Suppose $\mathcal{N}^L(\textbf{x}) = \mathcal{N}^L(\bm{0})$ for all $\textbf{x} \in \mathcal{D} \subset B_r(\bm{0})$. Then $\phi(\mathcal{N}^{L-1}(\textbf{x})) = \phi(\mathcal{N}^{L-1}(\bm{0}))$ for all $\textbf{x} \in \mathcal{D}$. If $\phi(\mathcal{N}^{L-1}(\textbf{x})) = \phi(\mathcal{N}^{L-1}(\bm{0})) = \bm{0}$, we are done with $\ell = L-1$. If this is not the case, there exists $j$ in $\{1,\cdots, N_{L-1}\}$ such that for all $\textbf{x} \in \mathcal{D}$, $$ (\mathcal{N}^{L-1}(\textbf{x}))_j = \bm{W}^{L-1}_j\phi(\mathcal{N}^{L-2}(\textbf{x})) + \bm{b}^{L-1}_j = \bm{W}^{L-1}_j\phi(\mathcal{N}^{L-2}(\bm{0})) + \bm{b}^{L-1}_j = (\mathcal{N}^{L-1}(\bm{0}))_j > 0. $$ Thus we have $\bm{W}^{L-1}_j \left(\phi(\mathcal{N}^{L-2}(\textbf{x}))-\phi(\mathcal{N}^{L-2}(\bm{0}))\right) = 0$ for all $\textbf{x} \in \mathcal{D}$. Let us consider the following events: \begin{align*} &G_{L-1}:=\{\bm{W}^{L-1}_j \phi(\mathcal{N}^{L-2}(\textbf{x}))= \bm{W}^{L-1}_j\phi(\mathcal{N}^{L-2}(\bm{0})), \forall \textbf{x} \in \mathcal{D} \}, \\ &R_{L-2} := \{ \phi(\mathcal{N}^{L-2}(\bm{x})) = \phi(\mathcal{N}^{L-2}(\bm{0})), \forall \textbf{x} \in \mathcal{D} \}. \end{align*} Note that $P(G_{L-1}|R_{L-2}) = 1$. Also, since $P(\bm{W}^{L-1}_j \bm{z} = 0) = 0$ for any nonzero vector $\bm{z}$, we have $P(G_{L-1}|R_{L-2}^c) = 0$. Therefore, \begin{align*} P(G_{L-1}) = P(G_{L-1}|R_{L-2})P(R_{L-2}) + P(G_{L-1}|R_{L-2}^c)P(R_{L-2}^c) = P(R_{L-2}). \end{align*} Thus we can focus on $\phi(\mathcal{N}^{L-2}(\textbf{x})) = \phi(\mathcal{N}^{L-2}(\bm{0})), \forall \textbf{x} \in \mathcal{D}$. If $\phi(\mathcal{N}^{L-2}(\textbf{x})) = \phi(\mathcal{N}^{L-2}(\bm{0})) = \bm{0}$, we are done with $\ell = L-2$. If this is not the case, it follows from the same argument that $\phi(\mathcal{N}^{L-3}(\textbf{x})) = \phi(\mathcal{N}^{L-3}(\bm{0}))$ in $\mathcal{D}$. By repeating this argument, we conclude that $$ P(\mathcal{N}^L(\textbf{x}) \text{ is born dead in } \mathcal{D}) = P(\exists \hspace{0.1cm} \ell \in \{1,\dots,L-1\} \text{ such that } \phi(\mathcal{N}^\ell(\textbf{x})) = \bm{0} \hspace{0.1cm} \forall \textbf{x} \in \mathcal{D}). $$ \end{proof} \begin{proof} Let $\mathcal{D} \subset B_r(0) \subset \mathbb{R}^{d_{\text{in}}}$ be a training domain, where $r$ is any positive real number.
We consider a probability space $(\Omega, \mathcal{F}, P)$ on which all random weight matrices and bias vectors are defined. For every $\ell \ge 1$, let $\mathcal{F}_\ell$ be the sub-$\sigma$-algebra of $\mathcal{F}$ generated by $\{\bm{W}^j, \bm{b}^j\}_{1\le j \le \ell}$. Since $\mathcal{F}_k \subset \mathcal{F}_\ell$ for $k \le \ell$, $(\mathcal{F}_\ell)$ is a filtration. Let us define the events of interest $\{A_\ell\}_{2 \le \ell}$, where \begin{equation} \label{app:def:A_ell} \begin{split} A_\ell &= \{\mathcal{N}^\ell(\textbf{x}) \text{ is born dead in } \mathcal{D} \} \\ &\overset{a.s.}{=} \{\exists \hspace{0.1cm} j \in \{1,\dots,\ell-1\} \text{ such that } \phi(\mathcal{N}^j(\textbf{x})) = \bm{0} \hspace{0.1cm} \forall \textbf{x} \in \mathcal{D}\} \end{split} \end{equation} where the second equality follows from Lemma~\ref{app:lemma:nndying}. Note that $A_\ell$ is measurable in $\mathcal{F}_{\ell-1}$. Here the biases $\{\bm{b}^{\ell}\}_{1 \le \ell}$ could be either $0$ or random vectors. Since $\mathcal{N}^1(\textbf{x}) = \bm{W}^1\textbf{x} + \bm{b}^1$, $P(A_1) = 0$. To calculate $P(A_\ell)$ for $\ell \ge 2$, let us consider another event $C_{\ell,k}$ on which exactly $(N_\ell-k)$ components of $\phi(\mathcal{N}^\ell(\textbf{x}))$ are zero on $\mathcal{D}$. For notational completeness, we set $C_{1,k} = \emptyset$ for $0 \le k < N_1$ and $C_{1,N_1} = \Omega$. Then, since $C_{\ell-1, 0} \subset A_\ell$, we have \begin{equation} \label{app:thm1:eqn2} P(A_\ell) \ge P(C_{\ell-1,0}). \end{equation} We want to show that $\lim_{\ell \to \infty} P(C_{\ell,0}) = 1$. Since $\{C_{\ell-1,k}\}_{0\le k \le N_{\ell-1}}$ is a partition of $\Omega$, by the law of total probability, we have \begin{align*} P(C_{\ell,s}) &= \sum_{k=0}^{N_{\ell-1}} P(C_{\ell,s}|C_{\ell-1,k}) P(C_{\ell-1,k}), \end{align*} where $P(C_{\ell,0}|C_{\ell-1,0}) = 1$ and $P(C_{\ell,s}|C_{\ell-1,0}) = 0, \forall s \ge 1$. Since $\bm{W}_{ij}^\ell$ and $\bm{b}^\ell_j$ are independently initialized, we have \begin{align*} P(C_{\ell,0}|C_{\ell-1,k}) = \left( P( \langle \bm{W}^{\ell}_j, \phi(\mathcal{N}^{\ell-1}(\textbf{x})) \rangle + \bm{b}^{\ell}_j < 0 |C_{\ell-1,k}) \right)^{N_{\ell}} \ge p_k^{N_{\ell}} > 0, \end{align*} where the second and third relations follow from the assumption. Here $p_k > 0$ does not depend on $\ell$. If $\bm{W}_{ij}^\ell, \bm{b}_j^\ell$ are randomly initialized from symmetric distributions around 0, \begin{align*} P(C_{\ell,0}|C_{\ell-1,k}) \ge \begin{cases} (2^{-k})^{N_{\ell}} & \text{if $\bm{b}^{\ell} = 0$,} \\ (2^{-(k+1)})^{N_{\ell}} & \text{if $\bm{b}^{\ell}$ is generated from a symmetric distribution.} \end{cases} \end{align*} Let us define a transition matrix $V_\ell$ of size $(N_{\ell-1}+1) \times (N_{\ell}+1)$ whose $(i+1,j+1)$-component is \begin{align*} V_\ell(i+1,j+1) = P(C_{\ell,j}|C_{\ell-1,i}), \qquad \text{where} \quad 0 \le j \le N_{\ell} \quad \text{and} \quad 0 \le i \le N_{\ell-1}. \end{align*} Then, given $$\pi_1 = [P(C_{1,0}),P(C_{1,1}),\cdots, P(C_{1,N_1})] = [0,\cdots, 0,1] \in \mathbb{R}^{N_1+1},$$ we have \begin{equation*} \pi_{\ell} = \pi_1 V_2 \cdots V_\ell = [P(C_{\ell,0}),P(C_{\ell,1}),\cdots, P(C_{\ell,N_{\ell}})], \qquad \ell \ge 2. \end{equation*} Suppose $N_\ell = N$ for all $\ell \ge 1$. Note that the first row of $V_\ell$ is $[1,0,\cdots, 0]$ for all $\ell \ge 2$. Thus we obtain the following non-decreasing sequence $\{a_\ell\}_{\ell=1}^\infty$: $$ a_\ell := (\pi_\ell)_1 = P(C_{\ell,0}).
$$ Since $a_\ell \le 1$, it converges, say, $\lim_{\ell \to \infty} a_\ell = a \le 1$. Suppose $a \ne 1$, i.e., $a-1 < 0$, and let $p_* = \min_{1\le k \le N} p_k$. Then, since $a_{k+1} = (\pi_k V_{k+1})_1$, we have \begin{align*} a_{k+1} = a_k + \sum_{j =1}^{N} P(C_{k+1,0}|C_{k,j}) (\pi_k)_{j+1} \ge a_k + (1-a_k)p_*^{N}. \end{align*} Thus \begin{align*} 0 \le a - a_{k+1} \le a - a_k + (a_k - 1)p_*^{N}. \end{align*} By taking the limit on both sides, we have \begin{align*} 0 \le (a-1)p_*^{N} < 0, \end{align*} which leads to a contradiction. Therefore, $a = \lim_{\ell \to \infty} P(C_{\ell,0}) = 1$. It then follows from Equation~\ref{app:thm1:eqn2} that $$ \lim_{\ell \to \infty} P(\mathcal{N}^\ell(\textbf{x}) \text{ is born dead in } \mathcal{D}) \ge \lim_{\ell \to \infty} P(C_{\ell-1,0}) = 1, $$ which completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:main}} \label{app:thm:main} \begin{proof} Based on Lemma~\ref{app:lemma:nndying}, let us consider \begin{align*} A_\ell &= \{\exists \hspace{0.1cm} j \in \{1,\dots,\ell-1\} \text{ such that } \phi(\mathcal{N}^{j}(\textbf{x})) = \bm{0} \hspace{0.1cm} \forall \textbf{x} \in \mathcal{D}\}, \\ A_\ell^c &= \{\text{$\forall 1 \le j < \ell$ there exists $\textbf{x} \in \mathcal{D}$ such that } \phi(\mathcal{N}^{j}(\textbf{x})) \ne \bm{0}\},\\ \tilde{A}_{\ell,\textbf{x}}^c &= \{\forall 1 \le j < \ell, \hspace{0.1cm} \phi(\mathcal{N}^{j}(\textbf{x})) \ne \bm{0} \}, \\ \tilde{A}_{\ell,\textbf{x}} &= \{\exists \hspace{0.1cm} j \in \{1,\dots,\ell-1\} \text{ such that } \phi(\mathcal{N}^{j}(\textbf{x})) = \bm{0} \}. \end{align*} Then, if $\textbf{x} \in \mathcal{D}$, $\tilde{A}_{\ell,\textbf{x}}^c \subset A_\ell^c$. Thus it suffices to compute $P(\tilde{A}_{\ell,\textbf{x}}^c)$, as $$ P(A_\ell) = 1-P(A_\ell^c) \le 1-P(\tilde{A}_{\ell,\textbf{x}}^c). $$ For $\textbf{x} \ne \bm{0}$, let us consider \begin{align*} U_{j,\textbf{x}} = \{\phi(\mathcal{N}^j(\textbf{x})) = \bm{0} \}, \qquad U_{j,\textbf{x}}^c = \{\phi(\mathcal{N}^j(\textbf{x})) \ne \bm{0} \}. \end{align*} Note that $\bigcup_{1\le j < \ell} U_{j,\textbf{x}} = \tilde{A}_{\ell,\textbf{x}}$ and $\tilde{A}_{\ell,\textbf{x}}^c = \bigcap_{1\le j < \ell} U_{j,\textbf{x}}^c$. Since $P(\tilde{A}_{j,\textbf{x}}^c|\tilde{A}_{j-1,\textbf{x}}) = 0$ for all $j$, we have \begin{equation} \label{app:thm1:eqn3} P(\tilde{A}_{\ell,\textbf{x}}^c) = P(\tilde{A}_{\ell,\textbf{x}}^c | \tilde{A}_{\ell-1,\textbf{x}}^c)P(\tilde{A}_{\ell-1,\textbf{x}}^c) = \cdots = P(\tilde{A}_{1,\textbf{x}}^c)\prod_{j=2}^\ell P(\tilde{A}_{j,\textbf{x}}^c | \tilde{A}_{j-1,\textbf{x}}^c). \end{equation} Note that $P(\tilde{A}_{1,\textbf{x}}^c)=1$. Also, note that since the rows of the weight matrices and the components of the bias vectors are independent, \begin{equation} \label{app:thm1:eqn4-prob} P(\tilde{A}_{j,\textbf{x}} |\tilde{A}_{j-1,\textbf{x}}^c) = \prod_{s=1}^{N_{j-1}} P\left(\bm{W}^{j-1}_s\phi(\mathcal{N}^{j-2}(\textbf{x})) + \bm{b}^{j-1}_s \le 0 |\tilde{A}_{j-1,\textbf{x}}^c\right). \end{equation} Since the weights and biases are randomly drawn from symmetric distributions around 0 and $P\left(\bm{W}^{j-1}_s\phi(\mathcal{N}^{j-2}(\textbf{x})) + \bm{b}^{j-1}_s = 0 |\tilde{A}_{j-1,\textbf{x}}^c\right) = 0$, we obtain $$ P\left(\bm{W}^{j-1}_s\phi(\mathcal{N}^{j-2}(\textbf{x})) + \bm{b}^{j-1}_s \le 0 |\tilde{A}_{j-1,\textbf{x}}^c\right) = \frac{1}{2}. $$ Therefore, $P(\tilde{A}_{j,\textbf{x}}|\tilde{A}_{j-1,\textbf{x}}^c)=2^{-N_{j-1}}$ and thus $P(\tilde{A}_{j,\textbf{x}}^c|\tilde{A}_{j-1,\textbf{x}}^c)=1-2^{-N_{j-1}}$.
It then follows from Equation~\ref{app:thm1:eqn3} that \begin{equation*} P(\tilde{A}_{\ell,\textbf{x}}^c) = \prod_{j=1}^{\ell-1} (1-2^{-N_{j}}), \end{equation*} which completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:nnwidthN}} \label{app:thm:nnwidthN} \begin{proof} We now assume $d_{\text{in}}=1$ and $N_\ell = N$, and without loss of generality let $\mathcal{D} \subset [-r,r]$ be a training domain for any $r>0$. We also assume that all weights are initialized from continuous symmetric probability distributions around 0 and that the biases are set to zero. Since $d_{\text{in}}= 1$, for each $\ell$ there exist non-negative vectors $\bm{v}_{\pm} \in \mathbb{R}^{N_\ell}_{+}$ such that \begin{equation} \label{app:thm2:l-th-layer} \phi(\mathcal{N}^\ell(x)) = \bm{v}_{+} \phi(x) + \bm{v_{-}}\phi(-x). \end{equation} Let $B_{\ell,0}$ be the event where $\phi(\mathcal{N}^\ell(x)) = 0$, and let $B_{\ell,1}$ be the event where $$ \phi(\mathcal{N}^\ell(x)) = \bm{v}_{+} \phi(x), \quad \text{or} \quad \bm{v_{-}}\phi(-x), \quad \text{or} \quad \bm{v}(\phi(x) + b\phi(-x)) $$ for some $\bm{v}_\pm$, $\bm{v} \in \mathbb{R}_+^{N_\ell}$ and $b > 0$. Let $B_{\ell,2}$ be the event where $$ \phi(\mathcal{N}^\ell(x)) = \bm{v}_{+} \phi(x) + \bm{v_{-}}\phi(-x) $$ for some linearly independent vectors $\bm{v}_{\pm}$. Then it can be checked that $P(B_{\ell+1,k}|B_{\ell,s}) = 0$ for all $2 \ge k > s \ge 0$. Thus, it suffices to consider $P(B_{\ell+1,k}|B_{\ell,s})$ where $0 \le k \le s \le 2$. At $\ell=1$, since $\phi(\mathcal{N}^1(\textbf{x})) = \phi(\bm{W}^{1}\textbf{x})$ and $\mathcal{D} \subset [-r,r]$, we have \begin{align*} P(B_{1,0}) &= 0, \qquad P(B_{1,1}) = 2^{-N+1}, \qquad P(B_{1,2}) = 1-2^{-N+1}. \end{align*} For $\ell > 1$, it can be checked that $P(B_{\ell,0}|B_{\ell-1,0}) = 1$, $P(B_{\ell,0}|B_{\ell-1,1}) = 2^{-N}$, and thus $P(B_{\ell,1}|B_{\ell-1,1}) = 1-2^{-N}$. For $P(B_{\ell,0}|B_{\ell-1,2})$ and $P(B_{\ell,1}|B_{\ell-1,2})$, we observe the following. In $B_{\ell,2}$, we have \begin{align*} \phi(\langle \bm{w},\phi(\mathcal{N}^\ell(x)) \rangle) = \phi(\langle \bm{w}, \bm{v}_{+}\rangle) \phi(x) + \phi(\langle \bm{w}, \bm{v}_{-}\rangle) \phi(-x). \end{align*} Since $\bm{v}_{\pm}$ are nonzero vectors in $\mathbb{R}^{N_\ell}_+$ and thus satisfy $\langle \bm{v}_+, \bm{v}_- \rangle \ge 0$, by the assumption, we obtain \begin{align*} P(\langle \bm{w}, \bm{v}_{+}\rangle < 0, \langle \bm{w}, \bm{v}_{-}\rangle < 0|\bm{v}_{\pm}) &= \frac{1}{2} - p_{\bm{v}_\pm} \ge 1/4, \\ P(\langle \bm{w}, \bm{v}_{+}\rangle > 0, \langle \bm{w}, \bm{v}_{-}\rangle > 0 | \bm{v}_{\pm}) &= \frac{1}{2} - p_{\bm{v}_\pm} \ge 1/4, \\ P(\langle \bm{w}, \bm{v}_{+}\rangle > 0, \langle \bm{w}, \bm{v}_{-}\rangle < 0 | \bm{v}_{\pm}) &= P(\langle \bm{w}, \bm{v}_{+}\rangle < 0, \langle \bm{w}, \bm{v}_{-}\rangle > 0 | \bm{v}_{\pm}) =p_{\bm{v}_\pm} \le \frac{1}{4}. \end{align*} Thus we have \begin{align*} P(B_{\ell,0}|B_{\ell-1,2}) = \mathbb{E}\left[ \left(\frac{1}{2} - p_{\bm{v}_\pm} \right)^N \right] \ge (1/4)^N \end{align*} where the expectation is taken over $p_{\bm{v}_\pm}$. For $P(B_{\ell,1}|B_{\ell-1,2})$, there are only three ways to move from $B_{\ell-1,2}$ to $B_{\ell,1}$.
That is, \begin{align*} B_{\ell,1}^{(A)}|B_{\ell-1,2}: \phi(\mathcal{N}^{\ell-1}(x)) &\to \phi(\mathcal{N}^{\ell}(x)) = \bm{v}_{+} \phi(x), \\ B_{\ell,1}^{(B)}|B_{\ell-1,2}: \phi(\mathcal{N}^{\ell-1}(x)) &\to \phi(\mathcal{N}^{\ell}(x)) =\bm{v_{-}}\phi(-x), \\ B_{\ell,1}^{(C)}|B_{\ell-1,2}: \phi(\mathcal{N}^{\ell-1}(x)) &\to \phi(\mathcal{N}^{\ell}(x)) =\bm{v}(\phi(x) + b\phi(-x)). \end{align*} Thus $ P(B_{\ell,1}|B_{\ell-1,2}) = P(B_{\ell,1}^{(A)}|B_{\ell-1,2}) + P(B_{\ell,1}^{(B)}|B_{\ell-1,2}) + P(B_{\ell,1}^{(C)}|B_{\ell-1,2}). $ Note that, due to the symmetry, $P(B_{\ell,1}^{(A)}|B_{\ell-1,2}) = P(B_{\ell,1}^{(B)}|B_{\ell-1,2})$. Thus we have \begin{align*} P(B_{\ell,1}|B_{\ell-1,2}) &= 2\left(\sum_{j =1}^N \binom{N}{j}\mathbb{E}\left[\left(\frac{1}{2} - p_{\bm{v}_\pm}\right)^{N-j}\left(p_{\bm{v}_\pm}\right)^j\right] \right) + \binom{N}{1}\mathbb{E}\left[\left(\frac{1}{2} - p_{\bm{v}_\pm}\right)^N\right] \\ &= 2^{-N+1} + (N-2)P(B_{\ell,0}|B_{\ell-1,2}) \\ &\ge 2^{-N+1} + (N-2)4^{-N}. \end{align*} Note that $$ P(B_{\ell,0}|B_{\ell-1,2}) + P(B_{\ell,1}|B_{\ell-1,2}) = 2^{-N+1} + (N-1)P(B_{\ell,0}|B_{\ell-1,2}). $$ Since $P(A_{\ell+1}) = P(B_{\ell,0})$, where $A_{\ell}$ is defined in Equation~\ref{app:def:A_ell}, we aim to estimate $P(B_{\ell,0})$. Let $V_\ell$ be the transition matrix whose $(i+1,j+1)$-component is $P(B_{\ell,j}|B_{\ell-1,i})$. Then $P(B_{\ell,0}) = (\pi_1 V_2 \cdots V_\ell)_1$, where $(\pi_1)_j = P(B_{1,j-1})$. By letting $$ \gamma_\ell = 1+\frac{2^{-N}- P(B_{\ell,0}|B_{\ell-1,2})}{2^{-N} + (N-1)P(B_{\ell,0}|B_{\ell-1,2})}, $$ we obtain \begin{align*} V_\ell = Q_\ell D_\ell Q_\ell^{-1}, \qquad Q_\ell = \begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 1/\sqrt{3} & \frac{1}{\sqrt{1+\gamma_\ell^2}} & 0 \\ 1/\sqrt{3} & \frac{\gamma_\ell}{\sqrt{1+\gamma_\ell^2}}& 1 \end{bmatrix}, \quad Q_\ell^{-1} = \begin{bmatrix} \sqrt{3} & 0 & 0 \\ -\sqrt{1+\gamma_\ell^2} & \sqrt{1+\gamma_\ell^2} & 0 \\ -(1-\gamma_\ell) & -\gamma_\ell & 1 \end{bmatrix} \end{align*} where $D_\ell = \text{diag}(V_\ell)$. To find a lower bound of $P(B_{\ell,0})$, we consider the following transition matrix $\mathcal{P}$ of size $3 \times 3$, defined by \begin{align*} \mathcal{P} = \begin{bmatrix} 1 & 0 & 0 \\ (1/2)^N & 1 - (1/2)^N & 0 \\ (1/4)^{N} & (1/2)^{N-1}+(N-2)(1/4)^N & 1-(1/2)^{N-1}-(N-1)(1/4)^N \end{bmatrix}. \end{align*} It can be checked that \begin{align*} \mathcal{P} = QDQ^{-1}, \qquad Q = \begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 1/\sqrt{3} & \frac{1}{\sqrt{1+\gamma^2}} & 0 \\ 1/\sqrt{3} & \frac{\gamma}{\sqrt{1+\gamma^2}}& 1 \end{bmatrix}, \quad Q^{-1} = \begin{bmatrix} \sqrt{3} & 0 & 0 \\ -\sqrt{1+\gamma^2} & \sqrt{1+\gamma^2} & 0 \\ -(1-\gamma) & -\gamma & 1 \end{bmatrix} \end{align*} where $\gamma = \frac{\mathcal{P}_{32}}{\mathcal{P}_{22} -\mathcal{P}_{33}} = \frac{2^{-N+1}+(N-2)4^{-N}}{2^{-N}+(N-1)4^{-N}} = 1 + \frac{2^{-N}-4^{-N}}{2^{-N}+(N-1)4^{-N}}$ and $D = \text{diag}(\mathcal{P})$. Thus we have \begin{equation*} \mathcal{P}^\ell = \begin{bmatrix} 1 & 0 & 0 \\ 1 - (\mathcal{P}_{22})^\ell & (\mathcal{P}_{22})^\ell & 0 \\ 1-(\mathcal{P}_{22})^\ell-(\gamma-1)((\mathcal{P}_{22})^\ell-(\mathcal{P}_{33})^\ell) & \gamma((\mathcal{P}_{22})^\ell-(\mathcal{P}_{33})^\ell) & (\mathcal{P}_{33})^\ell \end{bmatrix}.
\end{equation*} Similarly, we obtain \begin{equation*} V_2\cdots V_{\ell+1} = \begin{bmatrix} 1 & 0 & 0 \\ 1 - (\mathcal{P}_{22})^\ell & (\mathcal{P}_{22})^\ell & 0 \\ \xi_{\ell,31} & \xi_{\ell,32} & \xi_{\ell,33} \end{bmatrix} \end{equation*} where $g(x) = 1 - 2^{-N+1} - (N-1)x$, $p_\ell = (V_{\ell})_{31} = P(B_{\ell,0}|B_{\ell-1,2})$, $\gamma_{\ell,i} = \gamma_i$ for $1 \le i \le \ell$, $\gamma_{\ell,\ell+1} = 1$, \begin{align*} \xi_{\ell, 31} &= (\mathcal{P}^\ell)_{31} -(\gamma_{\ell,1}-1)(g(p_1))^\ell +\sum_{i=1}^\ell (\gamma_{\ell,i} - \gamma_{\ell,i+1})(\mathcal{P}_{22})^{\ell-i} \prod_{j=1}^i g(p_j), \\ \xi_{\ell, 33} &= \prod_{j=1}^\ell g(p_j), \qquad \xi_{\ell, 32} = 1 - \xi_{\ell, 31} - \xi_{\ell, 33}. \end{align*} We want to show that $$ (\pi_1\mathcal{P}^{\ell})_1 \le (\pi_1 V_2 \cdots V_{\ell+1})_1 = P(B_{\ell+1,0}) $$ where \begin{align*} \pi_1 = [P(B_{1,0}), P(B_{1,1}), P(B_{1,2})] = [0, 2^{-N+1}, 1 - 2^{-N+1} ]. \end{align*} Let us denote $\bar{\gamma_{\ell,i}}:= \gamma_{\ell,i}-1$ and $g_i := g(p_i)$. It then suffices to show that \begin{align*} \mathcal{J}:= \sum_{i=1}^\ell (\bar{\gamma_{\ell,i}} - \bar{\gamma_{\ell,i+1}})(\mathcal{P}_{22})^{\ell-i} \prod_{j=1}^i g_j -\bar{\gamma_{\ell,1}}(g_1)^\ell \ge 0. \end{align*} Note that $4^{-N}=p_1 \le p_j <2^{-N}$, $\mathcal{P}_{22} > g(p_j)$, and thus $\mathcal{P}_{22}^\ell > (g(p_1))^\ell \ge \prod_{j=1}^\ell g(p_j)$. Also, \begin{align*} \mathcal{P}_{22} - g(p_i) = 2^{-N} + (N-1)p_i, \qquad \bar{\gamma_{\ell,i}} = \frac{2^{-N}-p_i}{2^{-N} + (N-1)p_i} = \frac{2^{-N}-p_i}{\mathcal{P}_{22} - g(p_i)}. \end{align*} Thus we have \begin{align*} \mathcal{J} &= \bar{\gamma_{\ell,1}}(g_1\mathcal{P}_{22}^{\ell-1} - g_1^\ell) - \sum_{i =2}^\ell \bar{\gamma_{\ell,i}} (\mathcal{P}_{22} - g_i) \mathcal{P}_{22}^{\ell-i} \prod_{j=1}^{i-1} g_j \\ &\ge \bar{\gamma_{\ell,1}}(\mathcal{P}_{22}^\ell - g_1^\ell) - \sum_{i=1}^\ell \bar{\gamma_{\ell,i}} (\mathcal{P}_{22} - g_i) \mathcal{P}_{22}^{\ell-i} g_1^{i-1} \\ &\ge \bar{\gamma_{\ell,1}}(\mathcal{P}_{22}^\ell - g_1^\ell) - \bar{\gamma_{\ell,1}}(\mathcal{P}_{22} - g_1)\frac{\mathcal{P}_{22}^\ell - g_1^\ell}{\mathcal{P}_{22} - g_1} = 0. \end{align*} Therefore, $$ (\mathcal{P}^\ell)_{31} \le (V_2\cdots V_{\ell+1})_{31}, $$ which implies that $$ (\pi_1\mathcal{P}^{\ell})_1 \le (\pi_1 V_2 \cdots V_{\ell+1})_1 = P(B_{\ell+1,0}) = P(A_{\ell+2}). $$ Furthermore, it can be checked that \begin{align*} (\pi_1 \mathcal{P}^{\ell})_1 = 1 - (\mathcal{P}_{22})^\ell - \left(\frac{(1-2^{-N})(1-2^{-N+1})}{1+(N-1)2^{-N}}\right) ((\mathcal{P}_{22})^\ell - (\mathcal{P}_{33})^\ell). \end{align*} \end{proof} \section{Proof of Theorem~\ref{thm:nn2mean}} \label{app:thm:nn2mean} \begin{proof} Since $\mathcal{N}^L(\mathbf{x})$ is a constant function, it follows from Lemma~\ref{app:lemma:nndying} that, with probability 1, there exists $\ell$ such that $\phi(\mathcal{N}^\ell(\textbf{x})) \equiv \mathbf{0}$. Then the gradients of the loss function with respect to the weights and biases in layers $1,\cdots, \ell$ vanish. Hence, the weights and biases in layers $1, \dots, \ell$ will not change when a gradient-based optimizer is employed. This implies that $\mathcal{N}^L(\textbf{x})$ always remains a constant function, as $\phi(\mathcal{N}^\ell(\textbf{x})) \equiv \mathbf{0}$. Furthermore, the gradient method changes only the weights and biases in layers $\ell+1, \dots, L$. Therefore, the ReLU NN can only be optimized to a constant function, which minimizes the loss function $\mathcal{L}$.
\end{proof} \section{Proof of Theorem~\ref{thm:asym-prob}} \label{app:thm:asym-prob} \begin{lemma} \label{app:lemma:prob-positive-real} Let $\bm{v} \in \mathbb{R}^{n+1}$ be a vector such that $\bm{v}_k \sim \text{P}$ and $\bm{v}_{-k} \sim N(0,\sigma^2\bm{I}_n)$ where $\bm{v}_{-k}$ is defined in Equation~\ref{def-v_-k}. For any nonzero vector $\textbf{x} \in \mathbb{R}^{n+1}$ whose $k$-th element is positive (i.e., $\textbf{x}_k > 0$), let \begin{equation*} \|\tilde{\textbf{x}}_{-k}\|^2 = \frac{\sum_{j \ne k} \textbf{x}_j^2}{\textbf{x}_k^2}, \qquad \tilde{\sigma}^2 = \|\tilde{\textbf{x}}_{-k}\|^2\sigma^2. \end{equation*} Then \begin{equation} \label{app:lemma:eqn:prob-positive} P\left(\langle \bm{v}, \textbf{x} \rangle > 0 \right) = \begin{cases} \frac{1}{2} + \int_0^{M} (1-F_{\text{P}}(t)) \frac{1}{\sqrt{2\pi}\tilde{\sigma}}e^{-\frac{t^2}{2\tilde{\sigma}^2}} dt & \text{if $\tilde{\sigma}^2 > 0$ and $\textbf{x}_k > 0$} \\ 1/2 & \text{if $\|{\textbf{x}}_{-k}\|^2 > 0$ and $\textbf{x}_k = 0$} \\ 1 & \text{if $\tilde{\sigma}^2 = 0$ and $\textbf{x}_k > 0$}, \end{cases} \end{equation} where $F_{\text{P}}(t)$ is the cdf of ${\text{P}}$. \end{lemma} \begin{proof} We first recall some properties of the normal distribution. Let $Y_1, \cdots, Y_n$ be i.i.d. random variables from $N(0,\sigma^2)$ and let $\bm{Y}_n = (Y_1, \cdots, Y_n)$. Then for any vector $\bm{a} \in \mathbb{R}^n$, $$ \langle \bm{Y}_n, \bm{a} \rangle = \sum_{i=1}^n \bm{a}_i Y_i \sim N(0,\|\bm{a}\|^2\sigma^2). $$ Suppose $\bm{v}$ is a random vector generated in the way described in Subsection~\ref{subsec:asyminit}. Then for any $\bm{x} \in \mathbb{R}^{n+1}$, $$ \langle \bm{v}, \bm{x} \rangle = \sum_{i \ne k} \bm{v}_i \bm{x}_i + \bm{v}_k\bm{x}_k = \bm{x}_k \left(Z + \bm{v}_k \right), $$ where $\tilde{\sigma}^2 = \|\tilde{\bm{x}}_{-k}\|^2\sigma^2$ and $Z \sim N(0,\tilde{\sigma}^2)$. If $\bm{x}_k > 0$ and $\|{\textbf{x}}_{-k}\|^2 > 0$, we have $$ \langle \bm{v}, \bm{x} \rangle > 0 \quad \iff \quad \bm{v}_k > -Z \overset{d}{=} Z. $$ Therefore, it suffices to compute $P(\bm{v}_k > Z)$. Let $f_Z(z)$ be the pdf of $Z$ and $f_{\text{P}}(x)$ be the pdf of $\bm{v}_k\sim \text{P}$. Then \begin{align*} P(\bm{v}_k > Z) &= \int_{-\infty}^\infty \int_{z}^M f_{\text{P}}(x)dx f_Z(z)dz \\ &= \int_{-\infty}^0 \int_{0}^M f_{\text{P}}(x)dx f_Z(z)dz + \int_0^M \int_z^M f_{\text{P}}(x)dx f_Z(z)dz \\ &= \frac{1}{2} + \int_0^M (1-F_{\text{P}}(z))f_Z(z)dz, \end{align*} which completes the proof. \end{proof} \begin{proof} For each $\ell$ and $s$, let $k_s^\ell$ be randomly uniformly chosen in $\{1,\cdots,N_{\ell-1}+1\}$. Let $\bm{V}^{\ell}_s = [\bm{W}^\ell_s, \bm{b}^\ell_s]$ and $\textbf{n}^\ell(\textbf{x}) = \left[\phi(\mathcal{N}^\
ell(\textbf{x})),1 \right]$. Recall that for a vector $\textbf{v} \in \mathbb{R}^{N+1}$, $$ \textbf{v}_{-k} := [v_1,\cdots,v_{k-1},v_{k+1},\cdots,v_{N+1}]. $$ To emphasize the dependency of $k_s^\ell$, we denote $\bm{V}^\ell_s$ whose $k$-th component is generated from $\text{P}$ by $\bm{V}^\ell_s(k)$. The proof can be started from Equation~\ref{app:thm1:eqn4-prob}. It suffices to compute \begin{equation*} P(\tilde{A}_{j,\textbf{x}}|\tilde{A}_{j-1,\textbf{x}}^c) = \prod_{s=1}^{N_{j-1}} P\left(\bm{W}^{j-1}_s\phi(\mathcal{N}^{j-2}(\textbf{x})) + \bm{b}^{j-1}_s \le 0 |\tilde{A}_{j-1,\textbf{x}}^c\right). \end{equation*} Note that for each $s$, $$ P\left(\langle \bm{V}^j_s, \textbf{n}^{j-1} \rangle \le 0 |\tilde{A}_{j-1,\textbf{x}}^c\right) = \frac{1}{N_{j-1}+1} \sum_{k=1}^{N_{j-1}+1} P\left( \langle \bm{V}^j_s(k), \textbf{n}^{j-1} \rangle \le 0 | \tilde{A}_{j-1,\textbf{x}}^c\right). $$ Also, from Lemma~\ref{app:lemma:prob-positive-real}, we have \begin{align*} P\left( \langle \bm{V}^j_s(k), \textbf{n}^{j-1} \rangle \le 0 | \tilde{A}_{j-1,\textbf{x}}^c\right) = \frac{1}{2} - \int_0^{M} (1-F_P(z)) \frac{1}{\sqrt{2\pi}\tilde{\sigma}_{\ell,k}}e^{-\frac{z^2}{2\tilde{\sigma}_{\ell,k}^2}} dz \end{align*} where $$ \tilde{\sigma}_{\ell,k}^2 = \frac{\|\textbf{n}^{\ell-1}_{-k}(\textbf{x}))\|^2\sigma_\ell^2}{ \left(\textbf{n}^{\ell-1}_k(\textbf{x})\right)^2}. $$ Note that if $\textbf{n}^{\ell-1}_k(\textbf{x}) = 0$, the above probability is simply $1/2$. Also, since the event $\tilde{A}_{j-1,\textbf{x}}^c$ is given, we know that $\phi(\mathcal{N}^\ell(\textbf{x})) \ne 0$. Thus, $$ P\left( \langle \bm{V}^j_s, \textbf{n}^{j-1} \rangle \le 0 | \tilde{A}_{j-1,\textbf{x}}^c\right) < \frac{1}{2}. $$ We thus denote $$ \left(\frac{1}{2}-\delta_{\ell-1,\textbf{x}}(\omega)\right)^{N_{\ell-1}}:= P(\tilde{A}_{\ell,\textbf{x}}|\tilde{A}_{\ell-1,\textbf{x}}^c(\omega)), $$ where $\delta_{\ell-1,\textbf{x}}$ is $\mathcal{F}_{\ell-2}$ measurable whose value lies in $(0,0.5]$ and it depends on $\textbf{x}$. It then follows from Equation~\ref{app:thm1:eqn3} that \begin{align*} P(\tilde{A}_{\ell,\textbf{x}}^c) = \int \prod_{j=1}^{\ell-1} \left(1 - \left(\frac{1}{2}-\delta_{j,\textbf{x}}(\omega)\right)^{N_{j}}\right) d{P}(\omega) \end{align*} where ${P}$ is the probability distribution with respect to $\{\bm{W}^{j}, \bm{b}^{j}\}_{1\le j \le \ell}$. Note that since the weight matrix and the bias vector of the first layer is initialized from a symmetric distribution, $\delta_{1,\textbf{x}} = 0$. Let $\delta_{1,\textbf{x}}^* = 0$. By the mean value theorem, there exists some numbers $\delta_{2,\textbf{x}}^*,\cdots, \delta_{\ell-1,\textbf{x}}^* \in (0,0.5]$ such that $$ P(\tilde{A}_{\ell,\textbf{x}}^c) = \prod_{j=1}^{\ell-1} \left(1- \left(\frac{1}{2}-\delta_{j,\textbf{x}}^*\right)^{N_{j-1}}\right). $$ Let $\bm{\delta}^*_{\textbf{x}} = (\delta_{1,\textbf{x}}^*,\cdots,\delta_{\ell-1,\textbf{x}}^*)$. By setting $$ \bm{\delta}^* = \bm{\delta}^*_{\textbf{x}^*}, \qquad \text{where} \quad \textbf{x}^*= \argmin_{\textbf{x} \in \mathcal{D} \backslash \{\bm{0}\}} \hspace{0.1cm} \prod_{j=1}^{\ell-1} \left(1- \left(\frac{1}{2}-\delta_{j,\textbf{x}}^*\right)^{N_{j-1}}\right), $$ the proof is completed. \end{proof} \section{Proof of Theorem~\ref{thm:2ndmo-asyminit}} \label{app:thm:2ndmo-asyminit} Let $X_\ell \sim \text{P}_\ell$ whose pdf is denoted by $f_{P_\ell}(x)$. Suppose $0 < \mu_{\ell,i}' = E[X_\ell^i] < \infty$ for $i=1,2$. 
We then define three probability distribution functions: \begin{equation*} f^{(2)}_{P_\ell}(x) = \frac{x^2}{\mu_{\ell,2}'}f_{P_\ell}(x), \qquad f^{(1)}_{P_\ell}(x) = \frac{x}{\mu_{\ell,1}'}f_{P_\ell}(x), \qquad f^{(0)}_{P_\ell}(x) = f_{P_\ell}(x). \end{equation*} Also we denote its corresponding cdfs by $F^{(i)}_{P_\ell}(t) = \int_0^t f^{(i)}_{P_\ell}(x)dx$. For notational convenience, let us assume that $\text{P}_{\ell} = \text{P}$ and $M_\ell = M$ for all $\ell$ and define $$ \textbf{n}^{\ell}(\textbf{x}):=\left[\phi(\mathcal{N}^{\ell}(\textbf{x})), 1 \right] \in \mathbb{R}_+^{N_{\ell}+1}. $$ We denote the pdf of normal distributions by $f_Y^{\ell,k}(y)$ where \begin{align*} f_Y^{\ell,k}(y) = \frac{1}{\sqrt{2\pi}\tilde{\sigma}_{\ell,k}}e^{-\frac{y^2}{2\tilde{\sigma}_{\ell,k}^2}}, \qquad \tilde{\sigma}_{\ell,k}^2 = \frac{\|\textbf{n}^{\ell-1}_{-k}(\textbf{x}))\|^2\sigma_\ell^2}{ \left(\textbf{n}^{\ell-1}_k(\textbf{x})\right)^2}, \qquad \sigma_\ell^2 = \frac{\sigma_w^2}{N_{\ell-1}} \end{align*} where $1 \le k \le N_{\ell-1}+1$. Recall that for a vector $\textbf{v} \in \mathbb{R}^{N+1}$, $$ \textbf{v}_{-k} := [v_1,\cdots,v_{k-1},v_{k+1},\cdots,v_{N+1}]. $$ For any probability distribution $\text{P}$ defined on $[0,M]$ whose pdf is denoted by $f_P(x)$, let \begin{equation} \label{app:thm:2ndmo-asyminit-defJ} \mathcal{J}^{\ell,k}_{\text{P}}:= \int_0^M \int_y^M f_P(x)dx f_Y^{\ell,k}(y)dy. \end{equation} Let $Y_{\ell,k} \sim N(0,\tilde{\sigma}_{\ell,k}^2)$ and $X \sim \text{P}$. Then \begin{align*} P\left(X > Y_{\ell,k} | \tilde{\sigma}_{\ell,k}^2\right) &= \int_{-\infty}^\infty \int_{0}^M \mathds{1}_{\{X >Y_{\ell,k}\}} f_{P}(x) f^{\ell,k}_Y(y) dxdy \\ &= \int_{-\infty}^0 \int_{0}^M f_P(x)dx f^{\ell,k}_Y(y)dy + \int_{0}^M \int_{y}^M f_P(x) dxf^{\ell,k}_Y(y)dy \\ &= \frac{1}{2} +\mathcal{J}^{\ell,k}_{\text{P}}. \end{align*} Therefore, $0 \le \mathcal{J}^{\ell,k}_{\text{P}} \le 0.5$. The proof of Theorem~\ref{thm:2ndmo-asyminit} starts with the following lemma. \begin{lemma} \label{app:lemma-J} Suppose $Y \sim N(0,\tilde{\sigma}_{\ell,k}^2)$, $X \sim \text{P}_\ell$ and $\mu_{\ell,1}' \ge M/2$. Then \begin{equation*} \begin{split} \int_{0}^M \int_{y}^M (x-y)^2 f_{P_\ell}(x)dx f_Y^{\ell,k}(y)dy \le \mu_{\ell,2}'/2. \end{split} \end{equation*} \end{lemma} \begin{proof} Let $I_{\ell,k}$ be the quantity of our interest: \begin{align*} I_{\ell,k}:=\int_{0}^M \int_{y}^M (x-y)^2 f_{P_\ell}(x)dx f_Y^{\ell,k}(y)dy. \end{align*} Without loss of generality, we can assume $M=1$. This is because \begin{align*} I_{\ell,k} &= \int_0^M \int_y^M (x-y)^2 f_{P_\ell}(x) f_Y^{\ell,k}(y) dxdy \\ &= \int_0^M \int_{y/M}^1 (Mu-y)^2 f_{P_\ell}(Mu) f_Y^{\ell,k}(y) Mdu dy \\ &= \int_0^1 \int_{s}^1 M^2(u-s)^2 (Mf_{P_\ell}(Mu)) (Mf_Y^{\ell,k}(Ms)) du ds \\ &= M^2 \int_0^1 \int_s^1 (u-s)^2 f_{U}(u) f_{S}^{\ell,k}(s) du ds \\ &= M^2 \tilde{I}_{\ell,k} \end{align*} where $u = x/M$ and $s = y/M$. Here $MU \sim \text{P}_\ell$ and $S \sim N(0,\tilde{\sigma}^2_{\ell,k}/M^2)$. Let us first consider the inner integral. Let $G(y)=\int_{y}^1 (x^2-2xy+y^2) f_{P_\ell}(x) dx$. 
It then can be written as follow: \begin{align*} G(y)&= \int_{y}^1 (x^2-2xy+y^2) f_{P_\ell}(x) dx = \int_{y}^1 (x^2f_{P_\ell}(x) -2yxf_{P_\ell}(x)+y^2f_{P_\ell}(x)) dx\\ &= \int_y^1 \left(\mu_{\ell,2}' f^{(2)}_{P_\ell}(x) - 2\mu_{\ell,1}'y f^{(1)}_{P_\ell}(x) + y^2f^{(0)}_{P_\ell}(x) \right) dx \\ &= \mu_{\ell,2}'(1 - F^{(2)}_{P_\ell}(y)) -2y\mu_{\ell,1}'(1 - F^{(1)}_{P_\ell}(y)) + y^2(1-F^{(0)}_{P_\ell}(y)) \\ &= -2\mu_{\ell,1}'y + y^2 + \mu_{\ell,2}'\int_y^1 f^{(2)}_{P_{\ell}}(x)dx +2\mu_{\ell,1}' yF^{(1)}_{P_\ell}(y) - y^2F^{(0)}_{P_\ell}(y). \end{align*} Note that since $0 \le y \le 1$, and thus $y^2 \le y$, we have \begin{align*} G(y) &= y^2(1-F^{(0)}_{P_\ell}(y)) - 2\mu_{\ell,1}'y(1-F^{(1)}_{P_\ell}(y)) + \mu_{\ell,2}'\int_y^1 f^{(2)}_{P_{\ell}}(x)dx\\ &\le y(1-F^{(0)}_{P_\ell}(y)) - 2\mu_{\ell,1}'y(1-F^{(1)}_{P_\ell}(y)) + \mu_{\ell,2}'\int_y^1 f^{(2)}_{P_{\ell}}(x)dx \\ &= -y(2\mu_{\ell,1}'-1)\left[1 - \left(\frac{2\mu_{\ell,1}'F^{(1)}_{P_\ell}(y) - F^{(0)}_{P_\ell}(y)}{2\mu_{\ell,1}'-1}\right)\right] + \mu_{\ell,2}'\int_y^1 f^{(2)}_{P_{\ell}}(x)dx. \end{align*} Since $2\mu_{\ell,1}' \ge 1$ and $$ 1 - \left(\frac{2\mu_{\ell,1}'F^{(1)}_{P_\ell}(y) - F^{(0)}_{P_\ell}(y)}{2\mu_{\ell,1}'-1}\right) \ge 0, \qquad \forall y \in [0,1], $$ we have $G(y) \le \mu_{\ell,2}'\int_y^1 f^{(2)}_{P_{\ell}}(x)dx$. Therefore, \begin{align*} I_{\ell,k} \le \mu_{\ell,2}' \int_0^1 \int_y^1 f^{(2)}_{P_{\ell}}(x)dx f^{\ell,k}_Y(y)dy= \mu_{\ell,2}' \mathcal{J}^{\ell,k}_{f^{(2)}_{P_\ell}} \end{align*} where $\mathcal{J}^{\ell,k}_{\text{P}}$ is defined in Equation~\ref{app:thm:2ndmo-asyminit-defJ}. Since $0\le \mathcal{J}^{\ell,k}_{\text{P}} \le 0.5$, we conlcude that $I_{\ell,k} \le \mu_{\ell,2}'/2$. \end{proof} \begin{proof} It follows from $$ \mathcal{N}^{\ell}_j(\textbf{x}) = \bm{W}_j^{\ell}\phi(\mathcal{N}^{\ell-1}(\textbf{x})) + \bm{b}^{\ell}_j, \qquad 1\le j \le N_\ell $$ that we have \begin{align*} E\left[\mathcal{N}^{\ell}_j(\textbf{x})^2 | \mathcal{N}^{\ell-1}(\textbf{x}) \right] = \sigma^2_\ell \sum_{t \ne k_j^\ell} \phi(\mathcal{N}^{\ell-1}_t(\textbf{x}))^2 + \mu_{\ell,2}'\phi(\mathcal{N}^{\ell-1}_{k_j^\ell}(\textbf{x}))^2. \end{align*} Since $k_j^\ell$ is randomly uniformly selected, by taking the expectation with respect to $k_j^\ell$, we have \begin{align*} \mathbb{E}\left[\mathcal{N}^{\ell}_j(\textbf{x})^2 | \mathcal{N}^{\ell-1}(\textbf{x}) \right] =\frac{\sigma_\ell^2N_{\ell} + \mu_{\ell,2}'}{N_{\ell}+1}\left(1+ \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2 \right). \end{align*} Since the above is independent of $j$, and $q^\ell(\textbf{x}) = \|\mathcal{N}^\ell(\textbf{x})\|^2/N_\ell$, \begin{equation} \label{app:length-map-main-eqn} \mathbb{E} \left[ q^{\ell}(\textbf{x}) | \mathcal{N}^{\ell-1}(\textbf{x}) \right] = \frac{\sigma_\ell^2N_{\ell} +\mu_{\ell,2}'}{N_{\ell}+1}\left(1+ \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2 \right). \end{equation} At a fixed $\ell$ and for each $t=1,\cdots, N_\ell$, let $\textbf{v}^{\ell,t}=[\textbf{W}^\ell_t, \textbf{b}^\ell_t] \in \mathbb{R}^{N_{\ell-1}+1}$ be a random vector such that $\textbf{v}_{-k_t}^{\ell,t} \sim N(0,\sigma_\ell^2 \bm{I}_{N_{\ell-1}})$ and $\textbf{v}_{k_t}^{\ell,t} \sim \text{P}_\ell$. Then $\mathcal{N}^\ell_t(\textbf{x}) = \langle \textbf{v}^{\ell,t}, \textbf{n}^\ell(\textbf{x}) \rangle$. 
Given $\mathcal{N}^{\ell-1}(\textbf{x})$, assuming $(\textbf{n}^{\ell-1}_{k_t}(\textbf{x})) > 0$, one can view $\mathcal{N}^{\ell}_t(\textbf{x})$ as $$ \mathcal{N}^\ell_t(\textbf{x}) = \textbf{n}^{\ell-1}_{k_t}(\textbf{x})\left(\sigma_{\ell,k_t}Z + X_\ell\right), \qquad Z \sim N(0,1) \text{ and } X_\ell \sim \text{P}_\ell. $$ Thus $\phi(\mathcal{N}^\ell_t(\textbf{x}))^2 = \left(\textbf{n}^{\ell-1}_{k_t}(\textbf{x})\right)^2\phi\left(\sigma_{\ell,k_t}Z + X_\ell\right)^2$ and we obtain \begin{align*} &\frac{1}{(\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2}E\left[\phi(\mathcal{N}^{\ell}_t(\textbf{x}))^2 | \mathcal{N}^{\ell-1}(\textbf{x}) \right] =E\left[\phi\left(\sigma_{\ell,k_t}Z + X_\ell\right)^2 | \mathcal{N}^{\ell-1}(\textbf{x}) \right] \\ &= \int_{-\infty}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &= \int_{-\infty}^0 \int_{0}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy + \int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &= \int_{-\infty}^0 \int_{0}^M \left(x^2+y^2-2xy\right) dF_P(x) f_Y^{\ell,k_t}(y)dy + \int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &= \int_{-\infty}^0 \left(\mu_{\ell,2}' + y^2 - 2\mu_{\ell,1}'y\right) f_Y^{\ell,k_j}(y)dy + \int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &= \frac{1}{2}\left(\mu_{\ell,2}' + \tilde{\sigma}_{\ell,k_t}^2 \right) + \mu_{\ell,1}'\sqrt{\frac{2}{\pi}}\tilde{\sigma}_{\ell,k_t} + \int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy. \end{align*} By multiplying $(\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2$ in the both sides, we have \begin{align*} E[\phi(\mathcal{N}^{\ell}_t(\textbf{x}))^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] &= \frac{1}{2}\left((\mu_{\ell,2}'-\sigma_{\ell}^2)(\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2 + (\|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2 + 1)\sigma_{\ell}^2 \right) \\ &\qquad + (\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2\int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &\qquad\qquad + \mu_{\ell,1}'\sigma_\ell \sqrt{\frac{2}{\pi}} \textbf{n}^{\ell-1}_{k_t}(\textbf{x}) \|\textbf{n}^{\ell-1}_{-k_t}(\textbf{x})\|. \end{align*} Since $k_t$ is randomly selected, by taking expectation w.r.t. $k_j$, we have \begin{equation} \label{app:thm:2ndmo-asyminit-eqn1} \begin{split} \mathbb{E}[\phi(\mathcal{N}^{\ell}_t(\textbf{x}))^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] &= \frac{(1 + \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2)}{2}\left(\frac{\mu_{\ell,2}'-\sigma_\ell^2}{N_{\ell-1}+1} + \sigma_\ell^2\right) \\ &\qquad + \sum_{k_t=1}^{N_{\ell-1}+1}\frac{(\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2}{N_{\ell-1}+1}\int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \\ &\qquad\qquad + \mu_{\ell,1}'\sqrt{\frac{2}{\pi}}\sigma_\ell \sum_{k_t=1}^{N_{\ell-1}+1} \frac{\textbf{n}^{\ell-1}_{k_t}(\textbf{x}) \|\textbf{n}^{\ell-1}_{-k_t}(\textbf{x})\|}{N_{\ell-1}+1}. \end{split} \end{equation} By the Cauchy-Schwarz inequality, the third term in the right hand side of Equation~\ref{app:thm:2ndmo-asyminit-eqn1} can be bounded by \begin{align*} \mu_{\ell,1}'\sqrt{\frac{2}{\pi}}\sigma_\ell \sum_{k_t=1}^{N_{\ell-1}+1} \frac{\textbf{n}^{\ell-1}_{k_t}(\textbf{x}) \|\textbf{n}^{\ell-1}_{-k_t}(\textbf{x})\|}{N_{\ell-1}+1} \le \mu_{\ell,1}'\sqrt{\frac{2}{\pi}}\frac{\sigma_w(1 + \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2)}{N_{\ell-1}+1}. \end{align*} Let $I_{\ell,k_t} = \int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy$. It then follows from Lemma~\ref{app:lemma-J} that $I_{\ell,k} \le \mu_{\ell,2}'/2$ for any $k$. 
Thus the second term in the right hand side of Equation~\ref{app:thm:2ndmo-asyminit-eqn1} can be bounded by \begin{align*} \sum_{k_t=1}^{N_{\ell-1}+1}\frac{(\textbf{n}^{\ell-1}_{k_t}(\textbf{x}))^2}{N_{\ell-1}+1}\int_{0}^M \int_{y}^M (x-y)^2 dF_P(x) f_Y^{\ell,k_t}(y)dy \le \frac{\mu_{\ell,2}'}{2} \frac{\left(1+\|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2\right)}{N_{\ell-1}+1} \end{align*} Therefore, we obtain \begin{align*} E[\phi(\mathcal{N}^{\ell}_t(\textbf{x}))^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] \le \mathcal{C}_\ell \frac{(1 + \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2)}{2} \end{align*} where $$ \mathcal{C}_\ell = \frac{\mu_{\ell,2}'-\sigma_\ell^2}{N_{\ell-1}+1} + \sigma_\ell^2 + \frac{2\sqrt{2}\mu_{\ell,1}'\sigma_w}{(N_{\ell-1}+1)\sqrt{\pi}} +\frac{\mu_{\ell,2}'}{N_{\ell-1}+1}. $$ Since $\mathcal{C}_\ell$ is independent of $t$, $$E[\|\phi(\mathcal{N}^{\ell}(\textbf{x}))\|^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] \le N_\ell \mathcal{C}_\ell \frac{(1 + \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2)}{2}. $$ It then follows from Equation~\ref{app:length-map-main-eqn}, \begin{align*} 1+ \|\phi(\mathcal{N}^{\ell-1}(\textbf{x}))\|^2 =\frac{N_{\ell}+1}{\sigma_\ell^2N_{\ell} + \mu_{\ell,2}'} E\left[q^{\ell}(\textbf{x}) | \mathcal{N}^{\ell-1}(\textbf{x}) \right] \end{align*} that \begin{align*} E[\|\phi(\mathcal{N}^{\ell}(\textbf{x}))\|^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] \le \frac{\mathcal{C}N_{\ell}(N_{\ell}+1)}{2(\sigma_\ell^2N_{\ell}+ \mu_{\ell,2}')} E\left[q^{\ell}(\textbf{x}) | \mathcal{N}^{\ell-1}(\textbf{x}) \right]. \end{align*} Thus we have \begin{align*} E\left[q^{\ell+1}(\textbf{x}) | \mathcal{N}^{\ell-1}(\textbf{x}) \right] &= \frac{\sigma_{\ell+1}^2N_{\ell+1} + \mu_{\ell+1,2}'}{N_{\ell+1}+1}\left(1+ E[\|\phi(\mathcal{N}^{\ell}(\textbf{x}))\|^2 | \mathcal{N}^{\ell-1}(\textbf{x}) ] \right) \\ &\le \sigma_{b,\ell}^2 + \frac{\mathcal{A}_{\ell,upp}}{2} E\left[q^{\ell}(\textbf{x}) | \mathcal{N}^{\ell-1}(\textbf{x}) \right] \end{align*} where \begin{align*} \sigma_{b,\ell}^2 = \frac{\sigma_{\ell+1}^2N_{\ell+1} + \mu_{\ell+1,2}'}{N_{\ell+1}+1}, \qquad \mathcal{A}_{\ell,upp} = \frac{\sigma_{b,\ell}^2 N_{\ell}(N_{\ell}+1)}{\sigma_\ell^2N_{\ell} + \mu_{\ell,2}'}\mathcal{C}_\ell. \end{align*} By taking expectation with respect to $\mathcal{N}^{\ell-1}(\textbf{x})$, we obtain \begin{align*} E[q^{\ell+1}(\textbf{x})]&\le \frac{\mathcal{A}_{\ell,upp}}{2} E[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2 \end{align*} The lower bound can be obtained by dropping the second and the third terms in Equation~\ref{app:thm:2ndmo-asyminit-eqn1}. Thus \begin{align*} \frac{\mathcal{A}_{\ell,low}}{2} E[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2 \le E[q^{\ell+1}(\textbf{x})] \le \frac{\mathcal{A}_{\ell,upp}}{2} E[q^{\ell}(\textbf{x})] + \sigma_{b,\ell}^2 \end{align*} where $$ \mathcal{A}_{\ell,low} = \frac{\sigma_{\ell+1}^2N_{\ell+1} + \mu_{\ell+1,2}'}{N_{\ell+1}+1} \frac{N_{\ell}+1}{\sigma_\ell^2N_{\ell} + \mu_{\ell,2}'} \left(\frac{\mu_{\ell,2}'N_\ell-\sigma_w^2}{N_{\ell-1}+1} + \sigma_w^2\right). $$ \end{proof}
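As a complement to the bounds above, the following minimal Monte Carlo sketch (ours, not part of the original proofs; the network widths, the value of $\sigma_w^2$, and the choice $\text{P}=\text{Uniform}(0,M)$ with $M=1$ are purely illustrative) estimates how often a deep, narrow ReLU network collapses to the zero activation vector at some hidden layer for a fixed nonzero input, i.e., the event that, by Theorem~\ref{thm:nn2mean}, forces a gradient-trained network to remain a constant function. It compares a symmetric Gaussian initialization with the asymmetric initialization of Subsection~\ref{subsec:asyminit}, in which one uniformly chosen entry of each row of $[\bm{W}^\ell, \bm{b}^\ell]$ is drawn from a positive distribution on $[0,M]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def collapses(x, widths, sigma_w2=2.0, asymmetric=False, M=1.0):
    """Propagate x through a randomly initialized ReLU net; return True if
    some hidden layer's activation vector is identically zero."""
    a = x
    for N_out in widths:
        N_in = a.shape[0]
        # rows of V = [W, b]; biases are drawn at the same scale here for simplicity
        V = rng.normal(scale=np.sqrt(sigma_w2 / N_in), size=(N_out, N_in + 1))
        if asymmetric:
            # one uniformly chosen entry per row is redrawn from a positive
            # distribution on [0, M] (Uniform(0, M) is an illustrative choice)
            k = rng.integers(0, N_in + 1, size=N_out)
            V[np.arange(N_out), k] = rng.uniform(0.0, M, size=N_out)
        a = np.maximum(V @ np.append(a, 1.0), 0.0)  # ReLU(W a + b)
        if not a.any():
            return True
    return False

x = np.array([1.0, -0.5, 0.3])   # a fixed nonzero input
widths = [4] * 30                # a deep, narrow architecture (illustrative)
trials = 2000
for asym in (False, True):
    hits = sum(collapses(x, widths, asymmetric=asym) for _ in range(trials))
    print("asymmetric" if asym else "symmetric", hits / trials)
\end{verbatim}
With choices of this kind the symmetric initialization collapses in a large fraction of trials while the asymmetric one rarely does, which is the qualitative behavior that the probability and second-moment bounds above describe.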
\section{Introduction} Chimeras in non-locally coupled oscillator populations are spectacular patterns combining synchronous and asynchronous patches. Since the first observation by Kuramoto and Battogtokh (KB)~\cite{Kuramoto-Battogtokh-02}, significant progress has been achieved in the theoretical and experimental exploration of chimeras, see recent reviews~\cite{Panaggio-Abrams-15,Omelchenko-18}. On a microscopic level, the KB-chimera demonstrates coexistence of ordered and disordered domains: neighboring units are either fully synchronized or partially correlated. On a mesoscopic level, when one introduces a coarse-grained order parameter, a chimera constitutes a nonhomogeneous pattern with a continuous profile of the complex order parameter; the latter has modulus one in synchronized domains and is less than one in disordered regions. On a macroscopic level, a chimera can be treated as an inhomogeneous oscillating structure. In a homogeneous medium, because of the invariance to shifts in space and time, a chimera pattern is expected to be sensitive to a weak inhomogeneity in space and to a small time-dependent forcing. The former option was explored in refs.~\cite{Bick-Martens-15,Ruzzene_etal-19}, where it has been demonstrated that a weak inhomogeneity controls the chimera's position in space. In this paper we focus on a time-periodic, spatially uniform forcing. The main guiding point is that the chimera as a whole is an oscillating object, and thus, similar to simple self-sustained oscillators, can be phase locked or frequency entrained. In particular, two coupled chimeras can synchronize in the sense of entrainment of their mean frequencies, while the internal inhomogeneous structure of order-disorder is preserved~\cite{Andrzejak_etal-18}. In this paper we explore synchronization properties of chimera patterns subject to a periodic external force. We employ the reduction approach of \cite{Smirnov-Osipov-Pikovsky-17}, and formulate the problem of finding chimera patterns locked to the external field as a problem of finding periodic orbits in a system of ordinary differential equations (ODEs). At this macroscopic level, we determine chimeras with stable and unstable phase shifts to the forcing, and regions of their existence (``Arnold tongues'' (AT)) on the plane of forcing parameters ``amplitude -- frequency''. On the mesoscopic level, locked chimeras can be stationary, breathing, or turbulent, and we characterize these states through distributions of the oscillators' mean frequencies. Outside of the locked region, interesting microscopic patterns appear, where some subgroups of oscillators are mutually entrained, and some of them are entrained by the external force (while the chimera as a whole is not). As a basic model we use the KB setup~\cite{Kuramoto-Battogtokh-02} and consider a one-dimensional (1D) oscillatory medium of length $L$, enclosed in a ring and described by phases $\varphi(x,t)$ which are coupled non-locally. This medium is additionally subject to a uniform force with amplitude $\varepsilon$ and frequency{\,}$\Omega$: \begin{equation} \!\partial_{t}\varphi\!=\!\IM\!\left(\! e^{-\imath(\varphi+\alpha)}\! \int_0^L\!\!\!G(x\!-\!\tilde x)e^{\imath\varphi(\tilde x,t)}\mathrm{d} \tilde x + \varepsilon e^{\imath(\Omega t - \varphi)} \!\right)\!.\!\!\!\!
\label{eq:be} \end{equation} Here the kernel $G(x)={\cosh\bigl(|x|-L/2\bigr)}\bigl/{2\sinh\bigl(L/2\bigr)}\bigr.$ is a regularized (for periodic boundary conditions) version of the exponential kernel used by KB~\cite{Kuramoto-Battogtokh-02,Smirnov-Osipov-Pikovsky-17} [this kernel is the inverse of the operator $(\partial_{xx}-1)$ in the periodic domain while the exponential kernel is the inverse in the infinite domain, cf. Eq.~\eqref{eq_pdeH} below]. \looseness=-1 The first step in the analysis is the introduction of the coarse-grained order parameter $Z(x,t) \!=\! \langle e^{\imath\varphi(x,t)} \rangle_{\text{loc}}$ by using the procedure of averaging over a small vicinity of the point $x$~\cite{Ott-Antonsen-08,Laing-09,Bordyugov-Pikovsky-Rosenblum-10,Smirnov-Osipov-Pikovsky-17}. Physically, the amplitude of this complex function (which satisfies the inequality $|Z(x,t)| \leq {1}$) fully describes the level of synchrony of the phases $\varphi(x,t)$ in a small neighborhood of the point $x$. In regions where $|Z(x,t)|={1}$, the neighboring elements move synchronously. When $|Z(x,t)|<{1}$ the neighboring phase oscillators rotate asynchronously. The complex order parameter $Z(x,t)$ obeys the Ott-Antonsen equation~\cite{Ott-Antonsen-08,Laing-09,Bordyugov-Pikovsky-Rosenblum-10,Smirnov-Osipov-Pikovsky-17} \begin{gather} \partial_t Z = (e^{-\imath \alpha}H - e^{\imath \alpha}H^{*}Z^2) / 2,\label{eq_Z}\\ H(x,t) = \varepsilon e^{\imath (\Omega t + \alpha)}+\!\int_{0}^{L}\!\!\!G(x-\tilde{x}) Z(\tilde{x},t)\dd\tilde{x}. \label{eq_IntH} \end{gather} Equation \eqref{eq_IntH} can be equivalently written as a partial differential equation (PDE) \begin{equation} \partial_{xx}^2 H-H = -Z-\varepsilon e^{\imath (\Omega t + \alpha)}. \label{eq_pdeH} \end{equation} Thus, the problem of finding forced chimera states can be formulated as that of finding nontrivial patterns in the system of PDEs~\eqref{eq_Z},\eqref{eq_pdeH}. We look for patterns uniformly rotating with the frequency of forcing: $Z(x,t)=z(x)e^{\imath \Omega t}$, $H(x,t)=h(x)e^{\imath \Omega t}$. This yields a system of equations for the spatial profiles \begin{equation} e^{\imath \alpha}h^{*}z^2+2\imath\Omega z-e^{-\imath\alpha}h=0,\quad h''-h=-z-\varepsilon e^{\imath \alpha}. \label{eq_h_z} \end{equation} Hereafter, primes on functions of the variable $x$ denote derivatives with respect to the spatial coordinate $x$. Expressing $z$ through $h$ from the first (algebraic) equation (where one of the two roots is chosen according to the local stability condition; see the short derivation below), we obtain the final 4-th order ODE system \begin{equation} h''-h=\displaystyle \imath \dfrac{\Omega + \sqrt{\Omega^2 - |h|^2}}{e^{\imath \alpha}h^{*}}-\varepsilon e^{\imath\alpha}. \label{eq_h} \end{equation} This equation for the autonomous case ($\varepsilon=0$) has been explored in \cite{Smirnov-Osipov-Pikovsky-17} to find free (unforced) chimera patterns. The main difference from the free case is that the forcing term $\varepsilon$ breaks the phase shift invariance $\theta\!\to\!\theta\!+\!\theta_0$, where $\theta\!=\!\text{arg}(h)$. The latter invariance allowed us to reduce Eq.~\eqref{eq_h} to a 3-dimensional system in the absence of forcing, but now the full 4-th order system is to be\,considered. Our strategy (described in more detail in \footnote{see Supplementary Material (SM)}) is to find, for each pair of values $\varepsilon,$ $\Omega$, a symmetric periodic solution of~\eqref{eq_h} starting from a state $|h|(0)$, $\theta(0)$, $|h|'(0)=0$, $\theta'(0)\!=\!0$.
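For the reader's convenience, the elementary algebra behind Eq.~\eqref{eq_h} is the following (this is a plain reconstruction from Eq.~\eqref{eq_h_z}, involving no additional assumptions). The first relation in Eq.~\eqref{eq_h_z} is a quadratic equation in $z$ with coefficients depending on $h$, so the quadratic formula gives
\begin{equation*}
z_{\pm}=\frac{-\imath\Omega\pm\sqrt{|h|^{2}-\Omega^{2}}}{e^{\imath \alpha}h^{*}}
=-\imath\,\frac{\Omega\mp\sqrt{\Omega^{2}-|h|^{2}}}{e^{\imath \alpha}h^{*}}\,,
\end{equation*}
where in the last step we wrote $\sqrt{|h|^{2}-\Omega^{2}}=\imath\sqrt{\Omega^{2}-|h|^{2}}$. Substituting the root $z=-\imath\bigl(\Omega+\sqrt{\Omega^{2}-|h|^{2}}\bigr)\bigl/\bigl(e^{\imath \alpha}h^{*}\bigr)$ (the branch chosen according to the local stability condition) into the second relation $h''-h=-z-\varepsilon e^{\imath \alpha}$ immediately yields Eq.~\eqref{eq_h}.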
The initial value $|h|(0)$ is adjusted to find a periodic solution, whose period $L$ will depend on the initial phase $\theta(0)$. As the dependence $L(\theta(0))$ is a $2\pi$-periodic function of $\theta(0)$, in a certain range of system lengths $L_{\min}<L<L_{\max}$ we have at least two solutions for each $L$ (in fact, in all cases presented below this number is exactly two because we consider only 1:1 locking). We illustrate this in Fig.~1 of SM. The next step is determining $L_{\min}(\varepsilon, \Omega)$ and $L_{\max}(\varepsilon,\Omega)$ in a range of values of the forcing frequency $\Omega$, for fixed $\varepsilon$. These curves are also illustrated in Fig.~2 of SM. To find the borders of the AT, i.e. the synchronization region, for a fixed length of the medium $L$, we have to invert these dependencies to obtain $\Omega_{\min}(\varepsilon,L)$ and $\Omega_{\max}(\varepsilon,L)$. Then, the phase-locked solutions for a chimera pattern in the system of length $L$ exist in the AT defined as $\Omega_{\min}(\varepsilon,L)<\Omega<\Omega_{\max}(\varepsilon,L)$. As $\varepsilon\to 0$, this tongue shrinks to the frequency $\Omega_0(L)$ of the autonomous chimera pattern. \begin{figure}[t] \includegraphics[width = \columnwidth]{fig1.pdf} \caption{Direct numerical simulations of the set of $N=4096$ oscillators performed within the phase model~\eqref{eq:be} with the parameters $\varepsilon = 0.025$ and $\Omega = -0.86$ (a,d), $\Omega = -0.83$ (b,e), and $\Omega = -0.725$ (c,f). Panels (a,b,c): spatio-temporal plot of the phases in the reference frame rotating with an angular velocity $\Omega$. Panels (d,e,f): average frequencies of the elements (blue lines) together with the forcing frequency $\Omega$ (gray dotted line). In each case, the initial state was close to an autonomous single-cluster chimera state at the length $L \approx 4.41$ and the corresponding natural frequency $\Omega_{0}=-0.8$. For the situations depicted in panels (a) and (c), an external force was present during the full simulation time. In the case shown in panel (b), the force was switched on abruptly at an instant of time $t_0=500$ (black dashed straight line in panel (b)). A red dashed curve in panel (e) shows a profile of average frequencies of the oscillators for a free-running chimera state.} \label{fig:atst} \end{figure} \begin{figure}[!ht] \includegraphics[width = 0.8\columnwidth]{fig2.pdf} \caption{The dynamics of the phase of an oscillator from the subgroup of\,Fig.\,\ref{fig:atst}(c), entrained by the external force, $x\!=\!0.64$. } \label{fig:ph} \end{figure} Additionally, we have to
check general stability of the found patterns. To this end one linearizes the full system~\eqref{eq_Z},~\eqref{eq_pdeH} and solves the eigenvalue problem. Practically, this can be done via a finite-difference representation described in~\cite{Smirnov-Osipov-Pikovsky-17} and SM, allowing for reducing the problem to that of finding eigenvalues of large matrices. The resulting spectrum of eigenvalues has a continuous branch (essential spectrum) and discrete eigenvalues, which are responsible for possible instabilities (see detailed discussion in Refs.~\cite{Omelchenko-13,Xie_etal-14}). The stability analysis can be also applied to autonomous chimera patterns~\cite{Smirnov-Osipov-Pikovsky-17}, correspondingly one distinguishes stable and unstable chimeras. The latter solutions evolve typically into breathing (time-periodic)~\cite{Kemeth_etal-16,Bolotov_etal-17a,Suda-Okuda-18,Bolotov-Smirnov-Osipov-Pikovsky-18} or turbulent chimeras~\cite{Bordyugov-Pikovsky-Rosenblum-10,Bolotov_etal-17a}. Below we discuss the effect of periodic forcing on these states. We fix the value of parameter $\alpha=1.457$, so the only parameter of a free chimera to vary is the system length $L$; the natural frequency $\Omega_0$ is uniquely determined by $L$. Let us first consider the effect of external periodic forcing on a stable chimera. We exemplify this case with parameters $L\!=\!4.41$, $\Omega_0\!=\!-0.8$. Here, for small forcing amplitudes $\varepsilon \!\lesssim\! 0.05$ we obtain a standard AT on the plane $\varepsilon$, $\Omega$, inside which a locked chimera is stable (see Fig.\,3 of SM). The dynamics of locking is illustrated in Fig.\,\ref{fig:atst}(b). Here we show the phases of oscillators in a free-running state until time $t_0\!=\!500$, at which the forcing with frequency $\Omega \!=\! -0.83$ and amplitude $\varepsilon\!=\!0.025$ is applied. The phases are shown in the reference frame rotating with the external frequency $\Omega$, thus for $t\!<\!t_0$ one observes rotation of the phase in the synchronous domain. The effect of locking is evident by inspection of the phase difference between the external force and the coherent domain for $t\!>\!t_0$: in the locked state it is constant. This corresponds to the shift of the profile of average frequencies of all the oscillators (Fig.\,\ref{fig:atst}(e)): the whole profile shifts so that the frequency of the coherent domain becomes exactly the external one (depicted by the dotted line). \looseness=-1 Outside of the locking region (i.e. for a large mismatch between $\Omega$ and $\Omega_0$), one observes unlocked quasiperiodic regimes (Fig.~\ref{fig:atst}(a),(c)). For $\Omega<\Omega_0$, all the oscillators have frequencies larger than $\Omega$, one can clearly see the plateaus at the modulation frequency and its harmonics in the profile of average frequencies (Fig.~\ref{fig:atst}(d)). For $\Omega>\Omega_0$, the major part of the coherent oscillators have a frequency less than $\Omega$, but there exists another plateau exactly at the driving frequency (Fig.~\ref{fig:atst}(f)). Remarkably, the phases of these oscillators, in the reference frame rotating with $\Omega$, are not constants, but experience rather large variations (see Fig.~\ref{fig:ph}) -- nevertheless, they are perfectly frequency entrained by the force. The existence of the plateaus in the frequency profile resembles that for breathing chimeras~\cite{Kemeth_etal-16,Bolotov_etal-17a}. 
In the latter case, however, an extra modulation frequency appears due to instability of the stationary chimera; in our case the modulation frequency is due to an imperfect locking to the external field. \begin{figure}[ht] \includegraphics[width = \columnwidth]{fig3.pdf} \caption{Existence domains of locked chimera patterns (AT) for (a) $L \approx 6.854$ and (b) $L \approx 7.332$. Here the unforced chimeras, with natural frequencies (a) $\Omega_{0}=-0.69$ and (b) $\Omega_{0}=-0.68$, are unstable. Inside the AT, in region $A$ (blue color) a locked chimera is stable. In regions $B$,\,$C$ (red and yellow colors), all stationary chimeras are unstable, and the observed state is either turbulent in region $B$ (see Fig.~\ref{fig:atinst} below) or time-periodic (breathing chimera, region $C$).} \label{fig:oscd} \end{figure} \begin{figure}[!htb] \includegraphics[width = \columnwidth]{fig4.pdf} \caption{Synchronization of a breathing chimera: the spatial distribution of the phases $\varphi(x,t) - \Omega t$ (a), and the absolute value $\left|Z(x,t)\right|$ of the complex order parameter (b). Numerical simulations of the set of $N=8192$ oscillators were performed within the framework of the phase model~\eqref{eq:be} with $\varepsilon = 0.06$, $\Omega = -0.69$. The initial conditions were chosen in the form of a free breathing chimera state at the length $L \approx 6.854$ of the oscillatory medium. The force was switched on abruptly at an instant of time $t_0=1000$, as marked by a black dashed straight line. The coarse-grained order parameter $Z(x,t)$ was calculated via local averaging with a Gaussian kernel $\exp\left(-x^{2}\bigl/2\varsigma^{2}\bigr.\right)$, with $\varsigma=0.1$.} \label{fig:atbr} \end{figure} \begin{figure}[ht] \includegraphics[width = \columnwidth]{fig5.pdf} \caption{Dynamics of the amplitude of the local complex order parameter $\left|Z(x,t)\right|$ calculated as in Fig.~\ref{fig:atbr} according to the results of the direct numerical simulations of the set of $N=8192$ oscillators within the phase model~\eqref{eq:be} with the parameters (a) $\varepsilon = 0.03$, $\Omega = -0.68$, (b) $\varepsilon = 0.06$, $\Omega = -0.68$. The initial conditions were chosen in the form of a standing single-cluster chimera at the length $L \approx 7.332$ of the oscillatory medium. The force was switched on abruptly at an instant of time $t_0=1500$, which is marked by a black dashed straight line. One can see that a large enough forcing can stabilize and regularize the behaviour of the system, which evolves to a standard chimera regime (see panel (b)).} \label{fig:atinst} \end{figure} Next we consider how the periodic force affects a breathing chimera. The latter exists for parameters $L=6.854,\Omega_0=-0.69$. Here the stationary chimera state is weakly unstable with two complex eigenvalues having a positive real part. In the autonomous, unforced situation, a breathing, time-periodic state appears. Here the AT can also be constructed as described above; however, only in a part of this locked region is the constructed stationary chimera state stable (Fig.~\ref{fig:oscd}(a)). For very small forcing, the locked chimera inherits the instability of the autonomous chimera, and evolves into a breathing state (a tiny yellow region C close to the tongue tip in Fig.~\ref{fig:oscd}(a)). However, a large enough forcing can suppress this ``transversal'' instability, so that in a part of the AT (blue region A in Fig.~\ref{fig:oscd}(a)) a stationary phase-locked chimera is observed.
In Fig.~\ref{fig:atbr} this regime is shown with the phases (panel (a)) and with the order parameter (panel (b)). One can clearly observe the free breathing chimera up to time $t_0=1000$, at which the forcing is switched on. Then, for $t>t_0$, the mean frequency becomes locked by the force and the periodic modulations of the order parameter disappear, which means that a standard stationary chimera state is established. In part B of the AT, colored red in Fig.~\ref{fig:oscd}(a), the constructed stationary chimera state is unstable and evolves into a turbulent chimera. Finally, we discuss regularization of a turbulent chimera. The latter is observed for $L=7.332$, $\Omega_0=-0.68$. Here the instability of a free chimera solution is so strong that a disordered state where the local order parameter fluctuates in space and time is observed. The calculated AT is presented in Fig.~\ref{fig:oscd}(b). Again, the domain of existence of a locked stationary chimera looks like a standard triangular synchronization domain, but only in a relatively small part (blue region) is this solution stable. We illustrate this situation with the evolution of the order parameter in Fig.~\ref{fig:atinst}(b). A turbulent chimera is observed prior to the force onset time $t_0=1500$; under forcing it is transformed into a stable stationary chimera. For larger values of the driving frequency (red region in Fig.~\ref{fig:oscd}(b)), locked solutions in the presence of driving typically inherit the instability of the free chimera, so that turbulent states are observed also under periodic forcing (Fig.~\ref{fig:atinst}(a)). We stress here that for very large forcing amplitudes, an observed turbulent state is a transient one, resulting at large times in an absorbing, fully synchronized regime. Summarizing, we studied the effect of a periodic forcing on a chimera state in a one-dimensional medium. We have constructed stationary locked chimera patterns as periodic in space and time profiles via solutions of a proper ordinary differential equation. The simplest picture is observed if the free chimera is stable. Here the macroscopic effect of forcing on it is very similar to a general synchronization setup: there is a locking region (AT) within which the chimera is locked by the forcing, while outside of the AT a quasiperiodic dynamics is observed. Inside the AT no essential microscopic or mesoscopic effects are observed. Dynamics on the mesoscopic level becomes nontrivial outside the AT, where several plateaus corresponding to locking of subgroups of oscillators appear. On the microscopic level of the phase dynamics of individual units, nontrivial states with rather large deviations of the phase from the forcing one are observed despite frequency entrainment. Another effect not existing in simple synchronization setups is regularization of nonstationary chimeras, breathing or turbulent. Here, inside the AT there are subdomains, at sufficiently strong forcing, where the external forcing stabilizes a stationary chimera. On the contrary, in some domains a weakly nonstationary (breathing) chimera may become turbulent due to forcing. \acknowledgments This paper was supported by the Russian Science Foundation (grant No.\ 17-12-01534) and the Russian Foundation for Basic Research (grant No.\ 19-52-12053). AP was partially supported by the Laboratory of Topological Methods in Dynamics NRU HSE, of the Ministry of Science and Higher Education of the RF, grant ag. Nr 075-15-2019-193. The authors thank O. Omelchenko and R. G. Andrzejak for helpful discussions.
\section{Introduction} \begin{figure}[h] \begin{center} \includegraphics[width=.44\textwidth, angle=0]{DVCS_Re_2x2_nice_mod.eps} \hspace*{0.4cm} \includegraphics[width=.44\textwidth, angle=0]{DVCS_Im_2x2_nice_mod.eps} \hspace*{.5cm} \caption{The real (two left panels) and imaginary (two right panels) parts of the spacelike DVCS Compton Form Factor ${\cal H}$ multiplied by $\xi$, as a function of $\xi$ in GK (first and third panels) and MSTW (second and fourth panels) double distribution models, for $\mu_F^2=Q^2=4$ GeV$^2$ and $t =-0.1$ GeV$^2$. In all plots, the LO result is shown as the dotted line, the full NLO result by the solid line and the NLO result without the gluonic contribution as the dashed line. } \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=.12\textwidth, angle=270]{fig4aReTCSHgk.eps} \hspace*{0.4cm} \includegraphics[width=.12\textwidth, angle=270]{fig4bReTCSHmstw.eps} \hspace*{.5cm} \includegraphics[width=.12\textwidth, angle=270]{fig5aImTCSHgk.eps} \hspace*{0.4cm} \includegraphics[width=.12\textwidth, angle=270]{fig5bImTCSHmstw.eps} \caption{The real (two left panels) and imaginary (two right panels) parts of the timelike TCS Compton Form Factor ${\cal H}$ multiplied by $\eta$, as a function of $\eta$ in GK (first and third panels) and MSTW (second and fourth panels) double distribution models, for $\mu_F^2=Q^2=4$ GeV$^2$ and $t =-0.1$ GeV$^2$. } \end{center} \end{figure} \begin{figure}[h] \hspace*{2cm} \includegraphics[width= 0.06\textwidth,angle=270]{fig9abcDVCSJLABgk.eps} \vspace*{3cm} \caption{From left to right, the total DVCS cross section in pb/GeV$^4$, the difference of cross sections for opposite lepton helicities in pb/GeV$^4$, the corresponding asymmetry, all as a function of the usual $\phi$ angle (in Trento conventions \cite{TrentoC}) for $E_e$ = 11 GeV; $\mu_F^2$ = $Q^2$ = 4 GeV$^2$ and t = - 0.2 GeV$^2$. Curves correspond respectively to the pure Bethe-Heitler contribution (dashed), the Bethe Heitler + interference at LO (dotted) and the Bethe-Heitler + interference at NLO (solid). } \end{figure} \begin{figure}[h] \hspace*{3cm} \includegraphics[width=.4\textwidth, angle=0]{xsec_phidep.eps} \caption{ The $\phi$ dependence of the lepton pair photoproduction cross-section at $E_\gamma = 10$ GeV, $Q^2 = \mu ^2 = 4$~GeV$^2$, and $t= -0.1$~GeV$^2$ integrated over $\theta \in (\pi/4,3\pi/4)$: pure Bethe-Heitler contribution (dashed), Bethe-Heitler plus interference contribution at LO (dotted) and NLO (solid). } \end{figure} \begin{figure}[t] \hspace*{3cm} \includegraphics[width=.4\textwidth, angle=0]{R_mod.eps} \caption{The R ratio defined by Eq. 6 as a function of $\eta$, for $Q^2 = \mu_F^2 = 4$ GeV$^2$ and $t= -0.1$ GeV$^2$; the LO result is shown as the dotted line, the full NLO result by the solid line and the NLO result without the gluonic contribution as the dashed line.} \end{figure} Generalized parton distributions (GPDs) \cite{historyofDVCS, gpdrev} are a beautiful tool to access the 3-dimensional inner structure of hadrons \cite{Impact}. 
A necessary step to extract in a reliable way some information on quark and gluon GPDs is to study \cite{NLO} $O(\alpha_s)$ QCD contributions to the amplitude of spacelike Deeply Virtual Compton Scattering (DVCS) : \begin{equation} \gamma^*(q_{in}) N(P) \to \gamma(q_{out}) N'(P'=P+\Delta) \,,~q_{in}^2 =-Q^2,~q_{out}^2 =0,~t=\Delta^2,~\xi =\frac{Q^2}{(P+P')\cdot(q_{in}+q_{out})} \nonumber \,, \end{equation} and of its crossed reaction, timelike Compton scattering (TCS) : \begin{equation} \gamma(q_{in}) N(P)\to \gamma^*(q_{out}) N'(P'=P+\Delta)\,,~q_{in}^2 =0,~q_{out}^2 =Q^2,~t=\Delta^2,~ \eta =\frac{Q^2}{(P+P')\cdot(q_{in}+q_{out})} \nonumber \,. \end{equation} After factorization, the DVCS (and similarly TCS) amplitude is written in terms of Compton form factors (CFF) $\mathcal{H}$, $\mathcal{E}$ and $\widetilde {\mathcal{H}}$, $\widetilde {\mathcal{E}}$ as : \begin{eqnarray} \mathcal{A}^{\mu\nu}(\xi,t) = \frac{- e^2}{(P+P')^+}\, \bar{u}(P^{\prime}) \Big[\, && g_T^{\mu\nu} \, \Big( {\mathcal{H}(\xi,t)} \, \gamma^+ + {\mathcal{E}(\xi,t)} \, \frac{i \sigma^{+\rho}\Delta_{\rho}}{2 M} \Big) \nonumber\\ & +&i\epsilon_T^{\mu\nu}\, \Big( {\widetilde{\mathcal{H}}(\xi,t)} \, \gamma^+\gamma_5 + {\widet
ilde{\mathcal{E}}(\xi,t)} \, \frac{\Delta^{+}\gamma_5}{2 M} \Big) \,\Big]\, u(P) \, , \label{eq:amplCFF} \end{eqnarray} with the CFFs defined, for instance in the cases of $\mathcal{H}(\xi,t)$ and $\widetilde {\mathcal{H}}(\xi,t)$, as : \begin{eqnarray} \mathcal{H}(\xi,t) &=& + \int_{-1}^1 dx \, \left(\sum_q T^q(x,\xi)H^q(x,\xi,t) + T^g(x,\xi)H^g(x,\xi,t)\right) \; , \nonumber \\ \widetilde {\mathcal{H}}(\xi,t) &=& - \int_{-1}^1 dx \, \left(\sum_q \widetilde {T}^q(x,\xi)\widetilde {H}^q(x,\xi,t) +\widetilde {T}^g(x,\xi)\widetilde {H}^g(x,\xi,t)\right). \label{eq:CFF} \end{eqnarray} To estimate Compton Form Factors (CFF), we use the NLO calculations of the coefficient functions which have been calculated in the DVCS case in the early days of GPD studies and more recently for the TCS case \cite{NLO}, the two results being simply related thanks to the analyticity (in $Q^2$) properties of the amplitude \cite{MPSW}: \begin{eqnarray} ^{TCS}T(x,\eta) = \pm \left(^{DVCS}T(x,\xi=\eta) + i \pi C_{coll}(x,\xi = \eta)\right)^* \,, \label{eq:TCSvsDVCS} \end{eqnarray} where the $+$~$(-)$ sign corresponds to the vector (axial) case. \begin{figure}[t] \begin{center} \includegraphics[width= 0.4\textwidth]{DVCS_Q2_4_t_01_Re_Im.eps} \includegraphics[width= 0.4\textwidth]{TCS_Q2_4_t_01_Re_Im.eps} \caption{ The real (first and third columns) and imaginary (second and fourth columns) parts of the spacelike (first and second columns) Compton Form Factor $\xi \, \cal H$ and timelike (third and fourth columns) Compton Form Factor $\eta \, \cal H$ , for $\mu_F^2 = Q^2, Q^2/2, Q^2/3, Q^2/4$, from top to bottom, and for $Q^2=4$ GeV$^2$, $t= -0.1$ GeV$^2$ and $\alpha_s=0.3$.} \end{center} \end{figure} Our estimates are based on two GPD models based on Double Distributions (DDs), as discussed in detail in Ref. \cite{Moutarde:2013qs} : the Goloskokov-Kroll (GK) model and a model (MSTW) based on the MSTW08 PDF parametrization. Our conclusions do not depend strongly on the GPD model used. We get the results shown in Fig. 1 and Fig. 2 for the real and imaginary parts of the spacelike and timelike dominant CFF $\mathcal{H}(\xi,t) $ and $\mathcal{H}(\eta,t) $, when choosing the factorization scale at the {\em natural} value $\mu_F^2=Q^2$. Comparing dashed and solid lines leads to the surprising observation that gluonic contributions are so important that they even change the sign of the real part of the CFF, and are dominant for almost all values of the skewness parameter. A milder conclusion arises for the imaginary part of the CFF where the gluonic contribution remains sizeable for values of the skewness parameter up to $0.3$. Because of the competing Bethe Heitler mechanism which often dominates, the importance of NLO QCD corrections to observables depend on their sensitivity to the DVCS or TCS amplitudes. This is demonstrated in Fig. 3 in the DVCS case and in Fig. 4 and 5 for the TCS case. Note in particular the strong dependence of the ratio $R(\eta)$ defined \cite{BDP} as : \begin{equation} R(\eta)=\frac{2\int\limits_0^{2\pi}\;d\phi\;cos \phi\;\frac{d\sigma}{dQ^2\,dt\,d\phi}}{\int\limits_0^{2\pi}\;d\phi\frac{d\sigma}{dQ^2\,dt\,d\phi}}, \label{eq:Rratio} \end{equation} which is linear in the real part of the timelike CFF. The fact that both spacelike and timelike Compton form factors receive sizable NLO contributions may worry the reader; indeed one usually tries to resum large radiative corrections to stabilize a perturbative expansion. 
Although we explored somewhat this possibility \cite{APSW}, we would like to prevent the critical reader from drawing a hasty conclusion on the convergence rate of the perturbative QCD expansion of the amplitude based on our NLO results. Indeed, most of the NLO correction comes from the gluonic term, which does not exist at LO. The large NLO contribution is therefore more a signature of the large size of the gluonic GPD than of the slow rate of the expansion. The real rate of the QCD expansion cannot be accessed before the NNLO contributions are computed. Our only measure of the validity of the QCD expansion is the smallness of the NLO quark contribution to the amplitude, as exemplified by the proximity of the dotted and dashed lines on Fig. 1 and 2. Let us now turn to the factorization scale dependence of our results. There is no proven recipe to optimize the choice of the factorization scale in any QCD process. The question has been raised in several studies of inclusive and exclusive reactions but no definite strategy has yet emerged. In order to pave the way, we show on Fig. 6 the spacelike and timelike Compton form factor with the GK model, letting $\mu_F^2$ vary between $Q^2$ and $Q^2/4$. \vskip.3in \noindent {\bf Acknowledgments} \vskip.1in \noindent This work is partly supported by the Polish Grants NCN No DEC-2011/01/D/ST2/02069, by the Joint Research Activity "Study of Strongly Interacting Matter" (acronym HadronPhysics3, Grant Agreement n.283286) under the Seventh Framework Programme of the European Community, by the GDR 3034 PH-QCD, and the ANR-12-MONU-0008-01, and by the COPIN-IN2P3 Agreement.
\begin{center} \Large \textsc{Quantifying Theory in Politics: \\ Identification, Interpretation and the Role of Structural Methods} \bigskip \end{center} \begin{center} Nathan Canen and Kristopher Ramsay\\ \medskip \medskip \end{center} \thanks{\textbf{Canen:} University of Houston and Research Economist at NBER.\\ \textbf{Ramsay:} Princeton University. \\ We would like to thank Scott Ashworth, Andy Eggers, Sean Gailmard, Mike Gibilisco, Bobby Gulotty, Gleason Judd, Amanda Kennard, Korhan Kocak, Monika Nalepa, V\'{i}tor Possebom, Joseph Ruggiero, Kevin Song, Tara Slough, Dustin Tingley, Scott Tyson, participants at the University of Chicago Workshop on Uses of Formal Theory for comments. The ideas in this paper have been improved and shaped by numerous conversations over many years in various venues and meetings. } \begin{abstract} The best empirical research in political science clearly defines substantive parameters of interest, presents a set of assumptions that guarantee their identification, and uses an appropriate estimator. We argue for the importance of explicitly integrating rigorous theory into this process and focus on the advantages of doing so. By integrating theoretical structure into one's empirical strategy, researchers can quantify the effects of competing mechanisms, consider the ex-ante effects of new policies, extrapolate findings to new environments, estimate model-specific theoretical parameters, evaluate the fit of a theoretical model, and test competing models that aim to explain the same phenomena. As a guide to such a methodology, we provide an overview of structural estimation, including formal definitions, implementation suggestions, examples, and comparisons to other methods. \end{abstract} \medskip {\noindent \textsc{Keywords.} Quantitative Methods, Formal Theory, Identification, Structural Methods, Counterfactuals, Research Designs.} \medskip \section{Introduction} All quantitative empirical analysis is based on models. Counterfactuals are derived from models, as is the identification of parameters and interpretation of their estimates. Indeed, it is theoretical assumptions that bridge the gap between estimates and their interpretation \citep{lundberg2021whata}. The validity of such assumptions, and the choice of which specification to use, is necessarily a result of the connection between the empirical strategy and underlying theory, whether motivated formally or qualitatively.
Thus, it is critical to think carefully about the underlying model. We provide a framework for empirical analysis grounded in this observation. Since all empirical quantities of interest are defined relative to some theoretical model, the framework can be used by any researcher. It does not depend on the use of `structural' or `reduced-form' methods. Our framework begins with defining an empirical target, which will depend on the research question and theoretical interest. Then, the researcher specifies their underlying modeling assumptions and specification. Together, these allow the researcher to interpret their findings and can guarantee statistical identification for their parameter(s) of interest. Finally, the researcher proposes an estimator to estimate the desired parameter from a sample. To see why our framework is useful, consider the following example. Suppose a researcher has data on vote shares for party $p$ in district $d$ and on a district-level characteristic, $X_d$ (e.g., average education levels, share of women in the district, etc.), and decides to run the following linear regression: \begin{equation} \log(vote~share_{p,d}) = \alpha + \beta X_d + \varepsilon_{p,d}.\label{example_intro} \end{equation} When should this researcher interpret $\beta$ as: (i) the slope of the linear best fit between the outcome and $X_d$, (ii) a (semi)-elasticity of $X_d$ on vote shares of $p$, (iii) the Average Treatment Effect from a policy which increased $X_d$, (iv) preferences of voters with characteristic $X_d$ for party $p$, (v) the effect from a counterfactual policy that would increase $X_d$? All five interpretations appear in the literature when discussing the results of this regression \citep{bartels2001presidential,gerber1998estimating, nadeau2001national, hansford2010estimating}, but the quantity in each case is defined relative to a different underlying model. Furthermore, these theoretical quantities cannot be differentiated from the estimates of (\ref{example_intro}); a short simulation sketch at the end of this passage makes the point concrete. And yet, the choice between (i)-(v) can be very consequential: each interpretation may yield vastly different theoretical, policy and welfare conclusions. In this way, the analysis is reliant on a theoretical model all the way down. Motivated by this, in the first part of this article, we elaborate on a framework for analysis that pays careful attention to the underlying model. This framework simply recognizes that all parameters of interest are defined relative to some model. That is, there is no atheoretical, or model-free, statistics. Quantitative analysis is models all the way down. The validity or usefulness of assumptions that allow statistical analysis, and the choice of which specification to use, is inherently due to theory. In the second part of the paper, starting with Section \ref{section_structural}, we discuss empirical methods for researchers who wish to identify and estimate parameters that are defined relative to a formal theoretical model of politics. Examples include preferences (e.g., ideologies), measures of welfare, or parameters governing agent behavior (e.g., the magnitude of strategic substitutability in the decision to go to war or attend a protest). This approach is often referred to as \textit{structural methods} or, loosely, \textit{structural estimation}. It uses a formalized mathematical theory, motivated by the political phenomenon of interest, as a foundation for the statistical model.
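To make the interpretation problem above concrete, here is a minimal simulation sketch (it is ours, not part of the original article; the data-generating processes and all numbers are hypothetical). It draws data from two different worlds, one in which $X_d$ has a genuine causal effect of $0.5$ on the log vote share, and one in which $X_d$ has no causal effect and the association is driven entirely by an unobserved district characteristic, calibrated so that the least-squares fit of Eq.~(\ref{example_intro}) returns essentially the same slope in both cases.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5_000  # hypothetical number of districts

def ols_slope(x, y):
    """Slope of the least-squares fit of y on [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# World 1: X_d causally shifts log vote shares with coefficient 0.5.
x1 = rng.normal(size=n)
y1 = 0.2 + 0.5 * x1 + rng.normal(scale=0.3, size=n)

# World 2: X_d has no causal effect; an unobserved confounder u drives both.
u = rng.normal(size=n)
x2 = u + rng.normal(scale=1.0, size=n)              # X_d correlated with u
y2 = 0.2 + 1.0 * u + rng.normal(scale=0.3, size=n)  # only u affects the outcome
# Here Cov(x2, y2)/Var(x2) = Var(u)/(Var(u) + 1) = 0.5, matching World 1.

print(ols_slope(x1, y1))  # ~0.5, and 0.5 IS the causal effect
print(ols_slope(x2, y2))  # ~0.5, but the causal effect of X_d is 0
\end{verbatim}
Both regressions print a slope near $0.5$; whether that number is read as a best-fit summary, a causal effect, or a counterfactual policy effect is settled by the maintained model, not by the estimate itself.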
For example, it may use a spatial model of legislative voting as a foundation for a statistical model for estimating legislator preferences or a contest model of war fighting between states to estimate the military returns to being a democracy. Structural methods seek to identify and estimate parameters that have a clear interpretation relative to a formal model (see \citealp{Keane10}). This differentiates it from alternative methods. However, contrary to conventional wisdom, structural methods do not have to be computationally intensive or nonlinear, though they can be. They do not have to be derived from a game-theoretic model. There is no inherent tension between structural approaches and experiments, whether lab-based, randomized or quasi-natural ones. In fact, there is structural research that utilizes all these types of data, as we describe in Section \ref{step_by_step}. However, by starting from a model motivated by the subject under study, the structural approach has at least four major benefits. First, structural parameters have a clear causal and theoretical interpretation, which is borne from the formal theoretical foundation. This implies that assumptions for their identification and interpretation are clear. Second, by specifying a clear theoretical domain, it can be possible to recover parameters that may not be identifiable (or observable) otherwise. An example being ideologies or preferences, enabling researchers to answer questions that otherwise could not find empirical quantification.\footnote{The difficulty of estimating preferences a-theoretically are discussed in detail in \citet{AbramsonForth} on conjoin experiments.} Third, the extrapolation of estimated effects to new settings is possible, and defined by the scope of the underlying theory, allowing a wide variety of counterfactual experiments and analysis. Finally, the evaluation of the model itself is of interest. The analyst can ask such questions as: how well does the theoretical model fit the data? Does this model do a better job than other models? What observations does the model explain well, and which does it not explain well? Is the model effective at predicting in sample? Is it effective out-of-sample? These benefits have substantial payoffs for political science as an empirical and theoretical discipline. Once one understands that all quantitative analysis is based on models, theoretical structure (i.e., theoretically motivated assumptions) and identification become complementary, and it becomes clear how those interested in the experimental and reduced-form traditions can fruitfully interact with empirically minded theorists. Second, providing theoretical structure to empirical analysis forces the researcher to focus on mechanisms, rather than effects, which facilitates engagement with the real world. Policy oriented empiricists are often skeptical of structural work due to its reliance on several theoretical assumptions. There is a widely held belief that results that follow from what appear to be minimalist or simple assumptions are more useful. The unease can come from skepticism regarding the behavioral and equilibrium assumptions that underlie formal models. However, if the goal is to produce research that can inform policy, it is important to investigate the mechanisms behind causal effects. 
Because of their lack of connection to behavior or equilibrium effects, reduced-form methods are limited in their ability to identify efficient ways to change an outcome, as well as to understand the consequences of policy changes and counterfactuals that scale \citep{al-ubaydli2017what, Heckman08}. But we are not completely ignorant of how the world works; theory provides insight that seems wasteful to discard. And discarding it would be especially wasteful for questions and environments with a dearth of data. Therefore, a researcher with an eye towards policy implications should be especially interested in pursuing a structural approach. Finally, and maybe most importantly for political science, a structural approach can facilitate cumulative science. We have observed that reduced-form studies struggle to generate cumulative knowledge, largely due to our lack of understanding about the external validity of specific results. Frequently, testing general theories solely with reduced-form methods leads us down the path of series after series of contradictory findings, with affirmative results found in one place and negative results in another, in a never-ending process, of which some examples are discussed below. This does not mean that either reduced-form result is incorrect. Instead, the problem is our poor understanding of external validity. In principle, although not always in practice, theoretically grounded structural models lend themselves to cumulative knowledge building since they are designed to be externally valid. Many scholars could then develop models, compare different models, and change or integrate them in specific attempts to understand features of politics that reduced-form practices cannot discover by construction. Due to these benefits, structural methods have been growing in prominence in political science and political economy, with a wide set of applications across fields. For instance, they have been used extensively in American Politics, with DW-Nominate \citep{poole1984presidential,bonica2013why} being a salient example; in Comparative Politics, to study the role of learning about democracy (\citealp{Abramson20}), voters' use of information in different contexts (e.g., \citealp{Kendall15, Cruz20}), or coalition formation in parliamentary democracies (e.g., \citealp{diermeier03}); and in International Relations, in the study of conflict and civil war (e.g., \citealp{signorino1999strategic,crisman-cox2018audience,Kenkel21}), among many others. However, many find it hard to implement and evaluate structural methods. This is because statistical procedures can be bespoke, and the analyst is often interested in evaluating the whole model or simply quantifying theoretical parameters, rather than testing a parameter's significance. This means that tools that look more like those found in machine learning, such as non-nested model tests, likelihood ratio tests, joint-significance tests, and cross-validation, are often more useful than t-tests. Also, standard errors describe the uncertainty surrounding estimated quantities rather than serving as criteria for rejecting a null hypothesis that was never posed. Section \ref{step_by_step} provides examples of the variety of ways scholars have used statistical techniques to estimate model parameters and proposes ways in which researchers, editors, and reviewers can evaluate whether the implementation of such methods in a paper is appropriate. We provide many examples from existing work in political science and political economy as we go. 
\section{Models Everywhere} We start from the somewhat trivial observation that all quantitative empirical analysis starts with a model. We could also call these theories, theoretical models, or models of a theory. In any case, the researcher adopts a set of untested assumptions about how the world generates data and what is observed. To fix ideas, consider the following common statistical models. \subsection{Linear Model}\label{section_linear} In the classical linear regression model, a researcher posits a linear relationship between an outcome, $Y_i \in \mathbb{R}$, and external variables (e.g., covariates) $X_i \in \mathbb{R}^n$, as: \begin{equation} Y_i = X_i'\beta_0 + \varepsilon_i,\label{linear_ex} \end{equation} where $\varepsilon_i$ is a random variable that is unobserved to the researcher, and $\beta \in \mathbb{R}^n$ are \textit{parameters}. Furthermore, it is assumed that $(Y_i, X_i)$ are i.i.d., that $\mathbb{E} X_i X_i'$ is invertible, and that $\mathbb{E}[\varepsilon_i \mid X_i] = 0$. The combination of the linear specification together with the assumptions on the distribution of unobservables and population counterparts to $X_i$ constitutes the model. The researcher assumes the model and then aims to estimate the value of the parameter, denoted $\beta_0$, that generates the observed data $(Y_i, X_i)$. Hence, the statistical model is defined by (\ref{linear_ex}) and its assumptions, while the researcher wants to learn the parameter of interest $\beta_0$ from the data. \subsection{Logit}\label{section_logit} In the Logit model, the researcher observes a binary outcome, $Y_i \in \{0,1\}$, and $X_i \in \mathbb{R}^n$, while assuming that: \begin{equation} Y_i = 1\{X_i'\gamma_0 + \varepsilon_i \geq 0\},\label{example_logit} \end{equation} where $1\{.\}$ denotes an indicator variable (i.e., it equals 1 if the condition in the brackets is satisfied, and 0 otherwise). Again, this model is fully defined by assumptions on population counterparts to $(X_i, \varepsilon_i)$. More precisely, $\varepsilon_i$ is assumed to follow a (standard) Logistic distribution with CDF $\Lambda(\cdot)$, so that $P(Y_i = 1 \mid X_i) = \Lambda(X_i'\gamma)$, with $\mathbb{E} X_i X_i'$ invertible. The researcher may wish to learn the value of $\gamma_0$ that generates the data. The model is defined for many values of $\gamma$, but the parameter of interest, $\gamma_0$, is the particular value which induces the observed distribution of $(Y_i, X_i)$, among many possible options. \subsection{ATE in the Potential Outcomes Framework} In the potential outcomes framework, researchers consider a situation where there is a treatment, denoted by $D_i\in \{0,1\}$, that is applied to units of interest and each unit has two potential outcomes, \begin{equation} Y_{Di}= \begin{cases} Y_{1i} & \textrm{if } i \textrm{ is treated}\\ Y_{0i} & \textrm{if } i \textrm{ is untreated.} \end{cases} \end{equation} The causal effect of $D_i$ on $i$ is \begin{equation} \tau_i=Y_{1i}-Y_{0i}. \end{equation} The model is defined such that $D_i Y_{1i}+(1-D_i)Y_{0i}$ is observed, that the treatment assignment is independent of the potential outcomes $\left((Y_{0i}, Y_{1i})\perp D_i\right)$ (e.g., due to randomization), and that there are no spillover effects \citep[p.140]{cunningham2021casual}. The researcher may be interested, for instance, in the Average Treatment Effect (ATE), $\tau_{ATE} = \mathbb{E} \tau_i$. 
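To make the preceding abstractions concrete, the following minimal Python sketch simulates data from the linear, Logit, and potential-outcomes models above and then recovers $\beta_0$ and $\tau_{ATE}$ from a large sample. It is purely illustrative and not part of the framework itself: the numerical parameter values are hypothetical, and the sample analogues used here anticipate the discussion of identification and estimation in the following sections.

\begin{verbatim}
# Minimal illustration; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear model: Y_i = X_i' beta_0 + eps_i, with E[eps_i | X_i] = 0.
beta_0 = np.array([1.0, -2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
Y = X @ beta_0 + rng.normal(size=n)
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)           # sample analogue of (E[XX'])^{-1} E[XY]

# Logit model: Y_i = 1{X_i' gamma_0 + eps_i >= 0}, eps_i standard Logistic.
gamma_0 = np.array([0.5, 1.5])
Y_logit = (X @ gamma_0 + rng.logistic(size=n) >= 0).astype(int)  # generated, not estimated here

# Potential outcomes: randomized binary treatment D_i, tau_i = Y_1i - Y_0i.
tau = 2.0                                              # homogeneous effect, for simplicity
Y0 = rng.normal(size=n)
Y1 = Y0 + tau
D = rng.integers(0, 2, size=n)                         # randomization: (Y_0i, Y_1i) independent of D_i
Y_obs = D * Y1 + (1 - D) * Y0                          # only one potential outcome is observed
ate_hat = Y_obs[D == 1].mean() - Y_obs[D == 0].mean()  # difference in means

print(beta_hat, ate_hat)   # close to (1, -2) and 2 in a large sample
\end{verbatim}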
\subsection{Ideological Voting in Legislatures}\label{model_ideo_spatial} Consider a prominent class of models in political science: multidimensional ideological voting. A researcher wants to measure politician ideologies after observing a series of roll-call votes $t=1,...,T$. Each politician $i$ has ideology $\gamma_i \in \mathbb{R}^n$ and chooses to vote ``Yes" ($Y_{i,t}=1$) or ``No" ($Y_{i,t}=0$) on each roll-call depending on whether the alternative policy $x_{t}$ gives them a higher utility than the status-quo policy ($q_t$). The preference of politician $i$ for policy $x_t$ is modeled as a random utility composed of a deterministic part, $U(x_t, \gamma_i)$, and a random part, $\varepsilon_{i,x,t}$. Hence, politician $i$ votes ``Yes" on roll call $t$ if: \begin{eqnarray} U(x_t, \gamma_i) + \varepsilon_{i,x,t} \geq U(q_t, \gamma_i) + \varepsilon_{i,q,t}. \end{eqnarray} \noindent If $U(k_t, \gamma_i)$ is a (negative) quadratic function of the distance, $-\|k_t - \gamma_i\|^2$, for $k_t \in \{q_t, x_t\}$ and $\varepsilon_{i,\cdot,t}$ is $i.i.d.$ Normal, then this is the model in \cite{CJR04} (absent party effects), \cite{Heckman97}, \cite{rivers03}, among others. If $U(k_t, \gamma_i)$ is a Gaussian function and $\varepsilon_{i,\cdot,t}$ is Normally distributed, then it is DW-Nominate \citep{carroll, boche}. \medskip All four examples reflect the essence of any empirical exercise. The researcher's objective is to learn the value of an underlying parameter ($\beta_0$, $\gamma_0$, $\{\gamma_i\}_{i}$, or $\tau_{ATE}$) from the realizations of a random vector ($(Y_i, X_i, D_i)$ here), having in mind a model (i.e., a specification and a series of assumptions, characterizing a ``class" of possible relationships between the variables in the data). In its most general form, an empirical (statistical) model is a collection of probability distributions over observable variables, derived from a theory of how the world generates the data, such that the distributions in the collection are indexed by parameter values and the true distribution is within that class. Statistical identification only makes sense \textit{after} defining a model. It is the promised logical conclusion of the statistical (theoretical) proposition for which the model is the hypothesis. \section{Theoretical Models and Identification} \label{identification} The term \emph{identification} is used in many ways in empirical work (\citealp{Lewbel19}). However, there is only one formal definition, which is related to a \textit{minimal requirement} for a well-defined model. Formally, a model is a triple $(\Theta, \gamma, \mathbb{P}_X)$ where $\Theta$ is the set of unobserved parameters and $\gamma$ is a mapping $\gamma: \Theta \to \mathbb{P}_X$, where $\mathbb{P}_X$ is the set of all joint distributions over observable random variables $X$. \begin{definition} A model $(\Theta, \gamma ,
\mathbb{P}_X)$ is identified if and only if, for every $(\theta, \tilde \theta) \in \Theta^2$, $\gamma(\theta)=\gamma(\tilde \theta)$ if and only if $\theta=\tilde \theta$ \citep{athey2002identification}. \end{definition} So $\theta$ is \textit{identified relative to a model} if there is a unique value that rationalizes the distribution of the data, i.e., there are no two different parameters $\theta$ and $\tilde \theta$ that, within the model, could induce the same distribution of data, $P_X \in \mathbb{P}_X$.\footnote{There are other types of identification, like partial or set identification, which have a similar flavor but only require the data to restrict the parameter to a set rather than pin down a single value. See \citet{Lewbel19} for a review of other forms of identification used in empirical analysis.} In the linear example above, this requires that only one value of $\beta$ (i.e., $\beta_0$) in the model of equation (\ref{linear_ex}) generates the distribution of the observables $(Y_i, X_i)$. \subsection{Identification in the Linear Model} Consider the model in Section \ref{section_linear}. The researcher wants to learn the true $\beta_0$. Under the stated assumptions of the model, we can write: \begin{eqnarray} \beta_0 = \left(\mathbb{E} X_i X_i'\right)^{-1} \mathbb{E} X_i Y_i. \label{linear} \end{eqnarray} As we can see, the left-hand side of (\ref{linear}) is the parameter of interest, while the right-hand side is a known function of the distribution of observable data (i.e., of the distribution of $(Y_i, X_i)$). Given the model, \textit{and if the researcher knew the distribution of observables} (including $\mathbb{E} X_i Y_i, \mathbb{E} X_i X_i'$), they could infer $\beta_0$ for sure. That is, there is no other value of $\beta$ that could generate the distribution of the data. Hence, $\beta_0$ is identified. Notice that if $\mathbb{E} X_i X_i'$ failed to be invertible (i.e., $X_i$ suffered from multicollinearity), then $\beta_0$ would not be identified. Indeed, there would be multiple values of $\beta$ that could generate the same data.\footnote{For instance, if $Y_i = \alpha + \beta_0 X_i + \varepsilon_i$, with $Var(X_i) = 0$, then this model would lead to the same joint distribution of $(Y, X)$ as $Y_i = \tilde \alpha + \varepsilon_i$, with $\tilde \alpha = \alpha + \beta_0 \mathbb{E} X_i$.} Note that multicollinearity here is a problem with the data-generating population model, not with the sample. Samples play no role in identification: whether a parameter is identified or not is a statement about the theory of the data-generating process and about what would be observed in a hypothetical infinite sample. \subsection{Logit}\label{section_logit_id} We now revisit Section \ref{section_logit}. Under the stated assumptions, \begin{eqnarray} \label{logit} \mathbb{E}[Y_i\mid X_i] &=& \Lambda(X_i'\gamma_0) \notag \\ \Lambda^{-1}(\mathbb{E}[Y_i \mid X_i]) &=& X_i'\gamma_0 \notag \\ \mathbb{E} X_i X_i' \gamma_0 &=& \mathbb{E} X_i \Lambda^{-1}(\mathbb{E}[Y_i \mid X_i]) \notag \\ \gamma_0 &=& (\mathbb{E}X_i X_i')^{-1} \mathbb{E} X_i \Lambda^{-1}(\mathbb{E}[Y_i \mid X_i]). \end{eqnarray} Again, the right-hand side is a known function of the moments of $(Y_i, X_i)$, so $\gamma_0$ is again identified. By comparison, suppose that $\varepsilon_i$ were Logistic but with scale parameter $\sigma \neq 1$. Dividing through by $\sigma$, the model $Y_i = 1\{X_i'(\gamma_0/\sigma) + \varepsilon_i/\sigma \geq 0\}$, in which $\varepsilon_i/\sigma$ is standard Logistic, would generate the same distribution of $(Y_i, X_i)$ as the one above. 
Hence, the researcher observing the distribution of $(Y_i, X_i)$ cannot know if the data came from the first model (with parameter $\gamma_0$) or from the second model (with parameter $\gamma_0/\sigma$). Thus, $\gamma_0$ would not be identified if the distribution of $\varepsilon_i$ were unknown. \subsection{Potential Outcomes and ATE} In the potential outcomes framework, we can write the observed $Y_i$ as \begin{equation} Y_i=Y_{0i}+(Y_{1i}-Y_{0i})D_i + \sum_{j\neq i} \rho_{ji} D_j, \end{equation} where the first two terms define the observed outcome for $i$ and the third is the spillover effect of the treatments of the other $j$ observations. Assuming no spillovers, so that $\rho_{ji}=0$ for all $j \neq i$, we have that \begin{gather*} \mathbb{E}[Y_i|D_i=1]-\mathbb{E}[Y_i|D_i=0]=\mathbb{E}[Y_{1i}|D_i=1]-\mathbb{E}[Y_{0i}|D_i=1] +\mathbb{E}[Y_{0i}|D_i=1]-\mathbb{E}[Y_{0i}|D_i=0]. \end{gather*} The latter term is the selection effect, which is zero when $Y_{0i}\perp D_i$, as guaranteed by the independence assumption. The independence assumption (or effect homogeneity) and the linearity of the expectation operator then give us that \[\mathbb{E}[Y_i|D_i=1]-\mathbb{E}[Y_i|D_i=0]=\mathbb{E}[Y_{1i}-Y_{0i}|D_i=1]= \mathbb{E}[\tau_i].\] If the no-spillovers assumption were violated, then there would be many $(\rho, \tau)$ pairs that would produce the same difference in observed $Y_i$, and the model would be unidentified. \subsection{Identification of Ideologies} We refer the reader to \cite{rivers03} for a careful discussion and proof of identification for a general dimension of preferences $n$ and the quadratic preferences $U(\cdot)$ covered in Section \ref{model_ideo_spatial}. While an intuition similar to the previous sections can be applied, the proofs are more subtle given the non-linearities and the large number of parameters (e.g., $n$ for each politician, plus $x_t, q_t$ for every roll-call). Meanwhile, \cite{CKT22} presents a careful discussion of identification in 1-dimensional DW-Nominate, and of the challenges with identification in 2-dimensional DW-Nominate (see Appendix B in that paper). \subsection{Important Messages on Identification} In this discussion of the notion of statistical identification, it is worth drawing attention to some key points. First, identification is always relative to a model. That is, there cannot exist an identification argument that holds ``atheoretically" or from ``data" alone. This is because the model allows us to define the parameters we are interested in, and requires us to be explicit about its underlying assumptions. Indeed, if one changed the theoretical modeling assumptions, there would be no guarantee that the parameter of interest would be identified. For instance, assume that we changed equation (\ref{linear_ex}) in Section \ref{section_linear} to include a new random variable on the right-hand side. Then, the original $\beta_0$ may fail to be identified. Similarly, $\beta_0$ would fail to be identified if we changed the exogeneity assumption to $\mathbb{E}[\varepsilon_i \mid X_i] \neq 0$. There is no reason why identification of a parameter would hold in a different model. Second, identification is always about population counterparts, and \textit{never} about samples. Suppose that a researcher knew everything they could ever wish for about the variables they observe (e.g., the whole distribution of $(Y_i, X_i)$ in Sections \ref{section_linear} and \ref{section_logit}). In the notation above, they know $P_X$ and, hence, all moments of the observable data. 
Identification asks whether the researcher could then infer the true value of the parameter from this known distribution. In other words, could they uniquely pin down which parameters generated the data we observe? If not, then the model is ``ill-defined": \textit{not even in this ideal world could you know which parameters generated that data.} Of course, we do not live in such an ideal world. We do not observe $P_X$; we observe a sample of $(Y_i, X_i)$. However, identification comes before even thinking about such a sample: if we could not find out the true parameter \textit{even if we knew the true population}, how can we expect to estimate anything close to $\theta_0$ having only a sample instead? Finally, in any empirical work, the researcher's goal is to identify their parameter of interest. In many models, some parameters might be identified while others are not. In this case, the researcher need only be sure that their parameters of interest are identified. It is not strictly necessary for \textit{all} parameters to be identified if some of them are not needed to answer the research question.\footnote{One example is that Probit coefficients are not identified when the error variance is left unrestricted rather than normalized to 1. However, the Average Partial Effect, $APE = \int \frac{\partial \mathbb{E}[Y_i \mid X_i = x_i]}{\partial x_i} dF_X$, where $F_X$ denotes the marginal distribution of $X_i$, is identified. See \cite{Marshak53}, \cite{Heckman10} for a discussion.} \subsection{Estimation} Estimation is the subsequent process of quantifying theoretical parameters from samples. An estimator $\hat \theta$ for $\theta_0$ is a function of the observed sample. In the linear example above, an estimator is any function of the sample $(Y_i, X_i)_{i=1}^n$. That is it. A common estimator in the linear model is the Ordinary Least Squares (OLS) estimator. But that is not the only one. For instance, a Maximum Likelihood Estimator (MLE) could be used, and so could an estimator that is just the first observation of $X_i$. Different estimators will perform ``better" or ``worse" depending on the model's assumptions. OLS is often used because of its excellent properties for the linear model, but such properties do not hold in other models. We remark on two important aspects of this definition. First, an estimator is always defined relative to a parameter. Otherwise, what is the estimator seeking to quantify? Second, an estimator is based on the sample, not the population. It is a function of realized data, not the ``ideal world" of identification. Hence, estimation can only come after identification. After all, it only makes sense to estimate $\theta_0$ once we know that it is statistically identified. Otherwise (i.e., if we could not pin $\theta_0$ down even if we knew everything we wished about the observable data), the estimator has no hope of providing a good approximation of $\theta_0$. Figure \ref{fig:model_id_est} below provides an illustration of these concepts. It is common for authors to confound identification and estimation in their presentation.\footnote{See, for example, \citet[p.581]{burke2015climate}. One common such mistake is to say ``estimate an OLS model". 
This confounds the estimator (OLS) with the model (linear) and the target parameter ($\beta_0$ in Section \ref{section_linear}) that is being estimated.} While usually harmless for the interpretation of empirical results, this can lead to confusion about which assumptions on the model and on observable population moments make the calculation of a parameter possible, versus what is necessary for a sample to provide a reliable estimate of this quantity. \begin{figure} \caption{Identification and Estimation} \centering \includegraphics[width=0.6\textwidth]{id_est_graph.png} \label{fig:model_id_est} \parbox[c]{6.2in}{% {\footnotesize{}Notes: A model specifies how parameters generate the distribution of $(Y_i, X_i)$. Identification asks how we can learn $\theta_0$ from the distribution of $(Y_i, X_i)$. Identification is about population features. When we have a sample (i.e., realizations of the distribution of $(Y_i, X_i)$), then we try to estimate $\theta_0$.}\bigskip{} }% \end{figure} \FloatBarrier \section{Theory, Interpretation, and Extrapolation\\ \normalsize (or, the Limits of Inference Without Theory)}\label{section_theory} The previous sections show how the technical elements of empirical work require models.\footnote{The title of this section is drawn from the excellent \cite{Wolpin13} book, from which we draw inspiration.} This conflicts with the widespread view that one can ``let the data speak for itself" or that a purely ``data-driven" analysis is possible. It can be very appealing to think that one's work does not require assumptions, and that one is free from theory in the interpretation of the results. Unfortunately, this is not the case. In addition to providing a technical foundation for a quantitative empirical exercise, theoretical models, whatever they may be, impart meaning and scope to the analysis. \subsection{Causality is a Theoretical Claim}\label{section_causal} An early discussion of the deep role of theoretical models goes back at least to \cite{Koopmans47}, who argued that one cannot even measure variables without theory. This point also extends to causal claims. The ``credibility revolution" made an enormous impact on empirical work in the social sciences. In particular, it brought much-needed attention to statistical identification and the need for care in empirical work. However, a byproduct of its popularity has been an interest in finding ``causal" effects with little attention to the meaning of the estimand \citep{lundberg2021whata}. Causality, whether derived from an equilibrium model or the Rubin causal model, only exists relative to the theoretical model. For example, return to the Logit model and equation (\ref{logit}), \begin{equation}\gamma_0 =(\mathbb{E}X_iX_i')^{-1} \mathbb{E}X_i \Lambda^{-1}(\mathbb{E}[Y_i|X_i]) \end{equation} where $\Lambda(\cdot)$ is the standard Logistic Cumulative Distribution Function and $X_i$ and $Y_i$ are random variables. This is a formula that maps numbers to numbers and has no intrinsic meaning. As shown above, it can be derived from a latent variable model with a Logistic distribution function. In that context, $Y_i$ is an indicator that equals 1 when $X_i'\gamma_0 + \varepsilon_i \geq 0$ and $0$ otherwise, and $\gamma_0$ is the coefficient describing how a unit change in $x$ causes a change in the log-odds of the outcome. But one could also come to this formula in at least two different ways. 
Suppose a researcher were interested in individual choice, and in a decision problem where an agent had to choose between two alternatives. A natural first step would be to consider a theory of decision-making where (i) the agent selects alternatives with a probability that is monotone in the alternative's value and (ii) the choice rule satisfies the principle that the consideration of new alternatives does not change the relative likelihood of choosing between two given alternatives, the latter being a form of statistical independence of irrelevant alternatives. \cite{luce59} shows that there is an axiomatic theoretical connection between this theory of decision-making and the Logit model. Specifically, if the value of an alternative is given by $v(x)=u(x)+\varepsilon$, and $\varepsilon$ has an Extreme Value Type 1 distribution, such a theoretically derived choice rule is consistent with probabilistic utility maximization and is equivalent to Logit. Here the $x$'s are attribute features and $\gamma_0$ is their effect on the valuation of the alternative. That is, the parameter is the effect of $x$ on the decision-maker's utility for selecting an alternative. In the context of strategic contests over a disputed good, the analysis in \citet{Kenkel21} shows that the standard Tullock model, with a natural parametrization of the marginal returns and marginal costs of effort, generates equilibrium strategies which are equivalent to a Logit probability model of winning the contest. That is, their strategic theoretical framework implies this same equation. In this case, $\gamma_0$ gives the marginal returns to effort from the observed factors. The point of this example - just like the first example in the Introduction - is that any time we want to travel the path between population parameters and meaning, the theoretical model is serving a second important purpose, a point carefully elaborated in \cite{Wolpin13}. It could be a statistical model, the Rubin causal model, or the equilibrium of a game. In any case, the meaning of $\gamma_0$ follows from the application of an underlying theoretical framework. An accurate way to understand structural methods, then, is as the aim to identify and estimate model-specific parameters in light of a substantive theoretical model. This may include preferences (e.g., ideologies), measures of welfare, parameters governing agent behavior (e.g., the magnitude of strategic substitutability in the decision to go to war), or policy evaluations which depend on a clearly defined model (e.g., the implementation of a policy that has never happened before). This is not just a philosophical point. One salient example within political science has been the discussion about whether extreme events (e.g., shark attacks or college football results) affect voter behavior \citep{achen2017democracy}. A positive finding is sometimes interpreted as evidence of voter irrationality - that is, why should voters change their opinions about politicians when faced with independent events?\footnote{See \cite{Graham22} and \cite{Fowler22} for recent results and an overview of this debate.} However, we can use theory to take a step back and guide our interpretation of this debate, an approach pursued in \cite{Fowler22}. In particular, it is entirely possible that the responsiveness of voter behavior to extreme events is still consistent with Bayesian learning (e.g., an extreme event may inform voters about politicians' behavior in relevant events) - see \cite{Ashworth18}. 
Hence, observing the linear model's estimates \textit{absent theory} is not sufficient to tell us what is being recovered. Rather, the researcher must stipulate whether it is reasonable to assume that even a rational voter could learn about an incumbent's type from shark attacks. \subsection{External Validity and Extrapolation are Theoretical} Current research in statistics is also concerned with the problems of external validity and extrapolation confronting much statistical analysis \citep{findley2021external, egami2022elements,hartman2021generalizing}. Explicitly using substantive theory can solve both of these problems. When it comes to external validity and extrapolation, the question researchers want to answer is: to what population do the results naturally generalize? Theoretical models are constructed explicitly to represent a class of well-defined events. For example, a structural model of elections in advanced democracies applies to advanced democracies, a structural model of interstate wars applies to interstate wars, and a structural model of congressional voting applies to congressional voting. Estimates from structural models are also deep parameters of the underlying theory. This allows the researcher to take those parameter values and utilize them in related but new environments. Extrapolation makes sense once the quantities estimated are critical primitives of the theoretical framework. \section{Structural Methods: A Primer}\label{section_structural} The previous sections lay out what is needed to execute and interpret any empirical analysis. None of these arguments or claims are limited to research that is structural. We now overview structural methods themselves. To do so, it is convenient to revisit the formal (historical) definition of structural analysis, relating it to the current use of the terms ``structural" and ``reduced-form". \subsection{``Structural" and ``Reduced-Form" Methods: A False Dichotomy} Structural methods are often contrasted with reduced-form methods. The latter are now associated with research designs such as Difference-in-Differences (DiD) and Regression Discontinuity Designs (RDD), while the former are more prevalent in papers with formal models.\footnote{However, such definitions are imprecise. Research designs like Randomized
they help us to make comparisons with the broad previous studies \cite{Trajtenberg90,Hall05,Wuchty07} that have used citations. Each patent in the datasets has a number of citations. Citations can only be discussed when used comparatively \cite{Hall00}. We also have to consider the skewness in the distributions, because old patents have a greater chance of being cited. Therefore, we normalized the number of citations by using the average number of citations for patents in the same year. In line with previous studies, we defined the impact of a patent as its number of normalized citations. Figures \ref{fig:impactInv} and \ref{fig:impactOrg} show the average impacts of solo and team-authored patents by inventors and companies. Each bar has an error bar that represents the standard error. The figures at left and right show the US and Japanese results, and the upper and lower figures are for inventors and companies. Figure \ref{fig:impactInv} clearly shows that team-authored patents have more impact. The same phenomenon was already discovered in a previous study \cite{Wuchty07}. As is the case for inventors, teams at companies (Figure \ref{fig:impactOrg}) have a greater impact for Japanese patents. However, we have to consider these facts carefully. The same measures in the US indicate 1.17 for solo companies and 1.23 for teams of companies (the left of Figure \ref{fig:impactOrg}). The results still show a gap, but it is apparently small. Therefore, we conducted a Wilcoxon test on them. The alternative hypothesis is that the impact of teams is greater than that of solo authors. The $p$ value is 0.0085. We can say that the impact of teams is significantly greater than that of solo authors in the US. A previous study predicted a similar phenomenon, in which inter-institutional papers have a greater impact than intra-institutional papers \cite{Katz97}. A part of these results has already been reported \cite{Inoue1202}, but the inventors and companies were not connected in that previous paper. Hence, the results were discussed separately. Since the data structure is different, the analyses in this paper needed to be verified. In the rest of this section, the author will discuss repetition in collaboration. Our previous paper discussed the same aspects, but the present data should also be analyzed because of the differences in the data. \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{avereageImpactInventorUS.eps} \end{center} \end{minipage} \hspace{12ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{avereageImpactInventorInterJP.eps} \end{center} \end{minipage} \caption{Average impact of inventors: The figures at left and right show US and JP. The two bars are for patents applied for by solo inventors and those applied for by teams of inventors. The error bars are standard errors.\label{fig:impactInv}} \end{figure*} \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{avereageImpactCorpUS.eps} \end{center} \end{minipage} \hspace{12ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{avereageImpactOrgInterJP.eps} \end{center} \end{minipage} \caption{Average impact of companies: The figures at left and right show US and JP. The two bars are for patents applied for by solo companies and those applied for by teams of companies. The error bars are standard errors.\label{fig:impactOrg}} \end{figure*} So far, we have simply discussed the average impact of links. 
However, these links have various characteristics, e.g., the geodesic distance between inventors, the attributes of the connected nodes, and the age of links. For example, the impact increases if there is a long geodesic distance between inventors or a different nationality \cite{Narin91}. Here, we consider the age of links. Figures \ref{fig:historyInv} and \ref{fig:historyOrg} present transitions in impacts between inventors and companies for the US and Japan. The vertical axis shows the average impact. The horizontal axis shows the number of repetitions by the same inventors (Figure \ref{fig:historyInv}) or the same organizations (Figure \ref{fig:historyOrg}). If inventors A and B apply for a patent together for the first time, there is one repetition. If another patent is applied for by the same inventors (or a group including the same inventors), there are two repetitions. All combinations and repetitions between inventors are counted and averaged. In addition, the horizontal axis has logarithmic bins. We used logarithmic bins because the number of samples decreases as the number of repetitions increases. The base of the logarithm is 4. The ``2-4'' bin means the average of all patents that have from two to four repetitions. Apparently, the impacts decrease as the repetitions continue in both figures. No samples fall into the ``257-1024'' category in the US. Figure \ref{fig:historyOrg} presents the transitions in impacts for companies in the US and Japan. The Japanese distribution also declines overall as repetitions continue, but there is a peak at ``5-16'' repetitions. This means that experience works more positively at companies than with inventors. The distribution in the US does not simply decline either, and looks flat. There are no data for ``257-1024'' repetitions. There is a difference between the US and Japan. Inter-company joint applications are generally the second-best outcome for collaborations at companies \cite{Hagedoorn03}. Therefore, we can understand the flat impact that inter-company histories in the US have. However, the situation in Japan is totally different from that in the United States. {\it Keiretsu} (the relationship between Japanese parent and dependent companies) may affect the phenomenon. However, {\it keiretsu} cannot explain why inter-company patents have a greater impact than those of solo companies. The phenomenon indicates that not only individuals but also organizations demonstrate collective behavior in developing inventions. Legal systems, corporate cultures, or other factors may cause the phenomenon, but this requires more investigation. We also find, by comparing the historical impact of inventors and companies, that inter-inventor relationships and inter-company relationships do not exhibit the same transitions. As we mentioned earlier, inventors' relationships just decline. In contrast, companies' relationships do not exhibit such simple behavior. If we consider that all inter-company activities result from inter-inventor activities, this difference means that different inventors share and take over inter-company relationships. \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.5]{avereageImpactHistoryInventorUSBase4.eps} \end{center} \end{minipage} \hspace{8ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.5]{avereageImpactHistoryInventorJPBase4.eps} \end{center} \end{minipage} \caption{Average impact of repetitions between inventors: The figures at left and right show US and JP. 
Each bar is the average impact for the corresponding range of repetitions. The error bars are standard errors.\label{fig:historyInv}} \end{figure*} \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.5]{avereageImpactHistoryCorpUSBase4.eps} \end{center} \end{minipage} \hspace{8ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.5]{avereageImpactHistoryOrgJPBase4.eps} \end{center} \end{minipage} \caption{Average impact of repetitions between companies: The figures at left and right show US and JP. Each bar is the average impact for the corresponding range of repetitions. The error bars are standard errors.\label{fig:historyOrg}} \end{figure*} Here, the importance of layered networks is discussed. The impact of patents varies with both inventors' and companies' links: inter-company patents perform better than those of solo companies, while the impact declines as repetitions between inventors increase. It is also important that most patents are applied for by two or more inventors. Table \ref{tbl:cross} was prepared to look into and support this discussion. It is a cross table between inventors' and companies' repetitions for Japan. The table contains the repetition categories ``1'' and ``2-10'' for each network. This range is just an example to explain the importance of layered networks. All patents in the table are inter-company ones. For instance, if a patent is the first patent between two inventors and two companies, it is categorized in the upper-left cell. If a patent is the first one for its inventors, but other inventors in the same companies have already applied for one or more (up to nine) patents, the patent is counted in the lower-left cell of the table. The meaning of the lower-right cell is now clear. By definition, there are no patents in the upper-right cell. All of these impacts are significantly different from one another according to non-parametric tests on the underlying populations. The table directly illustrates the importance of layered networks. From the viewpoint of inter-company links, the average impact of ``2-10'' repetitions varies significantly according to the repetitions by inventors. In the same way, from the viewpoint of inter-inventor links, the average impact of ``1'' repetition again varies significantly according to the repetitions by the companies. Therefore, the impact of patents is affected by both inventors' and companies' histories. If we had a model that could predict the probability that individual combinations of inventors or companies become connected and could correctly replicate the observed networks, it would greatly help us to think about invention strategies at companies, especially how to promote inter-inventor or inter-company inventions. From the discussion in this section, the importance of considering the layered networks of inventors and companies, and of creating a replication model, is now clear. The author will discuss a replication model in the next section. \begin{table*}[t] \caption{Cross repetition table between inventors and companies: This is a cross table of repetitions that shows three specific cases. Each inter-company patent has a number of repetitions for the inventors and companies involved. Each cell shows the average impact of their patents. 
The upper-right cell is blank because if more than one patent is applied for by inventors working for different companies, then more than one inter-company repetition has occurred.\label{tbl:cross}} \begin{center} \begin{tabular}{ll|r|r} & & \multicolumn{2}{l}{Inter-inventor repetition} \\ & & \multicolumn{1}{l}{1}\hspace{8ex} & \multicolumn{1}{|l}{2-10} \\ \hline Inter-company repetition & 1 & 3.27 & n/a \\ \cline{2-4} \hspace{14ex} & 2-10 & 3.73 & 2.92\\ \end{tabular} \end{center} \end{table*} \section{Model} \label{cha:model} On the basis of the observations so far, the author proposes a model to replicate the observed networks. The author especially aims to replicate the degree distributions of the networks, because the degree distributions directly indicate the topology, which is exactly in line with the objectives mentioned in the introduction of this paper. As mentioned earlier, although US and Japanese patents probably have similar structures, joint applications between US companies are not frequent, and therefore insufficient data have been accumulated to discuss the structure and its model. Hence, only Japanese patent data are used in the rest of this paper. This paper focuses on two-layered networks that involve people and companies. There have been many generative models for networks \cite{Albert02}. To replicate the networks in this paper, a generative model has to: (1) explicitly assign a group (an organization, a community, or a company) to each node (person) of a replicated network, and (2) replicate not only nodes but also groups. Gr\"onlund et al. proposed a modified seceder model to illustrate real social networks \cite{Gronlund04}. Jin et al.'s model was based on the dynamics of how actual people meet \cite{Jin01}. Bogu\~{n}\'a introduced the concept of social distance and found models that could reproduce real social networks \cite{Boguna04}. Those models replicated the formation of groups in observed networks. Since these studies discussed models and the formation of groups, they seem quite similar to the proposed model. However, in those studies the formation of groups was measured only after the networks were replicated, using group-detection methods \cite{Girvan02,Newman0402,Radicchi04}. As previously mentioned, the proposed model has to explicitly provide a group to each node (item (1) above). Therefore, these studies are different from this study. There are some models that provide groups to nodes beforehand when they reproduce networks. Motter et al. considered the correlation of friendships, the positions in groups, and the correlation of positions
in groups \cite{Motter03}. Kimura et al. demonstrated that their model improved the prediction of real networks by incorporating directional attachments and community structures \cite{Kimura04}. These models also seem quite similar to the proposed model, yet there are some differences. That is because their organizational structure was fixed and did not grow (item (2) above). Goldstein et al. proposed a group-based Yule model \cite{Goldstein05}. Their model satisfied both items, (1) and (2). However, when replicating the networks with their model was attempted, it was not successful because the model could not replicate the observed data well for either the inventors' or the companies' networks. In particular, the tail of the inventors' distribution had a linear shape in the model. Also, the head of the companies' distribution was curvilinear, which was not similar to the observed distribution. Li and Chen also analyzed a theoretical model that satisfied both items. They showed that the degree distribution of the model was a power law for both nodes and groups \cite{Li05}. As explained in the previous section, the observed degree distribution of the nodes was not a power law. Therefore, their model could not be applied either. The papers above constitute a survey of the relevant studies. Since there are no existing models that can reproduce the network data in this study, we need a new model. \subsection{Guimera et al.'s model} \label{cha:guimera} The proposed model is based on Guimera et al.'s model \cite{Guimera05}, which aims to replicate the self-assembly of creative teams and has three parameters: the team size ($m$), the probability that a team member is drawn from the pool of incumbents ($p$), and the tendency of incumbents to repeat previous collaborations ($q$). The model has an endless pool of newcomers, and newcomers become incumbents after being selected. The model adds members to a team according to $m$. With probability $p$, a member is drawn from the pool of incumbents; otherwise, a newcomer is created. If a member has been drawn from the pool of incumbents and there is another incumbent who is already connected to a current team member but has not yet been chosen, then, with probability $q$, the new member is chosen from among those past collaborators; otherwise, the member is chosen from all the incumbents. Figure \ref{fig:guimeraProcess} outlines how the model progresses. The process is repeated $m$ times for each team. \begin{figure*}[b] \begin{center} \includegraphics[scale=0.5]{guimeraProcess.eps} \caption{Process in Guimera et al.'s model: The process is executed once for each member of a team. An incumbent is randomly chosen with probability $p$; otherwise, a newcomer is created. When an incumbent is to be chosen, $q$ is tested: with probability $q$, the incumbent is a past collaborator of the current team members; otherwise, the incumbent is randomly chosen from all incumbents. 
\cite{Guimera05}\label{fig:guimeraProcess}} \end{center} \end{figure*} \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{miProbDist2.eps} \end{center} \end{minipage} \hspace{12ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.6]{moProbDist2.eps} \end{center} \end{minipage} \caption{Distribution of inventors' and companies' team sizes: The companies' team size distribution is fitted with a power law distribution.\label{fig:miandmo}} \end{figure*} \begin{figure*}[b] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.4]{iDegreeDistLogLogCompareOrgGuimera.eps} \end{center} \end{minipage} \hspace{12ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.4]{oDegreeDistLogLogCompareOrgGuimera.eps} \end{center} \end{minipage} \caption{Comparison of inventors' and companies' degree distributions: The figure at left plots the inventors' degree distribution. The figure at right plots the companies' degree distribution. The two plots in each panel are the observed distribution and the one generated by Guimera et al.'s model.\label{fig:compareGuimera}} \end{figure*} There are three ways of creating the sequence for $m$: (a) keep $m$ constant, (b) draw $m$ from a distribution that is expressed by parameters, and (c) draw $m$ from the observed distribution. In this paper, $m$ is drawn from the observed distribution. The inventor team size for each patent can be defined, recalling that one or more inventors apply for each patent. The team size for companies can be defined in the same way. Figure \ref{fig:miandmo} plots the probability distributions of the team sizes of inventors and companies. The author first simply applied Guimera et al.'s model to replicate the two-layered networks, to compare it with the proposed model. Three parameters were set: $\gamma=0.8$, $p=0.73$, and $q=0.69$. There were 1,307,429 patents ($P$). All these numbers were acquired from the observed networks. Each inventor was randomly assigned to a company, based on the observed networks, when the inventor (i.e., a newcomer) was created. Even though Guimera et al.'s model could replicate the inventors' network well, the model could not be expected to replicate the companies' network, because it has no explicit architecture to replicate the companies' degree distribution. Figure \ref{fig:compareGuimera} plots the inventors' and companies' degree distributions in Guimera et al.'s model and in the observed network. We can see that the model replicates the observed network well in terms of the inventors' degree distribution. However, in the right panel of Figure \ref{fig:compareGuimera}, the model's distribution looks like a normal distribution with some deviation on the right. If we consider the process, in which all inventors are randomly connected to companies, this may be a natural result. The strange form of the generated distribution can be explained as follows: (1) All inventors are randomly connected to a company. This leads to the large bell curve in the distribution. (2) When $p$ is true and $q$ is false in the model, inter-company connections are created toward the companies that have large numbers of inventors. This seems to be the cause of the small deviation on the right of the bell curve. \subsection{The model} As we saw, a simple application of Guimera et al.'s model was not sufficient, so the author created a new model on the basis of that model. Figure \ref{fig:lambdaProcess} outlines the proposed model. 
Guimera et al.'s model remains at the upper left. The model contains a new process for choosing companies (X) and creating companies (Y). There are two possibilities when an inventor is a newcomer. If the inventor is the first member of a company, Y is executed; if not, X is executed. X has a parameter, $r$. With probability $r^k$, where $k$ is the number of companies that already appear on the patent, a company is chosen from the pool of all existing companies and assigned to the newcomer or the incumbent. Otherwise, the same company that one of the current members already belongs to is chosen for the newcomer or the incumbent. Y has a parameter, $s$, which is the probability of creating a new company and assigning it to the newcomer. Otherwise, a company is randomly chosen from the pool of all existing companies and assigned to the newcomer. These new parameters, $r$ and $s$, can be acquired from the observed data as well. \subsection{Simulation result and discussion} Additional settings were $r=0.06$ and $s=0.085$. The other settings were the same as those used for Guimera et al.'s model. Figure \ref{fig:compareLambda} plots the results, which follow the inventors' degree distribution well. In addition, the fit to the companies' degree distribution is improved. \begin{figure*}[b] \begin{center} \includegraphics[scale=0.5]{lambdaProcess.eps} \caption{Process for the proposed model: It has Guimera et al.'s model at the upper left. X and Y on the left indicate jumps to X and Y on the right.\label{fig:lambdaProcess}} \end{center} \end{figure*} \begin{figure*}[bh] \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.45]{iDegreeDistLogLogCompareOrgLambda.eps} \end{center} \end{minipage} \hspace{12ex} \begin{minipage}{0.4\hsize} \begin{center} \includegraphics[scale=0.45]{oDegreeDistLogLogCompareOrgLambda.eps} \end{center} \end{minipage} \caption{Comparison of inventors' and companies' degree distributions: The graph at left plots the inventors' degree distribution and that at right plots the companies' degree distribution. The two plots in each panel are the observed distribution and the one generated by the proposed model.\label{fig:compareLambda}} \end{figure*} The author investigated two other models: Guimera et al.'s model with preferential attachment and Goldstein et al.'s model. The explanations and the simulation results for them are described in the Appendix. These models could also replicate the two-layered networks well. The first one was devised by the author and the second one was devised by Goldstein et al. There are no other common models that replicate two-layered networks. The main reason the author investigated the other two models was to identify the realistic characteristics that the proposed model has and that the others do not have. First, as all the parameters were acquired from the observed data, no tuning was required. Hence, we can immediately build the model once we obtain the observed data. The other models, in contrast, require parameters to be tuned. Second, all decision making is local. The other models, on the other hand, have preferential attachment processes. This requires global information, i.e., all degree information has to be known. This is not realistic if we consider the actual activities of inventors or companies. In practice, companies choose their partners based on the limited information they have. 
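To make the growth process above easier to follow, the Python sketch below implements one plausible reading of it. It is an illustration, not the author's code: the parameter values $p=0.73$, $q=0.69$, $r=0.06$, and $s=0.085$ are taken from the text, but the number of patents is reduced, the team-size distribution is a small hypothetical stand-in for the observed one, and details such as tie-breaking and keeping incumbents in their existing companies are assumptions made only for this sketch.

\begin{verbatim}
# Illustrative sketch of the proposed two-layered growth model (one possible reading).
import random
from collections import defaultdict

P = 10_000                       # number of patents (the paper uses 1,307,429)
p, q = 0.73, 0.69                # incumbent / repeat-collaboration probabilities (from the text)
r, s = 0.06, 0.085               # company-choice / company-creation probabilities (from the text)
team_sizes = [1, 2, 2, 3, 3, 4]  # hypothetical stand-in for the observed team-size distribution

company_of = {}                  # inventor id -> company id
links = defaultdict(set)         # inventor collaboration network
next_inventor, next_company = 0, 0

for _ in range(P):
    team = []
    for _ in range(random.choice(team_sizes)):
        if company_of and random.random() < p:              # draw an incumbent
            collaborators = {c for t in team for c in links[t] if c not in team}
            if team and collaborators and random.random() < q:
                member = random.choice(sorted(collaborators))    # repeat a past collaboration
            else:
                member = random.choice(list(company_of))         # any incumbent
        else:                                                # create a newcomer
            member, next_inventor = next_inventor, next_inventor + 1
            team_companies = {company_of[t] for t in team}
            if not team_companies:                           # first member on the patent: process Y
                if next_company == 0 or random.random() < s:
                    company_of[member] = next_company        # create a new company
                    next_company += 1
                else:                                        # reuse a random existing company
                    company_of[member] = random.randrange(next_company)
            else:                                            # process X, with k companies so far
                k = len(team_companies)
                if random.random() < r ** k:                 # jump to any existing company
                    company_of[member] = random.randrange(next_company)
                else:                                        # stay inside a teammate's company
                    company_of[member] = random.choice(sorted(team_companies))
        if member not in team:
            team.append(member)
    for a in team:                                           # record inventor-inventor links
        for b in team:
            if a != b:
                links[a].add(b)

print(next_inventor, "inventors,", next_company, "companies generated")
\end{verbatim}

The degree distributions of the resulting inventor and company layers can then be tabulated from \texttt{links} and \texttt{company\_of} and compared against observed data, in the spirit of Figure \ref{fig:compareLambda}.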
Since the proposed model just replicates the degree distributions, it is best regarded as demonstrating one plausible procedure for the growth of two-layered networks. Even so, the proposed model gives us important insights into the inter-organizational activities of innovation. Inventors and companies in actual situations choose their collaborative partners for specific reasons, such as their specialties or enterprise strategies. However, the proposed model tells us how a current network can predict future networks. We can deduce the following: (1) Inventors with many connections to other inventors have a high likelihood of obtaining further connections in the future. (2) Companies that have highly connected inventors have a high likelihood of obtaining connections with other companies. (3) Most connections are inside companies; therefore, companies have to retain many inventors to acquire inter-company connections. The dynamics discussed above can only be addressed by using growth models, especially two-layered ones. \section{Summary} The author attempted to clarify the dynamics between inventors and companies from the viewpoint of a network of collaborations. Using US and Japanese patent data, the author extracted collaborations between inventors and companies and created two different networks based on a tripartite graph of patents, companies, and inventors. First, the author discussed the links' impacts, which were calculated as normalized citations. The average impacts of team-authored patents were greater than those of solo-authored patents for both inventors and companies; hence, the impact of patents was affected by collaboration at both levels. The links also demonstrated distinct characteristics when repetitions in collaborations were related to impacts. The impact of inventors' collaborations declined as the repetitions continued. For Japanese companies, in contrast, past experience worked positively from early on. Moreover, the cross table of repetitions between inventors and companies directly illustrated the importance of layered networks. The impact of patents was strongly affected by both inventors' and companies' repetitions in collaborations. Consequently, it was found that considering layered networks is important. Next, the author created a model to replicate a two-layered network in order to understand the evolution of the network. The model was based on Guimera et al.'s model. It could replicate the observed network well in terms of degree distributions. Compared with two other models, the proposed model had two practical advantages. First, all parameters were based on observed data. Second, all decision making was local. The proposed model provided us with insights into the inter-organizational activities of innovation: (1) Inventors with many connections to other inventors have a high likelihood of obtaining further connections in the future. (2) Companies that have densely connected inventors have a high likelihood of obtaining connections with other companies. (3) Most connections are inside companies; therefore, companies have to retain many inventors to acquire inter-company connections. \ACKNOWLEDGMENT This work was supported by KAKENHI 22730321. I would like to thank Marton Posfai, Gourab Ghoshal, Yang-Yu Liu, and Albert-L\'aszl\'o Barab\'asi for the discussions I had with them. \clearpage \bibliographystyle{unsrt}
\section{Introduction} In recent years, there has been a resurgence of interest in learning deep representations due to the impressive performance of deep neural networks across a range of tasks. Generative modeling is an appealing method of learning representations, partly because one can directly evaluate a model by measuring the probability it assigns to held-out test data. Restricted Boltzmann machines \citep[RBMs;][]{rbm} and deep Boltzmann machines \citep[DBMs;][]{dbm} are highly effective at modeling various complex visual datasets \citep[\emph{e.g.}][]{ais-rbm,dbm}. Unfortunately, measuring their likelihood exactly is intractable because it requires computing the partition function of a Markov random field (MRF). Annealed importance sampling \citep[AIS;][]{ais} has emerged as the state-of-the-art algorithm for estimating MRF partition functions, and is widely used to evaluate MRFs as generative models \citep{ais-rbm,deep-not-enough}. AIS is a consistent estimator of the partition function \citep{ais}, and often performs very well in practice. However, it has a property which makes it unreliable: it tends to underestimate the partition function, which leads to overly optimistic measures of the model likelihood. In some cases, it can overestimate the log-likelihood by tens of nats \citep[\emph{e.g.}][]{moment-averaging}, and one cannot be sure whether impressive test log-probabilities result from a good model or a bad partition function estimator. The difficulty of evaluating likelihoods has led researchers to propose alternative generative models for which the log-likelihood can be computed exactly \citep{nade,sum-product} or lower bounded \citep{darn,nvil}, but RBMs and DBMs remain the state-of-the-art for modeling complex data distributions. \footnotetext[1]{Authors contributed equally} \setcounter{footnote}{1} \citet{bounding-test-loglik} highlighted the problem of optimistic RBM log-likelihood estimates and proposed a pessimistic estimator based on nonparametric density estimation. Unfortunately, they reported that their method tends to underestimate log-likelihoods by tens of nats on standard benchmarks, which is insufficient accuracy since the difference between competing models is often on the order of one nat. We introduce the Reverse AIS Estimator (RAISE), an algorithm which computes conservative estimates of MRF log-likelihoods, but which achieves similar accuracy to AIS in practice. In particular, consider an approximate generative model defined as the distribution of approximate samples computed by AIS. Using importance sampling with a carefully chosen proposal distribution, RAISE computes a stochastic lower bound on the log-likelihood of the approximate model. RAISE is simple to implement, as it requires only the same MCMC transition operators as standard AIS. We evaluated RAISE by using it to estimate test log-probabilities of several RBMs, DBMs, and Deep Belief Networks (DBNs). The RAISE estimates agree closely with the true log-probabilities on small RBMs where the partition function can be computed exactly. Furthermore, they agree closely with the standard AIS estimates for full-size RBMs, DBMs, and DBNs. Since one estimate is optimistic and one is pessimistic, this agreement is an encouraging sign that both estimates are close to the correct value. Our results suggest that AIS and RAISE, used in conjunction, can provide a practical way of estimating MRF test log-probabilities. 
\section{Background} \vspace{-0.05in} \subsection{Restricted Boltzmann Machines} \label{sec:background-rbm} \vspace{-0.05in} While our proposed method applies to general MRFs, we use as our running example a particular type of MRF called the restricted Boltzmann machine \citep[RBM;][]{rbm}. An RBM is an MRF with a bipartite structure over a set of visible units ${\bf v} = (v_1, \ldots, v_{N_v})$ and hidden units ${\bf h} = (h_1, \ldots, h_{N_h})$. In this paper, for purposes of exposition, we assume that all of the variables are binary valued. In this case, the distribution over the joint state $\{{\bf v}, {\bf h}\}$ can be written as ${f}({\bf v}, {\bf h}) / {\Z}$, where \begin{equation} {f}({\bf v}, {\bf h}) = \exp \left( {\bf a}^\top {\bf v} + {\bf b}^\top {\bf h} + {\bf v}^\top {\bf W} {\bf h} \right), \end{equation} and ${\bf a}$, ${\bf b}$, and ${\bf W}$ denote the visible biases, hidden biases, and weights, respectively. The weights and biases are the RBM's trainable parameters. To train the RBM's weights and biases, one can maximize the log-probability of a set of training examples $\visTrainI{1}, \ldots, \visTrainI{M_{\rm tr}}$. Since the log-likelihood gradient is intractable to compute exactly, it is typically approximated using contrastive divergence \citep{contrastive-divergence} or persistent contrastive divergence \citep{pcd}. The performance of the RBM is then measured in terms of the average log-probability of a set of test examples $\visTestI{1}, \ldots, \visTestI{M_{\rm test}}$. It remains challenging to evaluate the probability ${p}({\bf v}) = {f}({\bf v}) / {\Z}$ of an example. The unnormalized probability ${f}({\bf v}) = \sum_{\bf h} {f}({\bf v}, {\bf h})$ can be computed exactly since the conditional distribution factorizes over the $h_j$. However, ${\Z}$ is intractable to compute exactly, and must be approximated. RBMs can also be extended to deep Boltzmann machines \citep{dbm} by adding one or more additional hidden layers. For instance, the joint distribution of a DBM with two hidden layers $\hidUnits_1$ and $\hidUnits_2$ can be written as ${f}({\bf v}, \hidUnits_1, \hidUnits_2)/{\Z}$, where \begin{align} {f}({\bf v}, \hidUnits_1, \hidUnits_2) &= \exp \left( {\bf a}^\top {\bf v} + \hidBiases_1^\top \hidUnits_1 + \hidBiases_2^\top \hidUnits_2 + \right. \nonumber \\ &\phantom{=} \left. + {\bf v}^\top \weights_1 \hidUnits_1 + \hidUnits_1^\top \weights_2 \hidUnits_2 \right). \end{align} DBMs can be evaluated similarly to RBMs. The main difference is that the unnormalized probability ${f}({\bf v}) = \sum_{\hidUnits_1, \hidUnits_2} {f}({\bf v}, \hidUnits_1, \hidUnits_2)$ is intractable to compute exactly. However, \citet{dbm} showed that, in practice, the mean-field approximation yields an accurate lower bound. Therefore, similarly to RBMs, the main difficulty in evaluating DBMs is estimating the partition function. RBMs are also used as building blocks for training Deep Belief Networks \citep[DBNs;][]{dbn}. For example, a DBN with two hidden layers $\hidUnits_1$ and $\hidUnits_2$ is defined as the probability distribution \begin{equation} {p}({\bf v}, \hidUnits_1, \hidUnits_2) = {p}_2(\hidUnits_1,\hidUnits_2) {p}_1({\bf v} {\,|\,} \hidUnits_1), \end{equation} where ${p}_2(\hidUnits_1, \hidUnits_2)$ is the probability distribution of an RBM, and ${p}_1({\bf v} {\,|\,} \hidUnits_1)$ is a product of independent logistic units. 
The unnormalized probability ${f}({\bf v}) = \sum_{\hidUnits_1,\hidUnits_2}{{p}_1({\bf v} {\,|\,} \hidUnits_1){f}_2(\hidUnits_1,\hidUnits_2)}$ cannot be computed analytically, but can be approximated using importance sampling or a variational lower bound that utilizes a recognition distribution $q(\hidUnits_1 {\,|\,} {\bf v})$ approximating the posterior ${p}(\hidUnits_1 {\,|\,} {\bf v})$ \citep{dbn}. \subsection{Partition Function Estimation} \label{sec:background-pfn} \vspace{-0.05in} Often we have a probability distribution ${p_{\rm tgt}}({\bf x}) = {f_{\rm tgt}}({\bf x}) / {\Z_{\rm tgt}}$ (which we call the \emph{target distribution}) defined on a space $\mathcal{X}$, where ${f_{\rm tgt}}({\bf x})$ can be computed efficiently for a given ${\bf x} \in \mathcal{X}$, and ${\Z_{\rm tgt}}$ is an intractable normalizing constant. There are two particular cases which concern us here. First, ${p_{\rm tgt}}$ may correspond to a Markov random field (MRF), such as an RBM, where ${f_{\rm tgt}}({\bf x})$ denotes the product of all potentials, and ${\Z_{\rm tgt}} = \sum_{\bf x} {f_{\rm tgt}}({\bf x})$ is the partition function of the graphical model. The second case is where one has a directed graphical model with latent variables ${\bf h}$ and observed variables ${\bf v}$. Here, the joint distribution $p({\bf h}, {\bf v}) = p({\bf h}) p({\bf v} {\,|\,} {\bf h})$ can be tractably computed for any particular pair $({\bf h}, {\bf v})$. However, one often wants to compute the likelihood of a test example $p(\visUnits_{\rm test}) = \sum_{\bf h} p({\bf h}, \visUnits_{\rm test})$. This can be placed in the above framework with \begin{equation} \label{eq:marginal} {f_{\rm tgt}}({\bf h}) = p({\bf h}) p(\visUnits_{\rm test} {\,|\,} {\bf h}) \hspace{0.1in} \textrm{and} \hspace{0.1in} {\Z_{\rm tgt}} = p(\visUnits_{\rm test}). \end{equation} Mathematically, the two partition function estimation problems outlined above are closely related, and the same classes of algorithms are applicable to each. However, they differ in terms of the behavior of approximate inference algorithms in the context of model selection. In particular, many algorithms, such as annealed importance sampling \citep{ais} and sequential Monte Carlo \citep{smc}, yield unbiased estimates ${\hat{\Z}_{\rm tgt}}$ of the partition function, \emph{i.e.}~$\mathbb{E}[{\hat{\Z}_{\rm tgt}}] = {\Z_{\rm tgt}}$. Jensen's Inequality shows that such an estimator tends to underestimate the log partition function on average: \begin{equation} \mathbb{E}[\log {\hat{\Z}_{\rm tgt}}] \leq \log \mathbb{E}[{\hat{\Z}_{\rm tgt}}] = \log {\Z_{\rm tgt}}. \end{equation} In addition, Markov's inequality shows that it is unlikely to substantially overestimate $\log {\Z_{\rm tgt}}$: \begin{equation} {\rm Pr}(\log {\hat{\Z}_{\rm tgt}} > \log {\Z_{\rm tgt}} + b) < e^{-b}. \label{eqn:tail-bound} \end{equation} For these reasons, we will refer to the estimator as a \emph{stochastic lower bound} on $\log {\Z_{\rm tgt}}$. In the MRF situation, ${\Z_{\rm tgt}}$ appears in the denominator, so underestimates of the log partition function translate into overestimates of the log-likelihood. This is problematic, since inaccurate partition function estimates can lead one to dramatically overestimate the performance of one's model. This problem has led researchers to consider alternative generative models where the likelihood can be tractably computed. 
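The following short numerical sketch (ours, not part of the original analysis) illustrates this stochastic-lower-bound behaviour: for a tiny, randomly initialized binary RBM whose partition function can be computed exactly by enumeration, an unbiased simple-importance-sampling estimate of ${\Z_{\rm tgt}}$ underestimates $\log {\Z_{\rm tgt}}$ on average and only rarely overestimates it by more than a nat. The model size, weight scale and sample counts are arbitrary illustrative choices.
\begin{verbatim}
# Minimal check (ours) that an unbiased estimator of Z stochastically
# lower-bounds log Z.  A tiny binary RBM is small enough that Z can be
# computed exactly by brute force; the unbiased estimator is simple
# importance sampling from a uniform proposal.
import itertools
import numpy as np

rng = np.random.RandomState(0)
Nv, Nh = 6, 4                       # small enough to enumerate 2^(Nv+Nh) states
W = 0.5 * rng.randn(Nv, Nh)
a = 0.1 * rng.randn(Nv)
b = 0.1 * rng.randn(Nh)

def log_f(v, h):
    """Log unnormalized probability log f(v, h) of the RBM."""
    return a @ v + b @ h + v @ W @ h

# Exact log Z by summing f(v, h) over all joint states.
states = np.array(list(itertools.product([0, 1], repeat=Nv + Nh)), dtype=float)
log_Z = np.logaddexp.reduce([log_f(s[:Nv], s[Nv:]) for s in states])

# Unbiased estimate of Z: E_q[f(x)/q(x)] with q uniform over all joint states.
def log_Z_hat(n_samples=20):
    x = rng.randint(0, 2, size=(n_samples, Nv + Nh)).astype(float)
    log_w = np.array([log_f(s[:Nv], s[Nv:]) for s in x]) + (Nv + Nh) * np.log(2.0)
    return np.logaddexp.reduce(log_w) - np.log(n_samples)

estimates = np.array([log_Z_hat() for _ in range(2000)])
print("exact log Z       :", log_Z)
print("mean of log Z_hat :", estimates.mean())   # typically below log Z (Jensen)
print("frac. above +1 nat:", np.mean(estimates > log_Z + 1.0))  # small (Markov bound)
\end{verbatim}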
By contrast, in the directed case, the partition function is the test log-probability (\ref{eq:marginal}), so underestimates correspond to overly conservative measures of performance. For example, the fact that sigmoid belief networks \citep{deep-sigmoid} have tractable lower (rather than upper) bounds is commonly cited as a reason to prefer them over RBMs and DBMs \citep[\emph{e.g.}][]{nvil}. We note that it is possible to achieve stronger tail bounds than (\ref{eqn:tail-bound}) by combining multiple unbiased estimates in clever ways \citep{lower-bounding-evidence}. \subsection{Annealed Importance Sampling} \label{sec:background-ais} \vspace{-0.05in} Annealed importance sampling (AIS) is an algorithm which estimates ${\Z_{\rm tgt}}$ by gradually changing, or ``annealing,'' a distribution. In particular, one must specify a sequence of ${K} + 1$ intermediate distributions ${p}_{k}({\bf x}) = {f}_{k}({\bf x})/{\Z}_{k}$ for ${k} = 0, \ldots {K}$, where ${p_{\rm ini}}({\bf x}) = {p}_0({\bf x})$ is a tractable initial distribution, and ${p_{\rm tgt}}({\bf x}) = {p}_{K}({\bf x})$ is the intractable target distribution. For simplicity, assume all distributions are strictly positive on $\mathcal{X}$. For each ${p}_{k}$, one must also specify an MCMC transition operator ${T}_{k}$ (e.g.~Gibbs sampling) which leaves ${p}_{k}$ invariant. AIS alternates between MCMC transitions and importance sampling updates, as shown in Algorithm~\ref{alg:ais}. \begin{figure}[t] \vspace{-0.2in} \begin{minipage}[t]{0.5\textwidth} \begin{algorithm}[H] \caption{Annealed Importance Sampling} \label{alg:ais} \begin{small} \begin{algorithmic} \FOR{${i} = 1 \textrm{ to } {M}$} \STATE ${\bf x}_0 \gets$ sample from ${p}_0({\bf x}) = {f}_{\textrm{ini}}({\bf x})/{\Z_{\rm ini}}$ \STATE ${w}^{({i})} \gets {\Z_{\rm ini}}$ \FOR{${k} = 1 \textrm{ to } {K}$} \STATE ${w}^{({i})} \gets {w}^{({i})} \frac{{f}_{{k}}({\bf x}_{{k}-1})}{{f}_{{k} - 1}({\bf x}_{{k}-1})}$ \STATE ${\bf x}_{k} \gets$ sample from ${T}_{k}\left(\cdot {\,|\,} {\bf x}_{{k}-1}\right)$ \ENDFOR \ENDFOR \RETURN ${\hat{\Z}_{\rm tgt}} = \sum_{{i}=1}^{M} {w}^{({i})}/{M}$ \end{algorithmic} \end{small} \end{algorithm} \end{minipage} \vspace{-0.1in} \end{figure} The output of AIS is an unbiased estimate \smash{${\hat{\Z}_{\rm tgt}}$} of ${\Z_{\rm tgt}}$. Importantly, unbiasedness is not an asymptotic property, but holds for any ${K}$ \citep{ais, jarzynski97b}. \citet{ais} demonstrated this by viewing AIS as an importance sampling estimator over an extended state space. In particular, define the distributions \begin{align} \pmfProposal_{\rm fwd}({\bf x}_{0:{K}-1}) &= {p}_0({\bf x}_0) \prod_{{k}=1}^{{K}-1} {T}_{k}({\bf x}_{{k}} {\,|\,} {\bf x}_{{k} - 1}) \\ \pmfUnnorm_{\rm rev}({\bf x}_{0:{K}-1}) &= {f_{\rm tgt}}({\bf x}_{{K}-1}) \prod_{{k}=1}^{{K}-1} {\tilde{T}}_{k}({\bf x}_{{k}-1} {\,|\,} {\bf x}_{k}), \end{align} where ${\tilde{T}}_{k}({\bf x}^\prime {\,|\,} {\bf x}) = {T}_{k}({\bf x} {\,|\,} {\bf x}^\prime) {p}_{k}({\bf x}^\prime) / {p}_{k}({\bf x})$ is the reverse transition operator for ${T}_{k}$. Here, $\pmfProposal_{\rm fwd}$ represents the sequence of states generated by AIS, and $\pmfUnnorm_{\rm rev}$ is a fictitious (unnormalized) reverse chain which begins with an exact sample from ${p_{\rm tgt}}$ and applies the transitions in reverse order. \citet{ais} showed that the AIS weights correspond to the importance weights for $\pmfUnnorm_{\rm rev}$ with $\pmfProposal_{\rm fwd}$ as the proposal distribution. 
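As a concrete illustration of Algorithm~\ref{alg:ais}, the following self-contained sketch (ours, not the authors' code) anneals from a broad Gaussian initial distribution to a bimodal one-dimensional unnormalized target along the geometric path introduced in the next paragraph, using a few random-walk Metropolis steps as each ${T}_{k}$, and compares the resulting estimate of $\log {\Z_{\rm tgt}}$ against direct quadrature. The target density, schedule and step sizes are arbitrary illustrative choices.
\begin{verbatim}
# Toy implementation of Algorithm 1 (our sketch).  The initial distribution is
# N(0, sigma0^2), so Z_ini = 1; intermediate distributions follow the
# geometric-averages path; each T_k is a few Metropolis steps leaving p_k
# invariant.
import numpy as np

rng = np.random.RandomState(0)

def log_f_tgt(x):                      # unnormalized target: exp(-(x^2 - 4)^2 / 2)
    return -0.5 * (x**2 - 4.0)**2

sigma0 = 3.0
def log_f_ini(x):                      # normalized N(0, sigma0^2) density
    return -0.5 * x**2 / sigma0**2 - 0.5 * np.log(2 * np.pi * sigma0**2)

def log_f(beta, x):                    # geometric averages between f_ini and f_tgt
    return (1.0 - beta) * log_f_ini(x) + beta * log_f_tgt(x)

def transition(beta, x, n_steps=5, step=0.5):
    """Random-walk Metropolis steps that leave p_beta invariant."""
    for _ in range(n_steps):
        prop = x + step * rng.randn(*x.shape)
        accept = np.log(rng.rand(*x.shape)) < log_f(beta, prop) - log_f(beta, x)
        x = np.where(accept, prop, x)
    return x

def ais(M=1000, K=500):
    betas = np.linspace(0.0, 1.0, K + 1)
    x = sigma0 * rng.randn(M)          # exact samples from p_0
    log_w = np.zeros(M)                # log Z_ini = 0
    for k in range(1, K + 1):
        log_w += log_f(betas[k], x) - log_f(betas[k - 1], x)
        x = transition(betas[k], x)
    return np.logaddexp.reduce(log_w) - np.log(M)   # log of the average weight

grid = np.linspace(-8.0, 8.0, 16001)   # simple quadrature check of log Z_tgt
dx = grid[1] - grid[0]
log_Z_true = np.log(np.sum(np.exp(log_f_tgt(grid))) * dx)
print("AIS estimate of log Z:", ais())
print("quadrature log Z     :", log_Z_true)
\end{verbatim}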
The mathematical formulation of AIS leaves much flexibility for choosing intermediate distributions. The choice of distributions can have a large effect on the performance of AIS \citep{moment-averaging}, but the most common choice is to take geometric averages of the initial and target distributions: \begin{equation} \hspace{-0.04in} p_{\beta}({\bf x}) = {f}_{\beta}({\bf x})/\mathcal{Z}(\beta) = {f_{\rm ini}}({\bf x})^{1 - {\beta}} {f_{\rm tgt}}({\bf x})^{\beta} / \mathcal{Z}(\beta), \hspace{-0.02in} \label{eqn:geometric-averages} \end{equation} where $0 = \beta_0 < \beta_1 < ... < \beta_K = 1$ defines the annealing schedule. Commonly, ${f_{\rm ini}}$ is the uniform distribution, and~(\ref{eqn:geometric-averages}) reduces to $p_{\beta}({\bf x}) = {f_{\rm tgt}}({\bf x})^{\beta}/\mathcal{Z}(\beta)$. This motivates the term ``annealing'', and ${\beta}$ resembles an inverse temperature parameter. As in simulated annealing, the ``hotter'' distributions often allow faster mixing between modes which are isolated in ${p_{\rm tgt}}$. Geometric averages are widely used because they often have a simple form; for instance, the geometric average of two RBMs is obtained by linearly averaging the weights and biases. The values of $\beta$ can be spaced evenly between 0 and 1, although other schedules have been explored \citep{neal96,behrens12,calderhead09}. \section{Reverse AIS Estimator} \label{sec:raise} \vspace{-0.1in} A significant difficulty in evaluating MRFs is that it is intractable to compute the partition function. Furthermore, the commonly used algorithms, such as AIS, tend to \emph{overestimate} the log-likelihood. If we cannot hope to obtain provably accurate partition function estimates, it would be far preferable for algorithms to \emph{underestimate}, rather than \emph{overestimate}, the log-likelihoods. This would save us from the embarrassment of reporting unrealistically high test log-probability scores for a given dataset. In this section, we define an approximate generative model which becomes equivalent to the MRF in the limit of infinite computation. We then present a procedure for obtaining unbiased estimates of the probability of a test example (and therefore a stochastic lower bound on the test log-probability) under the approximate model. \subsection{Case of Tractable Posterior} \label{sec:raise-tractable} \vspace{-0.05in} In this section, we denote the model state as ${\bf x} = ({\bf v}, {\bf h})$, with ${\bf v}$ observed and ${\bf h}$ unobserved. Let us first assume the conditional distribution ${p_{\rm tgt}}({\bf h} {\,|\,} {\bf v})$ is tractable, as is the case for RBMs. Define the following generative process, which corresponds to the sequence of transitions in AIS: \begin{equation} \pmf_{\rm fwd}({\bf x}_{0:{K}}) = {p}_0({\bf x}_0) \prod_{{k}=1}^{{K}} {T}_{k}({\bf x}_{k} {\,|\,} {\bf x}_{{k}-1}). \end{equation} By taking the final visible states of this process, we obtain a generative model (which we term the \emph{annealing model}) which approximates ${p_{\rm tgt}}({\bf v})$: \begin{equation} \pmf_{\rm ann}({\bf v}_{K}) = \sum_{{\bf x}_{0:{K}-1}, {\bf h}_{K}} \pmf_{\rm fwd}({\bf x}_{0:{K}-1}, {\bf h}_{K}, {\bf v}_{K}). 
\label{eq:raise-model} \end{equation} \begin{figure}[t] \vspace{-0.2in} \begin{minipage}[t]{0.5\textwidth} \begin{algorithm}[H] \caption{Reverse AIS Estimator (RAISE)} \label{alg:raise} \begin{small} \begin{algorithmic} \FOR{${i} = 1 \textrm{ to } {M}$} \STATE ${\bf h}_{K} \gets \textrm{sample from } {p_{\rm tgt}}({\bf h} {\,|\,} \visUnits_{\rm test})$ \STATE ${w}^{({i})} \gets {f_{\rm tgt}}(\visUnits_{\rm test}) / {\Z}_0$ \FOR{${k} = {K} - 1 \textrm{ to } 0$} \STATE ${\bf x}_{k} \gets$ sample from ${\tilde{T}}_{k} \left(\cdot {\,|\,} {\bf x}_{{k}+1}\right)$ \STATE ${w}^{({i})} \gets {w}^{({i})} \frac{{f}_{{k}}({\bf x}_{k})}{{f}_{{k} + 1}({\bf x}_{k})}$ \ENDFOR \ENDFOR \RETURN $\hat{\pmf}_{\rm ann}(\visUnits_{\rm test}) = \sum_{{i}=1}^{M} w^{({i})}/{M}$ \end{algorithmic} \end{small} \end{algorithm} \end{minipage} \vspace{-0.1in} \end{figure} Suppose we are interested in estimating the probability of a test example $\visUnits_{\rm test}$. We use as a proposal distribution a reverse chain starting from $\visUnits_{\rm test}$. In the annealing metaphor, this corresponds to gradually ``melting'' the distribution: \begin{equation} \hspace{-0.1in} \pmfProposal_{\rm rev}({\bf x}_{0:{K}-1}, {\bf h}_{K} {\,|\,} \visUnits_{\rm test}) = {p_{\rm tgt}}({\bf h}_{K} {\,|\,} \visUnits_{\rm test}) \prod_{{k}=1}^{{K}} {\tilde{T}}_{k}({\bf x}_{{k}-1} {\,|\,} {\bf x}_{k}), \nonumber \end{equation} where we identify ${\bf v}_{k}=\visUnits_{\rm test}$, and ${\tilde{T}}_{k}({\bf x}^\prime {\,|\,} {\bf x}) = {T}_{k}({\bf x} {\,|\,} {\bf x}^\prime) {p}_{k}({\bf x}^\prime) / {p}_{k}({\bf x})$ is the reverse transition operator for ${T}_{k}$. We then obtain the following identity: \begin{small} \begin{align} \pmf_{\rm ann}(\visUnits_{\rm test}) &= \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ \frac{\pmf_{\rm fwd}({\bf x}_{0:{K}-1}, {\bf h}_{K}, \visUnits_{\rm test})}{\pmfProposal_{\rm rev}({\bf x}_{0:{K}-1}, {\bf h}_{K} {\,|\,} \visUnits_{\rm test})} \right] \nonumber \\ &= \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ \frac{{p}_0({\bf x}_0)}{{p_{\rm tgt}}({\bf h}_{K} {\,|\,} \visUnits_{\rm test})} \prod_{{k}=1}^{K} \frac{{T}_{k}({\bf x}_{k} {\,|\,} {\bf x}_{{k}-1})}{{\tilde{T}}_{k}({\bf x}_{{k}-1} {\,|\,} {\bf x}_{k})} \right] \nonumber \\ &= \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ \frac{{p}_0({\bf x}_0)}{{p_{\rm tgt}}({\bf h}_{K} {\,|\,} \visUnits_{\rm test})} \prod_{{k}=1}^{K} \frac{{f}_{k}({\bf x}_{k})}{{f}_{k}({\bf x}_{{k}-1})} \right] \nonumber \\ &= \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ \frac{{p}_0({\bf x}_0)}{{p_{\rm tgt}}({\bf h}_{K} {\,|\,} \visUnits_{\rm test})} \frac{{f_{\rm tgt}}({\bf x}_{K})}{{f}_0({\bf x}_0)} \prod_{{k}=0}^{{K}-1} \frac{{f}_{k}({\bf x}_{k})}{{f}_{{k}+1}({\bf x}_{k})} \right] \nonumber \\ &= \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ \frac{{f}_{K}(\visUnits_{\rm test})}{{\Z}_0} \prod_{{k}=0}^{{K}-1} \frac{{f}_{k}({\bf x}_{k})}{{f}_{{k}+1}({\bf x}_{k})} \right] \nonumber \\ &\triangleq \mathbb{E}_{\pmfProposal_{\rm rev}} \left[ w \right]. \label{eqn:raise-weight} \end{align} \end{small} This yields the following algorithm: generate ${M}$ samples from $\pmfProposal_{\rm rev}$, and average the values $w$ defined in (\ref{eqn:raise-weight}). There is no need to store the full chains, since the weights can be updated online. We refer to this algorithm as the Reverse AIS Estimator, or RAISE. The full algorithm is given in Algorithm \ref{alg:raise}. We note that RAISE is straightforward to implement, as it requires only the same MCMC transition operators as standard AIS. 
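To make the procedure concrete, here is a self-contained sketch (ours, not the authors' code) of Algorithm~\ref{alg:raise} for a tiny randomly initialized binary RBM, small enough that $\log {p_{\rm tgt}}(\visUnits_{\rm test})$ can be computed exactly for comparison. It takes ${f_{\rm ini}}$ to be uniform, so that the intermediate unnormalized distributions are ${f_{\rm tgt}}({\bf v},{\bf h})^{{\beta}_{k}}$ and ${\Z}_0 = 2^{N_v + N_h}$, and it uses a block Gibbs transition operator whose reversal under $p_{k}$ is simply the same two conditional updates applied in the opposite order. All sizes and seeds are illustrative choices.
\begin{verbatim}
# Sketch of RAISE (Algorithm 2, ours) for a tiny binary RBM with a uniform
# initial distribution.  The forward transition at inverse temperature beta is
# "resample h | v, then v | h" under p_beta; the reverse chain applies the two
# conditional updates in the opposite order.
import itertools
import numpy as np

rng = np.random.RandomState(0)
Nv, Nh = 6, 4
W, a, b = 0.5 * rng.randn(Nv, Nh), 0.1 * rng.randn(Nv), 0.1 * rng.randn(Nh)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def log_f_joint(v, h):                     # log f(v, h)
    return a @ v + b @ h + v @ W @ h

def log_f_marg(v):                         # log f(v) = log sum_h f(v, h)
    return a @ v + np.sum(np.log1p(np.exp(b + v @ W)))

def gibbs_h(v, beta):                      # h ~ p_beta(h | v)
    return (rng.rand(Nh) < sigmoid(beta * (b + v @ W))).astype(float)

def gibbs_v(h, beta):                      # v ~ p_beta(v | h)
    return (rng.rand(Nv) < sigmoid(beta * (a + W @ h))).astype(float)

def raise_log_weight(v_test, K=1000):
    betas = np.linspace(0.0, 1.0, K + 1)
    h = gibbs_h(v_test, 1.0)               # exact sample from p_tgt(h | v_test)
    v = v_test.copy()
    log_w = log_f_marg(v_test) - (Nv + Nh) * np.log(2.0)   # f_tgt(v_test) / Z_0
    for k in range(K, 0, -1):              # reverse ("melting") chain
        v = gibbs_v(h, betas[k])           # reversed order: v | h, then h | v
        h = gibbs_h(v, betas[k])
        log_w += (betas[k - 1] - betas[k]) * log_f_joint(v, h)
    return log_w

v_test = rng.randint(0, 2, Nv).astype(float)
states = np.array(list(itertools.product([0, 1], repeat=Nv + Nh)), dtype=float)
log_Z = np.logaddexp.reduce([log_f_joint(s[:Nv], s[Nv:]) for s in states])
M = 50
log_ws = np.array([raise_log_weight(v_test) for _ in range(M)])
print("RAISE estimate:", np.logaddexp.reduce(log_ws) - np.log(M))
print("exact log p   :", log_f_marg(v_test) - log_Z)
\end{verbatim}
With a long enough annealing schedule, the printed RAISE estimate should lie close to, and up to sampling noise below, the exact log-probability.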
Our derivation (\ref{eqn:raise-weight}) mirrors the derivation of AIS by \citet{ais}. The difference is that in AIS, the reverse chain is merely hypothetical; in RAISE, the reverse chain is simulated, and it is the forward chain which is hypothetical. By (\ref{eqn:raise-weight}), the weights $w$ are an unbiased estimator of the probability $\pmf_{\rm ann}(\visUnits_{\rm test})$. Therefore, following the discussion of Section~\ref{sec:background-pfn}, $\log w$ is a stochastic lower bound on $\log \pmf_{\rm ann}(\visUnits_{\rm test})$. Furthermore, since $\pmf_{\rm ann}$ converges to ${p_{\rm tgt}}$ in probability as ${K} \rightarrow \infty$ \citep{ais}, we would heuristically expect RAISE to yield a conservative estimate of $\log {p_{\rm tgt}}(\visUnits_{\rm test})$. This is not strictly guaranteed, however; RAISE may overestimate $\log {p_{\rm tgt}}(\visUnits_{\rm test})$ for finite ${K}$ if $\pmf_{\rm ann}(\visUnits_{\rm test}) > {p_{\rm tgt}}(\visUnits_{\rm test})$, which is possible if the AIS approximation somehow attenuates pathologies in the original MRF. (One such example is described in Section~\ref{sec:rbm-experiments}.) However, since RAISE is a stochastic lower bound on the log-probabilities under the annealing model, we can strictly rule out the possibility of RAISE reporting unrealistically high test log-probabilities for a given dataset, a situation frequently observed with AIS. \begin{figure*} \vspace{-0.1in} \begin{center} \includegraphics[width=\textwidth]{figures/raise_dbm.png} \end{center} \vspace{-0.2in} \caption{\small A schematic of RAISE for intractable distributions, applied to DBMs. Green: generative model. Blue: proposal distribution. At the top is shown which distribution the variables at each step are meant to approximate.} \label{fig:raise-dbm} \vspace{-0.1in} \end{figure*} \subsection{Extension to Intractable Posterior Distributions} \label{sec:intractable-posterior} \vspace{-0.05in} Because Algorithm \ref{alg:raise} begins with an exact sample from the conditional distribution ${p_{\rm tgt}}({\bf h} {\,|\,} \visUnits_{\rm test})$, it requires that this distribution be tractable. However, many models of interest, such as DBMs, have intractable posterior distributions. To deal with this case, we augment the forward chain with an additional heating step, such that the conditional distribution in the final step is tractable, but the distribution over ${\bf v}$ agrees with that of $\pmf_{\rm ann}$ in (\ref{eq:raise-model}). We make the further (weak) assumption that ${p}_0({\bf h} {\,|\,} {\bf v})$ is tractable. Let $\transFrozen{{\bf v}}_{k}$ denote an MCMC transition operator which preserves ${p}_{k}({\bf
v}, {\bf h})$, but does not change ${\bf v}$. For example, it may cycle through Gibbs updates to all variables except ${\bf v}$. The forward chain then has the following distribution: \begin{align} \pmf_{\rm fwd}({\bf x}_{0:{K}}, {\bf h}^\prime_{0:{K}-1}) &= {p}_0({\bf x}_0) \prod_{{k}=1}^{{K}} {T}_{k}({\bf x}_{k} {\,|\,} {\bf x}_{{k}-1}) \nonumber \\ &\phantom{=} \prod_{{k}=0}^{{K}-1} \transFrozen{{\bf v}_{K}}_{k}({\bf h}^\prime_{k} {\,|\,} {\bf h}^\prime_{{k}+1}), \nonumber \end{align} where we identify ${\bf h}^\prime_{K} = {\bf h}_{K}$. The reverse distribution is given by: \begin{align} &\pmfProposal_{\rm rev}({\bf x}_{0:{K}-1}, {\bf h}_{K}, {\bf h}^\prime_{0:{K}-1} {\,|\,} \visUnits_{\rm test}) = \nonumber \\ & {p}_0({\bf h}^\prime_0 {\,|\,} \visUnits_{\rm test}) \prod_{{k}=0}^{{K}-1} \transReverseFrozen{\visUnits_{\rm test}}_{{k}}({\bf h}^\prime_{{k}+1} {\,|\,} {\bf h}^\prime_{{k}}) \prod_{{k}=1}^{{K}} {\tilde{T}}_{k}({\bf x}_{{k}-1} {\,|\,} {\bf x}_{k}). \nonumber \end{align} The unbiased estimator is derived similarly to that of Section~\ref{sec:raise}: \begin{align} w &\triangleq \frac{\pmf_{\rm fwd}({\bf x}_{0:{K}-1}, {\bf h}_{K}, \visUnits_{\rm test}, {\bf h}^\prime_{0:{K}-1})}{\pmfProposal_{\rm rev}({\bf x}_{0:{K}-1}, {\bf h}_{K}, {\bf h}^\prime_{0:{K}-1} {\,|\,} \visUnits_{\rm test})} \\ &= {p}_0(\visUnits_{\rm test}) \prod_{{k}=0}^{{K}-1} \frac{{f}_{k}({\bf x}_{k})}{{f}_{{k}+1}({\bf x}_{k})} \prod_{{k}=1}^{K} \frac{{f}_{k}({\bf h}^\prime_{k}, \visUnits_{\rm test})}{{f}_{{k}-1}({\bf h}^\prime_{k}, \visUnits_{\rm test})} \nonumber \end{align} The full algorithm is shown in Algorithm~\ref{alg:raise-intractable-posterior}, and a schematic for the case of DBMs is shown in Figure~\ref{fig:raise-dbm}. \begin{figure} \vspace{-0.1in} \begin{minipage}[t]{0.5\textwidth} \begin{algorithm}[H] \caption{RAISE with intractable posterior} \label{alg:raise-intractable-posterior} \begin{small} \begin{algorithmic} \FOR{${i} = 1 \textrm{ to } {M}$} \STATE ${\bf h}^\prime_0 \gets \textrm{sample from } {p}_0({\bf h} {\,|\,} \visUnits_{\rm test})$ \STATE ${w}^{({i})} \gets {p}_0(\visUnits_{\rm test})$ \FOR{${k} = 1 \textrm{ to } {K}$} \STATE ${\bf h}^\prime_{k} \gets$ sample from $\transReverseFrozen{\visUnits_{\rm test}}_{k} \left(\cdot {\,|\,} {\bf h}^\prime_{{k}-1}\right)$ \STATE ${w}^{({i})} \gets {w}^{({i})} \frac{{f}_{{k}}({\bf h}^\prime_{k}, \visUnits_{\rm test})}{{f}_{{k} - 1}({\bf h}^\prime_{k}, \visUnits_{\rm test})}$ \ENDFOR \FOR{${k} = {K} - 1 \textrm{ to } 0$} \STATE ${\bf x}_{k} \gets$ sample from ${\tilde{T}}_{k} \left(\cdot {\,|\,} {\bf x}_{{k}+1}\right)$ \STATE ${w}^{({i})} \gets {w}^{({i})} \frac{{f}_{{k}}({\bf x}_{k})}{{f}_{{k} + 1}({\bf x}_{k})}$ \ENDFOR \ENDFOR \RETURN $\hat{\pmf}_{\rm ann}(\visUnits_{\rm test}) = \sum_{{i}=1}^{M} w^{({i})}/{M}$ \end{algorithmic} \end{small} \end{algorithm} \end{minipage} \vspace{-0.1in} \end{figure} \subsection{Interpretation as Unrolling} \vspace{-0.05in} \citet{dbn} showed that the Gibbs sampling procedure for a binary RBM could be interpreted as generating from an infinitely deep sigmoid belief net with shared weights. They used this insight to derive a greedy training procedure for Deep Belief Nets (DBNs), where one unties the weights of a single layer at a time. Furthermore, they observed that one could perform approximate inference in the belief net using the transpose of the generative weights to compute a variational approximation. 
We note that, for RBMs, RAISE can similarly be viewed as a form of unrolling: the annealed generative model $\pmf_{\rm ann}$ can be viewed as a belief net with $K+1$ layers. Furthermore, the RAISE proposal distribution can be viewed as using the transpose of the weights to perform approximate inference. (The difference from approximate inference in DBNs is that RAISE samples the units rather than using the mean-field approximation). This interpretation of RAISE suggests a method of applying it to DBNs. The generative model is obtained by unrolling the RBM on top of the directed layers as shown in Figure~\ref{fig:unroll}. The proposal distribution uses the transposes of the DBN weights for each of the directed layers. The rest of the procedure is the same as ordinary RAISE applied to the unrolled part of the model. \begin{figure} \vspace{-0.1in} \begin{center} \includegraphics[width=0.2\textwidth]{figures/unroll2.png} \end{center} \vspace{-0.1in} \caption{\small RAISE applied to a DBN unrolled into a very deep sigmoid belief net, for ${K}=1000$ intermediate distributions. {\bf Green:} generative model. {\bf Blue:} proposal distribution.} \label{fig:unroll} \vspace{-0.1in} \end{figure} \section{Variance Reduction using Control Variates} \label{sec:variance-reduction} \vspace{-0.1in} One of the virtues of log-likelihood estimation using AIS is its speed: the partition function need only be estimated once. RAISE, unfortunately, must be run separately for every test example. We would therefore prefer to compute the RAISE estimate for only a small number of test examples. Unfortunately, subsampling the test examples introduces a significant source of variability: as different test examples can have wildly different log-likelihoods\footnote{This effect can be counterintuitively large due to different complexities of different categories; \emph{e.g.}, for the mnistCD25-500 RBM, the average log-likelihood of handwritten digits ``1'' was 56.6 nats higher than the average log-likelihood of digits ``8''.}, the estimate of the average log-likelihood can vary significantly depending on which batch of examples is selected. We attenuate this variability using the method of control variates \citep{ross-simulation}, a variance reduction technique which has also been applied to black-box variational inference \citep{black-box-variational}. If $Y_1, \ldots, Y_n$ are independent samples of a random variable $Y$, then the sample average $\frac{1}{n}\sum_{i=1}^nY_i$ is an unbiased estimator of $\expectation{Y}$ with variance $\variance{Y}/n$. If $X$ is another random variable (which ideally is both cheap to compute and highly correlated with $Y$), then for any scalar $\alpha$, \begin{equation} \frac{1}{n}\sum_{i=1}^n\left(Y_i- \alpha X_i\right)+\frac{\alpha}{N}\sum_{i=1}^NX_i \label{eqn:control-variate} \end{equation} is an unbiased estimator of $\expectation{Y}$ with variance $$\frac{\variance{Y- \alpha X}}{n}+ \alpha^2 \frac{\variance{X}}{N}+2\alpha \frac{\covariance{Y-\alpha X}{X}}{n}.$$ In our experiments, $Y$ is the RAISE estimate of the log-probability of a test example, and $X$ is the (exact or estimated) log unnormalized probability under the original MRF. Since the unnormalized probability under the MRF is significantly easier to evaluate than the log-probability under the annealing model, we can let $N$ be much larger than $n$; we set $n=100$ and let $N$ be the total number of test examples.
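As a sanity check of this estimator, the following small synthetic sketch (ours; all numbers are arbitrary) lets $Y$ stand in for the expensive per-example estimates and $X$ for a cheap correlated statistic available for all $N$ examples, and compares the spread of (\ref{eqn:control-variate}), with $\alpha = 1$ as discussed next, to that of the plain subsample average.
\begin{verbatim}
# Synthetic demonstration (ours) of the control-variate estimator above.
import numpy as np

rng = np.random.RandomState(0)
N, n = 10000, 100
X = rng.randn(N) * 20.0                         # cheap statistic, all N examples
Y_full = X + rng.randn(N) * 2.0                 # expensive estimate, correlated with X

def control_variate_estimate(alpha=1.0):
    idx = rng.choice(N, size=n, replace=False)  # the n examples we can afford
    return np.mean(Y_full[idx] - alpha * X[idx]) + alpha * np.mean(X)

plain = [np.mean(Y_full[rng.choice(N, n, replace=False)]) for _ in range(1000)]
cv = [control_variate_estimate() for _ in range(1000)]
print("std of plain subsample mean :", np.std(plain))   # dominated by spread of Y
print("std of control-variate form :", np.std(cv))      # much smaller
\end{verbatim}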
Since the annealing model is an approximation to the MRF, the two models should assign similar log-probabilities, so we set $\alpha = 1$. Hence we expect the variance of $Y-X$ to be smaller than the variance of $Y$, and thus (\ref{eqn:control-variate}) to have a significantly smaller variance than the sample average. Empirically, we have found that $Y-X$ has significantly smaller variance than $Y$, even when the number of intermediate distributions is relatively small. \section{Experimental Results} \vspace{-0.1in} We have evaluated RAISE on several MRFs to determine if its log-probability estimates are both accurate and conservative. We compared our estimates against those obtained from standard AIS. We also compared against the exact log-probabilities of small models for which the partition function can be computed exactly~\citep{ais-rbm}. AIS is expected to overestimate the true log-probabilities while RAISE is expected to underestimate them. Hence, a close agreement between the two estimators would be a strong indication of accurate estimates. We considered two datasets: (1) the MNIST handwritten digit dataset~\citep{mnist}, which has long served as a benchmark for both classification and density modeling, and (2) the Omniglot dataset \citep{omniglot}, which contains images of handwritten characters across many world alphabets.\footnote{We used the standard split of MNIST into 60,000 training and 10,000 test examples and a random split of Omniglot into 24,345 training and 8,070 test examples. In both cases, the inputs are $28 \times 28$ binary images.} Both AIS and RAISE can be used with any sequence of intermediate distributions. For simplicity, in all of our experiments, we used the geometric averages path (\ref{eqn:geometric-averages}) with linear spacing of the parameter ${\beta}$. We tested two choices of initial distribution ${p_{\rm ini}}$: the uniform distribution, and the data base rate (DBR) distribution \citep{ais-rbm}, where all units are independent, all hidden units are uniform, and the visible biases are set to match the average pixel values in the training set. In all cases, our MCMC transition operator was Gibbs sampling. We estimated the log-probabilities of a random sample of 100 examples from the test set using RAISE and used the method of control variates (Sec.~\ref{sec:variance-reduction}) to estimate the average log-probabilities on the full test dataset. For RBM experiments, the control variate was the RBM log unnormalized probability, $\log {f}({\bf v})$, whereas for DBMs and DBNs, we used an estimate based on simple importance sampling as described below. For each of the 100 test examples, RAISE was run with 50 independent chains, while the AIS partition function estimates used 5,000 chains; this closely matched the computation time per intermediate distribution between the two methods. Each method required about 1.5 hours with the largest number of intermediate distributions (${K}= \textrm{100,000}$). 
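For concreteness, a minimal sketch (our reading of the brief description above, not the authors' code) of the DBR initial distribution: each visible bias is set to the log-odds of the corresponding mean training pixel value, so that the independent initial model reproduces the data base rates, while the hidden biases are left at zero.
\begin{verbatim}
# Sketch (ours) of the data-base-rate initial distribution described above.
import numpy as np

def dbr_visible_biases(train_data, eps=1e-2):
    """train_data: array of shape (num_examples, num_visible) with binary pixels."""
    p = np.clip(train_data.mean(axis=0), eps, 1.0 - eps)  # avoid infinite biases
    return np.log(p / (1.0 - p))    # logit, so sigmoid(bias_i) = mean of pixel i

# Example with fake binary data of the MNIST shape (784 visible units).
fake = (np.random.RandomState(0).rand(1000, 784) < 0.13).astype(float)
a_ini = dbr_visible_biases(fake)    # f_ini(v, h) = exp(a_ini^T v); hidden biases 0
\end{verbatim}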
\begin{table*}[t] \vspace{-0.1in} \begin{center} \begin{scriptsize} \begin{tabular}{l|r|r|rrr|rrr} \multicolumn{3}{c}{} & \multicolumn{3}{c}{uniform} & \multicolumn{3}{c}{data base rates} \\ Model & exact & CSL & RAISE & AIS & gap & RAISE & AIS & gap \\ \hline mnistCD1-20 & -164.50 & -185.74 & -165.33 & -164.51 & 0.82 & -164.11 & -164.50 & -0.39 \\ mnistPCD-20 & -150.11 & -152.13 & -150.58 & -150.04 & 0.54 & -150.17 & -150.10 & 0.07 \\ mnistCD1-500 & --- & -566.91 & -150.78 & -106.52 & 44.26 & -124.77 & -124.09 & 0.68 \\ mnistPCD-500 & --- & -138.76 & -101.07 & -99.99 & 1.08 & -101.26 & -101.28 & -0.02 \\ mnistCD25-500 & --- & -145.26 & -88.51 & -86.42 & 2.09 & -86.39 & -86.35 & 0.04 \\ omniPCD-1000 & --- & -144.25 & -100.47 & -100.45 & 0.02 & -100.46 & -100.46 & 0.00 \end{tabular} \end{scriptsize} \end{center} \vspace{-0.1in} \caption{\small RAISE and AIS average test log-probabilities using 100,000 intermediate distributions and both choices of ${p_{\rm ini}}$. {\bf CSL:} the estimator of \citet{bounding-test-loglik}. {\bf gap:} the difference $\textrm{AIS} - \textrm{RAISE}$} \label{tbl:raise-vs-ais} \vspace{-0.1in} \end{table*} \subsection{Restricted Boltzmann Machines} \label{sec:rbm-experiments} \vspace{-0.05in} We considered models trained using two algorithms: contrastive divergence \citep[CD;][]{contrastive-divergence} with both 1 and 25 CD steps, and persistent contrastive divergence \citep[PCD;][]{pcd}. We will refer to the RBMs by the dataset, training algorithm, and the number of hidden units. For example, ``mnistCD25-500'' denotes an RBM with 500 hidden units, trained on MNIST using 25 CD steps. The MNIST trained RBMs are the same ones evaluated by \citet{moment-averaging}. We also provide comparisons to the Conservative Sampling-based Log-likelihood (CSL) estimator of \citet{bounding-test-loglik}.\footnote{The number of chains and number of Gibbs steps for CSL were chosen to match the total number of Gibbs steps required by RAISE and AIS for ${K}= \textrm{100,000}$.} \begin{figure}[t] \vspace{-0.1in} \includegraphics[width=0.4\textwidth]{figures/log_likelihood_rbms.pdf} \caption{\small RAISE estimates of average test log-probabilities using uniform ${p_{\rm ini}}$. The log-probability estimates tend to increase with the number of intermediate distributions, suggesting that RAISE is a conservative estimator.} \label{fig:log-likelihood-rbms} \vspace{-0.2in} \end{figure} Figure \ref{fig:log-likelihood-rbms} shows the average RAISE test log-probability estimates for all of the RBMs as a function of the number of intermediate distributions. In all of these examples, as expected, the estimated log-probabilities tended to increase with the number of intermediate distributions, consistent with RAISE being a conservative log-probability estimator. Table \ref{tbl:raise-vs-ais} shows the final average test log-probability estimates obtained using CSL as well as both RAISE and AIS with 100,000 intermediate distributions. In all of the trials using the DBR initial distribution, the estimates of AIS and RAISE agreed to within 1 nat, and in many cases, to within 0.1 nats. The CSL estimator, on the other hand, underestimated $\log {p_{\rm tgt}}$ by tens of nats in almost all cases, which is insufficient accuracy since well-trained models often differ by only a few nats. We observed that the DBR initial distribution gave consistently better agreement between the two methods compared with the uniform distribution, consistent with the results of \citet{ais-rbm}. 
The largest discrepancy, 44.26 nats, was for mnistCD1-500 with uniform ${p_{\rm ini}}$; with DBR, the two methods differed by only 0.68 nats. Figure~\ref{fig:cd1-500} plots both estimates as a function of the number of intermediate distributions. In the uniform case, one might not notice the inaccuracy by running AIS alone, as the AIS estimates may appear to level off. One could be tricked into reporting results that are tens of nats too high! By contrast, when both methods are run in conjunction, the inaccuracy of at least one of the methods becomes obvious. \begin{figure}[t]% \subfloat[uniform]{\includegraphics[width=0.48\linewidth]{figures/cd1_500_uniform.pdf}}% \hfill \subfloat[data base rates]{\includegraphics[width=0.48\linewidth]{figures/cd1_500_dbr.pdf}}% \caption{\small AIS and RAISE estimates of mnistCD1-500 average test log-probabilities have a significant gap when annealing from a uniform initial distribution. However, they agree closely when annealing from the data base rates.} \label{fig:cd1-500} \vspace{-0.1in} \end{figure} As discussed in Section~\ref{sec:raise-tractable}, RAISE is a stochastic lower bound on the log-likelihood of the annealing model $\pmf_{\rm ann}$, but not necessarily of the RBM itself. When $\pmf_{\rm ann}$ is a good approximation to the RBM, RAISE gives a conservative estimate of the RBM log-likelihood. However, it is possible for RAISE to overestimate the RBM log-likelihood if $\pmf_{\rm ann}$ models the data distribution better than the RBM itself, for instance if the approximation attenuates pathologies of the RBM. We observed a single instance of this in our RBM experiments: the mnistCD1-20 RBM, with the data base rate initialization. As shown in Figure~\ref{fig:cd1-20-dbr}, the RAISE estimates exceeded the AIS estimates for small ${K}$, and declined as ${K}$ was increased. Since RAISE gives a stochastic lower bound on $\log \pmf_{\rm ann}$ and AIS gives a stochastic upper bound on $\log {p_{\rm tgt}}$, this inversion implies that $\pmf_{\rm ann}$ significantly outperformed the RBM itself. Indeed, the RBM (mistakenly) assigned 93\% of its probability mass to a single hidden configuration, while the RAISE model spreads its probability mass among more diverse configurations. In all of our other RBM experiments, the AIS and RAISE estimates with DBR initialization and ${K}=\textrm{100,000}$ agreed to within 0.1 nats. Figure~\ref{fig:omni-pcd-1000} shows one such case, for an RBM trained on the challenging Omniglot dataset. Overall, the RAISE and AIS estimates using DBR initialization agreed closely in all cases, and RAISE gave conservative estimates in all but one case, suggesting that RAISE typically gives accurate and conservative estimates of RBM test log-probabilities. \begin{figure} \vspace{-0.1in} \begin{minipage}{0.26 \textwidth} \includegraphics[width=\textwidth]{figures/cd1_20_dbr_nosamp.pdf} \end{minipage} \begin{minipage}{0.2 \textwidth} \includegraphics[width=\textwidth]{figures/samples_cd1_20_2r.png} \\ \vspace{1em} \includegraphics[width=\textwidth]{figures/samplesRAISE_2r.png} \end{minipage} \vspace{-0.1in} \caption{\small The mnistCD1-20 RBM, where we observed RAISE to overestimate the RBM's test log-probabilities. {\bf Left:} Average test log-probability estimates as a function of $K$. {\bf Top right:} 10 independent samples from the RBM. {\bf Bottom right:} 10 independent samples from the annealing model $\pmf_{\rm ann}$ with 10 intermediate distributions.
The $\pmf_{\rm ann}$ samples, while poor, show greater diversity compared to the RBM samples, consistent with $\pmf_{\rm ann}$ better matching the data distribution.} \label{fig:cd1-20-dbr} \end{figure} \begin{figure}[t] \vspace{-0.1in} \begin{minipage}{0.26 \textwidth} \includegraphics[width=\textwidth]{figures/omni_sgd_uniform.pdf} \end{minipage} \begin{minipage}{0.2 \textwidth} \includegraphics[width=\textwidth]{figures/omni_samples5x8.pdf}\\ \vspace{-0.5em} \includegraphics[width=\textwidth]{figures/samples_omni_rbm_RAISE.pdf} \end{minipage} \vspace{-0.1in} \caption{\small {\bf Left: } AIS and RAISE estimates of omniPCD-1000 RBM average test log-probabilities with annealing from a uniform initial distribution. {\bf Top right: } 32 training samples from the Omniglot training set. {\bf Bottom right: } 32 independent samples from the omniPCD-1000 RAISE model with 100,000 intermediate distributions.} \label{fig:omni-pcd-1000} \vspace{-0.1in} \end{figure} \subsection{Deep Boltzmann Machines} \vspace{-0.05in} We used RAISE to estimate the average test log-probabilities of two DBM models trained on MNIST and Omniglot. The MNIST DBM has 2 hidden layers of size 500 and 1000, and the Omniglot DBM has 2 hidden layers each of size 1000. As with RBMs, we ran RAISE on 100 random test examples and used the DBM log unnormalized probability, $\log {f}({\bf v})$, as a control variate. To obtain estimates of the DBM unnormalized probability ${f}({\bf v}) = \sum_{\hidUnits_1, \hidUnits_2} {f}({\bf v}, \hidUnits_1, \hidUnits_2)$ we used simple importance sampling ${f}({\bf v}) = \mathbb{E}_q{\left(\frac{{f}({\bf v}, \hidUnits_2)}{q(\hidUnits_2 {\,|\,} {\bf v})}\right)}$ with 500 samples, where the proposal distribution $q$ was the mean-field approximation to the conditional distribution ${p}(\hidUnits_2 {\,|\,} {\bf v})$. The term ${f}({\bf v}, \hidUnits_2)$ was computed by summing out $\hidUnits_1$ analytically, which is efficient because the conditional distribution factorizes.\footnote{Previous work \citep[\emph{e.g.}][]{dbm} estimated $\log {f}({\bf v})$ using the mean-field lower bound. We found importance sampling to give more accurate results in the context of AIS. However, it made less difference for RAISE, where the log unnormalized probabilities are merely used as a control variate.} We compared the RAISE estimates to those obtained using AIS. All results for ${K}=\textrm{100,000}$ are shown in Table~\ref{tbl:deep-models-comparison}, and the estimates for the MNIST DBM are plotted as a function of ${K}$ in Figure~\ref{fig:dbn}. All estimates for the MNIST DBM with ${K}=\textrm{100,000}$ agreed quite closely, which is evidence in favor of the accuracy of the estimates. Furthermore, RAISE provided conservative estimates of log-probabilities for small ${K}$, in contrast with AIS, which gave overly optimistic estimates. For the Omniglot DBM, RAISE overestimated the DBM log-probabilities by at least 6 nats, implying that the annealing model fit the data distribution better than the DBM, analogously to the case of the mnistCD1-20 RBM discussed in Section~\ref{sec:rbm-experiments}. This shows that RAISE does not completely eliminate the possibility of overestimating an MRF's test log-probabilities.
\begin{table}[t] \vspace{-0.1in} \begin{scriptsize} \begin{tabular}{l|rrr|rrr} \multicolumn{1}{c}{} & \multicolumn{3}{c}{uniform} & \multicolumn{3}{c}{data base rates} \\ Model & RAISE & AIS & gap & RAISE & AIS & gap \\ \hline MNIST DBM & -85.69 & -85.72 & -0.03 & -85.74 & -85.67 & 0.07 \\ Omniglot DBM & -104.48 & -110.86 & -6.38 & -102.64 & -103.27 & -0.63 \\ MNIST DBN & -84.67 & -84.49 & 0.18 & --- & --- & --- \\ Omniglot DBN & -100.78 & -100.45 & 0.33 & --- & --- & --- \end{tabular} \end{scriptsize} \vspace{-0.1in} \caption{\small Test log-probability estimates for deep models with ${K}=\textrm{100,000}$. {\bf gap:} the difference $\textrm{AIS} - \textrm{RAISE}$} \label{tbl:deep-models-comparison} \end{table} \begin{figure} \includegraphics[width=0.23\textwidth]{figures/dbm_graph.pdf} \includegraphics[width=0.23\textwidth]{figures/dbn_graph.pdf} \vspace{-0.1in} \caption{\small Average test log-probability estimates for MNIST models as a function of $K$. {\bf Left:} the DBM. {\bf Right:} the DBN.} \label{fig:dbn} \vspace{-0.1in} \end{figure} \subsection{Deep Belief Networks} \vspace{-0.05in} In our final set of experiments, we used RAISE to estimate the average test log-probabilities of DBNs trained on MNIST and Omniglot. The MNIST DBN had two hidden layers of size 500 and 2000, and the Omniglot DBN had two hidden layers each of size 1000. For the initial distribution ${p}_0$ we used the uniform distribution, as the DBR distribution is not defined for DBNs. To obtain estimates of DBN unnormalized probabilities ${f}({\bf v})=\sum_{\hidUnits_1}{{p}({\bf v} {\,|\,} \hidUnits_1){f}(\hidUnits_1)}$ we used importance sampling ${f}({\bf v})=\mathbb{E}_q\left(\frac{{p}({\bf v} {\,|\,} \hidUnits_1){f}(\hidUnits_1)} {q(\hidUnits_1 {\,|\,} {\bf v})}\right)$ with 500 samples, where $q$ was the DBN recognition distribution \citep{dbn}. All results for ${K}=\textrm{100,000}$ are shown in Table~\ref{tbl:deep-models-comparison}, and Figure~\ref{fig:dbn} shows the estimates for the MNIST DBN as a function of ${K}$. For both DBNs, RAISE and AIS agreed to within 1 nat for ${K} = \textrm{100,000}$, and RAISE gave conservative log-probability estimates for all values of ${K}$. \subsection{Summary} \vspace{-0.05in} Between our RBM, DBM, and DBN experiments, we compared 10 different models using both uniform and data base rate initial distributions. In all but two cases (the mnistCD1-20 RBM and the Omniglot DBM), RAISE gave estimates at or below the smallest log-probability estimates produced by AIS, suggesting that RAISE typically gives conservative estimates. Furthermore, in all but one case (the Omniglot DBM), the final RAISE estimate agreed with the lowest AIS estimate to within 1 nat, suggesting that it is typically accurate. \section{Conclusion} \vspace{-0.1in} In this paper we presented the Reverse AIS Estimator (RAISE), which gives a stochastic lower bound on the log-likelihood of an approximation to an MRF model. Our experimental results show that RAISE typically produces accurate, yet conservative, estimates of log-probabilities for RBMs, DBMs, and DBNs. More importantly, by using RAISE and AIS in conjunction, one can judge the accuracy of one's results by measuring the agreement of the two estimators. RAISE is simple to implement, requiring only the same transition operators as AIS, so it provides a practical way to evaluate MRF test log-probabilities. \section*{Acknowledgements} \vspace*{-10pt} This research was supported by NSERC, Google, and Samsung. \bibliographystyle{plainnat}
\section{Introduction} \label{introduction} Since the discovery of the first example amongst thousands of faint sources uncovered by the {\it Infrared Astronomical Satellite}, the origin of the prodigious energy that characterises hyperluminous infrared galaxies (HyLIRGs, defined such that $L_{\rm IR}$\ $\ge 10^{13}$\,L$_\odot$) has been a topic of controversy --- `monsters or babies?' \citep{lutz99}. Indeed, `hidden quasar or protogalaxy?' was part of the title of the paper announcing the discovery of the now-famous $z=2.3$ galaxy, IRAS\,FSC10214+4724 \citep{rr91}, which still qualifies as a HyLIRG even though it was eventually found to be strongly lensed \citep{gl95, deane13}. A decade later, this same rare population made an appearance amongst early images obtained in the submm waveband, with the first submm-selected galaxy (SMG) being identified as a weakly lensed HyLIRG at $z=2.8$, comprising a dust-obscured, gas-rich starburst alongside a broad-absorption-line (BAL) quasar \citep{ivison98leblob, ivison10leblob, frayer98, frayer18, vernet01}. As a result of the spatial resolution of the Atacama Large Millimetre Array (ALMA), many previously suspected HyLIRGs have been resolved into multiple, discrete ultraluminous IR galaxies (ULIRGs), some hovering around the HyLIRG threshold \citep[e.g.][cf.\ \citealt{younger07, younger08, riechers13}]{karim13, fu13, oteo16cplus, oteo18, riechers17, litke19}. Genuine, intrinsic HyLIRGs are thus known to be extraordinarily rare --- indeed, to find the nearest examples one must search out to $z\approx 0.3$ \citep[][cf.\ \citealt{efstathiou14}]{rrw10}. Nevertheless, HyLIRGs are excellent laboratories with which to confront recent hydrodynamic simulations of isolated and merging galaxies \citep[e.g.][]{hayward11, narayanan15} which struggle to reproduce their number densities in the presence of the feedback required to match the local mass function. The luminosity of a HyLIRG implies a star-formation rate (SFR) of $\approx 3,400$\,M$_\odot$\,yr$^{-1}$; indeed, its SFR would remain substantial if the stellar initial mass function is top heavy \citep{zhang18}, or if there is a substantial contribution to $L_{\rm IR}$\ from a powerful AGN as has often been suspected \citep[e.g.][]{hw93, franceschini00}. Either way, when observing HyLIRGs we are witnessing galaxy formation at its most extreme and it is important to understand which physical processes trigger these far-IR-luminous events, and the subsequent quenching mechanisms. Are these high-redshift starbursts similar to the ULIRGs seen locally, with the same efficiency in converting gas into stars, or do they have a higher star-formation efficiency? If the latter, why? Are the relations between metal content, star formation and mass similar to those of other high-redshift galaxy populations? How do the starburst episodes relate to the growth of the central black holes? As its name implies, HATLAS\,J084933.4+021443 (hereafter HATLAS\,J084933) was found in the largest extragalactic {\it Herschel} survey, {\it H}-ATLAS \citep{eales10}, with $S_{\rm 350\mu m} = 249$\,mJy \citep{valiante16}. Its redshift was determined quickly via multiple CO lines ($z=2.41$ -- \citealt{harris12, ivison13, gomez18}), suggesting $L_{\rm IR}$\ $\approx 6\times 10^{13}$\,L$_\odot$.
Extensive panchromatic observations, including imaging at high spatial resolution with ALMA, the Jansky Very Large Array (JVLA), the Submillimetre Array and the Institut Radioastronomie Millimetrique's Plateau de Bure Interferometer (IRAM PdBI), revealed that HATLAS\,J084933\ -- like many other HyLIRGs \citep[e.g.][]{karim13} --- breaks down into multiple gas-rich galaxies at the same redshift, covering $\approx 10$\,arcsec or $\approx 80$\,kpc in the plane of the sky, designated W, T, M and C (see\footnote{Alternatively, the layout of these galaxies is shown later in this paper.} Figure~1 of \citealt{ivison13}\defcitealias{ivison13}{I13} hereafter \citetalias{ivison13}), each component a distinct ULIRG or HyLIRG. T is gravitationally amplified\footnote{Weakly --- by less than a factor two; intrinsically, T is still a HyLIRG.} by a foreground edge-on spiral; W is an unlensed HyLIRG. M and C are somewhat less luminous (still $L_{\rm IR}$\ $>10^{12}$\,L$_\odot$, so ULIRGs, and unlensed) yet gas-rich galaxies. An unusually high intrinsic IR luminosity was first suspected for HATLAS\,J084933\ because of its broad CO $J=1$--0 line, $\approx 1,180$\,km\,s$^{-1}$ FWHM, as detected by the Greenbank Telescope (GBT), which \citet{harris12} argued was consistent with no gravitational amplification. However, although W does have an unusually broad CO line, $825\pm 115$\,km\,s$^{-1}$ FWHM, and is unlensed, the overall line width was shown by \citetalias{ivison13} to owe much to the velocity dispersion of the aforementioned group of luminous starbursts found within the GBT beam. High-resolution ($\sim 0.3$-arcsec) 3-D spectroscopy obtained with JVLA and the IRAM PdBI in $^{12}$CO $J=1$--0 and 4--3, respectively, traced the molecular gas dynamics on scales of $\approx 1$\,kpc, measuring the spatial extent of the gas ($\sim 1$\,arcsec, or $\sim 8$\,kpc), its mass (and density), Toomre parameter and the mid-plane hydrostatic ISM pressure. Later ALMA observations suggested that the far-IR emission of the most luminous component, W, corresponds to greybody emission from dust at a single temperature, $\approx 40$\,K, throughout the full extent of the galaxy \citep{gomez18}. In all of these long-wavelength studies, and also with regard to its rest-frame optical--through--radio SED, HATLAS\,J084933-W\ resembles a starburst, with no obvious sign of any influence from an AGN, though --- like mergers and interactions --- accreting black holes are often difficult to identify in dusty starbursts with anything but the deepest and most complete of datasets. In this paper we present new observations of HATLAS\,J084933\ obtained using the Atacama Large Millimetre/Submillimetre Array (ALMA), the European Space Agency's {\it XMM-Newton} space observatory and the KMOS spectrograph on UT1 of the European Southern Observatory's Very Large Telescope, to further elucidate the nature of this galaxy: hidden quasar, or protogalaxy? Monster, or baby? The paper is organised as follows: \S\,\ref{sec:observations} describes the observations and data reduction. \S\,\ref{sec:results} presents the results, with our discussion of those results in \S\,\ref{sec:discussion}. We summarise our results and draw conclusions in \S\,\ref{sec:summary}. Throughout, we adopt a standard $\Lambda$-CDM cosmology with $\Omega_{\rmn m} = 0.3$, $\Omega_\Lambda = 0.7$ and $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, such that 1\,arcsec corresponds to 8.1\,kpc at $z=2.41$. 
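For reference, the adopted angular scale can be reproduced with a standard cosmology calculator; the following minimal check (ours, using astropy, not part of the original analysis) is included for convenience.
\begin{verbatim}
# Quick check (ours) of the quoted angular scale for the adopted cosmology.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)                      # cosmology adopted above
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(2.41).value / 60.0
print("%.1f kpc per arcsec at z = 2.41" % kpc_per_arcsec)  # ~8.1
\end{verbatim}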
\section{Observations and data reduction} \label{sec:observations} \subsection{KMOS/VLT} The KMOS observations were carried out during ESO observing period P94, under the observing programme $\rm 094.A-0214(A)$. We adopted the standard object-sky-object nod-to-sky observation pattern, with 300-s exposures and an alternating 0.2- and 0.1-arcsec dither pattern to improve the spatial sampling. The target was observed in the $K$ band, with a total integration time (on source) of 80\,min, with a median seeing of 0.7\,arcsec. The data reduction was performed by using SPARK \citep[Software Package for Astronomical Reduction with KMOS --][]{davies13}, implemented using ESOREX \citep[ESO Recipe Execution Tool --][]{freudling13}. Each of the 300-s exposures was re-constructed independently, wavelength calibrated and sky subtracted using the closest sky exposure, and finally re-sampled into a data cube with $0.1\times 0.1$\,arcsec$^2$ spaxels. In order to improve the sky subtraction we used the {\sc skytweak} option within SPARK \citep{davies07}, which accounts for the time variability of the various OH sky-line families. Standard-star observations were carried out on the same night as the science observations and used for telluric correction and flux calibration. The individual 300-s cubes were finally combined together to create a stacked cube. \subsection{XMM-Newton} \begin{figure*} \centering \includegraphics[width=5.1in, angle=90]{W_image_spec.eps} \caption{Extracted inside a 1-arcsec-diameter aperture (yellow circle, overlaid on the {\it Hubble Space Telescope (HST)} F110W image from \citetalias{ivison13}), the lower panel shows the broad H$\alpha$ emission, spanning 9,700\,km\,s$^{-1}$ FWHM, in component W of HATLAS\,J084933\, as observed by VLT/KMOS, unambiguously revealing the presence of a previously unseen broad-line AGN. [N\,{\sc ii}] at 654.986, 658.527\,nm and [S\,{\sc ii}] can also be seen (and is also marked). The other four spectra illustrate the variation in the ratio of [N\,{\sc ii}] to H$\alpha$, which increases markedly to the North-East (NE), as sampled by small off-centre apertures (the four blue circles). It is unfortunate that the blue [N\,{\sc ii}] line falls almost exactly on a sky line (this region was masked during the fitting); however, this issue is mitigated by the fact that this blue [N\,{\sc ii}] line has a fixed wavelength offset and flux ratio ($3.06^{-1}\times$) with respect to the redder (brighter) component of the [N\,{\sc ii}] doublet. Given the bright red component, we can be confident that the blue component is buried under the sky line. N is up; E is to the left; offsets from $\alpha_{2000} = 132^{\circ}.3900$ and $\delta_{2000} = 2^{\circ}.2457$ are marked in arcsec. A known star is labelled.} \label{fig:kmos} \end{figure*} HATLAS\,J084933\ was observed with {\it XMM-Newton} on 2017 April 22--23 (AO15, proposal 078435). The resulting data from the European Photon Imaging Camera (EPIC), which comprises three X-ray charge-coupled device cameras operated in a so-called `photon-counting mode', were reduced with version 15.0 of the {\it XMM-Newton} Science Analysis Software (SAS). The {\it XMM-Newton} observation was significantly affected by particle background flaring events, from which the data were screened on the basis of the full-field count rate above 5\,keV. After filtering out the high-background periods, the net exposure times were 27, 26 and 26\,ks in the MOS\,1 and MOS\,2 cameras and the pn camera, respectively. 
Images were constructed in four bands: 0.2--0.5, 0.5--2, 2--5 and 5-10\,keV. Background images were constructed following the procedure described by \citet{loaring05}. The images were searched simultaneously for sources in the four energy bands using the standard SAS tasks, {\sc eboxdetect} and {\sc emldetect}. The astrometric solution of the images was refined by cross-correlating the X-ray source list with optical sources from the Sloan Digital Sky Survey \citep[SDSS --][]{adelman08} using the SAS task, {\sc eposcorr}. A point-like X-ray source was found at $\alpha = \rm 08h\,49m\,33.59s$, $\delta = +02^\circ\,14'\,44.''6$ (J2000) with a 1-$\sigma$ statistical uncertainty of 0.4\,arcsec, which includes the contribution from the astrometric cross match to SDSS. This position is coincident with that of the unlensed\footnote{An optical point source visible immediately to the west of W is a star. As described by \citetalias{ivison13}, it is unable to provide any significant gravitational amplification; the X-ray data reported here are not compatible with stellar emission, thus we can reliably associate the X-ray emission with W.} component, W, of HATLAS\,J084933. We have verified by examination of the 3XMM catalogue \citep{rosen16} that our 0.4-arcsec positional uncertainty is reasonable for an on-axis source with comparable signal-to-noise ratio. Note that the point spread function (PSF) of EPIC, at around 6\,arcsec full-width half maximum (FWHM), is sufficiently large that minor contributions to the X-ray flux from the other components of the HATLAS\,J084933\ system may be hidden in the wings of the PSF. Nevertheless, the precise positional coincidence of the X-ray source with component W implies that the X-ray emission is dominated by this component. We extracted X-ray spectra in the three EPIC cameras from a circular region of 15~arcsec radius, centred on the X-ray source. Background spectra were obtained from an annular region surrounding the source, from which detected X-ray sources were excised. Event patterns 0--12 were included in the spectra derived from MOS, while for the spectra derived from the pn camera we used patterns 0--4 above 0.4\,keV and only pattern~0 between 0.2 and 0.4\,keV. For MOS, channels corresponding to the strong 1.5-keV Al K$\alpha$ background emission line \citep{lumb02} were excluded. The spectra of the target from the different EPIC cameras were then combined to form a single spectrum, and the corresponding response matrices and background spectra were combined in an appropriate fashion to form a single response matrix and a single background spectrum, following the method described in Appendix~A of \citet{page03}. Finally, the spectrum was grouped to a minimum of 20 counts per bin. \subsection{ALMA} The ALMA band-6 245-GHz (1.22-mm) data used here were obtained for project 2013.1.00164.S, targeting the CH$^+$ line \citep{falgarone17}. Our resulting continuum image, where we have discarded the channels around the CH$^+$ line, was made using the {\sc clean} task in the Common Astronomy Software Application package \citep[CASA --][]{mcmullin07}. The image has an r.m.s.\ noise level, $\sigma= 38\,\mu$Jy\,beam$^{-1}$, and the synthesised beam measures $0.49 \times 0.48$\,arcsec$^2$ FWHM, with the major axis at a position angle, measured East of North, of $127^{\circ}$. 
\section{Results} \label{sec:results} Here, we outline what can be deduced from the new rest-frame optical spectroscopy, X-ray spectroscopy, and the submm imaging, as described in the previous section. Fig.~\ref{fig:kmos} presents our new KMOS spectrum of component W of HATLAS\,J084933, extracted in a 1-arcsec-diameter aperture. The Balmer H$\alpha$ 656.461-nm emission is very strong indeed, and broad, whereas in SMGs it is normally a combination of weak and narrow lines offset spatially from broader (few $\times 1000$\,km\,s$^{-1}$) compact components \citep{md13}. We fit Gaussians simultaneously to the broad and narrow H$\alpha$ components, and to the [N\,{\sc ii}] doublet at 654.986 and 658.527\,nm with a red/blue [N\,{\sc ii}] line ratio of 3.06, with all the narrow lines tied to the same redshift and line width\footnote{[S\,{\sc ii}] can also be seen, weakly; this wavelength range was excluded from the fits.}, and we also fit simultaneously to the continuum. The strongest sky lines (marked in grey in Fig.~\ref{fig:kmos}) were masked during the fitting process. We find that the broad [narrow] H$\alpha$ lines span 9,700 [600]\,km\,s$^{-1}$ FWHM, with the narrow lines\footnote{Recall that the three CO lines observed towards HATLAS\,J084933-W\ have an error-weighted average redshift, $z_{\rm lsr}=2.4068\pm 0.0002$, i.e.\ offset along the line of sight from the rest-frame optical lines by $\approx 600$\,km\,s$^{-1}$, away from us.} at $z_{\rm lsr}=2.4048$. The line fluxes determined for the narrow and broad H$\alpha$ lines, accurate to $\approx 10$ per cent, are $2.0\times 10^{-17}$ and $2.0\times 10^{-16}$\,erg\,s$^{-1}$\,cm$^{-2}$. The flux and width of the broad line imply a black hole mass, $M_{\rm bh} \approx 2\times 10^9$\,M$_\odot$ \citep{schulze18}, i.e.\ a super-massive black hole (SMBH). There can be no doubt, therefore, based on the characteristics of the H$\alpha$ emission line profile, that a broad-line Type-1 AGN is present in the HATLAS\,J084933-W\ system. In passing we note that the ratio of [N\,{\sc ii}] to H$\alpha$ increases markedly to the NE of the combined continuum/line centroid, as illustrated in Fig.~\ref{fig:kmos}, suggestive of an ionisation cone. \begin{figure} \includegraphics[width=2.42in,angle=-90]{xray_spectrum.eps} \caption{\,The {\it XMM-Newton} EPIC X-ray spectrum of HATLAS\,J084933. The data points correspond to the measured spectrum and the stepped line corresponds to the best-fitting absorbed power-law model. Both model and data have been divided by the product of the effective area and Galactic absorption as a function of energy and are plotted as $EF_{\rm E}$, channel energy multiplied by the energy flux per unit energy, so that an unabsorbed power law with $\alpha = +1$ would correspond to a horizontal line.} \label{fig:xrayspectrum} \end{figure} \begin{figure} \includegraphics[width=3.3in,angle=0]{xray_contours.eps} \caption{Confidence contours for power-law slope, $\alpha$, and intrinsic column density, $N_{\rm H}$, from the model fit to the {\it XMM-Newton} EPIC X-ray spectrum. The solid, dashed and dotted contours correspond to $\Delta \chi^{2}$ of 2.3, 6.2 and 11.8 respectively, equivalent to 68, 95 and 99.7 per cent confidence regions for two interesting parameters. The best-fit model parameters are indicated with a small cross.} \label{fig:xraycontours} \end{figure} We turn now to the data from {\it XMM-Newton}. The X-ray spectrum was modelled using version 11.3 of the X-ray spectral fitting package, XSPEC.
An absorbed power-law model was fitted to the spectrum, in which the absorption was the product of cold Galactic photoelectric absorption with a fixed column density of $N_{\rm H} = 2.9\times 10^{20}$\,cm$^{-2}$, and cold photoelectric absorption at $z=2.41$, corresponding to the rest frame of HATLAS\,J084933, for which the column density was a free parameter in the fit. The spectrum and best-fitting model are shown in Fig.~\ref{fig:xrayspectrum}. The fit yielded a $\chi^{2}$ of 22.3 for 31 degrees of freedom, implying that the model fits the data well. Fig.~\ref{fig:xraycontours} shows the confidence contours for the intrinsic ($z=2.41$) column density, $N_{\rm H}$, and power-law energy index, $\alpha$, defined such that $S_{\nu}\propto \nu^{-\alpha}$. The best-fitting power-law index is $\alpha = 0.8\pm 0.1$, typical for QSOs at $z\approx 2$--3 \citep[e.g.][]{mateos05, mateos10}. The best-fitting intrinsic column density is $N_{\rm H} =(5\pm 3) \times 10^{21}$\,cm$^{-2}$. Zero intrinsic column density corresponds to a $\Delta \chi^{2}$ of 5.9, so the intrinsic absorption is only marginally significant ($2\sigma$). The $3\sigma$ upper limit to the intrinsic column is $1.2\times 10^{22}$\,cm$^{-2}$. The best-fitting model implies a 2--10-keV flux of $4.1\times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$ and a 2--10-keV luminosity, $L_{\rm X}=1.4\times 10^{45}$\,erg\,s$^{-1}$, once corrected for Galactic and intrinsic absorption. This X-ray luminosity is orders of magnitude brighter\footnote{We must acknowledge, of course, that we cannot yet know whether the X-ray emission is variable.} than we would expect for X-ray emission due to star formation \citep[$L_{\rm X}\ls 0.004$\,$L_{\rm IR}$\ --][]{alexander05a} and brighter than any of the 21 X-ray-luminous SMGs found by \citet{stach19} amongst the 274 SMGs covered by the $\ge200$-ks {\it Chandra} X-UDS \citep{kocevski18} observations. We find $L_{\rm X} \sim 0.011$\,$L_{\rm IR}$, roughly consistent with both the `AGN-classified SMGs' of \citet{alexander05b} and the quasars catalogued by \citet{elvis94}, with both $L_{\rm X}$ and $L_{\rm IR}$\ similar to IRAS\,F15307+3252, a hyperluminous Seyfert 2 quasar at $z=0.93$ about which relatively little is known \citep{cutri94, ruiz07}. Our X-ray data thus corroborate the conclusion of our rest-frame optical spectroscopy: HATLAS\,J084933-W\ hosts an AGN. To estimate the {\it bolometric} luminosity of the AGN, we begin by translating the $K$-corrected X-ray luminosity to an expected rest-frame ultraviolet (UV) 250-nm luminosity via the logarithmic slope, $\alpha_{\rm OX}$, which connects the 250-nm and 2-keV points on the SED. Using equation 4 from \citet{just07}, we obtain\footnote{Note that the minus sign is explicit in our definition of $\alpha_{\rm OX}$ but not in that used by \citet{just07}.} $\alpha_{\rm OX}=1.63$. Taking the resulting UV luminosity, we use the bolometric correction from figure~12 of \cite{richards06} to arrive at an overall bolometric correction factor of $110\times$ the 2--10-keV luminosity, implying a bolometric luminosity of $1.6\times 10^{47}$\,erg\,s$^{-1}$, or log $L_{\rm bol}/{\rm L}_\odot\approx 13.62$, so of the same order as the IR luminosity of the starburst, log $L_{\rm IR}/{\rm L}_\odot=13.52\pm 0.04$ \citepalias{ivison13}.
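For illustration, the short Python sketch below reproduces the luminosity bookkeeping described above, converting the quoted 2--10-keV flux into a rest-frame luminosity and applying the quoted $110\times$ bolometric correction; the flat $\Lambda$CDM cosmology and the simple power-law $K$-correction are our assumptions rather than the exact procedure used here, so the output is indicative only.
\begin{verbatim}
# Indicative check of the X-ray luminosity bookkeeping (not the pipeline
# used in this paper).  Assumes a flat LambdaCDM cosmology and a simple
# power-law K-correction; both are choices made for this sketch.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

z      = 2.41
alpha  = 0.8                                    # energy index, S_nu ~ nu^-alpha
f_2_10 = 4.1e-14 * u.erg / u.cm**2 / u.s        # absorption-corrected 2--10-keV flux

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)           # assumed cosmology
d_L   = cosmo.luminosity_distance(z).to(u.cm)

# Power-law K-correction: L(rest 2-10 keV) = 4 pi d_L^2 f (1+z)^(alpha-1)
L_X   = (4 * np.pi * d_L**2 * f_2_10 * (1 + z)**(alpha - 1)).to(u.erg / u.s)
L_bol = 110 * L_X                               # bolometric correction quoted in the text

print(f"L_X(2-10 keV)  ~ {L_X:.2e}")            # ~1.5e45 erg/s, cf. the quoted 1.4e45
print(f"L_bol          ~ {L_bol:.2e}")          # ~1.6e47 erg/s
print(f"log L_bol/Lsun ~ {np.log10(L_bol.to_value(u.erg/u.s) / 3.828e33):.2f}")
\end{verbatim}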
To assess the uncertainty in our estimate of the AGN's bolometric luminosity, we combine in quadrature the r.m.s.\ scatter in $\alpha_{\rm OX}$ from \citet{strateva05} and in the UV-to-bolometric correction factor from \citet{richards06}, obtaining an overall $1\sigma$ logarithmic uncertainty of 0.31, so roughly a factor of two. In the absence of a measurement of the Balmer decrement, we can make a crude estimate of the optical extinction towards the AGN using the well-documented correlation between the logarithms of H$\alpha$ and X-ray luminosity. For this, we use the regression of the full sample of \citet{panessa06}, given by the fourth row of their table~3 and shown in the right-hand panel of their figure~4, adjusting for the different value of $H_{0}$ that they adopted. For our observed 2--10-keV luminosity, we would expect an intrinsic H$\alpha$ luminosity of $4.7\times 10^{43}$\,erg\,s$^{-1}$, whereas the measured flux of the broad H$\alpha$ component corresponds to a luminosity of $9\times 10^{42}$\,erg\,s$^{-1}$. To reconcile these predicted and observed luminosities requires 1.8~magnitudes of extinction. Taking the scatter of the \citet{panessa06} sample about the regression into account, together with the measurement errors on the H$\alpha$ and X-ray luminosities, we estimate a $1\sigma$ uncertainty of 1.3~magnitudes on the implied extinction. Translating to $A_V$ from the extinction experienced at the wavelength of H$\alpha$ using the $R_{V}=3.1$ extinction law of \citet{cardelli89} gives $A_{V}=2.2\pm1.6$. \citet{guver09} suggest that $N_{\rm H}=(2.21\pm 0.09)\times 10^{21} A_{V}$\,cm$^{-2}$ for a Milky-Way-like dust-to-gas ratio, so we find $N_{\rm H}=(4.8\pm 3.5)\times 10^{21}$\,cm$^{-2}$, perfectly consistent with the column density we determined using our X-ray measurements. Any intrinsic absorption towards the AGN is thus likely to be rather small, which is perhaps surprising for a galaxy that contains such immense quantities of molecular gas, and with $\approx 2\times 10^9$\,M$_\odot$ of dust that we have already noted is well fit with greybody emission at a single temperature over the full extent of the galaxy \citep{gomez18}, bearing in mind that typical SMGs are thought to be optically thick out to $\approx 75$\,$\mu$m, with $N_{\rm H}\approx 10^{24}$\,cm$^{-2}$ and $A_{V}\approx 500$ \citep{simpson17}. \begin{figure} \centering \includegraphics[width=3.3in]{G09-124_overlays.eps} \caption{{\it Top:} band-6 245-GHz (1.22-mm) continuum emission superimposed as contours on the {\it HST} F110W imaging of HATLAS\,J084933. {\it Bottom:} contours of $^{12}$CO $J=1$--0 emission, collapsed optimally for each object, superimposed on a three-colour image comprising a heavily smoothed $J + H + K_{\rm s}$ image from VISTA as the blue channel plus the {\it Spitzer} IRAC 3.6- and 4.5-$\mu$m data, all from \citetalias{ivison13}. The object marked `E', north of T, can now be considered robustly identified --- via the positional coincidence of faint 1.2-mm and CO emission with a red IRAC counterpart --- as a fifth dusty, gas-rich galaxy in this $z=2.4$ proto-cluster. Contours are plotted at $-3, 3 \times \sigma$, with $\sqrt 2$-spaced increments thereafter, where $\sigma$ is the local noise level. N is up; E is to the left; offsets from $\alpha_{2000} = 132^{\circ}.3889$ and $\delta_{2000} = 2^{\circ}.2457$ are marked in arcsec.
Known stars are labelled.} \label{fig:vinod} \end{figure} Moving now to the ALMA data, Fig.~\ref{fig:vinod} presents our ALMA band-6 continuum image of HATLAS\,J084933\, with contours superimposed on the {\it HST} F110W image from \citetalias{ivison13}. In a separate panel of Fig.~\ref{fig:vinod} we show the JVLA $^{12}$CO $J=1$--0 imaging and the {\it Spitzer} IRAC imaging from \citetalias{ivison13}. We do this to illustrate that a band-6 continuum source, detected at $6\sigma$ and denoted component `E', which lies $\approx 12$\,arcsec to the north of component T, can now be seen to be coincident with a very red galaxy that was detected by IRAC, as well as being detected in CO, also at $6\sigma$, but which is not seen in the F110W {\it HST} image. E lies in a confused region of the maps obtained by the {\it Wide-field Infrared Survey Explorer} \citep[{\it WISE} --][]{wright10} and a limit cannot be set that is both meaningful and useful. Component E can thus be added to the inventory of dusty, gas-rich galaxies that form part of the protocluster associated with HATLAS\,J084933, making five in total, covering $\approx 15$\,arcsec or $\approx 120$\,kpc. As seen in the CO $J=1$--0 image, integrated over its full line width, E subtends $\approx 7.3$\,kpc. Consistent measurements of $I_{\rm CO}$ were obtained from this image and from a Gaussian fit to the spectrum extracted at the peak and corrected for $I_{\rm total}/I_{\rm peak}$. E lies close to the redshifts of M and C, somewhat redward of W and T, and thus further helps to explain the broad line seen originally within the $\approx 22$-arcsec primary beam of the GBT \citep{harris12}. The line is considerably broader than those of the other cluster galaxies, $\approx 1,380$\,km\,s$^{-1}$ FWHM, albeit with a large uncertainty. The signal-to-noise ratio is too low for us to be fully confident, but two gas clumps may be involved, distinct both spatially and in redshift, or there could be a rotating gas disk, as with W and T, this time along PA $\approx 45^\circ$, with the reddest component to the north east. Scaling from the band-6 flux density and SED of component W, component E has an IR luminosity of approximately $6\times 10^{12}$\,L$_\odot$. Following \citet{kennicutt98}, this implies an SFR of 650\,M$_\odot$\,yr$^{-1}$ for a \citet{chabrier03} stellar initial mass function (IMF), or considerably less for the IMF observed in distant, dusty starbursts by \citet{zhang18}. E contains approximately $6\times10^{10}$ and $2\times10^{10}$\,M$_\odot$ of gas and stars, respectively. Its basic observational properties are listed in Table~\ref{tab:e}.
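The scaling used to place component E on the luminosity ladder is simple enough to reproduce directly; the Python sketch below assumes only that E shares the SED shape of W, as stated above, and adopts a commonly used Chabrier-IMF calibration coefficient for the $L_{\rm IR}$-to-SFR conversion (an assumption on our part), so it is an order-of-magnitude check rather than the calculation performed here.
\begin{verbatim}
# Order-of-magnitude check of the L_IR and SFR quoted for component E.
# Assumes E shares the SED shape of W; the Chabrier-IMF conversion
# coefficient below is an assumed, commonly used approximate value.
S_W_mJy, logL_W = 8.29, 13.52        # component W: 1.22-mm flux density, log L_IR/Lsun
S_E_mJy         = 1.47               # component E: 1.22-mm flux density

L_IR_E = 10**logL_W * (S_E_mJy / S_W_mJy)   # same SED => L_IR scales with S_1.22mm
sfr    = 1.0e-10 * L_IR_E                   # Msun/yr per Lsun (approximate Chabrier scaling)

print(f"L_IR(E) ~ {L_IR_E:.1e} Lsun")       # ~6e12 Lsun, i.e. log L_IR ~ 12.8
print(f"SFR(E)  ~ {sfr:.0f} Msun/yr")       # ~600 Msun/yr, cf. 650 quoted in the text
\end{verbatim}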
\begin{table} {\centering \caption{Basic observational properties of component~E \label{tab:e}} \begin{tabular}{rcl} \\ Wavelength & $S_{\nu}$ & Comment\\ \hline 1.1 $\mu$m & $3\sigma<0.8$&$\mu$Jy; F110W\\ 2.15 $\mu$m & $3\sigma<9.1$&$\mu$Jy; VISTA $K_{\rm S}$\\ 3.6 $\mu$m & $7.9\pm 1.0$ &$\mu$Jy; IRAC\\ 4.5 $\mu$m & $10.3\pm 1.5$ &$\mu$Jy; IRAC\\ 1.22 mm$^{\rm a}$ & $1.47\pm 0.24$&mJy; ALMA\\ 5.9 cm & $3\sigma<45$ &$\mu$Jy; JVLA\\ \hline R.A.\ & 08:49:32.867 ($\pm 0.003$) & J2000\\ Dec.\ &+02:14:53.12 ($\pm 0.03$) &J2000\\ log $L_{\rm IR}$$^{\rm b}$ &$12.8^{+0.1}_{-0.2}$&L$_{\odot}$\\ SFR & 650 & M$_\odot$\,yr$^{-1}$ (see text)\\ FWHM CO $J\!=\!1\!-\!0$\ & $7.3\pm 1.8$ &kpc\\ CO $J\!=\!1\!-\!0$\ $I_{\rm CO}$ & $0.27\pm 0.06$ & Jy\,km\,s$^{-1}$ \\ CO $J\!=\!1\!-\!0$\ FWHM& $1380\pm 410$ &km\,s$^{-1}$\\ CO $J\!=\!1\!-\!0$\ $z_{\rm LSR}$& $2.4151\pm 0.0018$&\\ CO $J\!=\!1\!-\!0$\ $L^\prime_{\rm CO}$ & $76\pm 17$ & $10^9$\,{\sc k}\,km\,s$^{-1}$\,pc$^2$ \\ CO $J\!=\!1\!-\!0$\ $L_{\rm CO}$ & $3.9\pm 0.9$ &$10^6$\,L$_\odot$\\ log $M_{\rm H_2+He}^{\rm c}$ &$10.8\pm 0.2$&M$_{\odot}$\\ log $M_{\rm stars}$ &$10.3\pm 0.2$&M$_{\odot}$\\ SFE &120&L$_{\odot}$\,M$_{\odot}^{-1}$\\ \hline \end{tabular}} \noindent $^{\rm a}$Peak 1.22-mm flux density, $660\pm 75\,\mu$Jy\,beam$^{-1}$, so resolved by the $0.49 \times 0.48$\,arcsec$^2$ FWHM beam. \noindent $^{\rm b}$Scaled from component W, where $S_{\rm 1.22mm}=8.29\pm 0.13$\,mJy and log $L_{\rm IR}$\ = 13.52, thereby adopting the same SED as W for component E. \noindent $^{\rm c}$For $\alpha_{\rm CO}=0.8$\,M$_{\odot}$ ({\sc k}\,km\,s$^{-1}$ pc$^2$)$^{-1}$. \end{table} Component E is not detected in our {\it XMM-Newton} images, and is in the wings of the PSF of component W. We measured the counts in a 10-arcsec-radius aperture at the position of component E and can set a $3\sigma$ 2--10-keV upper limit of $8.5\times 10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$. \section{Discussion} \label{sec:discussion} Before discussing plausible explanations for the properties of HATLAS\,J084933-W, let us first look at our results in the context of other relevant samples. The X-ray properties of HATLAS\,J084933-W\ are consistent with those of the majority of the 14 HyLIRGs at $z=0.3$--2.0 observed in X-rays by \citet{ruiz07}, where the sample was selected to contain a range of examples of the HyLIRG population, including Type 1 and 2 QSOs, so significantly biased towards those containing AGN. Compared to the \citet{lusso12} sample of 929 AGN, selected over a $\approx 2$-deg$^2$ field via X-rays, only five have higher bolometric luminosities than HATLAS\,J084933-W; its redshift is amongst the most distant decile of Type-1 AGN; its absorbing column is roughly $7\times$ the average $N_{\rm H}$ for an X-ray-selected Type-1 AGN --- not particularly unusual --- and half the average for a Type-2 AGN. We know that there are no clear signatures of an AGN in the rest-frame UV spectrum of HATLAS\,J084933-W, where the Keck spectroscopy of \citetalias{ivison13} revealed only faint C\,{\sc ii}] at rest-frame 232.6\,nm, consistent with considerable dust obscuration. Is there any other indication from existing observations of HATLAS\,J084933-W\ that it might harbour a powerful AGN? Is the panchromatic SED of HATLAS\,J084933-W\ more consistent with a typical SMG, or with a luminous AGN? 
The SED of HATLAS\,J084933-W\ was not presented blueward of $\lambda_{\rm obs}=880$\,nm by \citetalias{ivison13}, but it has since been observed as part of the wide layer of the Hyper Suprime-Cam Subaru Strategic Program \citep{aihara18}, with $g=23.69\pm 0.26$, $r=22.59\pm 0.18$, $i=21.92\pm 0.13$, $z=21.41\pm 0.09$ and $y=21.08\pm 0.08$\,AB~mag. We also note that {\it WISE} obtained a $3\sigma$ detection consistent with the position of component W of HATLAS\,J084933\ --- albeit with poor spatial resolution, 6.5\,arcsec FWHM --- in its W3 band at 12\,$\mu$m, with a flux density of $420\pm140$\,$\mu$Jy. An even more tentative ($<3\sigma$) detection was made in the W4 band (12\,arcsec FWHM) at 22\,$\mu$m: $3.3\pm 1.3$\,mJy. These data represent weak evidence that the mid-IR SED tends towards the AGN region of colour-colour space outlined in diagnostic plots \citep[e.g.][]{ivison04, lacy04}. An updated SED for HATLAS\,J084933-W, now spanning rest-frame UV--radio wavelengths, is shown in Fig.~\ref{fig:sed}. For comparison, Fig.~\ref{fig:sed} also shows the median SED of over 700 ALMA-identified SMGs from U.~Dudzevi\v{c}i\={u}t\.e et al.\ (in prep), and the well-sampled SEDs of the dust-rich, Type-1 AGNs, APM\,08279+5255 and BR\,1202$-$0725 \citep{irwin98, mcmahon94, leipski10, leung19}. The SEDs have been normalised at rest-frame 100\,$\mu$m. Compared to a typical SMG, Fig.~\ref{fig:sed} reveals that an extra $(2.8\pm0.9)\times 10^{11}$\,L$_\odot$ emerges from HATLAS\,J084933-W\ across the rest-frame UV--optical wavelength regime, and that it is a magnitude bluer in $g-K$. The SED of HATLAS\,J084933-W\ is, however, fully consistent with that of an SMG, lying at the upper boundary of the r.m.s.\ scatter in rest-frame UV--optical emission from typical SMGs, whereas we can see that an order of magnitude or more in flux density separates the SEDs of HATLAS\,J084933-W\ and the dusty Type-1 AGNs, across two orders of magnitude in wavelength, $\lambda_{\rm rest}\approx 0.15$--30\,$\mu$m. \begin{figure} \centering \includegraphics[width=3.3in]{350norm_sed.eps} \caption{Rest-frame ultraviolet--through--radio SED of HATLAS\,J084933-W, with new photometry from the Hyper Suprime-Cam Subaru Strategic Program \citep{aihara18} and from {\it WISE}. Also shown are the median SED of ALMA-identified SMGs [dashed line] in the Ultra-Deep Survey field (U.~Dudzevi\v{c}i\={u}t\.e et al., in prep), where the grey area is the r.m.s.\ spread in SMG SEDs, and the SEDs of the Type-1 dust-rich quasars, APM\,08279+5255, at $z=3.9$ \citep{irwin98, leung19} and BR\,1202$-$0725, at $z=4.7$ \citep{mcmahon94,leipski10}. The SEDs have been normalised at rest-frame 100\,$\mu$m.
Across rest-frame $\approx 0.15$--30\,$\mu$m, an order of magnitude or more in flux density separates the SEDs of HATLAS\,J084933-W\ and the dusty Type-1 AGNs.} \label{fig:sed} \end{figure} This illustrates the conundrum we face: the X-ray data for HATLAS\,J084933-W\ imply a colossal bolometric luminosity,\footnote{Determining the rest-frame 250\,nm--2\,keV slope using the observed optical magnitude would yield $\alpha_{\rm OX} \approx 1.1$, which would suggest a lower bolometric luminosity, yet this slope would then imply heavy UV obscuration; thus the approach taken in \S\,\ref{sec:results} is physically meaningful and appropriate.} similar in magnitude to the far-IR luminosity, yet the latter arises from a large disk and cannot sensibly be powered by the AGN; unlike other dusty Type-1 AGN, there is no obvious sign of significant excess rest-frame UV--optical--mid-IR emission from the AGN, yet the accretion energy must be emerging somewhere, unless our understanding of how X-ray luminosity maps to bolometric luminosity is flawed. The panchromatic SED, low intrinsic X-ray absorption and general properties of HATLAS\,J084933-W\ therefore present a considerable puzzle. A number of plausible scenarios could give rise to the observed configuration. We deal with three of them in what follows. \subsection{AGN seen through cavity in an unusually large disk?} Could a solution to the aforementioned conundrum be related to the power of the AGN and the unusual extent of the dusty starburst in the galaxy hosting it? High-resolution spectroscopic imaging of component W of HATLAS\,J084933\ has revealed molecular gas spread across a disk that is several times larger ($\approx 3$--7\,kpc FWHM, depending on the tracer) than the compact, $\approx$kpc submm continuum emission more typically found in SMGs \citep[e.g.][]{ikarashi15, simpson15, oteo17hires, oteo17almacal2, hodge19, rujopakarn19}, irrespective of the presence in those SMGs of X-ray-detected AGN \citep{harrison16}. Such a powerful AGN will rapidly excavate a central cavity, perhaps with an ionisation cone oriented towards the NE that can go some way towards explaining the observed narrow-line properties,\footnote{The velocity offset between the narrow lines and the CO is then due to an outflow or winds towards the observer.} perhaps leading to a relatively clear view of the broad-line region and little X-ray absorption along the line of sight through the disk\footnote{\citetalias{ivison13} and \citet{gomez18} found disk inclinations of $56\pm 10^\circ$ and $\approx 48^\circ$ respectively, where $0^\circ$ is face-on.} to a central AGN. It may also be possible that there is a contribution to the soft X-rays from photoionised gas and scattering within the ionisation cone, such that the true X-ray column is larger than that deduced from our simple model fit. In this scenario, the observed extinction (see \S\,\ref{sec:results}) is due to gas and dust local to the AGN, i.e.\ its obscuring torus in the unified scheme, and/or along the line of sight through the host galaxy. $A_V\approx 2.2$ corresponds to over 7~mag of extinction at 125\,nm, which at $z=2.4$ corresponds roughly to the observed $g$ band, such that we would see only the host galaxy in $g$, consistent with the observed properties and roughly 5 mag fainter than the prediction for the AGN before obscuration. We note that the apparent lack of hot dust emission is a problem for any model that invokes a conventional AGN torus.
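The chain of extinction estimates invoked above (the broad H$\alpha$ deficit giving $A_{{\rm H}\alpha}$, then $A_V$, $N_{\rm H}$ and the extrapolation to rest-frame 125\,nm) amounts to a few lines of arithmetic, sketched below in Python; the extinction-curve ratios are representative values that we assume for a \citet{cardelli89} $R_V=3.1$ law, not numbers taken from this paper.
\begin{verbatim}
# Minimal arithmetic behind the extinction estimates discussed above.
# The extinction-curve ratios are assumed (approximate R_V = 3.1 values).
import numpy as np

L_Ha_pred = 4.7e43    # erg/s, intrinsic H-alpha expected from L_X (regression above)
L_Ha_obs  = 9.0e42    # erg/s, observed broad H-alpha luminosity

A_Ha = 2.5 * np.log10(L_Ha_pred / L_Ha_obs)   # magnitudes of extinction at H-alpha
A_V  = A_Ha / 0.81                            # assumed A(656 nm)/A_V ~ 0.81
N_H  = 2.21e21 * A_V                          # Guver & Ozel (2009) relation, cm^-2

print(f"A(Halpha) ~ {A_Ha:.1f} mag")          # ~1.8 mag
print(f"A_V       ~ {A_V:.1f} mag")           # ~2.2 mag
print(f"N_H       ~ {N_H:.1e} cm^-2")         # ~5e21 cm^-2, cf. the X-ray column

A_125 = A_V * 3.3                             # assumed A(125 nm)/A_V ~ 3.3
print(f"A(125 nm) ~ {A_125:.1f} mag")         # ~7 mag, roughly the observed g band at z=2.4
\end{verbatim}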
If the X-ray opacity proves to be small compared to the dust attenuation implied by the Balmer decrement\footnote{Measuring the Balmer decrement will be relatively straightforward with $H$-band spectroscopy.}, this might indicate that the obscuration arises from a dusty, ionised outflow, akin to that identified by \citet{mehdipour12}. Alternatively, if the AGN is obscured predominantly by dust in the surrounding galaxy then we would expect a Seyfert-1-like [O\,{\sc iii}]/H$\alpha$ ratio and a normal dust-to-gas ratio for the absorber, if we could combine measurements of X-ray absorption and the broad-line Balmer decrement. We could then calculate what fraction of the AGN power goes into heating the dust in the surrounding galaxy. \subsection{A second, unseen AGN?} In the absence of observations that tie down the broad-line Balmer decrement, or the spatial distribution of [O\,{\sc iii}]/H$\beta$, we can speculate that the absence of significant X-ray absorption --- together with rest-frame optical features consistent with a Type-1 AGN --- may imply that the accreting SMBH is not embedded in the gas-rich starburst. This would, in turn, suggest that energy due to accretion does not dominate the bolometric luminosity of HATLAS\,J084933-W. \citetalias{ivison13} speculated that the mutual proximity and counter-rotation of the gas disks in components W and T might explain their unusual luminosities. Occam's razor might suggest instead that the near-naked SMBH we observe towards HATLAS\,J084933-W\ --- modulo the lack of associated rest-frame UV--optical emission --- is associated with another galaxy involved in a merger or interaction that triggered the starburst \citep[e.g.][]{ellison19}. HATLAS\,J084933-W\ would then be expected to contain a second SMBH, associated with the intense ongoing starburst, plausibly an unseen Compton-thick AGN. Sadly, although nearby binary AGN are occasionally revealed \citep[e.g.\ in Mrk~739, at a distance of 130\,Mpc, using {\it Chandra} --][]{koss11}, the observational challenges associated with demonstrating the presence of dual or binary AGN at even modest redshifts are similar to or worse than those of identifying distant interactions and mergers, where the sensitivity and especially spatial resolution available to X-ray observers are rarely up to the task. Largely because of SPIRE on the {\it Herschel Space Observatory} \citep{griffin10, pilbratt10} and the wide-field ground-based imager, SCUBA-2 \citep{holland13}, many thousands of SMGs are now known \citep[e.g.][]{eales10, oliver12, geach17}. While only a very small fraction of these rest-frame far-IR-selected galaxies are associated with Type-1 AGN, \citet{knudsen03} describe a submm-selected, intrinsically hyperluminous quasar, SMM\,J04135+10277, lensed by the galaxy cluster Abell~478, and {\it Herschel} led to the detection of many more \citep[e.g.][]{my15, pitchford16, dongwu16}. Some others have Type-2 AGN, or objects believed to be transitioning from Type-2 to Type-1 in the evolutionary scheme proposed by \cite{sanders88}, e.g.\ the aforementioned SMM\,J02399$-$0136, with its BAL quasar. Follow-up submm detections of known optically-luminous quasars are relatively common \citep[][amongst others]{isaak94, isaak02, ivison95, omont96qsos, priddey03, mainieri05, stacey18, hatziminaoglou18}.
The quasars BR\,1202$-$0725 and BRI\,1335$-$0417 were amongst the first to be detected at submm wavelengths, and were later found to have physically associated SMGs close by \citep[][see also \citealt{decarli18, decarli19, venemans18}]{omont96, yun00, carillibr02, carilli13, salome12, wagg12, wagg14, lu17}, one of which also harbours an X-ray-luminous AGN \citep{ionochandra06}. It is conceivable, then, that we have found a system that contains a HyLIRG, with a buried AGN, close to another more evolved quasar-like system. We cannot ignore the stark difference between the SED of HATLAS\,J084933-W\ and those of other quasars known to have SMG companions; however, a location in the outskirts of or behind the gas-rich disk of HATLAS\,J084933-W\ may help explain why the Type-1 AGN is not bright at rest-frame UV--optical--mid-IR wavelengths. With sufficient separation, such a geometry might be revealed using the spatial resolution of {\it Chandra}, even if it could not be easily distinguished from our next suggestion. \subsection{An ejected AGN?} In another scenario --- discussed in terms of a merging galaxy pair in the COSMOS field by \citet{civano12} --- asymmetric emission of gravitational radiation \citep{peres62, bekenstein73} during the coalescence of two SMBHs with anti-aligned spins \citep{campanelli07,lz11a,lz11b} and a high mass ratio \citep{baker08} could have led to the ejection of the newly formed SMBH from the site of the merger, with a relative velocity as high as 5,000\,km\,s$^{-1}$. Such an ejected SMBH is thought unlikely to carry its narrow-line region along with it \citep{loeb07}, but it could shine for $10^7$\,yr as a Type-1 AGN, and in an extremely gas-rich environment like that in HATLAS\,J084933-W\ --- spanning several\footnote{$\approx 7$\,kpc FWHM, as measured in CO $J=1$--0 \citepalias{ivison13}, so $\approx 10^7$\,yr at the maximum plausible velocity, or up to an order of magnitude higher at the measured line-of-sight velocity offset between H$\alpha$ + [N\,{\sc ii}] and CO.} kpc --- the AGN would give rise to a narrow-line region as it travels. We can further speculate that if the SMBH that we observe in X-rays and in the rest-frame optical has been ejected from the site of the merger, then the resulting lack of feedback via powerful AGN-driven winds \citep[e.g.][]{maiolino12, veilleux13, veilleux17, cicone14, tombesi15} --- often invoked to regulate the growth of the stellar spheroidal component of the host galaxy and the SMBH itself \citep[cf.][cf.\ \citealt{grimmett19}]{ramasawmy19} --- may explain the extreme nature of the starburst in HATLAS\,J084933-W. Feedback from supernovae would be left as the primary regulation mechanism, perhaps enabling the object to skip quickly to the end of the aforementioned \citeauthor{sanders88} sequence, from Compton-thick AGN to naked quasar. Indeed, recent theoretical work by \citet{mcalpine19}, based on cosmological hydrodynamical simulations, suggests that galaxies at $z\approx 2.5$ with high $L_{\rm IR}$\ are able to reach and maintain large SFRs because their gas reservoirs are not depleted by accretion onto their central black holes, such that their black holes are under-massive. It would be interesting if --- in HATLAS\,J084933-W\ --- we have found an extreme example of this hypothesis, with the absence of a significantly massive black hole pushing its starburst firmly into the HyLIRG category.
Although undeniably interesting, the associated implication of this last scenario --- that SMBHs may permeate intergalactic space --- is not a topic for this paper. \section{Summary and concluding remarks} \label{sec:summary} We report new X-ray, near-IR and submm observations of the starburst galaxies that comprise HATLAS\,J084933\ at $z=2.4$. Our ALMA imaging confirms a more distant, fifth, dust- and gas-rich member, E, of the HATLAS\,J084933\ protocluster, which is now known to cover $\approx 15$\,arcsec or $\approx 120$\,kpc. HATLAS\,J084933-E\ is extremely red, with an unusually broad CO $J=1$--0 line; it may be a merger or a colossal disk. Our {\it XMM-Newton} and KMOS imaging spectroscopy of HATLAS\,J084933-W\ --- the brightest of the five galaxies, a HyLIRG, unlensed and extraordinarily luminous, even by the standards of SMGs --- reveal the presence of an AGN. HATLAS\,J084933-W\ displays significant X-ray emission together with bright [N\,{\sc ii}]\ lines and a very broad {H$\alpha$}\ line; the latter implies $M_{\rm bh}\approx 2\times 10^9$\,M$_\odot$. For such a dusty and gas-rich host galaxy, we see surprisingly little intrinsic absorption towards the AGN, $N_{\rm H}\approx 5\times 10^{21}$\,cm$^{-2}$, likely with modest extinction, $A_V\approx 2$. Our estimate of the bolometric luminosity of the X-ray-bright AGN is commensurate with the far-IR luminosity of the starburst, yet we know from spatially resolved imaging spectroscopy that the system contains a colossal gas- and dust-rich disk, with no significant temperature gradient. Despite the AGN's potential to dominate the overall power budget, it is therefore not obvious that it does so. We outline three plausible scenarios that could give rise to the observed characteristics of HATLAS\,J084933-W, though the lack of significant rest-frame UV--optical and/or mid-IR emission remains a puzzle in all of them. Either we have a relatively clear view of the broad-line region through the starbursting disk of the host galaxy, with the powerful AGN having excavated a central cavity, or the AGN is not embedded in the starburst. In this second, prosaic option --- where analogues are known --- we speculate that there are two SMBHs: one, visible in X-rays, having evolved more quickly towards the naked quasar phase, in an unseen galaxy or galaxy remnant that lies very close to the HyLIRG; the second, buried deep within the dusty starburst, invisible to us. In our third scenario, the observed SMBH has been ejected from the region experiencing the starburst, e.g.\ via asymmetric gravitational radiation during the coalescence of two SMBHs, and we postulate that the resulting absence of local AGN feedback may then explain the extreme nature of the starburst. It is clear that there is considerably more to learn about the role and impact of the AGN or AGNs in HATLAS\,J084933. The intrinsic absorption is detected only barely and is poorly constrained, so it would be interesting to determine the degree of obscuration through which we are viewing the AGN. This could be achieved through a combination of deeper X-ray data --- to more accurately measure the absorbing column and to determine whether it is dominated by reflection and/or has the strong 6.4-keV emission line that usually characterises this --- and through rest-frame optical spectroscopy to measure the Balmer decrement of the broad emission lines. Mid-IR spectroscopy would also be useful to help disentangle the AGN/star formation contribution to the IR luminosity. 
Our findings add to the growing body of evidence that powerful AGN are ubiquitous amongst HyLIRGs. However, the nature of the AGN observed in HATLAS\,J084933\ is not at all what we expected and we cannot easily reconcile the high bolometric luminosity and modest intrinsic absorption inferred from our X-ray observations with the large far-IR-emitting disk and the faint rest-frame UV--optical--mid-IR portion of its panchromatic SED. \section*{Acknowledgements} We acknowledge the many contributions from Iv\'an Oteo, the wisdom of Alain Omont and Ian Smail, and the generosity of Edith Falgarone, Hannah Stacey and Song Huang. We are also indebted to the anonymous referee for an excellent report. RJI dedicates this paper to his long-time friend and colleague, Wayne Holland, who passed away in May 2019; without his immensely valuable contributions to submm instrumentation and astronomical research, we would still be waiting for ALMA. This publication makes use of data products from the {\it Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00164.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \bibliographystyle{mnras}
\section{Introduction} A Markov process\cite{kamp,chung} is a paradigmatic model for describing a stochastic process in various fields of science including biophysics\cite{ken07,ken15,gop,sung2,brown,feng,Pande1,chodera,Chu,Hyeon}. A Markov process can be obtained from the microscopic dynamics of a closed system by coarse-graining\cite{groot,zwan}. Thus, it can be considered as a process where information is continuously being lost. Alternatively, Markov processes have also been obtained by maximizing the dynamical entropy of the probability distribution of stochastic paths under appropriate constraints\cite{rmp,ken,sl}. Considering discrete states labelled by an index $i$, the time evolution of a probability distribution in a Markov jump process, where transitions occur only at times that are integer multiples of $\Delta t$, is given by~\cite{markov1} \begin{equation} \pi_i\left(t + \Delta t \right) = \sum_j \pi_j(t) p_{j \to i}, \label{markov2} \end{equation} where $\pi_i(t)$ is the probability that the system is in a state $i$ at time $t=n\Delta t$ for some integer $n$, and $p_{i \to j}$ is the transition probability from the state $i$ to $j$. The conservation of probability implies that $\sum_j p_{i \to j} = 1$. Eq.(\ref{markov2}) can also be written as \begin{equation} \frac{\Delta \pi_i(t)}{\Delta t} = \sum_j \pi_j(t) k_{j \to i}, \label{mard} \end{equation} where $\Delta \pi_i(t) \equiv \pi_i(t + \Delta t) - \pi_i(t)$ and \begin{equation} k_{i \to j} \equiv \frac{ p_{i \to j}-\delta_{i,j}}{\Delta t},\label{rate} \end{equation} with $\delta_{i,j}$ being the Kronecker delta, which is one if $i = j$ and zero otherwise. Here, $k_{i \to j}$ is called the transition rate from the state $i$ to $j$ for $i \ne j$. The conservation of probability imposes the constraint that $\sum_j k_{i \to j} = 0$, from which we obtain $k_{i \to i} = -\sum_{j \ne i} k_{i \to j}$. The transition rates completely determine the stochastic evolution of the system. The values of $k_{i \to j}$ are considered to be time-independent constants, and the term time-homogeneous Markov process is sometimes used to emphasize this fact. The equation for the continuous-time Markov process is obtained from Eq.(\ref{mard}) by taking the limit $\Delta t \to 0$: \begin{equation} \frac{d \pi_i(t)}{dt} = \sum_j \pi_j(t) k_{j \to i},\label{markov} \end{equation} which is called the master equation. It is a well-known fact that under appropriate conditions, the probability distribution of a Markov chain converges to a unique stationary distribution $\pi^{\rm st}_i$ regardless of the initial distribution~\cite{markov1}. A stationary distribution satisfies the balance condition: \begin{equation} \sum_j \pi^{\rm st}_j k_{j \to i}=\sum_j \left[ \pi^{\rm st}_j k_{j \to i} - \pi^{\rm st}_i k_{i \to j} \right] = 0\quad \forall i , \label{balance} \end{equation} where the second expression follows from the first by the conservation of probability, $\sum_j k_{i \to j} =0$. A stationary distribution is considered a true equilibrium, which we now denote as $\pi^{\rm eq}_i$, only if a stronger condition called detailed balance holds: \begin{equation} \pi^{\rm eq}_j k_{j \to i} - \pi^{\rm eq}_i k_{i \to j} =0\quad \forall i,j. \label{db} \end{equation} A given Markovian transition matrix admits an equilibrium solution with detailed balance if and only if Kolmogorov's criterion is satisfied.
It states that for any cycle of states $i_0, i_1, \cdots, i_n, i_0$, the product of forward transition rates over the cycle is equal to that of the reverse rates~\cite{kogo,kelly}: \begin{equation} k_{i_0 \to i_1} k_{i_1 \to i_2} \cdots k_{i_{n-1} \to i_n} k_{i_{n} \to i_0} = k_{i_0 \to i_n} k_{i_{n} \to i_{n-1} } \cdots k_{i_2 \to i_1} k_{i_1 \to i_0} .\label{kobo} \end{equation} Therefore, we see that the existence of a cycle in the network topology of a Markov process is a necessary condition for its stationary distribution to violate the detailed balance condition. From here on, we will call a Markov model that violates Kolmogorov's criterion simply a cyclic Markov model, since cycles that satisfy Kolmogorov's criterion are not of interest here. Because a Markov process is a coarse-grained description, a state labelled with the index $i$ is usually not a true microstate of the closed system, but rather an aggregate of such microstates. The crucial assumption underlying the coarse-graining that leads to Eq.(\ref{markov}) is that of instantaneous local equilibrium: the equilibration between the microstates within each Markov state occurs much faster than the transition between distinct Markov states. Therefore, we may refer to the index labelling the Markov states as the slow variable, and to the underlying additional hidden index required for specifying the microstate as the fast variable. Once instantaneous local equilibrium is assumed, the detailed balance condition for an equilibrium distribution follows from the symmetry of the underlying microscopic laws under time reversal, under the condition that the index $i$ is invariant under time reversal, which will be assumed to be true throughout this paper\cite{groot,zwan}. This suggests that any closed system can be described by a Markov process that satisfies Kolmogorov's criterion if coarse-graining is performed properly. However, cyclic Markov processes that violate Kolmogorov's criterion are often used to model systems continuously driven out of equilibrium by an external agent\cite{BS,cycle1,cycle2,wang,QH11,QH12,QH2,QHx1,QHx2,GQ1}. The stationary state of such a model is called the nonequilibrium steady state\cite{GQ2,QH11,QH12,QH2,QHx1,QHx2,GQ1,QH31,QH32,hi12} because the detailed balance condition does not hold. It has been argued that these rather contradictory views can be reconciled if the cyclic Markov process is embedded in a larger Markov model that explicitly includes the degrees of freedom for the driving agent\cite{zmh}. Obviously, the total system consisting of the driven system plus the outside environment containing the driving agent forms a closed system, which will eventually reach equilibrium. For example, a cyclic Markov model can be used to describe a biochemical cycle driven by ATP. However, from a more global point of view, the cycle will stop once all ATP molecules are used up. If we consider a situation where ATP itself is regenerated by food intake, we know that the cycle is still a part of a larger cycle driven by the sun. Considering a closed system that includes all biological organisms as well as the sun, the whole system will reach equilibrium once the sun has burnt out and all life processes have ended. Therefore, the dynamics of the driven system are described by a model in which the transition rates are time-dependent. Since the transition rates change with time, it is possible that the rates violate Kolmogorov's criterion at earlier times, but satisfy the criterion as the system reaches equilibrium.
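Kolmogorov's criterion in Eq.~(\ref{kobo}) is easy to test numerically for a small network. The Python sketch below, written for a three-state cycle with arbitrary illustrative rates (not those of any model considered later), compares the forward and reverse cycle products and, equivalently, checks whether the stationary distribution satisfies detailed balance.
\begin{verbatim}
# Numerical test of Kolmogorov's criterion for a three-state cycle.
# The rate values below are arbitrary illustrative numbers.
import numpy as np

def generator(off_diag):
    """Build K with K[i, j] = k_{i->j} and K[i, i] = -sum_{j != i} k_{i->j}."""
    K = np.array(off_diag, dtype=float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))
    return K

def stationary(K):
    """Stationary distribution: normalised left null vector of the generator."""
    w, v = np.linalg.eig(K.T)
    pi = np.real(v[:, np.argmin(np.abs(w))])
    return pi / pi.sum()

K = generator([[0.0, 1.0, 0.5],
               [0.2, 0.0, 0.7],
               [0.9, 0.3, 0.0]])

forward = K[0, 1] * K[1, 2] * K[2, 0]
reverse = K[0, 2] * K[2, 1] * K[1, 0]
print("forward cycle product:", forward)      # unequal products =>
print("reverse cycle product:", reverse)      # Kolmogorov's criterion is violated

pi = stationary(K)
flux = pi[:, None] * K                        # flux[i, j] = pi_i k_{i->j}
print("max |pi_i k_ij - pi_j k_ji| =", np.max(np.abs(flux - flux.T)))  # > 0: no detailed balance
\end{verbatim}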
A cyclic Markov process is thus clearly an approximate description, valid only for time periods much earlier than the equilibration of the total closed system. In this work, it is shown that for any time-homogeneous Markov model without detailed balance, an extended Markov model that explicitly includes the degrees of freedom for the driving agent and satisfies Kolmogorov's criterion can be constructed. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom of the driving agent. By constructing the extended model, the widely accepted formula for the entropy production in a cyclic Markov model can be explicitly expressed as the time derivative of an entropy component. Furthermore, an analytic expression is presented for this entropy component, which is hidden in the original cyclic Markov model. \section{Derivation of Markov model that violates detailed balance} \subsection{Three-state model} Before providing a derivation for general Markov processes without detailed balance, a simple example of a discrete-time Markov process is presented, consisting of the three states shown in Figure 1(a), where $a b c \ne \alpha \beta \gamma$. Now, consider an extended model where the state of the driving agent, labelled by an integer $X\ (0 \le X \le N)$, is explicitly included. We assume that the change of $X$ is uniquely determined for each $i \to j$ transition, denoted by $\Delta X (i \to j)$. We also assume that $\Delta X (i \to j) = -\Delta X (j \to i)$. For example, this three-state process may be a biochemical cycle driven by the hydrolysis of ATP to ADP. Then, we may take $N$ to be the total number of ATP and ADP molecules, which is assumed to be fixed, and let $X$ and $N-X$ be the numbers of ATP and ADP molecules, respectively. In this case, $-\Delta X (i \to j)$ is the number of ATP molecules consumed in the biochemical reaction $i \to j$. We require that the sum of $\Delta X(i \to j)$ along the cycle is nonzero. This leads to the absence of any cycle in the extended model, which in turn guarantees detailed balance. There is no unique extended model corresponding to the cyclic Markov model in Figure 1(a). For example, we may have $\Delta X = \pm 1$ for each transition (Figure 1(b)), or alternatively $\Delta X = \pm 1$ only for the transitions between C and A and $\Delta X = 0$ for all the other transitions (Figure 1(c)). However, the model in Figure 1(c) can be clearly mapped into that of Figure 1(b) by redefining the coordinate $X$ and changing the value of $N$. Hence, these models are mathematically equivalent. From here on, $X$ will be defined as in Figure 1(b), so that there are a total of $N+1$ states in the extended model. Then, $X$ is just a serial number attached to the states in the extended model, and does not necessarily coincide with the number of ATP molecules. We now consider a Markov process for the probability distribution $\Pi_{(i,X)}(t)$ of the extended system, and assume that the transition probability $P_{(i,X) \to (j,Y)}$ from $(i,X)$ to $(j,Y)$ in the extended model has the form \begin{equation} P_{(i,X) \to (j,Y)} = p_{i \to j}\delta(Y-X,\Delta X(i \to j)), \label{simp} \end{equation} where $\delta(x,y) \equiv \delta_{x,y}$ is the Kronecker delta function. That is, the nonzero value of the transition probability depends only on $i$ and $j$ (Figure 1(b), (c)).
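To make Eq.~(\ref{simp}) concrete, the Python sketch below assembles the extended transition matrix for the geometry of Figure 1(b), where every transition carries $\Delta X=\pm 1$ and the extended states therefore form a single chain $X=0,\dots,N$ whose driven-system label cycles through A, B, C; the illustrative values of $p_{i\to j}$ and the convention of folding the blocked boundary moves into the self-transition (so that each row still sums to one) are assumptions made only to keep the example self-contained.
\begin{verbatim}
# Sketch of the extended transition matrix implied by Eq. (simp) for the
# geometry of Figure 1(b): every transition has Delta X = +/-1, so the
# extended states form a chain X = 0, ..., N with driven-system label
# i(X) = X mod 3.  Blocked boundary moves are folded into the self-loop
# so that each row sums to one (an assumed boundary convention).
import numpy as np

N      = 30                          # small chain, for illustration only
labels = "ABC"
p      = {("A", "B"): 0.3, ("B", "C"): 0.2, ("C", "A"): 0.4,   # illustrative values,
          ("B", "A"): 0.1, ("C", "B"): 0.3, ("A", "C"): 0.2}   # not from any model here

P = np.zeros((N + 1, N + 1))
for X in range(N + 1):
    i = labels[X % 3]
    if X < N:                        # forward step of the cycle: Delta X = +1
        P[X, X + 1] = p[(i, labels[(X + 1) % 3])]
    if X > 0:                        # backward step of the cycle: Delta X = -1
        P[X, X - 1] = p[(i, labels[(X - 1) % 3])]
    P[X, X] = 1.0 - P[X].sum()       # self-transition absorbs the remainder

assert np.allclose(P.sum(axis=1), 1.0)          # probability is conserved

# The chain contains no cycle through distinct states, so its stationary
# distribution satisfies detailed balance: Pi_X P[X, X+1] = Pi_{X+1} P[X+1, X].
w, v = np.linalg.eig(P.T)
Pi = np.real(v[:, np.argmax(np.real(w))])
Pi /= Pi.sum()
db = [Pi[X] * P[X, X + 1] - Pi[X + 1] * P[X + 1, X] for X in range(N)]
print("max detailed-balance violation:", np.max(np.abs(db)))   # ~ 0 (machine precision)
\end{verbatim}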
Starting from the extended model, we now sum over the degrees of freedom $X$ to obtain the reduced model describing the time evolution of $\pi_i(t)$. The dynamics of the reduced model is not described by a time-homogeneous Markov process in general, but the transition probability $q(i \to j; t)$ can still be defined, and is given by~(Appendix \ref{trcnd}) \begin{eqnarray} &&q(i \to j; t) \equiv \frac{{\rm Pr}(i,t; j,t+\Delta t)}{\pi_i(t)} \nonumber\\ &=& \frac{ \sum_{X,Y}{\rm Pr}(i,X,t; j,Y,t + \Delta t)}{\pi_i(t)}\nonumber\\ &=& \frac{1}{\pi_i(t)}\sum_{X,Y} \Pi_{(i,X)}(t) P_{(i,X) \to (j,Y)} \nonumber\\ &=& p_{i \to j} \frac{\sum_{X} \Pi_{(i,X)}(t)}{\pi_i(t)} \label{wij} \end{eqnarray} where ${\rm Pr}(i,t; j,t+\Delta t)$ is the joint probability that the state of the driven system is $i$ at time $t$ and $j$ at time $t+ \Delta t$. Similarly, ${\rm Pr}(i,X,t; j,Y,t+\Delta t)$ is the joint probability that the state of the extended model is $(i,X)$ at time $t$ and $(j,Y)$ at time $t+\Delta t$. In Eq.(\ref{wij}), the first line follows from the definition of the transition probability (Appendix \ref{trcnd}), and the same definition was used for the extended model to obtain the third line. Finally, the condition Eq.(\ref{simp}) was used to derive the last line. Note that $X=N$ and $X=0$ are excluded from the summation in the numerator of Eq.(\ref{wij}) for $\Delta X(i \to j) = 1$ and $\Delta X(i \to j)=-1$, respectively, leading to \begin{equation} q (i \to j; t) = \left\{ \begin{array}{ll} p_{i \to j}(1-\frac{\Pi_{(i,N)}(t)}{\pi_i(t)}) & \quad \Delta X(i \to j) = 1,\\ p_{i \to j}(1-\frac{\Pi_{(i,0)}(t)}{\pi_i(t)}) & \quad \Delta X(i \to j) = -1,\\ p_{i \to j} & \quad \mathrm{otherwise}. \end{array} \right. \label{reduced} \end{equation} As mentioned earlier, the coarse-graining of microscopic dynamics under appropriate conditions leads to a time-homogeneous Markov model that satisfies Kolmogorov's criterion. From Eq.(\ref{reduced}), we now see why the cyclic Markov model for the three states violates Kolmogorov's criterion: We cannot assume instantaneous equilibration among the microstates within a state labelled by the index $i$, because the dynamics of the variable $X$ is not fast enough. Only in the limit of $t \to \infty$, $\Pi_{(i,X)}(t)$ approaches the equilibrium distribution where the time-dependence in Eq.(\ref{reduced}) disappears, leading to a time-homogeneous Markov model that satisfies Kolmogorov's criterion. It is straightforward to obtain the analytic form for the equilibrium solution by using the detailed-balance condition~(Appendix \ref{eqsol}). Now consider the early time period. It is clear that if $N \gg 1$ and $\Pi_{(i,X)}(0)$ are nonzero only around the intermediate values of $X$, say $N/2$, then both $\Pi_{(i,0)}(t)$ and $\Pi_{(i,N)}(t)$ remain negligible for $t \ll t_{\rm eq}$, where $t_{\rm eq} = N \Delta t/p$ is the time scale of equilibration, with $p$ denoting the typical size of $p_{i \to j}$. In this regime, $q(i \to j; t) \simeq p_{i \to j}$, and the time-homogeneous cyclic Markov model with broken detailed balance is recovered. The driven system reaches the steady state of the cyclic model around $t_{\rm st} = \Delta t/p$, which is actually a quasi-steady state that persists for $t_{\rm st} \ll t \ll t_{\rm eq}$. Let us refer to the three-state model with transition probability given by Eq.(\ref{reduced}) as model 1~(Figure 1). 
The result of a numerical computation for model 1 is shown in Figure 2, with transition probabilities given by $p_{C \to A} = 0.5$ and $p_{i \to j} = 0.25$ for all other pairs with $i \ne j$. With $N=3000$, the system is initially in the state $(i,X)=(B,1500)$, with $X$ defined as in Figure 1(b). Note that for $t < 1500 \Delta t$, $\Pi_{(i,N)}(t) = \Pi_{(i,0)}(t)=0$, so the system is exactly described by the time-homogeneous three-state Markov model with broken detailed balance (Figure 1(a)). The steady-state distribution and the current of the cyclic model are $(\pi^{\rm st}_A ,\pi^{\rm st}_B,\pi^{\rm st}_C) = (5/12,1/3,1/4)$ and $J^{\rm st} = 1/48$, respectively, which are actually the quasi-steady-state distribution and current of the extended model. We see that the system reaches the quasi-steady state at around $t/\Delta t \sim 4$~(Figure 2(a)). As we look at a longer time scale, we see that the system makes a transition from the nonequilibrium quasi-steady state to true equilibrium with $(\pi^{\rm eq}_A ,\pi^{\rm eq}_B,\pi^{\rm eq}_C) = (2/5,2/5,1/5)$ and $J_{\rm eq}=0$ around $t/\Delta t \sim 25000$~(Figure 2(b)). The three-state system is now described by a time-homogeneous Markov model with detailed balance, where $p_{C \to A} = 0.5$, $p_{B \to C} = 0.125$, and $p_{i \to j} = 0.25$ for all other pairs with $i \ne j$. The condition that the nonzero values of the transition rates depend only on the states of the driven system, Eq.(\ref{simp}), may be too strict to be realistic. We now consider a more general situation, where the nonzero values of the transition rate $P_{(i,X) \to (j,Y)}$ also depend on $X$, so that the constant $p_{i \to j}$ in Eq.(\ref{simp}) is now replaced by $p_{i \to j}(X)$, which is a function of $X$. Even in this more general case, the previous arguments presented under the condition Eq.(\ref{simp}) remain valid, as long as $p_{i \to j}(X)$ is a slowly varying function of $X$ so that \begin{eqnarray} P_{(i,X) \to (j,Y)} &=& p_{i \to j}(X)\delta(Y-X,\Delta X(i \to j))\nonumber\\ &\simeq& p_{i \to j}(X_0)\delta(Y-X,\Delta X(i \to j)) \end{eqnarray} for $X \ll N$. We then get $q(i \to j; t) \simeq p_{i \to j}(X_0)$ for $t \ll t_1 = N \Delta t/p(X_0)$. However, in contrast to the model where the values of $p_{i \to j}$ are constants that are independent of $X$, the system does not reach equilibrium at $t \sim t_1$, because $p_{i \to j}(X)$ deviates significantly from $p_{i \to j}(X_0)$ as $t \to t_1$. This means that $t_{\rm eq}$ is less well defined in the model with $X$-dependent values of the non-zero transition probabilities, suggesting that the transition to equilibrium is smoother. As a simple example, let us consider a three-state model that is more realistic than model 1, which we call model 2, where the transitions $C \to A$ and $A \to C$ are driven by the reactions ${\rm ATP} \to {\rm ADP}+{\rm P}$ and ${\rm ADP}+{\rm P} \to {\rm ATP}$, respectively. We take $N=3000$, as in the case of model 1, where $N+1$ is the total number of states in the extended model, labelled by the coordinates $X=0, \cdots, N$. As in the case of model 1, we assume that the state of the driven system at both ends of the Markov chain is B. It is then easy to see that the numbers of ATP and ADP molecules are $N_{\rm ATP}= [(X+1)/3]$ and $N_{\rm ADP} = N/3-[(X+1)/3]$, respectively, with their total number fixed as $N_{\rm tot}=N/3 =1000$, where $[X]$ denotes the integer part of the number $X$.
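Before turning to the numerical results for model 2, we note that the quasi-steady-state and equilibrium values quoted above for model 1 can be verified with a few lines of Python; the sketch below solves for the stationary distribution of the early-time cyclic chain and of the late-time detailed-balance chain directly, so it is a consistency check on the quoted numbers rather than a reproduction of the full extended-model evolution shown in Figure 2.
\begin{verbatim}
# Consistency check of the distributions and current quoted for model 1.
import numpy as np

def stationary(P):
    """Stationary distribution of a stochastic matrix (eigenvector at eigenvalue 1)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

A, B, C = 0, 1, 2

# Early-time cyclic model: p_{C->A} = 0.5, all other off-diagonal p = 0.25.
P_cyc = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.50, 0.25, 0.25]])
pi_st = stationary(P_cyc)
J_st  = pi_st[A] * P_cyc[A, B] - pi_st[B] * P_cyc[B, A]   # net cycle current per step
print("pi_st =", pi_st)          # (5/12, 1/3, 1/4) = (0.4167, 0.3333, 0.2500)
print("J_st  =", J_st)           # 1/48 ~ 0.0208

# Late-time model with detailed balance: p_{B->C} = 0.125, p_{C->A} = 0.5.
P_eq = np.array([[0.50, 0.25,  0.25 ],
                 [0.25, 0.625, 0.125],
                 [0.50, 0.25,  0.25 ]])
pi_eq = stationary(P_eq)
flux  = pi_eq[:, None] * P_eq
print("pi_eq =", pi_eq)          # (2/5, 2/5, 1/5)
print("detailed balance:", np.allclose(flux, flux.T))     # True
\end{verbatim}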
The result of the numerical computation for model 2 is shown in Figure 2(a) and (c), where the initial condition and parameters are the same as in the case of model 1, except that $p(C \to A) = 0.50 N_{\rm ATP} /N_{\rm tot}$ and $p(A \to C) = 0.25 N_{\rm ADP}/N_{\rm tot} $ so that they are proportional to the numbers of ATP and ADP, respectively. The parameters are chosen so that they coincide with those of model 1 for the initial value of $X=X_0 \equiv 1500$. The behavior of model 2 is almost identical to that of model 1 at early times, as expected (Figure 2 (a)). As in model 1, the system reaches the quasi-steady state at $t_{\rm st}\sim \Delta t/p(X_0)\simeq 4 \Delta t$. However, in contrast to model 1, the system does not make a sharp transition to equilibrium at a well-defined $t_{\rm eq}$, but rather makes a much smoother transition to equilibrium characterized by $(\pi^{\rm eq}_A ,\pi^{\rm eq}_B,\pi^{\rm eq}_C) = (1/3,1/3,1/3)$ and $J_{\rm eq}=0$, as predicted. Further details regarding the equilibrium distribution for both model 1 and model 2 can be found in Appendix \ref{eqsol}. \subsection{General derivation} The discussion above can be generalized to any Markov model that violates detailed balance. Now there can be more than one cycle in the Markov network, and accordingly more than one driving agent. All the degrees of freedom for the driving agent are now grouped and expressed as a vector ${\bf X}$, where we regard the components of ${\bf X}$ to be dimensionless without loss of generality. For example, in a realistic biochemical cycle, ATP will not simply be exhausted, but rather replenished by another biochemical cycle. This biochemical cycle may be coupled to other chemical cycles, which are ultimately coupled to radiation energy coming from the sun. Then, the vector ${\bf X}$ represents the state of all the degrees of freedom involved in driving the biochemical cycle of interest, including the amount of hydrogen in the sun. To encompass both discrete-time models and continuous-time models, I will describe a model in terms of transition rate rather than transition probability, and denote the transition rates in the extended and reduced models by $W_{(i,{\bf X}) \to (j,{\bf Y})}$ and $k_{i \to j}$ respectively. Again, we assume that there is no cycle in the extended model and therefore the detailed balance is satisfied. We also assume that \begin{eqnarray} F({\bf X},i,j) \equiv \sum_{\bf Y} W_{(i,{\bf X}) \to (j,{\bf Y})} \end{eqnarray} is a slowly varying function of $X$. That is, there is a large number $N \gg 1$ such that $F({\bf X},i,j)$ does not deviate significantly from its initial value $F({\bf X}_0,i,j)$ if $|{\bf X}| \ll N$: \begin{eqnarray} F({\bf X},i,j) \simeq k_{i \to j} \equiv F({\bf X_0},i,j) \quad ({\rm for}\ \ |{\bf X}| \ll N). \label{simp2} \end{eqnarray} Then, the transition rates of the driven system are obtained by summing over the states of the driving agent~(Appendix \ref{trcnd}): \begin{eqnarray} &&w(i \to j; t) \equiv \Delta t^{-1}(q(i \to j; t) - \delta_{i,j} ) \nonumber\\ &=& \Delta t^{-1} \frac{1}{\pi_i(t)} \sum_{{\bf X},{\bf Y}}\left( {\rm Pr}(i,{\bf X},t; j,{\bf Y},t + \Delta t) - {\rm Pr}(i,{\bf X},t; j,{\bf Y},t) \right)\nonumber\\ &=& \frac{1}{\pi_i(t)}\sum_{{\bf X},{\bf Y}} \Pi_{(i,{\bf X})}(t) W_{(i,{\bf X}) \to (j,{\bf Y})} \nonumber\\ &\simeq& k_{i \to j} \frac{\sum_{{\bf X}} \Pi_{(i,{\bf X})}(t)}{\pi_i(t)} \simeq k_{i \to j} \label{reduced2} \end{eqnarray} for $t \ll t_1 \equiv N/k$, with $k$ being the typical size of $k_{i \to j}$. 
Again, the summation of ${\bf X}$ in the numerator of the last line excludes the states at the boundary of the Markov network, whose effect is negligible at early times, leading to the final approximation. Although the definition of the transition rate for discrete-time model has been used, the final result does not depend on $\Delta t$, and Eq.(\ref{reduced2}) can be used for both discrete-time and continuous-time models. From here on, let us refer to the time regime $t \ll t_1$, where the system can be described by a time-homogeneous cyclic Markov model without detailed balance, as the cyclic regime. \section{Entropy production} \subsection{Continuous time} The formalism presented in this work clarifies the notion of entropy production~\cite{QHx1,QHx2,hi12,zmh,hi11,jap,hi12a,hi13a,seif1,seif2,sch,AG,tom1,tom2,zia1,zia2} for the case of a Markov process without detailed balance. The entropy production is connected with time-irreversibility via the fluctuation theorem~\cite{evan,gc,kurc,lebo,crooks,maes,seif1,seif2,gj,episto,no,park,kaw,hi13} . For a continuous-time Markov process described by the transition rates $k_{i \to j}$, the entropy production of the whole closed system has been defined as \begin{eqnarray} \Sigma \equiv \sum_{i,j} \pi_i(t) k_{i \to j} \log \left(\frac{ \pi_i(t) k_{i \to j}}{\pi_j(t) k_{j \to i}} \right),\label{sch} \end{eqnarray} where the Boltzmann constant has been set to unity\footnote{The base of the $\log$ function will be kept arbitrary, because the results do not depend on it.}. This formula for the entropy production was originally proposed by Schnakenberg \cite{sch}, and is widely used nowadays~\cite{QHx1,QHx2,hi12,zmh,hi11,jap,hi12a,hi13a,seif1,seif2,AG,tom1,tom2,zia1,zia2}. It can be shown that $\Sigma \ge 0$ for any Markov process~(Appendix \ref{entcyc}, \ref{sigcyc}). The Schnakenberg entropy production $\Sigma$ is explicitly expressed as the time-derivative of an entropy in the case of a Markov model that satisfies Kolmogorov's criterion. From the detailed balance condition in Eq.(\ref{db}), we obtain \begin{eqnarray} \Sigma &=& \sum_{i,j} \pi_i(t) k_{i \to j} \log \left(\frac{ \pi_i(t) \pi^{\rm eq}_j}{\pi_j(t) \pi^{\rm eq}_i} \right)\nonumber\\ &=& -\sum_{i,j} \left( \pi_j(t) k_{j \to i} - \pi_i(t) k_{i \to j} \right) \log \left(\frac{ \pi_i(t) }{\pi^{\rm eq}_i} \right)\nonumber\\ &=& -\sum_{i} \dot \pi_i(t) \log \left(\frac{ \pi_i(t) }{\pi^{\rm eq}_i} \right)\nonumber\\ &=& -\frac{d}{dt}\left[ \sum_{i} \pi_i(t) \log \left(\frac{ \pi_i(t) }{\pi^{\rm eq}_i} \right) \right],\label{sig1} \end{eqnarray} where the condition $\sum_i \dot \pi_i(t) = 0$ was used to derive the last line. Therefore, we find that $\Sigma
Therefore, we find that $\Sigma = \dot S_{\rm closed}$, where \begin{equation} S_{\rm closed} = -\sum_i \pi_i(t) \log \left(\frac{\pi_i(t)}{\pi^{\rm eq}_i}\right) \label{tot} \end{equation} is the entropy of the whole closed system. It takes the form of the negative of the relative entropy, also called the Kullback-Leibler divergence~\cite{KL}. The Kullback-Leibler divergence $D_{\rm KL}(P||Q)$, which measures the distance of a probability distribution $P(i)$ from a given distribution $Q(i)$, is defined as \begin{equation} D_{\rm KL}(P||Q)=\sum_i P(i) \log \frac{P(i)}{Q(i)}. \end{equation} Because of the sign flip, $S_{\rm closed}$ measures the similarity of the distribution $\pi_i(t)$ to $\pi^{\rm eq}_i$. Therefore, we may say that $S_{\rm closed}$ is a nondecreasing function of time because $\pi_i(t)$ converges to $\pi^{\rm eq}_i$ as time proceeds. In fact, $\dot S_{\rm closed} \ge 0$ holds even when $\lim_{t \to \infty} \pi_i(t) \ne \pi^{\rm eq}_i$~(Appendix \ref{entcyc}). The physical interpretation of $S_{\rm closed}$ is very clear. At equilibrium, a closed system has an equal probability to be in each microstate that is consistent with the given constraints~\cite{lee,tolman} (Appendix \ref{shanlee}). Consequently, the equilibrium probability $\pi^{\rm eq}_i$ is proportional to the number of such microstates corresponding to the state $i$, denoted as $\Omega_i$~\cite{lee} (Appendix \ref{shanlee}): \begin{equation} \pi^{\rm eq}_i \propto \Omega_i = B^{S_i}, \label{eqd} \end{equation} where $S_i \equiv \log \Omega_i$ is the Boltzmann entropy corresponding to the state $i$, and $B$ is the base of the log function. From Eq.(\ref{eqd}), we have \begin{equation} S_{\rm closed} = -\sum_i \pi_i(t) \log \pi_i(t) + \sum_i \pi_i(t) S_i + {\rm const}. \label{sysmed} \end{equation} The first term in Eq.(\ref{sysmed}), called the Shannon entropy~\cite{rmp,shannon}, results from the uncertainty of the slow variable $i$. The second term, the average Boltzmann entropy, is due to the uncertainty of the remaining fast degrees of freedom, which are in instantaneous local equilibrium~(Appendix \ref{shanlee}). For a cyclic Markov model without detailed balance, the entropy in Eq.(\ref{tot}) is still a nondecreasing function of time~(Appendix \ref{entcyc}). We now denote it as \begin{equation} S_{\rm cyc} \equiv -\sum_i \pi_i(t) \log \left(\frac{\pi_i(t)}{\pi^{\rm st}_i}\right), \label{scy} \end{equation} with $\pi^{\rm st}_i$ being the stationary distribution without detailed balance. However, $S_{\rm cyc}$ does not lend itself to as clear a physical interpretation as Eq.(\ref{sysmed}). Furthermore, although $\Sigma \ge 0$ remains true regardless of detailed balance, $\Sigma$ is no longer equal to $\dot S_{\rm cyc}$. In fact, it has been shown that $\Sigma > \dot S_{\rm cyc}$~\cite{hg,gq} in the absence of detailed balance (Appendix \ref{sigcyc}). No analytic expression for the entropy component whose time derivative is $\Sigma$ has been constructed so far. In this section, it will be shown that by embedding the cyclic Markov model into a larger Markov model with detailed balance that explicitly includes the degrees of freedom of the drivers, the Schnakenberg entropy production $\Sigma$ can be explicitly expressed as the time derivative of an entropy component under an appropriate condition, justifying its identity as an entropy production.
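As a simple numerical illustration (added here; not part of the original derivation), the identity $\Sigma = \dot S_{\rm closed}$ can be checked directly for a small chain satisfying detailed balance; the rates used below are arbitrary numbers constructed to obey $\pi^{\rm eq}_i k_{i\to j}=\pi^{\rm eq}_j k_{j\to i}$.
\begin{verbatim}
import numpy as np

# Numerical check of Eq.(sig1): for detailed-balanced rates, the Schnakenberg
# production Sigma equals dS_closed/dt.  All numbers are illustrative only.

rng = np.random.default_rng(0)
n = 3
pi_eq = rng.random(n); pi_eq /= pi_eq.sum()
m = rng.random((n, n)); s = 0.5 * (m + m.T)    # symmetric part
k = s / pi_eq[:, None]                         # detailed-balanced rates
np.fill_diagonal(k, 0.0)

def sigma(pi):
    return sum(pi[i] * k[i, j] * np.log(pi[i] * k[i, j] / (pi[j] * k[j, i]))
               for i in range(n) for j in range(n) if i != j)

def S_closed(pi):
    return -np.sum(pi * np.log(pi / pi_eq))

pi = np.array([0.7, 0.2, 0.1])
dt = 1e-6
dpi = pi @ k - pi * k.sum(axis=1)              # master equation
print("Sigma        :", sigma(pi))
print("dS_closed/dt :", (S_closed(pi + dt * dpi) - S_closed(pi)) / dt)
# The two numbers agree to O(dt).  Sigma >= 0 also holds for arbitrary rates
# without detailed balance, but the equality with dS_closed/dt is then lost.
\end{verbatim}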
The total entropy of the extended model is simply obtained from Eq.(\ref{tot}) by making the replacements $\pi_i(t) \to \Pi_{(i,{\bf X})}(t)$ and $\pi^{\rm eq}_i \to \Pi^{\rm eq}_{(i,{\bf X})}$: \begin{equation} S_{\rm tot}\equiv -\sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t)\log \frac{\Pi_{(i,{\bf X})}(t)}{\Pi^{\rm eq}_{(i,{\bf X})}} . \label{clex} \end{equation} By performing the decomposition \begin{equation} \Pi_{(i,{\bf X})}(t) = \pi_i (t) \Pi_{({\bf X}/i)}(t) , \end{equation} where $\pi_i(t) \equiv \sum_{\bf X} \Pi_{(i,{\bf X})}(t)$ is the marginal probability and $\Pi_{({\bf X}/i)}(t) \equiv \Pi_{(i,{\bf X})}(t)/\pi_i(t)$ is the conditional probability, $S_{\rm tot}$ is now decomposed as \begin{equation} S_{\rm tot} = -\sum_i \pi_i(t) \log \pi_i(t) + \sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \Pi^{\rm eq}_{(i,{\bf X})} -\sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \Pi_{({\bf X}/i)}(t). \label{dcmp1} \end{equation} The first term, the Shannon entropy of the driven system \begin{equation} S_{\rm shan} = -\sum_i \pi_i(t) \log \pi_i(t), \label{shan1} \end{equation} results from the uncertainty of the state $i$ of the driven system. The third term, the hidden entropy \begin{equation} S_{\rm hid} = -\sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \Pi_{({\bf X}/i)}(t), \label{hid1} \end{equation} originates from the uncertainty of the driver degrees of freedom ${\bf X}$ for a given value of $i$. Finally, the second term, the average Boltzmann entropy \begin{equation} S_{\rm bol} = \sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \Pi^{\rm eq}_{(i,{\bf X})}= \sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) S_{(i,{\bf X})},\label{abol} \end{equation} comes from the uncertainty of the remaining fast degrees of freedom for given $(i,{\bf X})$. Because fast degrees of freedom are locally equilibrated, the corresponding indices do not appear explicitly in Eq.(\ref{abol})~\cite{lee}(Appendix \ref{shanlee}). It is straightforward to show that (Appendix \ref{totsig}) \begin{equation} \dot S_{\rm tot} - \dot S_{\rm hid} =\dot S_{\rm shan} + \dot S_{\rm bol} = \Sigma_{\rm exact} \label{entpr} \end{equation} where \begin{eqnarray} \Sigma_{\rm exact} &\equiv& \sum_{i,j,{\bf X},{\bf Y}} \Pi_{(i,{\bf X})}(t) W_{(i,{\bf X}) \to (j,{\bf Y})} \log \frac{\pi_i (t) W_{(i,{\bf X}) \to (j,{\bf Y})} }{\pi_j (t) W_{(j,{\bf Y}) \to (i,{\bf X})} }. \label{exent} \end{eqnarray} We now assume that in the cyclic regime where Eq.(\ref{simp2}) holds, the ratio $ W_{(i,{\bf X}) \to (j,{\bf Y})} / W_{(j,{\bf Y}) \to (i,{\bf X})}$ is also determined solely by the indices $i$ and $j$. That is, we assume that \begin{equation} \frac{W_{(i,{\bf X}) \to (j,{\bf Y})}}{W_{(j,{\bf Y}) \to (i,{\bf X})}} \simeq r_{i j},\label{condom} \end{equation} which can be rewritten as \begin{equation} W_{(i,{\bf X}) \to (j,{\bf Y})}= r_{i j} W_{(j,{\bf Y}) \to (i,{\bf X})}. \label{111} \end{equation} By summing Eq.(\ref{111}) over ${\bf X}$ and ${\bf Y}$, we get $r_{i j}=k_{i \to j}/k_{j \to i}$. 
Therefore, the condition Eq.(\ref{condom}) can be rewritten as \begin{equation} \frac{W_{(i,{\bf X}) \to (j,{\bf Y})}}{W_{(j,{\bf Y}) \to (i,{\bf X})}} \simeq \frac{k_{i \to j}}{k_{j \to i}}.\label{condom2} \end{equation} Under the conditions Eq.(\ref{simp2}) and Eq.(\ref{condom2}), $\Sigma_{\rm exact}$ is approximated as \begin{eqnarray} \Sigma_{\rm exact} &\simeq& \sum_{i,j,{\bf X},{\bf Y}} \Pi_{(i,{\bf X})}(t) W_{(i,{\bf X}) \to (j,{\bf Y})} \log \frac{\pi_i (t) k_{i\to j} }{\pi_j (t) k_{j \to i} } \nonumber\\ &=& \sum_{i,j,{\bf X}} \Pi_{(i,{\bf X})}(t) F({\bf X},i,j) \log \frac{\pi_i (t) k_{i\to j} }{\pi_j (t) k_{j \to i} } \nonumber\\ &\simeq& \sum_{i,j} \pi_{i}(t) k_{i \to j} \log \frac{\pi_{i}(t) k_{i \to j}}{\pi_{j}(t) k_{j \to i}} = \Sigma, \end{eqnarray} and we obtain \begin{equation} \dot S_{\rm tot} \simeq \Sigma + \dot S_{\rm hid}. \end{equation} The hidden entropy $S_{\rm hid}$ is a newly identified entropy component that cannot be expressed in terms of the parameters of the reduced model. Hidden entropy production has been discussed previously\cite{kaw,epi,chun}, but an analytic expression for $S_{\rm hid}$ itself had not been derived until now. I also derived the condition Eq.(\ref{condom2}), required in addition to Eq.(\ref{simp2}), for the Schnakenberg entropy production $\Sigma$ to be equal to $\dot S_{\rm tot}-\dot S_{\rm hid}$. If these conditions are not satisfied, then $\Sigma$ should be replaced by the exact form $\Sigma_{\rm exact}$ in Eq.(\ref{exent}). \subsection{Discrete time} Even for a discrete-time Markov jump process, $S_{\rm closed}$ and $S_{\rm cyc}$, defined by Eq. (\ref{tot}) and Eq. (\ref{scy}) respectively, are nondecreasing functions of time~(Appendix \ref{entcyc})\cite{morimoto}. The discrete-time counterpart of $\dot S_{\rm closed}$ is~(Appendix \ref{rem}): \begin{eqnarray} \frac{\Delta S_{\rm closed}}{\Delta t} &\equiv& \left[ S_{\rm closed}(t+\Delta t) - S_{\rm closed}(t)\right]\frac{1}{\Delta t} \nonumber\\ &=& {\Delta t}^{-1} \sum_{i,j} \pi_i(t) p_{i \to j} \log \left(\frac{\pi_i(t) p_{i \to j}}{\pi_j(t +\Delta t) p_{j \to i}}\right) \nonumber\\ &=& \sum_{i,j} \pi_i(t) k_{i \to j} \log \left(\frac{\pi_i(t) k_{i \to j}}{\pi_j(t +\Delta t) k_{j \to i}}\right) + {\Delta t}^{-1} \sum_{i} \pi_i(t) \log \left(\frac{\pi_i(t) }{\pi_i(t +\Delta t) }\right) .\label{entpd} \end{eqnarray} Therefore, it is reasonable to generalize Eq.(\ref{entpd}) to a Markov model without detailed balance and define the Schnakenberg entropy production $\Sigma$ as \begin{eqnarray} \Sigma &\equiv& {\Delta t}^{-1} \sum_{i,j} \pi_i(t) p_{i \to j} \log \left(\frac{\pi_i(t) p_{i \to j}}{\pi_j(t +\Delta t) p_{j \to i}}\right)\nonumber\\ &=& \sum_{i,j} \pi_i(t) k_{i \to j} \log \left(\frac{\pi_i(t) k_{i \to j}}{\pi_j(t +\Delta t) k_{j \to i}}\right) + {\Delta t}^{-1} \sum_{i} \pi_i(t) \log \left(\frac{\pi_i(t) }{\pi_i(t +\Delta t) }\right) \label{schd} \end{eqnarray} for the case of a discrete-time model. As in the case of continuous-time models, $\Sigma > {\Delta S_{\rm cyc}}/{\Delta t} \ge 0$ for discrete-time models in the absence of detailed balance~(Appendix \ref{sigcyc}).
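The discrete-time statements above can also be checked numerically. The sketch below (added for illustration; the transition probabilities are arbitrary) evaluates $\Sigma$ from Eq.(\ref{schd}) together with $\Delta S_{\rm cyc}/\Delta t$ for a cyclic chain without detailed balance.
\begin{verbatim}
import numpy as np

# Check: Sigma of Eq.(schd) exceeds Delta S_cyc / Delta t, which is itself
# nonnegative, for a cyclic chain without detailed balance.

p = np.array([[0.4, 0.5, 0.1],
              [0.1, 0.4, 0.5],
              [0.5, 0.1, 0.4]])     # rows sum to 1; biased A->B->C->A cycle
dt = 1.0

# stationary distribution pi^st (left eigenvector of p with eigenvalue 1)
w, v = np.linalg.eig(p.T)
pi_st = np.real(v[:, np.argmax(np.real(w))])
pi_st /= pi_st.sum()

pi  = np.array([0.8, 0.1, 0.1])     # pi(t)
pi2 = pi @ p                        # pi(t + Delta t)

sigma = sum(pi[i] * p[i, j] * np.log(pi[i] * p[i, j] / (pi2[j] * p[j, i]))
            for i in range(3) for j in range(3)) / dt
S_cyc  = lambda r: -np.sum(r * np.log(r / pi_st))
dS_cyc = (S_cyc(pi2) - S_cyc(pi)) / dt

print("Sigma               =", round(sigma, 4))
print("Delta S_cyc / dt    =", round(dS_cyc, 4))
# One finds Sigma > Delta S_cyc/dt >= 0, as stated in the text.
\end{verbatim}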
We now define $\Sigma_{\rm exact}$ as \begin{eqnarray} \Sigma_{\rm exact} &\equiv& \sum_{i,j,{\bf X},{\bf Y}} \Pi_{(i,{\bf X})}(t) W_{(i,{\bf X}) \to (j,{\bf Y})} \log \frac{\pi_{i}(t) W_{(i,{\bf X}) \to (j,{\bf Y})}}{\pi_{j}(t+ \Delta t) W_{(j,{\bf Y}) \to (i,{\bf X})}}\nonumber\\ &&+ \Delta t^{-1} \sum_{i } \pi_{i}(t) \log \frac{\pi_{i}(t) }{\pi_{i}(t+ \Delta t)}, \label{exd} \end{eqnarray} which is the discrete-time counterpart of Eq.(\ref{exent}). It is then straightforward to show that~(Appendix \ref{totsig}) \begin{equation} \Sigma_{\rm exact} = \frac{\Delta S_{\rm tot}}{\Delta t}- \frac{\Delta S_{\rm hid}}{\Delta t} = \frac{\Delta S_{\rm shan}}{\Delta t}+ \frac{\Delta S_{\rm bol}}{\Delta t},\label{fin1} \end{equation} with the same definitions of $S_{\rm tot}$, $S_{\rm hid}$, $S_{\rm shan}$, and $S_{\rm bol}$ (Eqs.(\ref{clex}), (\ref{hid1}), (\ref{shan1}), and (\ref{abol})) as in the case of continuous time. In the regime where the conditions Eq.(\ref{simp2}) and Eq.(\ref{condom2}) are satisfied, we have \begin{eqnarray} \Sigma_{\rm exact} &\simeq& \sum_{i,j} \pi_{i}(t) k_{i \to j} \log \frac{\pi_{i}(t) k_{i \to j}}{\pi_{j}(t+ \Delta t) k_{j \to i}} \nonumber\\ &+& \Delta t^{-1} \sum_{i } \pi_{i}(t) \log \frac{\pi_{i}(t) }{\pi_{i}(t+ \Delta t)}\nonumber\\ &=& \Sigma, \end{eqnarray} and therefore \begin{equation} \frac{\Delta S_{\rm tot}}{\Delta t} \simeq \Sigma + \frac{\Delta S_{\rm hid}}{\Delta t} . \end{equation} \subsection{Examples with $\Sigma_{\rm exact} \simeq \Sigma$} Eq.(\ref{simp2}) and Eq.(\ref{condom2}) are the crucial assumptions for $\Sigma_{\rm exact}=\dot S_{\rm tot} - \dot S_{\rm hid}$ to be approximated by the Schnakenberg entropy production $\Sigma$. We discuss a couple of examples where these two conditions are satisfied. First, let us assume that the change of ${\bf X}$, $\Delta {\bf X}$, is uniquely determined by the states of the driven system before and after the transition. We write this as $\Delta {\bf X}(i \to j)$, where $i$ and $j$ denote the states before and after the transition, respectively. Then, the transition rate takes the form \begin{equation} W_{(i,{\bf X}) \to (j,{\bf Y})} = g(i \to j; {\bf X}) \delta({\bf Y}-{\bf X},\Delta {\bf X}(i \to j)). \label{simp3} \end{equation} We also assume that \begin{equation} \Delta {\bf X}(i \to j) = -\Delta {\bf X}(j \to i). \label{another} \end{equation} The transition rates in the example considered earlier, namely the ATP-driven biochemical cycle with a fixed total number of ATP and ADP molecules, take the form of Eq. (\ref{simp3}) with Eq. (\ref{another}). If $g(i \to j; {\bf X})$ is a slowly varying function, then \begin{equation} W_{(i,{\bf X}) \to (j,{\bf Y})} \simeq k_{i \to j} \delta({\bf Y}-{\bf X},\Delta {\bf X}(i \to j)) \label{simp4} \end{equation} at early times. It is easy to see that Eq. (\ref{simp4}) and Eq. (\ref{another}) imply that both Eq. (\ref{simp2}) and Eq. (\ref{condom2}) hold. The change of ${\bf X}$ for a given transition $i \to j$ does not have to be unique in order for the conditions in Eq. (\ref{simp2}) and Eq. (\ref{condom2}) to hold. For example, suppose that the numbers of driver molecules consumed in the transitions $C \to A$ and $A \to C$ of the three-state model considered earlier are not unique.
Then, the transition rates take the form $W_{(C,X) \to (A,X-1)} = k_1 X$, $W_{(A,X-1) \to (C,X)} = \tilde k_1 (N-X+1)$, $W_{(C,X) \to (A,X-2)} = k_2 X^2$, $W_{(A,X-2) \to (C,X)} = \tilde k_2 (N-X+2)^2, \cdots$, where $X$ is the number of ATP molecules and $N$ is the total combined number of ATP and ADP molecules. Eq. (\ref{simp2}) holds at early times because $X$ does not deviate significantly from its initial value. Eq. (\ref{condom2}) is also satisfied if $k_1/\tilde k_1 = k_2/\tilde k_2 = \cdots$, because $X \gg 1$ at early times. \subsection{Housekeeping entropy} In a cyclic Markov model without detailed balance, the entropy defined in terms of the nonequilibrium steady state, \begin{equation} S_{\rm cyc} \equiv -\sum_i \pi_i(t) \log \frac{\pi_i(t)}{\pi^{\rm st}_i}, \label{cy3} \end{equation} is often considered, which increases with time as explained earlier. This motivates us to perform an alternative decomposition \begin{eqnarray} S_{\rm tot} &=& -\sum_i \pi_i(t) \log \frac{\pi_i(t)}{\pi^{\rm st}_i} + \sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \frac{\Pi^{\rm eq}_{(i,{\bf X})}}{\pi^{\rm st}_i} -\sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \Pi_{({\bf X}/i)}(t)\nonumber\\ &=& S_{\rm cyc} + S_{\rm hk} + S_{\rm hid}, \label{dcmp2} \end{eqnarray} where we now define the housekeeping entropy $S_{\rm hk}$ as \begin{equation} S_{\rm hk} \equiv \sum_{i,{\bf X}} \Pi_{(i,{\bf X})}(t) \log \frac{\Pi^{\rm eq}_{(i,{\bf X})}}{\pi^{\rm st}_i}. \label{hk2} \end{equation} It is easy to see the equivalence of Eq. (\ref{dcmp2}) to Eq. (\ref{dcmp1}), because the $\pi^{\rm st}_i$ in the first and second terms of Eq. (\ref{dcmp2}) cancel each other, leading to Eq. (\ref{dcmp1}). We then see that \begin{equation} \Sigma_{\rm exact} - \dot S_{\rm cyc} = \dot S_{\rm hk} \end{equation} for a continuous-time model. In the regime where Eq. (\ref{simp2}) and Eq. (\ref{condom2}) are valid, we get \begin{equation} \Sigma - \dot S_{\rm cyc} \simeq \dot S_{\rm hk}. \end{equation} It has been argued that even after the steady state has been reached in a cyclic Markov model, where $\dot S_{\rm cyc} \simeq 0$, heat must be constantly generated in order to maintain the steady state; this is called the housekeeping heat~\cite{hg,gq,op,hs,sp}. Clearly, the generation of such heat is proportional to the production of an entropy component. Eq. (\ref{hk2}) is the analytic formula for this entropy component, which was accordingly termed the housekeeping entropy. Details on the relations between $S_{\rm hk}$ and the housekeeping heat are given in Appendices \ref{open} and \ref{lang}. Both $S_{\rm cyc}$ and $S_{\rm hk}$ are expressed in terms of relative entropies: $S_{\rm cyc}$ in Eq. (\ref{cy3}) measures the similarity of the distribution $\pi_i(t)$ to the quasi-steady state $\pi^{\rm st}_i$, while $S_{\rm hk}$ in Eq. (\ref{hk2}) measures the tendency of $\Pi_{(i,{\bf X})}$ to move away from the quasi-steady state and approach the true equilibrium $\Pi^{\rm eq}_{(i,{\bf X})}$. Note that from the viewpoint of the extended model, $\pi^{\rm st}_i$ is just a quasi-steady state. Therefore, in contrast to the equilibrium distribution $\Pi^{\rm eq}_{(i,{\bf X})} \propto \Omega_{(i,{\bf X})}$, which is expressed in terms of the number of microstates $\Omega_{(i,{\bf X})}$ for a given $(i,{\bf X})$, the quasi-steady state $\pi^{\rm st}_i$ is just an artefact of the dynamics, and does not seem to have a microscopic interpretation as in the case of $\Pi^{\rm eq}_{(i,{\bf X})}$. Because the alternative decomposition in Eq.
(\ref{dcmp2}) is defined in terms of $\pi^{\rm st}_i$, $S_{\rm cyc}$ and $S_{\rm hk}$ do not lend themselves to clear interpretations as uncertainties of some degrees of freedom, in contrast to $S_{\rm shan}$ and $S_{\rm bol}$. Although we considered a continuous-time model in this section, all of the results given here are valid for a discrete-time model if we make the replacements $\dot S_{\rm cyc} \to \Delta S_{\rm cyc}/\Delta t$ and $\dot S_{\rm hk} \to \Delta S_{\rm hk}/\Delta t$. The explicit connections of $S_{\rm hk}$ and $S_{\rm hid}$ to the quantities considered in previous literature are provided in Appendices \ref{open}, \ref{lang}, and \ref{adna}. \subsection{Behavior of various entropy components} Let us summarize the general behavior of various entropy components. We will use the notation for the continuous-time Markov process. The result for the discrete-time Markov process is obtained by simply replacing $\dot S$ with $\Delta S/\Delta t$. We assume that the condition Eq.(\ref{condom2}) holds in the cyclic regime so that $\Sigma_{\rm exact} \simeq \Sigma$. From the results in the literature for cyclic models\cite{hg,gq,morimoto,zmh}, we already know that $\dot S_{\rm cyc} \ge 0$ and $\Sigma-\dot S_{\rm cyc} \ge 0$ in the cyclic regime, and therefore $\dot S_{\rm hk} \ge 0$~(Appendix \ref{entcyc}, \ref{sigcyc}). Once the system reaches the quasi-steady state, $\dot S_{\rm cyc} \simeq 0$. By embedding the cyclic Markov model into a larger model, it can be shown that $\dot S_{\rm hid} \ge 0$ in the cyclic regime~(Appendix \ref{dhid}). During the transition from the quasi-steady state to true equilibrium, we have $\dot S_{\rm cyc} \le 0$, because the distribution diverges from the quasi-steady state. However, we have $\dot S_{\rm hk} + \dot S_{\rm hid} \ge 0$ because $\dot S_{\rm tot} \ge 0$. All entropy components will reach constant values after the system reaches equilibrium. Various entropy components for the previous cyclic three-state discrete-time Markov jump process are shown in Figure 3, where this general behavior is confirmed. \section{Conclusion} It has been shown that for any time-homogeneous Markov process that violates Kolmogorov's criterion, one can always find a larger Markov process that satisfies the criterion, where the degrees of freedom ${\bf X}$ for the driving agent are explicitly included. The original Markov model is then recovered as an approximation at early times after eliminating ${\bf X}$. The nonequilibrium steady state of the original model is then found to be a quasi-steady state. An important contribution of the current work is that by extending the cyclic model to a model with detailed balance, we indeed find analytic expressions for $S_{\rm hk}$ and $S_{\rm hid}$ that satisfy $\dot S_{\rm hk} = \Sigma - \dot S_{\rm cyc}$ and $\dot S_{\rm hid} = \dot S_{\rm tot} - \Sigma$ in the cyclic regime. Here, $S_{\rm hk}$ itself cannot be expressed with parameters in the cyclic Markov model, but its derivative $\Sigma - \dot S_{\rm cyc}$ can. Furthermore, neither $S_{\rm hid}$ nor its derivative can be expressed with parameters of the cyclic Markov model. That is, they are completely hidden in the cyclic Markov model description. The current formalism is very general and can be applied to any closed system. Such a closed system includes, but is not limited to, an open system and an infinitely large heat bath in contact with each other~(Appendix \ref{open}). 
Although we assumed that the state index $i$ is discrete in this work, the extension of the current formalism to a Markov process with a continuous index such as Langevin dynamics~\cite{seif1,seif2,hs,sp,hy2}, is straightforward~(Appendix \ref{lang}). Note that the construction of the extended system is by no means unique. The situation is analogous to that of the canonical ensemble of an equilibrium system, where the properties of the system depend only on the temperature of the heat bath and microscopic details of the bath are arbitrary. \section{Acknowledgement} This work was supported by the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology (NRF-2017R1D1A1B03031344). The author thanks Changbong Hyeon for useful suggestions.
\section{INTRODUCTION} A fundamental difference between quantum and classical physics is the existence of nonclassical correlations in quantum systems, known as quantum entanglement \cite{1}, which have no counterpart in classical systems. Quantum entanglement is therefore considered an important resource in quantum information and quantum computation \cite{1,2}. In addition, quantum entanglement plays a significant role in quantum phase transitions (QPTs). Because the correlation length diverges at a quantum critical point, the entanglement is a good indicator of QPTs \cite{3,4,5}. In the past few years, the behavior of entanglement in the vicinity of quantum critical points has been studied in various spin models \cite{6,7,8,9,10,11,12,13}, with great theoretical success. Recent advances in experimental technologies, for instance cold atoms, trapped ions \cite{14} and ultrafast pulsed lasers \cite{15}, have made it possible to study the dynamics of nonequilibrium quantum many-body systems. Moreover, studying the entanglement dynamics of many-body spin systems not only helps us understand the essence of dynamical quantum phase transitions, but also provides a possible theoretical reference for the design of solid-state quantum computers. Therefore, the entanglement dynamics of many-body quantum systems has attracted widespread attention in recent years. The entanglement dynamics has been studied mainly from two different perspectives. On the one hand, some works studied the propagation of entanglement starting from an initial state in which entanglement has been created in a given portion of the many-body system \cite{16,17,18}. On the other hand, the system is prepared in the ground state of a Hamiltonian $H_{0}$ and the Hamiltonian is then changed in time. Recent advances have demonstrated the effectiveness of a quantum quench approach for the study of entanglement dynamics in many-body spin systems \cite{19,20,21,22,23}. For example, the adaptive time-dependent density-matrix renormalization group was applied to discuss the quantum entanglement dynamics of an open anisotropic spin-$\frac{1}{2}$ Heisenberg chain \cite{19}. The properties of entanglement dynamics in the ITM were also studied by the quantum renormalization group \cite{20}. Moreover, the ground-state fidelity and quench dynamics of the 1D extended quantum compass model in a transverse field were investigated \cite{22}, indicating that the fidelity susceptibility and the LE can detect the QPTs of the inhomogeneous system. In particular, the dynamical quantum phase transitions of an interacting many-body system have been observed experimentally \cite{21}. Previous research was dedicated to understanding the general properties of nonequilibrium quantum states and to extending important concepts such as universality to the nonequilibrium regime. In this paper, we show that the low-energy-state dynamical quantities of the one-dimensional XXZ model can detect the QPTs of the system, and that this behavior is universal. In equilibrium, the entanglement and quantum phase transition of a spin-$\frac{1}{2}$ anisotropic Heisenberg chain have been discussed in Ref. \cite{7}. Here, the entanglement dynamics of the one-dimensional XXZ model is studied using the quantum renormalization group method. It is found that the entanglement varies cosinusoidally with time for an anisotropic interaction quench, whereas it varies sinusoidally with time for a spin direction quench.
Yet these two different quench methods correspond to the same period, as discussed in detail in Sec. \ref{A ZA}. In addition, the evolution of the entanglement with respect to the anisotropy parameter is dramatically different for the two quenching methods, but both exhibit singular behavior at the quantum critical point. To gain further insight, the nonanalytic behavior and the scaling behavior of the entanglement dynamics are studied. The organization of this paper is as follows. In Sec. \ref{come on} we introduce the idea of the quantum renormalization group and apply it to the one-dimensional XXZ model. In Sec. \ref{A ZA} the entanglement dynamics of the spin chain is studied for two kinds of typical quench. We summarize in Sec. \ref{jiayou}. \section{QUANTUM RENORMALIZATION GROUP\label{come on}} We introduce the QRG method, whose main idea is the elimination or thinning of the degrees of freedom of the system followed by an iteration; the purpose of the iteration is to gradually reduce the number of variables until a fixed point is reached. In this paper, Kadanoff's block approach is applied to the XXZ model: the lattice is divided into blocks of three sites each. For each block, a projection operator onto its lowest eigenvectors is constructed, and the full Hamiltonian is projected onto these eigenvectors to obtain an effective Hamiltonian which is structurally similar to the original one \cite{23,24}. The specific procedure is as follows. The Hamiltonian of the XXZ model on a periodic chain with $N$ sites can be written as \begin{equation} H(J,\Gamma )=\frac{1}{4}\sum_{i}^{N}\left[ J\left( \sigma _{i}^{x}\sigma _{i+1}^{x}+\sigma _{i}^{y}\sigma _{i+1}^{y}\right) +\Gamma \sigma _{i}^{z}\sigma _{i+1}^{z}\right] , \label{1a} \end{equation} where $J$ is the exchange coupling constant; $J>0$ corresponds to the antiferromagnetic system and $J<0$ to the ferromagnetic system, and here we only study the antiferromagnetic case. $\gamma =\frac{\Gamma }{J}$ is the anisotropy parameter, and $\sigma _{i}^{\alpha }$ $\left( \alpha =x,y,z\right) $ are the Pauli matrices of the $i$th site. Eq. $\left( \ref{1a}\right) $ can be written as $H=H^{B}+H^{BB}$ by applying Kadanoff's block approach. Here $H^{B}=\sum_{I=1}^{\frac{N}{3}}h_{I}^{B}$, with $h_{I}^{B}=\frac{1}{4}[J(\sigma _{I,1}^{x}\sigma _{I,2}^{x}+\sigma _{I,2}^{x}\sigma _{I,3}^{x}+\sigma _{I,1}^{y}\sigma _{I,2}^{y}+\sigma _{I,2}^{y}\sigma _{I,3}^{y})+\Gamma (\sigma _{I,1}^{z}\sigma _{I,2}^{z}+\sigma _{I,2}^{z}\sigma _{I,3}^{z})]$, being the block Hamiltonian, and $H^{BB}=\frac{1}{4}\sum_{I=1}^{\frac{N}{3}}[J(\sigma _{I,3}^{x}\sigma _{I+1,1}^{x}+\sigma _{I,3}^{y}\sigma _{I+1,1}^{y})+\Gamma \sigma _{I,3}^{z}\sigma _{I+1,1}^{z}]$ is the interblock Hamiltonian. In the matrix representation, each block Hamiltonian $h_{I}^{B}$ can be diagonalized exactly; it has two degenerate ground states, which are used to construct the projection operator $T=\left\vert \varphi _{0}\right\rangle \left\langle \Uparrow \right\vert +\left\vert \varphi _{0}^{\prime }\right\rangle \left\langle \Downarrow \right\vert $. Here $\mid \varphi _{0}\rangle $ and $\mid \varphi _{0}^{\prime }\rangle $ are the two degenerate ground states of the block Hamiltonian $h_{I}^{B}$, and $\mid \Uparrow \rangle $ and $\mid \Downarrow \rangle $ are the effective basis vectors of the renormalized block spin.
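As a cross-check of this step (a sketch added here, not taken from the original paper), the $8\times 8$ block Hamiltonian can be diagonalized numerically; one finds a doubly degenerate ground level at energy $-k/4$, where $k=\Gamma +\sqrt{8J^{2}+\Gamma ^{2}}$ is the combination that appears in the QRG equations below.
\begin{verbatim}
import numpy as np

# Exact diagonalization of the three-site block Hamiltonian h_I^B, checking
# the two-fold degenerate ground state and the ground energy E_0 = -k/4.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def three(a, b, c):
    return np.kron(np.kron(a, b), c)

def block_hamiltonian(J, Gamma):
    h  = J * (three(sx, sx, I2) + three(I2, sx, sx)
            + three(sy, sy, I2) + three(I2, sy, sy))
    h += Gamma * (three(sz, sz, I2) + three(I2, sz, sz))
    return 0.25 * h

J, gamma = 1.0, 0.8          # illustrative values
Gamma = gamma * J
E = np.linalg.eigvalsh(block_hamiltonian(J, Gamma))   # ascending order
k = Gamma + np.sqrt(8 * J**2 + Gamma**2)
print("lowest block levels :", np.round(E[:3], 6))    # first two are degenerate
print("analytic  -k/4      :", round(-k / 4, 6))
\end{verbatim}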
The effective Hamiltonian, $H^{eff}=T^{+}(H^{B}+H^{BB})T$, obtained from the original Hamiltonian and the projection operator, can then be written as \begin{equation} H^{eff}=\frac{1}{4}\sum_{I}^{\frac{N}{3}}\left[ J^{\prime }\left( \sigma _{I}^{x}\sigma _{I+1}^{x}+\sigma _{I}^{y}\sigma _{I+1}^{y}\right) +\Gamma ^{\prime }\sigma _{I}^{z}\sigma _{I+1}^{z}\right] , \label{2a} \end{equation} where \begin{equation} J^{\prime }=\frac{16J^{3}k^{2}}{(8J^{2}+k^{2})^{2}},\quad \Gamma ^{\prime }=\frac{k^{4}\Gamma }{(8J^{2}+k^{2})^{2}},\quad k=\Gamma +\sqrt{8J^{2}+\Gamma ^{2}}. \label{3a} \end{equation} Eqs. $\left( \ref{3a}\right) $ are the QRG equations. We define a dimensionless anisotropy parameter $\gamma =\Gamma /J$, which determines the phase-transition properties of the system. The QRG equations can then be written as \begin{equation} \gamma ^{\prime }=\frac{\gamma }{16}(\gamma +q)^{2},\quad q=\sqrt{8+\gamma ^{2}}. \label{4} \end{equation} The stable and unstable fixed points can be obtained by solving $\gamma \equiv \gamma ^{\prime }\equiv \gamma ^{\ast }$. The stable fixed points are located at $\gamma =0$ and $\gamma =\infty $, and the unstable fixed point is $\gamma =1$, which is the critical point of the model. As the number of QRG iterations increases, starting from any initial value with $\gamma >1$ the coupling parameter flows toward infinity, indicating that the system falls into the universality class of the Ising model, while for $\gamma <1$ the stable fixed point $\gamma =0$ is reached. The model is in a spin-fluid phase for $0\leq \gamma \leq 1$ and in a N\'{e}el phase for $\gamma >1$.
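As a quick illustration (added here; it is not part of the original analysis), the recursion in Eq. $\left( \ref{4}\right) $ can be iterated numerically. The short Python sketch below assumes nothing beyond the recursion itself and reproduces the flow toward the stable fixed points and the invariance of the critical point $\gamma =1$.
\begin{verbatim}
import numpy as np

# Iterating the QRG recursion gamma' = (gamma/16)*(gamma + q)^2, q = sqrt(8+gamma^2).

def qrg_step(gamma):
    q = np.sqrt(8.0 + gamma**2)
    return gamma * (gamma + q)**2 / 16.0

for gamma0 in (0.9, 1.0, 1.1):
    gamma = gamma0
    flow = [gamma]
    for _ in range(8):
        gamma = qrg_step(gamma)
        flow.append(gamma)
    print(f"gamma_0 = {gamma0}: " + "  ".join(f"{g:.4g}" for g in flow))
# gamma_0 < 1 flows to 0 (spin-fluid phase), gamma_0 > 1 flows to infinity
# (Ising/Neel-like phase), and gamma_0 = 1 stays at the critical point.
\end{verbatim}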
\section{ENTANGLEMENT DYNAMICS\label{A ZA}} In this section, the entanglement dynamics of the XXZ spin chain is analyzed when the anisotropic interaction and the spin direction are quenched. These two kinds of quench are discussed in detail below. \subsection{Anisotropic interaction quench} We consider the spin chain initialized in one of the two degenerate ground states, $\mid \varphi _{01}\rangle $, of the XX model. Experimentally, the initial state can be obtained by adding an otherwise inconsequential infinitesimal magnetic field to the XXZ model \cite{25}. In terms of the matrix product state representation \cite{26}, the ground state of the three-site XX model is obtained as $\mid \varphi _{01}\rangle =\frac{1}{2}\mid \uparrow \uparrow \downarrow \rangle -\frac{\sqrt{2}}{2}\mid \uparrow \downarrow \uparrow \rangle +\frac{1}{2}\mid \downarrow \uparrow \uparrow \rangle $, where $\mid \uparrow \rangle $ and $\mid \downarrow \rangle $ are the eigenvectors of $\sigma ^{z}$. The anisotropic interaction parameter is suddenly increased from zero at time $t=0$; in other words, the Hamiltonian is suddenly changed from $H_{01}$ to $H$, where $H$ is the Hamiltonian of the XXZ model. The state of the system evolves as $\mid \varphi _{1}(t)\rangle =e^{-iHt}\mid \varphi _{01}\rangle $, with \begin{equation} \mid \varphi _{1}(t)\rangle =a_{1}\mid \uparrow \uparrow \downarrow \rangle +b_{1}\mid \uparrow \downarrow \uparrow \rangle +a_{1}\mid \downarrow \uparrow \uparrow \rangle , \label{4a} \end{equation} where \begin{eqnarray} a_{1} &=&\frac{e^{\frac{1}{4}iJ\gamma t}\left[ q^{2}\cos (\frac{1}{4}Jqt)-i(\gamma -2\sqrt{2})q\sin (\frac{1}{4}Jqt)\right] }{2q^{2}}, \label{5a} \\ b_{1} &=&-\frac{e^{\frac{1}{4}iJ\gamma t}\left[ \sqrt{2}q^{2}\cos (\frac{1}{4}Jqt)+i(4+\sqrt{2}\gamma )q\sin (\frac{1}{4}Jqt)\right] }{2q^{2}}. \end{eqnarray} Thus the pure-state density matrix of the three-site system at time $t$ is \begin{equation} \rho _{1}(t)=\mid \varphi _{1}(t)\rangle \langle \varphi _{1}(t)\mid . \label{6a} \end{equation} In the product basis of $\sigma ^{z}$, $\rho _{1}(t)$ can be written as \begin{equation} \rho _{1}(t)=\left[ \begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & w & x & 0 & w & 0 & 0 & 0 \\ 0 & x^{\ast } & y & 0 & x^{\ast } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & w & x & 0 & w & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right] . \label{7a} \end{equation} The expectation values of the Pauli matrices and their correlation functions can be obtained from the time-dependent density matrix, \begin{eqnarray} \left\langle \sigma _{1}^{z}\right\rangle &=&y=\frac{1}{2}+\frac{2\sqrt{2}\gamma \sin ^{2}(\frac{1}{4}Jqt)}{q^{2}}, \label{8a} \\ \left\langle \sigma _{1}^{z}\sigma _{3}^{z}\right\rangle &=&y-2w=\frac{4\sqrt{2}\gamma \sin ^{2}(\frac{1}{4}Jqt)}{q^{2}}, \label{8b} \\ \left\langle \sigma _{1}^{x}\sigma _{2}^{x}\right\rangle &=&x+x^{\ast }=-\frac{8+\gamma ^{2}\cos (\frac{1}{2}Jqt)}{\sqrt{2}q^{2}}, \label{8c} \\ \left\langle \sigma _{1}^{x}\sigma _{2}^{y}\right\rangle &=&ix-ix^{\ast }=-\frac{\gamma \sin (\frac{1}{2}Jqt)}{\sqrt{2}q}. \label{8d} \end{eqnarray} When $\gamma $ is fixed, the expectation value of $\sigma _{1}^{z}$ and the correlation functions are all periodic functions of time, as seen from Eqs. $\left( \ref{8a}\right) $--$\left( \ref{8d}\right) $. Similarly, each matrix element of $\rho _{1}(t)$ is also a periodic function of time, because each matrix element can be expressed in terms of these expectation values and correlation functions, as follows: \begin{eqnarray} w &=&\frac{1}{2}\left( \left\langle \sigma _{1}^{z}\right\rangle -\left\langle \sigma _{1}^{z}\sigma _{3}^{z}\right\rangle \right) =\frac{1}{4}-\frac{\sqrt{2}\gamma \sin ^{2}(\frac{1}{4}Jqt)}{q^{2}}, \label{88a} \\ x &=&\frac{1}{2}\left( \left\langle \sigma _{1}^{x}\sigma _{2}^{x}\right\rangle -i\left\langle \sigma _{1}^{x}\sigma _{2}^{y}\right\rangle \right) =-\frac{8+\gamma ^{2}\cos (\frac{1}{2}Jqt)-i\gamma q\sin (\frac{1}{2}Jqt)}{2\sqrt{2}q^{2}}, \\ y &=&\left\langle \sigma _{1}^{z}\right\rangle =\frac{1}{2}+\frac{2\sqrt{2}\gamma \sin ^{2}(\frac{1}{4}Jqt)}{q^{2}}. \end{eqnarray}
It is well known that there are many measures of pairwise entanglement \cite{30,31,32,33}. In this paper, we calculate the concurrence of the system and study how it evolves with time. Without loss of generality, we trace over site $2$. The reduced density matrix of sites 1 and 3 is \begin{equation} \rho _{13}(t)=\left[ \begin{array}{cccc} y & 0 & 0 & 0 \\ 0 & w & w & 0 \\ 0 & w & w & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] . \label{9a} \end{equation} The concurrence between sites 1 and 3 is defined as \begin{equation} C_{1}(t)=\max \{\sqrt{\lambda _{4}}-\sqrt{\lambda _{3}}-\sqrt{\lambda _{2}}-\sqrt{\lambda _{1}},0\}, \label{10a} \end{equation} where the $\lambda _{k}$ $(k=1,2,3,4)$ are the eigenvalues, in ascending order, of $\widehat{R}=\rho _{13}(t)\widetilde{\rho }_{13}(t)$, with $\widetilde{\rho }_{13}(t)=(\sigma _{1}^{y}\otimes \sigma _{3}^{y})\rho _{13}^{\ast }(\sigma _{1}^{y}\otimes \sigma _{3}^{y})$ the spin-flipped density matrix. The eigenvalues of $\widehat{R}$ can be obtained exactly, \begin{equation} \lambda _{1}=\lambda _{2}=\lambda _{3}=0,~~\lambda _{4}=4w^{2}. \label{11a} \end{equation} Therefore, the concurrence of the three-site system corresponding to the first quench procedure is \begin{equation} C_{1}(t)=2w=\left\langle \sigma _{1}^{z}\right\rangle -\left\langle \sigma _{1}^{z}\sigma _{3}^{z}\right\rangle =\frac{1}{2}-\frac{2\sqrt{2}\gamma \sin ^{2}(\frac{1}{4}Jqt)}{q^{2}}. \label{12a} \end{equation} For fixed $\gamma $, $C_{1}(t)$ is a periodic function of time, as seen from Eq. $\left( \ref{12a}\right) $. The concurrence between spin blocks with different numbers of sites can be obtained by iterating the QRG equations step by step. \subsection{Spin $z$ axis rotation quench} The three-site spin system is initialized in the ground state of the Hamiltonian $H_{02}=-\frac{J}{4}[\sigma _{1}^{x}\sigma _{2}^{x}+\sigma _{2}^{x}\sigma _{3}^{x}+\sigma _{1}^{y}\sigma _{2}^{y}+\sigma _{2}^{y}\sigma _{3}^{y}-\gamma (\sigma _{1}^{z}\sigma _{2}^{z}+\sigma _{2}^{z}\sigma _{3}^{z})]$, which is obtained from the XXZ model by rotating all even sites by $\pi $ around the $z$ axis while leaving all odd sites unchanged; this can be achieved experimentally with pulsed lasers \cite{25}. The initial state $\mid \varphi _{02}\rangle =\frac{2}{d}\mid \uparrow \uparrow \downarrow \rangle -\frac{q+\gamma }{d}\mid \uparrow \downarrow \uparrow \rangle +\frac{2}{d}\mid \downarrow \uparrow \uparrow \rangle $, with $d=\sqrt{8+(q+\gamma )^{2}}$, evolves under the Hamiltonian $H$ of the XXZ system for $t>0$. The state of the system evolves as $\mid \varphi _{2}(t)\rangle =e^{-iHt}\mid \varphi _{02}\rangle $, and the time-dependent density matrix of the three-site system is \begin{equation} \rho _{2}(t)=\mid \varphi _{2}(t)\rangle \langle \varphi _{2}(t)\mid . \label{13a} \end{equation} As above, each matrix element of $\rho _{2}(t)$ can be expressed in terms of the expectation values of the Pauli matrices and their correlation functions, and is again a periodic function of time. Using the same method as before, the concurrence of sites $1$ and $3$ is obtained as \begin{equation} C_{2}(t)=\frac{8\gamma -\gamma ^{3}+q^{3}-16\gamma \cos (\frac{1}{2}Jqt)}{2q^{3}}. \label{14a} \end{equation} Obviously, $C_{2}(t)$ is a periodic function of time when $\gamma $ is fixed.
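Eq. $\left( \ref{12a}\right) $ can be verified independently by brute-force numerics. The Python sketch below (added for illustration; it is not the authors' code) prepares the three-site block in $\mid \varphi _{01}\rangle $, evolves it under $H$, traces out site 2, and evaluates the Wootters concurrence; the values agree with the closed form of Eq. $\left( \ref{12a}\right) $.
\begin{verbatim}
import numpy as np

# Cross-check of Eq. (12a) by direct evolution of the three-site block.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
k3 = lambda a, b, c: np.kron(np.kron(a, b), c)

J, gamma = 1.0, 0.9
q = np.sqrt(8.0 + gamma**2)
H = 0.25 * (J * (k3(sx, sx, I2) + k3(I2, sx, sx) + k3(sy, sy, I2) + k3(I2, sy, sy))
            + gamma * J * (k3(sz, sz, I2) + k3(I2, sz, sz)))
evals, V = np.linalg.eigh(H)

up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
# XX-model ground state |phi_01> used in the text
phi01 = 0.5 * k3(up, up, dn) - np.sqrt(0.5) * k3(up, dn, up) + 0.5 * k3(dn, up, up)

def concurrence(rho):
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvals(R).real, 0.0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for t in (0.5, 2.0, 5.0):
    psi = V @ (np.exp(-1j * evals * t) * (V.conj().T @ phi01))
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
    rho13 = np.einsum('abcdbf->acdf', rho).reshape(4, 4)   # trace over site 2
    C1 = 0.5 - 2.0 * np.sqrt(2.0) * gamma * np.sin(J * q * t / 4)**2 / q**2
    print(f"t = {t}:  numerical C_1 = {concurrence(rho13):.6f}   Eq. (12a): {C1:.6f}")
\end{verbatim}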
Similarly, the entanglement between larger spin blocks can be obtained from the QRG equations. For simplicity and without loss of generality, we choose the exchange coupling $J=1.0$ in what follows. \subsection{Evolution of concurrence} It is easy to see from Eqs. $\left( \ref{12a}\right) $ and $\left( \ref{14a}\right) $ that the concurrence of the system depends mainly on the time $t$ and the anisotropy parameter $\gamma $. For different QRG steps, $C_{1}(t)$ and $C_{2}(t)$ versus time for different values of $\gamma $ are plotted in Fig. 1. As can be seen from Figs. 1(a1) and (a2), $C_{1}(t)$ shows a cosine-like variation with time while $C_{2}(t)$ shows a sine-like variation, but $C_{1}(t)$ and $C_{2}(t)$ have the same period in time, $T=\frac{4\pi }{J\sqrt{8+\gamma ^{2}}}$. As shown in Figs. 1(a1) and (a2), as the size of the system increases, the lowest peak of $C_{1}(t)$ gradually becomes higher when $\gamma =0.9$; conversely, each peak of $C_{2}(t)$ gradually decreases. It is interesting that the periods of $C_{1}(t)$ and $C_{2}(t)$ progressively become larger under the QRG iteration, and finally $C_{1}(t)$ and $C_{2}(t)$ become equal to $0.5$ in the thermodynamic limit. As mentioned previously, the coupling constant flows to the stable fixed point $\gamma =0$ under the QRG iteration when $\gamma =0.9$. At $\gamma =0$, the initial Hamiltonian $H_{01}$ and the evolution Hamiltonian $H$ are the same, so that $C_{1}(t)$ does not change with time. For $\gamma =1$ (see the left insets of Figs. 1(a1) and (a2)), the invariance of the concurrence between chains of different lengths is a result of the divergence of the correlation length at $\gamma _{c}=1$. For $\gamma =1.1$ (see the right inset of Fig. 1(a1)), as the QRG iteration tends to infinity the lowest peak of $C_{1}(t)$ decreases at first, then increases, and finally oscillates around $0.5$. In the right inset of Fig. 1(a2), as the size of the system increases, the height of each peak of $C_{2}(t)$ increases at first, then decreases gradually, and finally vanishes as $N\rightarrow \infty $. It is easy to see that increasing the length of the chain shortens the periods of $C_{1}(t)$ and $C_{2}(t)$ when $\gamma =1.1$. In order to further understand the evolution of the entanglement, we calculate the probability for the evolved state to return to the initial ground state for the two different quench types \cite{27,28}: \begin{equation} P_{1}=\left\vert \langle \varphi _{01}\mid \varphi _{1}(t)\rangle \right\vert ^{2}=\frac{16+\gamma ^{2}+\gamma ^{2}\cos (\frac{1}{2}qt)}{16+2\gamma ^{2}}, \label{15a} \end{equation} \begin{equation} P_{2}=\left\vert \langle \varphi _{02}\mid \varphi _{2}(t)\rangle \right\vert ^{2}=\frac{64+\gamma ^{4}+16\gamma ^{2}\cos (\frac{1}{2}qt)}{q^{4}}. \label{16a} \end{equation} Higher values of $P_{1}$ and $P_{2}$ mean that the system returns more easily to the initial state; the system returns completely to its initial state when $P_{1}=1$ or $P_{2}=1$. As can be seen from Eqs. $\left( \ref{15a}\right) $ and $\left( \ref{16a}\right) $, for a fixed value of $\gamma $, $P_{1}$ and $P_{2}$ are periodic functions of time with period $T=\frac{4\pi }{\sqrt{8+\gamma ^{2}}}$. This means that the system returns completely to the initial state once every period; in particular, the greater $\gamma $, the shorter the period. For different QRG steps, the evolution of $C_{1}(t)$ and $C_{2}(t)$ versus $\gamma $ for $t=11.5$ and $t=1.5$ is plotted in Fig. 2.
We find that the dependence of the concurrence on the anisotropy parameter is different for the two quenching methods, which means that the initial state plays an important role in the evolution of the entanglement of the system. Moreover, the short-time behavior (sufficiently near the initial moment) and the long-time behavior (far from the initial moment) of the concurrence are somewhat different. As shown in Fig. 2(b1), at $t=11.5$ the concurrence decreases from its equilibrium value to a finite value and then starts to oscillate when $\gamma $ is turned on. However, at $t=1.5$ (see the inset of Fig. 2(b1)), the concurrence decays from its equilibrium value to zero and then begins to oscillate. As $\gamma $ increases, the value of each trough of $C_{1}(t)$ gradually rises, and finally $C_{1}(t)$ oscillates strongly around $0.5$ as $\gamma $ tends to infinity. This is because the period with which the system returns completely to the initial state approaches zero as $\gamma $ tends to infinity, as can be seen from Eq. $\left( \ref{15a}\right) $; hence $C_{1}(t)$ oscillates violently around $0.5$ as $\gamma \rightarrow \infty $, since the ground-state entanglement of the initial Hamiltonian $H_{01}$ equals $0.5$ and is independent of $\gamma $. The dependence of $C_{2}(t)$ on $\gamma $ is different from that of $C_{1}(t)$ at $t=11.5$ and $t=1.5$. From Fig. 2(b2), as $\gamma $ increases, we find that $C_{2}(t)$ increases from its initial value to a finite value and then begins to oscillate, whereas for $t=1.5$ $\left[ \text{see the inset of Fig. 2(b2)}\right] $ $C_{2}(t)$ reaches its maximum first and then begins to oscillate. The height of each peak of $C_{2}(t)$ gradually decreases as $\gamma $ increases and finally vanishes as $\gamma \rightarrow \infty $; this is because the ground-state entanglement of $H_{02}$ depends on $\gamma $, i.e., it tends to zero as $\gamma \rightarrow \infty $. Although the dependences of $C_{1}(t)$ and $C_{2}(t)$ on $\gamma $ are different, they both show that increasing the length of the chain enhances the oscillation of $C_{1}(t)$ and $C_{2}(t)$. Moreover, in the thermodynamic limit, both $C_{1}(t)$ and $C_{2}(t)$ exhibit a sudden, nonanalytic change at the critical point, as a result of the divergence of the correlation length at $\gamma _{c}=1$. The nonanalytic behavior of a physical quantity is a characteristic of a QPT, and it is often accompanied by scaling behavior, since the correlation length diverges at the critical point. In this section, we demonstrate that a characteristic time can be used to describe the critical phenomenon of the one-dimensional anisotropic XXZ model in the vicinity of the transition point. For any value of the anisotropy parameter, we define the characteristic time $T_{\min }^{k}(\gamma )$ at which $C_{1}(t)$ reaches its $k$th minimum and $T_{\max }^{k}(\gamma )$ at which $C_{2}(t)$ reaches its $k$th maximum. The characteristic time is analyzed as a function of the coupling constant $\gamma $ at different QRG steps. For further insight, we analyze the first derivatives of $T_{\min }$ and $T_{\max }$ with respect to the coupling constant $\gamma $ for $k=1$ in Fig. 3, which show singular behavior at the critical point as the size of the system becomes large.
The insets of Fig. 3 show the change of $T_{\min }$ and $T_{\max }$ versus $\gamma $ at different QRG steps, indicating that $T_{\min }$ and $T_{\max }$ develop two saturated values in the thermodynamic limit. In particular, as shown in Figs. 3(c1) and 3(c2), $\frac{dT_{\min }}{d\gamma }$ and $\frac{dT_{\max }}{d\gamma }$ versus $\gamma $ show the same singular behavior, because the periods of $C_{1}(t)$ and $C_{2}(t)$ are identical when $\gamma $ is constant. More precisely, the position $\gamma _{m}$ of the minimum of $\frac{dT_{\min }}{d\gamma }$ approaches the critical point as the size of the system increases. This is plotted in Fig. 4, which shows the relation $\gamma _{m}=\gamma _{c}+N^{-\theta }$ with $\theta =0.47$. Besides, the scaling behavior of $y\equiv \left\vert \frac{dT_{\min }}{d\gamma }\right\vert _{\gamma _{m}}$ versus $N$ is plotted in Fig. 5, which shows a linear behavior of $\ln (y)$ versus $\ln (N)$, i.e., $\left\vert \frac{dT_{\min }}{d\gamma }\right\vert _{\gamma _{m}}\sim N^{0.46}$. Moreover, the exponent $\theta $ is directly related to the correlation length exponent $\nu $ in the vicinity of the critical point $\gamma _{c}$, i.e., $\theta =\frac{1}{\nu }$. Interestingly, the characteristic time exhibits scaling behavior close to the quantum critical point with exponent $\theta =0.47$, which corresponds to the entanglement exponent of the one-dimensional XXZ model. Remarkably, the scaling behavior of $\left\vert \frac{dT_{\max }}{d\gamma }\right\vert _{\gamma _{m}}$ versus $N$ is the same as that of $\left\vert \frac{dT_{\min }}{d\gamma }\right\vert _{\gamma _{m}}$, because $C_{1}(t)$ and $C_{2}(t)$ have the same period when $\gamma $ is constant; therefore, we only study one of them. \section{SUMMARY\label{jiayou}} In this paper, the dynamics of entanglement of the one-dimensional spin-$\frac{1}{2}$ anisotropic XXZ model is studied using the quantum renormalization-group method. We obtain analytic expressions for the concurrence of the system for two different quenching methods. We find that the initial state plays a key role in the evolution of the entanglement of the system. In order to further understand the dynamics of the entanglement, we investigate the probability that the system returns to the initial state for the two quenching methods; the result shows that in both cases the system returns completely to the initial state with period $T=\frac{4\pi }{\sqrt{8+\gamma ^{2}}}$. The period is related to the anisotropy parameter, i.e., when $\gamma \rightarrow \infty $ the period with which the system returns completely to the initial state approaches zero. We demonstrate that the characteristic time can detect the QPT of the one-dimensional XXZ model. Interestingly, we find that the scaling behavior of $\left\vert \frac{dT_{\max }}{d\gamma }\right\vert _{\gamma _{m}}$ versus $N$ close to the critical point is similar to that of the XXZ model in equilibrium, and that the scaling behavior is independent of the initial state and the quenching method. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China under Grants No. 11847086 and No. 11675090. \end{acknowledgments}
\section{Introduction} Disc galaxies contain a substantial amount of baryonic matter outside the disc, and this extra-planar material has become an important problem to study for a number of reasons. Theoretically, since the pioneering papers by \citet{Spitzer1956} and \citet{Pikelner1958} that invoked the presence of a high pressure `coronal' gas outside the disc, its existence has been debated. Early works on galaxy formation suggested that the collapse of halos should lead to shock heated gas in the halo. However, various physical processes during galactic evolution, such as star formation, galaxy mergers and stripping by the intergalactic medium (IGM), would have played a significant role in shaping the extra-planar material that is observed today. Recent observations have detected this extra-planar gas in HI \citep[e.g.][]{Swaters1997}, in H$\alpha$ \citep{Rossa2004,Voigtlander2013} and in X-rays \citep{Wang2001,Strickland2004}. The extra-planar gas, and its different phases, appear in the literature under different names, a few of which are listed below. {\it Hot halo gas}: In the standard cold dark matter scenario of structure formation in the universe, baryonic gas falls into dark matter potentials and gets heated to the virial temperature \citep{Silk1977,White1978,White1991}. The gas then cools radiatively, and if the cooling is rapid (for $T \le 10^6$ K), as is the case for low mass galaxies, then no accretion shock develops outside the evolving disc, and most of the halo gas remains at a temperature lower than the virial temperature \citep{Binney1977,Birnboim2003}. This scenario of 'cold accretion' for low mass galaxies has received some observational support in recent years (e.g., Smith et al 2008). In massive galaxies, the hot halo gas is believed to cool slowly, and eventually form warm ($10^4$ K) clouds embedded in a large scale hot corona. These clouds could fall down on the disc in the form of high velocity clouds \citep[e.g.][]{Maller2004,Kauffmann2006}. Numerical simulations have also shown that disc galaxies are embedded in a hot gaseous halo (Toft et al 2002; Keres et al 2005), and that the X-ray luminosity of the halo gas should scale with galactic mass. However, this hypothetical hot halo gas is yet to be observed \citep[e.g.][and references therein]{Rasmussen2009}, and observations of the extraplanar gas so far have been limited to regions close to the disc and bulge, or around active star forming regions \citep[e.g.][]{Wang2007}. Moreover, if the halo does contain a large amount of gas, it could potentially explain the missing baryon problem, which states that more than $80\%$ of the baryons are unaccounted for by collapsed gas and stars in galaxies \citep{Fukugita1998,Anderson2010}. { {\it Galactic wind}: The subsequent evolution of the halo gas is thought to be governed both by the inflow of gas from the IGM and by energy injection processes in the disc. Many star forming galaxies have been observed to contain a large amount of outflowing gas \citep[see][for a review]{Veilleux2005}. The early observations of outflowing gas in M82 led to the development of Parker-type steady winds with energy and mass injection from supernovae (SNe) \citep{Burke1968,Johnson1971,Chevalier1985}. These calculations showed that fast steady winds with speeds exceeding $10^3$ km s$^{-1}$ can be generated from the central regions of M82-type starburst galaxies.
\citet{Wang1995} explored steady wind solutions with radiative cooling and showed that O\hskip0.25mm{\sc vi}\ emission in halos can arise from thermally unstable outflows. \citet{Tomisaka1993} studied the SNe driven outflow and its interaction with the halo gas and estimated the extended X-ray emission from an M82-type galaxy. Detailed numerical simulations have been carried out focusing on the properties of outflows and their implications for IGM enrichment \citep[e.g.][]{Suchkov1994,Suchkov1996,MacLow1999,Schaye2008,Hopkins2012}. Recently \citet{Mahavir2013} studied steady galactic winds from dark matter halos with the Navarro-Frenk-White (NFW) density profile \citep{Navarro1997} and found that SNe can drive outflows from dwarf galaxies, while active galactic nuclei (AGN) power outflows with speeds exceeding $10^3$ ${\rm km\ s}^{-1}$\ in massive galaxies, e.g., ultra-luminous infrared galaxies (ULIRGs). They also found that winds from intermediate sized galaxies which are in a quiescent mode of star formation (e.g. the Milky Way) cannot escape the halo. As a consequence of this, one can explain the observed trend of the stellar to halo mass ratio.} {\it Outflowing cold/warm clouds}: The outflowing gas is often observed to contain a clumpy component of neutral or partially ionized atoms at $10^4$ K \citep[e.g.][]{Martin2005}. In starbursts such as M82, molecular clouds and filaments have also been observed in the outflowing gas \citep{Veilleux2009}. The dynamics of these clouds offer useful clues to the formation and evolution of the outflowing gas, and have recently been used to discuss the various physical processes that drive these clouds. \citet{Martin2005} and \citet{Murray2005} suggested that radiation pressure on dust grains embedded in these clouds may play an important role in their dynamics \citep[see also,][]{MahavirFirst,Chattopadhyay2012}. \citet{Mahavir2012} showed that radiation pressure becomes important only for galaxies with mass $\ge 10^{12}$ M$_{\odot}$ and with high SFR, e.g., for ULIRGs. However, the formation process of these clouds remains uncertain. \citet{Marcolini2005} argued that the clouds are likely to be shredded by the Kelvin-Helmholtz instability and/or evaporation due to thermal conduction, on a time scale $\le 1$ Myr. It is therefore a puzzle that these clouds are often observed at distances of several kpc, since the travel time (assuming a speed of a few hundred km s$^{-1}$) is likely to be $\sim 10$ Myr. \citet{Nath2009} and \citet{Murray2011} have suggested a scenario in which radiation pressure pushes a shell of gas and dust, which then fragments after being accelerated by thermal pressure from SNe ejecta. {\it High velocity clouds}: The extraplanar gas is also affected by disc ISM processes in another way. In the galactic fountain model \citep{Shapiro1976,Bregman1980}, partially ionized gas is launched from the disc by the effect of multiple supernovae (SNe), and after travelling some distance in the halo it falls back to the disc. An important problem in this regard is that of the formation of clouds in the extraplanar gas, which are observed as either high velocity clouds (HVCs) or O\hskip0.25mm{\sc vi}\ clouds. \citet{Bregman1980} argued that HVCs could form from the condensation of galactic fountain material \citep[but see][]{Ferrara1992}. However, this idea cannot explain the fact that the observed metallicity is less than that of the disc ISM \citep{vanWoerden2004}.
This is particularly important for HVCs that are more than $\sim 5$ kpc away from the Milky Way disc, which have low metallicity ($Z \sim 0.01\hbox{--}0.2 Z_{\odot}$). Recently, \citet{Binney2009} have argued that thermal instability is not efficient in the moving fountain gas, and that clouds are rather likely to form due to the interaction of the ejected disc material with a pre-existing halo gas \citep[see also][]{Mar2010,Mar2011}. However, the ejected disc material is modeled in a ballistic manner, whereas in reality the outward moving gas is more likely to move like a fluid. \citet{Wakker2005} found that O\hskip0.25mm{\sc vi}\ absorption associated with HVCs can arise from the interface between the HVC and a coronal gas. However, we note that in the HVCs nearer than $\sim 10$ kpc there can be a mixture of components from the Galactic ISM, the interface region with the halo gas, or gas from the Local Group. Thus the overall picture is tangled because of contamination at lower heights, connected with the interaction of falling gas with the extended gas of the galactic corona or with the flows generated by internal galactic processes. {\it Circumgalactic medium}: The gaseous content of spiral galaxies outside the disc but within the dark matter halos, formed and shaped by the various processes of accretion and outflow mentioned above, is generally referred to as the circumgalactic medium (CGM). This refers to the reservoir of gas at distances $\sim 100\hbox{--}250$ kpc. However, its existence, and its relation to processes near and in the disc, are not yet fully understood. Although outflowing gas is observed near the disc, it is not clear how far this gas propagates, nor what the nature of its interaction with pre-existing gas in the halo is. It is uncertain whether most of the wind material escapes the halo or is retained in it, and whether or not most of this gas eventually falls back to the disc. A large amount of hot gas in the halo can potentially distort the cosmic microwave background radiation through the Sunyaev-Zel'dovich effect \citep[e.g.][]{Majumdar2001}. The low density of the CGM makes it an elusive component to observe, but recently it has been detected through absorption along the lines of sight to background quasars. \citet{Tumlinson2011} detected a large amount of O\hskip0.25mm{\sc vi}\ absorbing gas, at $T\sim 10^{5.5}$ K, in galaxies with moderate SFR. However, its spatial extent and the total gas (and metal) content remain to be measured with enough accuracy to constrain theoretical models. In the present paper we address the question of the origin of high velocity clouds, circumgalactic material and the cold clumps in galactic winds, as a result of the interaction of a steady galactic wind with the relic halo gas distribution, using 2D simulations. Our aim is to show that all these constituents of extra-planar material may have a common origin, namely the interaction zone between the galactic outflow and the hot gas in the halo. The remainder of the paper is organised as follows. In \S 2 we describe our simulation set-up, including the initial and boundary conditions. In \S 3 we give analytical estimates for the dynamics of the interaction zone, and compare the timescales relevant for the problem. In \S 4 we present the results of our model runs and then in \S 5 we discuss our results along with their implications.
\section{Simulation set-up} { In this work we use the cylindrical ($R,\phi,z$) coordinate system with cylindrical symmetry around the $z$-axis}, in which the distance ($r$) to any point can be calculated from the in-plane radius ($R$) and height ($z$) using $r=\sqrt{R^2+z^2}$. The unit of density in our simulation is $1.67 \times 10^{-24}$ g cm$^{-3}$, the unit of velocity is 100 ${\rm km\ s}^{-1}$\ and the unit of distance is 1 kpc. Hence the unit of time in the present simulation is 9.5 Myr. \subsection{Initial and boundary conditions} We populate the halo with an isothermal gas distribution which is in hydrostatic equilibrium in an NFW dark matter halo. The gravitational potential for an NFW dark matter halo is given by, \begin{equation} \Phi(r) = - 2 v_s^2 {\ln(1+ r/r_s) \over r/r_s} \label{NFW_pot} \end{equation} where $v_s^2 = G M_{200}/[2 r_s f(c)]$ with $f(c)=\ln(1+c)-c/(1+c)$ and $r_s = r_{200}/c$. Here $c(=10)$ is the halo concentration parameter, $r_{200}$ is the virial radius and $M_{200}$ is the virial mass. { We consider a halo of mass $10^{12}$ M$_\odot$ in our simulation. The corresponding circular speed is $\approx180$ ${\rm km\ s}^{-1}$\ according to the definition given in \citet{Navarro1997}, and the escape speed at the centre is $2v_s$, whose value is $530$ ${\rm km\ s}^{-1}$.} For the gravitational potential in Eq. \ref{NFW_pot}, the hydrostatic density profile is given by \begin{equation} n(r) = n_0 \exp\left[-\frac{\mu m_p (\Phi(r)-\Phi(r_b))}{k T}\right] \label{eq_haloprof} \end{equation} where $n_0$ is the density at the base, which is also the maximum of the initial density, and $\mu (=0.6)$ is the mean molecular weight. We take the density at the base to be $n_0 = 10^{-3}$ cm$^{-3}$. We note that similar cored density profiles, with a central density $\sim 10^{-3}$ cm$^{-3}$, have been deduced for the gas in the halo of the Galaxy in recent studies \citep[e.g.][]{Fang2013,Putman2012,Maller2004}. The above profile is characterised by the temperature $T$. We consider a temperature $T\approx 3\times10^6$ K, which is approximately the virial temperature for the Milky Way-sized halo of mass $10^{12}$ M$_{\odot}$ considered in this simulation. Observations also indicate a similar temperature for the hot gas in the halo of the Galaxy \citep{Hagihara2010,Fang2013}. In Fig. \ref{fig_denprof} we show the density profile of the halo gas for the halo of mass $10^{12}$ ${\rm M}_\odot$ considered in this work. \begin{figure} \includegraphics[scale=0.4]{fig_HGH_density.eps} \caption{The density profile for the gas in the halo.} \label{fig_denprof} \end{figure} { We set up a simulation box in which $R$ ranges from $0$ to $R_{\rm max}$ and $z$ goes from $r_b$ to $z_{\rm max}$, where $r_b$ is the launching height. Therefore the simulation box covers the right half of the upper hemisphere of the galactic halo. The spiral disc lies at the base of the hemisphere, outside ($z<r_b$) our computational box. We use the density profile given by equation (\ref{eq_haloprof}) for the outer boundaries at $z_{\rm max}^+$ and $R_{\rm max}^+$, so that the initial hydrostatic distribution of gas does not change with time. At the lower $z$ boundary, we once again use the density profile given in equation (\ref{eq_haloprof}), except for a region satisfying $0<\left|R\right|<r_b$, $z< r_b$, in which we inject material with a specific density and velocity, as we discuss below. 
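For concreteness, the initial condition of equations (\ref{NFW_pot}) and (\ref{eq_haloprof}) can be tabulated with a short script. The following Python sketch (illustrative only, and independent of the {\sc Pluto} set-up) evaluates $n(r)$ for the parameters quoted above ($M_{200}=10^{12}$ M$_\odot$, $c=10$, $T=3\times10^6$ K, $n_0=10^{-3}$ cm$^{-3}$, $r_b=0.5$ kpc). The value of $r_{200}$ is not quoted in the text and is derived here from $M_{200}$ assuming a mean density of $200\rho_{\rm crit}$ with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$; with this assumption the script recovers the quoted escape speed $2v_s\approx530$ ${\rm km\ s}^{-1}$.
\begin{verbatim}
import numpy as np

# cgs constants
G, m_p, k_B = 6.674e-8, 1.67e-24, 1.38e-16
kpc, Msun   = 3.086e21, 1.989e33

# Halo parameters from the text; r200 derived assuming 200*rho_crit, H0 = 70 km/s/Mpc
M200, c, T, mu, n0 = 1e12 * Msun, 10.0, 3e6, 0.6, 1e-3
r_b = 0.5 * kpc
H0 = 70e5 / (1e3 * kpc)                            # s^-1
rho_crit = 3 * H0**2 / (8 * np.pi * G)
r200 = (3 * M200 / (4 * np.pi * 200 * rho_crit))**(1.0 / 3.0)
r_s  = r200 / c
f_c  = np.log(1 + c) - c / (1 + c)
v_s2 = G * M200 / (2 * r_s * f_c)

def phi(r):                                        # NFW potential, eq. (NFW_pot)
    x = r / r_s
    return -2 * v_s2 * np.log(1 + x) / x

def n_halo(r):                                     # hydrostatic profile, eq. (eq_haloprof)
    return n0 * np.exp(-mu * m_p * (phi(r) - phi(r_b)) / (k_B * T))

print("escape speed at the centre ~ %.0f km/s" % (2 * np.sqrt(v_s2) / 1e5))
for r in [1, 5, 10, 50, 100]:
    print("n(%3d kpc) = %.2e cm^-3" % (r, n_halo(r * kpc)))
\end{verbatim}
The resulting run of $n(r)$ values corresponds to the cored, slowly declining profile shown in Fig. \ref{fig_denprof}.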
} \subsection{Injection parameters} The injection of wind gas is implemented by assuming that a supersonic wind enters the computation zone from below. We inject (continuously in time) the gas with speed $v_{\rm inj}$ and density $n_{\rm inj}$ in a section of the lower boundary which satisfies $0<\left|R\right|<r_b$. The injected gas has a temperature corresponding to a sound speed ($c_s$) somewhat lower than the injection speed ($v_{\rm inj}$), to ensure that the flow is in the supersonic regime. This value of $v_{\rm inj}$ will give a wind speed $v_{\rm wind} \sim 2 v_{\rm inj}$, which can be understood simply from the fact that in the supersonic regime, as the wind diverges, the sound speed decreases to small values and the wind speed approaches the sum of the initial injection speed and the sound speed at the base. A supersonic injection right from the base implies that we have assumed a thermalization zone with dimensions $0<\left|R\right|<r_b$, $0<\left|z\right|<r_b$, centred at the galaxy, in which the energy and mass injection due to stellar processes occurs. The size of the thermalization zone ($r_b$) is in principle a free parameter. \citet{Melioli2004} consider $r_b$ in the range $100\hbox{--}700$ pc. In the present study we work with a fixed value of $r_b=500$ pc. We use a range of injection speeds and densities. { The injection speed and density are functions of the energy injection rate ($\dot E$), the mass loading rate ($\dot M$) and the SFR. If we consider the thermalized injection zone to be approximately spherical with radius $r_b$, then the mass loading rate for the wind can be approximated as $ \dot M = 4\pi {\rm m_p} n_{\rm inj} v_{\rm inj} r_b^2 $. Furthermore, one compares this mass loss rate with the SFR by defining a load factor $\eta$, which can be written as \begin{equation} \eta = \Bigl( \frac{\dot M}{\rm M _\odot\, yr^{-1}} \Bigr) \,{\rm SFR}^{-1}
\label{eq_ett} \end{equation} where the ${\rm SFR}$ is in units of ${\rm M_\odot\, yr^{-1}}$. Energy injection due to the SNe depends on the rate of occurrence of SNe and therefore on the SFR. One can define the energy injection rate as $\dot E = \epsilon f_{\rm sn} E_{\rm sn} {\rm SFR}$ erg yr$^{-1}$. Here $\epsilon$ is the efficiency of the energy injection, whose value is $0.1$ in normal situations, when most of the energy of SNe is lost via radiation, and can be as high as $0.5$ for starburst galaxies like M82. $f_{\rm sn}$ is the number of SNe per unit solar mass of star formation, and its value is $1.26 \times 10^{-2}$ for a Kroupa-Chabrier initial mass function \citep{Chab2003}. $E_{\rm sn}=10^{51}$ erg is the energy yield of a single supernova. Using these values we get the energy injection rate $\dot E = \epsilon (4\times 10^{41})\, {\rm SFR}$ erg s$^{-1}$. For the SNe-driven wind the energy injection rate is the mechanical luminosity of the wind and it is related to the mass loss rate by $\dot E = \frac{1}{2}\dot M v_{\rm wind}^2 = 2 \dot M v_{\rm inj}^2$, where we have used $v_{\rm wind}=2 v_{\rm inj}$. Equating the above two expressions for $\dot E$, we obtain the following relation for the efficiency of energy injection ($\epsilon$), \begin{equation} \epsilon = 0.8\, \Bigl( \frac{\dot M}{\rm M _\odot\, yr^{-1}} \Bigr) \Bigl( \frac{v_{\rm inj}}{500\, {\rm km\, s^{-1}}}\Bigr)^2 {\rm SFR}^{-1} \label{eq_eff} \end{equation} In the 4th column of Table 1 we have provided the quantity $\eta\times{\rm SFR}$, which is essentially the mass loss rate, and in the 5th column we have provided $\epsilon \times {\rm SFR}$ corresponding to each run. One can infer the value of $\eta$ or $\epsilon$ for a particular case by dividing the values in the 4th and 5th columns by the SFR. For example, for an SFR of $3$ M$_\odot$ yr$^{-1}$ in the case of {\it MG1}, we would have $\eta \approx 0.7$ and $\epsilon\approx0.2$. } \subsection{Details of our runs} We use the publicly available hydrodynamic code {\sc Pluto} \citep{Mignone2007}. For our runs we use the solver based on the total variation diminishing Lax-Friedrichs (TVDLF) scheme supplied with the code. {\sc Pluto} has a well tested implementation of radiative cooling \citep{Tesileanu2008}, which is done by solving the energy equation with an energy loss term that, for this study, is a function of density and temperature as given by \citet{Sutherland1993} for equilibrium cooling. We assume solar metallicity for the wind and the halo gas in our simulation. The halo likely has a lower metallicity, but in this work we do not explore the effect of a two-component gas with different metallicities. { We have run the simulation for various values of the initial and injection parameters. The quantitative details of the parameters involved in our runs are provided in Table 1. The various sets of parameters are chosen to study the dependence of the formation of the multiphase medium on the hot halo gas and wind properties. We set resolutions of $250\times500$, $500\times1000$ and $1000\times2000$ for the boxes of dimensions $10\times20, 20\times 40, 50\times 100$ kpc, respectively. Therefore the smallest cell size in our simulations is 40 pc for the boxes with heights 20 and 40 kpc, and 50 pc for the box with height 100 kpc. We find that the clouds formed in our simulations have sizes roughly in the range $100\hbox{--} 500$ pc, and hence the resolution is adequate to capture them. 
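As a cross-check of the $\eta\times{\rm SFR}$ and $\epsilon\times{\rm SFR}$ entries of Table 1 described above, the following minimal Python sketch evaluates $\dot M = 4\pi m_p n_{\rm inj} v_{\rm inj} r_b^2$ and equation (\ref{eq_eff}) for a few runs, assuming $r_b=500$ pc as adopted in \S 2.2; differences at the few-per-cent level with respect to the tabulated values simply reflect the physical constants adopted here.
\begin{verbatim}
import numpy as np

m_p, kpc, Msun, yr = 1.67e-24, 3.086e21, 1.989e33, 3.156e7   # cgs
r_b = 0.5 * kpc                                              # thermalization-zone radius

def table_entries(n_inj, v_inj_kms):
    """Return (eta*SFR, eps*SFR): the mass-loading rate in Msun/yr (eq. eq_ett)
    and the energy-injection efficiency times SFR (eq. eq_eff)."""
    mdot = 4 * np.pi * m_p * n_inj * (v_inj_kms * 1e5) * r_b**2 * yr / Msun
    return mdot, 0.8 * mdot * (v_inj_kms / 500.0)**2

for name, n_inj, v_inj in [("MG1", 0.1, 300), ("MG6", 1.2, 500), ("MG13", 0.01, 200)]:
    eta_sfr, eps_sfr = table_entries(n_inj, v_inj)
    print("%s: eta*SFR ~ %.2f, eps*SFR ~ %.2f" % (name, eta_sfr, eps_sfr))
\end{verbatim}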
We have also checked with more finer resolution (cell size=20 pc), and the cloud sizes are unaffected.} \begin{table} \centering \begin{minipage}{140mm} \caption{The parameters of our runs} \begin{tabular}{@{}cccccc@{}} \hline Name &\, \, $\frac{n_{\rm inj}}{{\scriptsize cm^{-3}}}$ & ${v_{\rm inj} \over {\scriptsize \rm km\, s^{-1}}}$ & $\eta\times{\rm SFR}$ & $\epsilon\times {\rm SFR}$ & ${{\rm box\, size} \over {\scriptsize \rm kpc\times kpc}}$ \\ \hline {\it MG1} & 0.1 & 300 & 2.23 & 0.64& $10\times20$ \\ {\it MG2} & 0.1 & 300 & 2.23 &0.64 & $10\times20$ \\ {\it MG3} & 1.0 & 400 & 29.78 & 15.12 & $20\times40$ \\ {\it MG4} & 2.0 & 600 & 89.34 & 102.56 & $50\times100$ \\ {\it MG5} & 0.5 & 500 & 18.61 & 14.84 & $50\times100$ \\ {\it MG6} & 1.2 & 500 & 44.67 & 35.61 & $50\times100$ \\ {\it MG7} & 0.05 & 500 & 1.86 & 1.48 & $20\times40$ \\ {\it MG8} & 0.4 & 350 & 10.41 & 4.06 & $20\times40 $ \\ {\it MG9} & 2.0 & 350 & 52.03 & 20.32 & $20\times40 $ \\ {\it MG10} & 0.1 & 400 & 2.97 & 1.52 & $20\times40$ \\ {\it MG11} & 0.1 & 500 & 3.72 & 2.96 & $50\times100$\\ {\it MG12} & 0.5 & 500 & 18.58 & 14.81 & $50\times100$ \\ {\it MG13} & 0.01 & 200 & 0.15 & 0.02 & $10\times20$ \\ {\it MG14} & 10.0 & 300 & 223 & 67 & $10\times20$ \\ {\it MG15} & 5.0 & 200 & 74.3 & 9.5 & $10\times20$\\ \hline \end{tabular} \end{minipage} \caption{List and details of runs : Combinations of $n_{\rm inj}$ and $v_{\rm inj}$ adopted for our runs are given in 2nd and 3rd column. In the 4th column we have provided the quantity $\eta\times{\rm SFR}$, which is essentially the mass loss rate and in 5th column we have provided $\epsilon \times {\rm SFR}$ corresponding to each run. One can infer the value of $\eta$ or $\epsilon$ for a particular run by dividing the values mentioned in 4th and 5th column by the SFR (i.e. for an SFR of ${\rm 3\, M_\odot\, yr^{-1}}$ in case of {\it MG1} we would have $\eta \approx 0.7$ and $\epsilon\approx0.2$). In the last column we have provided the dimensions of simulation box. All of our runs except {\it MG1} have an implementation of equilibrium radiative cooling. } \end{table} \section{Analytic considerations} In this section we provide analytical estimates relevant for the simulations in this work. This problem has similarities with the case of a wind blown bubble where the stellar wind is expanding in a uniform density medium \citep{Weaver1977}. However there are some subtle differences between the stellar wind blown bubble and the present case of galactic wind interacting with the halo gas. While in the stellar wind blown bubble the ambient medium is denser than the wind, here it is the opposite, at least for the initial evolution when the wind density is larger compared to the ambient density. Also, the temperature of the ambient medium in this case is $\sim 10^6$ K, which is two orders of magnitude larger than the ISM temperature ($\sim 10^4$ K), relevant for a stellar wind blown bubble. Another difference is the stratification in the density of the ambient halo gas. Because of the stratification, the outermost shock does not slow down quickly and the late time evolution should not be given by a self-similar solution as described in \citet{Weaver1977}. One should instead work out the dynamics after taking the density profile into account as in \citet{Kompaneets1960}. 
Due to this effect, the free expansion phase of the wind lasts longer, and if the wind has enough mechanical luminosity to cross the scale height of the halo gas profile, it may keep expanding freely at the normal wind speed and eventually escape the halo. We quantify these aspects below. In the general case of a supernova blast or a wind interacting with the ambient medium there are three distinct phases of evolution. Initially there is the free expansion phase. When the swept-up mass becomes comparable to the mass in the wind, the wind is obstructed by the outside mass and the free expansion phase ends; the wind then enters the next phase of evolution, in which the motion can be described by a self-similar solution. This phase ends when radiative losses become significant and the system enters a momentum driven phase. Let us make a simple estimate of the distance ($r_{\scriptscriptstyle \rm f}$) at which the transition from the free wind phase to the self-similar phase is expected. It can be computed by equating the swept-up mass with the injected mass. For this rough estimate, we assume the injection region to be spherical with a radius $r_b$, since $r_b$ is much smaller than the length scale of the winds involved in our simulations. This gives, \begin{equation} \int_{r_b}^{r_{\scriptscriptstyle \rm f}} (4\pi r^2) m_{\rm p} n(r) dr = \int_{0}^{t_{\scriptscriptstyle \rm f}} 4\pi m_{\rm p} n_{\rm inj} v_{\rm inj} r_b^2 dt \,, \end{equation} where $n(r)$ is the density profile of the gas in the halo as given by equation ($\ref{eq_haloprof}$) and $t_{\scriptscriptstyle \rm f}$ is the time which marks the end of the free expansion phase. The left hand side in the above equation is the swept-up mass of the halo gas and the right hand side is the total injected mass up to the time $t_{\scriptscriptstyle \rm f}$. Here $r_{\scriptscriptstyle \rm f}$ is related to $t_{\scriptscriptstyle \rm f}$ via $r_{\scriptscriptstyle \rm f} = \int_{0}^{t_{\scriptscriptstyle \rm f}} v_{\rm wind} dt$. As explained in \S 2.2, in the supersonic regime the wind speed is $v_{\rm wind}\sim 2 v_{\rm inj}$, and therefore $t_{\scriptscriptstyle \rm f} = r_{\scriptscriptstyle \rm f}/(2 v_{\rm inj})$. We find that $r_{\scriptscriptstyle \rm f}$ is roughly constant for $n_{\rm inj} \le 0.1$ cm$^{-3}$, and lies in the range of $2 \hbox{--}5$ kpc, but it increases to $\sim 30$ kpc for $n_{\rm inj}\sim 1$ cm$^{-3}$. This implies that for a low injection density, the free expansion phase ends well inside the high density core of the halo gas profile, and the shell will decelerate according to the \citet{Weaver1977} solution. However, for a high injection density, the free expansion phase continues for a longer time. \begin{figure} \includegraphics[scale=0.4]{vsh.eps} \caption{Velocity of the forward shock ($v_{\rm sh}$) from our simulation runs {\it MG6} (dashed line) and {\it MG7} (dotted line) compared with the self-similar solution for a stellar wind blown bubble expanding in a uniform density ISM (solid line). The $v_{\rm sh}$ is shown in units of the free expansion velocity, which is $10^3$ ${\rm km\ s}^{-1}$\ for {\it MG6} and {\it MG7}. } \label{fig_vsh} \end{figure} To illustrate this point, we plot in Figure \ref{fig_vsh} the velocities of the forward shocks from two of our simulation runs ({\it MG6, MG7}), with a high ($n_{\rm inj}=1.2$ cm$^{-3}$) and a low ($n_{\rm inj}=0.05$ cm$^{-3}$) injection density, respectively. 
We show the case of {\it MG6} with a dashed line and that of {\it MG7} with a dotted line, and compare both with the slope of $v_{\rm sh}\propto t^{-2/5}$ (solid line) expected for a stellar wind blown bubble expanding in a uniform density ISM. The case of {\it MG6} with large injection density shows less deceleration compared to {\it MG7}. If we consider a power-law density profile given by $n\propto r^{-m}$, then in the self-similar phase one would expect $r_{\rm sh}\propto t^{3/(5-m)}$ and $v_{\rm sh} \propto t^{-(2-m)/(5-m)}$. The halo gas profile (Fig \ref{fig_denprof}) beyond the core region can be approximated as a power law decreasing with $r$, with index $m\sim 1$. One therefore expects $v_{\rm sh}\propto t^{-0.25}$, implying only a mild decrease in velocity with time (or distance), which indeed is seen in Fig \ref{fig_vsh} for the case of {\it MG6} (dashed line). For the case of {\it MG7} (dotted line), with a low injection density, the final velocity is lower because it enters the self-similar phase at the core region of the halo density profile and therefore it exhibits a steeper decrease in velocity initially. There is one regime in which our results should resemble the self-similar solution of the `standard stellar wind bubble'. In an extreme situation, for very small values of $r_{\scriptscriptstyle \rm f}$, the wind will decelerate quickly. We can quantify this regime in the following manner. The extent of the free expansion phase is small for low injection densities. Therefore, the evolution is governed mainly by the self-similar solution, for which the distance of the outermost shock is $r_{\rm sh} = (Lt^3/\rho)^{1/5}$ and the corresponding velocity is $v_{\rm sh} = 0.6(Lt^{-2}/\rho)^{1/5}$, where $\rho = n\ m_{\rm p}$ is the density of the halo gas, whose value is almost constant ($\sim 10^{-3}$ ${\rm m_p\ cc}^{-1}$\ ) at small distances from the centre. Considering the inverse power law dependence of the velocity ($v_{\rm sh}$) on time, it may so happen that the wind travels a small distance and it decelerates to negligible velocities. In such a case there is no significant large scale wind. Hence for very low mechanical luminosities, when $n_{\rm inj}$ and $v_{\rm inj}$ are both small, we have a small scale wind. Using the mechanical luminosity, $L= (1/2) 4\pi n_{\rm inj} m_p v_{\rm inj}^3 r_b^2$, we can write the following general expression for $r_{\rm sh}$ in this case, \begin{equation} r_{\rm sh}\approx 7 \, {\rm kpc} \, \Bigl ({ n_{\rm inj} \over 0.01 \, {\rm\ cm}^{-3} } \Bigr )^{1/2} \, \Bigl ( {v_{\rm inj} \over 250 \, {\rm km/s}} \Bigr )^{3/2} \, \Bigl ({v_{\rm sh} \over 100 \, {\rm km/s}} \Bigr )^{-3/2} \,. \label{sssw} \end{equation} From the above equation, larger is the value of $n_{\rm inj}$ and $v_{\rm inj}$, larger will be the value of $r_{\rm sh}$. If we assume that the wind decelerates to a small velocity, say $v_{\rm sh}\approx 100$ ${\rm km\ s}^{-1}$\ , before crossing $r_{\rm sh}\approx 7$ kpc (i.e., the core of the gas density profile in halo), then the corresponding values of $n_{\rm inj}$ and $v_{\rm inj}$
integer $q_2$ such that $M_2^+ = f_0^{q_2}(M_1^-)$ so that the global map from $U_{01}$ to $U_2$ is defined as $T_{12}: \Pi_1^- \to \Pi_2^+ = \left. f_\mu^{q_2} \right|_{\Pi_1^-}$. Let $P^{ue}(M_1^-)$ be the tangent plane to $W^{ue}(O_1)$ at $M_1^-$ and $F_2^{ss}(M_{2}^+)$ be the leaf of invariant foliation $F_{2}^{ss}$ on $W^s(O_{2})$ passing through $M_{2}^+$. \begin{df}The heteroclinic intersection of $W^u(O_1)$ and $W^s(O_{2})$ is called simple if image $T_{12}(P^{ue}(M_1^-))$ and leaf $F_{2}^{ss}(M_{2}^+)$ intersect transversely. If this condition is not fulfilled the heteroclinic intersection is non-simple, and undergoes an orbit flip, see fig.~\ref{Case3}. \end{df} \begin{figure} \includegraphics[width=12cm]{Case3_2.pdf} \caption{\label{Case3} A non-simple heteroclinic intersection (orbit flip) of $W^u(O_1)$ and $W^s(O_{2})$.} \end{figure} \subsection{ Local degeneracies.} Now consider the case when all fixed points in the heteroclinic cycle are saddles and all connections are simple. The existence of extended unstable manifold $W^{ue}$ and strong stable manifold $W^{ss}$ is a robust property -- they persists under small parametric perturbation, and with an absence of non-simple global orbits this immediately implies the existence of a global lower-dimensional center manifold along the homoclinic (heteroclinic) cycle. To prevent from this, local bifurcations can occur at one of the saddle points, such that $W^{ue}$ and $W^{ss}$ near this point do not exist at all, cases {\bf I--II.3}. The first bifurcation is the resonance condition when the stable eigenvalues have the same absolute value, but different signs, $\lambda_1 = -\lambda_2 = \lambda$, cases {\bf I--II.3.a}. Under small perturbations here the strong stable manifold appears, but in alternating directions: when $|\lambda_1| < |\lambda_2|$, $W^{ss}$ is tangent to the eigendirection corresponding to $\lambda_1$, and to the eigendirection corresponding to $\lambda_2$ otherwise. \begin{figure} \includegraphics[width=10cm]{Belyakov_cases.eps} \caption{\label{fig:transition1} A Belyakov-like transition. When $\mu_2 > 0$ (left), the stable eigenvalues are real, and when $\mu_2 < 0$ (right) they form a complex-conjugate pair.} \end{figure} The second bifurcation is analogous to the Belyakov resonance for the continuous-time case \cite{Bel80}, that is the boundary between saddle and saddle-focus, cases {\bf I-II.3.b}. At the bifurcation moment the stable multiplier has multiplicity two: $\lambda_1 = \lambda_2 = \lambda$, and under small perturbations such a degenerate saddle becomes a saddle with real eigenvalues or a saddle-focus, see fig.~\ref{fig:transition1}. \begin{remark} Another type of saddle to saddle-focus transition is possible in dimensions higher than three, when a complex-conjugate pair of stable eigenvalues $\lambda_2 e^{\pm i \varphi}$ coincides in absolute value with real stable eigenvalue $\lambda_1$, i.e. $|\lambda_1| = \lambda_2 = \lambda$. In continuous time such a homoclinic bifurcation was studied recently in Ref.~\onlinecite{KMK19}, where it was called the 3DL-bifurcation. Under small perturbations, manifold $W^{ss}$ appears with alternating direction and dimension, namely in the case when $|\lambda_1| < \lambda_2$ it is one-dimensional and tangent to the $\lambda_1$ eigendirection, and when $|\lambda_1| > \lambda_2$, strong stable manifold $W^{ss}$ is two-dimensional and tangent to the eigendirections corresponding to complex eigenvalues $\lambda_2 e^{\pm i \varphi}$. 
This case is out of the scope of the current paper and will be studied separately. \end{remark} \subsection{Main theorems} The main result of the paper is given by the following \begin{theorem} \label{thmmain} Let $f_0$ satisfy conditions {\bf A}--{\bf D} and $f_\mu$ be the three-parametric unfolding of $f_0$ defined above. Then, in any neighbourhood of the origin in the parameter space there exist infinitely many domains $\delta_{k}, \; k = \{k_1, k_2\}$, accumulating to $\mu=0$, such that diffeomorphism $f_\mu$ has a discrete Lorenz attractor for $\mu \in \delta_{k}$. \\ \end{theorem} The proof of Theorem~\ref{thmmain} is based on the fact that for sufficiently large $k$ the first return map can be transformed to the form of the three-dimensional Henon map~(\ref{H3D}) plus asymptotically small terms, see Lemma~\ref{resc_lemma} for more details. According to Ref.~\onlinecite{GOST05}, map~(\ref{H3D}) possesses a discrete Lorenz attractor in some open domain $V$ in the parameter space. Varying the indices $k_1, k_2$ unboundedly, one finds that in the space of the original parameters this domain corresponds to a sequence of domains $V_k$ that accumulate to the point $\mu = 0$. \begin{theorem}\label{conj_inv} Let $f_0$ satisfy conditions {\bf A}--{\bf D} and $f^{-1}_\mu$ be the three-parametric unfolding of its inverse map $f^{-1}_0$. Then, in any neighbourhood of the origin in the parameter space there exist infinitely many domains $\delta_{k}, \; k = \{k_1, k_2\}$, accumulating to $\mu=0$, such that diffeomorphism $f^{-1}_\mu$ has a discrete Lorenz attractor for $\mu \in \delta_{k}$. \\ \end{theorem} \section{Three-dimensional Henon maps and discrete Lorenz attractors}\label{sec:Henon} In this section the main attention is paid to the extension of the results obtained in Refs.~\onlinecite{GOST05} and~\onlinecite{GGOT13} on the birth of discrete Lorenz attractors in local bifurcations of three-dimensional maps. In these papers an analysis of a certain local codimension-three bifurcation was performed, namely when a fixed point possesses multipliers $(-1, -1, +1)$. The reason why this bifurcation is relevant to Lorenz attractors is that the pair of $-1$ multipliers in a single Jordan block provides the same symmetry $(x, y) \to (-x, -y)$ as in the Lorenz system. Also, bifurcations of triply degenerate equilibria with the same symmetry lead to the birth of Lorenz attractors in continuous-time systems~\cite{SST93}. In the 3D Henon map (\ref{H3D}), a fixed point with eigenvalues $(-1, -1, +1)$ exists when $(M_1, M_2, B) = (-1/4, 1, 1)$. In paper~\cite{GOST05} the normal form of this bifurcation was approximated by an ODE system, such that for some $T$ a time-$T$ shift of the flow of that system coincided with map (\ref{H3D}) up to arbitrarily small terms. It was also shown that this system coincides with the Shimizu-Morioka system after a rescaling of coordinates, parameters and time: \begin{equation} \label{sm} \dot X=Y, \;\; \dot Y=X(1-Z)-\lambda Y, \;\;\dot Z=-\alpha Z+X^2. \end{equation} The latter has the Lorenz attractor in some open region of parameters, as proved with computer assistance, using interval arithmetic, in Ref.~\onlinecite{CTZ18}. Thus, the original Henon map can be regarded as the time-$T$ shift of a periodically perturbed ODE system with a Lorenz attractor. From Refs.~\onlinecite{GOST05, GGOT13} it follows that map (\ref{H3D}) possesses a discrete-time analogue of the classical Lorenz attractor. 
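As an aside, the flow normal form (\ref{sm}) is straightforward to explore numerically. The Python sketch below integrates the Shimizu-Morioka system at $\lambda = 0.9$, $\alpha = 0.45$; these parameter values are chosen purely for illustration and are not taken from the present paper or from Ref.~\onlinecite{CTZ18}, so whether they lie inside the rigorously verified attractor region should be checked against the cited references.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, alpha = 0.9, 0.45   # illustrative parameter values (an assumption, see text above)

def shimizu_morioka(t, u):
    # dX/dt = Y, dY/dt = X(1 - Z) - lam*Y, dZ/dt = -alpha*Z + X^2   (eq. sm)
    X, Y, Z = u
    return [Y, X * (1.0 - Z) - lam * Y, -alpha * Z + X * X]

sol = solve_ivp(shimizu_morioka, (0.0, 500.0), [0.1, 0.1, 0.1], max_step=0.01)
X = sol.y[0][sol.t > 100.0]              # discard the transient part of the orbit
# Lorenz-like dynamics shows up as irregular switching between X > 0 and X < 0,
# reflecting the (X, Y) -> (-X, -Y) symmetry of the system.
switches = int(np.sum(np.diff(np.sign(X)) != 0))
print("sign switches of X after the transient:", switches)
\end{verbatim}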
The results above are applicable to orientation preserving maps, as in the proof the Jacobian $B$ varies near $+1$. When the original map $f_0$ is non-orientable, it is possible that the first return map will also reverse orientation, and parameter $B$ will take only negative values. Then the results above do not apply, and in order to find Lorenz-like attractors one needs to look for another parameter domain. Recently, this case was studied in Ref.~\onlinecite{GKKST}. The authors considered a fixed point with eigenvalues $(i, -i, -1)$, which exists in map (\ref{H3D}) for $(M_1, M_2, B) = (7/4, -1, -1)$, and showed that the flow normal form near this bifurcation point possesses a new, 4-winged strange attractor of Lorenz type, which they call the ``Simo's angel''. Numerically such an attractor was observed in Ref.~\onlinecite{GOST05} for $(M_1, M_2, B) = (1.77, -0.925, -0.95)$. Note that the fixed point with eigenvalues $(i, -i, -1)$ has eigenvalues $(-1, -1, +1)$ for the second iterate of the map; however, there is no Jordan block, so the normal form differs significantly from the Shimizu-Morioka system (\ref{sm}). In Refs.~\onlinecite{GGOT13, OT17} the results of Ref.~\onlinecite{GOST05} were extended to a wider class of maps and a more general criterion of existence. It was shown that near a fixed point with multipliers $(-1, -1, +1)$ the map can be represented as the following normal form: \begin{equation}\label{Lor_nf} \begin{array}{rcl} \bar u_1 & = & -u_1 - u_2 \\ \bar u_2 & = & - u_2 + a u_1 u_3 + a_1 u_2 u_3 + O(\|u\|^3) \\ \bar u_3 & = & u_3 + b u_1^2 + b_1 u_2^2 + b_2 u_1 u_2 + b_3 u_3^2 + O(\|u\|^3). \\ \end{array} \end{equation} By Lemma 3.1 from Ref.~\onlinecite{GGOT13}, if $ab > 0$, a discrete Lorenz attractor is born in generic perturbations of system (\ref{Lor_nf}), as in this case its flow approximation is the Shimizu-Morioka system. From the proof of the Lemma it can easily be seen that when $ab < 0$, the flow normal form can also be transformed to the Shimizu-Morioka system, but with a negative scaling of time. This means that in the normal form a discrete Lorenz repeller is born in generic perturbations. It is also worth mentioning that Lemma 3.1 provides a simple criterion for the existence of Lorenz attractors or repellers -- it follows immediately from the signs of the two coefficients $a$ and $b$ of normal form (\ref{Lor_nf}). This will be used below in the study towards the proof of Theorem~\ref{conj_inv}. Now consider the inverse map $f_0^{-1}$, i.e. the diffeomorphism having a homoclinic or heteroclinic cycle composed of $(1, 2)$ saddle or saddle-focus fixed points and one quadratic tangency of invariant manifolds. The first return map $X \to F(X)$ along the cycle, after an appropriate rescaling of coordinates and parameters, can be brought to the form of map (\ref{Henon2}), the inverse of (\ref{H3D}). The correspondence between the parameters is $\displaystyle \hat B = \frac{1}{B}, \;\; \hat M_1 = \frac{M_1}{B^2}, \;\; \hat M_2 = -\frac{M_2}{B}$. This map is also well-known in homoclinic dynamics \cite{GST93c, Tat01, GST96, GST08}. The conservative dynamics of both Henon maps (the case when $B = 1$) was studied in Ref.~\onlinecite{LoMe99}. As map (\ref{Henon2}) is the inverse to (\ref{H3D}), it automatically follows that it has a discrete Lorenz repeller, which appears under perturbations of a fixed point with multipliers $(-1, -1, +1)$. 
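The attractor reported at $(M_1, M_2, B) = (1.77, -0.925, -0.95)$ can be reproduced with a few lines of code. The sketch below assumes for map (\ref{H3D}) the standard form $\bar x = y$, $\bar y = z$, $\bar z = M_1 + Bx + M_2 y - z^2$; map (\ref{H3D}) itself is defined earlier in the paper, and the form is spelled out here only to make the snippet self-contained (up to a renaming of variables it coincides with the limit map (\ref{HIIhet}) of Lemma~\ref{resc_lemma}). The initial point is a guess and may need adjusting to land in the basin of attraction.
\begin{verbatim}
import numpy as np

def henon3d(M1, M2, B, n_iter=200_000, n_skip=5_000, x0=(0.1, 0.1, 0.1)):
    """Iterate xbar = y, ybar = z, zbar = M1 + B*x + M2*y - z**2 (assumed form of H3D)."""
    x, y, z = x0
    pts = []
    for i in range(n_iter):
        x, y, z = y, z, M1 + B * x + M2 * y - z * z
        if not np.isfinite(z) or abs(z) > 1e6:
            raise RuntimeError("orbit escaped; try a different initial point")
        if i >= n_skip:
            pts.append((x, y, z))
    return np.array(pts)

# Parameters where the four-winged "Simo's angel" was observed numerically (Ref. GOST05).
orbit = henon3d(M1=1.77, M2=-0.925, B=-0.95)
print("bounding box of the attractor:")
print(orbit.min(axis=0), orbit.max(axis=0))
\end{verbatim}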
The ``Simo's angel'' near a fixed point with multipliers $(i, -i, -1)$ is also a repeller here for $\hat B < 0$. This means that, in order to find Lorenz-like attractors in map (\ref{Henon2}), one should look not only at fixed points, but also at periodic orbits. That is, one should consider $n$-periodic points that are fixed points of the $n$-th iterate of map (\ref{Henon2}) with multipliers $(-1, -1, +1)$ and a Jordan block. \begin{lm}\label{Lemma_H2} There exist parameter values, for which map $($\ref{Henon2}$)$ has periodic points of period $6$ such that its $6$-th iterate $F^6$ has a fixed point with multipliers $(-1, -1, +1)$. The normal form of this bifurcation is $($\ref{Lor_nf}$)$ with $ab > 0$. \end{lm} {\it Proof}\\ Theoretical computations show that such periodic points do not exist for periods $2$ and $3$. For periods $4$ and $5$, numerical computations show that there exist parameter values such that the $4$th and $5$th iterates of map (\ref{Henon2}) have fixed points with eigenvalues $(-1, -1, +1)$, but in all of them the coefficients of normal form (\ref{Lor_nf}) give $ab < 0$; this means that discrete Lorenz repellers appear near these points. Now consider period-$6$ orbits. These are orbits consisting of $6$ points $Z_1$--$Z_6$ with coordinates $$ (z_1, z_2, z_3) \to (z_2, z_3, z_4) \to (z_3, z_4, z_5) \to (z_4, z_5, z_6) \to (z_5, z_6, z_1) \to (z_6, z_1, z_2) \to (z_1, z_2, z_3), $$ which satisfy the following system of equations: \begin{equation}\label{period-6} \begin{array}{l} z_1 = \hat M_1 + \hat M_2 z_6 + \hat B z_4 - z_5^2, \qquad\qquad z_2 = \hat M_1 + \hat M_2 z_1 + \hat B z_5 - z_6^2 \\ z_3 = \hat M_1 + \hat M_2 z_2 + \hat B z_6 - z_1^2, \qquad\qquad z_4 = \hat M_1 + \hat M_2 z_3 + \hat B z_1 - z_2^2 \\ z_5 = \hat M_1 + \hat M_2 z_4 + \hat B z_2 - z_3^2, \qquad\qquad z_6 = \hat M_1 + \hat M_2 z_5 + \hat B z_3 - z_4^2. \\ \end{array} \end{equation} Each of the $6$-periodic points $Z_1$--$Z_6$ is a fixed point of the sixth iteration of map $F$, i.e. $x \to F^6(x)$. Next we determine the equations that guarantee that matrix $D(F^6)$ possesses eigenvalues $(-1, -1, +1)$ at these points. Namely, condition \begin{equation}\label{Jacobian} \det D(F^6)(Z_1) \equiv \hat B^6 = 1 \end{equation} ensures that the product of the eigenvalues is equal to $1$, condition \begin{equation}\label{trace} tr D(F^6)(Z_1) \equiv tr \left(DF(Z_6) \circ DF(Z_5) \circ DF(Z_4) \circ DF(Z_3) \circ DF(Z_2) \circ DF(Z_1) \right) = -1 \end{equation} makes the sum of the eigenvalues equal to $-1$, and the third one \begin{equation}\label{plus_one} \det(D(F^6)(Z_1) - id) = 0 \end{equation} means that $D(F^6)(Z_1)$ has an eigenvalue $+1$. Then, together with (\ref{plus_one}), equations (\ref{Jacobian}) and (\ref{trace}) imply that the product of the remaining two multipliers is $1$ and their sum is $-2$, so they are both equal to $-1$. Formulas (\ref{period-6})--(\ref{plus_one}) define $9$ equations for $9$ unknowns $z_1, \ldots, z_6, \hat M_1, \hat M_2, \hat B$, so the system of equations is well-posed. One of the numerical solutions of this system is \begin{equation}\label{p6point-} \begin{array}{cc} z_1 = 1.1109087187819051, & z_2 = 0.5430803496704105, \\ z_3 = -0.018564282101437988, & z_4 = -1.0126053862814206, \\ z_5 = -0.3759675295870319, & z_6 = -0.6947447970072144, \\ \hat M_1 = 0.3974562084897318, & \hat M_2 = 0.2271356235631268, \\ \hat B = -1. 
\end{array} \end{equation} For the parameter values given by (\ref{p6point-}), the $6$-th iterate of the map near the point $(z_1, z_2, z_3)$ can be written as normal form (\ref{Lor_nf}) with $a = -0.0555732$ and $b = -1.6955$. According to Ref.~\onlinecite{GGOT13}, Lemma 3.1, a discrete Lorenz attractor (of period $6$) is born in system (\ref{Henon2}) near this bifurcation point for the orientation reversing case, as $\hat B = -1$. For the orientation preserving map (\ref{Henon2}), i.e. when $\hat B > 0$, another numerical solution of equations (\ref{period-6})--(\ref{plus_one}) was found: \begin{equation}\label{p6point+} \begin{array}{cc} z_1 =0.913442745966901, & z_2 =1.220643948207064, \\ z_3 = 1.3256709760748737, & z_4 = 1.1287783775951246, \\ z_5 = 0.7765991221464961, & z_6 = 0.6638157026635255, \\ \hat M_1 = -0.9336687216264129, & \hat M_2 =1.99067193080051, \\ \hat B = 1. \end{array} \end{equation} Normal form (\ref{Lor_nf}) has in this case coefficients $a = -0.107789$ and $b = -0.769823$, thus $ab > 0$, and near this bifurcation point a period-$6$ discrete Lorenz attractor is also born. \section{The first return map and the rescaling lemma}\label{sec:fret} Consider $U$ --- a sufficiently small fixed neighborhood of the homoclinic or heteroclinic cycle under consideration. It is a union of small neighbourhoods ${\textbf U_0} = U_{01}\cup U_{02}$ of the fixed points and small neighbourhoods $U_m$ of all those points of the homoclinic or heteroclinic orbits $\Gamma_{12}$ and $\Gamma_{21}$ that do not belong to ${\textbf U_0}$. Note that there exists only a finite number of such points and neighbourhoods $U_m$. Each single-round periodic orbit that lies entirely in $U$ has exactly one intersection point with each $U_m$, and all its remaining points lie in ${\textbf U_0}$. For each saddle $O_j$, $j = 1,2$, select two points: $M_j^+ \in W^s_{loc}(O_j)$ and $M_j^- \in W^u_{loc}(O_j)$, and their respective neighbourhoods $\Pi_j^+, \Pi_j^- \subset U_{0j}$. The restrictions of diffeomorphism $f_\mu$ onto the neighbourhoods $U_{0j}$ are called the local maps $T_{0j}$. Begin iterating $\Pi_j^+$ under the action of $T_{0j}$. Starting from some number $\bar k_j$, the images $T_{0j}^{k} \Pi_j^+$, $k > \bar k_j$, have nonempty intersections with $\Pi_j^-$. As discussed in section~\ref{sec:def}, there exist numbers $q_{1,2}$ such that $M_2^+ = f_0^{q_2}(M_1^-)$, $M_1^+ = f_0^{q_1}(M_2^-)$. For all small $\mu$ the global maps are defined as $T_{12}: \Pi_1^- \to \Pi_2^+ = \left. f_\mu^{q_2} \right|_{\Pi_1^-}$, $T_{21}: \Pi_2^- \to \Pi_1^+ = \left. f_\mu^{q_1} \right|_{\Pi_2^-}$. Now for every $k = (k_1, k_2)$, where $k_j > \bar k_j$, $j = 1,2$, the first return maps $T_k: V_k \to \Pi_1^+$ are defined as $T_k = T_{21} \circ T_{02}^{k_2} \circ T_{12} \circ T_{01}^{k_1}$, where $V_k \subset \Pi_1^+$ is a subdomain such that $T_{01}^{k_1} (V_k) \subset \Pi_1^-$, $T_{12} \circ T_{01}^{k_1} (V_k) \subset \Pi_2^+$, $T_{02}^{k_2} \circ T_{12} \circ T_{01}^{k_1} (V_k) \subset \Pi_2^-$, and $T_k(V_k) \subset \Pi_1^+$. In order to write the first return map in coordinates, the local and global maps should be represented in the most suitable form. \subsection {Local maps} \begin{table} \caption{\label{table_local} Local maps near fixed points of different types} \begin{ruledtabular} \begin{tabular}{{| c | l | l | c |}} {\bf NN} & {\bf Fixed point} & {\bf Case \#} & {\bf The local map} \\ \hline 1. & Saddle & {\bf I.2, II} & (\ref{eq:T0k_s}) \\ \hline 2. & Saddle-focus & {\bf I--II.1} & (\ref{eq:T0k_sf})\\ \hline 3. 
& Resonant alternating saddle & {\bf I--II.3.a} & (\ref{eq:T0k_sr})\\ \hline 4. & Resonant Belyakov saddle & {\bf I--II.3.b}& (\ref{eq:T0k_bel})\\ \end{tabular} \end{ruledtabular} \end{table} In this subsection formulas for multiple iterations of local maps, $T_{0j}^{k_j}$ are derived for different types of fixed points: saddle, saddle-focus, saddle with the alternating resonance, saddle with the Belyakov resonance. The summary with references to formulas is given in Table~\ref{table_local}. For a saddle fixed point $O_j$ with eigenvalues $\lambda_{1j}$, $\lambda_{2j}$, $\gamma_{j}$, $|\lambda_{2j}| < |\lambda_{1j}|$, the local map can be brought to the main normal form (\ref{t0norm}). This gives us the following formula for its $k$-th iteration (see Refs.~\onlinecite{book, GS92} for details): \begin{equation}\label{eq:T0k_s} \begin{array}{l} x_{1k} = \lambda_{(j)1}^k x_{10} + \hat\lambda_j^{k} \xi^j_{1k}(x_0, y_k, \mu),\\ x_{2k} = \hat\lambda_j^{k} \xi^j_{2k}(x_0, y_k, \mu),\\ y_0 = \gamma_{(j)}^{-k} y_k + \hat \gamma_j^{-k} \xi^j_{3k}(x_0, y_k, \mu). \end{array} \end{equation} Here $0 < |\lambda_{(j)2}| \le \hat \lambda_j < |\lambda_{(j)1}|$, $\hat\gamma_j > |\gamma_{(j)}|$, functions $\xi^j_{mk}$ and their derivatives up to the order $(r - 2)$ are uniformly bounded, and their higher order derivatives tend to zero. Case {\bf I--II.1.} When $O_j$ is a saddle-focus with eigenvalues $\lambda_{(j)} e^{\pm i \varphi_j}$, $\gamma_{(j)}$, where $i^2 = -1$, the $k$-th iteration of the local map has the form \begin{equation}\label{eq:T0k_sf} \begin{array}{l} (x_{1k}, x_{2k})^\top = \lambda_{(j)}^k R_{k \varphi_j}(x_{10}, x_{20})^\top + \hat\lambda_{j}^{k} \xi^j_{1k}(x_0, y_k, \mu),\\ y_0 = \gamma_{(j)}^{-k} y_k + \hat \gamma_j^{-k} \xi^j_{2k}(x_0, y_k, \mu) , \end{array} \end{equation} where $R_\psi$ is the rotation matrix of angle $\psi$. {\bf I--II.3.a.} For the case of a resonant saddle with eigenvalues $\lambda_{(j)1}(0) = -\lambda_{(j)2}(0) = \lambda_{(j)}$ and $\gamma_{(j)}$, the $k$-th iteration can be written as \begin{equation}\label{eq:T0k_sr} \begin{array}{l} x_{1k} = \lambda_{(j)1}^k x_{10} + \hat\lambda_j^{k} \xi^j_{1k}(x_0, y_k, \mu)\\ x_{2k} = \lambda_{(j)2}^k x_{20} +\hat\lambda_j^{k} \xi^j_{2k}(x_0, y_k, \mu)\\ y_0 = \gamma_{(j)}^{-k} y_k + \hat \gamma_j^{-k} \xi^j_{3k}(x_0, y_k, \mu), \end{array} \end{equation} where $0 < \hat \lambda_j < |\lambda_{(j)}|$. Parameter $\mu_2$ unfolds the resonance condition: \begin{equation}\label{mu2_resonant} \frac{\lambda_{(j)1}}{\lambda_{(j)2}} = -1 + \mu_2. \end{equation} {\bf I--II.3.b.} In the case of the Belyakov-type bifurcation $\lambda_{(j)1}(0) = \lambda_{(j)2}(0) = \lambda_{(j)}$, in order to construct smooth parmetric families, it is not possible to use canonical Jordan forms for saddle and saddle-focus~\cite{Arnold_book}, as these two normal forms can not be smoothly conjugated at the bifurcation moment. One of the possible smooth conjugating parametric families is given by the following formula: \begin{equation}\label{lin_bel} Df_\mu(O_j) = \left(\begin{array}{cc} A_s & 0 \\ 0 & \gamma_{(j)}(\mu) \end{array} \right), \;\; A_s = \left(\begin{array}{cc} \lambda_{(j)}(\mu) & 1 \\ \mu_2 & \lambda_{(j)}(\mu) \end{array} \right). \end{equation} When $\mu_2 > 0$, the linearization matrix has real stable eigenvalues $\lambda_{(j)}(\mu) \pm \sqrt{\mu_2}$, and when $\mu_2 < 0$, they form a complex-conjugate pair $\lambda_{(j)}(\mu) \pm i\sqrt{-\mu_2}$. 
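A quick numerical illustration of the last statement: for the family (\ref{lin_bel}), the character of the stable eigenvalues of $A_s$ is controlled entirely by the sign of $\mu_2$. The values used in the snippet below are illustrative only.
\begin{verbatim}
import numpy as np

lam = 0.5                                   # illustrative stable eigenvalue at the resonance
for mu2 in (0.04, 0.0, -0.04):
    A_s = np.array([[lam, 1.0], [mu2, lam]])
    ev = np.linalg.eigvals(A_s)
    # mu2 > 0: real pair lam +- sqrt(mu2); mu2 < 0: complex pair lam +- i*sqrt(-mu2)
    print("mu2 = %+.2f  ->  eigenvalues:" % mu2, np.round(ev, 4))
\end{verbatim}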
The $k$-th power of matrix $A_s$ has the following form: \begin{equation}\label{A_k} A_s^k = \lambda_{(j)}(\mu)^k \left(1 - \frac{\mu_2}{\lambda_{(j)}^2(\mu)}\right)^{k/2} \left(\begin{array}{cc} C_k(\mu) & S_k(\mu) \\ \mu_2 S_k(\mu) & C_k(\mu) \end{array} \right), \end{equation} where \begin{equation}\label{C_k} C_k = \left\{ \begin{array}{l} \cosh{k \varphi_j}, \; \mu_2 \ge 0 \\ \cos{k \varphi_j}, \; \mu_2 < 0 \end{array} \right., \;\; S_k = \left\{ \begin{array}{l} \frac{\sinh{k \varphi_j}}{\sqrt{\mu_2}}, \; \mu_2 > 0 \\ 0, \; \mu_2 = 0 \\ -\frac{\sin{k \varphi_j}}{\sqrt{-\mu_2}}, \; \mu_2 < 0, \end{array} \right. \end{equation} with \begin{equation}\label{phi_j} \varphi_j = \left\{ \begin{array}{l} \mathrm{arctanh}\,{\frac{\sqrt{\mu_2}}{\lambda_j}}, \; \mu_2 \ge 0 \\ -\arctan{\frac{\sqrt{-\mu_2}}{\lambda_j}}, \; \mu_2 < 0 \end{array} \right., \end{equation} and the $k$-th iteration of the local map is written as: \begin{equation}\label{eq:T0k_bel} \begin{array}{l} (x_{1k}, x_{2k})^\top = A_s^k(x_{10}, x_{20})^\top + \hat\lambda_j^{k} \xi^j_{1k}(x_0, y_k, \mu),\\ y_0 = \gamma_{(j)}^{-k} y_k + \hat \gamma_j^{-k} \xi^j_{2k}(x_0, y_k, \mu), \end{array} \end{equation} where $\hat\lambda_j < \lambda_{(j)}$. All possible types of fixed points and the references to the corresponding formulas for the local maps are given in Table~\ref{table_local}. \subsection{Global maps} Recall that in the homoclinic cases the global map takes neighbourhood $\Pi^-$ to $\Pi^+$, while in the heteroclinic cases the global maps take $\Pi_1^-$ to $\Pi_2^+$ and $\Pi_2^-$ to $\Pi_{1}^+$. Assume that the homoclinic or heteroclinic points at $\mu = 0$ have the following coordinates: $M_j^-(0, 0, y_{(j)}^-)$ and $M_{j}^+(x_{(j)1}^+, x_{(j)2}^+, 0)$, where $x_{(j)1}^+$, $x_{(j)2}^+$ and $y_{(j)}^-$ depend on parameters, and $(x_{(j)
1}^+)^2 + (x_{(j)2}^+)^2 \neq 0$, $y_{(j)}^- \neq 0$. The global maps are written as Taylor expansions near points $M_j^+$. \subsubsection{Transversal heteroclinic intersections} \begin{table} \caption{ \label{table_global_transverse} Global maps for transversal intersections} \begin{ruledtabular} \begin{tabular}{| c | p{17em} | m{3.8em} | c |} {\bf NN} &{\bf Connection} & {\bf Case \#} & {\bf Genericity conditions} \\ \hline 1. & Saddle $\to$ saddle, no orbit flip & {\bf II} & (\ref{no_orbit_flip_trans}) \\ \hline 2. & Saddle $\to$ saddle, orbit flip & {\bf II.2.c} & (\ref{mu2_orbit_flip_trans}) \\ \hline 3. & Saddle $\to$ saddle-focus \newline Saddle-focus $\to$ saddle \newline Saddle-focus $\to$ saddle-focus & {\bf II.1} & None \\ \hline 4. & Resonant alternating saddle $\to$ saddle & {\bf II.3.a} & (\ref{no_orbit_flip_trans_res_a}) \\ \hline 5. & Saddle $\to$ resonant alternating saddle & {\bf II.3.a} & (\ref{no_orbit_flip_trans_res_b}) \\ \hline 6. & Saddle $\to$ resonant Belyakov saddle & {\bf II.3.b} & (\ref{no_orbit_flip_trans_resb}) \\ \hline 7. & Resonant Belyakov saddle $\to$ saddle & {\bf II.3.b} & (\ref{no_orbit_flip_trans}) \\ \end{tabular} \end{ruledtabular} \end{table} Consider first transversal heterolinic intersections along orbit $\Gamma_{12}$ that appear in case {\bf II}. Unstable manifold $W^u(O_1)$ in $\Pi_1^-$ has equation $x_{(1)1} = x_{(1)2} = 0$, and under the action of global map $T_{12}$ it is transformed to a curve that intersect transversely stable manifold $W^s(O_2)$, which locally in $\Pi_2^+$ has equation $y_{(2)} = 0$. Thus one can write $T_{12}$ as follows: \begin{equation}\label{global_transverse} \begin{array}{rcl} x_{(2)1} - x_{(2)1}^+ &=& a_{11}^{(1)} x_{(1)1} + a_{12}^{(1)} x_{(1)2} + b_1^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2) \\ x_{(2)2} - x_{(2)2}^+ &=& a_{21}^{(1)} x_{(1)1} + a_{22}^{(1)} x_{(1)2} + b_2^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2) \\ y_{(2)} &=& y^+_{(1)} + c_{1}^{(1)} x_{(1)1} + c_{2}^{(1)} x_{(1)2} + d^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2). \\ \end{array} \end{equation} Here all coefficients depend smoothly on parameters, and $y^+_{(1)}(0) = 0$, $d^{(1)}(0) \neq 0$, as the intersection is transversal. Map $T_{12}$ is a diffeomorphism, therefore its Jacobian $DT_{12}$ at $M^-_1$ is non-degenerate, i.e. \begin{equation} \label{Jacobian_transverse} \det DT_{12} = \det \left( \begin{array}{ccc} a_{11}^{(1)} & a_{12}^{(1)} & b_1^{(1)} \\ a_{21}^{(1)} & a_{22}^{(1)} & b_2^{(1)} \\ c_{1}^{(1)} & c_{2}^{(1)} & d^{(1)} \end{array} \right) \neq 0. \end{equation} In case {\bf II.2.c} the transversal intersection has an additional degeneracy at the bifurcation moment --- an orbit flip, in coordinates the condition of a simple and non-simple heteroclinic orbit $\Gamma_{12}$ is obtained as follows. The equation of extended unstable manifold $W^{ue}_{loc}(O_1)$ is $x_{(1)2} = 0$, and the leaf $F^{ss}(M^+_{2})$ passing through point $M^+_{2}$ is locally a straight line $\{x_{(2)1} = x^+_{(2)1}$, $y_{(2)} = 0\}$ with direction vector $l^{ss} = (0, 1, 0)^\top$. 
Tangent plane $P^{ue}(M^-_1)$ has equation $x_{(1)2} = 0$, and its image under the action of global map $T_{12}$ has at $\mu = 0$ the following parametric equation: \begin{equation}\label{global_transverse_1} \begin{array}{rcl} x_{(2)1} - x_{(2)1}^+ &=& a_{11}^{(1)} x_{(1)1} + b_1^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2) \\ x_{(2)2} - x_{(2)2}^+ &=& a_{21}^{(1)} x_{(1)1} + b_2^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2) \\ y_{(2)} &=& c_{1}^{(1)} x_{(1)1} + d^{(1)} (y_{(1)} - y_{(1)}^-) + O(\|x_{(1)}\|^2 + |y_{(1)} - y_{(1)}^-|^2). \\ \end{array} \end{equation} At point $M^+_{2}$ it has two linearly independent tangent vectors $l_1 = (a_{11}^{(1)}, a_{21}^{(1)}, c_1^{(1)})^\top$ and $l_2 = (b_{1}^{(1)}, b_{2}^{(1)}, d^{(1)})^\top$. Curve $F^{ss}(M^+_{2})$ and surface $T_{12}(P^{ue}(M^-_1))$ will be tangent at point $M^+_{2}$ if vectors $l_1$, $l_2$ and $l^{ss}$ are linearly dependent, this happens when \begin{equation}\label{transverse_orbit_flip} \left. A_{11}^{(1)}(\mu)\right|_{\mu = 0} = \left. \left(a_{11}^{(1)}(\mu) - \frac{b_1^{(1)}(\mu) c_1^{(1)}(\mu)}{d^{(1)}(\mu)} \right)\right|_{\mu = 0} = 0. \end{equation} So in case {\bf II.2.c}, when the heteroclinic orbit connecting saddles $O_1$ and $O_{2}$ is non-simple, parameter $\mu_2$ is introduced to unfold the orbit flip degeneracy as \begin{equation}\label{mu2_orbit_flip_trans} \mu_2 \equiv A_{11}^{(1)}(\mu). \end{equation} When transversal heteroclinic orbit $\Gamma_{12}$ is simple, it should satisfy the non-degeneracy condition \begin{equation}\label{no_orbit_flip_trans} A_{11}^{(1)}(0) \neq 0. \end{equation} If $O_1$ is a saddle with an alternating resonance (case~{\bf II.3.a}), then due to switching of leading and non-leading directions for small $\mu$, $\Gamma_{12}$ will be simple if \begin{equation}\label{no_orbit_flip_trans_res_a} A_{11}^{(1)}(0) \neq 0, \;\; \left. A_{12}^{(1)}(\mu)\right|_{\mu = 0} = \left. \left(a_{12}^{(1)}(\mu) - \frac{b_1^{(1)}(\mu) c_2^{(1)}(\mu)}{d^{(1)}(\mu)} \right)\right|_{\mu = 0} \neq 0. \end{equation} If $O_2$ is a saddle with an alternating resonance, the genericity conditions are \begin{equation}\label{no_orbit_flip_trans_res_b} A_{11}^{(1)}(0) \neq 0, \;\; \left. A_{21}^{(1)}(\mu)\right|_{\mu = 0} = \left. \left(a_{21}^{(1)}(\mu) - \frac{b_2^{(1)}(\mu) c_1^{(1)}(\mu)}{d^{(1)}(\mu)} \right)\right|_{\mu = 0} \neq 0. \end{equation} If $O_1$ is a saddle undergoing the Belyakov transition (case~{\bf II.3.b}), then the leading stable direction at $O_1$ tends to the $x_{(1)1}$ axis as $\mu_2 \to +0$, so that condition (\ref{no_orbit_flip_trans}) guarantees the absence of orbit flips in small perturbations. If $O_{2}$ undergoes the Belyakov transition, then its non-leading direction tends to the $x_{(2)1}$ axis in the limit $\mu_2 \to +0$. In this case the heteroclinic orbit will be simple if \begin{equation}\label{no_orbit_flip_trans_resb} A_{21}^{(1)}(0) \neq 0. \end{equation} All possible cases of transverse intersections together with the references to the corresponding non-degeneracy conditions are summarized in Table~\ref{table_global_transverse}. \subsubsection{Quadratic homoclinic and heteroclinic tangencies} \begin{table} \caption{\label{table_global_quadratic} Global maps for quadratic tangencies} \begin{ruledtabular} \begin{tabular}{| c | p{17em} | m{4em} | c | } {\bf NN} &{\bf Connection} & {\bf Case \#} & {\bf Genericity conditions} \\ \hline 1. 
& Saddle $\to$ saddle, simple tangency & {\bf II} & (\ref{simple_tan_1}) \\ \hline 2. & Saddle $\to$ saddle, inclination flip & {\bf I--II.2.a} & (\ref{incl_flip}), (\ref{mu2_non-simp}) \\ \hline 3. & Saddle $\to$ saddle, orbit flip & {\bf I--II.2.b} & (\ref{orbit_flip}), (\ref{mu2_non-simp}) \\ \hline 4. & Saddle $\to$ saddle-focus & {\bf II.1} & (\ref{simple_tan_1.1}) \\ \hline 5. & Saddle-focus $\to$ saddle & {\bf II.1} & None \\ \hline 6. & Saddle-focus $\to$ saddle-focus & {\bf I--II.1} & None \\ \hline 7. & Saddle $\to$ resonant alternating saddle \newline Resonant alternating saddle $\to$ saddle \newline Resonant alternating saddle $\to$ itself& {\bf I--II.3.a} & (\ref{simple_tan_2}) \\ \hline 8. & Saddle $\to$ resonant Belyakov saddle \newline Resonant Belyakov saddle $\to$ itself & {\bf I--II.3.b} & (\ref{simple_tan_3}) \\ \hline 9. & Resonant Belyakov saddle $\to$ saddle & {\bf II.3.b} & (\ref{simple_tan_1}) \\ \end{tabular} \end{ruledtabular} \end{table} The nontransversal heteroclinic (homoclinic) orbit connects fixed points $O_2$ and $O_1$. When $\mu = 0$, global map $T_{21}$ transforms a piece of unstable manifold $W^{u}(O_2) \cap \Pi^-_2$ with equation $x = 0$ into a curve tangent at point $M^+_1$ to surface $W^s(O_1) \cap \Pi^+_1$ with equation $y = 0$. Then for all small $\mu$ global map $T_{21}$ is written as \begin{equation}\label{global_tangency_3D} \begin{array}{rcl} \bar x_{(1)1} - x_{(1)1}^+ &=& a_{11}^2 x_{(2)1} + a_{12}^2 x_{(2)2} + b_1^2 (y_{(2)} - y_{(2)}^-) + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^2) \\ \bar x_{(1)2} - x_{(1)2}^+ &=& a_{21}^2 x_{(2)1} + a_{22}^2 x_{(2)2} + b_2^2 (y_{(2)} - y_{(2)}^-) + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^2) \\ \bar y_{(1)1} &=& y^+_{(2)} + c_{1}^2 x_{(2)1} + c_{2}^2 x_{(2)2} + d^2 (y_{(2)} - y_{(2)}^-)^2 + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^3) \\ \end{array} \end{equation} The left hand side variables are denoted here as $(\bar x_{(1)}, \bar y_{(1)})$ to indicate that the image of $T_{21}$ lies in $\Pi_1^+$ and these coordinates also represent the iteration of the first return map $T_k$ from $\Pi_1^+$ to itself. All coefficients here depend smoothly on parameters, and when $\mu = 0$ we have $y^+_{(2)}(0) = 0$ and $d^2(0) \neq 0$, as the tangency is quadratic at the bifurcation moment. The Jacobian of the global map $DT_{21}$ at $M^-_2$ is non-degenerate, that is \begin{equation}\label{jacobian_tangency} \det DT_{21} = \det \left( \begin{array}{ccc} a_{11}^2 & a_{12}^2 & b_1^2 \\ a_{21}^2 & a_{22}^2 & b_2^2 \\ c_{1}^2 & c_{2}^2 & 0 \end{array} \right) \neq 0 \end{equation} When $\mu \neq 0$, value $y^+_{(2)}(\mu)$ is the splitting distance of the quadratic tangency up to $o\|\mu\|$ terms, so it is taken as the splitting parameter: \begin{equation}\label{mu_1} \mu_1 \equiv y^+_{(2)}(\mu). \end{equation} Now write in coordinates the conditions of simple and non-simple quadratic tangencies. Consider saddle fixed points $O_2$ and $O_1$ such that all their eigenvalues are real and do not satisfy resonance conditions from cases {\bf I--II.3}. The equation of extended unstable manifold $W^{ue}_{loc}(O_2)$ is $x_{(2)2} = 0$, and the leaf $F^{ss}(M^+_{1})$ passing through point $M^+_{1}$ is locally a straight line $\bar x_{(1)1} = x^+_{(1)1}$, $\bar y_{(1)} = 0$ with direction vector $l^{ss} = (0, 1, 0)^\top$. 
The image of tangent plane $P^{ue}(M^-_2)$ under the action of global map $T_{21}$ has the following parametric equation: \begin{equation}\label{global_tangency_1} \begin{array}{rcl} \bar x_{(1)1} - x_{(1)1}^+ &=& a_{11}^2 x_{(2)1} + b_1^2 (y_{(2)} - y_{(2)}^-) + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^2) \\ \bar x_{(1)2} - x_{(1)2}^+ &=& a_{21}^2 x_{(2)1} + b_2^2 (y_{(2)} - y_{(2)}^-) + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^2) \\ \bar y_{(1)} &=& c_{1}^2 x_{(2)1} + d^2 (y_{(2)} - y_{(2)}^-)^2 + O(\|x_{(2)}\|^2 + |y_{(2)} - y_{(2)}^-|^2). \\ \end{array} \end{equation} At point $M^+_{1}$ it has two linearly independent tangent vectors $l_1 = (a_{11}^2, a_{21}^2, c_1^2)^\top$ and $l_2 = (b_{1}^2, b_{2}^2, 0)^\top$. Curve $F^{ss}(M^+_{1})$ and surface $T_{21}(P^{ue}(M^-_2))$ will be tangent at point $M^+_{1}$ if vectors $l_1$, $l_2$ and $l^{ss}$ are linearly dependent, this happens when \begin{equation}\label{tangency_non-simp} \left. b_1^{2}(\mu) c_1^{2}(\mu)\right|_{\mu = 0} = 0. \end{equation} So here naturally two possibilities appear for the quadratic tangency to be non-simple. In the inclination flip cases {\bf I--II.2.a}, surfaces $T_{21}(P^{ue}(M^-_2))$ and $W^s(O_1)$ are tangent to each other (fig.~\ref{fig05}~(a)), therefore vectors $l_1$ and $l_2$ both lie in $W^s(O_1)$, thus \begin{equation}\label{incl_flip} c_1^2(0) = 0, \; b_1^2(0) \neq 0, \end{equation} and in the orbit flip cases {\bf I--II.2.b}, when surface $T_{21}(P^{ue}(M^-_2))$ is transverse to $W^s(O_1)$, (fig.~\ref{fig05}~(b)), it follows that \begin{equation}\label{orbit_flip} b_1^2(0) = 0, \; c_1^2(0) \neq 0, \end{equation} which implies that vectors $l_{ss}$ and $l_2$ are parallel. For these types of degeneracies the second unfolding parameter $\mu_2$ is introduced as \begin{equation}\label{mu2_non-simp} \mu_2 = \left\{\begin{array}{rl} c_1^2(\mu) & {\rm in \; cases \; \textbf{I--II.2.a}} \\ b_1^2(\mu) & {\rm in \; cases \; \textbf{I--II.2.b}} \end{array}\right. \end{equation} When the quadratic tangency is simple the condition of absence of non-simple tangencies at the bifurcation moment and in small perturbations should be written. If points $O_1$ and $O_2$ are saddles, and they do not satisfy resonance conditions from cases cases {\bf I--II.3}, then \begin{equation}\label{simple_tan_1} b_1^2(0) \neq 0, \; c_1^2(0) \neq 0. \end{equation} If point $O_2$ is a saddle and $O_1$ is a saddle-focus, only the inclination flip degeneracy is possible, when manifold $W^{ue}(O_2)$ is tangent to stable manifold $W^s(O_1)$. To avoid this, one needs: \begin{equation}\label{simple_tan_1.1} c_1^2(0) \neq 0. \end{equation} If one of the points $O_1$ and $O_2$ at the bifurcation moment is a saddle with the alternating resonance, cases {\bf I--II.3.a}, then either the direction of $W^{ue}(O_2)$, or the direction of $W^{ss}(O_1)$ may alternate when $\mu$ varies, thus the quadratic tangency is simple if \begin{equation}\label{simple_tan_2} b_1^2(0) \neq 0, \; b_2^2(0) \neq 0, \; c_1^2(0) \neq 0, \; c_2^2(0) \neq 0. \end{equation} If point $O_2$ satisfies the Belyakov condition, case {\bf II.3.b}, then inequalities (\ref{simple_tan_1}) should be fulfilled to avoid non-simple tangencies, and if $O_1$ satisfies the Belyakov condition (this also inlcludes the homoclinic case {\bf I.3.b}), the quadratic tangency will be simple if \begin{equation}\label{simple_tan_3} b_2^2(0) \neq 0, \; c_1^2(0) \neq 0. 
\end{equation} All possible cases of quadratic tangencies together with the references to the corresponding non-degeneracy conditions are summarized in Table~\ref{table_global_quadratic}. \begin{lm}\label{resc_lemma} {\em (The rescaling lemma)} Let $f_{\mu_1, \mu_2, \mu_3}$ be the family under consideration. Then, in the space $(\mu_1, \mu_2, \mu_3)$ there exist infinitely many regions $\Delta_{i}$ in the homoclinic case {\bf I} and $\Delta_{ij}$ in the heteroclinic case {\bf II} ac\-cu\-mu\-la\-ting to the origin as $i, j \to \infty$, such that the first return map in appropriate rescaled coordinates and parameters is asymptotically $C^{r - 1}$-close to one of the following limit maps. {\rm 1)} In the orbit flip cases~{\bf I--II.2.b}: \begin{equation} \label{HIhet} \begin{array}{l} \bar X_1 \; = \; -B X_2 + M_2 Y,\;\; \bar X_2 \; = \; Y,\;\; \bar Y = M_1 - X_1 - Y^2, \end{array} \end{equation} {\rm 2)} In all other cases: \begin{equation} \label{HIIhet} \begin{array}{l} \bar X_1 \; = \; Y,\;\; \bar X_2 \; = \; X_1,\;\; \bar Y = M_1 + M_2 X_1 + B X_2- Y^2, \end{array} \end{equation} \end{lm} Thus, the rescaled first return map in almost all cases is exactly the 3D Henon map (\ref{H3D}). In cases~{\bf I--II.2.b} in system (\ref{HIhet}) we make an additional change of coordinates $X_{1new} = X_1 - M_2 X_2$ and scale $X_1$ by $(-B)$, bringing it again to the form (\ref{HIIhet}). The relations between old and new parameters are the following. \begin{equation}\label{mu1_resc} M_1 \sim \left\{\begin{array}{rl} \mu_1 \gamma^{2i} & {\rm in \; case \; \textbf{I}} \\ \mu_1 \gamma_{(1)}^{2i} \gamma_{(2)}^{2j} & {\rm in \; case \; \textbf{II}}. \end{array}\right. \end{equation} When $i, j \to \infty$, with sufficiently small variations of parameter $\mu_1$ one gets arbitrary finite values of parameter $M_1$. \begin{equation}\label{mu3_resc} B \sim \left\{\begin{array}{rl} J^i(O) \det DT_1 & {\rm in \; case \; \textbf{I}} \\ J^i(O_1)J^j(O_2)\det DT_{12} \det DT_{21} & {\rm in \; case \; \textbf{II}}. \end{array}\right. \end{equation} Based on formulas (\ref{mu3_hom}) and (\ref{mu3_het}), by small variations of parameter $\mu_3$ parameter $B$ takes arbitrary finite values. If the original diffeomorphism $f_0$ is orientation preserving, $B$ takes only positive values, if $f_0$ is orientation reversing, then $B$ takes either only positive or only negative values, depending on the orientability of the first return map. \begin{equation}\label{mu2_resc_1} M_2 \sim \left\{\begin{array}{rl} \lambda_1^i \gamma^i \cos(i \varphi + \theta) & {\rm in \; case \; \textbf{I.1}} \\ \lambda_{(1)1}^i \gamma_{(1)}^i \lambda_{(2)1}^j \gamma_{(2)}^j \cos(i \varphi_1 + \theta_1) \cos(j \varphi_2 + \theta_2) & {\rm in \; case \; \textbf{II.1}}, \end{array}\right. \end{equation} where $\theta$, $\theta_1$ and $\theta_2$ smoothly depend on parameters and $\mu_2$ is varied in the way that the trigonometric function stays close to zero. At the same time, according to formulas (\ref{mu3_hom}) and (\ref{mu3_het}), the coefficients $$ \lambda_1^i \gamma^i \sim \lambda_2^{-i} \;\; {\rm and } \;\; \lambda_{(1)1}^i \gamma_{(1)}^i \lambda_{(2)1}^j \gamma_{(2)}^j \sim \lambda_{(1)2}^{-i} \lambda_{(2)2}^{-j} $$ are asymptotically large when $i, j \to \infty$. Thus parameter $M_2$ takes arbitrary finite values. 
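Before turning to the remaining cases, the coordinate change mentioned after Lemma~\ref{resc_lemma} can be checked symbolically. The sketch below applies $U = (X_1 - M_2 X_2)/(-B)$ to map (\ref{HIhet}); in this particular bookkeeping the result takes the form (\ref{HIIhet}) with $M_2$ replaced by $-M_2$, which is immaterial since $M_2$ is a rescaled parameter taking arbitrary finite values.
\begin{verbatim}
import sympy as sp

X1, X2, Y, U, M1, M2, B = sp.symbols('X1 X2 Y U M1 M2 B')

# Map (HIhet):  X1 -> -B*X2 + M2*Y,  X2 -> Y,  Y -> M1 - X1 - Y**2
F = {X1: -B * X2 + M2 * Y, X2: Y, Y: M1 - X1 - Y**2}

# New coordinate U = (X1 - M2*X2)/(-B), i.e. X1 = -B*U + M2*X2.
X1_of_U = -B * U + M2 * X2

U_bar  = sp.simplify((F[X1] - M2 * F[X2]) / (-B))   # image of U
X2_bar = F[X2]
Y_bar  = sp.expand(F[Y].subs(X1, X1_of_U))          # image of Y in coordinates (U, X2, Y)

print("U_bar  =", U_bar)    # -> X2
print("X2_bar =", X2_bar)   # -> Y
print("Y_bar  =", Y_bar)    # -> M1 + B*U - M2*X2 - Y**2
# Renaming (X1, X2, Y) := (X2, U, Y) gives the Henon-like form (HIIhet),
# with M2 replaced by -M2; the sign is absorbed into the rescaled parameter.
\end{verbatim}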
\begin{equation}\label{mu2_resc_2} M_2 \sim \left\{\begin{array}{rl} \mu_2 \lambda_1^i \gamma^i & {\rm in \; case \; \textbf{I.2}} \\ \mu_2 \lambda_{(1)1}^i \gamma_{(1)}^i \lambda_{(2)1}^j \gamma_{(2)}^j & {\rm in \; case \; \textbf{II.2}}. \end{array}\right. \end{equation} Again, for $i, j \to \infty$ and sufficiently small $\mu_2$, the parameter $M_2$ takes arbitrary finite values. \begin{equation}\label{mu2_resc_3} M_2 \sim \left\{\begin{array}{rl} \displaystyle \lambda_1^i \gamma^i \left( (-1 + \mu_2)^i+ A \right) & {\rm in \; case \; \textbf{I.3.a}} \\ \displaystyle \lambda_{(1)1}^i \gamma_{(1)}^i \lambda_{(2)1}^j \gamma_{(2)}^j \left( (-1 + \mu_2)^k + A \right) & {\rm in \; case \; \textbf{II.3.a}}.\\ \end{array}\right. \end{equation} Here the value $A$ smoothly depends on the parameters, and $A \neq 0$ when $\mu = 0$. The power $k$ denotes $i$ or $j$ depending on which saddle point, $O_1$ or $O_2$, satisfies the resonance condition. Here, to make $M_2$ finite, the parameter $\mu_2$ is varied near the values where $\left((-1 + \mu_2)^k + A \right)$ vanishes. To achieve this, the parity of $k$ is chosen appropriately, depending on the sign of $A$. \begin{equation}\label{mu2_resc_4} M_2 \sim \left\{\begin{array}{rl} \displaystyle \lambda_1^i \gamma^i \left(\frac{A}{\sqrt{-\mu_2}}\cos(i \varphi + \theta) \right) & {\rm in \; case \; \textbf{I.3.b}} \\ \displaystyle \lambda_{(1)1}^i \gamma_{(1)}^i \lambda_{(2)1}^j \gamma_{(2)}^j \left(\frac{A}{\sqrt{-\mu_2}}\cos(k \varphi + \theta) \right) & {\rm in \; case \; \textbf{II.3.b}}. \end{array}\right. \end{equation} This formula is valid only when $\mu_2 < 0$, which means that the saddle point having a stable eigenvalue of multiplicity two (the Belyakov resonance) becomes a saddle-focus. Here $A$ and $\theta$ smoothly depend on the parameters; moreover, $A \neq 0$ when $\mu = 0$. Exponent $k$ is $i$ or $j$ depending on which saddle point, $O_1$ or $O_2$, satisfies the resonance condition. The angle variable $\varphi$ is given by formula (\ref{phi_j}). By varying a small $\mu_2$ near one of the zeros of the trigonometric function while keeping it away from zero, one gets arbitrary finite values of the parameter $M_2$. \section*{Acknowledgements} This paper is a contribution to the project M7 (Dynamics of Geophysical Problems in Turbulent Regimes) of the Collaborative Research Centre TRR 181 ``Energy Transfer in Atmosphere and Ocean'' funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Projektnummer 274762653. The paper is also supported by the Russian Science Foundation grant 19-11-00280. I especially thank S. Gonchenko and D. Turaev for the idea of considering bifurcations of periodic points in the inverse 3D Henon map (\ref{Henon2}). In addition, I am grateful to Jean-Luc Doumont for the seminars on scientific writing, which helped me to significantly improve the manuscript. \section*{Data Availability Statement} The data that support the findings of this study (proofs, pictures) are contained in the body of the text. Any further requests should be addressed to the corresponding author. \nocite{*}
\section{Introduction} Answering questions requires an efficient exploitation of knowledge. Modern QA systems \cite{karpukhin2020dense, guu2020realm, lewis2020retrieval, izacard2020leveraging} often rely on external knowledge like Wikipedia or databases that contain explicit answers. However, for many real-world QA tasks, such external knowledge either does not exist or can hardly be utilized when facing complex and very personal non-factoid questions. FAQ or How-To documents can usually help users with simple guidance on the basics in real-world applications; yet, there is no guarantee that they cover a wide range of issues. Under these circumstances, the absence of external knowledge limits the application of QA systems in real-world scenarios. However, conversation logs collected from real conversations conducted between the questioner and the respondent contain valuable information which can be considered as latent background knowledge to answer questions. In this study, we propose a neural Retrieval-Reading system to leverage this latent background knowledge in QA tasks. Our system consists of a retriever and a generative reader. The former retrieves background knowledge relevant to the given question from dialog data, and the latter incorporates this knowledge to answer questions. \begin{figure}[H] \centering \includegraphics[width=8cm]{images/Capture.PNG} \caption{A written test dialogue example in our research. Each conversation turn has a Question ($Q_i$) and an Answer ($A_i$). Dialogue history for $Q_3$ is $\mathrm{\left\{Q_1, A_1; Q_2, A_2 \right\}}$} \label{fig:ConvQA-example} \end{figure} \noindent Humans seek answers through multiple rounds of dialogue by asking several questions for more comprehensive information. As Figure \ref{fig:ConvQA-example} shows, Conversational Question Answering (ConvQA) tasks require the respondent to consider both the current question and one or multiple specific conversation rounds from the dialog history to infer answers. Previous research \cite{choi2018quac, reddy2019coqa, ohsugi2019simple} has shown that incorporating more rounds of dialogue context improves readers' prediction quality up to a certain degree; however, considering too much contextual detail impairs their performance. The reasons are that 1.\ readers struggle to process long sequential inputs and require additional computational resources, and 2.\ not all history rounds are helpful for understanding the current question; some may even confuse the machine. Furthermore, QA data in the real world often tends to be colloquial and informal \cite{faisal2021sd}, containing semantically low-value content \cite{ravichander2021noiseqa}, such as greetings and personal or redundant information besides the true intent \cite{li2018question}. Such noisy information distracts QA models from capturing the focus of attention. Therefore, we propose a TFIDF-based text summarizer that refines contexts to efficiently provide more concise and less noisy contextual information for our Retrieval-Reading system. Our history summarizer can be easily implemented in real-world applications with minimal computing resources in an unsupervised way.\\\\ Our work is conducted in a real industry application scenario, and it makes three important contributions to the field of real-world ConvQA:\\ 1. Our research verifies that even without explicit external knowledge, exploitation of latent internal knowledge can enhance the system.\\ 2. 
We find that the history summarizer significantly improves the retriever and the reader, and thus provides good practical value in real-life ConvQA applications.\\ 3. We explore directions for history modelling. Our main finding is that the use of a generative reader shows promise, as confirmed by quantitative and qualitative evaluations. \section{Related work} \textbf{Knowledge Exploitation} Many modern extractive QA systems \cite{guu2020realm, yang2019end, karpukhin2020dense} provide answer predictions by retrieving documents containing explicit answers and then extracting answer spans from them. Nevertheless, in real-world applications, especially in customer service tasks, it is not feasible to extract answer spans from FAQ documents or conversation data when facing complex and personal non-factoid questions. On the other hand, several studies \cite{lewis2020retrieval,longpre2021mkqa,izacard2020leveraging, roller2020recipes} have investigated building Retrieval-Reading systems by augmenting the generative model via using the retrieved external knowledge like Wikipedia documents in order to generate better response predictions. Their work only investigated external knowledge exploitation and was validated solely on open-domain QA or chatbot domains; we investigate internal knowledge exploitation using a Retrieval-Reading system and validate it on a real-world multilingual conversational question answering task.\\\\ \textbf{Context Dependency Issue} Multi-turn dialogues introduce the dialog history dependency issue to QA systems: machines must process information from a broad dialogue context to understand the current question \cite{zhu2021retrieving, zaib2021conversational,gupta2020conversational}. Two lines of active investigation aim to address the context dependency issue. \textbf{1.\ Question rewriting} studies \cite{vakulenko2021question, anantha2020open, chu2020ask} investigate how to rewrite the combination of contexts from dialog history and the current question into a form that machines can better understand. However, these approaches require rewritten questions as training data for supervised learning. Their implementation also requires large-scale neural network training, which is hardly practical in real-world applications. \textbf{2. History modelling}. Works by \citet{qu2019bert} and \citet{qu2019attentive} have shown that dynamically encoding contextual information from dialog history for the extractive reader results in better answer spans on the QuAC dataset. However, these approaches are limited to answers directly extractable from the context. Here, we extend history modelling to generative models. \section{Data} We conducted our research using an industrial multilingual question answering dataset collected from real conversations conducted between bank customer service agents and customers. It consists of Dutch and English, along with a small amount of other languages such as German. Table \ref{our_dataset_stats} shows statistics of our dataset and compares them with the QuAC \citep{choi2018quac} dataset. Compared to open-domain academic datasets, our dataset is more challenging because: 1. Real-world conversations involve a high percentage of non-factoid questions and redundant dialog turns that do not contain users' true intent. 2. Our data contains a large number of customer support phone numbers, website URLs, etc. Most of this information is contained in the actual answers. 3. 
Longer QA data poses challenges when encoding and selecting contexts from the conversation history. 4. The lack of an external explicit knowledge source like Wikipedia makes the task particularly challenging. \begin{table}[h!] \centering \begin{tabular}{lll} \hline \textbf{Stats} & \textbf{Our dataset} & \textbf{QuAC} \\ \hline \# Questions & 339,478 & 90,922 \\ \# Dialogues & 131,725 & 12,567 \\ \# Avg. Tokens in Q & 24.5 & 6.5\\ \# Avg. Tokens in A & 30.0 & 12.6\\ \# Avg. Turn & 2.6 & 7.2 \\ \hline \end{tabular} \caption{\label{our_dataset_stats} Data statistics for our dataset and the QuAC dataset. \emph{\# Avg. Tokens in Q} and \emph{\# Avg. Tokens in A} denote the average number of tokens over all questions and answers, while \emph{\# Avg. Turn} denotes the average number of rounds per dialogue.} \end{table} \section{Method} \textbf{Task Definition} Our QA task can be defined as follows: given a large collection of passages $D$ and a query $U_k$ consisting of the current question $Q_k$ and its previous dialog context $H_k$, where $H_k = \mathrm{\left\{ Q_i, A_i \right\}}_{i=1}^{k-1}$ contains all QA pairs (questions and answers) before $Q_k$ in this conversation, the goal is to provide an answer $A_k$ for $U_k$ using $D$. In addition, since our data is obtained from real-world customer service, we do not have Wikipedia-like articles, as used in academic datasets, to serve as $D$. Instead, we use answers from the QA data as our $D$; hence, in our case, a single element of $D$ is a question-answer pair from the dataset.\\\\ \textbf{System Overview} Figure \ref{fig:QA_system} shows the architecture of our QA system. To utilize potential background knowledge in the task of ConvQA, we build a retrieval-reading model that uses a document retriever to provide relevant knowledge and a generative reader to predict the answer to the given question. In addition, we incorporate a reranker, a History Summarization Module (HSM), and a Dynamic History Re-weighting Module (DHRM) in the retrieval-reading architecture to enhance the system. In our experiments, we trained all modules on our downstream task, except for the HSM, which was used without training. \begin{figure}[h!] \centering \includegraphics[width=8cm]{images/QA_system.png} \caption{Architecture of the retrieval-reading system in this study} \label{fig:QA_system} \end{figure} \\\\ \noindent \textbf{Retrieval-Reading System} We implement the Retrieval-Reading model to enable the use of internal background knowledge in the ConvQA task. High-quality retrieved documents can provide more relevant background knowledge for subsequent generative readers, and ideally the answers from top-retrieved QA pairs can even be used directly as the predicted answer. To determine which model yields more satisfactory results on real-world data, we consider both sparse (BM25 in \citet{lin2021pyserini}) and dense (DPR in \citet{karpukhin2020dense}) retrievers in our retrieval task. In the retriever, we replace the BERT model with Multilingual-BERT (MBERT) in the dual-encoder model for efficient multilingual model training. The generative reader in our system makes predictions by considering the query input and background knowledge returned by the retriever; a minimal illustrative sketch of this retrieve-then-read flow is given below. 
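For illustration only, the following minimal sketch (not our implementation) shows the retrieve-then-read flow described above, using off-the-shelf MBERT and mT5 checkpoints with mean-pooled dot-product scoring; the checkpoint names, pooling choice and prompt format are illustrative assumptions, whereas the actual system fine-tunes the dual encoder and the reader on the downstream data.
\begin{verbatim}
# Illustrative retrieve-then-read sketch; off-the-shelf checkpoints are used
# instead of the fine-tuned dual encoder / reader described in this paper.
import torch
from transformers import AutoTokenizer, AutoModel, MT5ForConditionalGeneration

enc_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()
gen_tok = AutoTokenizer.from_pretrained("google/mt5-small")
reader  = MT5ForConditionalGeneration.from_pretrained("google/mt5-small").eval()

@torch.no_grad()
def embed(texts):
    # Mean-pooled MBERT embeddings, one vector per input text.
    batch = enc_tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

@torch.no_grad()
def answer(question, history, passages, top_k=5):
    # 1) Dense retrieval: dot-product scores between the query and passages.
    query = " ".join(history + [question])
    scores = (embed([query]) @ embed(passages).T).squeeze(0)
    top = scores.topk(min(top_k, len(passages))).indices.tolist()
    # 2) Generative reading: condition mT5 on the query plus retrieved passages.
    prompt = "question: " + query + " context: " + \
             " ".join(passages[i] for i in top)
    ids = gen_tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = reader.generate(ids, max_new_tokens=64)
    return gen_tok.decode(out[0], skip_special_tokens=True)
\end{verbatim}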
We implement both the mBART~\cite{liu2020multilingual} and mT5~\cite{xue2020mt5} models as our readers to validate which model yields better results in our multilingual reading task.\\\\ \textbf{History Summarization Module} We propose a History Summarization Module (HSM) to refine the conversation history context in an attempt to improve both retrieval and reading performance. We implement TFIDF-based extractive summarization with stemming because it is commonly used as a strong baseline for summarization tasks. Specifically, we keep the head $H_1$ and tail $H_{k-1}$ QA pairs from the dialog history, and only summarize the content in the middle. This is because, intuitively, the head of a conversation normally contains the user's primary intent, while the tail is most likely to be relevant to the current question since it is the most recent context.\\\\ \textbf{Dynamic History Re-weighting Module} Some history turns are redundant for the current question, while others help us understand and answer it. Thus, our intuition is that by dynamically assigning lower weights to low-value conversation turns from the dialog history in the reading process, the generative reader could handle ConvQA tasks better. Algorithm \ref{alg:Dynamic History Re-weighting Mechanism} shows how the Dynamic History Re-weighting Module (DHRM) works with a generative reader. In general, it learns importance weights for the QA pairs in the dialog history and then re-weights those historical turns as well as the tokens in the passages. Compared to the work of \citet{qu2019attentive}, our DHRM differs in that it reweights both the candidate passages and the contexts from the conversation history, and it is designed for a generative reader rather than an extractive one. \begin{algorithm}[h!] \caption{Dynamic History Re-weighting Mechanism (DHRM)} \label{alg:Dynamic History Re-weighting Mechanism} \begin{algorithmic}[1] \State {Input $U_k$: it includes the current question $Q_k$, conversation history $H_k = \mathrm{\left\{ Q_i, A_i \right\}}_{i=1}^{k-1}$, and candidate passages $P_k$.} \State {Feed $U_k$ to the encoder to get the contextualized embedding.} \State {Utilize mean pooling for the current question embedding and the embedding of each context from the conversation history. This process outputs $[ QS, HS^1, HS^2, ..., HS^{k-1}]$.} \State {Use the Bahdanau attention layer to calculate the attention score for each context from the conversation history. After that, pass the attention scores to a softmax layer to compute their attention weights ($ [\alpha^1, \alpha^2, ..., \alpha^{k-1}]$, each lying in $[0,1]$).} \State {Reweight the tokens in the contexts from the conversation history by multiplying them by their corresponding attention weights.} \State {Reweight the tokens in the candidate passages. For example, if a token in a candidate passage also appears in $H_2$, multiply that token embedding by $\alpha^2$.} \end{algorithmic} \end{algorithm} \noindent \textbf{Passage reranking Module} Inspired by \citet{qu2020open}, we verify whether adding a passage reranker can improve the ranking of the retrieved candidate passages even though the reranker and the dense retriever are based on the same language model without sharing parameters. The reranking strategy can be effective when we cannot obtain satisfactory retrieval ranking results, especially when we cannot take advantage of the large number of background documents being retrieved. 
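Purely as an illustration of the cross-encoder reranking idea (and not our code), an MBERT-based reranker can be sketched as follows; the checkpoint name is an assumption, and the scoring head shown here is untrained, whereas in practice it would be fine-tuned on relevance labels for the target data. The architecture we actually adopted is specified next.
\begin{verbatim}
# Illustrative MBERT cross-encoder reranker sketch; the scoring head is
# untrained here and would be fine-tuned on relevance labels in practice.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
ranker = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=1).eval()

@torch.no_grad()
def rerank(query, passages):
    # Score each (query, passage) pair jointly, then sort by descending score.
    batch = tok([query] * len(passages), passages,
                padding=True, truncation=True, return_tensors="pt")
    scores = ranker(**batch).logits.squeeze(-1)   # (N,) relevance logits
    order = scores.argsort(descending=True).tolist()
    return [passages[i] for i in order]
\end{verbatim}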
We adopted the neural reranker architecture from \citet{nogueira2019passage} and modified it to fit our multilingual scenario by switching to MBERT. \section{Experiments} \textbf{History Contribution Experiment} First, to
investigate whether considering complete QA pairs as context yields better results than using the questions or the answers alone, we randomly selected 4,000 questions from the test set and then used trained models to predict the ranking of their actual documents among 32,000 documents.\\\\ \textbf{Retrieval QA Experiment} We conduct the complete retrieval experiment using both BM25 and DPR as our retrievers to investigate whether relying solely on information retrieval is feasible for our QA task. Based on the result of the History Contribution Experiment, we select complete QA pairs as the conversation history context in this experiment.\\\\ \textbf{Retrieval-Reading Experiment} In this experiment, we implement the retrieval-reading system on our ConvQA task. To further validate that using background knowledge can help the retrieval-reading system to make better predictions than using the generative reader alone, we evaluate a pure generative reader model without retrieved knowledge for comparison. Furthermore, we also compare our model to a pure retriever to investigate to what extent the output of the information retrieval model is comparable with that of our retrieval-reading system.\\\\ \textbf{Evaluation metrics} For the History Contribution Experiment, we measure the average document rank, i.e., the average rank of the true answer to a question among a given large number of documents when the retriever searches for the answer to that question. For the Retrieval QA Experiment, we utilize top-n retrieval accuracy and three rouge scores \cite{lin2004rouge} as our evaluation metrics. As for the retrieval-reading experiment, in addition to the quantitative evaluation based on rouge scores, we also conducted a qualitative analysis based on a double-blind evaluation by human annotators. In this human evaluation, we compare the scores of three different types of answer candidates on the scales of relevance, correctness, and readability.\\\\ \textbf{Distributed training} To accelerate the entire training and inference process, we utilized distributed data parallelism and model parallelism to implement uniform distributed training through the Zero Redundancy Optimizer (ZeRO; \citealp{rajbhandari2020zero}) strategy and the PyTorch platform \cite{paszke2019pytorch}. \section{Results} \subsection{Results of Retrieval tasks} As Table \ref{table:historicalcontextcomposition} shows, both questions and answers are necessary for the retriever, with the latter being more significant than the former. This indicates that considering complete QA pairs helps to better perform the retrieval task on our data. Our finding is consistent with previous studies conducted on public datasets, such as \citet{reddy2019coqa, zhu2018sdnet}. \begin{table}[h!] \centering \begin{tabular}{lc} \hline Models & Avg. rank \\ \hline DPR w/Qs & 170.55 \\ DPR w/As & 158.09 \\ \textbf{DPR w/QAs} & \textbf{156.69} \\ \hline \end{tabular}\caption{Evaluation of History Contribution Experiment (Lower is better).} \label{table:historicalcontextcomposition} \end{table} \\ \noindent Table \ref{table: Results-RetrievalTask} shows the results of all our experiments on the machine retrieval task. First of all, it is clear from our results that the dense retriever shows a strong superiority over the sparse retriever, which means that a neural retriever can serve as a powerful baseline on real industry data. 
Moreover, the introduction of a textual summarizer to refine the context can effectively improve the performance of the retriever and thus offers good value in real-life applications. Specifically, our TFIDF-based approach improves the F1 rouge score by up to 3.58\% on the retrieval task; hence, in contrast to previous studies on question rewriting \cite{vakulenko2021question, anantha2020open, chu2020ask}, our approach is very easy to implement in real-world applications and does not demand the large computational resources that neural models require. Finally, the introduction of a reranker does not achieve better document ranking for the neural retriever, which is not consistent with the conclusions in \citet{qu2020open}. \begin{table}[h!] \begin{tabular}{lcc} \hline Models & \begin{tabular}[c]{@{}c@{}}Retrieval\\ Accuracy\end{tabular} & Rouge-L Scores \\ \hline BM25 & 6.00 & 10.23, 14.50, 11.58 \\ \hline DPR & 24.75 & 13.67, 20.8, 14.04 \\ \hline DPR+HSM & \textbf{26.43} & \textbf{18.76, 21.63, 17.62} \\ \hline \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}DPR+HSM \\ + Reranker \end{tabular}} & 26.12 & 18.74, 21.06, 17.32 \\ \hline \end{tabular}\caption{Evaluation of Retrieval Task on test set. Bold values denote the best results among the compared models. For Rouge-L scores, the order of presentation is Precision, Recall, F1 score; this convention is used in all tables reporting rouge scores.} \label{table: Results-RetrievalTask} \end{table} \subsection{Results of Retrieval-Reading tasks} Table \ref{table: Results-RetrievalReadingTask2} shows that when we employ only the generative reader without background knowledge, the results slightly outperform the retriever-only model on all F1 rouge scores. This means that the encoder-decoder model alone, without background knowledge, can already serve as a strong baseline. Furthermore, our results (in Table \ref{table: Results-RetrievalReadingTask1}) indicate that mT5 performs better than mBart, and our reader can generate better answer predictions when incorporating more background knowledge documents. In addition, our qualitative and quantitative experiments show that the retrieval-reading model can take advantage of the latent background knowledge to make better predictions. Compared to the retriever, our retrieval-reading model improves the precision and F1 of the rouge-L score by up to 6.24\% and 2.87\%. On the other hand, incorporating background knowledge for the reader leads to improvement in all rouge scores, particularly evident in the precision, recall, and F1 metrics for rouge-1 (by 3.57\%, 1.66\%, and 2.11\%, respectively). Figure \ref{fig: Qualitative-Retrieval-Reading} also demonstrates that the retrieval-reading model yields statistically significant improvements over using only the retriever in terms of correctness and readability criteria. \begin{table}[h!] \centering \begin{tabular}{lc} \hline Models & Rouge-L Scores \\ \hline \begin{tabular}[c]{@{}l@{}}DPR \\+ mT5 w/5 passages\end{tabular} & 23.49, 20.28, 19.34 \\ \hline \begin{tabular}[c]{@{}l@{}}DPR\\+ mT5 w/10 passages\\ \end{tabular} & \textbf{25.17}, 21.69, \textbf{20.88} \\ \hline \begin{tabular}[c]{@{}l@{}}DPR \\+ mBart w/10 passages\end{tabular}& 20.9, \textbf{22.83}, 19.48 \\ \hline \end{tabular}\caption{Model comparison result of Retrieval-Reading Task on test set. 
This table compares the performance of the mT5 and mBart models and examines whether incorporating more background documents helps.} \label{table: Results-RetrievalReadingTask1} \end{table} \begin{table*}[t] \centering \begin{tabular}{l|ccc} \hline Models & Rouge-1 score & Rouge-2 score & Rouge-L score \\ \hline \begin{tabular}[c]{@{}l@{}}DPR Top-1 answer\end{tabular} & 23.43, \textbf{27.59}, 22.09 & 7.28, 8.84, 7.21 & 18.93, \textbf{22.66}, 18.01 \\ \hline \begin{tabular}[c]{@{}l@{}}mT5 without background knowledge \end{tabular} & 25.99, 23.58, 22.28 & 8.44, 8.15, 7.55 & 22.1, 20.55, 19.19 \\ \hline \begin{tabular}[c]{@{}l@{}}DPR + mT5 w/10 passages\\ \end{tabular}& 29.56, 25.24, 24.39 & 10.69, \textbf{10.05}, 9.01 & 25.17, 21.69, 20.88 \\ \hline \begin{tabular}[c]{@{}l@{}}DPR + mT5 w/10 passages\\ + Summarization\end{tabular} & \textbf{30.08}, 25.98, \textbf{25.02} & \textbf{11.23}, 9.65, \textbf{9.57} & \textbf{25.61}, 22.35, \textbf{21.44} \\ \hline \begin{tabular}[c]{@{}l@{}}DPR + mT5 w/10 passages\\ + Summarization\\ + DHRM\end{tabular} & 29.30, 25.83, 24.68 & 10.46, 9.08, 8.94 & 24.79, 22.11, 21.01 \\ \hline \end{tabular}\caption{Quantitative Evaluation result of Retrieval-Reading Task on test set. The DPR in this experiment inherits the \textbf{DPR+HSM} model from the retrieval experiment. For all rouge scores, the order of presentation is Precision, Recall, F1 score.} \label{table: Results-RetrievalReadingTask2} \end{table*} \begin{figure*}[h!] \centering \includegraphics[width=14cm]{images/qualitative_result.png} \caption{Qualitative Evaluation result of Retrieval-Reading Task on test set. The figure shows the results of experts' ratings on relevance, correctness and readability for three candidate answers. The scores are in the range of [0, 10]; higher is better. It also illustrates the pairwise t-test results, where $p_{unadj}$ denotes the p-value without adjustment.} \label{fig: Qualitative-Retrieval-Reading} \end{figure*} \noindent Moreover, our History Summarization Module improves the machine's performance on both retrieval and reading tasks in a very efficient way by providing more concise and organized history information. It can be easily deployed on almost any QA system with a retriever or reader, and it is potentially adaptable to most multilingual environments. Finally, we observed that adding the history attention module (DHRM) to the generative model to force the reader to pay less attention to low-value history information did not meet our expectations. This is probably because our data has fewer conversation rounds in general and thus does not provide enough data to learn and update the parameters of DHRM. In addition, DHRM may also force the generative model to deliberately disregard historical information that is useful for understanding the current question. Therefore, a possible future direction for our study is to use attention visualization to observe which questions are more likely to be influenced by DHRM. Our findings are contrary to those of \cite{qu2019attentive}, suggesting that we have to consider other, more effective history modelling approaches for generative models in future work. \section{Conclusions} In this paper, we present a Retrieval-Reading model with customized modules to explore how to efficiently and effectively perform real-world conversational question answering tasks. We observed that utilizing the retriever to provide relevant internal knowledge to the reader can significantly improve the quality of answer prediction. 
We also observed that refined contexts bring less noisy information to the retriever and reader, thus allowing them to incorporate more concise and organized information. We hope this work will spur more research on knowledge exploitation and context utilization, which are key to deploying QA and dialogue systems more efficiently in the real world. In future work, enhancing document retrieval methods for multilingual, non-factoid data may be a major direction for improving the overall system. We also plan to continue optimizing efficiency in the conversational scenario by leveraging customer feedback from real-world applications. \section*{Limitations} One major limitation of our study is that we cannot make the dataset publicly available because of the sensitive nature of the data collected from real customer service conversations. In addition, to limit the use of computational resources during processing, we only integrate a small part of the retrieved internal knowledge into our system. Expanding the use of retrieved data would probably improve our results considerably. \section*{Ethics Statement} The data we use in this paper are from real conversations between customers and customer service staff. Therefore, in order to ensure the proper use of data, we strictly comply with the requirements of the GDPR in our research and possible subsequent applications. We use a variety of methods to remove any information that may contain private data during the pre-processing part of the data acquisition and application phase. No information related to real names, customers' email addresses, age or gender, etc.\ was used in this study. In addition, our data is anonymized at the time of acquisition to ensure that no other user-identifying information, such as IP addresses, is retained.
\section{Introduction} Inflation is a central part of early Universe cosmology: it has passed many observational tests and has become a predictive scenario scrutinized by current and forthcoming observations. Originally, inflation was introduced as an elegant explanation for several shortcomings of the standard Big Bang cosmology\cite{guth}-\cite{riottorev}. However, one of the most compelling aspects of inflation is that it provides a mechanism for generating scalar (density) and tensor (gravitational wave) perturbations\cite{mukhanov}-\cite{bran}. A distinct aspect of inflationary perturbations is that these are generated by quantum fluctuations of the scalar field(s) that drive inflation. After their wavelength becomes larger than the Hubble radius, these fluctuations are amplified and grow, becoming classical and decoupling from causal microphysical processes. Upon re-entering the horizon, during the matter era, these classical perturbations seed the inhomogeneities which generate structure upon gravitational collapse\cite{mukhanov}-\cite{bran}. While there is a great diversity of inflationary models, most of them predict fairly generic features: a gaussian, nearly scale invariant spectrum of (mostly) adiabatic scalar and tensor primordial fluctuations. These generic predictions of most inflationary models make the inflationary paradigm fairly robust. The gaussian, adiabatic and nearly scale invariant spectrum of primordial fluctuations provides an excellent fit to the highly precise wealth of data provided by the Wilkinson Microwave Anisotropy Probe (WMAP)\cite{komatsu,spergel,kogut,peiris}. Perhaps the most striking validation of inflation as a mechanism for generating \emph{superhorizon} (`acausal') fluctuations is the anticorrelation peak in the temperature-polarization (TE) angular power spectrum at $l \sim 150$ corresponding to superhorizon scales\cite{kogut,peiris}. The confirmation of many of the robust predictions of inflation by current high precision cosmological observations is placing inflationary cosmology on solid grounds. Current and forthcoming observations with ever increasing precision will begin to discriminate among different inflationary models, placing stringent constraints on the underlying particle physics model of inflation. There are small but important telltale discriminants amongst different models: non-gaussianity, a running spectral index for either scalar or tensor perturbations (or both), an isocurvature component for scalar perturbations, different ratios for the amplitudes between scalar and tensor modes, etc. Already WMAP reports a hint of deviations from constant scaling exponents (running spectral index)\cite{peiris}. Amongst the wide variety of inflationary scenarios, \emph{slow roll} inflation\cite{barrow,stewlyth} provides an appealing, simple and fairly generic description of inflation. The basic premise of slow roll inflation is that the potential is fairly flat during the inflationary stage. This flatness not only leads to a slowly varying Hubble parameter, hence ensuring a sufficient number of e-folds, but also provides an explanation for the gaussianity of the fluctuations as well as for the (almost) scale invariance of their power spectrum. A flat potential precludes large non-linearities in the dynamics of the \emph{fluctuations} of the scalar field, which is therefore determined by a gaussian free field theory. 
Furthermore, because the potential is flat, the scalar field is almost massless, and modes cross the horizon with an amplitude proportional to the Hubble parameter. This fact combined with a slowly varying Hubble parameter yields an almost scale invariant primordial power spectrum. Upon crossing the horizon the phases of the quantum fluctuations freeze out and a growing mode dominates the dynamics, i.e. the quantum fluctuations become classical (see ref.\cite{liddle} and references therein). Departures from scale invariance and gaussianity are determined by the departures from flatness of the potential, namely by derivatives of the potential with respect to the expectation value of the scalar field. These derivatives can be combined into a hierarchy of dimensionless slow roll parameters\cite{barrow} that allow an assessment of the \emph{corrections} to the basic predictions of gaussianity and scale invariance\cite{liddle}. This \emph{slow roll expansion} has the important bonus of allowing a reconstruction program that yields details of the inflaton potential from observables extracted from the analysis of CMB data, for example the index of the power spectra of scalar and tensor perturbations, the ratio of their amplitudes, etc.\cite{lidsey}. While more complicated scenarios can be proposed, the current WMAP data seems to validate the simpler slow roll scenario\cite{peiris}. Forthcoming precision CMB data forces a deeper examination of the inflationary predictions, which has motivated an analysis of the power spectra to higher order in the slow roll expansion. A general slow roll approximation\cite{stewart,domi}, along with WKB\cite{martin,casa} and uniform\cite{salman} approximations, has been introduced to study the power spectrum beyond slow roll. While progress is being made in obtaining a more precise assessment of the power spectra of scalar and tensor perturbations within the slow roll scenario, it must be noted that all these refinements are still within the \emph{gaussian} approximation, namely quadratic fluctuations of the scalar field and the metric (or alternatively \emph{linear} perturbations in the equations of motion for the fluctuations). Interactions of the inflaton with other fields are a necessary ingredient for a post-inflationary reheating stage where the energy stored in the inflaton is transferred to other degrees of freedom which eventually thermalize and lead to a transition from inflation to the standard Hot Big Bang, radiation dominated cosmology. Even within the simple single field slow roll scenario, higher derivatives of the potential with respect to the homogeneous expectation value of the scalar field will unavoidably lead to non-linearities. The lowest order non-linearity results from a \emph{cubic self-interaction} of the fluctuations around the homogeneous expectation value. The strength of the cubic self-interaction is determined by a particular slow-roll parameter (the `jerk' parameter)\cite{liddle,lidsey,peiris}. This slow roll parameter also enters the running of the spectral index, and the current WMAP data provides a rather loose bound on it\cite{peiris} which suggests a small but non-vanishing cubic self-interaction. Self-interactions of the fluctuations of the scalar field in turn lead to \emph{non-gaussianities} which are characterized by a non-vanishing \emph{bi-spectrum}\cite{allen}-\cite{7L}. 
The effect of interactions on the decay of the inflaton in de Sitter space-time was studied in refs.\cite{prem}, and we have implemented a dynamical renormalization group to study the decay of the quantum fluctuations into other fields as well as the \emph{self decay} of the fluctuations both for sub- and super-horizon modes in slow roll inflation\cite{ultimonuestro1,ultimonuestro2}. In \cite{ultimonuestro2} the connection between the \emph{self-decay} of inflaton fluctuations and the \emph{bi-spectrum} in single field slow roll inflation was established. In this article we study the effect of the \emph{self interactions} as well as the interaction of the inflaton with other scalar fields to assess the quantum corrections to the potential reconstruction program based on the slow-roll expansion. In particular, we study the quantum corrections to the equations of motion of the expectation value of the inflaton, and to the fluctuations as well as the quantum corrections to the Friedmann equation and to the slow roll parameters. Such corrections are important for an accurate assessment of the inflationary parameters fitted from the WMAP and future CMB data. \vspace{2mm} \textbf{Inflation as an effective field theory:} Effective field theory provides a useful and physically motivated interpretation of scalar field inflation below a cutoff scale. In this interpretation, inflation is driven by a scalar field (the inflaton) with a fairly flat potential, which justifies the slow roll approximation and is consistent with observational data. The inflaton replaces the microscopic description provided by grand unified models in the cosmological space-time. Such a description as an effective field theory relies on a separation of scales, in this case the scale of inflation, determined by the Hubble parameter, and a high energy scale $ M $. We identify $ M $ with the Planck scale $ M_{Pl} $ since, so far, this is the only known energy scale above the inflation scale. Within this effective field theory approach to inflationary cosmology the inflaton model is interpreted as the effective `low energy' field theory resulting from `integrating out' the degrees of freedom with energy scales at or even above the scale $M$ (as advocated in refs.\cite{kalohol}). In this interpretation, inflationary models are not fundamental theories but {\it effective} descriptions in terms of a condensate, the inflaton field. This type of description is very successful in a wide variety of physically relevant cases: the low energy pion dynamics emerging from full QCD and the Landau-Ginzburg effective theories of superconductivity, superfluidity and critical phenomena. In all these cases the low energy effective field theory allows a systematic study of the \emph{universal} aspects of the relevant dynamics. In this approach small dimensionless quantities are a result of the ratio between the low and high energy scales. It is a tantalizing possibility that the robustness of the predictions of inflationary theories may be a manifestation of such `universality' of the low energy effective field theory, akin to the robustness of the description of critical phenomena by the Landau-Ginzburg approach to phase transitions. Such a point of view, of inflationary dynamics as an effective field theory driven by a scalar field, has recently been studied quantitatively in ref.\cite{hector}. 
The small parameter that determines the validity of inflation as an effective \emph{quantum field theory} below the scale $M$ is $H/M $ where $H$ is the Hubble parameter during inflation and therefore the scale at which inflation occurs. The slow roll expansion is in a very well defined sense an \emph{adiabatic} approximation since the time evolution of the inflaton field is slow on the expansion scale. Thus the small dimensionless ratio $ H/M $, which is required for the validity of an effective field theory (EFT), is logically \emph{independent} of the small dimensionless combinations of derivatives of the potential which ensure the validity of the slow-roll expansion. In particular, since $ M=M_{Pl} $, the ratio $ H/M_{Pl} $ determines the amplitude of tensor perturbations\cite{liddle}. Hence the validity of the effective field theory description of inflation requires a very small amplitude of tensor perturbations (gravitational waves) which is consistent with the WMAP data\cite{peiris}. Therefore, in this article we will invoke \emph{two independent} approximations, the effective field theory (EFT) and the slow roll approximation. The former is defined in terms of an expansion in the ratio $ H/M_{Pl} $, whereas the latter corresponds to small slow roll parameters. In order to determine the validity of the (EFT) and slow roll expansions, it is important to highlight the main differences between slow roll inflation and the post-inflationary stage. During slow roll inflation the dynamics of the scalar field is slow on the time scale of the expansion and consequently the change in the amplitude of the inflaton is small and quantified by the slow roll parameters. The slow roll approximation is indeed an \emph{adiabatic approximation}. In striking contrast to this situation, during the post-inflationary stage of reheating the scalar field undergoes rapid and large amplitude oscillations that cannot be studied in a perturbative expansion\cite{reheatnuestro,ramsey}. The slow roll approximation during inflation is warranted because of the adiabatic evolution of the scalar field, and the (EFT) is warranted because of the smallness of the ratio $ H/M_{Pl} $ as determined by the WMAP data. \vspace{2mm} \textbf{The goals of this article:} In our previous work in refs.\cite{ultimonuestro1,ultimonuestro2} we found that the near scale invariance of the inflaton fluctuations results in quantum corrections that feature an \emph{infrared enhancement}. The results of these references suggest that even when quantum (loop) corrections are suppressed by powers of the effective field theory ratio $ H/M_{Pl} $, there are enhancements arising from infrared effects, a result of the near scale invariance of the power spectrum of fluctuations. In this article we focus our study on quantum aspects of inflationary dynamics \emph{during the slow roll stage} by considering quantum (loop) corrections from \emph{inflaton fluctuations} as well as \emph{light} scalar fields with a nearly scale invariant power spectrum. In particular we study the contributions from the inflaton fluctuations and light scalar fields to the \emph{effective inflaton potential} as well as self-energy corrections to the equations of motion of the inflaton fluctuations \emph{during slow roll}. We show that a {\bf strong infrared enhancement} appears in the effective field theory when nearly scale invariant fluctuations are present. This is precisely the case in slow-roll inflationary cosmology. 
We restrict our study here to the quantum corrections from fluctuations of the inflaton and light scalar fields during slow roll as a \emph{prelude} to a more complete study that should eventually include gravitational fluctuations, a task that is postponed to future work. This work is motivated by forthcoming precision measurements of the primordial power spectrum. These measurements can potentially yield precise information with which to map out the inflationary potential. In order to go from the data to the inflaton potential and inflationary parameters, the slow-roll approximation is typically invoked. Higher order derivatives of the inflationary potential will be determined from the slow roll parameters obtained from the data. The main point that we highlight in this article is that while the typical slow-roll expansion is based solely on a free field description (gaussian) of the fluctuations and a classical description of the inflaton dynamics, quantum (loop) corrections yield contributions to the slow-roll parameters. These \emph{quantum} corrections compete with higher order corrections in the slow roll approximation in the gaussian theory and therefore affect the determination of the inflationary potential. Our study in this article seeks to obtain a quantitative understanding of these corrections. We find that the loop corrections have a strong infrared behavior as a consequence of the nearly scale invariant spectrum of scalar fluctuations. The infrared behavior is manifest as poles in a small parameter $\Delta$ which is a measure of the departure from scale invariance of the power spectrum of scalar perturbations. $\Delta$ is a simple function of slow-roll parameters. \textbf{Brief summary of main results: } \begin{itemize} \item{We show that the one loop quantum correction to the equation of motion for the homogeneous expectation value of the inflaton is determined by the power spectrum of its quantum fluctuations. As discussed in \cite{ultimonuestro2}, a nearly scale invariant power spectrum of scalar fluctuations introduces a strong infrared behavior: it is naturally regulated by a small parameter $\Delta$ which measures the departure from scale invariance and is a simple function of slow-roll parameters. The infrared divergences are manifest as poles in $\Delta$\cite{ultimonuestro2}. We obtain the effective equation of motion for the expectation value of the inflaton field at one loop level in the effective field theory and to leading order in the slow roll expansion. It is given by $$ \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+ V^{'}_R(\Phi_0)\left\{1+ \frac{\Delta_{\mathcal{R}}^2}{n_s -1 +\frac{r}4} \left[\frac{r}2 \left(n_s -1 +\frac{3 \, r}{16} \right) - \frac{dn_s}{d \ln k} \right]\right\} = 0 \; . $$} \item{We obtain the one-loop corrections to the Friedmann equation to leading order in the EFT and slow roll expansions. Just as for the equation of motion for the expectation value of the inflaton, the one-loop correction to the Friedmann equation is determined by the power spectrum of the inflaton fluctuations and also features an infrared enhancement. The effective inflaton potential is obtained to leading order in the EFT and slow roll approximations; it is given by $$ V_{eff}(\Phi_0) = V_R(\Phi_0)\left[1+ \frac{r \; \Delta^2_{\mathcal{R}}}{32} \; \frac{n_s -1 + \frac38 \; r}{n_s -1 + \frac14 \; r} \right] \; . $$ where the CMB observables $n_s\,,\,r$ are implicit functions of $\Phi_0$ through the slow roll parameters. 
This effective potential during slow roll inflation is strikingly different from the effective potential in Minkowski space-time given by eq.(\ref{potefM}). Moreover, the \emph{quantum corrections} to the slow roll parameters are obtained to leading order in the EFT and slow roll expansions.} \item{We obtain the renormalized equations of motion for the superhorizon quantum fluctuations of the inflaton to leading order in the EFT and slow roll expansions. The solution of these equations features secular terms which are resummed via the dynamical renormalization group\cite{ultimonuestro1,ultimonuestro2}. This resummation reveals that superhorizon fluctuations display a novel scaling dimension which is related to the \emph{self} decay of inflaton fluctuations. We compute the quantum correction to the scaling dimension and the rate for the \emph{self} decay of superhorizon inflaton fluctuations in cosmic time, $\Gamma_{\varphi \rightarrow \varphi\varphi}$. To leading order in the EFT and slow roll expansions we find for the correction to the scaling dimension $$ -d_-=\Delta_{\mathcal{R}}^2 \frac{\sigma_V \; (\eta_V-\epsilon_V) + 6 \, \xi^2_V}{4 \, (\eta_V-\epsilon_V)^2} $$ and for the \emph{self-decay} rate $$ \Gamma_{\varphi \rightarrow \varphi\varphi} = \frac12 \; \Delta_{\mathcal{R}}^2 \; \frac{H_0 \; \xi^2_V}{(\eta_V-\epsilon_V)^2} = \frac{2 \; H_0 \; \Delta_{\mathcal{R}}^2 \; \xi^2_V}{\left(n_s -1 + \frac14 \; r\right)^{\! 2}} \; . $$ These results have been expressed in terms of CMB observables and the jerk parameter $ \xi_V $ which is related to the running of the index of scalar perturbations [see eq.(\ref{gorda})].} \item{We generalize these results by studying a model in which the inflaton interacts with another light scalar field $\sigma$. We obtain the corrections to the effective potential and scaling exponents from the self-energy loop of $\sigma$ particles. In particular we obtain the partial rate of superhorizon fluctuations of the inflaton decaying into two $\sigma$ particles, $\Gamma_{\varphi\rightarrow \sigma \sigma}$. } \end{itemize} One of the main points of this article is that quantum corrections arising from the interactions of the inflaton with itself and \emph{any} other scalar field may compete with higher order slow-roll corrections from the \emph{gaussian} approximation. The article is organized as follows. In section \ref{eftsr} we briefly discuss the systematics of the effective field theory (EFT) and slow roll expansions. In section \ref{Eqofmotin} we obtain the one loop correction to the equation of motion for the expectation value of the inflaton. In this section we introduce the renormalized effective field theory approach. Here we show that the \emph{quantum} correction to the equation of motion is determined by the power spectrum of the scalar fluctuations. The near scale invariance of the power spectrum results in strong infrared behavior of this contribution, naturally regulated by the small parameter $\Delta$ which is a simple function of slow roll parameters. We obtain this correction to leading order in the EFT and slow roll expansions. In section \ref{friedeq} we obtain the one loop correction to the Friedmann equation to leading order in the EFT and slow roll expansions and identify the correct effective potential up to this order. In this section we also obtain the \emph{quantum corrections} to the slow roll parameters to leading order in the expansions. 
In section \ref{anomdim} we obtain the equations of motion for the quantum fluctuations of the inflaton including the one-loop self energy. The solution of these equations features secular terms that are resummed by implementing the dynamical renormalization group\cite{ultimonuestro1,ultimonuestro2}. The improved solution features a quantum correction to the superhorizon scaling dimension, which is obtained to leading order in the EFT and slow roll expansions. In section \ref{other} we generalize the results to the case in which the inflaton interacts with a light scalar field. An appendix shows that a calculation akin to that presented in sections \ref{Eqofmotin} and \ref{friedeq}, but in Minkowski space-time, leads to the familiar effective potential. The Minkowski effective potential is {\bf different} from the inflationary potential obtained in sec. \ref{friedeq}. Section \ref{conclu} summarizes our results and presents our conclusions. \section{Effective field theory (EFT) and slow roll expansion.}\label{eftsr} We consider single field inflationary models described by a general self-interacting scalar field theory in a spatially flat Friedmann-Robertson-Walker cosmological space-time with scale factor $a(t)$. In comoving coordinates the action is given by \begin{equation}\label{action} S= \int d^3x \; dt \; a^3(t) \Bigg[ \frac{1}{2} \; {\dot{\phi}^2}-\frac{(\nabla \phi)^2}{2a^2}-V(\phi) \Bigg] \;. \end{equation} We consider a \emph{generic} potential $V(\phi)$; the only requirement is that its \emph{derivatives} be small in order to justify the slow roll expansion\cite{barrow,liddle,lidsey}. In order to study the corrections from the quantum fluctuations we separate the classical homogeneous expectation value of the scalar field from the quantum fluctuations by writing \begin{equation}\label{tad} \phi(\vec{x},t)= \Phi_0(t)+\varphi(\vec{x},t)\;, \end{equation} \noindent with \begin{equation}\label{exp} \Phi_0(t)=\langle \phi(\ensuremath{\vec{x}},t) \rangle~~;~~ \langle \varphi(\ensuremath{\vec{x}},t)\rangle =0 \;, \end{equation} where the expectation value is in the non-equilibrium quantum state. Expanding the Lagrangian density and integrating by parts, the action becomes \begin{equation}\label{Split} S= \int d^3x \; dt \; a^3(t)\, \mathcal{L}[\Phi_0(t),\varphi(\ensuremath{\vec{x}},t)]\;, \end{equation} \noindent with \begin{eqnarray}\label{lagra} &&\mathcal{L}[\Phi_0(t),\varphi(\ensuremath{\vec{x}},t)] = \frac{1}{2} \; {\dot{\Phi}^2_0}-V(\Phi_0)+\frac{1}{2} \; {\dot{\varphi}^2}-\frac{(\nabla \varphi)^2}{2 \, a^2} -\frac{1}{2}\; V^{''}(\Phi_0)\; \varphi^2 \cr \cr &&- \varphi\; \left[\ddot{\Phi}_0+3 \, H \,\dot{\Phi}_0+V^{'}(\Phi_0)\right] - \frac{1}{6}\; V^{'''}(\Phi_0)\; \varphi^3 - \frac{1}{24}\; V^{(IV)}(\Phi_0)\; \varphi^4+ \textmd{higher orders in}\, \varphi \; . \end{eqnarray} We will obtain the equation of motion for the homogeneous expectation value of the inflaton field by implementing the tadpole method (see \cite{ultimonuestro1,ultimonuestro2} and references therein). This method consists in requiring the condition $\langle \varphi(\ensuremath{\vec{x}},t)\rangle =0 $ consistently in a perturbative expansion by treating the \emph{linear}, cubic, quartic (and higher order) terms in the Lagrangian density eq.(\ref{lagra}) as \emph{perturbations}\cite{ultimonuestro1,ultimonuestro2}. Our approach relies on two distinct and fundamentally different expansions: i) the effective field theory (EFT) expansion and ii) the slow-roll expansion. 
\vspace{2mm} {\bf The EFT expansion:} as mentioned above, the effective field theory approach relies on the separation between the energy scale of inflation and the cutoff scale, which here is the Planck scale. The scale of inflation is determined by the Hubble parameter during the relevant stage of inflation when wavelengths of cosmological relevance cross the horizon. Therefore, the dimensionless ratio that defines the EFT approximation is the ratio $H(\Phi_0)/M_{Pl}$, where $H(\Phi_0)$ is the Hubble parameter during the relevant inflationary stage. In scalar field driven inflation the reliability of this approximation \emph{improves} upon dynamical evolution since the scale of inflation {\it diminishes} with time. Phenomenologically, the EFT approximation is an excellent one since the amplitudes of tensor and scalar perturbations $ \Delta_{\mathcal{T}} $ and $ \Delta_{\mathcal{R}} $, respectively are given by\cite{liddle,peiris} \begin{equation}\label{amps} \Delta_{\mathcal{T}} = \frac{\sqrt2}{\pi} \; \frac{H}{M_{Pl}} \quad , \quad \frac{H}{M_{Pl}} = 2 \, \pi \; \Delta_{\mathcal{R}} \; \sqrt{2 \, \epsilon_V} \;, \end{equation} \noindent where $\epsilon_V \ll 1$ is a slow-roll parameter (see below). WMAP data\cite{peiris} yields $ \Delta_{\mathcal{R}} = 0.47 \times 10^{-4} $ thus providing strong observational support to the validity of an effective field theory for inflation well below the Planck scale and to the $ \frac{H}{M_{Pl}} $ expansion. Expected CMB constraints on $ \Delta_{\mathcal{T}} $ should still improve this observational support. These perturbation amplitudes can be expressed in terms of the semiclassical and quantum gravity temperature scales \begin{equation}\label{tem} T_{sem} = \frac{\hbar \; H}{2 \, \pi \, k_B} \quad , \quad T_{Pl} = \frac{ M_{Pl} \; c^2 }{2 \, \pi \, k_B} \end{equation} where $ k_B $ stands for the Boltzmann constant and $ c $ for the speed of light. $ T_{sem} $ is the Hawking-Gibbons temperature of the initial state (Bunch-Davis vacuum) of inflation while $ T_{Pl} $ is the Planck temperature $\simeq 10^{32}$K. We have, \begin{equation} \Delta_{\mathcal{T}} = \frac{\sqrt2}{\pi} \; \frac{T_{sem}}{T_{Pl}} \quad , \quad \frac{T_{sem}}{T_{Pl}} = 2 \, \pi \; \Delta_{\mathcal{R}} \; \sqrt{2 \, \epsilon_V} \; \end{equation} Therefore, the WMAP data yield for the Hawking-Gibbons temperature of inflation: $ T_{sem} \simeq \sqrt{\epsilon_V} \; 10^{28}$K. \vspace{2mm} {\bf Slow roll approximation:} During slow roll inflation the homogeneous expectation value $\Phi_0$ is a slowly varying function of time which entails that the potential $V(\Phi_0)$ is fairly flat as a function of $\Phi_0$. The slow roll expansion introduces a hierarchy of small dimensionless quantities that are determined by the derivatives of the potential. Some\cite{barrow,liddle} of these (potential) slow roll parameters are given by\footnote{We follow the definitions of $\xi_V;\sigma_V$ in ref.\cite{peiris}. ($\xi_V;\sigma_V$ are called $\xi^2_V;\sigma^3_V$, respectively, in\cite{barrow}).} \begin{eqnarray} &&\epsilon_V = \frac{M^2_{Pl}}{2} \; \left[\frac{V^{'}(\Phi_0)}{V(\Phi_0)} \right]^2 \quad , \quad \eta_V = M^2_{Pl} \; \frac{V^{''}(\Phi_0)}{V(\Phi_0)}\, , \label{etav} \\ && \xi_V = M^4_{Pl} \; \frac{V'(\Phi_0) \; V^{'''}(\Phi_0)}{V^2(\Phi_0)} \quad , \quad \sigma_V = M^6_{Pl}\; \frac{\left[V^{'}(\Phi_0)\right]^2\,V^{(IV)}(\Phi_0)}{V^3(\Phi_0)}\;. 
\label{sig} \end{eqnarray} The slow roll approximation\cite{barrow,liddle,lidsey} corresponds to $\epsilon_V \sim \eta_V \ll 1$ with the hierarchy $\xi_V \sim \mathcal{O}(\epsilon^2_V)~;~\sigma_V \sim \mathcal{O}(\epsilon^3_V)$, namely $\epsilon_V$ and $\eta_V$ are first order in slow roll, $\xi_V$ second order in slow roll, etc. The slow roll parameters can be expressed in terms of the CMB observables and their spectral runnings as follows, \begin{eqnarray}\label{gorda} &&\epsilon_V = \frac{r}{16} \quad , \quad \eta_V =\frac12\left( n_s - 1 + \frac{3}{8} \, r \right) \quad , \quad \xi_V = \frac{r}4 \left(n_s - 1 + \frac{3}{16} \, r \right) - \frac12\frac{dn_s}{d \ln k} \cr \cr &&\sigma_V = - \frac{r}8 \left[ \left(n_s-1+\frac{r}{32}\right)^2 - \frac{9 \, r^2}{1024} \right] + \frac14\left(n_s-1-\frac{9 \, r}8 + \frac{r^2}{16} \right) \frac{dn_s}{d \ln k} \cr \cr &&- \frac14\left(1 - \frac{r}6 \right) \left(n_s-1+\frac{3 \, r}8\right) \frac{dr}{d \ln k} + \frac12 \left(1 - \frac{r}6 \right) \frac{d^2n_s}{d (\ln k)^2} \end{eqnarray} The Friedmann equation and the classical equation of motion for $\Phi_0$ are \begin{eqnarray} &&H_0^2 = \frac{1}{3 \, M^2_{Pl}}\left[\frac{1}{2}(\dot{\Phi}_0)^2+ V(\Phi_0)\right]\;, \label{hub} \\ \label{claseq} &&\ddot{\Phi_0}+3\,H_0\,\dot{\Phi}_0+V'(\Phi_0) =0 \;. \end{eqnarray} During slow roll inflation the equation of motion (\ref{claseq}) can be approximated as \begin{equation}\label{claseqsr} \dot{\Phi}_0= -\frac{V'(\Phi_0)}{3\,H_0\,} + \textmd{higher orders in slow roll}\,, \end{equation} \noindent and the Friedmann equation reads \begin{equation}\label{FRW} H_0^2 = \frac{V(\Phi_0)}{3 \, M^2_{Pl}}\left[1+\frac{\epsilon_V}{3}+ \mathcal{O}(\epsilon^2_V,\epsilon_V \; \eta_V) \right]\;. \end{equation} During slow roll, the number of e-folds before the end of inflation is given by \begin{equation}\label{Nefold} N(\Phi) = \frac{1}{M_{Pl}}\int_{\Phi}^{\Phi_e} \frac{d\Phi_0}{\sqrt{2 \, \epsilon_V(\Phi_0)}}\;, \end{equation} \noindent where $\Phi_e$ is the value of $\Phi_0$ at the end of inflation. The stage of inflation during which wavelengths of cosmological relevance today first cross the Hubble radius corresponds to $N(\Phi) \sim 50$. Therefore, during this stage the smallness of the slow roll parameter $\epsilon_V$ is justified by the large number of e-folds. In a wide variety of inflationary models the slow roll parameter $\epsilon_V$ is small for large $N(\Phi)$\cite{liddle,hector,peiris}. 
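As a concrete illustration of eq.(\ref{gorda}), the following minimal Python sketch evaluates the lowest potential slow roll parameters from a set of CMB observables. The numerical inputs ($r$, $n_s$ and the running) are illustrative placeholders and are not fits performed in this work.
\begin{verbatim}
import numpy as np

# Illustrative CMB inputs (assumed values, not fits performed in this work)
r        = 0.1      # tensor-to-scalar ratio
n_s      = 0.96     # scalar spectral index
dns_dlnk = 0.0      # running of the spectral index

# Potential slow roll parameters from eq.(gorda), to the order quoted there
eps_V = r / 16.0
eta_V = 0.5 * (n_s - 1.0 + 3.0 * r / 8.0)
xi_V  = (r / 4.0) * (n_s - 1.0 + 3.0 * r / 16.0) - 0.5 * dns_dlnk

print(f"eps_V = {eps_V:.3e}, eta_V = {eta_V:.3e}, xi_V = {xi_V:.3e}")
\end{verbatim}
For these illustrative inputs the output respects the hierarchy $|\xi_V| < |\eta_V|, \, \epsilon_V$, in line with the slow roll counting above.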
We now introduce the effective mass of the fluctuations $M^2$ and the cubic and quartic self-couplings $g,\lambda$, respectively, as \begin{eqnarray} &&M^2 \equiv M^2(\Phi_0) = V''(\Phi_0) = 3 \; H_0^2 \; \eta_V + \mathcal{O}(\epsilon_V \; \eta_V)\,, \label{flucmass}\\ && g\equiv g(\Phi_0) = \frac{1}{2} \; V^{'''}(\Phi_0)\,, \label{g}\\ && \lambda \equiv \lambda (\Phi_0) = \frac{1}{6} \; V^{(IV)}(\Phi_0)\,.\label{lambda} \end{eqnarray} In particular, the dimensionless combination of the cubic coupling and the scale of inflation is given to leading order in slow-roll by \begin{equation}\label{gcoup} \frac{g}{H_0} = \frac{3\,\xi_V}{2\sqrt{2\;\epsilon_V}} \; \frac{H_0}{M_{Pl}} = \frac32 \, \pi \; \Delta_{\mathcal{R}}\left[ \frac{r}2 \left(n_s - 1 + \frac{3 \, r}{16} \right) - \frac{dn_s}{d \ln k}\right] \; , \end{equation} \noindent and the quartic coupling $\lambda$ can be conveniently written in terms of slow-roll and effective-field theory parameters as \begin{equation}\label{lam} \lambda = \frac{\sigma_V}{4 \, \epsilon_V}\left(\frac{H_0}{M_{Pl}}\right)^2 = 2 \, \pi^2 \; \Delta_{\mathcal{R}}^2 \; \sigma_V \; . \end{equation} Moreover, $\lambda$ can be written solely in terms of CMB observables by inserting the expression eq.(\ref{gorda}) for $ \sigma_V $ into eq.(\ref{lam}). During slow roll the effective mass and couplings are not constants but \emph{very slowly varying functions of time}. In accordance with the slow roll hierarchy, both the cubic and quartic self-couplings are small, the quartic being of higher order in slow roll than the cubic, and so on, namely $$ 1\gg \frac{g}{H_0} \gg \lambda \gg \cdots $$ \noindent where the dots stand for self-couplings arising from higher derivatives of the potential as displayed in eq.(\ref{lagra}). The time dependence of these couplings is implicit through their dependence on $\Phi_0$ determined by eq.(\ref{claseqsr}). Eqs.(\ref{gcoup}) and (\ref{lam}) clearly show that $(g/H)^2$ and $\lambda$ are of the \emph{same order} in the EFT expansion, namely $\mathcal{O}(H^2_0/M^2_{Pl})$. This observation will be important in the calculation of the self-energy correction for the quantum fluctuations. In order to keep the notation simple, the calculations will be performed in terms of $g;\lambda$ [see definitions eqs.(\ref{g})-(\ref{lambda})] and we will write these effective couplings in terms of slow roll and EFT variables using eqs.(\ref{gcoup})-(\ref{lam}) at the end of the calculations. \section{Quantum corrections to the equation of motion for the inflaton.}\label{Eqofmotin} Quantum corrections to the equations of motion for the inflaton and for the fluctuations will be obtained by treating the second line in eq. (\ref{lagra}), namely the \emph{linear} and the non-linear terms in $\varphi$, in perturbation theory. The generating functional of non-equilibrium real time correlation functions requires a path integral along a complex contour in time: the two branches correspond to time evolution forward $(+)$ and backward $(-)$ in time, as befits the time evolution of a density matrix. Fields along these branches are labeled $\varphi^+$ and $\varphi^-$, respectively (see refs.\cite{ultimonuestro1,ultimonuestro2} and references therein). The tadpole conditions \begin{equation}\label{tads} \langle \varphi^\pm(\ensuremath{\vec{x}},t) \rangle =0 \; , \end{equation} \noindent both lead to the (same) equation of motion for the expectation value $\Phi_0(t)$ by considering the \emph{linear, cubic} and higher order terms in the Lagrangian density as interaction vertices. 
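Before evaluating the one-loop terms, the following sketch gives a rough numerical sense of the effective couplings defined in eqs.(\ref{gcoup}) and (\ref{lam}). The value of $\Delta_{\mathcal{R}}$ is the WMAP one quoted above; $r$, $n_s$, the running and $\sigma_V$ are illustrative assumptions rather than values derived in this work.
\begin{verbatim}
import numpy as np

# Delta_R is the WMAP amplitude quoted in the text; the remaining inputs are
# illustrative assumptions, not values derived in this work.
Delta_R  = 0.47e-4
r, n_s   = 0.1, 0.96
dns_dlnk = 0.0
sigma_V  = (r / 16.0)**3   # assumed to be of order eps_V^3, per the slow roll hierarchy

# Cubic coupling in units of H_0, eq.(gcoup), and quartic coupling, eq.(lam)
g_over_H0 = 1.5 * np.pi * Delta_R * (0.5 * r * (n_s - 1.0 + 3.0 * r / 16.0) - dns_dlnk)
lam       = 2.0 * np.pi**2 * Delta_R**2 * sigma_V

print(f"g/H0 ~ {g_over_H0:.2e},  lambda ~ {lam:.2e}")
\end{verbatim}
For these inputs the output, $|g|/H_0 \sim 10^{-7}$ and $\lambda \sim 10^{-14}$, illustrates the hierarchy $1 \gg |g|/H_0 \gg \lambda$ quoted above.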
Up to one loop order we find \begin{equation}\label{1lupeqn} \ddot{\Phi}_0(t)+3 \, H \; \dot{\Phi}_0(t)+V'(\Phi_0)+g(\Phi_0) \; \langle [\varphi^+(\ensuremath{\vec{x}},t)]^2\rangle =0 \;. \end{equation} The first three terms in eq.(\ref{1lupeqn}) are the familiar ones for the equation of motion of the inflaton. The last term is the one-loop correction to the equations of motion of purely quantum mechanical origin. Another derivation of this quantum correction can be found in\cite{reheatnuestro,ramsey}. The fact that the tadpole method, which in this case results in a one-loop correction, leads to a covariantly conserved and fully renormalized energy momentum tensor has been previously established in the most general case in refs.\cite{reheatnuestro,erice} and more recently in ref.\cite{mottola}. The coupling $g$ is defined by eq. (\ref{g}). The expectation value $\langle(\cdots)\rangle$ is computed in the free field (Gaussian) theory of the fluctuations $\varphi$ with an effective `mass term' $M^2$ given by eq.(\ref{flucmass}); the quantum state will be specified below. Furthermore, it is straightforward to see that $\langle [\varphi^+(\ensuremath{\vec{x}},t)]^2\rangle = \langle [\varphi^-(\ensuremath{\vec{x}},t)]^2\rangle=\langle [\varphi(\ensuremath{\vec{x}},t)]^2\rangle$. In terms of the spatial Fourier transform of the fluctuation field $\varphi(\ensuremath{\vec{x}},t)$, the one-loop contribution can be written as \begin{equation}\label{lupPS} \langle [\varphi(\ensuremath{\vec{x}},t)]^2\rangle = \int \frac{d^3 k}{(2\pi)^3} \; \langle |\varphi_{\ensuremath{\vec{k}}}(t)|^2 \rangle = \int_0^{\infty} \frac{dk}{k} \; \mathcal{P}_{\varphi}(k,t)\,, \end{equation} \noindent where $\varphi_{\ensuremath{\vec{k}}}(t)$ is the spatial Fourier transform of $\varphi(\ensuremath{\vec{x}},t)$ and we have introduced the power spectrum of the fluctuation \begin{equation}\label{PS} \mathcal{P}_{\varphi}(k,t) = \frac{k^3}{2 \, \pi^2} \; \langle |\varphi_{\ensuremath{\vec{k}}}(t)|^2 \rangle \,. \end{equation} The metric background is as usual, $$ ds^2= dt^2-a^2(t) \; d{\vec x}^2 = C^2(\eta) \left[ (d \eta)^2 - d{\vec x}^2 \right] \; , $$ where $ \eta $ is the conformal time and $ C(\eta) \equiv a(t(\eta)) $. In order to compute the one-loop contribution, it is convenient to work in conformal time and to conformally rescale the field \begin{equation}\label{rescale} \varphi(\ensuremath{\vec{x}},t) =\frac{\chi(\ensuremath{\vec{x}},\eta)}{C(\eta)} \quad , \end{equation} \noindent $ C(\eta) $ being the scale factor in conformal time. During slow roll inflation the scale factor is quasi de Sitter and to lowest order in slow roll it is given by \begin{equation}\label{quasiDS} C(\eta)=-\frac{1}{H_0 \; \eta} \; \frac{1}{1-\epsilon_V}= -\frac{1}{H_0 \; \eta} (1+\epsilon_V) + \mathcal{O}(\epsilon_V^2) \,. \end{equation} The spatial Fourier transform of the free field Heisenberg operator $\chi(\ensuremath{\vec{x}},\eta)$ obeys the equation \begin{equation}\label{heiseqn} \chi^{''}_{\ensuremath{\vec{k}}}(\eta)+ \left[k^2 + M^2 \; C^2(\eta)- \frac{C^{''}(\eta)}{C(\eta)} \right]\chi_{\ensuremath{\vec{k}}}(\eta)=0 \,. 
\end{equation} Using the slow roll expressions eqs.(\ref{flucmass}) and (\ref{quasiDS}), it becomes \begin{equation}\label{heiseqn2} \chi^{''}_{\ensuremath{\vec{k}}}(\eta)+ \left[k^2 -\frac{\nu^2-\frac{1}{4}}{\eta^2} \right]\chi_{\ensuremath{\vec{k}}}(\eta)=0 \end{equation} \noindent where the index $\nu$ is given by \begin{equation}\label{nu} \nu = \frac{3}{2} + \epsilon_V-\eta_V +\mathcal{O}(\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V) \,. \end{equation} The scale invariant case $ \nu = \frac{3}{2} $ corresponds to massless inflaton fluctuations in the de Sitter background. The quantity \begin{equation}\label{delta} \Delta= \frac{3}{2}-\nu = \eta_V-\epsilon_V + \mathcal{O}(\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V)\, \end{equation} measures the departure from scale invariance. In terms of the spectral index of the scalar adiabatic perturbations $ n_s $ and the ratio $ r $ of tensor to scalar perturbations, $ \Delta $ takes the form, \begin{equation} \Delta=\frac12 \left( n_s - 1 \right) + \frac{r}8 \; . \end{equation} The free Heisenberg field operators $\chi_{\ensuremath{\vec{k}}}(\eta)$ are written in terms of annihilation and creation operators that act on Fock states as \begin{equation}\label{ope} \chi_{\ensuremath{\vec{k}}}(\eta) = a_{\ensuremath{\vec{k}}} \; S_{\nu}(k,\eta)+ a^{\dagger}_{-\ensuremath{\vec{k}}} \; S^{*}_{\nu}(k,\eta) \end{equation} \noindent where the mode functions $S_{\nu}(k,\eta)$ are solutions of eq.(\ref{heiseqn2}). We choose Bunch-Davies boundary conditions on these mode functions so that \begin{equation}\label{BDS} S_{\nu}(k,\eta) = \frac{1}{2} \; \sqrt{-\pi\eta} \; e^{i\frac{\pi}{2}(\nu+\frac{1}{2})} \; H^{(1)}_\nu(-k\eta)\, , \end{equation} this defines the Bunch-Davies vacuum, $ a_{\ensuremath{\vec{k}}} \, |0\rangle_{BD} =0 $. There is no unique choice of an initial state, and a recent body of work has begun to address this issue\cite{inistate} (see ref.\cite{mottola} for a discussion and further references). A full study of the \emph{quantum loop} corrections with different initial states must first elucidate the behavior of the propagators for the fluctuations in such states. In this article we focus on the standard choice\cite{liddle,lidsey}, which allows us to include the quantum corrections in the standard results in the literature. A study of quantum loop corrections with different initial states is an important subject in itself, which we postpone to later work. The index $\nu$ in the mode functions eq.(\ref{BDS}) depends on the expectation value of the scalar field, via the slow roll variables, hence it slowly varies in time. Therefore, it is consistent to treat this time dependence of $\nu$ in an \emph{adiabatic approximation}. This is well known and standard in the slow roll expansion\cite{liddle,lidsey}. Indeed, there are corrections to the mode functions which are higher order in slow roll as discussed in detail in refs.\cite{stewart}-\cite{salman}. However, these mode functions enter the propagators in loop corrections; therefore they yield higher order contributions in slow roll, and we consistently discard them to lowest order in slow roll. With this choice and to lowest order in slow roll, the power spectrum eq.(\ref{PS}) is given by \begin{equation}\label{PSSR} \mathcal{P}_{\varphi}(k,t) = \frac{H^2}{8 \, \pi} \; (-k\eta)^3 \; |H^{(1)}_\nu(- k \eta)|^2 \,. 
\end{equation} For large momenta $|k\eta| \gg 1$ the mode functions behave just like free field modes in Minkowski space-time, namely \begin{equation} S_{\nu}(k,\eta) \buildrel{|k\eta| \gg 1}\over= \frac{1}{\sqrt{2k}} \; e^{-ik\eta} \end{equation} \noindent Therefore, the quantum correction to the equation of motion for the inflaton eqs.(\ref{1lupeqn}) and (\ref{lupPS}) determined by the momentum integral of $ \mathcal{P}_{\varphi}(k,t) $ features both quadratic and logarithmic divergences. Since the field theory inflationary dynamics is an \emph{effective field theory} valid below a comoving cutoff $\Lambda$ of the order of the Planck scale, the one loop correction (\ref{lupPS}) becomes \begin{equation}\label{PSint} \int^{{\Lambda}}_0 \frac{dk}{k} \,\mathcal{P}_{\varphi}(k,t) = \frac{H^2}{8 \, \pi} \int^{\Lambda_p}_0 \frac{dz}{z} \; z^3\, \left|H^{(1)}_\nu(z)\right|^2 \,, \end{equation} \noindent where $ \Lambda_p(\eta)$ is the ratio of the cutoff in physical coordinates to the scale of inflation, namely \begin{equation}\label{physcut} \Lambda_p(\eta) \equiv\frac{\Lambda}{H \; C(\eta)}=-\Lambda\, \eta\;. \end{equation} The integration variable $ z=-k \, \eta $ has a simple interpretation at leading order in slow roll \begin{equation}\label{zSR} z \equiv -k \, \eta = \frac{k}{H_0 \, C(\eta)}= \frac{k_p(\eta)}{H_0} \,, \end{equation} \noindent where $k_p(\eta)=k/C(\eta)$ is the wavevector in physical coordinates. If the spectrum of scalar fluctuations were strictly scale invariant, (namely for massless inflaton fluctuations in de Sitter space-time), then the index would be $\nu=3/2$ and the integrand in (\ref{PSint}) given by \begin{equation}\label{integ} z^3 \, \left|H^{(1)}_{\frac{3}{2}}(z)\right|^2 = \frac{2}{\pi}\left[1+z^2\right]\,. \end{equation} In this strictly scale invariant case, the integral of the power spectrum also features an \emph{infrared} logarithmic divergence. While the ultraviolet divergences are absorbed by the renormalization counterterms in the effective field theory, no such possibility is available for the infrared divergence. Obviously, the origin of this infrared behavior is the {\bf exact} scale invariance of superhorizon fluctuations. However, during slow roll inflation there are small corrections to scale invariance determined by the slow roll parameters, in particular the index $\nu$ is slightly different from $3/2$ and this slight departure from scale invariance introduces a natural infrared regularization. In a previous article\cite{ultimonuestro2} we have introduced an expansion in the parameter $\Delta = 3/2-\nu= \eta_V-\epsilon_V+\mathcal{O}(\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V)$ which is small during slow roll and we expect here as in ref.\cite{ultimonuestro2} that the infrared divergences featured by the quantum correction manifest as \emph{simple poles} in $\Delta$. We will now proceed to compute the quantum correction to the equation of motion for the inflaton by isolating the pole in $\Delta$ as well as the leading logarithmic divergences. To achieve this goal we write the integral \begin{equation}\label{intsplit} \int^{\Lambda_p}_0 \frac{dz}{z} \; z^3 \; |H^{(1)}_\nu(z)|^2= \int^{\mu_p}_0 \frac{dz}{z} \; z^3 \, \left|H^{(1)}_\nu(z)\right|^2 + \int^{\Lambda_p}_{\mu_p} \frac{dz}{z} \; z^3 \, \left|H^{(1)}_\nu(z)\right|^2 \, . \end{equation} \noindent $\mu_p$ acts here as infrared cutoff for the first integral. The second integral is ultraviolet and infrared finite for finite $\mu_p, \; \Lambda_p$. 
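A small numerical sketch, assuming SciPy's \texttt{hankel1} and \texttt{quad} and illustrative values of $\Delta$, $\mu_p$ and $\Lambda_p$ (none of them taken from the text), makes the infrared sensitivity of the first integral explicit: as $\Delta \to 0$ it is dominated by a $1/\Delta$ pole, while the second integral stays finite and simply grows with the ultraviolet cutoff.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import hankel1

# Illustrative values (assumptions, not taken from the text)
Delta = 0.02                  # Delta = 3/2 - nu, small during slow roll
nu    = 1.5 - Delta
mu_p, Lam_p = 0.3, 50.0       # infrared matching point and ultraviolet cutoff

# Integrand of eq.(intsplit) is (1/z) z^3 |H^(1)_nu(z)|^2 ~ z^(2 Delta - 1) as z -> 0.
# The 'alg' weight hands the integrable endpoint behavior z^(2 Delta - 1) to QUADPACK.
smooth = lambda z: z**(3.0 - 2.0 * Delta) * np.abs(hankel1(nu, z))**2

ir_piece, _ = quad(smooth, 0.0, mu_p, weight='alg', wvar=(2.0 * Delta - 1.0, 0.0))
uv_piece, _ = quad(lambda z: z**2 * np.abs(hankel1(nu, z))**2, mu_p, Lam_p, limit=200)

pole = (2.0 / np.pi) / (2.0 * Delta)   # leading small-Delta behavior of the infrared piece
print(f"IR piece = {ir_piece:.2f}   leading pole term (2/pi)/(2 Delta) = {pole:.2f}")
print(f"UV piece = {uv_piece:.2f}   (grows roughly as Lambda_p^2/pi = {Lam_p**2/np.pi:.2f})")
\end{verbatim}
The agreement between the infrared piece and the pole term improves as $\Delta\to 0$, which is the limit of interest during slow roll.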
We can then set $\nu=3/2$ in the second integral and use eq. (\ref{integ}). In the first integral we obtain the leading order contribution in the slow roll expansion, namely the pole and leading logarithm, by using the small argument limit of the Hankel functions. This yields \begin{equation} z^3 \, \left|H^{(1)}_\nu(z)\right|^2 \buildrel{z \to 0}\over=\left[ \frac{2^{\nu} \; \Gamma(\nu)}{\pi} \right]^2 \; z^{2 \, \Delta} \end{equation} and we find that the first integral in eq.(\ref{intsplit}) evaluates to \begin{equation}\label{1int} \int^{\mu_p}_0 \frac{dz}{z} \; z^3 \, \left|H^{(1)}_\nu(z)\right|^2 = \frac{2}{\pi}\left[\frac{1}{2 \, \Delta}+ \frac{\mu^2_p}{2} + \gamma - 2 + \ln(2 \; \mu_p) +\mathcal{O}(\Delta)\right]\,, \end{equation} \noindent where we have displayed the pole in $\Delta$ and the leading infrared logarithm. Combining the above result with the second integral (for which we set $\Delta=0$) we find the following final result for the quantum correction to leading order in slow roll \begin{equation}\label{QC} \frac12 \langle[\varphi(\ensuremath{\vec{x}},t)]^2\rangle = \left(\frac{H_0}{4 \, \pi}\right)^2 \left[ {\Lambda_p}^2 + \ln \Lambda_p^2 +\frac1{\Delta} + 2 \, \gamma - 4 + \mathcal{O}(\Delta) \right]\,, \end{equation} \noindent where $\gamma$ is the Euler-Mascheroni constant. While the quadratic and logarithmic \emph{ultraviolet} divergences are regularization scheme dependent, the pole in $\Delta$ arises from the infrared behavior and is independent of the regularization scheme. In particular this pole coincides with that found in the expression for $\langle\phi^2(\ensuremath{\vec{x}},t)\rangle$ in ref.\cite{fordbunch}. The \emph{ultraviolet divergences}, in whichever renormalization scheme, require that the effective field theory be defined to contain \emph{renormalization counterterms} in the bare effective Lagrangian, so that these counterterms will systematically cancel the divergences encountered in the calculation of quantum corrections in the (EFT) and slow roll approximations. \subsection{Renormalized effective field theory: renormalization counterterms} The renormalized effective field theory is obtained by writing the potential $V[\phi]$ in the Lagrangian density in (\ref{action}) in the following form \begin{equation}\label{count} V(\phi)=V_R(\phi)+\delta\,V_R(\phi,\Lambda)\;, \end{equation} \noindent where $V_R(\phi)$ is the renormalized \emph{classical} inflaton potential and $\delta\,V_R(\phi,\Lambda)$ includes the renormalization counterterms, which are determined systematically in the slow roll expansion by requiring that the insertion of the counterterms in the perturbative (slow roll) expansion cancels the ultraviolet divergences. In this manner, the equations of motion and correlation functions in this effective field theory \emph{will not depend on the cutoff scale}. The counterterm required to cancel the ultraviolet divergences in the inflaton equation of motion can be gleaned by restoring the dependence of the coupling $g$ on $\Phi_0$, namely \begin{equation}\label{equalup} \ddot{\Phi}_0(t)+3\,H\,\dot{\Phi}_0(t)+V'(\Phi_0)+ V^{'''}(\Phi_0) \left(\frac{H_0}{4 \, \pi}\right)^2 \left[\Lambda_p^2+ \ln\Lambda_p^2 +\frac{1}{\Delta}+2 \, \gamma - 4 + \mathcal{O}(\Delta)\right]=0 \,. 
\end{equation} From this equation it becomes clear that the one-loop ultraviolet divergences can be canceled by choosing \begin{equation}\label{count2} \delta\,V_R(\phi,\Lambda)= \mathcal{C}_0[\Lambda,H_0]+\mathcal{C}_2[\Lambda,H_0] \; V^{''}_R(\phi) +\textmd{higher orders in slow roll}\;, \end{equation} \noindent where the extra terms refer to higher derivatives with respect to $\phi$ which, when evaluated at $\Phi_0$, are of higher order in slow roll. The counterterm $\mathcal{C}_0[\Lambda,H_0]$ is independent of $\phi$ and will be required to cancel the ultraviolet divergences in the energy momentum tensor (see next section). The counterterm coefficient $\mathcal{C}_2[\Lambda,H_0] $ is fixed by requiring that it cancels the ultraviolet divergence in eq.(\ref{equalup}). This is achieved as follows. Introducing the renormalized form of the inflaton potential given by eq. (\ref{count2}) in the Lagrangian density and performing the shift in the fields as in eq. (\ref{tad}), the Lagrangian density eq.(\ref{lagra}) now becomes \begin{eqnarray}\label{lagraren} &&\mathcal{L}[\Phi_0(t),\varphi(\ensuremath{\vec{x}},t)] = \frac{1}{2} \; {\dot{\Phi}^2_0}-V_R(\Phi_0)-\delta V_R(\Phi_0)+\frac{1}{2} \; {\dot{\varphi}^2}-\frac{(\nabla \varphi)^2}{2 \, a^2} -\frac{1}{2} \; \left[ V^{''}_R(\Phi_0)+\mathcal{C}_2[\Lambda,H_0] \; V^{(IV)}_R(\Phi_0)+\cdots\right] \; \varphi^2 \nonumber \\ && - \varphi \left[ \ddot{\Phi}_0+3 \, H_0 \, \dot{\Phi}_0+V^{'}_R(\Phi_0)+ \mathcal{C}_2[\Lambda,H_0] \; V^{'''}_R(\Phi_0)+\cdots\right] - \frac{1}{6} \; V^{'''}_R(\Phi_0) \; \varphi^3 - \frac{1}{24} \; V^{(IV)}_R(\Phi_0) \; \varphi^4+\cdots \end{eqnarray} \noindent where the dots contain terms with higher derivatives of the potential with respect to $\Phi_0$ which are subleading in the slow roll expansion. In Minkowski space-time the counterterms are time independent because they must maintain the space-time symmetries. In a spatially flat FRW cosmology only spatial translational invariance restricts the form of the counterterms, hence time dependent counterterms are allowed. The equation of motion for $\Phi_0$ can now be obtained by implementing the tadpole method as described above, with the leading order result \begin{equation}\label{eqofmotQ} \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+V^{'}_R(\Phi_0)+ V^{'''}_R(\Phi_0)\left\{\mathcal{C}_2[\Lambda,H_0]+ \left(\frac{H_0}{4 \, \pi}\right)^2 \left[\Lambda_p^2+ \ln\Lambda_p^2 +\frac{1}{\Delta}+ 2 \, \gamma - 4 +\mathcal{O}(\Delta)\right] \right\}=0 \;. \end{equation} The counterterm $\mathcal{C}_2[\Lambda,H_0] $ is now chosen to cancel the ultraviolet cutoff dependence in the equation of motion, namely \begin{equation}\label{countfix} \mathcal{C}_2[\Lambda,H_0]= - \left(\frac{H_0}{4 \, \pi}\right)^2 \left[\Lambda_p^2+\ln\Lambda_p^2 + 2 \, \gamma - 4 \right]\;, \end{equation} \noindent leading to the final form of the renormalized inflaton equation of motion to leading order in the slow roll expansion \begin{equation}\label{fineq} \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+V^{'}_R(\Phi_0)+ \left(\frac{H_0}{4 \, \pi}\right)^2 \frac{V^{'''}_R(\Phi_0)}{\Delta}=0 \;. \end{equation} An important aspect of this equation is the following: naively, the quantum correction is of order $V^{'''}_R(\Phi_0)$, therefore of second order in slow roll, but the strong infrared divergence arising from the quasi scale invariance of inflationary fluctuations brings about a denominator which is of first order in slow roll. 
Hence the lowest order quantum correction in the slow roll expansion is actually of the same order as $ V^{'}_R(\Phi_0) $. To highlight this observation, it proves convenient to write eq.(\ref{fineq}) in terms of the EFT and slow roll parameters, \begin{equation}\label{fineqsr} \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+ V^{'}_R(\Phi_0)\left[1+\left(\frac{H_0}{2\pi \, M_{Pl}}\right)^2 \frac{\xi_V}{2\,\epsilon_V\,\Delta}\right]=0 \;. \end{equation} Since $\xi_V \sim \epsilon^2_V$ and $\Delta \sim \epsilon_V$ the leading quantum corrections are of zeroth order in slow roll. This is a consequence of the infrared enhancement resulting from the near scale invariance of the power spectrum of scalar fluctuations. The quantum correction is suppressed by an EFT factor $H^2/M^2_{Pl} \ll 1$. Restoring the dependence of $\Delta$ on $\Phi_0$ through the definitions (\ref{etav}) and (\ref{delta}) we finally find the following equation of motion for the inflaton field in the \emph{effective field theory} up to leading order in slow roll \begin{equation}\label{eqnslor}\ddot{\Phi}_0(t)+3\,H\,\dot{\Phi}_0(t)+ V^{'}_R(\Phi_0)+\frac{1}{24 \, (\pi \; M_{Pl})^2} \; \frac{V_R^3(\Phi_0)\, V^{'''}_R(\Phi_0)}{2 \, V_R(\Phi_0)V^{''}_R(\Phi_0)-V^{'\,2}_R(\Phi_0) }=0 \;. \end{equation} We can also write the inflaton field equation in terms of CMB observables $$ \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+ V^{'}_R(\Phi_0)\left\{1+ \frac{\Delta_{\mathcal{R}}^2}{n_s -1 +\frac{r}4} \left[\frac{r}2 \left(n_s -1 +\frac{3 \, r}{16} \right) - \frac{dn_s}{d \ln k} \right]\right\} = 0 \; . $$ \section{Quantum corrections to the Friedmann equation: the effective potential}\label{friedeq} The zero temperature effective potential in Minkowski space-time is often used to describe the scalar field dynamics during inflation \cite{liddle,riottorev}. The focus of this Section is to derive the effective potential for slow-roll inflation. As we show below, the resulting effective potential [see eq.(\ref{Veff})] is remarkably different from the Minkowski one [see Appendix A]. Since the fluctuations of the inflaton field are quantized, the interpretation of the `scalar condensate' $\Phi_0$ is that of the expectation value of the full quantum field $\phi$ in a homogeneous coherent quantum state. Consistent with this, the Friedmann equation must necessarily be understood in terms of the \emph{expectation} value of the field energy momentum tensor, namely \begin{equation}\label{FRW2} H^2= \frac{1}{3 \, M^2_{Pl}}\left\langle \frac{1}{2} \; \dot{\phi}^2+\frac{1}{2} \; \left(\frac{\nabla \phi}{a(t)}\right)^2+V[\phi] \right\rangle \;. \end{equation} Separating the homogeneous condensate from the fluctuations as in eq.(\ref{tad}), with the condition eq.(\ref{exp}) that the expectation value of the quantum fluctuation vanishes, the Friedmann equation becomes \begin{equation}\label{FRexp} H^2= \frac{1}{3 \, M^2_{Pl}}\left[ \frac{1}{2} \; {\dot{\Phi_0}}^2 + V_R(\Phi_0)+\delta V_R(\Phi_0)\right]+ \frac{1}{3 \, M^2_{Pl}}\left\langle \frac{1}{2} \; \dot{\varphi}^2+\frac{1}{2} \; \left(\frac{\nabla \varphi}{a(t)}\right)^2+\frac{1}{2} \; V^{''}(\Phi_0)\; \varphi^2 +\cdots\right\rangle \end{equation} The dots inside the angular brackets correspond to terms with higher derivatives of the potential which are smaller in the slow roll expansion. The quadratic term $\langle \varphi^2 \rangle$ has been calculated above to leading order in slow roll and is given by eq.(\ref{QC}). 
Calculating the expectation value in eq.(\ref{FRexp}) in free field theory corresponds to obtaining the corrections to the energy momentum tensor by integrating the fluctuations \emph{up to one loop} (see the appendix for a similar calculation in Minkowski space-time). We obtain these contributions up to this order, consistently with our study of the equations of motion up to one-loop order. The first two terms of the expectation value in eq.(\ref{FRexp}) \emph{do not} feature infrared divergences for $\nu=3/2$ because of the two extra powers of the loop momentum in the integral. These contributions are given by \begin{eqnarray}\label{kinterm} &&\left\langle \frac{1}{2} \; \dot{\varphi}^2 \right\rangle = \frac{H^4_0}{16 \,\pi} \; \int^{\Lambda_p}_{0} \frac{dz}{z} \; z^2 \; \left|\frac{d}{dz}\left[z^{\frac{3}{2}} H^{(1)}_{\nu}(z) \right] \right|^2 = \frac{H^4_0 \; \Lambda^4_p}{32 \,\pi^2}+ \mathcal{O}( H^4_0 \Delta)\;,\\ \label{grad} &&\left\langle\frac{1}{2}\left(\frac{\nabla \varphi}{a(t)}\right)^2 \right \rangle = \frac{H^4_0}{16 \,\pi} \; \int^{\Lambda_p}_{0} \frac{dz}{z} \; z^{5} \; \left| H^{(1)}_{\nu}(z) \right|^2 = \frac{H^4_0 \; \Lambda^4_p}{32 \, \pi^2}+ \frac{H^4 \; \Lambda^2_p}{16 \, \pi^2} +\mathcal{O}( H^4_0 \Delta)\;. \end{eqnarray} Clearly, the choice of the counterterm $\mathcal{C}_2[\Lambda,H_0]$ given by eq. (\ref{countfix}) cancels the ultraviolet divergences arising from the third term in the angular brackets in eq. (\ref{FRexp}). The counterterm $\mathcal{C}_0[\Lambda,H_0]$ in the renormalized potential eq.(\ref{count}) is chosen to cancel the ultraviolet divergences from the kinetic and gradient terms, namely \begin{equation}\label{count0fix} \mathcal{C}_0[\Lambda,H_0]= - \frac{\Lambda^2_p}{(4\, \pi)^2}\left(\Lambda^2_p + H^2_0\right) \; . \end{equation} The fully renormalized Friedmann equation up to one loop and to lowest order in the slow roll expansion is therefore \begin{equation}\label{FRren} H^2 = \frac{1}{3 \, M^2_{Pl}}\left[ \frac{1}{2} \; {\dot{\Phi_0}}^2 + V_R(\Phi_0) + \left(\frac{H_0}{4 \, \pi}\right)^2\frac{V^{''}_R(\Phi_0)}{\Delta} +\textmd{higher orders in slow roll}\right] \equiv H^2_0 + \delta H^2 \;, \end{equation} \noindent where $H_0$ is the Hubble parameter in absence of quantum fluctuations: $$ H^2_0 = \frac{V_R(\Phi_0)}{3 \, M^2_{Pl}} \left[1+\frac{\epsilon_V}{3}+ \mathcal{O}(\epsilon^2_V,\epsilon_V \; \eta_V) \right] \; . $$ Using the lowest order slow roll relation eq. (\ref{flucmass}), the last term in eq.(\ref{FRren}) can be written as follows \begin{equation}\label{delH} \frac{\delta H^2}{H^2_0} = \left(\frac{H_0}{4 \, \pi\,M_{Pl}}\right)^2 \frac{\eta_V}{\Delta}\;. \end{equation} This equation defines the back-reaction correction to the scale factor arising from the quantum fluctuations of the inflaton. Hence, while the ratio $\eta_V/\Delta$ is of order zero in slow roll, the one loop correction to the Friedmann equation is of the order $H^2_0/M^2_{Pl} \ll 1$ consistently with the EFT expansion. 
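To gauge the size of this back-reaction, the following sketch evaluates eq.(\ref{delH}) using eq.(\ref{amps}) to trade $H_0/M_{Pl}$ for $\Delta_{\mathcal{R}}$ and $\epsilon_V$. The amplitude is the WMAP value quoted above, while $r$ and $n_s$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Delta_R is the WMAP value quoted in the text; r and n_s are illustrative assumptions.
Delta_R  = 0.47e-4
r, n_s   = 0.1, 0.96

eps_V = r / 16.0
eta_V = 0.5 * (n_s - 1.0 + 3.0 * r / 8.0)
Delta = eta_V - eps_V                      # departure from scale invariance

# (H_0/M_Pl)^2 from eq.(amps): H/M_Pl = 2 pi Delta_R sqrt(2 eps_V)
H_over_MPl_sq = (2.0 * np.pi * Delta_R)**2 * 2.0 * eps_V

# Fractional one-loop correction to the Friedmann equation, eq.(delH)
dH2_over_H2 = H_over_MPl_sq / (4.0 * np.pi)**2 * eta_V / Delta
print(f"delta H^2 / H_0^2 ~ {dH2_over_H2:.2e}")
\end{verbatim}
For these inputs the fractional correction is of order $10^{-12}$: the slow roll ratio $\eta_V/\Delta$ is of zeroth order in slow roll, while the overall size is controlled by the EFT factor $H^2_0/M^2_{Pl}$.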
The Friedmann equation suggests the identification of the effective potential \begin{eqnarray}\label{Veff} &&V_{eff}(\Phi_0) = V_R(\Phi_0)+ \left(\frac{H_0}{4 \, \pi}\right)^2\frac{V^{''}_R(\Phi_0)}{\Delta} +\textmd{higher orders in slow roll} = \\ \cr &&= V_R(\Phi_0)\left[1+ \left(\frac{H_0}{4 \, \pi\,M_{Pl}}\right)^2\frac{\eta_V}{ \eta_V-\epsilon_V} + \textmd{higher orders in slow roll} \right]= \cr \cr && = V_R(\Phi_0)\left[1+\frac{\Delta^2_{\mathcal{T}}}{32} \; \frac{n_s -1 + \frac38 \; r}{n_s -1 + \frac14 \; r} +\textmd{higher orders in slow roll}\right] \; . \label{Vefsr} \end{eqnarray} In eq.(\ref{Vefsr}) we express the quantum corrections to the inflaton potential in terms of observables: $ \Delta_{\mathcal{T}}, \; n_s $ and $ r $. Using the WMAP data for $ \Delta^2_{\mathcal{R}} = \Delta^2_{\mathcal{T}}/r = 0.218 \times 10^{-8} $, eq.(\ref{Vefsr}) becomes $$ V_{eff}(\Phi_0) = V_R(\Phi_0)\left[1+ 0.682 \times 10^{-10} \; r \; \frac{n_s -1 + \frac38 \; r}{n_s -1 + \frac14 \; r} +\textmd{higher orders in slow roll}\right] \; . $$ We see that the equation of motion for the inflaton eq.(\ref{fineq}) takes the natural form $$ \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+ \frac{\partial V_{eff}}{\partial\Phi_0}(\Phi_0) = 0 \;. $$ where the derivative of $ V_{eff} $ with respect to $ \Phi_0 $ is taken at fixed Hubble and slow roll parameters. That is, $ H_0 $ and $ \Delta $ must be considered in the present context as gravitational degrees of freedom and not as matter (inflaton) degrees of freedom. Eqs.(\ref{delH}) and (\ref{Veff}) make manifest the nature of the effective field theory expansion in terms of the ratio $\left(H_0/M_{Pl}\right)^2$. The coefficients of the powers of this ratio are obtained in the slow roll expansion. To leading order, these coefficients are of $\mathcal{O}(\epsilon^0_V)$ because of the infrared enhancement manifest in the poles in $\Delta$, a consequence of the nearly scale invariant power spectrum of scalar perturbations. The equivalence between the (EFT) ratio $ \left(H_0/M_{Pl}\right)^2$ and the ratio $ \left(T_{sem}/T_{Pl} \right)^2$ according to eq.(\ref{tem}), results in that the leading quantum corrections to the effective potential eq.(\ref{Vefsr}) are $\propto T^2_{sem}$. This is akin to the {\it finite temperature} contribution to the one loop effective potential in Minkowski space time. A noteworthy result is the rather different form of the effective potential eq.(\ref{Veff}) as compared to the result in Minkowski space time at zero temperature. In the appendix we show explicitly that the same definition of the effective potential as the expectation value of $T_{00}$ in Minkowski space-time at zero temperature yields the familiar one loop result, which is strikingly different from eq.(\ref{Veff}) during slow roll inflation. \subsection{Quantum corrections to slow roll parameters} We now have all of the elements in place to obtain the \emph{quantum corrections} to the slow roll parameters. Defining the \emph{effective} slow roll parameters as \begin{equation}\label{effSR} \epsilon_{eff} = \frac{M^2_{Pl}}{2}\; \left[\frac{V^{'}_{eff}(\Phi_0)}{V_{eff}(\Phi_0)}\right]^{\! 
2}~~;~~\eta_{eff} = M^2_{Pl} \; \frac{V^{''}_{eff}(\Phi_0)}{V_{eff}(\Phi_0)} \; , \end{equation} \noindent eq.(\ref{Veff}) yields to leading order in EFT and slow roll expansions: \begin{eqnarray} &&\epsilon_{eff} = \epsilon_V \left[ 1+ \left(\frac{H_0}{4 \, \pi \, M_{Pl}} \right)^2 \, \frac{4 \, \eta_V\left(\eta_V-\epsilon_V\right)- \xi_V}{\left(\eta_V-\epsilon_V\right)^2} \right] \label{epseff}\\ &&\eta_{eff} = \eta_V \left\{1+ \left(\frac{H_0}{4 \, \pi \, M_{Pl}} \right)^2 \, \frac1{\left(\eta_V-\epsilon_V\right)^2 } \left[\frac{\xi_V^2}{\eta_V(\eta_V-\epsilon_V)} - \frac{\sigma_V}{2\,\eta_V} - \frac{\xi_V}{\eta_V} \left(\eta_V+ 6 \, \epsilon_V \right) + 4 \, \eta_V \, \left( \eta_V + 4 \, \epsilon_V\right) - 20 \, \epsilon_V^2 \right]\right\} \; . \nonumber \end{eqnarray} A remarkable feature of the quantum corrections to the slow roll parameters is that they are of \emph{zeroth} order in slow roll. Again, this is a consequence of the infrared enhancement of the loop diagrams for a nearly scale invariant spectrum of fluctuations. Higher order slow roll parameters can be obtained similarly. \section{Quantum corrections to superhorizon modes: a new scaling dimension}\label{anomdim} In order to study the equations of motion for the fluctuations including self-energy corrections, it is convenient to first pass to conformal time and to implement the conformal rescaling of the field as in eq. (\ref{rescale}). The action is now given by \begin{equation} S= \int d^3x \; d\eta \; \mathcal{L}_c[\chi,\Phi_0] \;, \end{equation} \noindent where the Lagrangian density $\mathcal{L}_c[\chi,\Phi_0]$ is given by \begin{eqnarray}\label{lagconf} &&\mathcal{L}_c[\chi,\Phi_0] = C^4(\eta)\left[ \frac{1}{2} \; {\dot\Phi}^2_0-V_R(\Phi_0)-\delta V_R(\Phi_0) \right] + \frac{{\chi'}^2}{2}-\frac{(\nabla \chi)^2}{2}-\frac{1}{2} \; {\mathcal{M}^2(\eta)} \; \chi^2 - \nonumber \\ && - C^3(\eta) \; \chi \; \left[ \ddot{\Phi}_0+3 \, H \, \dot{\Phi}_0+V^{'}_R(\Phi_0)+\mathcal{C}_2[\Lambda,H] \; V^{'''}_R(\Phi_0)+\cdots\right] - \frac{1}{2} \; \delta \mathcal{M}^2(\eta) \; \chi^2 -\frac{g}{3} \; C(\eta) \; \chi^3-\frac{\lambda}{4} \; \chi^4 +\cdots \end{eqnarray} \noindent where the dots on $\Phi_0$ stand for derivatives with respect to cosmic time, the primes on $\chi$ stand for derivatives with respect to conformal time, and we have used the definitions given in eqs.(\ref{g}) and (\ref{lambda}). The effective (time dependent) mass term and counterterm are given by \begin{eqnarray} &&\mathcal{M}^2(\eta) = V^{''}_R(\Phi_0) \; C^2(\eta)- \frac{C^{''}(\eta)}{C(\eta)} = -\frac{1}{\eta^2}(\nu^2-\frac{1}{4}) \; ,\label{renmass} \\ &&\delta \mathcal{M}^2 = \left(6 \, \lambda \; \mathcal{C}_2[\Lambda,H_0] + g^2 \; \mathcal{C}_3[\Lambda,H_0]\right) C^2(\eta) \label{contramasa}\;. \end{eqnarray} $ \mathcal{C}_2[\Lambda,H_0] $ is given by eq.(\ref{countfix}) and $ \mathcal{C}_3[\Lambda,H_0] $ will cancel a logarithmic divergence proportional to $ g^2 $ in the one loop self energy. The effective equation of motion for the fluctuations is obtained in the linear response approach by introducing an external source that induces an expectation value for the field $\chi(\ensuremath{\vec{x}},\eta)$, switching off the source this expectation value will evolve in time through the effective equation of motion of the fluctuations. This program is implemented by following the steps explained in our previous articles (see ref.\cite{ultimonuestro1,ultimonuestro2} and references therein). 
We first write the spatial Fourier transform of the fields $\chi^{\pm}(\ensuremath{\vec{x}},\eta)$ defined on the forward $(+)$ and backward $(-)$ branches in the generating functional, namely \begin{equation}\label{split} \chi^{\pm}_{\ensuremath{\vec{k}}}(\eta) = X_{\ensuremath{\vec{k}}}(\eta) + \sigma^{\pm}_{\ensuremath{\vec{k}}}(\eta)~~;~~ \langle \chi^{\pm}_{\ensuremath{\vec{k}}}(\eta)\rangle =X_{\ensuremath{\vec{k}}}(\eta) ~~;~~\langle \sigma^{\pm}_{\ensuremath{\vec{k}}}(\eta)\rangle =0 \;, \end{equation} \noindent where $X_{\ensuremath{\vec{k}}}(\eta)$ is the spatial Fourier transform of the expectation value of the fluctuation field $\chi$ induced by the external source term. Implementing the tadpole condition $\langle \sigma^{\pm}\rangle=0$ up to one loop, we obtain the effective equation of motion\cite{ultimonuestro1,ultimonuestro2} \begin{equation} \label{eqnofmotfluc} X''_{\ensuremath{\vec{k}}}(\eta)+\left[k^2-\frac{\nu^2-\frac{1}{4} }{\eta^2}\right] X_{\ensuremath{\vec{k}}}(\eta)+ \int_{\eta_0}^{\eta} \Sigma(k,\eta,\eta') \; X_{\ensuremath{\vec{k}}}(\eta') \; d\eta' = 0 \;. \end{equation} The one-loop contributions to the self-energy kernel $\Sigma(k,\eta,\eta')$ are displayed in fig. \ref{selfenergy}. The sum of the diagrams $(c)$ and $(d)$ cancels by virtue of the equation of motion for the inflaton eq. (\ref{1lupeqn}) since the loop in diagram $(d)$ is given by $\langle [\chi(\ensuremath{\vec{x}},\eta)]^2 \rangle= C^2(\eta) \; \langle [\varphi(\ensuremath{\vec{x}},\eta)]^2 \rangle$. Only diagrams $(a)$ and $(b)$ give a non-vanishing contribution to the self energy kernel, which is found to be given by \begin{figure}[h!] \begin{center} \includegraphics[height=2in, width=5in,keepaspectratio=true]{selfenergy.eps} \caption{One-loop self energy contributions. $\lambda=\frac{1}{6} \; V^{(IV)}(\Phi_0)~,~g=\frac{1}{2} \; V^{'''}(\Phi_0)$. The square box in diagram (c) represents $ - C^3(\eta)\chi\left[ \ddot{\Phi}_0+3 \, H \; \dot{\Phi}_0+V^{'}_R(\Phi_0)+\mathcal{C}_2[\Lambda,H] \; V^{'''}_R(\Phi_0)\right]$. The sum of diagrams (c) and (d) is proportional to the equation of motion (\ref{1lupeqn}) and vanishes. Only diagrams (a) and (b) contribute to the self-energy. } \label{selfenergy} \end{center} \end{figure} \begin{equation}\label{Sigma} \Sigma(k,\eta,\eta')= \frac{1}{H^2_0 \; \eta^2}\left[6\,\lambda\, \left(\mathcal{C}_2[\Lambda,H_0]+\frac{1}{2} \; \langle [\varphi(\ensuremath{\vec{x}},\eta)]^2\rangle\right)+ g^2 \; \mathcal{C}_3[\Lambda,H_0]\right]\,\delta(\eta-\eta')+ \frac{2\,g^2}{H^2_0 \, \eta \, \eta'} \; \mathcal{K}_{\nu}(k;\eta,\eta') \;. \end{equation} The term proportional to $\delta(\eta-\eta')$ is the sum of the contribution from the counterterm $\delta \mathcal{M}^2$ and the one loop contribution $(a)$ in fig. \ref{selfenergy}, while the kernel $\mathcal{K}_{\nu}(k;\eta,\eta')$ is determined by the one-loop contribution $(b)$ in fig. \ref{selfenergy} and is given by\cite{ultimonuestro2} \begin{equation}\label{kernel} \mathcal{K}_{\nu}(k;\eta,\eta') = 2 \int \frac{d^3q}{(2\pi)^3} \; \mathrm{Im}\left[ S_{\nu}(q,\eta) S^*_{\nu}(q,\eta')S_{\nu}(|\ensuremath{\vec{q}}-\ensuremath{\vec{k}}|,\eta)S^*_{\nu}(|\ensuremath{\vec{q}}-\ensuremath{\vec{k}}|,\eta')\right]\;, \end{equation} \noindent where the mode functions $S_{\nu}(k,\eta)$ are given by eq. (\ref{BDS}). 
The equation of motion (\ref{eqnofmotfluc}) is solved in a perturbative loop expansion as follows \begin{equation}\label{pertsol} X_{\ensuremath{\vec{k}}}(\eta)=X_{0,\ensuremath{\vec{k}}}(\eta)+X_{1,\ensuremath{\vec{k}}}(\eta)+\textmd{higher loop corrections} \;, \end{equation} \noindent where $X_{0,\ensuremath{\vec{k}}}(\eta)$ is the free field solution, $X_{1,\ensuremath{\vec{k}}}(\eta)$ is the one-loop correction, etc. This expansion to one loop order leads to the following hierarchy of coupled equations \begin{eqnarray} &&X''_{0,\ensuremath{\vec{k}}}(\eta)+\left[k^2-\frac{1}{\eta^2}\Big(\nu^2- \frac{1}{4} \Big) \right]X_{0,\ensuremath{\vec{k}}}(\eta) = 0 \; ,\label{X0}\\ &&X''_{1,\ensuremath{\vec{k}}}(\eta)+\left[k^2-\frac{1}{\eta^2}\Big(\nu^2- \frac{1}{4} \Big) \right] X_{1,\ensuremath{\vec{k}}}(\eta) = \mathcal{R}_1(k,\eta) \label{X1}\;, \end{eqnarray} \noindent where the inhomogeneity $ \mathcal{R}_1(k,\eta) $ due to the interaction of the inflaton is given by \begin{equation}\label{source} \mathcal{R}_1(k,\eta) = - \frac{1}{H^2_0 \; \eta^2}\left\{\frac{3\,\lambda}{2 \, \Delta}\, \left(\frac{H_0}{2 \, \pi} \right)^2 + g^2 \; \mathcal{C}_3[\Lambda,H_0]\right\} \; X_{0,\ensuremath{\vec{k}}}(\eta) -\frac{ 2\, g^2 }{H^2_0 \; \eta} \int_{\eta_0}^{\eta} \frac{d\eta'}{\eta'} \; \mathcal{K}_{\nu}(k;\eta,\eta') \; X_{0,\ensuremath{\vec{k}}}(\eta') \; , \end{equation} and we have used eqs.(\ref{QC}) and (\ref{countfix}). The solution of the inhomogeneous eq.(\ref{X1}) is given by \begin{equation} \label{forsol} X_{1,\ensuremath{\vec{k}}}(\eta)= \int_{\eta_0}^{0} d\eta' \; \mathcal{G}_\nu(k;\eta,\eta') \; \mathcal{R}_{1}(k,\eta') \; . \end{equation} \noindent $\mathcal{G}_\nu(k;\eta,\eta')$ is the retarded Green's function obeying \begin{equation}\label{GF} \left[\frac{d^2}{d\eta^2}+k^2-\frac{1}{\eta^2}\Big(\nu^2- \frac{1}{4} \Big) \right]\mathcal{G}_\nu(k;\eta,\eta')= \delta(\eta-\eta')~~,~~ \mathcal{G}_\nu(k;\eta,\eta')=0 \quad \mathrm{for}\quad \eta'>\eta \; . \end{equation} We are primarily interested in obtaining the superhorizon behavior of the fluctuations ($ |k \, \eta| \ll 1 $) to obtain the scaling behavior in this limit, therefore we set $k=0$. The kernel $\mathcal{K}_{\nu}(0;\eta,\eta')$ was found in reference\cite{ultimonuestro2}. To leading order in the slow roll expansion and leading logarithmic order it is given by \begin{equation}\label{Knu} \mathcal{K}_{\nu}(0;\eta,\eta')=\mathcal{K}_{\frac{1}{2}}(0;\eta,\eta')+ \frac{1}{6 \, \pi^2} \left[ \left(\frac{1}{2 \, \Delta}+\frac23 \right) \left(\frac{\eta'}{\eta^{2}}-\frac{\eta}{\eta^{'2}}\right) - \frac{\eta'}{\eta^2} \; \ln\frac{\eta'}{\eta}+ \left(\frac{\eta}{\eta^{'2}}- \frac{\eta'}{\eta^2} \right) \, \ln\left(1-\frac{\eta}{\eta'} \right) + \frac{1}{\eta
'}-\frac{1}{\eta} \right] \;,\end{equation} \noindent where \begin{equation}\label{K12} \mathcal{K}_{\frac{1}{2}}(0;\eta,\eta')=-\frac{1}{8 \, \pi^2} \; \mathcal{P}\left( \frac{1}{\eta-\eta'}\right) = -\frac{1}{8 \, \pi^2} \; \frac{\eta-\eta'}{\left(\eta-\eta'\right)^2 + (\epsilon \; \eta')^2}\; . \end{equation} \noindent $\epsilon \rightarrow 0$ furnishes a regularization of the principal part prescription in eq. (\ref{K12}) \cite{ultimonuestro1,ultimonuestro2}. The unperturbed solution and the retarded Green's function for $k=0$ are the following \begin{eqnarray} X_{0,\vec{0}}(\eta) &=& A\;\eta^{\beta_+}+B\;\eta^{\beta_-} \quad ; \quad \beta_{\pm} \equiv \frac{1}{2}\pm \nu. \label{X00} \\ \mathcal{G}_\nu(0,\eta,\eta')&=& \frac{1}{2\nu}\left[\eta^{\beta_+}\;{\eta'}^{\beta_-}-\eta^{\beta_-}\; {\eta'}^{\beta_+} \right]\Theta(\eta-\eta') \;.\label{GF0} \end{eqnarray} where $A$ and $B$ are arbitrary constants. Then, the last term in eq.(\ref{source}) is \begin{equation}\label{integ2} \frac{ 2\, g^2 }{H^2\;\eta}\int_{\eta_0}^{\eta} \frac{d\eta'}{\eta'} \; \mathcal{K}_{\nu}(0;\eta,\eta') \; X_{0,0}(\eta')= A \; \eta^{\beta_+} \; \frac{\alpha_+}{\eta^2}+B \; \eta^{\beta_-} \; \frac{\alpha_-}{\eta^2}+F[\eta,\eta_0] \;, \end{equation} \noindent where $F[\eta,\eta_0]$ refers to the contribution of the lower integration limit and does not produce secular terms in $X_{\vec{0}}(\eta)$. The coefficients $\alpha_{\pm}$ are given by\cite{ultimonuestro2} \begin{eqnarray} \alpha_{\pm} & = & \frac{g^2}{(2 \, \pi \, H_0)^2}\left[ \ln \epsilon + \gamma + \psi(\frac12\mp\nu)\right] + \frac{g^2}{3(\pi \, H_0)^2}\left[ \frac{1}{\frac94-\nu^2}\left(\frac{3}{2 \,\Delta} + 2 +3 \,\gamma\right) + \right. \nonumber \\ & & \left. +\frac{1}{\nu^2-\frac14}+\frac{1}{(\nu\pm\frac32)^2} +\frac{\psi(\frac52 \mp \nu)}{\frac{3}{2}\mp\nu} +\frac{\psi(-\frac12 \mp \nu)}{\frac{3}{2}\pm\nu}\right] \;,\label{alfap} \end{eqnarray} Combining the above results with the first term in eq.(\ref{source}), requiring that the counterterm $\mathcal{C}_3[\Lambda,H_0]$ cancels the ultraviolet divergence $\ln \epsilon $ and keeping leading terms of order $1/\Delta^2$ in eqs.(\ref{alfap}), we find \begin{equation}\label{source0} \mathcal{R}_1(0,\eta)= A \; \frac{\eta^{\beta_+}}{\eta^2}\; (2 \, \nu \, d^+)\,+B \; \frac{\eta^{\beta_-}}{\eta^2} \; (2 \, \nu \, d^-)\,+F[\eta,\eta_0]\;, \end{equation} \noindent where the coefficients $d^{\pm}$ of entirely quantum origin (one-loop) are given by \begin{eqnarray} d^+ & = & - \frac{1}{2 \, \nu}\left[\frac{3 \, \lambda}{8 \, \pi^2\,\Delta}+ \frac{1}{6 \, \pi^2 } \left(\frac{g}{ H_0 \, \Delta}\right)^2 \right]\label{DP} \\ d^- & = & - \frac{1}{2 \, \nu}\left[\frac{3 \, \lambda}{8 \, \pi^2\,\Delta}+ \frac{1}{2 \, \pi^2 } \left(\frac{g}{ H_0 \, \Delta}\right)^2\right]\;. \label{DM} \end{eqnarray} Carrying out the integral in eq. (\ref{forsol}) with the retarded Green's function given by eq. (\ref{GF0}) we find the following result for the first order correction, \begin{equation} X_{1,0}= A\;\eta^{\beta_+} \; d^+ \ln\left(\frac{\eta}{\eta_0} \right)-B\;\eta^{\beta_-} \; d^- \ln\left(\frac{\eta}{\eta_0}\right)+\textmd{non-secular terms}\;, \end{equation} \noindent where the non-secular terms do not grow in the limit $\eta \rightarrow 0$. 
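The role of these secular logarithms can be visualized with a small toy sketch (not part of the calculation): for a small one-loop coefficient $d$, the perturbative factor $1+d\,\ln(\eta/\eta_0)$ grows without bound as $\eta \to 0^-$, whereas the power law $(\eta/\eta_0)^{d}$ that results from the resummation performed next remains well behaved. The value of $d$ is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

d    = 0.05                       # assumed small one-loop coefficient (illustrative)
eta0 = -1.0                       # initial conformal time, arbitrary units
etas = -np.logspace(0, -12, 7)    # conformal times approaching 0^-

secular  = 1.0 + d * np.log(etas / eta0)   # perturbative factor, grows without bound
resummed = (etas / eta0)**d                # resummed (power law) factor, stays bounded

for eta, s, rs in zip(etas, secular, resummed):
    print(f"eta = {eta:9.1e}   1 + d ln(eta/eta0) = {s:7.3f}   (eta/eta0)^d = {rs:6.3f}")
\end{verbatim}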
Up to first order in the loop expansion and leading order in $\Delta$, the solution for superhorizon modes is given by \begin{equation} X_0(\eta)= A\;\eta^{\beta_+}\left[1+d^+ \ln\left(\frac{\eta}{\eta_0} \right)\right]+B\;\eta^{\beta_-}\left[1- d^- \ln\left(\frac{\eta}{\eta_0}\right)\right]+\textmd{non-secular terms} \;.\end{equation} The resummation of the logarithmic secular terms is performed by implementing the dynamical renormalization group resummation introduced in refs.\cite{ultimonuestro1,ultimonuestro2}, leading to the following result \begin{equation}\label{DRGsol} X_0(\eta)=A_{\overline{\eta}} \; \left(\frac{\eta}{\overline{\eta}}\right)^{\beta_++d^+} + B_{\overline{\eta}} \; \left(\frac{\eta}{\overline{\eta}}\right)^{\beta_--d^-} =\left(\frac{\eta}{\overline{\eta}}\right)^{\Gamma} \left[A_{\overline{\eta}} \; \left(\frac{\eta}{\overline{\eta}}\right)^{\beta_++\gamma}+ B_{\overline{\eta}} \; \left(\frac{\eta}{\overline{\eta}}\right)^{\beta_--\gamma}\right]\;, \end{equation} \noindent where $\overline{\eta}$ is a renormalization scale; the amplitudes $A_{\overline{\eta}}, \; B_{\overline{\eta}} $ are given at this renormalization scale and obey a renormalization group equation, so that the full solution $X_0(\eta)$ is independent of the renormalization scale, as it must be. The exponents are given by \begin{eqnarray} \gamma & = & \frac{1}{2} \left(d^++d^- \right)= -\frac{1}{2 \, \nu}\left[ \frac{3 \, \lambda}{8 \, \pi^2\,\Delta}+ \frac{1}{3 \, \pi^2 } \left(\frac{g}{ H_0 \, \Delta}\right)^2 \right]\;, \label{gamma}\\ \Gamma & = & \frac{1}{2} \left(d^+-d^- \right) = \frac{1}{12 \, \pi^2\,\nu } \left(\frac{g}{ H_0 \, \Delta}\right)^2 \;. \label{Gamma} \end{eqnarray} The exponent $\Gamma$ coincides to leading order in slow roll with the result obtained in ref.\cite{ultimonuestro2}. Since $ \eta = -e^{-H_0\,t}/H_0$, in comoving time the amplitude of superhorizon fluctuations decays exponentially with the decay rate \begin{equation}\label{decrat} \Gamma_{\varphi \rightarrow \varphi \varphi} = H_0 \, \Gamma = \frac{H_0}{12 \, \pi^2\,\nu} \left(\frac{g}{ H_0 \, \Delta}\right)^2 \; , \end{equation} \noindent where the subscript ${\varphi \rightarrow \varphi \varphi}$ emphasizes that this is the rate of \emph{self decay} of inflaton fluctuations, a novel phenomenon which is a consequence of the inflationary expansion\cite{ultimonuestro1,ultimonuestro2}. In ref.\cite{ultimonuestro2} no quartic coupling was considered and the exponent $\gamma$ (for $\lambda=0$) was absorbed into a \emph{finite} redefinition (renormalization) of the mass of the inflaton, or alternatively of $\nu$, since the main goal of ref.\cite{ultimonuestro2} was to obtain the decay rate $\Gamma$. While such a redefinition of $\nu$ does not affect the decay rate $\Gamma$, in order to understand the novel scaling dimensions $ d^{\pm} $ it must be kept separate because it originates from infrared effects and not from the ultraviolet. The ultraviolet renormalization which is accounted for by the counterterm $ \delta \mathcal{M}^2 $ [eq.(\ref{contramasa})], subtracts the cutoff dependent contributions which are \emph{independent} of the wavevector $k$ (for $k \ll \Lambda$) whereas the contribution $\gamma$ [eq.(\ref{gamma})] arises from the infrared and not from the ultraviolet, as its dependence on $\Delta$ makes manifest. 
Therefore, the exponent $\gamma$ is a genuine infrared correction to the scaling of the mode functions which cannot be absorbed by the ultraviolet renormalization and emerges unambiguously in the scaling regime ($k \; |\eta| \ll 1 $), i.e. physical wavelengths much larger than the Hubble radius. These results are in agreement with ref.\cite{ultimonuestro2} for the decay rate $\Gamma$. In addition, in the limit $\eta \rightarrow 0^-$ the growing mode features a \emph{novel} scaling dimension $-d^-$, namely \begin{equation} X_0(\eta) \buildrel{\eta \to 0}\over= B_{\overline{\eta}} \; \left(\frac{\eta}{\overline{\eta}}\right)^{\frac{1}{2}-\nu-d^-} \;. \end{equation} This correction to scaling is related to the decay rate $ \Gamma $ of superhorizon fluctuations, eq.(\ref{Gamma}). From eqs.(\ref{gcoup}), (\ref{lam}) and (\ref{DM}), to leading order in the slow roll and EFT expansions, $ d^- $ and the comoving time decay rate $ \Gamma_{\varphi \rightarrow \varphi\varphi}$ of superhorizon inflaton fluctuations are given by \begin{eqnarray} -d^- = && \left( \frac{H_0}{4\pi\,M_{Pl}} \right)^{\! \! 2} \frac{\sigma_V \; (\eta_V-\epsilon_V)+6 \, \xi^2_V}{2 \, \epsilon_V\,(\eta_V-\epsilon_V)^2}= \Delta_{\mathcal{R}}^2 \; \frac{ \sigma_V \; (\eta_V-\epsilon_V) + 6 \, \xi^2_V}{4 \, (\eta_V-\epsilon_V)^2} \;, \label{diman}\\ \Gamma_{\varphi \rightarrow \varphi\varphi} = && \left( \frac{H_0}{4 \, \pi \; M_{Pl}} \right)^{\! \! 2} \frac{H_0 \; \xi^2_V}{\epsilon_V\,(\eta_V-\epsilon_V)^2} = \frac12 \; \Delta_{\mathcal{R}}^2 \; \frac{H_0 \; \xi^2_V}{(\eta_V-\epsilon_V)^2} \label{gamslow} \end{eqnarray} \noindent where the slow-roll parameters are given by eqs. (\ref{etav})-(\ref{sig}). Whereas the exponent $\nu=\frac32-\Delta= \frac32+\epsilon_V -\eta_V $ is determined by eq.(\ref{X0}) for the free mode functions, the novel scaling exponent $-d^-$ is determined by the quantum corrections arising from the interactions. Again, eq.(\ref{diman}) highlights an important aspect of the effective field theory approach. The slow roll factor in this expression is formally of \emph{first order} in slow roll, namely of the \emph{same} order in slow roll as the departure from scale invariance of the \emph{free field} mode functions. This is a consequence of the {\em infrared enhancement} of the self-energy for $\nu \sim 3/2$, manifest as $ \Delta^{-2}= (\eta_V-\epsilon_V)^{-2} $. However, the novel dimension is perturbatively small precisely because of the effective field theory factor $H^2_0/M^2_{Pl}$. There are two important aspects of the above results that must be compared to our previous studies\cite{ultimonuestro1,ultimonuestro2}: \begin{itemize} \item{ In contrast to the studies in refs.\cite{ultimonuestro1,ultimonuestro2}, we have included both the cubic and the \emph{quartic} interaction vertices. The quartic coupling is of the same order in (EFT) as the square of the cubic coupling but higher order in slow roll [see eqs. (\ref{gcoup})-(\ref{lam})]. However, up to one loop, the diagram with two cubic couplings has two propagators, while the diagram with one quartic coupling has only one propagator. The difference in the number of propagators in these diagrams makes the contribution from the one loop with two cubic couplings of the same order in (EFT) \emph{and slow roll} as the one loop with one quartic coupling. 
This is an important difference from our previous studies, revealed by the systematic treatment of \emph{both} the (EFT) and slow-roll approximations carried out here.} \item{In our previous studies\cite{ultimonuestro1,ultimonuestro2} the term $\gamma = (d^+ + d^-)/2$ in the (DRG) improved solution (\ref{DRGsol}) was absorbed into a finite mass renormalization simply because those studies focused on the \emph{decay rate}. Here we recognize that the contribution from $\gamma$ is \emph{not} a mass renormalization but instead enters the anomalous dimensions of the growing and the decaying modes. This is an important new result, embodied in the final expression (\ref{DRGsol}), which displays \emph{both} the growing and decaying modes. } \end{itemize} \section{Inflaton coupling to other light scalars.}\label{other} So far our analysis has only considered the self-interaction of the inflaton. In this section we generalize the previous results to a model which describes the inflaton field coupled to another scalar field $ \sigma(x) $ with both trilinear and quartic interactions. The new action is obtained from that in eq.(\ref{Split}) by adding the following terms \begin{equation}\label{2fields} S_{\sigma}= \int d^3x \; dt \; a^3(t) \Bigg\{ \frac{1}{2} \; {\dot{\sigma}^2}-\frac{(\nabla \sigma)^2}{2a^2}-\frac{1}{2} \; m^2 \; \sigma^2 - g_{\sigma} \; \phi \; \sigma^2 - \frac{\lambda_{\sigma}}{2} \; \phi^2 \; \sigma^2 \Bigg\} \end{equation} We assume an initially vanishing expectation value for the field $\sigma$ and for its time derivative. Thus, $ \langle\sigma\rangle $ vanishes for all times and inflation is still driven by one scalar field. Upon performing the shift of the inflaton field as in eq.(\ref{tad}), the effective mass and trilinear coupling are given by \begin{eqnarray} m^2_{\sigma}(\Phi_0) & = & m^2 + 2 \, g_{\sigma} \; \Phi_0+\lambda_{\sigma} \; \Phi^2_0 \label{msigma}\\ \widetilde{g}_{\sigma}(\Phi_0) & = & g_{\sigma} + \lambda_{\sigma} \; \Phi_0 \label{gsigma}. \end{eqnarray} In what follows we will assume that the $\sigma$ field is \emph{light} in the sense that \begin{equation}\label{lightsig} \frac{m^2_{\sigma}(\Phi_0)}{H^2_0} \ll 1 \; . \end{equation} All the steps in sections III-V now generalize to this case. An extra term appears in the equation of motion of the inflaton, eq.(\ref{1lupeqn}), at one-loop level, \begin{equation}\label{fisig} \ddot{\Phi}_0(t)+3 \, H_0 \; \dot{\Phi}_0(t)+V'(\Phi_0)+g(\Phi_0) \; \langle [\varphi^+(\ensuremath{\vec{x}},t)]^2\rangle + \widetilde{g}_{\sigma}(\Phi_0) \;\langle [\sigma^+(\ensuremath{\vec{x}},t)]^2\rangle =0 \; . \end{equation} We conformally rescale the $ \sigma $ field as $$ \sigma(\ensuremath{\vec{x}},t) =\frac{\rho(\ensuremath{\vec{x}},\eta)}{C(\eta)} \; . $$ The spatial Fourier transform of the free field Heisenberg operator $\rho(\ensuremath{\vec{x}},\eta)$ obeys the equation \begin{equation}\label{ecro} \rho^{''}_{\ensuremath{\vec{k}}}(\eta)+ \left[k^2 + m^2_{\sigma}(\Phi_0) \; C^2(\eta)- \frac{C^{''}(\eta)}{C(\eta)} \right]\rho_{\ensuremath{\vec{k}}}(\eta)=0 \,. 
\end{equation} Using the slow roll expressions eq.(\ref{quasiDS}), it becomes \begin{equation}\label{ecro2} \rho^{''}_{\ensuremath{\vec{k}}}(\eta)+ \left[k^2 -\frac{\ensuremath{\bar{\nu}}^2-\frac{1}{4}}{\eta^2} \right]\rho_{\ensuremath{\vec{k}}}(\eta)=0 \end{equation} \noindent where the index $\ensuremath{\bar{\nu}}$ is given by \begin{equation}\label{nub} \ensuremath{\bar{\nu}} = \frac{3}{2} + \epsilon_V - \frac{m^2_{\sigma}(\Phi_0)}{3 \, H^2_0} +\mathcal{O}\left[\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V, \frac{m^4_{\sigma}(\Phi_0)}{ \, H^4}\right] \,. \end{equation} The parameter $ {\bar\Delta} $ which controls the infrared behavior of the $\rho$ fluctuations is given by \begin{equation}\label{delba} {\bar\Delta}\equiv \frac32 - \ensuremath{\bar{\nu}} = \frac{m^2_{\sigma}(\Phi_0)}{3 \, H^2_0}-\epsilon_V +\mathcal{O}\left[\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V, \frac{m^4_{\sigma}(\Phi_0)}{ \, H^4_0}\right] \; . \end{equation} Notice that the CMB anisotropy observations indicate that the slow roll parameters are $ \lesssim 10^{-2} $ consequently $\Delta$ is also $ \lesssim 10^{-2} $. The validity of the slow roll approximation and the condition that the scalar field $\sigma$ be light (\ref{lightsig}) guarantee that $ {\bar\Delta} \ll 1 $. The renormalized effective equation of motion (\ref{1lupeqn}) becomes \begin{equation}\label{fineqsig} \ddot{\Phi}_0(t)+3\,H_0\,\dot{\Phi}_0(t)+V^{'}_R(\Phi_0)+ \left(\frac{H_0}{4 \, \pi}\right)^2 \left[\frac{V^{'''}_R(\Phi_0)}{\Delta}+ \frac{2 \, \widetilde{g}_{\sigma}(\Phi_0)}{{\bar\Delta}}\right]=0 \;. \end{equation} There are now extra diagrams that contribute to the inflaton self-energy $ \Sigma(k,\eta,\eta') $ [eq.(\ref{Sigma})] with the field $ \sigma $ in the internal lines. These yield further corrections of order $ \widetilde{g}_{\sigma}^2 $ and $\lambda_{\sigma}$ to the inflaton mode functions $ X_{\ensuremath{\vec{k}}}(\eta) $ through eq.(\ref{eqnofmotfluc}). The calculation of these new contributions to $ X_{\ensuremath{\vec{k}}}(\eta) $ is straightforward since the diagram 1 (b) with the field $ \sigma $ in the internal loop yields the kernel $\mathcal{K}_{\ensuremath{\bar{\nu}}}(k;\eta,\eta') $ which follows from $ \mathcal{K}_{\nu}(k;\eta,\eta') $ [eqs.(\ref{kernel} ) and (\ref{Knu})] by simply replacing $ \nu $ by $ \ensuremath{\bar{\nu}} $ and accounting properly for the different combinatorial factors in the Feynman diagrams. We obtain the following contributions to the coefficients $d^{\pm}$ from the self-energy correction that features one loop of the $\sigma$ field \begin{eqnarray} d^+_{\sigma} & = & - \frac{1}{2 \, \nu \, {\bar\Delta}}\left\{\frac{ \lambda_{\sigma}}{8 \, \pi^2}+ \frac{1}{6 \, \pi^2 \, \Delta} \left[\frac{\widetilde{g}_{\sigma}(\Phi_0)}{ H_0}\right]^2 \right\}\cr \cr d^-_{\sigma} & = & - \frac{1}{2 \, \nu \, {\bar\Delta}} \left\{\frac{ \lambda_{\sigma}}{8 \, \pi^2}+ \frac{1}{2 \, \pi^2 \, \Delta} \left[\frac{\widetilde{g}_{\sigma}(\Phi_0)}{H_0}\right]^2\right\}\;. \label{DMS} \end{eqnarray} to leading order for small $ {\bar\Delta} \ll 1 $ and $ \Delta \, \ll 1 $. 
The contribution from the light $\sigma$ field to the inflaton self-energy leads to a further correction to the scaling dimension, given by $-d^-_{\sigma}$, and to the decay given by \begin{equation}\label{Gamsig} \Gamma^{\sigma}= \frac{1}{2} \left(d^+_{\sigma}-d^-_{\sigma}\right)= \frac{1}{12 \, \pi^2 \; \nu\Delta \; {\bar\Delta}} \left[\frac{\widetilde{g}_{\sigma}(\Phi_0)}{ H_0}\right]^2 \end{equation} In comoving time the partial rate of superhorizon inflaton fluctuations to decay in a pair of $\sigma$ scalars is given by \begin{equation}\label{sigwidth} \Gamma_{\varphi\rightarrow \sigma\sigma} = \frac{H_0}{12 \, \pi^2 \; \nu \; \Delta \; {\bar\Delta}} \left[\frac{\widetilde{g}_{\sigma}(\Phi_0)}{H_0}\right]^2 \end{equation} $ \Gamma_{\varphi\rightarrow \sigma\sigma} $ has a similar structure to the inflaton self-coupling decay $ \Gamma_{\varphi \rightarrow \varphi \varphi} $ [eq.(\ref{decrat})]. The \emph{total} rate of superhorizon fluctuations is given by \begin{equation}\label{Gamtot} \Gamma_{tot} = \Gamma_{\varphi \rightarrow \varphi\varphi}+\Gamma_{\varphi\rightarrow \sigma\sigma} = \frac{1}{12 \, \pi^2 \; \nu \; \Delta \; H_0} \left[\frac{\widetilde{g}_{\sigma}^2(\Phi_0)}{{\bar\Delta}} + \frac{g^2(\Phi_0)}{\Delta}\right] \; , \end{equation} \noindent where $\Gamma_{\varphi \rightarrow \varphi\varphi}$ is given by eq. (\ref{gamslow}). Depending on the values of the effective couplings and mass $\widetilde{g}_{\sigma}(\Phi_0)$, $\lambda_{\sigma}$, $m^2_{\sigma}(\Phi_0)$ the value of $d^{\pm}_{\sigma}$ can exceed that of $ d^{\pm} $. The value of the couplings of the inflaton to another scalar field must necessarily be small so that there is no large isocurvature perturbation contribution. \section{Conclusions and further questions}\label{conclu} Motivated by the current and forthcoming precision CMB data, we study the quantum corrections to the inflationary dynamics arising from inflaton \emph{self-interactions}. Our approach distinctly treats inflationary dynamics in terms of scalar fields as a renormalized \emph{effective field theory} (EFT) valid when the scale of inflation is much smaller than the cutoff scale (presumably the Planck scale). We focus on single field slow-roll inflation as a viable model that provides a generic setting for the robust predictions of inflation: almost scale invariant spectrum of gaussian adiabatic perturbations, small ratio of tensor to scalar amplitudes, etc, which is compatible with the WMAP data. The calculations of inflationary parameters within the slow roll expansion are typically based on purely gaussian quantum fluctuations. However, the WMAP data yields a hint of a cubic interaction in the form of the small but non-vanishing `jerk' parameter $ \xi_V$. Furthermore, self-interactions of fluctuations will necessarily be present if the inflaton potential is non-linear as would be the most natural effective description. In this article we focused in obtaining the lowest order quantum corrections to the following relevant inflationary ingredients: \begin{itemize} \item{ The equation of motion for the homogeneous expectation value of the inflaton field.} \item{The Friedmann equation. The corrections to the Friedmann equation correspond to the quantum contributions in the \emph{effective potential} since these are interpreted as the expectation value of $T_{00}$ in an homogeneous coherent state. The quantum corrections to the effective potential yield \emph{quantum corrections} to the slow roll parameters. 
These are computed to leading order in the EFT and slow-roll expansions. } \item{The equations of motion for the quantum fluctuations of the inflaton around its classical value.} \end{itemize} The effective field theory approach requires that the ratio of the scale of inflation to the cutoff scale is small, namely $H/M_{Pl}\ll 1$. Slow roll inflation relies on an \emph{adiabatic} evolution of the scalar field and implies a hierarchy of small dimensionless parameters which are related to derivatives of the scalar potential with respect to the field\cite{barrow,liddle}. We combined both approaches to obtain the quantum corrections in the effective field theory to leading order in the (EFT) and slow roll expansions. Both the one-loop correction to the equation of motion of the inflaton and to the Friedmann equation are determined by the power spectrum of scalar fluctuations. A nearly scale invariant spectrum entails a strong infrared behavior of the loop integral, manifest as \emph{poles} in the small parameter $\Delta= \eta_V-\epsilon_V+ \mathcal{O}(\epsilon^2_V,\eta^2_V,\epsilon_V\eta_V)$, which measures the departure from scale invariance. This slow roll parameter provides a natural \emph{infrared} regularization of the loop integrals, which allows for controllable and systematic (EFT) and slow roll expansions. We find that the one-loop effective potential to leading order in the slow-roll and $\Delta$ expansions is given by \begin{eqnarray} && V_{eff}(\Phi_0)= V_R(\Phi_0)\left[1+ \left(\frac{H_0}{4 \, \pi \; M_{Pl}}\right)^2\frac{\eta_V}{ \eta_V-\epsilon_V} + \textmd{higher orders in slow roll} \right] = \cr \cr &&= V_R(\Phi_0)\left[1+\frac{\Delta^2_{\mathcal{T}}}{32} \frac{n_s -1 + \frac38 \; r}{n_s -1 + \frac14 \; r} +\textmd{higher orders in slow roll}\right] \; , \end{eqnarray} \noindent where $H_0$ and the slow roll parameters are explicit but slowly varying functions of $\Phi_0$ given by eqs.(\ref{etav})-(\ref{sig}). This result is strikingly different from that in Minkowski space-time (see appendix). From this effective potential we obtain the effective slow roll parameters that include \emph{quantum corrections}. The lowest effective slow roll parameters are given by eqs.(\ref{epseff}). A remarkable aspect of the \emph{quantum corrections} to the effective potential and slow roll parameters (namely the terms of order $H^2_0/M^2_{Pl}$) is that these are of \emph{zeroth} order in slow roll. This is a consequence of the infrared enhancement from the nearly scale invariant scalar quantum fluctuations. The equations of motion for the quantum fluctuations of the inflaton around $\Phi_0$ are obtained including the one-loop self-energy corrections. The self-energy features a strong infrared behavior which is regularized by $\Delta$. The perturbative solution of the equations of motion features secular terms that diverge in the long time limit (vanishing conformal time). The dynamical renormalization group program\cite{ultimonuestro1,ultimonuestro2} is implemented to provide a resummation of this perturbative series. The improved solution for superhorizon modes features two novel aspects: a correction to the scaling exponent and a decay rate. The latter, as anticipated in refs.\cite{ultimonuestro1,ultimonuestro2}, describes the \emph{self-decay} of inflaton fluctuations. To leading order in the effective field theory and slow-roll expansions we find that the novel scaling dimension and decay rate are given by \begin{eqnarray} \label{dG} -d^- = && \left( \frac{H_0}{4\pi\,M_{Pl}} \right)^{\! \!
2} \frac{\sigma_V \; (\eta_V-\epsilon_V)+6 \, \xi^2_V}{2 \, \epsilon_V\,(\eta_V-\epsilon_V)^2}\; , \cr \Gamma_{\varphi \rightarrow \varphi \varphi} = && \left( \frac{H_0}{4\pi\,M_{Pl}} \right)^{\! \! 2} \frac{H_0 \; \xi^2_V}{\epsilon_V\,(\eta_V-\epsilon_V)^2}. \end{eqnarray} Both the novel scaling dimension and the decay rate can be expressed in terms of CMB observables by inserting eqs.(\ref{gorda}) for $ \eta_V, \; \epsilon_V , \;\xi_V $ and $ \sigma_V $ into eqs.(\ref{dG}). While the quantum corrections are small, consistent with the effective field theory expansion, they may \emph{compete} with higher order slow roll corrections in the gaussian approximation. Therefore, in order to gain understanding of the inflationary parameters from high precision data, any high order estimate in the slow roll approximation must be accompanied by an assessment of the quantum corrections arising from the interactions studied here. We have generalized these results by studying a model in which the inflaton is coupled to a light scalar field $\sigma$. We have obtained the contribution from a loop of $\sigma$ particles to the effective potential, the scaling exponents and the partial rate for the decay of superhorizon fluctuations of the inflaton $\rightarrow \sigma \sigma$. In this article we have focused on studying the effects of the interaction on the expectation value and the fluctuations of the inflaton field. In order to understand the possible effects of the interaction on curvature perturbations and/or gravitational waves, the next stage of the program requires quantizing the perturbations and computing loop corrections. This will be the focus of forthcoming studies. \begin{acknowledgments} D.B.\ thanks the US NSF for support under grant PHY-0242134, and the Observatoire de Paris and LERMA for hospitality during this work. This work is supported in part by the Conseil Scientifique de l'Observatoire de Paris through an `SInAction Initiative'. \end{acknowledgments}
\section{Introduction} Wide-band gap III-V nitrides, particularly Ga-, Al- and InN, and their semiconductor alloys, are materials currently under intense study. Some of their most promising applications in optoelectronic devices are, for instance, the fabrication of blue/green LEDs, \cite{led} laser diodes, \cite{laser} and `solar-blind' UV photodetectors. \cite{UV} The performance improvements of these and related optoelectronic devices depend strongly on the features of the intrinsic and extrinsic impurity defects in the nitride compounds. For example, defects and impurities provide free carriers under suitable conditions. Therefore, knowing the accurate position of the donor and acceptor levels of these systems is an issue of great importance for the understanding of the optical properties and practical applications of these nitrides. At present, Mg and Zn are the impurity materials most widely employed in the p-doping of GaN. The experimental thermal ionization energy (acceptor binding energy) associated with Mg is estimated at 250 meV. \cite{Strite} The highest doping achieved reaches hole concentrations of approximately $3\times 10^{18}$ cm$^{-3}$ at room temperature. \cite{conc} It is also known that in order to activate the dopants and improve the p-type conductivity, the samples must be treated with low energy electron beam irradiation, furnace annealing, or rapid thermal annealing after growth.\cite{leebi} On the other hand, Zn doping seems to be inefficient because of its relatively deep ionization energy (340 meV). \cite{Strite} Other dopants have been considered, but experimental problems like instability and/or hole compensation due to the formation of acceptor-H neutral complexes are still at issue. Estimates for the binding energies of several substitutional acceptors in GaN have been obtained in the past, mostly through photoluminescence (PL) spectra.\cite{Strite} However, residual impurities and defects in this material complicate the identification of these levels. In contrast, little is known about the doping and spectrum of impurity levels in AlN. In fact, no conclusive results for the doping of AlN with sufficiently high conductivity have yet been reported. Apart from the question of successful p-doping in GaN and AlN using various impurities, there are still at least two other important issues that are under scrutiny. The first one is related to the determination of the origin of the chemical shift observed in the acceptor spectrum levels in GaN, apparently induced by the differences in the cores of the various impurity atoms, and some possible lattice relaxation around the impurity atom. \cite{Strite} The second question is whether acceptors with smaller binding energies ($< 230 $ meV) exist for wurtzite and zincblende GaN. The occurrence of relatively large ionization energies for acceptors in GaN has been attributed in part to the fact that the III-V nitrides are more ionic than other III-V compounds (such as GaAs, GaP and InP), for which the acceptor binding energies are an order of magnitude smaller than those found in GaN. It has also been suggested that the enhanced binding energies found for some acceptors, like Zn and Cd, are associated with the relaxation of the d-electron core. \cite{Strite-1} On the other hand, impurities without d-electron states, such as Mg, C, and Si, appear to induce rather shallow acceptor levels.
Indeed, very recently, Park and Chadi \cite{Park-CH} examined the stability of acceptor centers in GaN, AlN and BN using first principles calculations. They concluded that the small bond lengths in III-V nitrides inhibit large lattice strain relaxations around impurities (mainly Be, Mg and C), giving rise to relatively shallow states for these species. This would suggest that a similar lack of relaxation accompanies other substitutional impurities in these hosts, producing relatively shallow levels, as long as there are no d-cores close to the valence band energies. Very recently, the formation energies and impurity levels for a few donor and acceptor species have been studied theoretically by several groups, \cite{Bogus,Bernardini,Neugebauer,Fiorentini,Bernard1} employing quantum molecular dynamics schemes and total energy calculations in the local density approximation of density functional theory. In general, consistency is found among those groups, as well as with experimental reports for some impurity levels, such as Mg$_{\it Ga}$ acceptors (X$_Y$ indicates the ion X substituting in the $Y$ site). However, this is not the case for other acceptors like C$_{\it N}$, where discrepancies of factors of three exist among theoretical values. Although the calculated energy levels for these approaches appear reliable for most cases, the impurity levels reported for some acceptors are close to the systematic error bars introduced in the calculations. The delicate and complex nature of these calculations, which require intensive computations, suggests that alternative methods should be explored in the study of impurity levels in these systems. There is also, no doubt, the need for new careful experiments in the better-characterized materials now available, to clarify these discrepancies. The features of the acceptor states in the different crystal phases, wurtzite (WZ) and zincblende (ZB), have not been discussed either. In order to address these questions, we present here a contribution towards the theoretical treatment of the impurity levels in GaN and AlN based on the effective mass approach for degenerate bands. In this paper we report effective mass theory calculations of the acceptor binding energies for various impurity atoms in GaN and AlN for both crystal structures, WZ and ZB. Particular attention has been paid to the chemical shifts introduced by the foreign atoms. An acceptor-pseudopotential model is used to take this effect into account. The approach used here is based on the effective mass theory (EMT) for degenerate bands. Well parameterized valence band structure calculations are used as input. The results obtained, with no adjustable parameters, are in very good agreement with experiments, as we will see below. Inevitably, the application of even a simple hydrogen-like model of acceptor states in group III-V semiconductors is more complicated than for idealized semiconductors with a single, isotropic and spin-degenerate valence band. The complications are due in part to the band warping and sixfold degeneracy or near-degeneracy of the valence band structure close to the $\Gamma$ point (${\bf k}=0$).
Since the perturbing potential introduced by the foreign atoms can be seen to zeroth order as pure Coulomb-like, the problem can be seen as a generalized hydrogenic problem, where the kinetic energy of a hole, in the rather complicated valence band structure of the III-V materials, is properly described by a $6 \times 6$ matrix Hamiltonian which describes well the dispersion features of the various hole bands. The EMT calculations of the binding energies of Be, Mg, Zn, Ca, and C acceptor impurities are shown to be in very good agreement with the available experiments, and consistent in general with other theoretical calculations employing other methods (with the exceptions discussed above for C, for example). The applicability of EMT for the calculation of impurity levels with 0.2--0.4 eV binding energies is then verified {\em post facto}, likely due to the large bandgap in these materials, which yields negligible mixing of conduction band states. Additionally, we find that the binding energies for acceptors in the ZB structures are predicted to be shallower than their counterparts in the WZ structures, suggesting that doping of ZB material would be of significant practical advantage. We notice that the difference on parameters, mainly the existence of a crystal field splitting for the WZ nitrides, strongly affects the band mixing and correspondingly the binding energies in the two polytypes. It is also likely that differences in the hole masses contribute to the calculated different binding energies. Although substitutional impurities do not represent a strict test of the different band parameterizations, the subtle interplay of the different valence bands on the resulting binding energies provides an interesting overall consistency check of the parameterized band structure. The paper is organized as follows. In section II, we present the characteristics of the generalized acceptor problem. The explicit matrix form of the ZB and WZ valence band Hamiltonians are also given there. The trial form of the envelope wave functions is presented in section III. The impurity pseudopotential model is discussed in detail in section IV. The correction due to polaron effects is briefly described in section V. The results and discussion are given in section VI, and the conclusions in section VII. \section{Generalized shallow acceptor problem} Substitutional impurities with one fewer valence electron than the host atom of the pure crystal introduce well localized acceptor states lying just above the top of the valence band structure. The theory of shallow donor and acceptor states in semiconductors has been reviewed in detail by Pantelides. \cite{Pantelides} We assume, as usual, that all acceptor levels in the semiconductor are described within the effective mass theory for degenerate band structures by the following matrix equation \begin{eqnarray} { H}({\bf r}){\bf F}({\bf r}) & = & [{\cal H}({\bf r})+U({\bf r}) {\bf 1]F}({\bf r}) \nonumber \\ & = & E{\bf F}( {\bf r}) \, , \label{ec1} \end{eqnarray} where ${ H}({\bf r})$ is the full acceptor Hamiltonian with eigenvalues $ E$ for the acceptor states. Here, ${\cal H}({\bf r})$ is the Hamiltonian properly constructed from crystal symmetry considerations which entirely describes the spectrum and eigenvalues of a hole near the valence band extremum at the $\Gamma$ point. 
Symmetry invariance group theory \cite{Pikus,RSP} and {\bf k$\cdot$p} perturbation theory for degenerate bands \cite{Lutt,Chuang} has been used to derive the proper effective-mass Hamiltonian for strained semiconductors depending upon the crystal structure. The potential $U({\bf r})$ is the perturbation produced by the acceptor-ion on the otherwise pure and periodic host crystal. In a simple idealized case, $U({\bf r})$ is taken to be the Coulomb potential $U({\bf r})=e^2 / \epsilon _o|\bf r|$, where $\epsilon _o$ is the static dielectric constant of the crystal, $\epsilon (q=0,\omega=0)$, representing a point charge in a dielectric medium. Notice that the screening of the simple hydrogenic potential by a dielectric function $\epsilon(\bf q)$ has been considered in the past as an approach to consider the contribution to the acceptor spectrum of the short range potential from the real impurity.\cite{Bernholc} Although this model gives an insight into the specific character of the different atomic acceptor levels, the model results in a generic value for all the impurity defects. This, clearly, neglects the chemical signature of the foreign atoms in the host material (the so-called central-cell contribution). \cite{Pantelides} Given these limitations, we employ instead an {\em ab initio} pseudopotential $U_{ps}(r)$ corresponding to the difference between the bare model potential of the impurity and the host atoms. Since the chemical correction induced by different species is expected to be small and because the pseudopotential used is fairly smooth and without discontinuities, the effective mass approach is expected to yield an appropriate description of the system. More details on the impurity potentials used are given in section IV. In Eq.\ (\ref{ec1}), ${\bf 1}$ is the $6\times 6 $ unit matrix and ${\bf F}({\bf r})$ is a column vector whose $F_j({\bf r})$ elements characterize the envelope function which modulates the Bloch functions $\phi _j({\bf r})$ of the unperturbed crystal at the top $({\bf k}\approx 0)$ of the valence structure. Correspondingly, the wave functions for the shallow states are given by \begin{equation} \psi ({\bf r})=\sum_{j=1}^6F_j({\bf r})\phi _j({\bf r}) \, . \label{ec2} \end{equation} The trial form chosen for the envelope functions $F_j({\bf r})$ is discussed in detail in section III. In the following subsections we describe briefly the explicit form of the hole Hamiltonian ${\cal H}({\bf r})$ for the two crystal polytypes (WZ and ZB), in which the bulk GaN and AlN semiconductors grow. \subsection{Wurtzite valence band Hamiltonian} In order to consider the motion of a carrier at the top of the valence band in a wurtzite semiconductor we must take into account its six-fold rotational symmetry, which induces a crystal field splitting. Moreover, in the case of spin-orbit interaction, the $\Gamma _{15}$ level splits into the $\Gamma _9$ state, upper $\Gamma _7$ level, and lower $\Gamma_7$ level, corresponding to the heavy hole, light hole and split-off hole bands. 
\cite{Pikus} The appropriate effective mass Hamiltonian that reflects those features of the WZ GaN bulk crystal should be described thus by the Rashba-Sheka-Pikus Hamiltonian (RSP),\cite{Pikus,RSP} as discussed recently by Sirenko {\em et al}.\cite{Sirenko} In the vicinity of the valence band maximum, and to second order in $k$, the six states (including the spin index) of the RSP Hamiltonian for unstrained WZ structures can be explicitly written in a matrix representation as follows: \begin{equation} {\cal H_{WZ}}({\bf k})=\left( \begin{array}{cccccc} F & 0 & -H^{*} & 0 & K^{*} & 0 \\ 0 & G & \Delta & -H^{*} & 0 & K^{*} \\ -H & \Delta & \lambda & 0 & I^{*} & 0 \\ 0 & -H & 0 & \lambda & \Delta & I^{*} \\ K & 0 & I & \Delta & G & 0 \\ 0 & K & 0 & I & 0 & F \end{array} \right) \, , \label{ec3} \end{equation} where \begin{eqnarray} F &=& \lambda +\theta +\Delta _1+\Delta _2 \,\ \nonumber \\ G &=& \lambda +\theta +\Delta _1-\Delta _2 \nonumber \\ \lambda &=& A_1k_z^2+A_2k_{\perp }^2 \qquad \qquad \nonumber \\ \theta &=& A_3k_z^2+A_4k_{\perp }^2 \nonumber \\ H &=& i(A_6k_zk_{+}+A_7k_{+}) \nonumber \\ I &=& i(A_6k_zk_{+}-A_7k_{+}) \nonumber \\ K &=& A_5k_{+}^2 \nonumber \\ \Delta &=& \sqrt{2}\Delta_{3}\, , \end{eqnarray} \noindent with $k_{\perp }^2 = k_x^2+k_y^2$, and $k_{\pm }=k_x\pm ik_y$. Here, $\Delta _1$ corresponds to the energy splitting produced by the anisotropy of the hexagonal symmetry, $\Delta _2=\Delta^{(z)}_{so}/3$ and $\Delta _3=\Delta^{(\perp)}_{so}/3$ are the energy splittings for the $z$ and perpendicular directions produced by the spin-orbit (SO) interaction.\cite{Sirenko} The $A$ constants are related to the inverse of the hole masses, in units of $\hbar^2/2m_o$, where $m_o$ is the bare electron mass. Notice that when the linear terms in (\ref{ec3}) are negligible ($A_7=0 $; which is in fact nearly the case in GaN and AlN), the RSP Hamiltonian has complete inversion symmetry. This symmetry allows for helpful simplifications in dealing with the acceptor problem in the envelope function framework, as we discuss below. \subsection{Zincblende valence band Hamiltonian} In the case of semiconductors with the ZB structure, the hole wave functions characterizing the sixfold degenerate $\Gamma _{15}$ state split, due to the effects of spin-orbit interaction, into the fourfold degenerate $\Gamma _8$ states corresponding to the heavy and light hole bands, and the spin-split off hole states $ \Gamma _7$. \cite{Pikus} The Hamiltonian which takes into account all these features of the cubic symmetry for ZB semiconductors is the well known Luttinger-Kohn Hamiltonian (LK),\cite{Lutt} which at the valence band extremum, and to second order in $k$, is expressed in terms of only four empirical parameters --- the so-called Luttinger-Kohn parameters $\gamma _1,\gamma _2$ and $\gamma _3$, and the spin-orbit splitting $\Delta _{o}$. 
Thus, the LK Hamiltonian ${\cal H_{ZB}}$ is written in matrix form as follows \begin{equation} {\cal H_{ZB}({\bf k})=}\left( \begin{array}{cccccc} P & L & M & 0 & N & S \\ L^{*} & Y & 0 & M & R & \sqrt{3}N \\ M^{*} & 0 & Y & -L & \sqrt{3}N^{*} & R \\ 0 & M^{*} & -L^{*} & P & -S^{*} & N^{*} \\ N^{*} & R^{*} & \sqrt{3}N & -S & W & 0 \\ S^{*} & \sqrt{3}N^{*} & R^{*} & N & 0 & W \end{array} \right) \, , \label{ec3.1} \end{equation} where \begin{eqnarray} L &=& -2\sqrt{3}i\gamma _2k_zk_{-} \nonumber \\ M &=& \sqrt{3}\gamma _3(k_x^2-k_y^2)-2\sqrt{3}i\gamma _3k_xk_y \nonumber \\ N &=& \frac i{\sqrt{2}}L \nonumber \\ P &=& \gamma _1k^2-\gamma _2\left( 2k_z^2-k_{\perp }^2\right) \nonumber \\ Q &=& \gamma _1k^2+\gamma _2\left( 2k_z^2-k_{\perp }^2\right) \nonumber \\ R &=& -\frac{\sqrt{2}}3i(P-Q) \nonumber \\ S &=& -i\sqrt{2}M \nonumber \\ W &=& \frac 13(2P+Q)+\Delta _o \nonumber \\ Y &=& \frac 13(P+2Q) \, . \end{eqnarray} \noindent Here, $L, M, P$ and $Q$ are in units of $\hbar ^2 /2m_o$. Notice that the higher symmetry of the ZB structure produces a much simpler ${\cal H} (r)$ and fewer parameters than for the RSP case. In both cases, the operator ${\cal H}({\bf r})$ is obtained via the usual transformation $ k_\alpha \rightarrow i\frac \partial {\partial x_\alpha }$ in ${\cal H}( {\bf k})$. \section{Trial form for the envelope functions} To solve the effective mass equation for degenerate bands, Eq.\ (\ref{ec1}), we use the fact that the effective mass Hamiltonian is invariant under inversion with respect to the origin, so that the envelope functions $F_j({\bf r})$ can be chosen to have definite parity. Since the features of the acceptor problem are rather like those of a hydrogenic-like problem, it has proved convenient to choose the envelope functions basically as an expansion in spherical harmonics and a linear combination of hydrogenic-like radial functions. In particular, we have chosen the following explicit form: \begin{equation} F_j({\bf r})=\sum_{l,m}f_l^{~j}(r)Y_{lm} (\theta ,\phi) \, , \label{ec4} \end{equation} % summing over all $l$ even (or odd), and with radial functions for a given hole band $j$ and angular momentum quantum number $l$ of the form % \begin{equation} f_l^{~j}(r)=\sum_{i=1}^NA_i^{~j}r^le^{-\alpha _ir}\, . \label{ec5} \end{equation} In this work, however, we are mostly interested on the ground state (the highest binding acceptor state), and in such state only even $l$ will contribute to the expansion --- as one would expect a ground state with even parity. This convenient simplification can be relaxed straightforwardly if desired, with little effect on the results. For numerical convenience, we find it useful to minimize or evaluate the acceptor binding energy choosing $\alpha _i^{\prime }s$ in the progression $\alpha _k=\alpha _1e^{\beta (k-1)}$, such that $ \beta= (N - 1)^{-1} \log (\alpha_N/\alpha_1) $, and the end point conditions are chosen as $\alpha _1=1.2\times 10^{-2}\ a_o^{*^{-1}}$, and $ \alpha _{N}=3.5\times 10^2~a_o^{*^{-1}}$. Here, $a_o^{*}= \tilde{\gamma_1}\epsilon _o a_o$ is the effective Bohr radius, and $\tilde{\gamma_1}$ is defined by \begin{equation} \tilde{\gamma_1} =\left\{ \begin{array}{ll} -(2m_o /\hbar^2) (A_2+A_4) & \mbox{ for WZ} \\ \gamma_{1} & \mbox{ for ZB} \end{array} \right. \, , \label{ec101} \end{equation} % such that the effective Rydberg energy is defined as $E_o^{*}= m_o e^4 / 2\hbar^2 \tilde{\gamma _1} \epsilon _o^2 = e^2/2a_o \tilde{\gamma_1} \epsilon_o$. 
The range of $\alpha_i$ values was designed to cover a wide spectrum of length scales. In the limiting case of $\tilde{\gamma _1}=\epsilon _o=1$ (with $N=25$ and for $l=0,2$),
one obtains the hydrogen spectrum, so that for the first five states we obtain (in Rydbergs) $E_1=1.0000,$ $E_2=0.2500,$ $E_3=0.1111,$ $E_4=0.0625$ and $ E_5=0.0399$, as expected. \section{Impurity atom pseudopotential} As mentioned above, a simple hydrogenic (scaled Coulomb) potential would not yield the observed variations in the binding energy of acceptor states for different impurity atom species. Photoluminescence measurements show indeed important differences in the acceptor binding energies in WZ GaN for different impurities \cite{Strite}. To study those `chemical' shifts one needs to use impurity potentials properly constructed to insure that their physical properties reflect the expected shifts. The impurity potential here is obtained from an analytical representation of the pseudopotential for the bare impurity and host atoms. The analytic form follows Lam {\it et al.},\cite{Lam} who fit the first-principles pseudopotentials developed earlier by Zunger {\it et al.} \cite{Zunger} in a density functional formalism. Notice then that the acceptor potential is truly an impurity pseudopotential, having its origin in {\it ab initio} calculations. The pseudopotential for a bare atom can be written as \cite{Lam} \begin{equation} U_{ps}(r)=\sum_lV_{ps}^l(r) \hat{P_l}-\frac{Z_v}r, \label{ec8} \end{equation} with \begin{equation} V_{ps}^l(r)=\frac{C_1^l}{r^2}e^{-C_2^lr}-\frac{Z_c}re^{-C_3r}, \label{ec9} \end{equation} where $V_{ps}^l(r)$ represents the atomic core pseudopotential. $ \hat{P_l}$ is the projection operator which picks out the component of the wave function with angular momentum number $l$. The constants $C_1^l,$ $C_2^l$ and $C_3$ are the fitted parameters, with $Z_c$ and $Z_v$ representing the core and valence electron charges, respectively, as defined by Lam. \cite{Lam} The first term in (\ref{ec9}) represents a potential barrier which replaces the kinetic energy of the true valence states, while the second term arises from electrostatic screening of the nucleus by the core electrons and exchange-correlation forces. Using these pseudopotentials, the impurity model potential is constructed as follows. When the substitutional impurity atom replaces the host atom in the crystal, the impurity potential is defined as the difference between the impurity and host ion pseudopotentials. If $l=0$, for instance, \begin{equation} U(r)=\frac {e^2}{\epsilon _o}\Delta V_{ps}^o(r)-\frac{\Delta Z_v e^2}{\epsilon _or } \, , \label{ec10} \end{equation} with \begin{equation} \Delta V_{ps}^o(r) = \pm (V_{ps,host}^o(r)-V_{ps,imp}^o(r)) \mbox{ for } Z_{host} ~^>_< Z_{imp} \, . \label{ec11} \end{equation} Here, $\Delta Z_v=Z_v^{host}-Z_v^{imp}$ ($=1$ for single acceptors), and $\epsilon _o$ is the dielectric constant of the host lattice. Clearly the first term in $U(r)$ corresponds to the net potential produced by the difference between the bare core potentials of the impurity and the host; it is the short-range part. The last term is the long-range Coulombic potential due to the difference in the valence charge $\Delta Z_v$. The static dielectric constant $\epsilon _o$ is introduced here to reflect the effect of the lattice polarizability (screening) of the host crystal. Notice that in this approach the net effect of the redistribution of charge near the impurity defect and the accompanying screening of the foreign charge at `large' distances (several lattice units) is considered fully in the pseudopotential definition. 
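Before turning to alternative screening approaches, it may help to make the variational machinery of Section III concrete. The following minimal sketch (Python with NumPy assumed; the analytic $l=0$ Slater matrix elements are standard textbook integrals and are not taken from the present calculation) builds the geometric exponent ladder $ \alpha_k=\alpha_1 e^{\beta(k-1)} $ quoted above and reproduces the hydrogen-limit check $\tilde{\gamma_1}=\epsilon_o=1$ of Section III for the $s$-wave bound states:
\begin{verbatim}
import numpy as np

# Geometric exponent ladder of Sec. III: alpha_k = alpha_1*exp(beta*(k-1)),
# beta = log(alpha_N/alpha_1)/(N-1), with the end points quoted in the text
# (lengths in units of the effective Bohr radius a_o^*).
N = 25
alpha_1, alpha_N = 1.2e-2, 3.5e2
beta = np.log(alpha_N / alpha_1) / (N - 1)
alpha = alpha_1 * np.exp(beta * np.arange(N))

# l = 0 Slater basis f_i(r) = exp(-alpha_i r).  Analytic radial matrix
# elements in Rydberg units (H = -nabla^2 - 2/r, energies in Ry).
a, b = alpha[:, None], alpha[None, :]
S = 2.0 / (a + b) ** 3            # overlap
T = 2.0 * a * b / (a + b) ** 3    # kinetic energy
V = -2.0 / (a + b) ** 2           # Coulomb attraction
H = T + V

# Normalize the basis, then use canonical orthogonalization to guard against
# near-linear dependence before solving the generalized problem H c = E S c.
n = np.sqrt(np.diag(S))
S, H = S / np.outer(n, n), H / np.outer(n, n)
s, U = np.linalg.eigh(S)
keep = s > 1e-10
X = U[:, keep] / np.sqrt(s[keep])
E = np.linalg.eigvalsh(X.T @ H @ X)
print(np.round(-E[:5], 4))  # binding energies close to 1, 1/4, 1/9, 1/16, 1/25 Ry
\end{verbatim}
The same generalized eigenvalue problem, built with the $6\times 6$ Hamiltonians of Section II and the impurity potential $U(r)$ in place of the bare Coulomb term, yields the acceptor levels discussed below.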
In a different approach, frequent in the literature, \cite{Wang} the role of the pseudopotential is partly simulated using a $q$-dependent screening function $\epsilon (q) ~(\rightarrow \epsilon_o {\it ~for~} q \rightarrow 0)$ in the simple hydrogenic-style impurity potential. We avoid using $\epsilon (q)$ thanks to the impurity-specific pseudopotential. We believe our approach to be better in this problem, as it requires no further adjustable parameters and yields the expected chemical shifts quite accurately. To provide a simple and independent test of the model we have calculated the binding energies for several acceptors in the well characterized semiconductor GaAs. The results are shown in Table I. The theoretical binding energies are in excellent agreement with the experimental values, with no additional parameters. \section {Polaron correction} We should also notice that since the nitride semiconductors (GaN and AlN) are polar materials, one would expect that the electron-LO phonon coupling would introduce corrections to the bound states. In order to obtain an estimate of such correction, we assume that the polaron contribution to the acceptor binding energy close to the $\Gamma$ point is diagonal in band index. Therefore the acceptor binding energies will be enhanced by $(1+ \alpha_F (m^*_j)/6 )E^*_{o,j}$, up to first order in the Fr\"ohlich coupling constant $\alpha_F$ for each hole band. This coupling constant is defined by \cite{Sak} \begin{equation} \alpha_{F}(m^*_j) = \left( \frac{1}{\epsilon_\infty}-\frac{1}{\epsilon_o} \right) \left( \frac{E_o}{\hbar \omega} \frac{m^*_j}{m_o} \right)^{1/2} \, , \end{equation} where $E_{o}$ is one Rydberg, $E^*_{o,j}$ is the ground state energy of the impurity acceptor without the polaron correction, $\hbar \omega$ is the LO phonon energy, and $m^*_j$ is the average $j$-hole effective mass. In this way, the contribution of each hole band to the polaron energy is taken into account explicitly in the multiband calculation. Let us notice that the resulting polaron correction is relatively small (not greater than 8\%) in all cases, as shown in the tables below, despite the polar nature of these materials. This is presumably due to the fact that the coupling constant associated to each hole band is relatively small ($\le 1.5$) in all cases. \cite{XWFMP97} \section{Results and Discussion} Since the reported values of effective mass parameters obtained by different approaches for both the RSP and LK Hamiltonians may have significant discrepancies, we have used different sets of parameterizations in order to compare the resulting impurity states. \cite{Kim,Suzuki,Maje,Yeo,Wang,Meney} For the wurtzite system (Tables II and III), we use Kim {\it et al.} \cite{Kim} RSP parameterizations obtained by full potential linearized muffin-tin orbital (FP-LMTO) band structure calculations, in which the spin-orbit coupling effects were obtained via the atomic-sphere approximation. We have also used the Suzuki {\it et al.} \cite{Suzuki} RSP parameters obtained by full potential linearized augmented plane wave calculations (FLAPW); a different set reported by Majewski {\it et al.} \cite{Maje} based on the norm-conserving pseudopotential plane-wave (PPPW) method, and a fourth set obtained by Yeo {\it et al.}, \cite{Yeo} who employ an empirical pseudopotential method. 
Notice that differences in parameters between these two groups are typically small, but can be substantial in some cases (such as the value of the crystal field splitting $\Delta_1$), having important consequences on the binding energy calculations, as we see later. In the case of zincblende structures, the LK hole-parameters used are those reported by Kim {\it et al.},\cite{Kim} and Suzuki {\it et al}.\cite{Suzuki} as mentioned above, and a third set by Wang {\it et al.}, \cite{Wang} based on pseudopotential calculations. These parameters are summarized in Table IV. We first examine our results for the acceptor levels in WZ nitrides. We should emphasize here that the experimental values of the acceptor levels in WZ-GaN are not without controversy. Nevertheless, in order to have a trend of the binding energies for different dopants, we compare our theoretical calculations with the experimental values in the literature. For GaN the results are listed in Table V (theoretical binding energy values are reported here to the nearest meV, but are calculated with much higher numerical accuracy for each set of parameters). We note that in general the binding energies for different impurities are in good agreement with those values observed in experiments. For instance, our calculations with the Suzuki {\em et al}. \cite{Suzuki} parameters give rise to a binding energy for Be$_{\it Ga}$ and Mg$_{\it Ga}$ (241 and 253 meV, respectively, with the polaron correction included) which would seem to be in better accord with the reported experimental value (250 meV). Indeed, Salvador {\it et al.} \cite{Salvador} reported recently room-temperature photoluminescence spectra of Be-doped GaN films. They found strong features in the 390-420 nm range which were attributed to the acceptor state formed by Be at about 250 meV above the valence band edge. Even though residual impurities could be responsible as well for this level, no experiments have been reported to confirm either claim. Very recently, Bernardini {\it et al.} \cite{Bernardini} using first principles calculations, predict that Be is a shallow acceptor in GaN with a binding energy (BE) of only 60 meV, in clear contrast with our calculations and with the experimental data. It is interesting to note, however, that our BE's for Mg ($\sim$ 200-250 meV) are in satisfactory agreement with those theoretical values obtained from first principles calculations by Fiorentini {\it et al.} \cite{Fiorentini} ($\sim$\, 230 meV) and Neugebauer {\it et al.} \cite{Neugebauer} ($\sim$ 200 meV). In contrast, the binding energies with Ref.\ [\ref{Suzuki}] parameters, for Zn and C impurities, are overestimated with respect to the experimental values (presumably due to the high value of crystal field reported in [\ref{Suzuki}]). In principle we should expect the best fit precisely for these impurities since they are isocoric with Ga and N, respectively, which would produce negligible local relaxations and core polarization effects. The best agreement occurs when we use the parameters from Kim {\it et al.}, \cite{Kim} suggesting that their parameter set is somewhat better. For example, for Zn$_{Ga}$ in GaN, we obtain a BE of 331 meV using Kim's parameters, which is in good agreement with the experimental value of 340 meV, and in excellent agreement with the theoretical value reported by Bernardini {\it et al}. (330 meV). 
\cite{Bernard1} Concerning the C$_{N}$ substitutional impurity in a N site, we find that, with the exception of Suzuki's parameters, all the hole-band sets give BE's (223-240 meV) comparable with the experimental value of 230 meV from Fischer {\it et al}. \cite{Fischer} Note that using Kim's parameters places the acceptor level right at the experimental value, in a nice but probably fortuitous agreement, considering the possible sources of systematic errors. Boguslawski {\it et al.} \cite{Bogus} also predicted an ionization energy for C$_{N}$ of $\sim 200$ meV, while Fiorentini {\it et al.} \cite{Fiorentini} report a deeper ($\sim 600$ meV) value. The formation energy for this impurity also differs substantially (by 1.4 eV) between those authors. The relatively higher relaxation effects predicted by Ref.\ [\ref{Bogus}] seem to play a more crucial role here. Similar discrepancies are found between the present work and other calculations for Ca$_{\it Ga}$ and Si$_{\it N}$. \cite{Bogus} We found that Ca$_{\it Ga}$ has its acceptor level ($\sim 260$ meV) close to that of Mg. It is interesting to notice that temperature-dependent Hall measurements of Ca-doped GaN have shown that the thermal ionization energy level of Ca ($\sim$\ 0.17 eV) is similar to that found for Mg ($\sim$\ 0.16 eV). \cite{Lee,Akasaki} This could indicate that the acceptor binding energy for Ca is also close to that for Mg, as we have indeed predicted. Similarly, Si$_{\it N}$ was found to have a rather shallow level in WZ-GaN at about 0.2 eV. While the donor behavior of Si is well known, no reliable experimental evidence of a Si acceptor has been reported. The collection of results discussed above indicates that the parameterization of Kim {\em et al.} \cite{Kim} leads to acceptor binding energies in overall better agreement with the experiments and other theoretical estimates. Notice, however, that the differences in binding energies in GaN obtained with the other sets of parameters are not large in most cases, lying within a few percent of each other. We would like now to comment on the effect of the crystal field splitting on our calculations. Whereas recent experiments seem to indicate that the $\Delta_1$ value is about 10 meV, \cite{Edwards,Gil,Gil1} the theoretical estimates are still controversial, varying between 22 and 73 meV for GaN depending upon the approach used. \cite{Kim,Suzuki,Maje,Yeo} For example, Refs.\ [\ref{Kim}] and [\ref{Suzuki}] obtained $\Delta_1=36$ and 73 meV, respectively. The former authors attribute the large theoretical discrepancy to the use of an ideal-cell internal structure parameter $u$ in Ref.\ [\protect \ref{Suzuki}], instead of the relaxed one. In any case, to illustrate the effect of the $\Delta_1$ value on the binding energies, we show in Fig.\ 1 their dependence on this parameter for Mg$_{\it Ga}$ in WZ-GaN, over a wide range. A rather monotonic behavior is seen in the binding energies, as one would expect. Note that for all parameter sets (with the exception of those in Ref.\ [\ref{Maje}]) the BE's are consistently close for each $\Delta_1$ value. Note also that using the experimental value of 10 meV for $\Delta_1$ would produce smaller binding energies, giving values of about 0.19 eV, regardless of the set employed. The behavior for other dopants shows an analogous trend, where the shift in the binding energy is nearly the difference in $\Delta_{1}$ values.
This discussion indicates that additional experimental evidence for a smaller $\Delta_{1}$ value, and comparison with better optimized estimates, would be of interest. The results for AlN in the wurtzite structure are given in Table VI.\@ The first thing to notice here is that, due perhaps to the large discrepancy in $\Delta_1$ values, $-215$, $-219$, and $-58$ meV for Kim, \cite{Kim} Majewski, \cite{Maje} and Suzuki, \cite{Suzuki} respectively, the binding energies differ by almost a factor of two for different parameter sets. Notice further that the values of $A_5$ and $A_6$ also differ substantially for different authors, strongly affecting the band mixing and corresponding binding energies. Given the better agreement of the Kim {\em et al}. parameters in WZ-GaN, we are inclined to think that the corresponding results in WZ-AlN will perhaps be closer to the experimental results. Unfortunately, as we mentioned earlier, the experimental spectrum for acceptors in AlN is unknown at present (due to the well known difficulties in doping this material \cite{Strite}). Further scrutiny of the parameters reported by these and future authors should be carried out to solve the disagreements. Notice that the BE of $C_N$ in WZ-AlN is found to exceed 0.65 eV in our calculations for all three sets of parameters (not shown in Table VI). This value, perhaps at the limit of validity of our EMT calculations, suggests nevertheless that such an impurity will yield a somewhat deeper level than those reported in Table VI.\@ Although substitutional impurity calculations do not represent a strict test of the band parameterizations, the subtle interplay of the different valence bands on the resulting binding energies (or even excited impurity states) provides an interesting overall consistency check. For the ZB phase, we notice that the predicted binding energies are consistently smaller (by nearly a factor of two) than in the WZ structure of GaN. Indeed, typical differences of roughly 100 meV are found in the binding energies between the two phases (ZB and WZ) in this material. This would have important consequences for electronic applications once doping of ZB phases is stabilized. Concerning the resulting impurity binding energies for GaN, we observe that the LK parameters given by Refs.\ [\ref{Kim}], [\ref{Suzuki}], and [\ref{Wang}] give rise to binding energies which are in close agreement with each other. We should also comment that a different set of band parameters in the ZB phase has been given by Meney {\it et al}., using a semi-empirical perturbative approach. \cite{Meney} However, using these parameters results in BE's much smaller than those presented here. This difference, even greater in the binding energies for ZB-AlN, reflects the more approximate nature of the parameters in Ref.\ [\ref{Meney}]. Notice that the Luttinger $\gamma$-parameters in Kim {\em et al}. are slightly smaller than those of Suzuki {\em et al}. (or equivalently, the effective masses are slightly larger), which would be expected to yield slightly larger BE's for the former set of parameters, as is clearly seen in Table VII. Recent PL spectra of cubic GaN by As {\it et al.}\cite{As} claimed, as indeed we have predicted in our calculations, that acceptor BE's for cubic GaN may be shallower than those in wurtzite GaN. Acceptor energies of about 130 meV have been estimated by them. This is in very good agreement with our calculations; as we can see in Table VII, the BE's range from $\sim$\ 130 meV for Si to $\sim$\ 180 meV for Zn.
This acceptor level has not been identified and is probably produced by residual impurities. The smaller binding energies in ZB, with respect to impurities in the WZ structure, are an interesting result that should be understood in terms of the different band structure parameters. Notice, however, that the difference in the effective Rydberg energy for WZ and ZB GaN is not large at all, as seen in Tables II and IV. Similarly, the effective Bohr radius for both structures is nearly the same, as illustrated by the fact that $\tilde \gamma_1$ is of the same order in both cases, and that the dielectric constant for both polytypes has been taken as $\epsilon_o=9.5$. The polaron correction is certainly relatively small also, and is therefore not a possible source of the binding energy difference in these polytypes. However, the parameter that apparently gives rise to these large shifts in the acceptor energies can be identified with the in-plane heavy hole mass, which is indeed larger in wurtzite than in zincblende for both GaN and AlN, and hence produces larger binding energies. In order to verify the effect of the different effective masses in the two polytypes, we have calculated the acceptor levels for WZ-GaN using the quasicubic sets of parameters of Ref.\ [\ref{Kim}], with the same crystal field splitting as that obtained for the non-quasicubic set. It turns out that the binding energies are correspondingly smaller, which confirms our assumption. One should also mention that, just as seen in Fig.\ 1, a vanishingly small $\Delta_1$ (as is the case in ZB) would produce an even smaller binding energy for a given impurity. [This would also explain the agreement among the three sets of parameters, since the $\Delta_1$ differences are the most significant ones between different authors.] We then conclude that it is in fact a combination of the crystal field splitting and slightly larger hole masses that produces the larger binding energies in WZ than in the ZB structure, an interesting and important effect of the different lattice and band structures. \section{Conclusion} We have carried out calculations for the shallow acceptor energies associated with different substitutional impurity atoms in GaN and AlN hosts. The calculations were performed within the effective mass theory, taking into consideration the appropriate valence band Hamiltonian symmetries for the WZ and ZB polytypes, using the full $6\times 6$ acceptor Hamiltonian and including the actual spin-orbit energy splitting. In addition, the impurity pseudopotential and the electron-phonon (polaron) correction have been explicitly considered. This more realistic treatment allows us to compare directly with the observed data and verify that our calculation produces the appropriate `chemical shifts'. Indeed, our calculations of the acceptor binding energies are in quite good agreement with PL experiments, as the introduction of the impurity pseudopotential seems to be an excellent model to describe the chemical shifts associated with each impurity atom. It is interesting that the good fits were found without any adjustable parameters in the calculation, once the contribution due to the electron-phonon polar interaction was included. We find that small differences in the hole effective mass parameters can lead to relatively large discrepancies in the binding energies. Our overall evaluation of parameters suggests that the better BE values are obtained with those in Ref.\ [\ref{Kim}].
Correspondingly, we refer the reader to the first line in each impurity case in Tables V, VI, and VII, for what we consider the best BE estimates, within a few percent error. Further refinement of experimental values would be desirable to set narrower constraints on the theoretical values. We also find that the binding energies for acceptors in the ZB structures are much shallower than their counterparts in the WZ structures, suggesting perhaps much more efficient carrier doping in those systems (yet to be observed experimentally). Finally, we should mention that preliminary studies of the strain effects on the acceptor binding energies show an increase as the strain increases, although with a much stronger dependence than in other III-V materials. A complete report of these studies will be presented elsewhere. \acknowledgments We thank K. Kim, W.R.L. Lambrecht and B. Segall for kindly communicating unpublished results to us, and for very helpful discussions. This work was supported in part by grants ONR-URISP N00014-96-1-0782, DURIP N00014-97-1-0315, and from CONACyT-M\'{e}xico.
\section{Introduction} Over the past decade, Grothendieck's theory of motives has come to play an increasingly important role in theoretical physics. While the existence of a relation between motives and periods of algebraic varieties and computations in high-energy physics might have seemed surprising and unexpected, the existence of underlying motivic structures in quantum field theory has now been widely established, see for instance \cite{BEK}, \cite{BrSch}, \cite{CoMa}, \cite{Mar}. Typically, periods and motives occur in quantum field theory in the perturbative approach, through the asymptotic expansion in Feynman diagrams, where in the terms of the asymptotic expansion the renormalized Feynman integrals are identified with periods of certain hypersurface complements. The nature of the motive of the hypersurface constrains the class of numbers that can occur as periods. Similarly, a large body of recent work on amplitudes in $N=4$ Supersymmetric Yang--Mills has uncovered another setting where the connection to periods and motives plays an important role, see \cite{Ampl}, \cite{Gon1}, \cite{Gon2}. \smallskip In this paper, we present another surprising instance of the occurrence of periods and motives in theoretical physics, this time in a model of (modified) gravity based on the spectral action functional of \cite{CCact}. The situation is somewhat similar to the one seen in the quantum field theory setting, with some important differences. As in the QFT framework, we deal with an asymptotic expansion, which in our case is given by the large energy expansion of the spectral action functional. We show in this paper that, in the case of (Euclidean) Robertson--Walker spacetimes, the terms of the asymptotic expansion of the spectral action functional can be expressed as periods of mixed Tate motives, given by complements of quadric hypersurfaces. An important difference, with respect to the case of a scalar massless quantum field theory of \cite{BEK}, is that here we need to consider only one quadric hypersurface for each term of the expansion, whereas in the quantum field theory case one has to deal with the much more complicated motive of a union of quadric hypersurfaces, associated to the edges of the Feynman graph. On the other hand, the algebraic differential form that is integrated on a semi-algebraic set in the hypersurface complement is much more complicated in the spectral action case considered here than in the quantum field theory case: the terms in the algebraic differential form arise from the computation, via pseudo-differential calculus, of a parametrix for the square of the Dirac operator on the Robertson--Walker spacetime, after a suitable change of variables in the integral. While the explicit expression of the differential form, even for the simplest cases of the coefficients $a_2$ and $a_4$, can take up several pages, the structure of the terms can be understood, as we explain in the following sections, and the domain of definition is, in the case of the $a_{2n}$ term, the complement of a union of two hyperplanes and a quadric hypersurface defined by a family of quadrics $Q_{\alpha, 2n}$ in an affine space ${\mathbb A}^{2n+3}$. \smallskip In Section~\ref{RWsec} we describe our choice of coordinates, and the resulting form of the pseudodifferential symbol of the square of the Dirac operator on a (Euclidean) Robertson-Walker metric.
In Section~\ref{WodSec}, we describe briefly how the Seeley-DeWitt coefficients of the heat kernel expansion can be computed in terms of Wodzicki residues, by taking products with auxiliary tori with flat metrics. Section~\ref{a2Sec} gives the explicit computation of the $a_2$ term, showing that, before integrating in the time variable, and treating the scaling factor as an affine parameter $\alpha \in {\mathbb A}^1\smallsetminus\{ 0 \}={\mathbb G}_m$, one can write the resulting integral as a period obtained by integrating an algebraic (over ${\mathbb Q}$) differential form over a ${\mathbb Q}$-semi-algebraic set. The differential form is defined on the complement in ${\mathbb A}^5$ of a union of two hyperplanes and the quadric determined by the vanishing of the quadratic form $Q_{\alpha, 2}=u_1^2 +\alpha^{-2} (u_2^2 + u_3^2 + u_4^2)$. The ${\mathbb Q}$-semialgebraic set in this hypersurface complement has boundary contained in a divisor given by a union of coordinate hyperplanes. Although the boundary divisor and the hypersurface intersect nontrivially, all the integrals are convergent and we do not have a renormalization problem, unlike what happens in the quantum field theory setting. In Section~\ref{a4Sec} we prove an analogous result for the $a_4$ term, with the very lengthy full expression of the $a_4$ term given in the appendix. In Section~\ref{a2nSec}, using an inductive argument and the result of the previous cases, we prove that the terms $a_{2n}$ can all be identified (prior to time-integration) with periods of motives of complements of quadric hypersurfaces obtained from a family of quadrics $$ Q_{\alpha, 2n} = u_1^2 + \alpha^{-2} (u_2^2+u_3^2+u_4^2) + u_5^2 + \cdots + u_{2n+2}^2. $$ The algebraic differential forms depend on $2n$ auxiliary affine parameters $\alpha_1,\ldots,\alpha_{2n}$, which correspond to the time derivatives of the scaling factor of the Robertson--Walker metric. In Section \ref{MotSec} we analyze more explicitly the motive, showing that, over a quadratic field extension ${\mathbb Q}(\sqrt{-1})$ where the quadrics become isotropic, it is a mixed Tate motive, while over ${\mathbb Q}$ it is a form of a Tate motive in the sense of \cite{Rost}, \cite{Vishik1}, \cite{Vishik2}. We compute explicitly, by a simple inductive argument, the class in the Gro
thendieck ring of the relevant hypersurface complement. \smallskip \subsection{The spectral model of gravity} The {\em spectral action} functional, introduced in \cite{CCact} is a regularized trace of the Dirac operator $D$ given by $$ {\mathcal S}(\Lambda) = {\rm Tr}(f(D/\Lambda))= \sum_{\lambda \in \text{Spec}(D)}\text{Mult}(\lambda)f(\lambda/\Lambda),$$ where the test function $f$ is a smooth even rapidly decaying function, which should be thought of as a smooth approximation to a cutoff function. The parameter $\Lambda>0$ is an energy scale. One of the main advantages of this action functional is that it is not only defined for smooth compact Riemannian spin manifolds, but also for a more general class of geometric objects that include the noncommutative analogs of Riemannian manifolds, finitely summable spectral triples, see \cite{CoS3}. In particular, the spectral action functional applied to almost commutative geometries (products of manifolds and finite noncommutative spaces) is used as a method to generate particle physics models with varying possible matter sectors depending on the finite geometry and with matter coupled to gravity, see \cite{WvS} for a recent overview. It was shown in \cite{CCact} that, in the case of commutative and almost commutative geometries, the spectral action functional has an asymptotic expansion for large energy $\Lambda$, $$ {\rm Tr} (f(D/\Lambda))\sim\,\sum_{\beta \in \Sigma^+_{ST}}\,f_\beta\,\Lambda^\beta \,\, {\int \!\!\!\!\!\! -} |D|^{-\beta} \,\, +\,f(0)\,\zeta_{D}(0) + \cdots, $$ where the coefficients depend on momenta $f_\beta=\,\int_{0}^{\infty}f(v)\,v^{\beta-1}\,dv$ and Taylor coefficients of the test function $f$ and on residues $$ {\int \!\!\!\!\!\! -} |D|^{-\beta} = \frac{1}{2} {\rm Res}_{s=\beta} \,\, \zeta_D(s) $$ at poles of the zeta function $\zeta_D(s)$ of the Dirac operator. The leading terms of the asymptotic expansion recover the usual local terms of an action functional for gravity, the Einstein-Hilbert action with cosmological term, with additional modified gravity terms given by Weyl conformal gravity and Gauss-Bonnet gravity. In the case of an almost commutative geometry the leading terms of the asymptotic expansion also determine the Lagrangian of the resulting particle physics model. The spectral action on ordinary manifold, as an action functional of modified gravity, was applied to cosmological models, see \cite{Mar2} for an overview. In the manifold case, the Mellin transform relation between zeta function and trace of the heat kernel expresses the coefficients of the spectral action expansion in terms of the Seeley-DeWitt coefficients $a_{2n}$ of the heat kernel expansion, $$ {\rm Tr}(e^{-t D^2}) \sim_{t\to 0+}\,\,\, t^{-m/2} \sum_{n=0}^\infty a_{2n}(D^2) \,t^n . $$ Pseudodifferential calculus techniques and the parametrix method can then be applied to the computation of the symbol and the Seeley-DeWitt coefficients. The resulting computations can easily become intractable, but a computationally more efficient method introduced in \cite{FFMRationality}, based on Wodzicki residues and products by auxiliary flat tori can be applied to make the problem more easily tractable. \smallskip In the case of the (Euclidean) Robertson-Walker spacetimes, it was conjectured in \cite{CC} and proved in \cite{FGK} that all the terms in the expansion of the spectral action are polynomials with rational coefficients in the scaling factor and its derivatives. 
This rationality result (the fact that the terms of the expansion are polynomials with rational coefficients in $a(t)$ and its derivatives) suggests the existence of an underlying arithmetic structure. In the case of the Bianchi IX metrics, a similar rationality result was proved in \cite{FFMRationality} and the underlying arithmetic structure was analyzed in \cite{FFM2} for the Bianchi IX gravitational instantons, in terms of modular forms. Here we consider the case of the Robertson-Walker spacetimes and we look for arithmetic structures in the expansion of the spectral action in terms of periods and motives. A similar motivic analysis of the Bianchi IX case will be carried out in forthcoming work. \section{Robertson-Walker metric and the Dirac operator}\label{RWsec} We consider the Robertson-Walker metric with the expansion factor $a(t)$, \[ ds^2 = dt^2 + a(t)^2 d\sigma^2, \] where $d\sigma^2$ is the round metric on the 3-dimensional sphere $\mathbb{S}^3$. In terms of the Hopf coordinates on $\mathbb{S}^3$, we use the local chart \[ x = (t, \eta, \phi_1, \phi_2) \mapsto (t, \sin \eta \cos \phi_1, \sin \eta \sin \phi_1, \cos \eta \cos \phi_2, \cos \eta \sin \phi_2 ), \] \[ 0 < \eta < \frac{\pi}{2}, \qquad 0 < \phi_1 < 2 \pi, \qquad 0 < \phi_2 < 2 \pi. \] In this coordinate system, the Robertson-Walker metric is written as \[ ds^2 = dt^2 + a(t)^2 \left ( d \eta^2 + \sin^2 (\eta) \, d\phi_1^2 + \cos^2 (\eta) \, d \phi_2^2 \right ), \] or alternatively we write: \[ (g_{\mu \nu}) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & a(t)^2 & 0 & 0 \\ 0 & 0 & a(t)^2 \sin ^2(\eta ) & 0 \\ 0 & 0 & 0 & a(t)^2 \cos ^2(\eta ) \end{array} \right). \]
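As a consistency check (a small symbolic computation added for this presentation, not part of the original derivation), one can verify that the Hopf chart above does induce the round metric $d\eta^2+\sin^2(\eta)\,d\phi_1^2+\cos^2(\eta)\,d\phi_2^2$ on $\mathbb{S}^3$, so that the Robertson-Walker metric is diagonal in the coordinates $(t,\eta,\phi_1,\phi_2)$:
\begin{verbatim}
# Symbolic check that the Hopf embedding of the unit S^3 in R^4 induces
# d(eta)^2 + sin^2(eta) d(phi1)^2 + cos^2(eta) d(phi2)^2, and hence that
# ds^2 = dt^2 + a(t)^2 d(sigma)^2 is diagonal in (t, eta, phi1, phi2).
import sympy as sp

t, eta, p1, p2 = sp.symbols('t eta phi1 phi2', real=True)
a = sp.Function('a')(t)

X = sp.Matrix([sp.sin(eta)*sp.cos(p1), sp.sin(eta)*sp.sin(p1),
               sp.cos(eta)*sp.cos(p2), sp.cos(eta)*sp.sin(p2)])
J = X.jacobian([eta, p1, p2])
g_sphere = sp.simplify(J.T * J)
print(g_sphere)   # diag(1, sin(eta)^2, cos(eta)^2)

g_RW = sp.diag(1, a**2 * g_sphere[0, 0],
               a**2 * g_sphere[1, 1], a**2 * g_sphere[2, 2])
print(g_RW)       # diag(1, a^2, a^2 sin(eta)^2, a^2 cos(eta)^2)
\end{verbatim}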
$^{13}$CO &$\nu$=0 &$J$=2-1 &6.038 10$^{-7}$ &220.399 &15.9\\ $^{12}$CO &$\nu$=0 &$J$=2-1 &6.910 10$^{-7}$ &230.538 &16.6\\ SiO &$\nu$=0 &$J$=5-4 &5.917 10$^{-4}$ &217.105 &31.26\\ SO$_2$ &$\nu$=0 &16$_{6,10}$-17$_{5,13}$ $^{**}$ &2.349 10$^{-5}$ &234.422 &213.32\\ \hline \multicolumn{6}{l}{$^*$ From \citet{Schoier2005}} \\ \multicolumn{6}{l}{$^{**}$ This is erroneously quoted by \citet{Homan2018} as 28$_{3,25}$-28$_{2,26}$} \\ \end{tabular} \\ \end{table*} Not clearly apparent in Figures \ref{fig3} and \ref{fig4} is the strong asymmetry of the SiO and SO$_2$ data cubes, which is revealed by comparing the flux densities integrated in each of the octants obtained by splitting red-shifted/blue-shifted, north/south and east/west (defined in rotated coordinates). Table \ref{tab4} summarizes these results by averaging over north and south. A toy model parameterization in terms of two parameters $u$ and $v$, respectively measuring the blue/red and east/west asymmetries, gives a good fit to the CO and SiO data but not to the SO$_2$ data. It uses two parameters to describe four quantities related by one relation (their sum is equal to 4): blue-east=$1+u+v$, blue-west=$1+u-v$, red-east=$1-u+v$ and red-west=$1-u-v$. The values of $u$ and $v$ are listed in the table together with the rms deviation between model and observed values. \begin{table*} \centering \caption{Normalised brightness of the $^{12}$CO(2-1), SiO and SO$_2$ data-cube quadrants. East and west mean $x>0$ and $x<0$ respectively.} \label{tab4} \begin{tabular}{ccccccc} \hline &\multicolumn{2}{c}{CO} &\multicolumn{2}{c}{SiO} &\multicolumn{2}{c}{SO$_2$} \\ &East &West &East &West &East &West \\ \hline Blue &1.02 &0.86 &0.86 &0.46 &1.48 &0.82\\ Red &1.09 &1.04 &1.55 &1.12 &0.81 &0.90\\ $u$ &\multicolumn{2}{c}{$-$0.06} &\multicolumn{2}{c}{$-$0.34} &\multicolumn{2}{c}{0.15}\\ $v$ &\multicolumn{2}{c}{0.05} &\multicolumn{2}{c}{0.21} &\multicolumn{2}{c}{0.14}\\ Rms &\multicolumn{2}{c}{0.03} &\multicolumn{2}{c}{0.01} &\multicolumn{2}{c}{0.19}\\ \hline \end{tabular} \end{table*} The very large asymmetry displayed by the SiO data-cube is characterized by a strong depression of the blue-western quadrant with respect to its red-eastern counterpart. It is interesting to note that the CO asymmetry, although much smaller, is of the same nature, suggesting that it may have the same source as the SiO asymmetry, its effect being much stronger at short distances from the star. At variance with CO and SiO emissions, SO$_2$ emission is dominated by a blue-red asymmetry in the eastern hemisphere and is not properly described by the toy model parameterization, revealing the very different morpho-kinematic regime probed by this line close to the star. We remark that both expansion and rotation, or for that matter any linear combination of those, should produce a centrally symmetric data-cube. The strong central asymmetry of the SiO data-cube is therefore evidence for something other than central expansion or rotation.
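The toy-model fit can be reproduced with a few lines of code (a sketch written for this discussion; a standard least-squares estimate is assumed, and the input values are the octant brightnesses of Table~\ref{tab4}):
\begin{verbatim}
# Least-squares fit of the two-parameter toy model
#   blue-east = 1+u+v, blue-west = 1+u-v, red-east = 1-u+v, red-west = 1-u-v
# to the four (north/south averaged) octant brightnesses.
import numpy as np

def fit_uv(blue_east, blue_west, red_east, red_west):
    d = np.array([blue_east, blue_west, red_east, red_west])
    A = np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]], float)
    (u, v), *_ = np.linalg.lstsq(A, d - 1.0, rcond=None)
    rms = np.sqrt(np.mean((d - (1.0 + A @ [u, v]))**2))
    return u, v, rms

print(fit_uv(1.02, 0.86, 1.09, 1.04))   # CO : u ~ -0.06, v ~ 0.05, rms ~ 0.03
print(fit_uv(0.86, 0.46, 1.55, 1.12))   # SiO: u ~ -0.34, v ~ 0.21, rms ~ 0.01
\end{verbatim}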
The centrally symmetric component of the data-cube, $f_S(x,y,V_z)=\frac{1}{2}[f(x,y,V_z)+f(-x,-y,-V_z)]$, should keep track of the respective roles of rotation and expansion in the morpho-kinematics of the circumstellar envelope, leaving aside whatever is causing the strong central asymmetry. In the centrally symmetric data-cube, the rms deviation of the octant flux densities with respect to their mean is only 5\% instead of 40\% for the original data-cube (note that the value of 5\% gives an upper limit to the uncertainties attached to the normalised brightness values listed in Table \ref{tab4}). \section{SO$_2$ emission} \label{sec4} The confinement of SO$_2$ emission to the immediate neighbourhood of the star, less than 30 au, makes it a particularly valuable source of information about the mass loss mechanism in its early phase. SO$_2$ is known to trace hot gas in the immediate neighbourhood of oxygen-rich AGB stars \citep{Yamamura1999} and its emission is mostly radiatively excited \citep{Danilovich2016}. The temperature has been measured by \citet{Hoai2019} from a comparison of $^{12}$CO(1-0) and $^{12}$CO(2-1) emissions at distances from the star exceeding $\sim$ 2 arcsec; they proposed two different forms for the radial dependence: $T\sim 109\exp(-r/3.1)$ and $T\sim 106/r$ (with $T$ in Kelvin and $r$ in arcsec). While the former gives the best fit at large values of $r$, the latter is better adapted to extrapolation toward small values of $r$, suggesting that the gas temperature in the region probed by SO$_2$ reaches a few hundred Kelvin, in agreement with the estimate of \citet{Yamamura1999} for other oxygen-rich AGB stars. While the details of the formation, emission and destruction of SO$_2$ molecules in the neighbourhood of oxygen-rich AGB stars are not fully understood, a number of features have been clearly established \citep{Yamamura1999, Cherchneff2006, Danilovich2016, Gobrecht2016}: they trace warm ($\sim$600 K) gas layers hosting turbulence and shocks produced by the star pulsations that favour their formation through the liberation of atomic oxygen; they absorb UV photons from the star that cause both their excitation and dissociation with the result that emission is confined to a narrow radial range, typically Gaussian with $\sigma$ values in the range of a few tens of au; line widths (FWHM) are typically at the level of 5 to 10 km s$^{-1}$, as also observed in star forming regions at lower temperatures \citep{Esplugues2013}. It is therefore natural to expect that the line profile in the present observations receives an important contribution from such effects, competing with coherent Doppler broadening caused by rotation and by the expansion of the nascent wind. While the beam size (0.18$\times$0.17 arcsec$^2$) and the broad line width smear the data-cube in a way that prevents a detailed study of the morpho-kinematics, a number of conclusions can be drawn from a closer inspection of the data-cube. Figure \ref{fig6} (left panel) displays the position-velocity (P-V) map of $|V_z|$ as a function of $R$. It reveals no particularly remarkable feature, such as a polar or equatorial enhancement, and how much radial expansion it implies depends on the amount of smearing caused by the broad line width. To obtain a rough evaluation of the line width, we calculate the width (FWHM) of the line profile observed in each pixel from the rms deviation with respect to the mean.
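Both operations used in this section (the central symmetrization above and the per-pixel width estimate) are simple data-cube manipulations; a minimal sketch is given below (written for this text, not the actual reduction script: \texttt{cube} is a hypothetical array indexed as [$V_z$, $y$, $x$] on grids symmetric about the origin, and the rms-to-FWHM conversion assumes a Gaussian profile):
\begin{verbatim}
# Central symmetrization f_S(x,y,Vz) = 0.5*[f(x,y,Vz) + f(-x,-y,-Vz)]
# and a per-pixel line-width estimate: rms of Vz weighted by the flux,
# converted to FWHM assuming a Gaussian profile.  A noise cut is assumed
# to have been applied to the cube beforehand.
import numpy as np

def symmetrize(cube):
    # reversing all three axes maps (x, y, Vz) -> (-x, -y, -Vz)
    return 0.5 * (cube + cube[::-1, ::-1, ::-1])

def line_width_fwhm(cube, vz):
    flux = cube.sum(axis=0)
    flux = np.where(flux > 0, flux, np.nan)        # skip empty pixels
    mean = (cube * vz[:, None, None]).sum(axis=0) / flux
    var = (cube * (vz[:, None, None] - mean)**2).sum(axis=0) / flux
    return 2.0 * np.sqrt(2.0 * np.log(2.0) * var)  # FWHM map
\end{verbatim}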
We find that this width decreases from $\sim$ 7.7 km s$^{-1}$\ for $R<0.15$ arcsec to $\sim$ 5.6 km s$^{-1}$\ for $R>0.15$ arcsec. To separate different contributions to the line width would require a significantly better resolution; we simply retain, as an order of magnitude, that both the radial expansion velocity and other possible sources of line broadening are at the scale of 5 km s$^{-1}$\ and cannot significantly exceed 7 km s$^{-1}$. The map of the mean Doppler velocity, displayed in Figure \ref{fig6} (middle left panel), suggests the presence of rotation, as previously remarked by \citet{Homan2018}. Figure \ref{fig6} (right panels) displays distributions of the Doppler velocity as a function of position angle $\psi$ measured counter-clockwise from the $y$ axis. In spite of the large beam size, a significant comparison can be made between two radial intervals, below and above 0.15 arcsec. Sine wave fits give respective amplitudes of 0.70 and 0.99 km s$^{-1}$\ and phase shifts that are both close to zero. The amplitude obtained for the whole $R$ range is 0.81 km s$^{-1}$. The small phase shifts, meaning that the axis of rotation projects on the sky plane along the $y$ axis, also imply the absence of a significant anisotropy, polar or equatorial, of the expansion velocity \citep{Diep2016}. We retain from this a mean rotation velocity of approximately $0.81/\sin10^\circ \sim 4$ to 5 km s$^{-1}$. This number is only a rough evaluation of the scale of the velocities that are at stake, but it provides evidence for a significant rotation component of the velocity field. \begin{figure*} \centering \includegraphics[height=4.5cm,trim=0.cm .5cm 0.cm 0.cm,clip]{fig6-so2-vz-map.eps} \includegraphics[height=4.5cm,trim=0cm .5cm 1cm 0.cm,clip]{fig6-so2-vz-psi.eps} \caption{SO$_2$ emission (3-$\sigma$ noise cut). Left: P-V map of $|V_z|$ vs $R$. The colour scale is in units of Jy beam$^{-1}$. Middle left: sky map of the mean Doppler velocity (km s$^{-1}$, $R<0.2$ arcsec). Right panels: dependence of the Doppler velocity on position angle $\psi$ for $0.05<R<0.15$ arcsec (middle right) and $0.15<R<0.25$ arcsec (right).} \label{fig6} \end{figure*} Additional support is provided by the observation that the separation in $x$ between the blue-shifted and red-shifted components is significantly non-zero and persists up to the largest values of $|V_z|$. In summary, the observation of SO$_2$ emission, when limited to a projected distance from the star of less than $\sim 0.25$ arcsec, is consistent with a combination of rotation and isotropic radial expansion confined to less than $\sim$ 30 au from the star; rotation velocities are of the order of 4 to 5 km s$^{-1}$\ and radial expansion velocities increase to a few km s$^{-1}$\ with no evidence for departure from isotropy; both contribute to the observed line width of $\sim$ 7.5 km s$^{-1}$\ FWHM, which may also receive a significant contribution from turbulent Doppler broadening. \section{S\lowercase{i}O emission} \label{sec5} \subsection{General remarks} \label{sec5.1} SiO emission probes the radial range between the immediate neighbourhood of the star, where rotation is present, and the outer part of the circumstellar envelope, dominated by expansion. As was shown in Figure \ref{fig5}, a remarkable feature of SiO emission is its short radial range, $\sim 3$ arcsec. This has important consequences for the structure of the data-cube: its projections on the $(x,V_z)$ and $(y,V_z)$ planes, which are P-V maps, display sharp boundaries associated with this short radial range.
They are shown in Figure \ref{fig9} together with the projection of the data-cube on the $(x,y)$ plane (intensity map). Neglecting the inclination $\varphi$ of the star axis with respect to the line of sight, the ratio $R/r$ measures the cosine of the stellar latitude $\alpha$ and the position-velocity maps can be redrawn in the $(R,V_z)$ plane, their boundary displaying the maximal value of $V_z$ as a function of $R$. Defining $\alpha^*=\cos^{-1}(R/r^*)$ with $r^*$ being the radial range of SiO emission, the projection of the data-cube on the $(\sin\alpha^*,V_z)$ plane therefore displays the dependence of $V_z$ on $\alpha$ at $r=r^*$. This is shown in Figure \ref{fig10}, using a 3-$\sigma$ noise cut on $f$ and assuming that $r^*$ is the same in all directions (and taken equal to 3.5 arcsec); also shown is the projection of the data-cube on the $(\sin\alpha^*,V_z/\sin\alpha^*)$ plane, displaying instead the dependence on $\alpha$ of the radial expansion velocity $V=V_z/\sin\alpha^*$ at $r=r^*$. Note that the deviation of $\varphi$ from zero, of the order of $\sim$ 10$^\circ$, simply smears the boundaries of the P-V maps without significantly affecting the result. Under such assumptions, the P-V maps of Figure \ref{fig10} cover latitudes between $\sim 45^\circ [\cos^{-1}(2.5/3.5)]$ and 90$^\circ$\ and provide direct evidence for three important features: 1) two narrow polar jets with velocity reaching $\sim$ 20 km s$^{-1}$, covering an interval of $\sim$ 0.015 units of $\sin\alpha^*$ below 1, meaning an opening angle of $\sim \pm10^\circ$; additional material related to the jet properties is presented in Appendix \ref{appa1}; 2) a slower wind with maximal radial velocity of $\sim$ 12 km s$^{-1}$\ decreasing very slowly with latitude in the red hemisphere, its counterpart in the blue hemisphere having a maximal velocity decreasing faster with latitude down to $\sim$ 5 km s$^{-1}$; 3) an important asymmetry between the blue and red hemispheres, consistent with the observations previously made in Section \ref{sec3}. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,trim=0.cm 0.5cm 0.cm 0.cm,clip]{fig9-sio-pvmap.eps} \caption{Projections of the SiO data-cube on ($x,y$) (left), ($x,V_z$) (middle) and ($y,V_z$) (right).} \label{fig9} \end{figure*} However, the above arguments assume that the radial range $r^*$ of SiO emission is the same in all directions. Whatever is causing the wind to speed up and whatever is causing the emissivity to decrease must be related, if not directly, at least through their common dependence on the physical parameters defining the state of the environment (density, temperature, turbulence and shocks, etc.). The strong red-east/blue-west asymmetry evidenced in the middle panel of Figure \ref{fig9} shows that the wind velocity is low at the edge of the emissivity region in the blue-western quadrant; but this may be because acceleration is less efficient in the blue-western quadrant or because the edge of the emissivity region is closer to the star in that quadrant. This ambiguity is inherent in the nature of the observations and must be kept in mind when seeking an interpretation.
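The construction of Figure~\ref{fig10} amounts to re-binning each line of sight according to $\sin\alpha^*=\sqrt{r^{*2}-R^2}/r^*$; a minimal sketch of this projection is given below (written for this description, with \texttt{cube}, \texttt{x}, \texttt{y} and \texttt{vz} hypothetical arrays as before and $r^*$ fixed at 3.5 arcsec):
\begin{verbatim}
# Project the data-cube onto the (sin(alpha*), Vz) plane, with
# sin(alpha*) = sqrt(r*^2 - R^2)/r* and r* the adopted outer radius of
# SiO emission, assumed here to be the same in all directions.
import numpy as np

def project_sin_alpha(cube, x, y, vz, r_star=3.5, nbins=50):
    R2 = x[None, :]**2 + y[:, None]**2             # projected radius squared
    sin_a = np.sqrt(np.clip(r_star**2 - R2, 0.0, None)) / r_star
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(sin_a.ravel(), edges) - 1, 0, nbins - 1)
    hist = np.zeros((len(vz), nbins))
    for iv in range(len(vz)):
        hist[iv] = np.bincount(idx, weights=cube[iv].ravel(),
                               minlength=nbins)
    return edges, hist     # hist[iv, ib]: flux in each sin(alpha*) bin
\end{verbatim}
The middle panel of Figure~\ref{fig10} is obtained in the same way, with the $V_z$ value of each data-cube element divided by the local $\sin\alpha^*$ before binning.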
Yet, the P-V maps of Figures \ref{fig9} and \ref{fig10} show that the terminal velocity of $\sim$ 12 km s$^{-1}$\ observed in CO emission at large distances from the star \citep{Hoai2019} has already been reached in the red hemisphere within the range explored by SiO emission; they also provide evidence for the wind being accelerated at distances from the star ranging between $\sim$ 50 au and $\sim$ 300 au. To say more at this stage, without a preliminary understanding of the mechanism of acceleration and of grain formation, is not possible: we postpone further comments to Section \ref{sec7}. Several scenarios have been proposed to launch the nascent wind: direct acceleration from star pulsations \citep{McDonald2016}, magnetic fields \citep{Vlemmings2013}, stellar rotation \citep{GarciaSegura1999, GarciaSegura2014}, photon collisions on transparent silicate grains \citep{Woitke2006, Hofner2008} or absorption of the star UV light by standard dust grains \citep[for a recent review see][]{Hofner2018}; none of these, on its own, can generate an asymmetry of the type observed here. As was done for SO$_2$ emission, we obtain a rough evaluation of the line width, more precisely an upper limit to it, by calculating the width of the line profile observed in each pixel from the rms deviation with respect to the mean. The result is displayed in the right panel of Figure \ref{fig10} and gives evidence for a clear decrease of the line width (FWHM) as a function of $R$: respectively 13.6, 10.2, 8.1, 7.0 and 5.8 km s$^{-1}$\ for 0.5 arcsec wide intervals covering from 0 to 2.5 arcsec. While giving evidence for a major contribution of coherent Doppler broadening, this result leaves room for a significant contribution of turbulence. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,trim=0.cm 0.5cm 0.cm 0.cm,clip]{fig10.eps} \caption{SiO emission. Left: Projection of the SiO data-cube on the ($|\sin\alpha^*|,V_z$) plane with $|\sin\alpha^*|=\sqrt{r^{*2}-R^2}/r^*$ and $r^*=3.5$ arcsec. The boundary displays the dependence of $V_z$ on $\alpha$ at $r=r^*$. Middle: same as left with $V_z$ replaced by $V=V_z/|\sin\alpha^*|$. The colour scales are in units of Jy beam$^{-1}$. Right: re-centred line profiles normalized to the same peak value for different intervals of $R$ (see insert).} \label{fig10} \end{figure*} \subsection{Detailed description of the asymmetry of the data-cube} \label{sec5.2} The evidence for narrow polar jets brings with it two major questions: what is the jet-launching mechanism? and what role do the jets play in the generation of the wind? The morpho-kinematics of SiO emission has been interpreted by \citet{Homan2018} in terms of the presence of a companion gravitationally attracting gas around it. These authors made a sign mistake when comparing Doppler velocities between SiO and CO emissions, invalidating their assertion that a lane of gas is bridging the gap between the star and its hypothetical companion (right panel of their Figure 13). Yet, their observation of a nearly point-like void in the channel maps of SiO emission, at $\sim$ 0.5 arcsec west of the star and covering a broad range of Doppler velocities in the red hemisphere (meaning in fact the blue hemisphere because of the sign mistake), remains valid. This is illustrated in Figure \ref{fig11}, which displays channel maps in the relevant range of $V_z$ \citep[contrary to the rest of the article, we use here sky coordinates with north pointing up in order to ease the comparison with][]{Homan2018}.
In spite of the slightly larger beam size \citep[0.29$\times$0.25 arcsec$^2$ instead of 0.20$\times$0.18 arcsec$^2$ for][]{Homan2018}, the void is clearly visible and is seen to leave room for a significant depletion when moving further out in the blue-shifted direction. In order to better understand the nature of the above feature, we display in Figure \ref{fig12} (this time turning back to rotated coordinates with north pointing 20$^\circ$\ east of the $y$ axis) channel maps associated with two much broader intervals of $V_z$: respectively $-8<V_z<-2$ km s$^{-1}$\ and $-2<V_z<8$ km s$^{-1}$. Moreover, we normalize the maps by dividing the intensity measured in the interval of $V_z$ by its value over the whole $V_z$ range. This reveals the sharp transition between data-cube elements located in the eastern and western hemispheres. Scanning toward the boundary reveals a spectacular evolution of the Doppler velocity distribution, the blue-shifted hemisphere being progressively emptied when moving westward. \begin{figure*} \centering \includegraphics[width=0.9\textwidth,trim=0.cm 1.cm 0.cm 0.cm,clip]{fig11.eps} \caption{SiO emission. Channel maps at $V_z=-1.2$, $-$2, $-$2.6, $-$3.2 and $-$4 km s$^{-1}$\ as indicated in the inserts. The arrow in the left panel points to the feature associated by \citet{Homan2018} with a possible companion. To ease comparison with \citet{Homan2018} the maps are in sky coordinates, with north pointing up. The colour scale is in units of Jy beam$^{-1}$.} \label{fig11} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth,trim=0.cm 0.5cm 0.cm 0.cm,clip]{fig12.eps} \caption{SiO emission. Channel maps in $-8<V_z<-2$ km s$^{-1}$\ (left) and $-2<V_z<8$ km s$^{-1}$\ (middle) intervals normalized to the whole $V_z$ range. Black lines delineate the sharp transition to the depleted region. Right: individual pixel Doppler velocity spectra obtained by scanning in steps of 0.1 arcsec along the red line depicted in the left panels from ($x,y$)=(0,0) arcsec (black) to ($x,y$)=(0.6,0) arcsec (blue) in sky coordinates (not rotated by 20$^\circ$). } \label{fig12} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth,trim=0.cm .5cm 0.cm 2.cm,clip]{fig13a.eps} \includegraphics[width=0.49\textwidth,trim=0.cm 0.5cm 0.cm 2.cm,clip]{fig13b.eps} \caption{SiO emission. P-V maps in various intervals of $x$ indicated in the inserts. The left panels are for 0.3 arcsec wide $x$ intervals covering from 0.3 to $-$1.5 arcsec. The right panels are for 0.1 arcsec wide intervals (one pixel) centred at $-$0.4 to $-$0.9 arcsec. The colour scales are in units of Jy beam$^{-1}$.} \label{fig13} \end{figure*} An overall picture of the asymmetry evidenced in Figures \ref{fig11} and \ref{fig12} is presented in Figure \ref{fig13}, which scans across the data-cube in narrow slices of $x$ in the region where the asymmetry has been revealed. When scanning westward, approximate symmetry in the $(V_z,y)$ plane is observed down to $x \sim -0.3$ arcsec, where a depression appears at $(V_z, y)\sim(-4$ km s$^{-1}$, 0.3 arcsec). In an $x$ interval of only 0.4 arcsec, this depression expands very rapidly to the whole south-western-blue octant: by $x=-0.7$ arcsec the region $x<-0.7$ arcsec, $y>0$, $V_z<-2$ km s$^{-1}$\ has been completely depleted. It then slowly expands into the south-eastern-blue octant. This depression dominates the asymmetry and is outstanding both because of its large amplitude and because of the sharpness of its boundary.
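The normalization used in Figure~\ref{fig12} is a simple ratio of partial to total velocity-integrated intensities; schematically (a sketch written for this text, with \texttt{cube} and \texttt{vz} as in the previous sketches):
\begin{verbatim}
# Normalized channel maps: intensity integrated over a Vz interval
# divided by the intensity integrated over the whole Vz range.
import numpy as np

def normalized_map(cube, vz, v_lo, v_hi):
    sel = (vz >= v_lo) & (vz < v_hi)
    partial = cube[sel].sum(axis=0)
    total = cube.sum(axis=0)
    return np.where(total > 0, partial / total, np.nan)

# the two broad intervals of Figure 12:
# normalized_map(cube, vz, -8.0, -2.0) and normalized_map(cube, vz, -2.0, 8.0)
\end{verbatim}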
The similarity between the morphology of this depression and that observed near QX Pup by \citet{SanchezContreras2018} suggests that we are looking at a similar phenomenon. Their interpretation is that the bow-shock of a bipolar SiO outflow launched at some 10 to 20 km s$^{-1}$\ (probably by a very close-by invisible companion) is carving a cavity into the ambient gas. A similar observation is reported by \citet{Sahai2006}, who study the molecular flow of the pre-planetary nebula IRAS 22036+5306, although this time with a much faster wind. Additional material related to the asymmetry of the data-cube is presented in Appendix \ref{appa2}. \subsection{Rotation}\label{sec5.3} In Section \ref{sec4}, evidence was given for rotation at small distances from the star; as expansion dominates at large distances, it is natural to expect that the wind velocity evolves to nearly pure expansion in the range of distances explored by SiO emission. Both rotation and expansion produce centrally symmetric data-cubes, the former producing an east-west asymmetry of the mean Doppler velocity, the latter a north-south asymmetry. A major difference between the two is that rotation produces opposite redshifts on opposite sides of the axis; in the present case the whole eastern hemisphere is blue-shifted and the whole western hemisphere is red-shifted, as seen in the case of SO$_2$ (Figure \ref{fig6}). In the case of expansion, equator and poles produce Doppler shifts of opposite signs in the same hemisphere, north or south: the map of the mean Doppler velocity, $\langle V_z\rangle$, depends on how prolate or oblate the latitudinal velocity distribution is. In particular, a spherical distribution trivially causes $\langle V_z\rangle$ to vanish everywhere on the sky map. This feature was used in \citet{Hoai2019} to give evidence for the dominance of expansion when comparing the broad and narrow components of CO emission at large distances from the star. A consequence is that rotation is more efficient than expansion at generating an asymmetry of the $\langle V_z\rangle$ map and, in the present case, the strong blue-west depletion prevents a reliable distinction between expansion and rotation by introducing a bias that systematically favours an interpretation in terms of rotation. This is illustrated in Section \ref{appa3} of the appendix by using a centrally symmetrized data-cube. In the present section we simply limit the analysis to the red-shifted hemisphere, assuming that it is not significantly affected by the depletion. \begin{table*} \centering \caption{Fits of the form $\langle V_z \rangle=V_{z0}+V_{z1}\cos(\psi+\psi_0)$ to the dependences on position angle displayed in Figure \ref{fig14}.} \label{tab6} \begin{tabular}{ccccc} \hline $V_z$ range (km s$^{-1}$)&$R$ range (arcsec)& $V_{z0}$ (km s$^{-1}$) & $V_{z1}$ (km s$^{-1}$) & $\psi_0$ (deg)\\ \hline \multirow{2}{*}{2 to 5} & 0.5 to 1.5 & 3.40&$-$0.03&$-$3\\ \smallskip & 1.5 to 2.5 &3.31&$-$0.06&12\\ \multirow{2}{*}{5 to 10}&0.5 to 1.5&6.65&$-$0.31&25\\ \smallskip &1.5 to 2.5 &6.08&$-$0.35&17\\ \multirow{2}{*}{0 to 20}&0.5 to 1.5&3.35&$-$0.36&11\\ \smallskip &1.5 to 2.5&2.80&$-$0.31&$-$10\\ $>0^*$&$<8^*$&4.7$^*$&$-$0.34$^*$&$-$4$^*$\\ \hline \multicolumn{5}{c}{$^*$ Red-shifted CO broad component, \citet{Hoai2019}} \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=0.6\textwidth,trim=0.cm 0.5cm 0.cm 0.cm,clip]{fig14.eps} \caption{The mean Doppler velocity $\langle V_z \rangle$ (km s$^{-1}$) in the red hemisphere of SiO emission.
Sky plane maps (upper panels) and dependence on position angle $\psi$ for $0.5<R<1.5$ arcsec (central panels) and $1.5<R<2.5$ arcsec (lower panels) are shown for different $V_z$ intervals: $2<V_z<5$ km s$^{-1}$\ (left), $5<V_z<10$ km s$^{-1}$\ (middle) and the whole red hemisphere, $V_z>0$ (right). The curves are the results of sine wave fits listed in Table \ref{tab6}.} \label{fig14} \end{figure*} Figure \ref{fig14} displays the map of $\langle V_z\rangle$ for the whole red-shifted hemisphere, $V_z>0$, as well as for two different $V_z$ intervals, $2<V_z<5$ km s$^{-1}$\ and $5<V_z<10$ km s$^{-1}$\ and two different $R$ intervals, $0.5<R<1.5$ arcsec and $1.5<R<2.5$ arcsec. Also shown, in each case, is the dependence of $\langle V_z\rangle$ on the position angle $\psi$. The results of sine wave fits of the form $\langle V_z\rangle=V_{z0}+V_{z1}\cos(\psi+\psi_0)$ are listed in Table \ref{tab6}. Note that for the red-shifted broad component in the whole $R$ range \citep{Hoai2019}, $\langle V_z\rangle \sim 4.7-0.34\cos(\psi-4^\circ)$ km s$^{-1}$. The values obtained here are very similar and are clearly dominated by expansion, leaving essentially no room for rotation. One might have expected significant rotation associated with the accretion disc responsible for collimating the jets, but such is not the case. \subsection{A closer look at the polar jets}\label{sec7} The presence of narrow polar jets in SiO emission and their absence from SO$_2$ emission argue against an interpretation in terms of a spherical shell ejected by star pulsation at short distances from the star, as described in \citet{Winters2003} and \citet{McDonald2016}. The shell would be expected to be ejected at short radial distances from the star, at a scale corresponding
to the escape velocity, meaning 4 to 6 au for a star mass of 1 to 2 solar masses. The question would then arise of the distance over which the shell would slow down to a velocity of 12 km s$^{-1}$\ or less. Gravity alone would imply a decrease inversely proportional to the square root of the distance, namely a radial velocity reaching 12 km s$^{-1}$\ at some 15 au. Indeed, as shown in the channel maps displayed in Figure \ref{figa7}, this distance could not significantly exceed some 20 au or so, excluding interpretations in terms of accelerated expansion of the spherical shell up to distances at the 100 au scale. This is a region where SO$_2$ observations have given clear evidence for rotation, possibly combined with moderate expansion, both at the scale of a few km s$^{-1}$, typically 5. Such kinematics are at strong variance with those of a spherical shell expanding radially at velocities exceeding 12 km s$^{-1}$\ and reaching some 20 km s$^{-1}$. Not only is the spherical shell interpretation irreconcilable with the SO$_2$ observations, but it is unrelated to the observation of a dominant emission displaying clear bipolarity with a factor of 6 between polar and equatorial winds, for which a completely independent mechanism would have to be invoked. \begin{table*} \centering \caption{Location of the jet projections on the sky plane for SiO emission.} \label{tab7} \begin{tabular}{cccc} \hline &&Mean (arcsec)&Rms (arcsec)\\ \hline \multirow{2}{*}{Blue}&$x$&+0.044&0.146\\ \smallskip &$y$&+0.076&0.150\\ \multirow{2}{*}{Red}&$x$&+0.022&0.120\\ &$y$&$-$0.028&0.110\\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth,trim=0.cm 1.cm 0.cm 0.cm,clip]{fig19-sio-jet-dist.eps} \caption{SiO emission in the jets. Distributions of $f$ (left), $x$ (middle) and $y$ (right) are shown for $R<2.5$ arcsec and $V_z<-12$ km s$^{-1}$\ (blue) or $V_z>12$ km s$^{-1}$\ (red). In the rightmost panels a cut $f>0.02$ Jy beam$^{-1}$ is applied, as shown in the left panel.} \label{fig19} \end{figure*} Figure \ref{fig19} (left) displays the noise distribution in the region of the jets, $R<2.5$ arcsec and $|V_z|>12$ km s$^{-1}$, for SiO emission. A clean signal is seen above $f\sim0.02$ Jy beam$^{-1}$. Retaining flux densities exceeding this value, we show in the middle and right panels of Figure \ref{fig19} the $x$ and $y$ distributions of the blue-shifted and red-shifted jets separately. They are very clean, with mean and rms values listed in Table \ref{tab7}. Taking as uncertainty on these measurements the rms deviation of the beam profile with respect to its mean gives differences between the red and blue values of respectively $\Delta x=-0.022\pm0.19$ arcsec and $\Delta y=-0.104\pm0.19$ arcsec; the larger value of $\Delta y$ is probably the result of the inclination of the star axis with respect to the line of sight: the average value of $|z|$ is $0.5\Delta y/\tan\varphi$, namely 0.4$\pm$0.4 arcsec for an inclination of 10$^\circ$. In the case of $x$, the distribution of $\Delta x$ receives no contribution from the inclination of the jet axes, and the rms deviation with respect to the mean provides an estimate of the opening angle of the jets: Rms($\Delta x$)/$|z|$, namely $\pm$9$^\circ$\ for $|z| = 1$ arcsec, $\pm$18$^\circ$\ for $|z| = 0.5$ arcsec. Moreover, the numbers in Table \ref{tab7} show that the jets are launched within $\pm$25 au from the star, excluding a possible relation to a distant companion.
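The order-of-magnitude estimates used in this subsection (the escape-velocity scale for the hypothetical shell, the ballistic slow-down radius and the jet opening angle) can be reproduced in a few lines (a back-of-the-envelope sketch written for this discussion; the 5 au radius and 1.5 solar-mass value are illustrative mid-range choices):
\begin{verbatim}
# Escape velocity v_esc = sqrt(2GM/r), the radius at which a shell whose
# velocity follows v ~ r^(-1/2) (gravity alone) has slowed from ~20 to
# 12 km/s, and the jet opening angle estimated from the rms of the jet
# positions (~0.15 arcsec) divided by |z|.
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11

def v_esc_kms(r_au, mass_msun):
    return np.sqrt(2 * G * mass_msun * MSUN / (r_au * AU)) / 1e3

print(v_esc_kms(5.0, 1.5))            # ~23 km/s, of order the jet velocity
print(5.0 * (20.0 / 12.0)**2)         # ~14 au, where v ~ r^(-1/2) reaches 12 km/s
print(np.degrees(np.arctan(0.15 / 1.0)),
      np.degrees(np.arctan(0.15 / 0.5)))   # ~9 and ~17 degrees
\end{verbatim}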
\section{CO emission}\label{sec6} \subsection{$^{12}$CO(2-1) emission} \label{sec6.1} A detailed study of CO emission, with emphasis on distances from the star in excess of $\sim$2 arcsec, was presented in \citet{Hoai2019}: it does not need to be repeated here. We limit instead the present study to a comparison with SiO observations, with the aim of contributing additional information to the description of the morpho-kinematics of the circumstellar envelope at short distances from the star ($r<2.5$ arcsec). Projections of the data-cube are displayed in Figure \ref{fig15} as was done for SiO emission in Figure \ref{fig9}. Comparing the two figures is very instructive. The presence of narrow polar jets at the limit of sensitivity is only revealed when applying a strong noise cut (4-$\sigma$ in Figure \ref{fig15}); additional, clearer evidence is presented in Appendix \ref{appa1}. The P-V maps of Figure \ref{fig15} are red-blue asymmetric, as were those of SiO emission; but instead of displaying significantly different end points of the Doppler velocity spectra, they simply show that more gas has reached terminal velocity in the red hemisphere than in the blue. This was already clearly apparent in the global Doppler velocity spectrum displayed in Figure \ref{fig3}. Indeed, we know from \citet{Hoai2019} (their Figure 26) that absorption is twice as large on the blue side as on the red side for $|V_z|>8$ km s$^{-1}$, and an independent confirmation is obtained from the study of $^{13}$CO emission presented in Section \ref{sec6.2} below. While the CO data-cube displays considerably less asymmetry than the SiO data-cube, a striking similarity between the two is illustrated in Figure \ref{fig16}, which compares maps of the depletion component $f_{deplet}$ in both absolute and relative terms. Here $f_{deplet}$, defined in Appendix \ref{appa3}, measures the missing emission associated with the depletion: the observed data-cube is written as $f=f_{uncut}-f_{deplet}$, where $f_{uncut}$ measures the emission of the intact, centrally symmetric data-cube, from which the observed data-cube $f$ is amputated by the contribution $f_{deplet}$ of the depletion. This figure provides remarkable evidence that the depletion is present in both the SiO and CO data, in spite of the much smaller asymmetry that it produces in the latter: close inspection of the data-cube reveals its presence as a weak elliptical depression at $x=-0.4$, $-$0.5 and $-$0.6 arcsec. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,trim=0.cm 0.5cm 0.cm 0.cm,clip]{fig15-sio-pvmap.eps} \caption{Projections of the CO data-cube on ($x,y$) (left), ($x,V_z$) (middle) and ($y,V_z$) (right).} \label{fig15} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.75\textwidth,trim=0.cm .5cm 0.cm 0.cm,clip]{fig16.eps} \caption{Maps of the depletion of the data-cube are compared for CO (left panels) and SiO (right panels) emissions. In each pair of panels, the leftmost panel shows $f_{deplet}$ integrated over the full velocity range (Jy beam$^{-1}$km s$^{-1}$) and the rightmost panel shows the ratio $f_{deplet}/f_{uncut}$.} \label{fig16} \end{figure*} Scanning through the data-cube in small steps of $x$, as was done for SiO in Figure \ref{fig13}, shows that both SiO and CO maximal velocities are consistent with the same terminal velocity as obtained in \citet{Hoai2019} when assuming that SiO molecules explore a radial range reaching up to some 3 arcsec.
It also shows that the jets are weak in CO emission, polar emission being enhanced at moderate velocities, as if they were slowing down at larger distances or as if their aperture were broadening. Moreover, it shows that what was called the narrow component by \citet{Hoai2019}, equatorial emission at $V_z\sim0$ covering a broad range of $x$ and $y$, is absent from SiO emission, as if it were building up progressively and had no time to do so in the short radial range explored by SiO emission. Indeed, the gravity of the star, with an escape velocity of $\sim$4 km s$^{-1}$\ at $r=1$ arcsec and decreasing as $1/\sqrt{r}$, competes significantly against acceleration in the range explored by SiO, causing expansion to slow down as $r$ increases. \subsection{$^{13}$CO(2-1) emission}\label{sec6.2} Figure \ref{fig18} compares $^{12}$CO(2-1) and $^{13}$CO(2-1) emissions at projected distances $R$ not exceeding 2.5 arcsec, the former being typically 10 times brighter than the latter. As was remarked in relation to Figure 20 of \citet{Hoai2019}, the $R$ interval between 1 and 3 arcsec corresponds to the maximal enhancement of the high $|V_z|$ horns at the extremities of the Doppler velocity spectra, a result of the flux of matter being maximal at intermediate stellar latitudes. The brightness ratio, averaged over position angle, is approximately $R$-independent, but larger for Doppler velocities where absorption is more important, namely the narrow central component and the high $|V_z|$ horns, revealing the difference of optical thickness, $^{13}$CO emission being essentially optically thin. The effect is particularly important on the blue-shifted horn of the spectrum, causing it to disappear in the $^{12}$CO data. Unfortunately, the relatively low value of the signal-to-noise ratio of $^{13}$CO observations prevents making a more refined analysis of the absorption. Accounting for temperature, for an average absorption of 20$\pm$20\% of $^{12}$CO emission and for the values of the Einstein coefficients listed in Table \ref{tab2}, we measure a relative abundance ratio $^{12}$CO/$^{13}$CO of 9$\pm$2. This result corroborates the measurement by \citet{Cami2000} of CO$_2$ emission from distances corresponding to the region explored by SO$_2$ emission in the present work: they find a $^{12}$CO/$^{13}$CO ratio of the order of 10, albeit with a large uncertainty. \begin{figure*} \centering \includegraphics[width=0.75\textwidth,trim=0.cm .5cm 0.cm 0.cm,clip]{fig18-co12n13.eps} \caption{Comparing $^{13}$CO and $^{12}$CO emissions ($R<2.5$ arcsec, corrected for beam sizes). Left: ratio $^{13}$CO/$^{12}$CO of the P-V maps. Middle: ratio of the $R$ distributions of the brightness integrated over Doppler velocity and position angle. Right: Doppler velocity spectra of $^{12}$CO emission (black) and $^{13}$CO emission (red, scaled up by a factor 10).} \label{fig18} \end{figure*} \section{Discussion and conclusion} \label{sec8} \subsection{What has been learned: a summary}\label{sec8.1} The results obtained in the present study at small distances from the star add to those obtained earlier at large distances from the star \citep{Hoai2019} and contribute a significant number of new elements to our knowledge of the circumstellar envelope of EP Aqr. They draw complementary, but also significantly different, pictures of its morphology and kinematics; while they help with a global understanding of the mechanisms at stake, they also raise new questions on the transition between the two regimes, such as: how do the jets disappear?
how does the equatorial outflow (the narrow component) build up? how does the wind reach terminal velocity? It is useful, at this stage, to summarize what has been learnt. 1. Close to the star photosphere, at distances at the scale of 10 to 30 au, the kinematics is dominated by rotation, with a velocity of the order of $\sim$4 to 5 km s$^{-1}$, associated with a nascent radial expansion reaching a few km s$^{-1}$\ and displaying no significant anisotropy. The beam size, the line width and the sensitivity of the SO$_2$ observations prevent revealing more detailed features with sufficient confidence. 2. The line width of SO$_2$ emission is too large ($\sim$7.5 km s$^{-1}$\ FWHM) to be blamed exclusively on coherent Doppler broadening. It probably receives an additional contribution from the turbulent regime that is expected to govern this region, which hosts shocks produced by the pulsation of the star and/or precursors of the nascent jets. 3. Two polar jets with a terminal velocity of some 20 km s$^{-1}$\ are launched from less than 25 au projected distance from the mass-losing star. Their opening angle is at the $\pm$10$^\circ$\ to 15$^\circ$\ scale and the measured splitting between their $y$ coordinates confirms the low value of the inclination angle of the star axis with respect to the line of sight. They are clearly seen in both SiO and CO emissions, however close to noise level in the latter case, but are absent from SO$_2$ emission. They are much weaker in CO than in SiO emission, suggesting that they slow down and/or diverge at large distances from the star. 4. Observations are consistent with a single axis being the axis of rotation close to the star (SO$_2$), the jet axis (SiO and CO) and the axi-symmetry axis of the circumstellar envelope at distances in excess of $\sim$250 au (CO). This makes it unnecessary to invoke different symmetry-breaking geometries at different distances from the star. To a precision of $\sim$10$^\circ$\ the common axis projects on the sky plane $\sim$20$^\circ$\ west of north and is inclined by $\sim$10$^\circ$\ with respect to the line of sight. 5. The absence of detection of jet emission in the SO$_2$ data suggests that the jets acquire velocity over distances from the star between $\sim$20 au and $\sim$100 au, an interpretation that is consistent with the SiO and CO observations and agrees with observations of QX Pup \citep{SanchezContreras2018} that present some similarity with what is observed here. 6. The radial extent of SO$_2$ emission is confined to the range where the molecules are both excited and dissociated by the stellar UV light, below $\sim$30 au. SiO emission is observed to be confined to distances not exceeding $\sim$300 au, probably because of a combination of UV dissociation and the rapid aggregation of the gas onto dust grains. The boundary of the region that it populates is very sharp, again in agreement with observations of QX Pup \citep{SanchezContreras2018}. CO emission is slowly declining at larger distances, probably by dissociation from interstellar UV radiation. 7. A radial wind is building up at distances between $\sim$50 and $\sim$300 au from the star. A sensible description of the terminal velocity is given by the form $V\sim2+9\sin^2\alpha$ km s$^{-1}$\ with $\alpha$ being the stellar latitude, slightly larger in the red-shifted than in the blue-shifted hemisphere. 8.
The temperature is well described at distances in excess of $\sim$250 au by an exponential radial dependence of the form $T\sim 109\exp(-r/3.1)$ K (with $r$ in arcsec) and is maximal at intermediate latitudes. At smaller distances a form $T\sim 106/r$ K is better adapted to extrapolation, implying temperatures of $\sim$500 to 600 K in the region explored by SO$_2$ emission. 9. A very strong blue-west/red-east asymmetry dominates the SiO data-cube. Evidence for its presence in the CO data-cube has been obtained, inducing a qualitatively similar, but quantitatively much smaller asymmetry. In the SiO case, where terminal velocity has not yet been reached in the explored part of the blue-western quadrant, it causes the end points of the Doppler velocity spectra to differ in the blue-shifted and red-shifted hemispheres. The asymmetry is best described as a depletion of blue-western emission having sharp boundaries, starting near the star and expanding rapidly to the whole blue-western quadrant. Its presence in both SiO and CO data suggests that it is associated with a low gas density, but its stronger effect on the SiO data-cube may be the result of a lower SiO/CO abundance ratio in the depletion or of an inefficient acceleration of the wind. 10. Absorption was evaluated by \citet{Hoai2019} at a typical level of 20$\pm$20\% from a comparison between $^{12}$CO(2-1) and $^{12}$CO(1-0) observations at large distances from the star. $^{13}$CO(2-1) emission confirms this result, while emphasising the importance of absorption for velocities smaller than $\sim -8$ km s$^{-1}$, which causes the blue-shifted horn of the velocity spectrum to disappear when observed in $^{12}$CO emission. While having little influence on the observation of the blue-western depletion at Doppler velocities larger than $\sim -8$ km s$^{-1}$, it weakens the significance of the blue-west/red-east asymmetry at smaller velocities. 11. The transition between rotation dominance close to the star and expansion dominance farther out occurs at distances from the star smaller than $\sim$50 au, such that no significant rotation can be detected in the SiO data. 12. The CO line width is measured as 1.2 km s$^{-1}$\ (FWHM) in the equatorial region, leaving little room for contributions such as turbulence and coherent Doppler broadening (flaring). On the contrary, both SO$_2$ and SiO line widths seem to receive a significant contribution from turbulence, at the scale of a few km s$^{-1}$\ FWHM. 13. CO emission at large distances from the star reveals irregularities of the equatorial morpho-kinematics in the form of a spiral arc in brightness \citep{Homan2018} and of apparently uncorrelated concentric circles in expansion velocity. Both show a radial modulation with a period at the scale of 3 to 4 arcsec, meaning a time scale of 800 to 1200 years for a wind velocity of 2 km s$^{-1}$. 14. $^{13}$CO emission displays the same morpho-kinematics as $^{12}$CO emission but is optically thinner. The abundance ratio $^{12}$CO/$^{13}$CO is measured as 9$\pm$2. 15. While all the above statements are the result of careful scrutiny of the properties of the relevant data-cubes, one must keep in mind the ambiguity and arbitrariness inherent in the under-determination of radio observations. The validity of many of these statements rests in part on a subjective appreciation of what we consider the most plausible physical interpretation of the observations being analysed.
\subsection{Constraints placed on possible interpretations}\label{sec8.2} In the absence of a convincing description of the morphology and kinematics of the circumstellar envelope of EP Aqr in terms of the physics and chemistry governing its dynamics, it is useful to review the constraints that presently available observations and analyses can place on plausible interpretations and speculations. The observation of narrow polar jets in the environment of a star in an early stage of evolution on the AGB was unexpected. While jets are common in astrophysics, their occurrence in stellar physics is normally restricted to young stellar objects or to post-AGB stars and pre-planetary or planetary nebulae. They are known to share universal features \citep{Pudritz2012}, in particular to be associated with accretion discs that surround the source and contribute to their collimation. In most cases, they refer to collimated gas flows having velocities an order of magnitude larger than that of the present jets (a few hundred km s$^{-1}$\ rather than 20 km s$^{-1}$). Yet, the terminology remains justified in the present case, where the jet velocity is twice the terminal wind velocity and an order of magnitude larger than that of the slow equatorial wind, and where evidence has been obtained for collimation. But the difference between the present jets and fast jets observed in post-AGB stars must be kept in mind, together with what it implies in terms of the mechanism governing the dynamics. The observation of a major asymmetry of the SiO data-cube, best described as a depletion of the blue-western quadrant, was also unexpected for a young AGB star. Breaking the spherical symmetry that governs the morpho-kinematics of red giants is normally discussed in the literature as occurring in the post-AGB phase with the launching of a super-wind. The main source of further symmetry breaking, this time beyond axi-symmetry rather than simple spherical symmetry, is considered to be binarity, which is widely recognized to play an important role in the evolution of mass-losing stars. An abundant literature develops the above statements; here we can only quote a few among the most recent, from which one can find one's way to a broader list of useful references: \citet{Soker2016}, \citet{Soker2017}, \citet{Sahai2018}, \citet{Lagadec2018}, \citet{Bollen2017}, \citet{Lykou2015}, \citet{PerezSanchez2013}. To the extent that the jets are launched by stars, which need not be the case, binarity may suggest two different scenarios: one where the jets are launched by the mass-losing star \citep{Mastrodemos1999} and where the companion simply focusses the wind blown by the mass-losing star toward the orbital plane, where it produces spiral patterns associated with its wake; the other where the jets are launched by the companion \citep{Soker2000} and interact with the slow wind blown by the mass-losing star, producing shocks and depletions. If the spiral observed in the equatorial plane \citep{Homan2018, Hoai2019} is to be interpreted as evidence for binarity, the inter-arm distance of some 350 to 400 au means a period of some 800 to 900 years at an expansion velocity of 2 km s$^{-1}$\ (the equatorial expansion velocity). This in turn implies, for a total mass of two solar masses, a separation of some 100 to 120 au (note that \citet{Homan2018} use an equatorial expansion velocity of 12 km s$^{-1}$\ in their reasoning, resulting in a much shorter separation).
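These numbers follow from Kepler's third law; a short sketch is given below (written for this discussion; the 375 au inter-arm distance is the mid-range of the values quoted above and the total mass of two solar masses is the same assumption as in the text):
\begin{verbatim}
# Orbital separation implied by the equatorial spiral: the inter-arm
# distance divided by the equatorial expansion velocity gives the period,
# and Kepler's third law (a^3 = M P^2 in au, yr, Msun units) the separation.
KMS_PER_AU_YR = 4.74                    # 1 au/yr expressed in km/s

def separation_au(interarm_au, v_exp_kms, mass_total_msun=2.0):
    period_yr = interarm_au / (v_exp_kms / KMS_PER_AU_YR)
    return (mass_total_msun * period_yr**2) ** (1.0 / 3.0), period_yr

print(separation_au(375.0, 2.0))    # ~117 au separation, ~890 yr period
print(separation_au(375.0, 12.0))   # ~35 au with the 12 km/s assumption
\end{verbatim}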
In such a binary scenario, our observation that the jets are launched less than $\sim$25 au away from the central star seems to exclude their being launched by the companion. Similarly, it seems to exclude that the tip of the depletion observed in the blue-western quadrant, at a distance not exceeding 40 au from the mass-losing star, reveals the location of the companion. Therefore, if the spiral is taken as evidence for binarity, the associated companion is probably unrelated to both the observed jets and the observed blue-western depletion. These therefore need to be interpreted independently of the equatorial spiral, possibly, but not necessarily, as related to a closer companion. The large ratio between polar and equatorial terminal wind velocities, a factor of $\sim$6, suggests that the jets, or more precisely whatever mechanism is causing their acceleration, play a significant role in the acceleration of the slower wind. However, the impression that, qualitatively, the jets might accelerate the slow wind by entraining gas in their neighbourhood is not tenable quantitatively: they carry much too little momentum to support such an interpretation (we thank A. Zijlstra for clarification on this point). In this context, the case of the young Herbig Ae star HD 163296 \citep{Isella2016, Diep2019} is instructive: \citet{Klaassen2013} have shown that a collimated wind with a velocity just below 20 km s$^{-1}$, observed in the proximity of the knots of a very high velocity jet (at the scale of $\sim$250 km s$^{-1}$), reveals an outflow from the accretion disc of the young star that has simply been heated up by the fast jet rather than being entrained by it, as had been assumed earlier.\\ The jets are launched close to the star and quickly reach a velocity of 20 km s$^{-1}$, while the rest of the wind builds up more slowly and reaches only $\sim$11 km s$^{-1}$. The jets have sharp boundaries and a specific identity, separate from that of the radial wind blowing at intermediate latitudes, which therefore cannot be simply described as the wings of the jets. The jets fade away at larger distances, suggesting that they interact with the ambient gas. The mechanism that is launching the jets must differ from the mechanism that accelerates the wind at sub-polar latitudes. The latter is barely able to compete against gravity near the equator. While the present work has given evidence for narrow polar jets to be responsible for the higher velocity range of the observed SiO emission compared with that of CO emission, such a difference is a general feature of the emission of dusty and low velocity outflows, as was noted earlier by \citet{Winters2003} and more recently by \citet{DeVicente2016}. It is often interpreted as resulting from star pulsations and is thought to be confined to the close neighbourhood of the star \citep{Winters2003,McDonald2016}. Recently, \citet{Decin2018} have observed the presence of wind velocities much larger than the terminal velocity in the inner regions of the circumstellar envelopes of the oxygen-rich AGB stars IK Tau and R Dor and have discussed possible interpretations in terms of features other than narrow jets. The question then arises of how general, or how exceptional, the presence of narrow polar jets in the nascent wind of AGB stars is. The difficulty of detecting such jets in geometries less favourable than that of EP Aqr makes it hard to answer this question. The presence of an important blue-western depletion is surprising.
As explained by \citet{Soker2000} and illustrated by \citet{GarciaArredondo2004}, such a depletion can naturally occur as a result of the interaction of the jets with the slow wind blown by the mass-losing star. Similarities between the present observations of the nearby environment of EP Aqr and recent observations of that of QX Pup \citep{SanchezContreras2018} seem to favour this interpretation. However, there is no simple reason for it to happen on one jet and not on the other. We cannot think of any hint in the present or earlier data that might credibly suggest a cause of this red/blue asymmetry. In the case of QX Pup, a similar asymmetry is observed, this time north/south rather than blue/red, the star axis making an angle of only $\sim$35$^\circ$\ with the plane of the sky; the authors interpret it as caused by an early episode of violent mass loss, the star having ejected a large mass of gas and dust along its axis in one direction; we have no evidence for a similar effect in EP Aqr. The mechanism governing the launching of the observed jets therefore remains unknown. Evidence for rotation, with a velocity of $\sim$4 to 5 km s$^{-1}$\ at a radial distance of 10 to 30 au, has been obtained; but such a rotation velocity, at a distance where the escape velocity is of the order of 10 km s$^{-1}$, is somewhat marginal for producing sufficient oblateness for jets to be launched by the mass-losing star. However, any mechanism that tends to inflate the equatorial region, rotation or otherwise, may possibly generate sufficient pressure to push polar gas out along the axis and produce the observed polar structures (Zijlstra, private communication). Another unanswered question concerns the observation of relatively stronger SiO than CO emission in the jets when compared with the surrounding gas: it may reveal an enhanced SiO/CO abundance ratio, but it may equally well reveal different physical conditions in their environment; understanding what governs the SiO/CO abundance ratio in jets is a difficult question \citep[see for example][]{Cabrit2012}. Finally, we recall that the interaction of the circumstellar envelope with the interstellar medium is known to be important \citep{Cox2012, LeBertre2004}, but simple geometric considerations exclude a possible relation to the observed blue-western depletion. In summary, the present observations and their analysis have given evidence for early symmetry breaking of the morpho-kinematics of the circumstellar envelope of an AGB star, in strong contrast with the commonly accepted idea that such symmetry breaking normally occurs at the end of the AGB era, before the planetary nebula phase. Two narrow polar jets have been detected and whatever mechanism is causing their acceleration is likely playing a role in the acceleration of the slower wind at sub-polar latitudes. Evidence has been given for a strong depletion of the blue-western quadrant in both SiO and CO emissions, although much weaker in the latter case than in the former. We have shown that if the observed equatorial spiral is taken as evidence for binarity, the associated companion is probably unrelated to both the observed jets and the observed blue-western depletion. These therefore need to be interpreted independently of the equatorial spiral. Many questions remain unanswered: what is the mechanism that governs the launching of the jets? what is the mechanism that governs the depletion of the blue-western quadrant? what is the mechanism that governs the acceleration of the wind to terminal velocity?
what causes the observed asymmetry between the blue-shifted and red-shifted hemispheres? what precisely causes the difference between CO and SiO emission in the jets? how does the equatorial disc (the narrow component) build up? how do the jets slow down and/or open up? what causes the modulations observed in the equatorial plane at large distances (a spiral of intensity and circles of velocity)? These observations illustrate the complexity of the morpho-kinematics of nascent winds and warn against too hasty interpretations in the absence of observations of sufficient spatial and spectral resolution. It might well be that other oxygen-rich AGB stars, such as, for example, RS Cnc \citep{Hoai2014, Nhung2015}, would reveal similar complexity and early deviation from spherical symmetry when observed in greater detail: the seeds of the strong distortions that govern the formation of planetary nebulae may be present at an earlier stage of the star evolution than commonly assumed. The need for further detailed observations of other oxygen-rich AGB stars is evident. \section*{ACKNOWLEDGEMENTS} We express our deep gratitude to the referee, Professor Albert Zijlstra, for many pertinent comments that helped greatly with improving the quality of the manuscript. This paper makes use of the following ALMA data: 2016.1.00026.S and 2016.1.00057.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work was supported by the Programme National Physique et Chimie du Milieu Interstellaire (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. The Hanoi team acknowledges financial support from VNSC/VAST, the World Laboratory, the Odon Vallet Foundation and the Rencontres du Viet Nam. This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.99-2018.325.
\section{Introduction} The {\sc{Majorana Demonstrator}}~is a neutrinoless double-beta decay experiment using germanium as both source and detector. The {\sc{Demonstrator}}~contains 44.1 kg of Ge detectors divided between two independent cryostats \cite{general}. In total, the two modules contain 14.4 kg of \textsuperscript{nat}Ge and 29.7 kg of germanium enriched to 88\% \textsuperscript{76}Ge, the double beta decay isotope. The goals for the {\sc{Demonstrator}}~are to demonstrate background levels low enough to justify building a tonne-scale experiment, establish the feasibility of constructing and fielding modular arrays of Ge detectors, and search for additional physics beyond the Standard Model, such as solar axions and dark matter. The {\sc{Demonstrator}}~is operating underground at the 4850' level of the Sanford Underground Research Facility with the best energy resolution of any $0\nu\beta\beta$ experiment. Initial results based on datasets 3 and 4 indicate a 2.4 keV FWHM at 2039 keV and a projected background of $5.1^{+8.9}_{-3.2}$ c/(ROI-t-y), which is in good agreement with the {\sc{Demonstrator}}'s background goals \cite{tcald}. Numerous measures are responsible for the {\sc{Demonstrator}}'s low background levels. In addition to the shielding provided by the rock overhead, the detector array is surrounded by a low-background passive Cu and Pb shield with an active muon veto. Ultra-low-activity components and construction techniques are also used to limit contaminants. In particular, the cryostats and other copper components were constructed using ultra-clean, electroformed copper. Current assay upper limits predict a background of $\leq 2.45$ counts/ROI-t-y based on the {\sc{Demonstrator}}'s achieved resolution \cite{general,assay}. In addition to these hardware-based background reduction techniques, the p-type point-contact detector design allows for optimal pulse shape discrimination to distinguish candidate double beta decay events from background events. The {\sc{Demonstrator}}'s background goal presents unique challenges in designing high voltage and signal cable systems. Cables and connectors must be kept as low mass as possible to limit radioactive backgrounds. Strict radiopurity requirements also control what materials can be used, meaning that standard commercial products are often not an option. Custom-made components were designed and implemented to meet these requirements, but connectivity problems and high voltage breakdowns have necessitated a redesign of some of these components. The {\sc{Demonstrator}}~is currently operating 41 of 58 installed detectors. Seven of the non-operating detectors have problems associated with the signal connectors that are located on the cryostat cold plate or with damaged Low Mass Front End boards. The other 10 non-operating detectors cannot be electrically biased because of problems with HV cables, connectors, and in one instance a likely detector problem. The improvements to cables and connectors discussed here are aimed at raising the percentage of operational detectors to $>90\%$. \section{High Voltage Cables and Connectors} In the {\sc{Demonstrator}}, high voltage (HV) is applied to the outer contact of P-type Point Contact (PPC) High Purity Germanium (HPGe) detectors. An HV card supplies voltage to radiopure, in-vacuum HV cables through custom pin connectors on a vacuum flange. 
Each HV cable carries this voltage to a detector through a custom electroformed Cu HV fork connected to a copper ring that makes contact with the outer surface of a detector, opposite the point contact. The HV cable is constructed with a picocoax design, in which a central conductor is wrapped in a layer of FEP insulation, a tightly wound copper ground shield, and finally a second layer of FEP insulation that serves as the outer jacket. These cables, manufactured by Axon', are rated to carry 5 kV DC. They exhibit a low linear mass density of 3 g/m and have an outer diameter of 1.2 mm. The Cu ground shield has a gauge of 50 AWG. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.35]{HV_cable.png}\\ \textbf{Figure 1:} {\sc{Majorana Demonstrator}}~HV cable. The copper HV fork is shown in the upper left-hand corner of the photo. The flange connector is connected at the opposite end of the cross-arm at the vacuum flange. \end{center} \end{figure} \smallskip During initial operations, multiple detectors exhibited HV ``breakdowns'' in which there were significant discharges. These detectors were fully or partially biased down to prevent damage to associated electronics. It was determined that the breakdowns were occurring between the central conductor and the outer ground shield. These breakdowns were largely eliminated when the HV cable Cu ground shields were disconnected from ground. Of the detectors that are currently operating, 11 were brought on-line as a result of this change. A series of stress tests was performed on a sample HV cable to determine possible failure modes. It was determined that kinked cables can lead to the same breakdown signatures observed in the {\sc{Demonstrator}}~commissioning phase. The likely cause of the HV breakdowns is a deformity in the layer of insulation separating the Cu ground shield from the central conductor due to kinked or crushed cables. Damage to these cables likely occurred during installation, as no significant breakdowns were detected in cable testing following production and preceding installation. The collaboration has encountered additional problems with the current design of the HV cables and connectors. The Vespel clamp plug that covers the exposed end of the central conductor at the HV fork was found not to be secure for all detectors. Additionally, collaborators have identified a risk of intermittent connection at the vacuum flange. To address these issues, the collaboration plans to undertake a full replacement of the HV cables and connectors installed in the {\sc
{Demonstrator}}. An existing set of Axon' HV cables will be installed with the same specifications as before. To avoid the damaging of cables during installation, improved baffle plates will be set within the cross-arm to manage and direct cables to the detector cryostat. Additionally, ePTFE thread will be used to bundle the cables together, providing further management and protection within the cross-arm. Rather than using a Vespel clamp plug to cover the exposed end of the central conductor at the HV fork, a crimped connection will lock the central conductor in place with the HV fork, improving security. A new set of PEEK connectors will be assembled to provide improved connectivity of the high voltage cable at vacuum flange, with new sockets that have a higher clamping force. \section{Signal Cables and Connectors} The {\sc{Majorana}} signal cable and connector system is designed to transmit electronic pulses containing information about events in the germanium detectors. When an event occurs, charge is collected at the point contact and transmitted to a Low Mass Front End board (LMFE) with a FET that amplifies the signal\cite{guinn}. Each LMFE is connected to the preamp using four coaxial Axon' cables. Each set of four cables is divided into two separate cable bundles: one connecting the LMFE to a Vespel connector at the coldplate and another running between the coldplate and the D-sub connectors at the vacuum flange. The Axon' signal cables have the same picocoax design as the HV cables described above. However, the signal cables have a smaller outer diameter of 0.4 mm, leading to a reduced linear mass density of 0.4 g/m. The cables have an impedance of 50 $\Omega$ and a capacitance of 87 pF/m. The main challenge presented by the {\sc{Majorana}} signal cable system is the difficulty of fabricating Vespel connectors that are robust enough to withstand temperature cycling without the use of conventional spring components that fail the {\sc{Demonstrator}}'s radiopurity requirements. The beryllium copper (BeCu) contacts used in many commercial connectors have unacceptably high \textsuperscript{232}Th and \textsuperscript{238}U activities. The Vespel connectors currently installed in the {\sc{Demonstrator}}~are instead designed to avoid the need for contact springs, but this design requires very precise machining to ensure a secure connection. The machining constraints of the {\sc{Demonstrator}}'s underground machine shop have led to unreliable connectors. While the vacuum-side D-sub connectors are not responsible for observed connectivity issues in the {\sc{Demonstrator}}, installation problems in the D-sub connectors reduced the number of viable spare channels. There is also evidence of damage to the signal cables during installation. Observed instances of electrical shorts between signal cable ground shields and the coldplate indicate problems with at least one detector's signal cables. In order to improve the reliability of Vespel connectors at the coldplate, the signal connector design has been modified to incorporate a fuzz button contact. These fuzz button contacts are manufactured out of gold-plated molybdenum wool by Custom Interconnects. Unlike the BeCu contacts typically used to provide springiness in commercial connectors, fuzz buttons are expected to meet the {\sc{Demonstrator}}'s stringent radiopurity requirements based on assay results from the SuperCDMS collaboration \cite{CDMS}. 
The new connector design also provides a more secure connection using a mechanical locking mechanism. Prototypes of the improved connector design have passed 100\% of initial liquid nitrogen dunk tests, indicating that they will be able to withstand temperature cycling. A comparison of the old and new Vespel connector designs can be seen in Figure 2. The improved Vespel connectors will be installed during a replacement of the entire signal cable and connector system, planned to take place concurrently with the HV cable system upgrade. During this upgrade, the D-sub connectors at the vacuum flange will be replaced with more reliable commercial connectors from Glenair. Based on the evidence of damage to some existing signal cable ground shields, increased measures will be taken to protect signal cables during installation. Like the HV cables, signal cables will be bundled using ePTFE thread. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.6]{fuzz_button_contact.png}\\ \textbf{Figure 2:} Comparison of the installed and upgraded Vespel connector designs. A close-up of the fuzz button contacts and a size comparison are shown in the center images. \end{center} \end{figure} \section{Status and Outlook} The upgrades to HV and signal cables and connectors discussed in sections 2 and 3 will undergo thorough testing. A test stand using the {\sc{Demonstrator}}~prototype cryostat will be used with a string of detectors to test upgraded HV cables and read out signals through upgraded signal cables. An assay of the materials that will be used for the upgrade is also underway. The manufactured cables to be used in the upgrade will be assembled with their corresponding connectors at UNC before shipment to the {\sc{Demonstrator}}~site. Installation in Module 1 is scheduled to begin in the summer of 2018. \section{Conclusion} The {\sc{Majorana Demonstrator}}~uses low-mass high voltage and signal cables that must meet stringent radiopurity, thermal stress, and mechanical stress requirements. Issues with connectivity and stability have motivated a cable and connector upgrade. Thorough testing of all new high voltage and signal cables and connectors is underway. Upon completion of testing, the assembled cables will be shipped to the {\sc{Demonstrator}}~site at SURF. Data collection following the upgrade should commence sometime in Q4 2018. \ack This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, the Particle Astrophysics and Nuclear Physics Programs of the National Science Foundation, and the Sanford Underground Research Facility. \section*{References} \medskip
\section{Introduction} \label{introduction} The current understanding of the formation of large scale structure presumes that clusters, walls, filaments, etc.\ result from the gravitational collapse of small deviations from homogeneity in the early universe, but the source of these fluctuations is, at present, still a mystery. The most popular model is inflation, where vacuum energy in the early universe drives a period of exponential growth which stretches small quantum mechanical fluctuations to macroscopic scales much greater than the horizon at that time. As the universe expands, these fluctuations fall back into the causal horizon and collapse into the structure we see today. However, there is yet no definitive evidence in either accelerator experiments or cosmological observations to prove that a field with the appropriate characteristics exists to induce such an inflationary epoch. It is worthwhile, then, to consider alternatives to inflation which can also explain the origin of structure, specifically, topological defects. During cosmological phase transitions, topological defects can form where adjacent fields take on different vacuum states which can only be continuously connected if a region of false vacuum is trapped between them. The energy trapped in these defects can then gravitationally induce perturbations which will collapse into the observed structure. A particularly interesting subclass of defect models is cosmic strings, which are lineal structures formed when a $U(1)$ symmetry is spontaneously broken. If these strings were formed at the GUT scale, they can have sufficient energy, and therefore mass, to drive the necessary perturbations in the ordinary matter which collapse into the structure we observe today. The question then becomes, how can we observe strings and compare these models with inflation? Inflationary models may best be tested by microwave background observations, as these models make precise predictions for the spectrum of microwave perturbations from $180^{\circ}$ scales down to several orders of magnitude smaller. For string models, things are less certain. On large scales, strings predict a spectrum similar enough to inflation as to be statistically indistinguishable \cite{allen}. On smaller scales, issues like decoherence \cite{magajio}, which smooth the acoustic peaks, are not fully understood, and while it is likely that we could rule out inflation if strings are the true model, we may not be able to confirm the existence of strings through microwave observations, given the current uncertainty about the CMBR fluctuations they produce. In this paper, we shall discuss another method by which cosmic strings can be observed, namely through gravitational lensing. Unlike other defect models, where we expect perhaps only one defect to remain in a horizon volume, simulations suggest that a significant length of string is observable today. The gravitational fluctuations these strings induce will bend light, making it possible to observe a cosmic string if it is backlit by a visible source. Since there are many sources, including galaxies and quasars, which can light up a string, we will consider the structure of images that would arise in such lensing systems containing a long cosmic string. Our estimates of the string lensing probability suggest that string lenses occur at a rate of 10\% to 30\% of that of galaxy lenses in the case of quasar sources, a few dozen of which have been observed. 
We show that the resulting images have a unique signature, that is, along with the image pair that is associated with infinite straight strings, there will usually be a series of smaller, demagnified images which reside closer to the string itself. We conclude that new quasar surveys with large angular sky coverage, like the Sloan Digital Sky Survey, should contain a significant number of string lenses and be able to definitively observe or rule out the cosmic string model, at least for flat space. This paper is organized into several sections. In the following, we estimate the lensing probability for long cosmic strings and compare it to that for galaxies. In \S \ref{deflection} we calculate the deflection of a photon in the presence of a long cosmic string. In \S \ref{lenstheory}, we discuss the basic theory of gravitational lenses. The next section is broken into two parts, where \S \ref{network} discusses the generation of the long strings used in our calculations and \S \ref{images} explains how these were used to find lensed images. Finally, in \S \ref{conclusion} we discuss our results and suggest a search strategy for finding string lenses. \section{Long String Lensing Probability} \label{probability} We begin by considering the likelihood of observing a string lens system. Quasars represent the best objects for seeing long string lensing, as they are bright, with images not likely to be lost in any background, and new surveys like the Sloan Digital Sky Survey will observe them in large numbers (on the order of $10^5$ quasars over 1/4 of the sky). We would like to know how many quasar--string lensing systems we can expect to observe in such a survey, but this depends on how a particular quasar sample is observed. A simpler calculation is to measure the optical depth for lensing, that is, the probability of an object at a given redshift being lensed, which we can then compare to optical depths for quasar--galaxy lensing, which has already been observed for about a dozen systems and should occur on the order of hundreds of cases in the SDSS. To estimate the string lensing probability, we require two pieces of information: the projected angular density of string and a lensing cross section for that string. With regard to the former, numerical simulations of string networks in an expanding universe can give reliable estimates of the string density $\rho_{ls}$ in a horizon volume, characterized by the horizon radius $d_H$. Using the results of Bennett and Bouchet \cite{bennett} as a representative example, we see that in the matter dominated epoch the length of string in horizon units is a constant given by \begin{equation} \label{lstring} L_{ls} = {\rho_{ls}d_H^2 \over \mu} = 31 \pm 7, \end{equation} where $\mu$ is the energy per unit length of string. We shall treat this string as an ensemble of small links, with length $L_l$ in horizon units, distributed with a uniform probability density related to the above result, each with a random orientation with respect to the line of sight. These assumptions are appropriate when considering an average over many string networks. We shall also assume that each of the links is static, since we lack good information on the distribution of link velocities. In essence we are disregarding the effects of the Lorentz contraction (see next section for details), which means our estimate is probably only accurate to a factor of a few. 
We now subdivide space into differential volume elements such that the probability that more than one link, located by its center of mass, resides in the same volume is vanishingly small in comparison to the probability that one link resides in that volume. For the $i$th volume element, we find that the differential angle of sky subtended by the string is given by \begin{equation} \label{dangle} d\Theta_i = n_i {L_l d_H\over d_A} \sin(\beta_i) dV_i, \end{equation} where $n_i$ is the number of links in the volume, $\beta_i$ is a random orientation angle associated with each link, and $d_A$ is the angular diameter distance to the link \begin{equation} \label{angdia} d_A = {2 \over H_0 (1+z)^2} (1+z-\sqrt{1+z}), \end{equation} assuming a flat, matter dominated universe. Under the same assumptions, the volume element and horizon distance can also be expressed in terms of the redshift: \begin{eqnarray} dV &=& {4 \over H_0^3} {(1+z - \sqrt{1+z})^2 \over (1+z)^3 \sqrt{1+z}} dz d\Omega, \\ \nonumber d_H &=& {2 \over H_0 (1+z)^{3/2}}. \end{eqnarray} To estimate the lensing cross section, we assume that each link has the same cross section as if it were part of an infinite straight string. In this case, the angular cross section per radian of string is \begin{equation} \label{cross} \delta \phi = 8\pi G \mu {D_{lq} \over D_q} \sin{\beta}. \end{equation} The distances $D_{lq}$ and $D_q$ are respectively the angular diameter distances from the lens to the quasar and from the observer to the quasar. In flat space for a source located at a redshift of $z_2$ and an observer located at $z_1$, the general form of the angular diameter distance is given by \begin{equation} \label{angulardiameter} d_A(z_1,z_2)= {2 \over H_0 (1+z_2)} \left[ {1 \over \sqrt{1+z_1}}-{1 \over \sqrt{1+z_2}} \right]. \end{equation} We ignore the effects of structure formation on these distances, which introduces deviations from homogeneity that can affect the path length. Convolving the cross section with the angular string density, we can determine the optical depth for a quasar at a redshift of $z = z_q$: \begin{equation} \label{optical} \tau(z_q) = 8\pi G\mu L_l{1\over \Omega_{O}} \sum_i n_i \sin^2(\beta_i) {d_H \over d_A} {D_{lq} \over D_q}, \end{equation} where $\Omega_{O}$ is the observed fraction of sky. Taking the expectation value for $\tau$, we find that $\langle n_i\rangle = L_{ls}/(L_l d_H^3)$ and $\langle\sin^2(\beta)\rangle = 2/3$. This leaves us with the integral over the observed volume \begin{equation} \label{optical1} \langle \tau \rangle = {16 \over 3} \pi G\mu L_{ls} \int {dV \over \Omega_O} {1 \over d_A d_H^2} {D_{lq} \over D_q}, \end{equation} which can be performed analytically, giving the final result: \begin{equation} \label{depth} \tau(z_q) = {8 \over 3} \pi G \mu L_{ls} \left[{1 \over 21} z_q^3 + {13 \over 105} z_q^2 + {3 \over 35} z_q + {2 \over 35} - \sqrt{1+z_q}\left( {2 \over 105} z_q^2 +{2 \over 35} z_q +{2 \over 35} \right)\right]. \end{equation} Now we would like to estimate the variance in $\tau$, which will permit us to set limits on string parameters using lensing statistics. Results from numerical simulations indicate that on scales of about $0.1 d_H$, the string undergoes a random walk. To make the problem tractable, we shall continue to treat each of these segments as uncorrelated with the rest, and presume that the random walk correlation is fairly small. 
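As a quick numerical check of eq.\ (\ref{depth}) before turning to the variance, the following sketch (in Python; the values $G\mu = 10^{-6}$ and $L_{ls} = 31$ are the illustrative choices used elsewhere in this section, not new results) simply evaluates the analytic optical depth:
\begin{verbatim}
# Sketch: evaluate the long-string lensing optical depth of eq. (depth).
import math

def tau(z_q, G_mu=1.0e-6, L_ls=31.0):
    poly = z_q**3/21.0 + 13.0*z_q**2/105.0 + 3.0*z_q/35.0 + 2.0/35.0
    root = math.sqrt(1.0 + z_q)*(2.0*z_q**2/105.0 + 2.0*z_q/35.0 + 2.0/35.0)
    return (8.0/3.0)*math.pi*G_mu*L_ls*(poly - root)

print(tau(2.0))   # ~1.8e-4 for G mu = 1e-6 and L_ls = 31
\end{verbatim}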
The mean of the square of the optical depth is given by \begin{equation} \label{tausq} \left \langle \tau^2 \right\rangle = {1 \over \Omega_O^2}(8\pi G\mu L_l)^2 \sum_{i,j} \langle n_i n_j \rangle \langle \sin^2(\beta_i) \sin^2(\beta_j)\rangle \left({d_H \over d_A}{D_{lq} \over D_q}\right)_i \left({d_H\over d_A}{D_{lq} \over D_q}\right)_j. \end{equation} The expectation value $\langle n_i n_j \rangle$ in the limit of differential volumes is equal to \begin{equation} \label{ninj} \langle n_i n_j \rangle = n^2 dV_i dV_j + n dV_i \delta_{i,j}, \end{equation} when $n_i$ and $n_j$ are uncorrelated, and $n = \langle n_i \rangle$. Given this relation, one can easily show that \begin{equation} \label{variance} \sigma_\tau^2 = \langle \tau^2\rangle - \langle\tau\rangle^2 = {1 \over \Omega_O^2}(8\pi G\mu L_{ls})^2 {L_l \over L_{ls}} \int dV {8 \over 15} {1 \over d_H d_A^2} \left({D_{lq} \over D_q}\right)^2, \end{equation} which has the analytic result \begin{equation} \label{varianceAnalytic} \sigma_\tau^2 = {1 \over \Omega_O}(8\pi G\mu L_{ls})^2 {L_l\over L_{ls}} {2 \over 225} { z_q^3+3z_q^2-12z_q-24+24\sqrt{1+z_q} \over (\sqrt{1+z_q}-1)^2}. \end{equation} To get the full error in $\tau$ we convolve this result with the theoretical uncertainty in the long string density $\sigma_{ls}$ given in eq.\ (\ref{lstring}), which adds in quadrature. Now consider a distribution of sources with mean number density as a function of redshift $N(z)$. The expected number of observed lenses is given simply by \begin{equation} \label{meanLens} \int dV N(z) \tau(z). \end{equation} To calculate the variance, we must account for both the variance in $\tau$ and the Poisson fluctuations in the source distribution. Including both these effects, we find the full variance is \begin{equation} \label{varianceLens} \sigma^2 = \int dV \left( N(z)\tau(z) + N(z)^2\sigma_\tau^2(z) + N(z)^2 \tau^2(z) {\sigma_{ls}^2 \over L_{ls}} \right). \end{equation} As a toy example, we consider a quasar distribution given by $N(z) = \delta(z-2) 10^5/\pi$, which roughly approximates that of the Sloan Digital Sky Survey. The SDSS will observe one quarter of the full sky, so using the results of this section, we find for $G\mu = 10^{-6}$ that the expected number of observed string-lensed quasars is $18 \pm 6$. In figure \ref{tau} we show the optical depth for long cosmic strings, assuming a value $G \mu = 10^{-6}$, compared with the optical depth resulting from galaxy lenses as calculated by Turner, Ostriker, and Gott \cite{turner}. The galaxy estimate is based on an isothermal sphere model, which represents an upper limit to the lensing probability, as including a finite core to the galaxy tends to reduce the optical depth by up to 50\% \cite{white}. Thus we expect that in a wide field survey the number of observed string lenses will be anywhere from 10\% to 30\% of the number of galaxy lenses (assuming $G\mu = 10^{-6}$). Given that of order 10 lensed quasars have been observed, one may expect that a few string lenses should have been seen. In fact no lenses have yet been ascribed to cosmic strings, but this is statistically unsurprising because of the large variance associated with small number statistics. We can, however, reasonably conclude that string tensions $G\mu$ greater than a few times $10^{-6}$, which would predict tens of observed lenses, are probably ruled out, consistent with the limits from pulsar timing. 
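For orientation, the toy SDSS estimate above follows directly from eq.\ (\ref{depth}); the sketch below (with the stated assumptions of $10^5$ quasars, all placed at $z = 2$, and $G\mu = 10^{-6}$) reproduces the mean of about 18 expected string lenses, while the quoted $\pm 6$ additionally includes the optical-depth and string-density variance terms of eq.\ (\ref{varianceLens}):
\begin{verbatim}
# Sketch: expected number of string-lensed quasars for the delta-function
# source distribution N(z) = delta(z-2) 1e5/pi used in the text.
import math

def tau(z, G_mu=1.0e-6, L_ls=31.0):      # same expression as eq. (depth)
    poly = z**3/21 + 13*z**2/105 + 3*z/35 + 2/35
    root = math.sqrt(1 + z)*(2*z**2/105 + 2*z/35 + 2/35)
    return (8/3)*math.pi*G_mu*L_ls*(poly - root)

n_quasars   = 1.0e5                      # SDSS-like sample, all at z = 2
mean_lenses = n_quasars*tau(2.0)         # ~18
poisson_err = math.sqrt(mean_lenses)     # Poisson piece only (~4)
print(mean_lenses, poisson_err)
\end{verbatim}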
\section{Geodesic Deflection By A Long Cosmic String} \label{deflection} In a previous paper\cite{delaix}, we derived an equation for the deflection of a null geodesic--corresponding to a photon trajectory in the geometric limit--arising from the gravitational field of a cosmic string loop. For long cosmic strings, we can make use of some of that derivation, but we must now be careful, as the long strings stretch to horizon scales, requiring us to consider certain surface terms which could safely be ignored when examining small loops. Let us begin again by assuming a weak field so that the full metric may be expressed as \begin{equation} \label{metric} g_{\mu \nu} = \eta_{\mu \nu} + h_{\mu \nu}, \end{equation} where $\eta_{\mu \nu} = {\rm diag}(-1,1,1,1)$ is the usual Minkowski metric and $ h_{\mu \nu}$ is a small perturbation such that all terms of $O(h^2)$ are negligible. For simplicity, we choose to work in the harmonic gauge, which implies the condition $g_{\mu \nu} \Gamma^\lambda_{\mu \nu} = 0$. Using this gauge choice, to linear order we are left with a simple wave equation for the metric \begin{equation} \label{wave} \Box^2 h_{\mu \nu} = -16\pi G S_{\mu \nu}, \end{equation} where $S_{\mu \nu} = T_{\mu \nu} - 1/2 \eta_{\mu \nu} T^\lambda_\lambda$ is the traceless component of the stress energy tensor $T_{\mu \nu}$. If we decompose the photon four velocity $\gamma^\mu$ into its zeroth and first order pieces, $\gamma^\mu_{(0)}$ and $\gamma^\mu_{(1)}$ respectively, then it is a straightforward calculation to solve the geodesic equation and show that the first order deflection for a photon emitted at $t_1$ and observed at $t_2$ is given by \begin{equation} \label{geodesicsoln} \gamma_{\alpha (1)} = {1 \over 2} \int_{t_1}^{t_2} dt~h_{\mu \nu, \alpha}\gamma_{(0)}^\mu\gamma_{(0)}^\nu - \left.h_{\mu \alpha}\gamma_{(0)}^\mu\right|_{t_1}^{t_2}, \end{equation} Where we are integrating over the photons zeroth order trajectory, {\it i.e.}~$x_\mu = x_{\mu0}+\gamma_{\mu(0)} t$. For loops, which are compact, we could safely ignore the second surface term, but for now we should retain it when considering long cosmic strings as they are not compact objects. To make further progress, we need to consider the form of the stress energy tensor resulting from a cosmic string. Strings are well approximated as lineal gravitational sources, so the string configuration at any time is given by a two parameter vector function $\bbox{f}(\sigma,t)$ where $t$ is the time and $\sigma$ is a parameter which runs along the conformal length of the string. One may straight forwardly infer from this that the string traceless stress energy may be written in the form \begin{equation} \label{stress} S_{\mu \nu} = \mu \int d\sigma~F_{\mu \nu} \delta^{(3)}(\bbox{x}-\bbox{f}(\sigma, t)), \end{equation} Where $\mu$ is again the string energy density and $F_{\mu \nu}$ can be expressed in terms of $\bbox{f}$ and its derivatives. Now, let us designate $\tilde{\gamma}_\alpha$ to be the contribution to the first order deflection which comes from the integral in eq.\ (\ref{geodesicsoln}), separating it from the surface term. Contracting this with a derivative, we get \begin{eqnarray} \label{gammaderiv} \partial^{\alpha} \tilde{\gamma}_\alpha &=& {1 \over 2} \int_{t_1}^{t_2} dt~\Box^2 h_{\mu \nu} \gamma_{(0)}^\mu\gamma_{(0)}^\nu \nonumber \\ &=& -8 \pi G \int_{t_1}^{t_2} dt~S_{\mu \nu}\gamma_{(0)}^\mu\gamma_{(0)}^\nu, \end{eqnarray} where the second line comes from the metric equation (\ref{wave}). 
Plugging in the stress energy from eq.\ (\ref{stress}), we see that \begin{equation} \label{gammaderiv1} \partial^{\alpha} \tilde{\gamma}_\alpha = -8 \pi G \mu \int d\sigma \int_{t_1}^{t_2} dt~F_{\mu \nu}\delta^{(3)}(\bbox{x}_0+\bbox{\gamma}_{(0)}t- \bbox{f}(\sigma, t)) \gamma_{(0)}^\mu\gamma_{(0)}^\nu. \end{equation} We can evaluate the time integral if we decompose $\bbox{f}$ into components which are perpendicular, $\bbox{f}_\bot$, and parallel, $\bbox{f_\|}$, to the zeroth order photon trajectory, with the result \begin{equation} \label{gammaderiv2} \partial^{\alpha} \tilde{\gamma}_\alpha = -8 \pi G \mu\int d\sigma\left[ {F_{\mu \nu} \gamma_{(0)}^\mu\gamma_{(0)}^\nu \over 1 - \dot{f}_\|} \delta^{(2)}(\bbox{x}_{\bot 0}-\bbox{f}_\bot)\right]_{t=t_0}, \end{equation} where $t_0$ is the solution to the equation $t_0 = f_\|(\sigma,t_0)-x_{\| 0}$, {\it i.e.}~the light cone time slice, and dots refer to derivatives with respect to time. Note that the limits on the $\sigma$ integral are constrained to the regions where a solution with $t_1 < t_0 < t_2$ exists. Now we shall write out the left hand side of the equality in terms of a parallel and perpendicular gradients, $\partial^\alpha\tilde{\gamma}_{\alpha(1)} = \nabla_\bot \cdot \tilde{\bbox{\gamma}}_{\bot(1) } + \nabla_\| \cdot \tilde{\bbox{\gamma}}_{\|(1) }$, where the parallel gradient can be written as $ \nabla_\| \cdot \tilde{\bbox{\gamma}}_{\|(1) } = \gamma_{\delta (0)} \gamma_{(0)}^\beta \partial^\delta\tilde{\gamma}_{\beta (1)}$. Now we consider the contraction $\gamma_{(0)}^\beta\tilde{\gamma}_{\beta (1)}$, which from eq.\ (\ref{geodesicsoln}) is \begin{eqnarray} \label{dgammadt} \gamma_{(0)}^\beta \gamma_{\beta (1)} &=& {1 \over 2} \int_{t_1}^{t_2} dt~\gamma_{(0)}^\beta h_{\mu \nu, \beta}\gamma_{(0)}^\mu\gamma_{(0)}^\nu \\ \nonumber &=& {1 \over 2} \int_{t_1}^{t_2} dt~{d \over dt} h_{\mu \nu}\gamma_{(0)}^\mu\gamma_{(0)}^\nu \\ \nonumber &=& \left.{1 \over 2}h_{\mu \nu}\gamma_{(0)}^\mu\gamma_{(0)}^\nu \right|_{t_1}^{t_2}. \end{eqnarray} We are able to perform this integration because $\gamma_{(0)}^\beta \partial_\beta$ is equivalant to taking a complete derivative with time, $d/dt$. Using the above result in conjunction with eq.\ (\ref{gammaderiv2}), it is easy to show \begin{equation} \label{perpderiv} \nabla_\bot \cdot \tilde{\bbox{\gamma}}_{\bot(1)} = - 8 \pi G \mu\int d\sigma\left[ {F_{\mu \nu} \gamma_{(0)}^\mu\gamma_{(0)}^\nu \over 1 - \dot{f}_\|} \delta^{(2)}(\bbox{x}_{\bot 0}-\bbox{f}_\bot)\right]_{t=t_0} - \left.{1 \over 2} {d \over dt} h_{\mu \nu}\gamma_{(0)}^\mu\gamma_{(0)}^\nu\right|_{t_1}^{t_2}. \end{equation} We can solve this equation by assuming that the first order deflection can be written as a gradient of a potential, $\tilde{\bbox{\gamma}}_{\bot(1)} = \nabla_\bot \Phi$, leaving a two dimensional Poisson equation from which $\Phi$ may be found by integrating over the Greens function for the two dimensional Laplacian, $G(\bbox{x}_\bot,\bbox{x}'_\bot) = - \ln(|\bbox{x}'_{\bot 0}-\bbox{x}_{\bot 0}|^2)/4\pi$. Specifically, we get \begin{eqnarray} \label{phi} \Phi &=& \left.{1 \over 8\pi}\int d^2x'_\bot \ln(|\bbox{x}_{\bot 0}-\bbox{x}'_\bot|^2){d \over dt} h_{\mu \nu}(\bbox{x}'_\bot+\bbox{x}_{\|0}+\bbox{\gamma}t,t)\gamma^ \mu_{(0)} \gamma^\nu_{(0)} \right|_{t_1}^{t_2} \\ \nonumber && -2 G \mu \int d\sigma~\left[ {F_{\mu \nu} \gamma^\mu_{(0)} \gamma^\nu_{(0)} \over 1-\dot{f}_\|} \ln(|\bbox{f}_{\bot}-\bbox{x}_{\bot 0}|^2)\right]_{t = t_0}. 
\end{eqnarray} Finally, to recover the perpendicular deflection, we take the gradient and add the surface term from eq.\ (\ref{geodesicsoln}) which leaves us with \begin{eqnarray} \label{deflecttilde} \bbox{\gamma}_\bot &=& {1 \over 4\pi}\left.\int d^2x'_\bot { \bbox{x}_{\bot 0}-\bbox{x}'_\bot \over |\bbox{x}_{\bot 0}-\bbox{x}'_\bot|^2} {d \over dt} h_{\mu \nu}(\bbox{x}'_\bot+\bbox{x}_{\|0}+\bbox{\gamma}t,t) \gamma^\mu_{(0)}\gamma^\nu_{(0)} \right|_{t_1}^{t_2} \\ \nonumber && + 4 G \mu \int d\sigma~\left[ {F_{\mu \nu} \gamma^\mu_{(0)} \gamma^\nu_{(0)} \over 1-\dot{f}_\|} {\bbox{f}_{\bot}-\bbox{x}_{\bot 0} \over |\bbox{f}_{\bot}-\bbox{x}_{\bot 0}|^2}\right]_{t = t_0} - \left. \bbox{h}_\bot \right|_{t_1}^{t_2}. \end{eqnarray} where $\bbox{h}_\bot$ is defined to be the perpendicular part of $h_{\mu \alpha} \gamma^\mu_{(0)}$. This result gives us exactly what we want for lensing calculations, the photon deflection away from its zeroth order path. The values of $\gamma_{0}$ and $\gamma_{\|}$ which are equivalent to first order, give us the redshift of a photon as it passes a string, but we are not interested in this calculation here. Eq. (\ref{deflecttilde}) may at first seem to be a retrograde step as it involves a two dimensional integral of the metric in space where originally we had only a one dimensional integral of the metric over time. However, we argue that the first and third terms which contain the metric explicitly can be neglected. To do so we require an explicit solution for the metric in terms of the stress energy: \begin{equation} \label{stressmetric} h_{\mu \nu}(\bbox{x},t) = 4 G \int d^3x' {S_{\mu \nu}(\bbox{x}', \tau) \over \left| \bbox{x} - \bbox{x}' \right| }, \end{equation} where $\tau = t - \left| \bbox{x} - \bbox{x}' \right|$ is the retarded time, and this solution is derived from the Greens function for the $\Box^2$ operator. Using our string stress energy given in eq.\ (\ref{stress}), we can reduce the metric to \begin{equation} \label{stringmetric} h_{\mu \nu}(\bbox{x},t) = 4 G \mu \int d\sigma {F_{\mu \nu}(\sigma, \tau) \over \left| \bbox{x} - \bbox{f} \right| - \left( \bbox{x} - \bbox{f} \right) \cdot \dot{\bbox{f}}}. \end{equation} Now consider points which are far from the string. The metric goes like an integral over $|\bbox{x} - \bbox{f}|^{-1}$ while the middle term in eq.\ (\ref{deflecttilde}) goes like an integral over $|\bbox{x}_{\bot 0} - \bbox{f}_\bot|^{-1}$. The latter represents the minimum distance between the zeroth order photon trajectory and the cosmic string, while the former will, in general, go like the distance from the photon to the string at the current time. In the case of string loops, we could expect that the two metric terms would fall off like the inverse of the distance while the middle term remained constant, thus allowing us to drop the explicit metric terms. With an infinite string, one must be more careful because the distance from the string can never be large with respect to the string size. However, when determining the image structure in lensing, it is not the absolute deflection of the photons which matters, but rather, it is the difference in deflection between two nearby rays that counts. In this case, we see that for photons far from the string, the difference in the contribution between the two metric terms declines with distance while the difference between the static terms remains constant. 
So, for photons which pass nearby the string, that is a small distance when compared to the source and observer, the effect of the metric terms then is merely to cause all of the images to be displaced, but not to alter the shape or relative orientation of the images. Thus, for the purposes of determining the structure of strong lensing due to strings, we may drop the explicit metric terms all together and write \begin{equation} \label{deflect} \bbox{\gamma}_\bot = 4 G \mu \int d\sigma~\left[ {F_{\mu \nu} \gamma^\mu_{(0)} \gamma^\nu_{(0)} \over 1-\dot{f}_\|} {\bbox{f}_{\bot}-\bbox{x}_{\bot 0} \over |\bbox{f}_{\bot}-\bbox{x}_{\bot 0}|^2}\right]_{t = t_0}, \end{equation} where the omitted terms only add a constant to this result when the photon impact parameter is small compared to the distances of the source and observer to the lens. \section{Basic Gravitational Lensing Theory} \label{lenstheory} Before we consider the specific example of a long cosmic string lens, let us first take a detour into the basic theory of gravitational lensing. For most calculations, it is more than adequate to consider photon trajectories as the path taken by null geodesics like those discussed in the previous section, so that the lensing system is described by simple geometry and wave properties are
ignored. In figure \ref{lensdiag} we show a pictorial representation of a gravitational lensing system with an optical axis defined to roughly intersect the center of the lens. A photon is emitted from a source $S$ displaced from the optical axis by a vector $\bbox{\eta}$ at a distance of $D_s$ from an observer and $D_{ls}$ from the lens, and it travels in a straight line until it reaches the plane of the lens. There, its trajectory is deflected by a vector $\bar{\bbox{\alpha}}$, defined as the initial trajectory minus the final, and it again travels in a straight line to the observer located at a distance $D_l$ from the lens. This approximation is known as the thin lens approximation because the lens and the deflection it induces are presumed to occur in a single plane. To first order, this is valid for long cosmic strings. The location of the image $I$, where the ray intersects the lens plane, defines the vector $\bbox{\xi}$ which is measured from the optical axis to $I$ in the lens plane. Since we will be considering sources and observers that are separated on cosmological scales, the distances $D_{l}$, $D_{ls}$ and $D_{s}$ are angular diameter distances (see eq.\ (\ref{angulardiameter})). In the small angle limit, the condition that a ray emitted from the source will have an image seen at $I$ by an observer at $O$ requires that \begin{equation} \label{lensfull} \bbox{\eta} = {D_s \over D_l} \bbox{\xi} - D_{ls} \bar{\bbox{\alpha}}(\bbox{\xi}) . \end{equation} This equation, often referred to as the lens equation, gives one the unique source location for any observed image; its inverse, however, is not unique, as a single source may generate more than one image. It is convenient to recast the lens equation into a dimensionless form by dividing through by the length $D_l$, so that the coordinates are given units of angular displacement with respect to the optical axis. Thus we define a new set of variables \begin{eqnarray} \label{dimensonlessvar} \bbox{x} &\equiv& {\bbox{\xi} \over D_l}, \\ \nonumber \bbox{y} &\equiv& {\bbox{\eta} \over D_s}, \\ \nonumber \bbox{\alpha} &\equiv& \bar{\bbox{\alpha}} {D_{ls}\over D_s}, \end{eqnarray} which yield a dimensionless lens equation \begin{equation} \label{lens} \bbox{y} = \bbox{x} - \bbox{\alpha}(\bbox{x}). \end{equation} The above equation can be inverted to give the location of images for a particular source location, but one can also use the information contained therein to determine the magnification of those images. Suppose a source emits a narrow pencil beam of photons which subtends a solid angle $d\Omega^*$, while the image of the source beam subtends an angle $d\Omega$, so that the ratio of the solid angles $d\Omega^* /d\Omega$ will give the flux magnification of the image to the source. Using the lens equation, one can derive the magnification by considering the Jacobian matrix \begin{equation} \label{jacobian} A_{ij} = {\partial y_i \over \partial x_j}, \end{equation} and observing that the magnification factor ${\cal M}(\bbox{x}) = d\Omega^*(\bbox{x}) /d\Omega(\bbox{x})$ is given by the inverse of the determinant of $A_{ij}$, \begin{equation} \label{determinant} {\cal M}(\bbox{x}) = {1 \over \det A(\bbox{x})}. \end{equation} Note that this same factor will give the angular magnification for extended objects, so gravitational lenses conserve surface brightness. 
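As a concrete illustration of this bookkeeping, the short sketch below maps image-plane points through the dimensionless lens equation (\ref{lens}) and estimates the magnification of eq.\ (\ref{determinant}) from a finite-difference Jacobian; a point-mass deflection law is used purely as a stand-in, since it is analytic and easy to check, and is not the string deflection derived above:
\begin{verbatim}
# Sketch: dimensionless lens equation y = x - alpha(x) and magnification
# 1/det(A), with A_ij = dy_i/dx_j estimated by central differences.
import numpy as np

def alpha(x, theta_E=1.0):
    return theta_E**2 * x / np.dot(x, x)   # point-mass deflection (stand-in)

def source_position(x):
    return x - alpha(x)                    # eq. (lens)

def magnification(x, h=1.0e-5):
    A = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        A[:, j] = (source_position(x + dx) - source_position(x - dx))/(2*h)
    return 1.0/np.linalg.det(A)            # eq. (determinant)

x = np.array([1.5, 0.3])                   # an image position, in Einstein radii
print(source_position(x), magnification(x))
\end{verbatim}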
\section{Numerical Calculations with Long Strings} \subsection{Constructing Long String Networks} \label{network} In \S \ref{deflection}, we derived an expression for the photon deflection that will be relevant to gravitational lensing with cosmic strings; now we must consider the problem of generating realistic long cosmic strings. Analytic work, in particular scaling solutions, has successfully described some of the average properties of long strings, but it cannot give the details of the structure of a particular string. Only by simulating networks of interacting strings can we hope to generate realistic structure. This exercise can be greatly simplified if we restrict ourselves to flat, Minkowski space rather than considering the expanding universe, but the resulting structure of these strings will not quantitatively agree with those produced by expanding universe models. However, there should be good qualitative agreement, sufficient for our lensing analysis. That is, we expect that the structure of the lensed images should be similar enough to those which would result from more accurate string simulations to make general conclusions about string lenses. Let us begin by considering the equations of motion for a cosmic string. The dynamics of strings are dominated by their tension, while gravitational effects are suppressed by factors of order $G \mu$, so for now we can ignore the back reaction, but we will discuss its effects later. We need the string location, which is described by a four vector $f^\mu$ with a timelike component $f^0 = t$ and the spatial displacement $\bbox{f}(\sigma,t)$ that we used in the previous section. Then, considering only tension, the equations of motion for the string are \begin{equation} \label{eqofmotion} \ddot{f}^\mu - f''^{\mu} = 0, \end{equation} where dots refer to derivatives with respect to time and primes refer to derivatives with respect to $\sigma$. Our choice of the harmonic gauge enforces two constraints, \begin{equation} \label{constraint} \dot{f}^\mu f'_\mu = 0 \end{equation} and \begin{equation} \label{constraint1} \dot{f}^2 + f'^2 = 0, \end{equation} which restrict the motion to transverse directions and mandate conservation of energy respectively. The stress energy can also be expressed as a function of $\bbox{f}$ and has the form \begin{eqnarray} \label{stressenergy} T_{\mu \nu} = \mu \int d\sigma~(\dot{f}_\mu \dot{f}_\nu -f'_\mu f'_\nu) \delta^{(3)}(\bbox{x} - \bbox{f}(\sigma,t)), \end{eqnarray} consistent with the form suggested in eq.\ (\ref{stress}). For non--interacting strings, it would be sufficient to specify some initial conditions consistent with the gauge constraints and evolve them using the wave equation. However, real strings can interact when two different segments intersect, causing them to reconnect with the opposite segment. To handle both the evolution and intersection of the string network, we shall turn to a clever algorithm first proposed by Smith and Vilenkin \cite{smith}. The foundation of the Smith--Vilenkin algorithm is the fact that for a set of points equally spaced in $\sigma$, separated by $\delta$, the wave equation (\ref{eqofmotion}) can be reduced exactly to a finite difference equation on a lattice of $\sigma$ and $t$ points. 
In terms of the displacement vector $\bbox{f}$, we get \begin{equation} \label{eqofmotionlattice2nd} \bbox{f}(\sigma,t+\delta) =\bbox{f}(\sigma+\delta,t) + \bbox{f}(\sigma-\delta,t) - \bbox{f}(\sigma,t-\delta). \end{equation} This second order equation can be reduced to a pair of first order equations if we consider the velocity, defined as \begin{equation} \label{velocity} \dot{\bbox{f}} \equiv \left\{ \bbox{f}(\sigma,t+\delta)- {1 \over 2} \left[ \bbox{f}(\sigma+\delta,t) + \bbox{f}(\sigma-\delta,t)\right] \right\}/\delta. \end{equation} Thus, we get \begin{equation} \label{feq} \bbox{f}(\sigma,t+\delta) = {1\over 2} [\bbox{f}(\sigma+\delta,t)+\bbox{f}(\sigma-\delta,t)] + \dot{\bbox{f}}(\sigma,t)\delta, \end{equation} and \begin{equation} \label{fdoteq} \dot{\bbox{f}}(\sigma,t+\delta) = {1 \over 2} [\dot{\bbox{f}}(\sigma+\delta,t) + \dot{\bbox{f}}(\sigma-\delta,t)] + [\bbox{f}(\sigma+2\delta,t) -2\bbox{f}(\sigma,t) + \bbox{f}(\sigma-2\delta,t)]/4\delta. \end{equation} A complete solution can be specified by initially fixing the positions for every even point on the $\sigma$ lattice and velocities for every odd point. After the first time step, eqs.\ (\ref{feq}) and (\ref{fdoteq}) will give the positions for each odd point on the $\sigma$ lattice and velocities for each even point on the lattice. After the second time step, even points will again be positions and odd points will again be velocities, and so on, so that in general, the plane of $\sigma$ and $t$ will be filled with interlocking diamond lattices of positions and velocities. The next challenge is to satisfy the gauge constraints on the lattice. In eq.\ (\ref{velocity}) we have given a discrete velocity, and now we need a discrete version of the $\sigma$ derivative. It has the obvious form \begin{equation} \label{fprime} \bbox{f}'(\sigma,t) \equiv [\bbox{f}(\sigma+\delta,t)-\bbox{f}(\sigma-\delta,t)]/2\delta, \end{equation} where the gauge constraints for the discrete $\bbox{f}$ remain unchanged, \begin{equation} \label{descretegauge1} \dot{\bbox{f}}\cdot \bbox{f}' = 0, \end{equation} and \begin{equation} \label{descretegauge2} \dot{\bbox{f}}^2 + \bbox{f}'^2 = 1. \end{equation} To ensure that these conditions would be preserved through the entire evolution of the string, Smith and Vilenkin proposed discretizing space in the same way as $\sigma$ and $t$, that is, the space itself is a lattice of points with spacing $\delta$. String configurations are then described by a series of connected links between successive string points on the $\sigma$ lattice consisting of three possible types. The first type is a static link with $\dot{\bbox{f}} = 0$ and $|\Delta \bbox{f}| = 2\delta$, implying that one coordinate of $\Delta \bbox{f}$ is $\pm 2\delta$ while the others are zero. The second type is a moving link for which $|\bbox{f}'| = |\dot{\bbox{f}}| = \sqrt{2}/2$. Two components of $\Delta \bbox{f}$ are $\pm \delta$ while the third is zero, and the velocity $\dot{\bbox{f}}$ also has two non-zero components which are each $\pm 1/2$ and must be normal to $\Delta \bbox{f}$. The last type of link is a cusp where $\Delta \bbox{f} = 0$. The velocity of a cusp is 1, corresponding to a link traveling at the speed of light parallel to one of the axes. One can easily verify that all of these links satisfy the gauge conditions and that the constraints will be preserved by the equations of motion. Finally, to accurately evolve a string network, we must account for string inter--commutations, which occur when different segments of the string collide. 
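A minimal sketch of this free evolution step, eqs.\ (\ref{feq}) and (\ref{fdoteq}), is given below for a closed loop with periodic $\sigma$; the interlocking even/odd lattice bookkeeping and the inter--commutation test discussed next are omitted for brevity:
\begin{verbatim}
# Sketch: one time step of the discrete string evolution, eqs. (feq)
# and (fdoteq), for a closed loop with periodic sigma.
import numpy as np

def step(f, fdot, delta):
    # f, fdot: arrays of shape (N, 3) sampled at sigma = 0, delta, 2 delta, ...
    fp  = np.roll(f, -1, axis=0)            # f(sigma + delta)
    fm  = np.roll(f,  1, axis=0)            # f(sigma - delta)
    fpp = np.roll(f, -2, axis=0)            # f(sigma + 2 delta)
    fmm = np.roll(f,  2, axis=0)            # f(sigma - 2 delta)
    f_new    = 0.5*(fp + fm) + fdot*delta                        # eq. (feq)
    fdot_new = (0.5*(np.roll(fdot, -1, axis=0) + np.roll(fdot, 1, axis=0))
                + (fpp - 2.0*f + fmm)/(4.0*delta))                # eq. (fdoteq)
    return f_new, fdot_new
\end{verbatim}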
We have defined links which are connections of two successive string positions $\bbox{f}(\sigma,t)$ and $\bbox{f}(\sigma+\delta,t)$. A collision occurs when two string points $\bbox{f}(\sigma_i,t)$ and $\bbox{f}(\sigma_j,t)$ both fall on the same location. If this happens, we inter--commute the links so that the point $\bbox{f}(\sigma_i,t)$ is now connected to $\bbox{f}(\sigma_j+\delta,t)$ and the point $\bbox{f}(\sigma_j,t)$ is connected to $\bbox{f}(\sigma_i+\delta,t)$, with care taken to ensure that the proper velocity is assigned to each link. Prior to each time step, one tests each of the points to see if it lies in the same position as any other. One inter--commutes all the colliding links, and then evolves time for one step $\delta$ and checks the strings again for inter--commutations. To ensure that cusps are not mistaken for collisions, we set a minimum separation in $\sigma$ space, 4$\delta$, required before an inter-commutation within a single string is allowed. This fixes a minimum loop size, but the structure of the long strings is insensitive to the choice so long as it is small in comparison to the overall string length. In passing we mention that the order $N^2$ process of testing each point for intersection can be reduced to a process of order a few times $N\log N$ if one first sorts the points with respect to their positions using a quick sorting routine. The Smith--Vilenkin algorithm is a powerful tool for evolving networks of cosmic strings, and now we shall consider the initial conditions which we will use to produce long string segments for gravitational lensing. We start with the standard initial conditions introduced by Vachaspati and Vilenkin \cite{vachaspati}, who lay down a periodic lattice of phases and locate strings through the faces of the lattice cubes which have non--zero winding number. The result is a network of strings made of static segments which are parallel to one of the axes of the box, ideally suited as initial conditions for the Smith--Vilenkin evolution algorithm. To ensure that space discretization effects do not strongly influence the resulting strings, each of these segments was subdivided into sixteen static links 2$\delta$ long, where the overall segment length is 32$\delta$. Tests of our simulations found that subdividing the segments with a larger number of links did not change the structure of the evolved network, so spatial discretization effects have been minimized. Ideally, we would like to evolve this network until it has completely relaxed, and use the resulting long strings for our lensing calculations. Unfortunately, the periodic boundary conditions ensure that there can be no net string flux through the box, meaning that given sufficient time, all of the long strings will fragment into loops, leaving us nothing to study. To minimize these periodic effects, we evolve the network for a time equal to half the light-crossing time, {\it i.e.}~if the box is $n\delta$ wide, we take $n/2$ time steps, each $\delta$ long. We expect scales of order the box size to preserve the structure of the initial conditions, but on scales smaller than $n \delta / 2$, interactions should have sufficient time to relax the system to its final state. Previous analysis of evolved string networks has shown that the small scale structure of the surviving long strings is self--similar, that is, the structure is statistically the same on all scales well below the horizon \cite{refs}. We compare our results with those of our predecessors as a consistency check. 
The simplest test is to measure the fractal dimension of the string, defined as the exponent $n$ such that $L \propto D^n$, where $L$ is the mean conformal length of string measured between points separated by a distance $D$ in physical space. For self--similar structures, $n$ is a constant for all $D$, and for the particular example of a random walk string, $n = 2.0$. In figure \ref{fractal} we show a log--log graph of the conformal length $L$ plotted as a function of $D$ for the long strings produced in a box simulation with periodic length 1024$\delta$ at times $t = 0\delta,~256\delta$, and $512\delta$. Note that our initial segments were formed in a $32^3$ box of phases and each initial segment was 32$\delta$. We can see that for the unevolved strings the fractal dimension is roughly uniform above the initial link scale. A linear regression of these points gives a slope of $n = 2.0$, which is what one expects from a random walk. As the simulation evolves, we see that the shorter length scales begin to relax into a new structure with a smaller fractal dimension, and by the time $t = 512\delta$, scales below $D \sim 512\delta$ have almost completely relaxed. A fit to these points gives a fractal dimension of $n = 1.3$, a result consistent with those of Sakellariadou and Vilenkin \cite{sakellariadou}, and interestingly, also consistent with the early time results of expanding universe simulations. In the latter type of simulation, the strings apparently first fragment and relax on time scales that are short compared to the expansion time of the universe and are then stretched so that $n$ falls below the initial flat space value. \subsection{Finding Images With Long Strings} \label{images} Now that we have a network of realistic long strings to work with, we consider how to use them in gravitational lensing systems. The best approach would be to evolve a large network with very fine spatial resolution (on the order of $10^{-5}$ of the box length, where we could associate that scale with a few times the horizon scale), and just send photons through the box. Of course, the numerical resolution required to perform such a calculation is well beyond the capacity of current computers, so as an alternative, we exploit the fractal nature of the small scale structure of the string discussed in the previous subsection. Sections of string which can fit in a box of length about half the simulation box relax into a self--similar structure described by a constant fractal dimension, meaning that if we were to magnify a small piece of this string, we would observe the same structure in proportion to the new scale. Thus, to use our long strings, which are not resolved on the scales relevant to gravitational lensing, we need only rescale the string, as long as we restrict ourselves to segments which have had sufficient time to relax. Let us be more specific about precisely how we accomplish this. From our simulations described in the previous section, we have a periodic box filled with loops and long strings. We can eliminate the loops by considering only strings which are significantly longer than the box length, leaving the long strings which tend to wrap around the box periodically once or more. It is the smaller scales, for which the string has relaxed to its final structure, that we wish to consider. To do so, we remove a long string from the box entirely, laying it out end to end, no longer in a periodic box. 
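For reference, the fractal-dimension measurement described above can be sketched in a few lines; a synthetic random-walk string is used here in place of actual simulation output, so the fitted slope should come out close to $n = 2$:
\begin{verbatim}
# Sketch: measure n in L ~ D^n by regressing log(conformal length) against
# log(mean spatial distance), using a random-walk string as a stand-in.
import numpy as np

rng   = np.random.default_rng(0)
steps = rng.normal(size=(200000, 3))
steps /= np.linalg.norm(steps, axis=1)[:, None]   # unit steps, conformal length 1
f = np.cumsum(steps, axis=0)                      # string positions along sigma

lengths, distances = [], []
for L in (2**k for k in range(4, 15)):            # conformal separations L
    D = np.linalg.norm(f[L:] - f[:-L], axis=1)    # spatial separations
    lengths.append(L)
    distances.append(D.mean())

n, _ = np.polyfit(np.log(distances), np.log(lengths), 1)
print(n)                                          # ~2 for a random walk
\end{verbatim}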
If the string wraps around the original network box more than once, we connect the box-size segments by shifting the endpoints to make one super--long string. Now we need to determine the light cone projection of the string required to find the deflection given by eq.\ (\ref{deflecttilde}). We accomplish this by allowing our single long string to evolve independently of the network, turning off all inter--commutations and connecting the end points periodically. Inter--commutations are ignored because we do not want to alter the small scale structure of the string. A photon is presumed to travel along one of the axes, and the location of the intersection of each of the string points with the light cone, along with the string velocity, is recorded. Since each string point is equally spaced in $\sigma$, we can use the set of light cone projected points to reduce the integral in eq.\ (\ref{deflecttilde}) to a discrete sum. We do not, however, wish to use the entire long string, since the structure has not relaxed on the largest scales. Instead, starting at an arbitrary point, we select shorter segments--those which can fit in a box with side length half that of the original network--and use only these points to calculate the photon deflection. In truncating the summation and considering only a finite string segment, we obviously introduce errors which we would like to quantify. As an order of magnitude estimate, let us consider the special case of a finite straight string segment perpendicular to the photon trajectory with equal lengths $\ell$ above and below the photon axis. We consider the photon to be moving along the $z$ axis and place the string segment parallel to the $y$ axis a distance $d$ from the origin (defined by the photon) so that the deflection will be along the $x$ direction. Using eq.\ (\ref{deflect}), while observing that $\dot{f}_\| = 0$ and $F_{\mu\nu} \gamma^\mu\gamma^\nu = 1$, we see that the magnitude of the deflection will be proportional to \begin{equation} \label{truncate} \int_{-\ell}^\ell d\sigma~{ d \over \sigma^2 +d^2} = 2 \tan^{-1}(\ell/d), \end{equation} and the truncation error, that is the difference between $\ell \rightarrow \infty$ and finite $\ell$, goes as $\pi - 2 \tan^{-1}(\ell/d)$. In the limit of large $\ell$ this is approximately $2 d / \ell$. If instead we wish to look at a fractal string, then we should replace the $\sigma^2$ in the integral with something proportional to $\sigma^{2/n}$, which gives a truncation error that falls like $(d / \ell)^{(2 / n) - 1}$. So, we see that the error depends on how far the photon passes from the string, which is fortunate since most of the interesting lensing occurs for photons which pass close to the string segment. We can also see that the thin lens approximation is justified. So long as $d$ is much smaller than the distances of the source and observer from the lens, $\sim D$, the deflection occurs in a region within a few lengths $d$ of the string, and the corrections arising from the non--instantaneous deflection are second order in $d/D$. Given the projected string segment, we have everything necessary to examine gravitational lensing, but now the challenge is to solve the lens equation. Currently there is no fast way to invert a multidimensional equation like eq.\ (\ref{lens}), so our only choice is to solve the problem by brute force. 
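Before describing that inversion, note that the truncated deflection itself reduces to a short sum over the projected links; as a sanity check, the sketch below discretizes eq.\ (\ref{deflect}) for the static straight segment considered above and recovers the $2\tan^{-1}(\ell/d)$ behaviour of eq.\ (\ref{truncate}) (the link spacing is an arbitrary numerical choice, and the overall factor $4G\mu$ is omitted):
\begin{verbatim}
# Sketch: discretized deflection of eq. (deflect) for a static straight
# segment of half-length ell a distance d from the photon, compared with
# the analytic truncated result 2*atan(ell/d) of eq. (truncate).
import numpy as np

def deflection_sum(d, ell, n_links=4001):
    sigma = np.linspace(-ell, ell, n_links)   # link positions along the segment
    dsig  = sigma[1] - sigma[0]
    # static string: F gamma gamma / (1 - fdot_par) = 1,
    # |f_perp - x_perp|^2 = sigma^2 + d^2
    return np.sum(d/(sigma**2 + d**2))*dsig

d, ell = 1.0, 50.0
print(deflection_sum(d, ell), 2.0*np.arctan(ell/d))   # both approach pi for large ell/d
\end{verbatim}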
Given a source and lens redshift, we calculate the deflection $\bbox{\alpha}(\bbox{x})$ on a uniform grid in $\bbox{x}$ space, and then use the lens equation to solve for $\bbox{y}$ at each $\bbox{x}$. In other words, we are mapping a uniform grid in the image plane back onto the source plane, where we can use this map to locate the images of a particular source. We compare triangles made of nearest neighbors in the image grid mapped onto the source plane to the locations of the sources. If a source point is enclosed by a mapped image triangle, then its image must lie somewhere in that triangle in the image plane, and thus, for any source point, we can locate its image to within the uncertainty given by the image grid spacing. One must be careful, though, when considering image triangles that enclose a piece of string, because the photon deflection is discontinuous as $\bbox{x}$ crosses the string. Images in these triangles should be ignored because they represent photons which must pass through the string itself to be observed and are therefore spurious. Since our strings are resolved down to the scale $\delta$, it is natural that the image grid spacing should be $\delta$ as well. And, having assumed a self--similar structure for the string, we are free to choose the physical scale that $\delta$ represents. For the objects we shall consider for lensing, we find that a good choice for this scale is $\delta / D_l = 0.1$ arc sec. Using the techniques described above, we shall consider two types of sources, point--like and extended, which will roughly correspond to quasars and galaxies respectively. Let us first consider quasars, as they represent what are likely the best objects to look at when trying to observe strings through lensing. Quasars possess a trio of virtues regarding lensing, namely they are high redshift objects ($z \sim 1-5$), they are bright and therefore easily observed, and they are typically separated by large enough distances that the chance coincidence of two different quasars being separated at the same scales as typical lensing systems is rare. However, there are a sufficient number to make the observation of string quasar lensing systems probable (see \S \ref{probability}). Quasars are compact, distant, and cannot be spatially resolved with current technology. Thus we shall treat them as idealized point sources. The locations of the resulting images are found as we have described, along with their magnifications, which are determined by eq.\ (\ref{determinant}). Figures \ref{quasar1}, \ref{quasar2}, and \ref{quasar3} show the results of quasar lensing with three different segments of cosmic string. In each panel, the source location is shown as a hatched circle, the images as open circles, and the projected string segment as a dashed line. The relative area of the image circles gives the ratio of the magnification of each image to that of the source. The string is located at a redshift of one and the quasar is located at a redshift of two, while the angular size of each panel is 25 arc sec. The full length of string used in the lensing calculations is not shown, but we use the same segments for the galaxy lenses, and we do show the full segments in figures \ref{source1}, \ref{source2}, and \ref{source3} respectively. Typical of many of these examples are a number of small demagnified images which reside close to the string itself. These results are qualitatively similar to those one would see if we replaced the string by several point masses with masses similar to the energy in the string.
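To make the brute-force image search concrete, the sketch below maps a uniform image-plane grid to the source plane and applies a point-in-triangle test to locate image candidates. The deflection field used here is a simple point-mass stand-in (in the actual calculation the discrete string deflection sum of eq.\ (\ref{deflecttilde}) would be used), and the grid size, Einstein radius and source position are purely illustrative.
\begin{verbatim}
import numpy as np

def deflection(x):
    """Stand-in deflection field alpha(x): a point-mass lens at the origin.
    In the actual calculation this is replaced by the discrete sum that
    approximates the string deflection integral."""
    r2 = np.sum(x**2, axis=-1, keepdims=True)
    theta_e2 = 1.0                              # (Einstein radius)^2, illustrative
    return theta_e2 * x / np.maximum(r2, 1e-12)

def in_triangle(p, a, b, c):
    """Barycentric sign test: is point p inside triangle (a, b, c)?"""
    cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
    d1, d2, d3 = cross(b - a, p - a), cross(c - b, p - b), cross(a - c, p - c)
    return not (((d1 < 0) or (d2 < 0) or (d3 < 0)) and
                ((d1 > 0) or (d2 > 0) or (d3 > 0)))

# uniform grid in the image plane (the spacing plays the role of delta)
N = 200
g = np.linspace(-5.0, 5.0, N)
X = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)   # (N, N, 2)
Y = X - deflection(X)                                     # lens equation y = x - alpha(x)

# locate the images of one point source: a mapped triangle that encloses the
# source brackets one image.  (Cells straddling the lens centre, where the
# mapping is discontinuous, give spurious hits -- the analogue of triangles
# crossing the string -- and would be rejected in the full calculation.)
source = np.array([0.3, 0.1])
images = []
for i in range(N - 1):
    for j in range(N - 1):
        qx = [X[i, j], X[i + 1, j], X[i + 1, j + 1], X[i, j + 1]]
        qy = [Y[i, j], Y[i + 1, j], Y[i + 1, j + 1], Y[i, j + 1]]
        for tri in ((0, 1, 2), (0, 2, 3)):
            if in_triangle(source, *(qy[k] for k in tri)):
                images.append(np.mean([qx[k] for k in tri], axis=0))

print("candidate image positions (grid-spacing accuracy):")
for im in images:
    print(im)
\end{verbatim}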
With point masses, the deflection induced by neighboring points cancels around the midpoint between the two masses. Small images tend to form here because the deflection angle changes rapidly around the minima. For strings, it is the kinks and wiggles which provide a similar opportunity. Inside a kink, contributions from different parts of the string can cancel, leading to rapid changes in the deflection. Kinks can also produce small images by simply being large concentrations of energy. They produce images in a manner similar to point masses when the source is outside the Einstein ring. The secondary image is smaller and therefore demagnified. High redshift galaxies also provide interesting candidates for cosmic string lensing, but observing these systems is significantly more challenging. The greatest difficulty lies in observing such objects because they are so faint. Things are further complicated because foreground galaxies may also contaminate the systems, making it more difficult to observe the images. Also, typical galaxies are not spherical, making it difficult to determine whether one is observing a lensing arc or merely an edge-on galaxy. However, to qualitatively illustrate a galaxy lensing system, we consider an idealized case of a set of extended spherical sources located at a redshift of $z=2$, lensed by a string at $z=1$. The angular size of the sources is 0.5 arc sec, which corresponds roughly to the observed size of high redshift galaxies. Seven such sources are scattered randomly inside a $(25~{\rm arc~ sec})^2$ viewing area, mimicking the approximate angular density of real high redshift galaxies \cite{sawicki}. In figures \ref{source1}, \ref{source2}, and \ref{source3} we show the full string segment used in the lensing calculation along with the sources to be lensed. The dashed box shows the area for which we calculate the images; outside this region, we expect that the photon deflections will not be particularly accurate due to truncation error. In figures \ref{image1}, \ref{image2}, and \ref{image3} we show the observed images corresponding to their respective sources, and again we note the proliferation of smaller, demagnified images similar to those observed in the quasar images. \section{Conclusions} \label{conclusion} From the results in the previous section, we can see that cosmic strings can produce images which have characteristics unlike those of the more prosaic galaxy lens. In particular, the proliferation of small demagnified images is a signature that may be unique to long cosmic strings. This suggests the exciting possibility that a cosmic string can be positively identified through gravitational lensing, confirming the existence of these topological defects. Unfortunately, our enthusiasm must be tempered by two important caveats. The first is that these strings are the product of Minkowski space simulations and do not include the effects of the expanding universe. When these are considered, the structure on the string is stretched so that the fractal dimension falls to approximately $n = 1.1$ \cite{bouchet}, so we expect real strings to have smaller kinks and wiggles. The other effect that we ignored is that of gravitational radiation. For loops, it is well known that they radiate energy at a rate of $\Gamma G \mu$ where $\Gamma$ is on the order of 100, so loops shorter than $\Gamma G \mu t$ will have radiated completely away. The effect of gravitational radiation on the structure of fractal strings is not as well understood.
An analytic example of a helical string has been calculated by Sakellariadou \cite{sakellariadou1}, while the case of small amplitude kinks has been considered by Hindmarsh \cite{hindmarsh}. Their results suggest that fluctuations on long strings should have a life expectancy of about $d/G\mu$, where $d$ is the typical separation length between kinks--on the same order as the fluctuation. So, one might expect that the strings are straight on scales smaller than $G\mu$ times the horizon scale at the epoch of lensing. For a string located at $z = 1$ this corresponds to an angular scale of 0.5 arc sec, so gravitational radiation may just influence the long string lensing structure. Our examples then represent the most extreme results that one should expect, and real signatures may be less distinct. However, because the small scale structure of our strings is responsible for the demagnified images, we can conclude that gravitational lensing may be a good way to probe that small scale structure. If strings are relatively smooth on the scales we considered in our examples, then any strong lenses observed should produce a pair of undistorted images like those resulting from an infinite straight string. Conversely, if there is significant small scale structure, then one expects to see a number of demagnified images like those seen in the figures. In \S \ref{probability}, we found that quasar--string lensing systems should constitute about 10\% to 30\% of the observed quasar lenses, where the rest arise from lensing by galaxies. We suggest that the search for gravitational lenses could on its own confirm or rule out the cosmic string model of structure formation, depending on whether such lenses are observed. Since the strings are obviously correlated, a large angle of sky coverage with a large number of quasars is required. In fact, precisely such a survey is currently being developed, namely the Sloan Digital Sky Survey. Some $10^5$ quasars with a mean redshift of two are to be observed in a $\pi$ steradian slice of the sky. We have estimated that one should observe on the order of ten string lenses in the SDSS for an $\Omega = 1$ cosmic string model. The failure to observe any string lensed quasars would require $G\mu < 10^{-6}$, making strings an unlikely candidate for structure formation. Conversely, should a suspected string lens be observed, it is possible to confirm it by looking at galaxy observations in the neighborhood of the lens. With precise observations, one expects to see lensing of high redshift galaxies by nearby parts of the same string. In concert then, quasar observations followed by galaxy observations could provide definitive proof of GUT scale cosmic strings, or rule out cosmic strings as a viable model for the formation of large scale structure. We would like to thank Tanmay Vachaspati for his help and suggestions. This work was supported with funding from the Department of Energy.
\section{Introduction} The first detection of gravitational waves (GWs) from a binary neutron star (BNS) merger, GW170817, in August of 2017 signaled the beginning of the era of GW-multimessenger astronomy \cite{Abbott2017}. Finite-size effects during the pre-merger phase lead to constraints on the tidal deformability~\cite{Hinderer2010} (which is related to the radius) directly from the GW signal~\cite{Abbott2017,Chatziioannou2018,PhysRevLett.121.161101,De2018,Carney2018}. With additional detections anticipated in the forthcoming observation runs of the aLIGO/aVIRGO detectors, constraints from the inspiral phase are expected to gradually tighten, e.g.\ \cite{Read2013,DelPozzo2013,Wade2014,Agathos2015,Chatziioannou2015,Hotokezaka2016,Chatziioannou2018}. Employing a multi-messenger interpretation of GW170817, i.e.\ exploiting additional information from the electromagnetic counterpart, additional constraints on neutron-star (NS) parameters were derived, including a robust lower bound on NS radii~\cite{Margalit2017,Bauswein2017,Shibata2017,Rezzolla2018,Radice2018,Ruiz2018,Coughlin2018,Koeppel2019,Kiuchi2019,Capano2019,2019AIPC.2127b0013B,BAUSWEIN2019167958}. Constraints on NS radii and the tidal deformability can be directly translated to constraints on the high-density part of the NS equation of state (EOS) \cite{Fattoyev2017,Raithel2018,PhysRevLett.121.161101,2019arXiv190511212T,2019arXiv190605978C}. Another method for directly measuring NS radii is through observations of the {\it postmerger} phase of BNS mergers (see~\cite{Bauswein2012,Bauswein2012a} for initial publications and \cite{2019arXiv190106969B,2019arXiv190708534B} for recent extensive reviews and references therein). For GW170817 the GW instruments were not yet sufficiently sensitive to detect GW emission from the postmerger phase~\cite{Abbott2017a}, but measurements can be anticipated when the detectors reach design sensitivity or when projected upgrades are installed, e.g.~\cite{Torres-Rivas2019}. For typical NS masses, this method is complementary to measuring the tidal deformability in the inspiral phase, but it also has the potential of placing even tighter constraints on the radii of massive NSs, the maximum mass of nonrotating NSs, the tidal deformability, or to probe the existence of a quark core~\cite{Bauswein2012a,Bauswein2013,Bauswein2014a,CORE1,Most2019,Bauswein2019a}. This is because the remnant in the postmerger phase reaches higher maximum densities, which are inaccessible to methods that consider the relatively light progenitor stars before merging. The remnant of a BNS merger with a total mass sufficiently low to avoid prompt collapse is a stable or meta-stable differentially rotating NS, whose dynamics are influenced mainly by the EOS, the total binary mass and the mass ratio. Gravitational waves emitted in the post-merger phase contain quasi-discrete, long-lived frequency components, as well as short-lived initial transients, e.g.~\cite{Shibata2005a,Stergioulas2011,Bauswein2015,Takami2015,Paschalidis2015,Clark2016,Foucart2016,Rezzolla2016,Radice2016a,Maione2017}. These originate from specific mechanisms that are sensitive to the EOS. By relating the post-merger spectrum to properties of individual NSs one can constrain the EOS. Specifically, the postmerger spectrum has several distinct peaks in the kHz regime which are produced by certain physical mechanisms connected to oscillation modes and dynamical features of the postmerger remnant.
The dominant oscillation frequency $f_{\mathrm{peak}}$ in the GW spectrum is a generic feature, which occurs in all merger simulations that do not result in a prompt collapse \cite{Shibata2005}. The underlying mechanism that produces this frequency is the excitation of the \textit{fundamental quadrupolar fluid mode $l=m=2$}, as shown in~\cite{Stergioulas2011,Bauswein2016}. At frequencies somewhat smaller than $f_{\mathrm{peak}}$ two additional, potentially detectable secondary peaks can appear, $f_{\mathrm{2-0}}$ and $f_{\mathrm{spiral}}$~\cite{Shibata2005a,Stergioulas2011,Bauswein2015,Bauswein2016}. If we denote the frequency of the fundamental quasi-radial mode of the remnant as $f_0$ (which itself produces extremely weak GW emission), then \textit{quasi-linear combination frequencies} $f_{2\pm 0} = f_2 \pm f_0$ are present in the GW spectrum (where $f_2 \equiv f_{\mathrm{peak}}$). The existence of such combination frequencies is a natural consequence of the nonlinear nature of the evolution of simultaneous oscillations in the remnant. In some models, the $f_{2-0}$ peak is potentially detectable, while in others it is suppressed, due to a strong damping of the postmerger quasi-radial oscillations~\cite{Bauswein2015}. The other secondary peak, $f_{\rm spiral}$, occurs at frequencies between $f_{\mathrm{2-0}}$ and $f_{\mathrm{peak}}$~\cite{Bauswein2015}. This secondary peak is generated by the orbital motion of two antipodal bulges that form at the surface of the remnant after the merging, due to a tidal deformation, which has a spiral form in the case of equal-mass remnants. Matter in the two antipodal bulges orbits around the remnant with an orbital frequency smaller than the pattern speed of the $l=m=2$ $f$-mode oscillating in the inner region. This is a transient feature that lasts only for a few milliseconds. Bivariate empirical relations between the dominant postmerger frequency $f_\mathrm{peak}$ and EOS properties were first investigated for fixed binary mass configurations, varying the total mass and the mass ratio \cite{Bauswein2012,Bauswein2012a}. Stellar parameters of nonrotating NSs are uniquely linked to the EOS through the Tolman-Oppenheimer-Volkoff (TOV) equations. For example, the peak frequency $f_\mathrm{peak}$ of 1.35-1.35~$M_\odot$ mergers shows a clear correlation with the radius $R_{1.35}$ of a nonrotating NS with 1.35~$M_\odot$ (see Fig.~4 in \cite{Bauswein2012} and Fig.~12 in \cite{Bauswein2012a}). Similar tight correlations exist for other fiducial masses (see Figs.~9 to 12 in \cite{Bauswein2012a}). The tightest relation for 1.35-1.35~$M_\odot$ mergers is with the radius $R_{1.6}$. This relation can be written as \begin{equation} f_{\mathrm{peak}} = \left\{ \begin{array}{ll} -0.2823 \cdot R_{1.6} + 6.284, & \textrm{for } f_{\rm peak} < 2.8~\mathrm{kHz}, \\ -0.4667 \cdot R_{1.6} + 8.713, & \textrm{for } f_{\rm peak} > 2.8~\mathrm{kHz}, \end{array} \right. \end{equation} (the maximum deviation of the data points from a least-squares fit is considered as a figure of merit to assess the quality and accuracy of the relations). For $R_{1.6}$ the maximum scatter is less than 200~m.
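For illustration, the piecewise relation above can be evaluated for a given $R_{1.6}$ as in the following minimal sketch (the small overlap of the two branches near 2.8~kHz is resolved by checking which branch is consistent with its own condition):
\begin{verbatim}
def fpeak_from_R16(R16_km):
    """Dominant postmerger frequency (kHz) of 1.35-1.35 Msun mergers from the
    bivariate relation f_peak(R_1.6) quoted above."""
    f_low = -0.2823 * R16_km + 6.284    # branch valid for f_peak < 2.8 kHz
    f_high = -0.4667 * R16_km + 8.713   # branch valid for f_peak > 2.8 kHz
    if f_high > 2.8:
        return f_high
    if f_low < 2.8:
        return f_low
    return 0.5 * (f_low + f_high)       # marginal case near the crossover

# example: R_1.6 = 13 km gives f_peak of roughly 2.6 kHz
print(fpeak_from_R16(13.0))
\end{verbatim}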
For other fixed binary masses, e.g.\ 1.2-1.2~$M_\odot$, 1.2-1.5~$M_\odot$ or 1.5-1.5~$M_\odot$ mergers, similar scalings between $f_\mathrm{peak}$ and NS radii exist~\cite{Bauswein2012a}, and a single relation, scaled by the total mass, is \cite{Bauswein2016} \begin{equation} f_{\rm peak} / M_{\mathrm{tot}} = 0.0157 \cdot R_{1.6}^2 - 0.5495 \cdot R_{1.6} + 5.5030, \end{equation} (see~\cite{CORE1} for a similar rescaling but with the tidal coupling constant). For nonrotating stars it is known that the frequency of the fundamental quadrupolar oscillation mode roughly scales with the square root of the mean density, $\sqrt{M/R^3}$ \cite{Andersson1998}. For fixed-mass sequences a strong radius dependence may thus be expected. Since the mass of merger remnants typically exceeds the maximum mass of nonrotating NSs, the oscillation frequencies of the remnant cannot be directly connected to oscillation modes of a nonrotating NS of the same mass. However, the corrections due to rotation and the extrapolation to higher masses are likely to depend in a continuous manner on the EOS. A detailed investigation of oscillation modes of differentially rotating merger remnants is still lacking, but quasi-normal modes for uniformly rotating stars in full general relativity have already been calculated \cite{2019arXiv191008370K}. A tentative explanation of the relations between $f_\mathrm{peak}$ and NS properties is presented in \cite{Chakravarti2019}. For a detailed summary of the work leading to the present publication see the review article \cite{2019arXiv190106969B}. Here, we extend the scaled bivariate empirical frequency-radius relations of \cite{Bauswein2016} to multivariate relations, by including the dependence on the chirp mass $M_{\rm chirp}$ of binary systems. This dependence of the frequency on both the radius and the chirp mass is expanded to second order, yielding accurate empirical relations over a wide range of masses. The procedure is repeated for the secondary peaks, demonstrating a clear distinction between $f_{2-0}$ and $f_{\rm spiral}$, in agreement with \cite{Bauswein2015}. For $f_{\rm peak}$, we also construct the inverse multivariate empirical relations, which describe the radius as a function of $f_{\rm peak}$ and $M_{\rm chirp}$, again with terms expanded up to second order. These inverse relations can be implemented directly in the data analysis of GW searches~\cite{Clark2014,Clark2016,Chatziioannou2017,Bose2018,Yang2018,Torres-Rivas2019} and show a consistency in determining the radius over a wide range of neutron star masses. Moreover, we employ a machine-learning algorithm to corroborate the existence of distinct classes of postmerger spectra, depending on the strength and presence of the different secondary GW features. The algorithm detects three different types of postmerger spectra, fully in line with the spectral classification scheme introduced in~\cite{Bauswein2015}, where three classes of postmerger GW spectra were manually identified depending on the presence or absence of $f_{2-0}$ and $f_{\rm spiral}$. Constraints on the high-density EOS can also be set by inferring the tidal deformability of neutron stars in the inspiral phase (see~\cite{Abbott2017,Chatziioannou2018,TheLIGOScientificCollaboration2018a,PhysRevLett.121.161101,De2018,Carney2018} as well as \cite{2019arXiv190708534B} and references therein).
On the other hand, when using gravitational waves generated by the postmerger oscillations, the EOS constraints obtained through the inference of the tidal deformability should be nearly (but not exactly) equivalent to EOS constraints obtained through the inference of radii. In addition to our empirical relations for radii, we thus construct multivariate empirical relations also for tidal deformabilities. In \cite{CORE1}, a bivariate empirical relation between $f_{\rm peak} M_{\rm tot}$ and the dimensionless quadrupole tidal coupling constant $\kappa_{2}^{\mathrm{T}}$ was found (see also \cite{Takami2015,Rezzolla2016}), whereas in \cite{2019PhRvD.100d4047T} a similar relation in terms of the mass-weighted tidal deformability $\tilde \Lambda$ (adjusted for the mass dependence) was constructed. The multivariate relations we construct are of the form $\Lambda_{\rm x}(M_{\rm chirp}, f_{\rm peak})$, where $\Lambda_{\rm x}$ is the dimensionless tidal deformability at a specific mass (indicated by the subscript ${\rm x}$, in solar masses). These relations depend only on quantities that are directly measurable from the gravitational wave signal and are of significantly better accuracy than the corresponding bivariate relations. The paper is organized as follows. In Sect.~\ref{sec:data} we summarize the data which we use for constructing empirical relations. Then we describe fits for postmerger GW frequencies in Sect.~\ref{sec:freq}. Sect.~\ref{sec:machine} discusses the results of machine-learning algorithms, which we employ for the identification of different types of postmerger spectra. In Sect.~\ref{sec:rad} empirical relations for the determination of NS radii from measured postmerger frequencies are presented. We describe an application of these relations for constraining the mass-radius relation of neutron stars in Sect.~\ref{sec:mr}. The validation of the empirical relations using an independent data set is discussed in Sect.~\ref{CORE:relations}. Sect.~\ref{sec:fpeakL} presents empirical relations for the dominant postmerger frequency in terms of tidal deformabilities and Sect.~\ref{sec:Lfpeak} discusses the inverse relations. We close with a discussion and our conclusions in Sect.~\ref{sec:sum}. Throughout the text, \textit{all frequencies} in empirical relations and figures are given in units of kHz and \textit{all masses} are given in units of $M_\odot$ and refer to the gravitational mass (for binary systems at infinite orbital separation). Radii refer to the circumferential radius. \begin{figure*}[ht] \includegraphics[width=17cm]{./figures-final/surface_fRM_fpeak_Rall_e0.png} \caption{Surfaces $f_{\mathrm{peak}}(R_{\mathrm{x}},M_{\mathrm{chirp}})$ using the whole SPH/CFC data set. Red dots show the extracted frequencies $f_{\mathrm{peak}}$ scaled by the chirp mass $M_{\rm chirp}$ (in units of kHz/$M_\odot$), while the light blue surface represents the empirical relations of the form of Eq. (\ref{fRM}). In the different panels, the radius of nonrotating neutron stars of mass 1.2, 1.4, 1.6 and 1.8$M_\odot$ was used. The surfaces are shown only in regions where data points are available.} \label{fRMsurfaces} \end{figure*} \section{Data sets}\label{sec:data} We construct empirical relations for the main post-merger GW frequencies using two different catalogues of GW waveforms.
We start with 90 waveforms produced by a smoothed-particle hydrodynamics (SPH) code~\cite{Bauswein2012a,Bauswein2013a,Bauswein2014a,Bauswein2015} in the general-relativistic spatial conformal flatness (CFC) approximation~\cite{Wilson1996,Isenberg1980}. After establishing the new empirical relations, we use 28 waveforms of the publicly released CoRe data set~\cite{CORE} (which were produced by simulations in full general relativity and with high-resolution shock-capturing methods) to confirm the validity and accuracy of the new empirical relations. Finally, we produce empirical relations based on the combined data sets. \subsection{CFC/SPH GW catalogue} \label{CFC-SPH} Our first GW catalogue of BNS mergers is produced with a 3D SPH code~\cite{Oechslin2002,Oechslin2007,Bauswein2010a,Bauswein2012a}, which employs the CFC approximation for the evolution of the spacetime~\cite{Wilson1996,Isenberg1980}. Gravitational waves are extracted through a modified version of the quadrupole formula~\cite{Oechslin2007}. Both temperature-dependent EOSs and cold, barotropic models with an approximate treatment of thermal effects (see~\cite{Bauswein2010} for details and an assessment of this approximation in the context of BNS mergers) are used. There are 49 equal-mass models, with masses ranging from 1.2 $M_\odot$ to 1.9 $M_\odot$, and 41 unequal-mass models, with masses ranging from 1.2 $M_\odot$ to 2.0 $M_\odot$ and mass ratios as low as 0.67. A summary of the main properties of this catalogue is given in Appendix \ref{Appendix.D} in Table \ref{table:cfc/sph-data} and in Appendix \ref{Appendix.A} in Figs. \ref{fig:eoschirp} and \ref{fig:eosconf}. Before Fourier transforming the time domain data, we applied a Tukey window with a rolloff parameter $\alpha = 0.1$ and zero-padded each time series to 16384 samples in total. We construct the effective amplitude $h_{\rm eff} = \tilde{h} \sqrt{f}$, where $\tilde{h}$ is the Fourier transform of the time domain GW signal, from which individual frequency peaks are extracted. The extraction of the dominant postmerger frequency $f_{\rm peak}$ is always unambiguous, since it is the peak with the highest effective amplitude in the postmerger phase. For the extraction and identification of the two secondary peaks $f_{2-0}$ and $f_{\rm spiral}$ we use the spectral classification scheme introduced in \cite{Bauswein2015}, which distinguishes three different types of postmerger spectra: Type I, where $f_{2-0}$ dominates over $f_{\rm spiral}$, Type II, where $f_{2-0}$ and $f_{\rm spiral}$ are roughly comparable in amplitude, and Type III, where $f_{\rm spiral}$ dominates over $f_{2-0}$. The occurrence of the different types depends in a systematic way on the EOS and the binary masses. Specifically, the $f_{2-0}$ frequency can be found in the range $f_{\rm peak}-1.3$~kHz to $f_{\rm peak}-0.9$~kHz (except for models very near the threshold mass to collapse, where the quasi-radial frequency diminishes), whereas the $f_{\rm spiral}$ frequency can be found in the range $f_{\rm peak}-0.9$~kHz to $f_{\rm peak}-0.5$~kHz. In the cases where a model is of Type I ($f_{2-0}$ dominates over $f_{\rm spiral}$) or Type III ($f_{\rm spiral}$ dominates over $f_{2-0}$) the correct identification of the main secondary frequency is straightforward.
In a small number of cases, mainly of Type II, where $f_{2-0}$ and $f_{\rm spiral}$ are of comparable amplitude, some further considerations were required (for example, extraction of the quasi-radial frequency from the hydrodynamical simulation) in order to correctly identify the secondary peaks. In order to relate the postmerger GW frequencies to the radius of individual nonrotating stars, we computed nonrotating models of different masses with the same set of EOSs as for the BNS merger simulations. For EOSs that are defined as piecewise polytropes in \cite{Read2009}, we used the {\tt pyTOVpp} code\footnote{Available at \protect\url{https://github.com/niksterg/pyTOVpp}.}, whereas other EOSs were implemented in their original tabulated form with the RNS code \cite{Stergioulas1995}. Small discrepancies that arise in the determination of the radius of a nonrotating star between the tabulated and the piecewise polytropic approximation of an EOS are within the maximum deviation of the empirical relations. \subsection{CoRe GW catalogue} The CoRe GW catalogue \cite{CORE} is a large public database of BNS waveforms constructed through simulations in full numerical relativity. We selected a subset of models for which the initial stars have zero spin and eccentricity lower than 0.02. In cases where the same model is available for multiple resolutions, we selected the highest resolution (denoted as $R01$ in the CoRe database\footnote{\protect\url{http://www.computational-relativity.org}}). Also, in cases where multiple waveforms were available for initial setups that differed only slightly in mass (due to a different initial separation distance), we selected the model with the lowest initial GW frequency (at the start of the simulation, before merger), which corresponds to the largest initial separation distance. The subset of models we \textit{selected} from the CoRe GW catalogue is described in more detail in Appendix \ref{Appendix.B}, in Figs. \ref{fig:COREeoschirp} and \ref{fig:COREmodels}, and in Appendix \ref{Appendix.D} in Table \ref{table:CORE-data}. This subset includes equal-mass models in the mass range 1.35 $M_\odot$ to 1.5 $M_\odot$ and unequal-mass models in the mass range 0.94 $M_\odot$ to 1.94 $M_\odot$, with a mass ratio as low as 0.49. There are 6 different EOSs in this subset of selected models (compared to 13 different EOSs in the Bauswein et al. catalogue described in Sec. \ref{CFC-SPH}). It also covers a smaller range of chirp masses, $1.06-1.2$, compared to $1.04-1.65$ in the CFC/SPH GW catalogue, but a larger range of mass ratios, $0.49-1.0$, compared to $0.67-1.0$ in the CFC/SPH GW catalogue. For more detailed information on the specific models we selected see the references \cite{CORE1,CORE2,CORE3,CORE4,CORE5,CORE6,CORE7,CORE8}. For our selected subset of models from the CoRe catalogue we only extracted the dominant $f_{\mathrm{peak}}$ frequency, in the same way as described in Sec. \ref{CFC-SPH}. These frequencies are then used in Sec. \ref{CORE:relations} to validate the empirical relations constructed with the CFC/SPH GW catalogue, but also to construct empirical relations for the combined data set (i.e.\ combination of the Bauswein et al. data and the selected subset of the CoRe catalogue) in Sec. \ref{CORE:relations}.
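As a rough illustration of the peak-extraction step used for both catalogues, the sketch below windows and zero-pads a time-domain signal, forms the effective amplitude $h_{\rm eff}=\tilde{h}\sqrt{f}$ and selects the strongest peak in the 1--4 kHz band; the sampling rate and the synthetic test signal are placeholders.
\begin{verbatim}
import numpy as np
from scipy.signal import windows

def extract_fpeak(h, dt):
    """Dominant postmerger frequency (Hz) of a time-domain strain series h
    sampled with spacing dt (s).  A Tukey window with rolloff 0.1 is applied
    and the series is zero-padded to 16384 samples (assumes len(h) <= 16384),
    as described in the text."""
    windowed = h * windows.tukey(len(h), alpha=0.1)
    padded = np.zeros(16384)
    padded[:len(h)] = windowed

    htilde = np.fft.rfft(padded) * dt            # approximate Fourier transform
    freqs = np.fft.rfftfreq(16384, d=dt)
    h_eff = np.abs(htilde) * np.sqrt(freqs)      # effective amplitude h_eff

    band = (freqs > 1.0e3) & (freqs < 4.0e3)     # kHz band of postmerger peaks
    return freqs[band][np.argmax(h_eff[band])]

# usage with a synthetic damped sinusoid at 2.5 kHz (illustrative only)
dt = 1.0 / 16384.0
t = np.arange(0.0, 0.05, dt)
fake = np.exp(-t / 0.02) * np.sin(2.0 * np.pi * 2500.0 * t)
print(extract_fpeak(fake, dt))                   # ~2500 Hz
\end{verbatim}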
\section{Empirical relations for frequencies based on the CFC/SPH catalogue}\label{sec:freq} Using a least-squares minimization method\footnote{The python package {\tt Lmfit} was used, available at \protect\url{https://lmfit.github.io/lmfit-py/}.} (see \cite{lmfit}), we construct two-parameter relations of the form $f_j(R_{\rm x}, M_{\rm chirp})$, where $j$ stands for one of the three frequency peaks $f_{\rm peak}$, $f_{2-0}$ or $f_{\rm spiral}$, ${\rm x}$ stands for the mass of fiducial nonrotating NS models, in solar masses (e.g.\ $R_{1.6}$ stands for the radius of a nonrotating model of mass $M=1.6 M_\odot$), and $M_{\rm chirp}$ is the usual chirp mass for inspiraling binaries. Relations are obtained both for the subset of equal-mass configurations and for the whole set of models, which includes both equal and unequal mass configurations. \begin{figure*} \includegraphics[width=17cm]{./figures-final/3surfaces2gether_R16_R18_e0.png} \caption{Empirical surfaces for frequencies with $R_{1.6}$ and $R_{1.8}$ and for all mass configurations. The blue surface corresponds to $f_{\mathrm{peak}}$, the red surface to $f_{\mathrm{spiral}}$ and the green surface to $f_{\mathrm{2-0}}$. The surfaces are shown only in regions where data points are available.} \label{fRMsurfaces2gether} \end{figure*} The two-parameter empirical relations of the form $f_j(R_{\rm x}, M_{\rm chirp})$ were chosen to be second-order expansions in the two parameters (including a mixed term): \begin{equation} \begin{split} f_j / M_{\mathrm{chirp}}= b_0 + b_1 M_{\mathrm{chirp}} + b_2 R_{\mathrm{x}} + b_3 M_{\mathrm{chirp}}^2 \\ +b_4 R_{\mathrm{x}} M_{\mathrm{chirp}}+ b_5 R_{\mathrm{x}}^2. \end{split} \label{fRM} \end{equation} This relation was obtained for different values of the mass of the fiducial nonrotating NS models (different values of $\rm x$ in $R_{\rm x}$). Specifically, we employ $R_{1.2}$, $R_{1.4}$, $R_{1.6}$ and $R_{1.8}$. In each case, the maximum residual and the {\it adjusted} coefficient of determination $R^2$ were evaluated (see Table \ref{table:fRM} in Appendix \ref{Appendix.E}). Below, we present for each post-merger frequency the empirical relation that has the smallest error. \subsection{Empirical relations for $f_{\mathrm{peak}}$} For the dominant postmerger peak frequency $f_{\rm peak}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained for NSs of mass $1.6 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{peak}} / M_{\mathrm{chirp}}= 13.822 -0.576 M_{\mathrm{chirp}} -1.375 R_{1.6} \\ + 0.479 M_{\mathrm{chirp}}^2 -0.073 R_{1.6} M_{\mathrm{chirp}}+ 0.044 R_{1.6}^2. \end{split} \label{fRM1} \end{equation} This fit has a maximum residual which translates to 0.196 kHz over the whole parameter space and $R^2=0.98$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual in $f_{\rm peak}$ ranges from 0.196 kHz to 0.257 kHz. For the {\it whole set} of models (including both equal and unequal masses), we display the empirical relations of the form of Eq. (\ref{fRM}), for ${\rm x}=$ 1.2, 1.4, 1.6 and 1.8 $M_\odot$, in Fig.
\ref{fRMsurfaces} (notice that the surfaces in this figure are only shown in regions where data points are available, since for higher chirp masses and soft EOSs, i.e.\ small NS radii, the merger remnant directly collapses to a black hole and does not produce strong postmerger GW emission - see \cite{Bauswein2013} for an empirical relation for the threshold mass to collapse). The empirical relation with the smallest residual is obtained for neutron stars of mass $1.8 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{peak}} / M_{\mathrm{chirp}}= 10.942-0.369 M_{\mathrm{chirp}}-0.987 R_{1.8} \\ + 1.095 M_{\mathrm{chirp}}^2 -0.201 R_{1.8} M_{\mathrm{chirp}}+ 0.036 R_{1.8}^2, \end{split} \label{fRM2} \end{equation} which has a maximum residual that translates to 0.247 kHz over the whole parameter space and $R^2=0.976$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual in $f_{\rm peak}$ ranges from 0.247 kHz to 0.374 kHz. \subsection{Empirical relations for $f_{\mathrm{2-0}}$} For the secondary postmerger frequency $f_{2-0}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained for neutron stars of mass $1.6 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{2-0}} / M_{\mathrm{chirp}}= 8.943 + 4.059 M_{\mathrm{chirp}}-1.332 R_{1.6} \\ -0.358 M_{\mathrm{chirp}}^2 -0.182 R_{1.6} M_{\mathrm{chirp}}+ 0.048 R_{1.6}^2, \end{split} \end{equation} with a maximum residual that translates to 0.229 kHz and $R^2=0.931$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual ranges from 0.229 kHz to 0.366 kHz. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is obtained for neutron stars of mass $1.6 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{2-0}} / M_{\mathrm{chirp}}= 9.586 + 4.09 M_{\mathrm{chirp}}- 1.427 R_{1.6} \\ + 0.048 M_{\mathrm{chirp}}^2 -0.261 R_{1.6} M_{\mathrm{chirp}}+ 0.055 R_{1.6}^2, \end{split} \end{equation} with a maximum residual that translates to 0.252 kHz and $R^2=0.947$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual in $f_{\rm 2-0}$ ranges from 0.252 kHz to 0.383 kHz. \subsection{Empirical relations for $f_{\mathrm{spiral}}$} For the secondary postmerger frequency $f_{\rm spiral}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained for neutron stars of mass $1.8 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{spiral}} / M_{\mathrm{chirp}}= 6.264 + 1.929 M_{\mathrm{chirp}}-0.645 R_{1.8} \\ + 0.881 M_{\mathrm{chirp}}^2 -0.311 R_{1.8} M_{\mathrm{chirp}}+ 0.03 R_{1.8}^2, \end{split} \end{equation} with a maximum residual that translates to 0.286 kHz and $R^2=0.944$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual in $f_{\rm spiral}$ ranges from 0.286 kHz to 0.422 kHz.
For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is obtained again for neutron stars of mass $1.8 M_\odot$: \begin{equation} \begin{split} f_{\mathrm{spiral}} / M_{\mathrm{chirp}}= 5.846 + 1.75 M_{\mathrm{chirp}}-0.555 R_{1.8} \\ + 1.002 M_{\mathrm{chirp}}^2 -0.316 R_{1.8} M_{\mathrm{chirp}}+ 0.026 R_{1.8}^2, \end{split} \end{equation} with a maximum residual that translates to 0.27 kHz and $R^2=0.93$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other masses are shown in Table \ref{table:fRM} in Appendix \ref{Appendix.E}. The maximum residual in $f_{\rm spiral}$ ranges from 0.27 kHz to 0.438 kHz. \subsection{Comparison of distinct postmerger frequencies} In Fig. \ref{fRMsurfaces2gether} we display the surfaces corresponding to the empirical relations for the three different postmerger frequencies $f_{\mathrm{peak}}$, $f_{\mathrm{spiral}}$ and $f_{\mathrm{2-0}}$ for the whole CFC/SPH dataset, as a function of $M_{\rm chirp}$ and $R_{\rm x}$ (using $R_{1.6}$ in the left panel and $R_{1.8}$ in the right panel). The surfaces are shown only in regions where data exist. It is clear that the three frequencies are \textit{distinct} in the whole parameter space. This verifies that the two secondary post-merger frequencies $f_{\mathrm{2-0}}$ and $f_{\mathrm{spiral}}$ are distinct, each satisfying a different empirical relation, as proposed in \cite{Bauswein2015}. Our findings are in contrast with the ``quasi-universal'' relation that was initially proposed in \cite{Takami2014,Takami2015} for a single secondary postmerger frequency, denoted there as $f_1$. Ref. \cite{Rezzolla2016} accepts the existence of distinct postmerger frequencies, noting that their $f_1$ frequencies coincide with $f_{\rm spiral}$ in many models and with a different mode in other models (the $f_{2-0}$ frequency is identified in some models), but $f_1$ is still treated as a single feature of the post-merger spectrum that appears to satisfy a quasi-universal relation in the whole parameter space. Inspecting the data for the different extracted frequencies published in \cite{Rezzolla2016}, one can make the case that their $f_1$ frequency coincides with $f_{\rm spiral}$ of \cite{Bauswein2015} in part of the parameter space, whereas it coincides with $f_{2-0}$ in other parts of the parameter space. This has already been remarked in \cite{Bauswein2015} and argued to be fully in line with the unified picture of postmerger GW emission devised therein. This scheme explains (by the underlying physical mechanisms) which secondary peaks are particularly pronounced for different setups (binary mass, EOS) and may thus be denoted as $f_1$ (see also \cite{Clark2016,Bauswein2016,Bauswein2019} for further explanations). Since $f_{\rm spiral}$ and $f_{2-0}$ are in fact distinct frequencies of different origin, that do not satisfy universal relations (unless one restricts to fixed masses), it follows that the quasi-universal relation for $f_1$ suggested in \cite{Takami2014,Takami2015} and again in \cite{Rezzolla2016} can only be thought of as a very rough relation, having a large spread of data points (as is also evident from several outliers in the relevant figures published in the above references). The $f_{\rm spiral}$ frequency (and hence also $f_1$ in \cite{Takami2014,Takami2015,Rezzolla2016}) is, in reality, not universal, but satisfies relations of the form (\ref{fRM}) for each chosen mass of nonrotating models (see also Fig.
7 in \cite{Bauswein2015} regarding the non-universality of $f_{\rm spiral}$ and \cite{2019arXiv190106969B} for a more extended discussion). Furthermore, \cite{Rezzolla2016} suggest that (in their notation) $f_2 \simeq (f_1 + f_3)/2$. But, there is no a priori reason for this relation to hold for models where $f_1$ is in fact $f_{\rm spiral}$. Instead, the existence of the quasi-linear combination frequencies $f_{2-0}$ and $f_{2+0}$ naturally implies $f_2=f_{\rm peak}=(f_{2-0}+f_{2+0})/2.$ \begin{figure} \includegraphics[width=8.5cm]{./figures-final/classification.png} \caption{Spectral classification of the postmerger GW emission, as obtained by a machine-learning algorithm, applied to the whole CFC/SPH data set. The classification is shown in the mass vs. radius parameter space of isolated, nonrotating neutron star models, constructed with various EOSs and masses. A clustering algorithm separates the models into three different types (shown as red boxes for Type I, black $\times$ for Type II and blue circles for Type III). Then, a supervised-learning classification algorithm locates the borders between the three different types in this parameter space (see text for details). The region corresponding to each type is shown in a different color. The results confirm the spectral classification scheme introduced in \cite{Bauswein2015}. Compare to Fig.~5 in~\cite{Bauswein2015}, where waveform models were classified manually, yielding a very similar pattern that is here reproduced by an automated machine-learning algorithm.} \label{fig:classification} \end{figure} \section{Spectral classification of postmerger frequencies using machine learning} \label{sec:machine} In \cite{Bauswein2015}, a spectral classification scheme was introduced, based on the relative amplitudes of the postmerger $f_{\mathrm{2-0}}$ and $f_{\mathrm{spiral}}$ peaks (see also \cite{2019arXiv190106969B} for a recent review). Here, we reproduce the classification of \cite{Bauswein2015}, using a machine-learning algorithm. We choose to define the \textit{distance} between two waveforms $s$ and $h$ to be \begin{equation} \mathcal{D} = 1- \mathcal{M}, \end{equation} where $\mathcal{M}$ is the match \begin{equation} \mathcal{M} = \underset{t_0,\phi_0}{\mathrm{max}} \frac{(s|h)}{\sqrt{(s|s) (h|h)}}, \end{equation} with $(\cdot|\cdot)$ being the scalar product \begin{equation} (s|h) = 4\, \mathrm{Re} \int_{f_{\mathrm{low}}}^{f_{\mathrm{high}}} \frac{\tilde{s}(f) \tilde{h}^\ast (f)}{S_n(f)}df, \end{equation} implemented through \cite{pycbc} (we note that for the purpose described below, other definitions of the distance between two waveforms may also be used). Above, we denote by $\tilde{s}$ the Fourier transform of a waveform $s$ and by $\tilde{s}^\ast$ its complex conjugate. $S_n(f)$ corresponds to the advanced LIGO BNS-optimized noise \cite{bns-psd}. We calculated the $n\times n$ distance matrix between all of the $n=89$ GW spectra of the whole CFC/SPH dataset, in the frequency range between $f_{\mathrm{low}}=1$ kHz and $f_{\mathrm{high}} = 4$ kHz (in which the three dominant postmerger frequencies lie). The data were clustered with two algorithms of the publicly available python library Scikit-Learn \cite{scikit-learn}. Both algorithms detect the number of distinct classes (without any prior information on their possible number) and depend on specific input parameters related to their algorithmic implementation.
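A minimal sketch of this clustering step with Scikit-Learn is given below, assuming the $n\times n$ match matrix has already been computed and stored (the file name is a placeholder, and the mapping of the damping, preference and DBSCAN parameters quoted in the next paragraph onto the library arguments reflects our reading of those values):
\begin{verbatim}
import numpy as np
from sklearn.cluster import AffinityPropagation, DBSCAN

# n x n matrix of matches M between waveform pairs, computed beforehand
# (e.g. with pycbc over 1-4 kHz); the file name is a placeholder.
match = np.load("match_matrix.npy")
distance = 1.0 - match                       # D = 1 - M

# Affinity Propagation operates on similarities; we feed it the match matrix.
ap = AffinityPropagation(affinity="precomputed", damping=0.82, preference=0.34)
labels_ap = ap.fit_predict(match)

# DBSCAN operates directly on the precomputed distance matrix.
db = DBSCAN(eps=0.05, min_samples=6, metric="precomputed")
labels_db = db.fit_predict(distance)

print("Affinity Propagation classes:", np.unique(labels_ap))
print("DBSCAN classes (noise = -1 excluded):", np.unique(labels_db[labels_db >= 0]))
\end{verbatim}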
Both the Affinity Propagation algorithm, with a damping factor of 0.82 and a preference of 0.34, and the DBSCAN algorithm, with parameters $\varepsilon=0.05$ and a minimum of six points per class, detected the existence of \textit{three} distinct classes, as was proposed in \cite{Bauswein2015}. We retain the same nomenclature as in \cite{Bauswein2015}, that is, we call a postmerger spectrum Type I when $f_{\mathrm{2-0}}$ is stronger than $f_{\mathrm{spiral}}$ (occurring for soft EOS and total binary mass not far from the threshold mass to prompt collapse), Type II when these two secondary postmerger frequencies have comparable amplitudes (occurring for moderately soft EOS and intermediate total binary masses) and Type III when $f_{\mathrm{spiral}}$ is stronger than $f_{\mathrm{2-0}}$ (occurring for stiff EOS and total binary mass far from the threshold mass to prompt collapse), see \cite{Bauswein2015,Bauswein2016,Clark2016,2019arXiv190106969B} for a more detailed description. Fig. \ref{fig:classification} shows the different models of the whole CFC/SPH dataset, in a mass vs. radius graph, where in each case the mass and radius of the isolated neutron stars before merger is indicated (for each EOS that was used). In the case of unequal mass mergers, the isolated model is shown for $M_{\rm tot}/2$, where $M_{\rm tot}=M_1+M_2$ is the total mass of the individual stars. Type I models are shown as red boxes, Type II as black $\times$ and Type III as blue circles. The labels of each data point are used in a classification algorithm, in order to find the borders between the different spectral classes in the mass vs. radius parameter space. Specifically, we used the Multi-layer Perceptron (MLP) supervised learning algorithm, with an adaptive learning rate and with the limited-memory BFGS algorithm as a solver, available also as part of the Scikit-Learn library (other options were set to their default values). Fig. \ref{fig:classification} shows the boundaries between the different spectral classes, obtained in this way (the region corresponding to each spectral class is shown in a different color). These results are consistent with the postmerger spectral classification scheme introduced in \cite{Bauswein2015}. Notice that in the case of LS375 1.8+1.8, the fundamental radial mode is around $f_0 \sim 600$~Hz, which is less than the typical range for other models, because this model is very close to the threshold mass. As a result, the secondary peaks in the postmerger spectrum appear in opposite order, compared to lower-mass cases (for somewhat higher central density the remnant would have an even smaller quasi-radial frequency, tending to zero, which marks the onset of collapse). Because of this exceptional morphology, the spectrum of this model was classified as Type II by the algorithm described above. Demanding that the fundamental radial mode frequency $f_0$ only decreases as one approaches the threshold mass to collapse, for a given EOS, restores the correct identification of the secondary peaks and is used as an additional criterion in setting the right labels. Knowing the reason, we therefore show this single data point as Type I in Fig. \ref{fig:classification} (in our sample this re-labeling was needed only for the LS375 1.8+1.8 model). \begin{figure*}[ht] \includegraphics[width=17cm]{./figures-final/surface_Rall_fpeak_e0.png} \caption{Surfaces $R_{\mathrm{x}}(f_{\mathrm{peak}},M_{\mathrm{chirp}})$ using the whole SPH/CFC data set.
Red dots correspond to simulation data $(f_{\mathrm{peak}},M_{\mathrm{chirp}})$, with the vertical axis corresponding to the radius $R_{\rm x}$ of a nonrotating model with the same EOS as used in each simulation (in the different panels, the radius of nonrotating neutron stars of mass 1.2, 1.4, 1.6 and 1.8$M_\odot$ was used). The light blue surfaces represent the empirical relations of the form of Eq. (\ref{RfM}). The surfaces are shown only in regions where data points are available.} \label{fig:RfMsurfaces} \end{figure*} \begin{figure*}[ht] \includegraphics[width=17cm]{./figures-final/surface_R16_f2-0_R18_fspiral_e0.png} \caption{Surfaces $R_{\mathrm{1.6}}(f_{\mathrm{2-0}},M_{\mathrm{chirp}})$ (left panel) and $R_{\mathrm{1.8}}(f_{\mathrm{spiral}},M_{\mathrm{chirp}})$ (right panel) using the whole SPH/CFC data set. Red dots correspond to simulation data $(f_{\mathrm{peak}},M_{\mathrm{chirp}})$, with the vertical axis corresponding to the radius $R_{\rm 1.6}$ (or $R_{\rm 1.8}$, respectively) of a nonrotating model with the same EOS as used in each simulation. The light blue surfaces represent the empirical relations of the form of Eq. (\ref{RfM}). The surfaces are shown only in regions where data points are available.\\ } \label{fig:fRMsurfaces2} \end{figure*} \section{Empirical relations for radii based on the Bauswein et al. CFC/SPH catalogue}\label{sec:rad} The empirical relations for postmerger frequencies as functions of radius and chirp mass, of the form $f_j(R_{\rm x}, M_{\rm chirp})$, investigated in Sec. \ref{sec:freq}, can be inverted, in order to obtain relations for chosen radii of nonrotating models as functions of postmerger frequencies and chirp mass, of the form $R_{\rm x}(f_j, M_{\rm chirp})$, where $\rm x$ can be $\{1.2, 1.4, 1.6, 1.8\}$ and $j = \{ \mathrm{peak}, \mathrm{spiral}, \mathrm{2-0} \}$. Instead of a direct inversion of the empirical relations found in Sec. \ref{sec:freq}, we construct new relations applying a least-squares minimization to the same data. After investigating different possible forms, we found that a good choice is the second order expansion in both $f_j$ and $M_{\rm chirp}$ (including the mixed term) \begin{equation} \begin{split} R_{\mathrm{x}}= b_0 + b_1 M_{\mathrm{chirp}} + b_2 f_{\mathrm{j}}/M_{\mathrm{chirp}} +b_3 M_{\mathrm{chirp}}^2\\ +b_4 f_{\mathrm{j}} +b_5 \left( f_{\mathrm{j}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM} \end{equation} (more details on the performance of the above and of other investigated forms are given in Appendix \ref{Appendix.E}). \par When constructing the empirical relations of the form (\ref{RfM}), we found the following restriction to be useful: for the $R_{1.2}(f_{\mathrm{j}},M_{\mathrm{chirp}})$ and $R_{1.4}(f_{\mathrm{j}},M_{\mathrm{chirp}})$ relations, we use only the data for which $M_{\mathrm{chirp}} < 1.3$, whereas for the $R_{1.8}(f_{\mathrm{j}},M_{\mathrm{chirp}})$ relations we use only the data for which $M_{\mathrm{chirp}}> 1.3$. This is natural, since the lower mass ($M_{\mathrm{chirp}}$) binaries are not suitable for inferring information about neutron stars of large mass and vice versa. Since, in this way, the dataset is separated into two regions, depending on the target radius, we use the superscript (\textless \ or \textgreater ) in naming the empirical relations. We note that for the $R_{1.6}(f_{\mathrm{j}},M_{\mathrm{chirp}})$ relation we use the whole dataset, since this is an intermediate case.
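A surface of the form of Eq.~(\ref{RfM}) can be obtained by a linear least-squares solve; the sketch below uses a plain design-matrix approach with NumPy rather than the {\tt Lmfit} minimizer used for the actual fits, and assumes arrays {\tt R}, {\tt f} and {\tt Mchirp} holding the simulation data.
\begin{verbatim}
import numpy as np

def fit_RfM(R, f, Mchirp):
    """Least-squares coefficients b0..b5 of the second-order form
    R = b0 + b1*Mc + b2*(f/Mc) + b3*Mc**2 + b4*f + b5*(f/Mc)**2
    together with the maximum residual of the fit."""
    x = f / Mchirp
    A = np.column_stack([np.ones_like(R), Mchirp, x, Mchirp**2, f, x**2])
    b, *_ = np.linalg.lstsq(A, R, rcond=None)
    return b, np.max(np.abs(R - A @ b))

# usage (R in km, f in kHz, Mchirp in solar masses; placeholder arrays):
# b, max_residual = fit_RfM(R, f_peak, Mchirp)
\end{verbatim}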
We emphasize that in principle one should consider distinct relations for relatively small ranges in $M_{\mathrm{chirp}}$, which can be measured with high precision, as those relations should yield the tightest correlations and thus the smallest errors in radius measurements through postmerger GW emission. This approach, however, requires an even larger set of simulations with systematically varied binary mass parameters, especially the mass ratio. \subsection{Empirical relations for ${R_{1.2}}$} For ${R_{1.2}}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is \begin{equation} \begin{split} R_{1.2}^{<}= 52.201 -29.769 M_{\mathrm{chirp}} -15.398 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +8.918 M_{\mathrm{chirp}}^2 +3.333 f_{\mathrm{peak}} +1.832 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R12_e1} \end{equation} with a maximum residual of 0.52 km and $R^2=0.945$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed when using other frequencies are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. The maximum residual ranges between 0.52 km and 0.8 km. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is \begin{equation} \begin{split} R_{1.2}^{<}= 56.906 -37.252 M_{\mathrm{chirp}} -15.701 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +11.756 M_{\mathrm{chirp}}^2 +3.638 f_{\mathrm{peak}} +1.83 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R12_e0} \end{equation} with a maximum residual of 0.526 km and $R^2=0.951$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other frequencies are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. The maximum residual ranges between 0.526 km and 0.737 km. \subsection{Empirical relations for ${R_{1.4}}$} For ${R_{1.4}}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is \begin{equation} \begin{split} R_{1.4}^{<}=51.229 -30.463 M_{\mathrm{chirp}} -14.143 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +9.46 M_{\mathrm{chirp}}^2 + 3.09 f_{\mathrm{peak}} +1.612 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R14_e1} \end{equation} with a maximum residual of 0.412 km and $R^2=0.966$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed when using other frequencies are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. The maximum residual ranges between 0.412 km and 0.731 km. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is \begin{equation} \begin{split} R_{1.4}^{<}=55.809 -37.642 M_{\mathrm{chirp}} -14.473 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +12.15 M_{\mathrm{chirp}}^2 +3.41 f_{\mathrm{peak}} +1.609 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R14_e0} \end{equation} with a maximum residual of 0.493 km and $R^2=0.968$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed when using other frequencies are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. The maximum residual ranges between 0.493 km and 0.676 km.
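For convenience, the two whole-set relations above, Eqs.~(\ref{RfM_R12_e0}) and (\ref{RfM_R14_e0}), can be packaged as simple functions of the measurable quantities (frequencies in kHz, chirp mass in $M_\odot$, radii in km); the input values in the usage line are purely illustrative.
\begin{verbatim}
def R12_from_fpeak(fpeak, Mchirp):
    """R_1.2 (km) from Eq. (RfM_R12_e0); intended for Mchirp < 1.3 Msun."""
    x = fpeak / Mchirp
    return (56.906 - 37.252 * Mchirp - 15.701 * x
            + 11.756 * Mchirp**2 + 3.638 * fpeak + 1.83 * x**2)

def R14_from_fpeak(fpeak, Mchirp):
    """R_1.4 (km) from Eq. (RfM_R14_e0); intended for Mchirp < 1.3 Msun."""
    x = fpeak / Mchirp
    return (55.809 - 37.642 * Mchirp - 14.473 * x
            + 12.15 * Mchirp**2 + 3.41 * fpeak + 1.609 * x**2)

# illustrative input: f_peak = 3.0 kHz, Mchirp = 1.2 Msun
print(R12_from_fpeak(3.0, 1.2), R14_from_fpeak(3.0, 1.2))
\end{verbatim}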
\subsection{Empirical relations for ${R_{1.6}}$} For ${R_{1.6}}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained when using the dominant postmerger frequency $f_{\mathrm{peak}}$ \begin{equation} \begin{split} R_{1.6}= 41.316 - 16.654 M_{\mathrm{chirp}} -12.458 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +3.722 M_{\mathrm{chirp}}^2 +2.936 f_{\mathrm{peak}} +1.269 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R16_e1} \end{equation} with a maximum residual of 0.462 km and $R^2=0.97$. A comparable performance is obtained when using the secondary postmerger frequency $f_{2-0}$ \begin{equation} \begin{split} R_{1.6}= 15.271 + 4.123 M_{\mathrm{chirp}} -6.661 f_{\mathrm{2-0}}/M_{\mathrm{chirp}}\\ -1.188 M_{\mathrm{chirp}}^2 +1.23 f_{\mathrm{2-0}} +0.783 \left( f_{\mathrm{2-0}}/M_{\mathrm{chirp}} \right)^2, \end{split} \end{equation} which has a maximum residual of 0.465 km and $R^2=0.942$. The coefficients $b_0$ -- $b_5$ for the empirical relation constructed when using $f_{\rm spiral}$ are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. Among all different choices, the maximum residual ranges between 0.462 km and 0.706 km. We stress that the secondary peaks, being weaker in gravitational waves, are more difficult to detect and typically have a larger full width at half maximum (FWHM), implying that the error of a frequency measurement of secondary features in a GW detection will be larger compared to that of the main peak. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is obtained when using the secondary postmerger frequency $f_{2-0}$ \begin{equation} \begin{split} R_{1.6}= 17.764 + 2.497 M_{\mathrm{chirp}} -8.797 f_{\mathrm{2-0}}/M_{\mathrm{chirp}}\\ -0.639 M_{\mathrm{chirp}}^2 +1.393 f_{\mathrm{2-0}} +1.452 \left( f_{\mathrm{2-0}}/M_{\mathrm{chirp}} \right)^2, \end{split} \end{equation} with a maximum residual of 0.518 km and $R^2=0.955$. A comparable performance is obtained when using the dominant postmerger frequency $f_{\mathrm{peak}}$ \begin{equation} \begin{split} R_{1.6}= 43.796 -19.984 M_{\mathrm{chirp}} -12.921 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +4.674 M_{\mathrm{chirp}}^2 +3.371 f_{\mathrm{peak}} +1.26 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R16_e0} \end{equation} with a maximum residual of 0.526 km and $R^2=0.969$. The coefficients $b_0$ -- $b_5$ for the empirical relation constructed when using $f_{\rm spiral}$ are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. Among all different choices, the maximum residual ranges between 0.518 km and 0.674 km. \begin{figure*} \includegraphics[width=17cm]{./figures-final/histogram_e=1} \caption{In each panel, the percentage of data points that are closer in radius to each of the empirical relations (constructed with the corresponding frequency) is shown. The top row shows results of equal-mass configurations only and the bottom row uses all CFC/SPH data (see text for more explanations).
Generally, these figures imply that other statistical measures for the quality of empirical relations (involving some sort of weighting like the 2-norm) would reveal tighter relations for $f_\mathrm{peak}$ in comparison to the subdominant frequencies.} \label{hist} \end{figure*} \subsection{Empirical relations for $\mathrm{R_{1.8}}$} For $\mathrm{R_{1.8}}$ and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained for the secondary postmerger frequency $f_{\mathrm{spiral}}$ \begin{equation} \begin{split} R_{1.8}^{>}= 55.934 -37.162 M_{\mathrm{chirp}} - 17.139 f_{\mathrm{spiral}}/M_{\mathrm{chirp}}\\ +7.961 M_{\mathrm{chirp}}^2 +9.897 f_{\mathrm{spiral}} -0.382 \left( f_{\mathrm{spiral}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R18_e1s} \end{equation} with a maximum residual of 0.212 km and $R^2=0.951$. A comparable performance is obtained when using the dominant postmerger frequency $f_{\mathrm{peak}}$ \begin{equation} \begin{split} R_{1.8}^{>}= 33.802 -3.069 M_{\mathrm{chirp}} -15.522 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ -1.439 M_{\mathrm{chirp}}^2 + 4.112 f_{\mathrm{peak}} + 1.605 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R18_e1} \end{equation} with a maximum residual of 0.276 km and $R^2=0.951$. The coefficients $b_0$ -- $b_5$ for the empirical relation constructed when using $f_{\rm 2-0}$ are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. Among all different choices, the maximum residual ranges between 0.212 km and 0.597 km. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation with the smallest error is \begin{equation} \begin{split} R_{1.8}^{>}= 54.467 -38.851 M_{\mathrm{chirp}} -13.992 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +9.305 M_{\mathrm{chirp}}^2 +8.453 f_{\mathrm{peak}} -0.614 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{RfM_R18_e0} \end{equation} with a maximum residual of 0.275 km and $R^2=0.958$. The coefficients $b_0$ -- $b_5$ for the empirical relations constructed for other frequencies are shown in Table \ref{table:RfM} in Appendix \ref{Appendix.E}. The maximum residual ranges from 0.275 km to 0.569 km. \subsection{Comparing the performance of empirical relations for radii} For the {\it whole set} of models (including both equal and unequal masses), we display the empirical relations of the form of Eq. (\ref{RfM}), for $R_{\rm x}$ with $x=1.2$, 1.4, 1.6 and 1.8~$M_\odot$, when using $f_{\rm peak}$, in Fig. \ref{fig:RfMsurfaces} (notice that the surfaces in the different panels of this figure are only shown in regions where data points are available). Each $R_{\rm x}$ depends mainly on $f_{\rm peak}$ and to a smaller degree on $M_{\rm chirp}$, as anticipated from the previous results by \cite{Bauswein2012a} (see e.g.~\cite{2019arXiv190106969B} for a review). If one were not interested in the smallest possible residual, a linear approximation (a plane surface in this figure) would be sufficient. For high accuracy, however, the extension to second order, as is done here through Eq. (\ref{RfM}), is required. Since the empirical relations $R_{\mathrm{1.6}}(f_{\mathrm{2-0}},M_{\mathrm{chirp}})$ and $R_{\mathrm{1.8}}(f_{\mathrm{spiral}},M_{\mathrm{chirp}})$ had a comparable accuracy to the corresponding relations with $f_{\rm peak}$, we display these in Fig. \ref{fig:fRMsurfaces2}.
For $R_{\mathrm{1.6}}(f_{\mathrm{2-0}},M_{\mathrm{chirp}})$, the dependence on $M_{\rm chirp}$ is weak, but for $R_{\mathrm{1.8}}(f_{\mathrm{spiral}},M_{\mathrm{chirp}})$ it is strong in the limit of low masses. \begin{figure*} \includegraphics[width=15cm]{./figures-final/tovplot3} \caption{Predictions for radius determinations at various masses using $f_{\mathrm{peak}}$ in the empirical relations (\ref{RfM}) assuming mergers with either $1.35+1.35 M_\odot$\ (squares) or $1.6+1.6 M_\odot$ (triangles), for three different candidate EOS. We assume that either the APR, DD2 or TM1 EOS is the correct EOS of high-density matter and predict the radius for certain masses. In the mass range of $1.2-1.6 M_\odot$, the true radius is within the maximum possible residual of $\sim \pm0.5$km from the predicted radius. For $1.8 M_\odot$ (EOS DD2 and TM1 only) the true radius is within a smaller maximum possible residual of only $\sim \pm0.28$km from the predicted radius. } \label{fig:tovplot} \end{figure*} Relations of the form of Eq. (\ref{RfM}) can be used to obtain the radii \(R_{\rm x}\) at different masses, when using any of the three postmerger frequencies $f_{\rm peak}, f_{2-0}$ or $f_{\rm spiral}$. We investigated the performance of each empirical relation in obtaining \(R_{\rm x}\) and a comparison is shown in Fig. \ref{hist} (the top row corresponds to equal-mass models only). In each panel, we show the percentage of data points that have the smallest residual among the different choices for the postmerger frequency (each column corresponds to a different \(R_{\rm x}\)). For all different masses, the corresponding radius of nonrotating stars is obtained more accurately when using the empirical relations for $f_{\mathrm{peak}}$ in more than 50\% of cases. For the remaining cases, the empirical relations using either the $f_{\mathrm{2-0}}$ or the $f_{\mathrm{spiral}}$ frequencies were more accurate in predicting radii, with the relations using $f_{\mathrm{2-0}}$ outperforming the relations using $f_{\mathrm{spiral}}$ for most masses, except for the lowest mass of \(1.2 M_{\odot }\). These statistics exemplify that for the majority of all models the $f_\mathrm{peak}$ data points are closest to the respective empirical relation, whereas the data points of secondary peaks show a much larger scatter on average. Generally, these figures imply that other statistical measures for the quality of empirical relations (involving some sort of weighting like the 2-norm) would reveal tighter relations for $f_\mathrm{peak}$ in comparison to the subdominant frequencies, but as commented in Sect. \ref{sec:sum} we do not follow this approach here. We emphasize that the errors we quote for radius measurements through relations of the form of Eq. (\ref{RfM}) represent \textit{upper limits} (the maximum residuals correspond to the worst case in the whole sample) using our currently large set of representative EOS. These maximum residuals can improve in two ways: First, in an actual detection, binary mass parameters, such as the chirp mass and the mass ratio, will be measured. Hence, employing \textit{optimized} relations that can be constructed for a narrower range of measured binary parameters will likely result in significantly smaller residuals. Second, future EOS constraints from a variety of experimental and observational methods may reliably restrict the sample of representative EOS to a smaller one, spanning a narrower region in the mass vs. radius parameter space.
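To make the figures of merit used above concrete, the following sketch (with placeholder arrays rather than our simulation data) computes the maximum residual and $R^2$ of a set of radius predictions, and tallies, for each data point, which frequency's relation lies closest, as in Fig. \ref{hist}.
\begin{verbatim}
import numpy as np

# Sketch of the quality measures used in the text: the maximum residual
# (worst-case deviation over the sample), R^2, and the percentage of models
# for which each frequency's relation gives the smallest residual.
# All arrays below are placeholders, not data from this work.

def fit_quality(r_true, r_pred):
    resid = r_true - r_pred
    max_resid = np.max(np.abs(resid))
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((r_true - r_true.mean())**2)
    return max_resid, 1.0 - ss_res / ss_tot

r_true = np.array([11.2, 12.0, 12.9, 13.4, 11.7])            # km (placeholder)
preds = {                                                    # one prediction per relation
    "f_peak":   np.array([11.3, 12.1, 12.7, 13.5, 11.6]),
    "f_2-0":    np.array([11.0, 12.3, 13.1, 13.2, 11.9]),
    "f_spiral": np.array([11.5, 11.8, 12.6, 13.8, 11.7]),
}
for name, r_pred in preds.items():
    print(name, fit_quality(r_true, r_pred))

# Tally which relation is closest for each model (cf. the histogram figure).
residuals = np.vstack([np.abs(r_true - p) for p in preds.values()])
closest = np.argmin(residuals, axis=0)
for i, name in enumerate(preds):
    print(name, 100.0 * np.mean(closest == i), "%")
\end{verbatim}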
We therefore anticipate that our empirical relations of the form of Eq. (\ref{RfM}) will significantly improve over time. In a realistic detection scenario, the signal-to-noise ratio (SNR) of each frequency peak will determine its detectability and greatly influence its accuracy in measuring radii. In this sense, we expect the dominant postmerger frequency $f_{\mathrm{peak}}$ to play the dominant role in measuring radii, with the other two frequencies (typically having smaller SNR and larger width than $f_{\mathrm{peak}}$) being useful for extracting additional information on the characteristics of the postmerger remnant. These considerations and the data displayed in Fig.~\ref{hist} demonstrate that $f_{\mathrm{peak}}$ is the most promising feature for EOS constraints from the postmerger phase. It is fortunate that the empirical relations for $R^{\rm >}_{1.8}$ have very small residuals, between 0.212 km and 0.275 km. When one considers the currently available observed sample of neutron stars in binary systems, it is reasonable to expect that neutron stars with a mass of 1.8$M_\odot$ will only rarely be members of merging binary systems (see e.g.~\cite{2012ApJ...757...55O,2016ARA&A..54..401O,2019ApJ...876...18F}). Even less frequent would be a case of equal-mass mergers with both stars having such a high mass. This implies that it will be quite difficult to accurately measure the radius or tidal deformability of high-mass neutron stars, when using methods based on the inspiral part of the gravitational-wave emission, i.e.\ methods based on measuring the tidal deformability (see, e.g.~\cite{2019arXiv190708534B} for a recent review and references therein) or frequencies excited through resonances (see e.g.~\cite{2019arXiv190500818S}). Moreover, finite-size effects decrease for higher masses as the tidal deformability is smaller. Hence, even if the inspiral of a high-mass binary is observed, the extraction of NS parameters may be more challenging and associated with larger errors. In contrast, the postmerger empirical relations (\ref{RfM_R18_e1s}), (\ref{RfM_R18_e1}) and (\ref{RfM_R18_e0}) provide a competitive method for measuring the radius of high-mass neutron stars and thus for constraining the very high density part of the EOS. \section{Constraining the mass-radius relation}\label{sec:mr} We consider three particular case studies, where we assume that a certain EOS is the correct one: a soft EOS (APR), an intermediate EOS (DD2) or a stiff EOS (TM1). For the soft EOS APR we assume that the dominant postmerger frequency $f_{\rm peak}$ is detected in a single event with $M_{\rm chirp}<1.3M_\odot$ (specifically, from a $1.35+1.35 M_\odot$\ merger), whereas for the other two EOS we assume that $f_{\rm peak}$ is detected in two distinct binary neutron star merger events, one with $M_{\rm chirp}<1.3M_\odot$ (a $1.35+1.35 M_\odot$\ merger) and a second with $M_{\rm chirp}>1.3M_\odot $ (a $1.6+1.6 M_\odot$\ merger). Fig. \ref{fig:tovplot} shows the predicted radii $R_{1.2}, R_{1.4}, R_{1.6}$ and $R_{1.8}$ (the latter only for the intermediate and stiff EOS) in a mass vs. radius diagram, where different sample EOS are also shown. For each predicted radius, we show error bars that correspond to the \textit{maximum residual} of each empirical relation that was used. Filled boxes correspond to empirical relations that are valid for $M_{\rm chirp}<1.3M_\odot$, while filled triangles correspond to empirical relations that are valid for $M_{\rm chirp}>1.3M_\odot$.
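For orientation, the chirp masses quoted in these case studies follow directly from the component masses via the standard definition $M_{\mathrm{chirp}}=(M_A M_B)^{3/5}/(M_A+M_B)^{1/5}$, as the short sketch below illustrates.
\begin{verbatim}
# Chirp mass M_chirp = (M_A*M_B)**(3/5) / (M_A+M_B)**(1/5), in solar masses.
def chirp_mass(m_a, m_b):
    return (m_a * m_b) ** 0.6 / (m_a + m_b) ** 0.2

print(chirp_mass(1.35, 1.35))   # ~1.175 Msun -> falls in the M_chirp < 1.3 branch
print(chirp_mass(1.60, 1.60))   # ~1.393 Msun -> falls in the M_chirp > 1.3 branch
\end{verbatim}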
From the results displayed in the figure, it is apparent that our empirical relations can be used to constrain the mass-radius relation of nonrotating neutron stars with a maximum uncertainty of about $\pm 0.5$km in the range $1.2-1.6M_\odot$\ and with an even smaller maximum uncertainty of $\pm0.28$km for neutron stars of mass $1.8M_\odot$ (see Sec.~\ref{sec:sum} for discussion). Such radius constraints can readily be translated to constraints on the pressure vs. energy density, $P(\epsilon)$, relation, i.e.\ the EOS (see e.g.~\cite{Fattoyev2017,Raithel2018,PhysRevLett.121.161101,2019arXiv190511212T,2019arXiv190605978C}). In our examples the actual recovery of the radii for individual models is much better than indicated by the error bars. This is because we assign the maximum residual as the error bar, since one cannot know a priori how well the true EOS of NSs follows the empirical relations. By considering a large representative sample of candidate EOS, we expect that the maximum residual among all viable EOS models provides a safe proxy for the error, although it is quite possible that the actual error will be smaller. We emphasize again that the error can be further reduced by considering empirical relations for a fixed chirp mass or a chirp mass within a small range (recall that the chirp mass can be measured very precisely from the inspiral phase). The situation depicted in Fig.~\ref{fig:tovplot} thus represents a \textit{worst-case scenario}. \begin{figure}[!] \includegraphics[width=8.5cm]{./figures-final/surfaceCombined_fRM_fpeak_R18_e0.pdf} \caption{Empirical relation surface for $f_{\mathrm{peak}}/M_{\mathrm{chirp}}$ as a function of $M_{\mathrm{chirp}}$ and $R_{1.8}$, constructed from the combined data set using all binary mass configurations. Red points correspond to CFC/SPH data and green points correspond to data extracted from the CoRe GW catalogue.} \label{Combined_fRM} \end{figure} \section{Validation of empirical relations using frequencies extracted from the CORE GW catalogue} \label{CORE:relations} Using the CoRe\ GW catalogue, we extracted the peak post-merger frequency $f_{\mathrm{peak}}$ for each waveform and then constructed empirical relations of the form $f_{\mathrm{peak}}(R_{\mathrm{x}},M_{\mathrm{chirp}})$ and $R_{\mathrm{x}}(f_{\mathrm{peak}},M_{\mathrm{chirp}})$ (additional relations based on other post-merger frequencies will not be reported here). The aim was to validate the empirical relations constructed with the CFC/SPH dataset of Bauswein et al. using a dataset that was obtained with very different numerical methods. The second-order dependence of the empirical relations on the independent variables is rather weak. The CFC/SPH dataset of Bauswein et al. had a sufficient number of data points such that second-order empirical relations lead to advantages compared to simpler first-order ones. The models of the CoRe dataset used here are fewer and the maximum residual is comparable between the choices of first-order or second-order empirical relations. In the following, we present some examples of second-order empirical relations constructed for the \textit{combined dataset} of the CFC/SPH models and our subset of CoRe models (i.e.\ adding the models of the two datasets). For the dominant postmerger frequency peak and using the subset of {\it equal-mass} configurations, the empirical relation with the smallest error is obtained for neutron stars of mass $1.8 M_\odot$.
\begin{equation} \begin{split} f_{\mathrm{peak}} / M_{\mathrm{chirp}}= 11.476 +0.025 M_{\mathrm{chirp}} -1.102 R_{1.8} \\ + 1.181 M_{\mathrm{chirp}}^2 -0.242 R_{1.8} M_{\mathrm{chirp}}+ 0.042 R_{1.8}^2, \end{split} \label{Combined_fRM_R18_e1} \end{equation} with a maximum residual of 0.14 kHz and $R^2 = 0.975$. In this case, the addition of the CoRe data to the CFC/SPH dataset improves the empirical fit somewhat, resulting in a slightly higher $R^2$ and somewhat smaller maximum residual than for the CFC/SPH dataset alone. Similarly, when using the {\it whole set} of models, the empirical relation with the smallest error is obtained for neutron stars of mass $1.8 M_\odot$. \begin{equation} \begin{split} f_{\mathrm{peak}} / M_{\mathrm{chirp}}= 9.044 +0.713 M_{\mathrm{chirp}} -0.804 R_{1.8} \\ + 1.017 M_{\mathrm{chirp}}^2 -0.259 R_{1.8} M_{\mathrm{chirp}}+ 0.031 R_{1.8}^2, \end{split} \label{Combined_fRM_R18_e0} \end{equation} with a maximum residual of 0.197 kHz and $R^2 = 0.966$. Figure (\ref{Combined_fRM}) shows the above empirical fit as a surface as well as the CFC/SPH data points (red dots) and the CoRe data points (green points). The distribution of the CoRe data points is in excellent agreement with the distribution of the CFC/SPH data points. Turning to the inverse empirical relations of the
form $R_{\rm x}(f_{\rm peak}, M_{\rm chirp})$, for $M = 1.6 M_\odot$ and using the subset of {\it equal-mass} configurations, the empirical relation for the radius is \begin{equation} \begin{split} R_{1.6}= 39.258 -16.672 M_{\mathrm{chirp}} -10.784 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +3.952 M_{\mathrm{chirp}}^2 +2.75 f_{\mathrm{peak}} + 0.971 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{Combined_RfM_R16_e1} \end{equation} with a maximum residual of 0.605 km and $R^2=0.962$. For the {\it whole set} of models (including both equal and unequal masses), the empirical relation for the radius is \begin{equation} \begin{split} R_{1.6}= 35.442 -13.46 M_{\mathrm{chirp}} -9.262 f_{\mathrm{peak}}/M_{\mathrm{chirp}}\\ +3.118 M_{\mathrm{chirp}}^2 +2.307 f_{\mathrm{peak}} +0.758 \left( f_{\mathrm{peak}}/M_{\mathrm{chirp}} \right)^2, \end{split} \label{Combined_RfM_R16_e0} \end{equation} with a maximum residual of 0.654 km and $R^2=0.954$. The corresponding surface and data points are shown in Fig. \ref{Combined_RfMsurfaces}. For $M=1.6M_\odot$ the addition of the CoRe data points thus somewhat increases the maximum residual, and this trend continues for lower masses, pointing to small systematic differences due to the different numerical treatments between the two data sets. Note that there are too few data points for high-mass models in our chosen subset of CoRe models. We thus do not construct a new relation for the radius of neutron stars with mass $M=1.8M_\odot$. \begin{figure}[!] \includegraphics[width=8.5cm]{./figures-final/surfaceCombined_R16_fpeak_e0.pdf} \caption{Empirical relation for $R_{1.6}$ using the whole set of models of the combined data set (blue surface). The red points correspond to Bauswein et al. data and green points correspond to frequencies extracted from the CoRe GW catalogue.} \label{Combined_RfMsurfaces} \end{figure} \section{Empirical Relations for $f_{\rm peak}$ using tidal deformabilities} \label{sec:fpeakL} In \cite{CORE1}, an empirical relation between $f_{\rm peak} M_{\rm tot}$ and the dimensionless quadrupole tidal coupling constant \begin{equation} \kappa_{2}^{\mathrm{T}} \equiv 2\left[\frac{1}{q}\left(\frac{X_{A}}{C_{A}}\right)^{5} k_{2}^{A}+q\left(\frac{X_{B}}{C_{B}}\right)^{5} k_{2}^{B}\right], \end{equation} was found, where $q:=M_A/M_B\geq 1$ is the mass ratio, $X_{A,B}:=M_{A,B}/M_{\rm tot}$, $k_2^{A,B}$ are the dimensionless quadrupole Love numbers and $C_{A,B}:=M_{A,B}/R_{A,B}$ are the compactnesses of the two stars {(see also \cite{Takami2015,Rezzolla2016})}. Ref. \cite{2019PhRvD.100d4047T} reports that practically the same accuracy is achieved when using the mass-weighted tidal deformability \begin{equation} \tilde{\Lambda}=\frac{16}{13} \frac{\left(M_{A}+12 M_{B}\right) M_{A}^{4} \Lambda_{A}+\left(M_{B}+12 M_{A}\right) M_{B}^{4} \Lambda_{B}}{\left(M_{A}+M_{B}\right)^{5}}, \end{equation} in place of $\kappa_2^T$ and an improvement is obtained by defining a new variable \begin{equation} \zeta:=\frac{3}{16}\tilde\Lambda+a \frac{M_{\rm tot}}{M^{\mathrm{TOV}}_{\rm max}}, \label{zetadef} \end{equation} where $a=-131.701$ (determined empirically by minimizing the RMS error) and $M^{\mathrm{TOV}}_{\rm max}$ is the maximum mass for nonrotating models allowed by a given EOS. The second term in (\ref{zetadef}) absorbs (to some degree) the mass dependence of the empirical relation found in \cite{CORE1} (see also \cite{Coughlin2018a}).
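To make these definitions concrete, the short sketch below evaluates $\kappa_2^{\mathrm{T}}$, $\tilde\Lambda$ and $\zeta$ for a hypothetical equal-mass binary; all numerical inputs (masses, radii, Love numbers, tidal deformabilities, $M^{\mathrm{TOV}}_{\rm max}$) are placeholder values chosen only for illustration.
\begin{verbatim}
# Sketch: evaluate kappa_2^T, the mass-weighted tidal deformability
# Lambda_tilde, and zeta as defined above, for placeholder inputs.
# Convention: star A is taken to be the more massive one (q = M_A/M_B >= 1).

MSUN_KM = 1.4766  # G*M_sun/c^2 in km, to form the dimensionless compactness

def kappa2T(m_a, m_b, r_a, r_b, k2_a, k2_b):
    q = m_a / m_b
    m_tot = m_a + m_b
    x_a, x_b = m_a / m_tot, m_b / m_tot
    c_a, c_b = MSUN_KM * m_a / r_a, MSUN_KM * m_b / r_b
    return 2.0 * ((1.0 / q) * (x_a / c_a) ** 5 * k2_a
                  + q * (x_b / c_b) ** 5 * k2_b)

def lambda_tilde(m_a, m_b, lam_a, lam_b):
    m_tot = m_a + m_b
    return (16.0 / 13.0) * ((m_a + 12.0 * m_b) * m_a ** 4 * lam_a
                            + (m_b + 12.0 * m_a) * m_b ** 4 * lam_b) / m_tot ** 5

def zeta(lam_tilde, m_tot, m_max_tov, a=-131.701):
    return (3.0 / 16.0) * lam_tilde + a * m_tot / m_max_tov

# Hypothetical 1.35+1.35 Msun binary: R = 12.5 km, k2 = 0.09,
# Lambda = 500 for both stars, M_max^TOV = 2.1 Msun.
print(kappa2T(1.35, 1.35, 12.5, 12.5, 0.09, 0.09))
print(lambda_tilde(1.35, 1.35, 500.0, 500.0))
print(zeta(lambda_tilde(1.35, 1.35, 500.0, 500.0), 2.7, 2.1))
\end{verbatim}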
The variable $\zeta$ used in the bivariate empirical relation in \cite{2019PhRvD.100d4047T} depends on the tidal deformabilities of both stars, as well as on $M^{\mathrm{TOV}}_{\rm max}$. Determining $\zeta$ through a measurement of $f_{\rm peak}$ does not lead to a \textit{direct} constraint on the tidal deformability $\Lambda_{\rm x}$ at a specific mass (but indirect constraints could be inferred). A bivariate relation of the form $f_{\rm peak}(\Lambda_{\rm x})$ can be expected, since there exists a direct relation $\Lambda_{\rm x}(R_{\rm x})$, as demonstrated in \cite{2018PhRvL.120q2703A} for the particular case of $\Lambda_{\rm 1.4}$ (see also \cite{De2018}). Indeed, we find such a relation in Section \ref{LaX}. Even tighter empirical relations than the bivariate $f_{\rm peak}(\Lambda_{\rm x})$ relation discussed in Section \ref{LaX} can be obtained by adding another variable, i.e.\ by constructing relations of the form $f_{\rm peak}(\Lambda_{\rm x}, M_{\rm chirp})$. Such a multivariate relation can also be constructed using the mass-weighted tidal deformability $\tilde \Lambda$. We thus seek relations of the form \begin{equation} f_{\rm peak}M_{\rm chirp} = b_0 + b_1 M_{\rm chirp} + b_2 \Lambda^{-1/2}, \label{eq:fLM:Ltilde} \end{equation} where $\Lambda$ is a placeholder for either $\tilde \Lambda$ or $\Lambda_{\rm x}$. The exponent of $-1/2$ in the last term was determined empirically. We chose $M_{\rm chirp}$ instead of $M_{\rm tot}$ in \cite{CORE1,2019PhRvD.100d4047T}, since it is better constrained by observations. In this section we will only use the CFC/SPH dataset. \subsection{Empirical relations using $\tilde{\Lambda}$} \label{subsecempL} For $\tilde{\Lambda}$ and using the {\it whole set} of models, including both equal and unequal mass configurations, the empirical relation for the frequency $f_{\rm peak}$ is \begin{equation} f_{\rm peak}M_{\rm chirp} = 1.392 - 0.108 M_{\rm chirp} + 51.70 \tilde{\Lambda}^{-1/2}, \label{fpeak2DL} \end{equation} with a maximum residual corresponding to 0.302 kHz in terms of the frequency $f_{\rm peak}$ and $R^2=0.985$. The corresponding surface and data points are shown in Fig. \ref{fig:fLM:Ltilde}. Restricting to equal-mass configurations, one obtains comparable (only slightly better) values for the maximum residual and $R^2$ of the fit. Moreover, restricting to a bivariate relation of the type $f_{\rm peak}(\tilde \Lambda)$ (motivated by the bivariate relations found in \cite{CORE1,2019PhRvD.100d4047T}) one obtains a relation (the inverse of the $\tilde \Lambda(f_{\rm peak})$ fit discussed below in Section \ref{LtildeEmp}), which has a similar maximum residual and $R^2$ as for the multivariate fit (\ref{fpeak2DL}) and is comparable with the fits in \cite{CORE1,2019PhRvD.100d4047T}. Thus, for the relation between $f_{\rm peak}$ and $\tilde \Lambda$ there exists no obvious advantage in using a multivariate relation of the form (\ref{fpeak2DL}), but this changes, when we consider tidal deformabilities at specific masses, $\Lambda_{\rm x}$, as we show below. Note that the accuracy can increase significantly, if one considers setups with a fixed total binary mass. In \cite{Bauswein2019a} the maximum residual when using $\Lambda_{1.35}$ was found to be only of order 100Hz for symmetric binaries of 1.35+1.35$M_\odot$ employing a large set of purely hadronic EOS. \begin{figure*}[!] 
\includegraphics[width=14cm]{./figures-final/surface_fLM_Ltilde_fpeak_e0-fM.pdf} \caption{Empirical surfaces for $f_{\rm peak}$ using the chirp mass $M_{\rm chirp}$ and the tidal deformability $\tilde{\Lambda}$. The red dots correspond to the CFC/SPH data. The left figure corresponds to all models in the dataset and the right figure corresponds to equal mass models only.} \label{fig:fLM:Ltilde} \end{figure*} \subsection{Empirical relations using different $\Lambda_{\rm x}$} We construct empirical relations for $f_{\rm peak}$ using different $\Lambda_x$, where $x=1.4, 1.6$ and 1.8. Here, we present the relations only for the whole set of models, including both equal and unequal-mass models (restricting to equal-mass models only yields slightly better fits). For $\Lambda_{1.4}$ the empirical relation is \begin{equation} f_{\rm peak}M_{\rm chirp} = -4.015 + 4.490 M_{\rm chirp} + 47.14 \Lambda_{1.4}^{-1/2}, \label{fpeak2DL1.4} \end{equation} with a maximum residual of 0.452 kHz in terms of the frequency $f_{\rm peak}$ and $R^2=0.971$. We note that neglecting the exponent of $-1/2$ in the last term of (\ref{fpeak2DL1.4}) gave a slightly better fit, but we keep this exponent for uniformity with the corresponding relations for higher masses. For $\Lambda_{1.6}$ the empirical relation is \begin{equation} f_{\rm peak}M_{\rm chirp} = -3.922 + 4.528 M_{\rm chirp} +28.35 \Lambda_{1.6}^{-1/2}, \end{equation} with a maximum residual of 0.373 kHz in terms of the frequency $f_{\rm peak}$ and $R^2=0.973$ (see left panel of Fig. \ref{fig:fLM:L16}) and for $\Lambda_{1.8}$ the empirical relation is \begin{equation} f_{\rm peak}M_{\rm chirp} = -3.73 + 4.548 M_{\rm chirp} +15.94 \Lambda_{1.8}^{-1/2}, \end{equation} with a maximum residual of 0.283 kHz in terms of the frequency $f_{\rm peak}$ and $R^2=0.967$ (see right panel of Fig. \ref{fig:fLM:L16}). \begin{figure*}[!] \includegraphics[width=8.5cm]{./figures-final/surface_fLM_L16_fpeak_e0-fM.png} \includegraphics[width=8.5cm]{./figures-final/surface_fLM_L18_fpeak_e0-fM.png} \caption{{\it Left panel:} Multivariate empirical relation for $f_{\rm peak}$ using the tidal deformability $\Lambda_{1.6}$ and the chirp mass $M_{\rm chirp}$. The red dots correspond to the CFC/SPH data. {\it Right panel:} Same as left panel, but with $\Lambda_{1.8}$.} \label{fig:fLM:L16} \end{figure*} \section{Empirical relations for Tidal Deformabilities using $f_{\rm peak}$} \label{sec:Lfpeak} We construct multivariate empirical relations for the tidal deformabilities $\tilde{\Lambda}$ and $\Lambda_{x}$ with $x=1.4,1.6$ and 1.8. The relation for $\tilde{\Lambda}$ has the form \begin{equation} \tilde{\Lambda} = b_0 + b_1 M_{\rm chirp} f_{\rm peak} + b_2 f_{\rm peak}^{-2}, \label{eq:LfM:Ltilde} \end{equation} whereas the relations for different $\Lambda_{x}$ are of the form \begin{equation} \Lambda_{\rm x} = b_0 + b_1 M_{\rm chirp} + b_2 f_{\rm peak} + b_3 f_{\rm peak}^2, \label{eq:LfM:Lx} \end{equation} (the above forms represent optimal choices among a number of different versions that we investigated). In addition, we explore bivariate relations of the form $\tilde \Lambda (f_{\rm peak}M_{\rm chirp})$ and $ \Lambda_{x} (f_{\rm peak}/M_{\rm chirp})$, in which the product $f_{\rm peak}M_{\rm chirp}$ or the ratio $f_{\rm peak}/M_{\rm chirp}$, respectively, is treated as a single variable.
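Fits of these forms are obtained by standard linear least squares (since the relations are linear in the coefficients $b_i$); the sketch below illustrates this for the form (\ref{eq:LfM:Ltilde}), using placeholder arrays rather than our simulation data.
\begin{verbatim}
import numpy as np

# Sketch: obtain a fit of the form Lambda_tilde = b0 + b1*M_chirp*f_peak
# + b2*f_peak**-2 (Eq. eq:LfM:Ltilde) by linear least squares.
# Placeholder data only; f_peak in kHz, M_chirp in solar masses.

f_peak  = np.array([2.6, 2.9, 3.1, 3.3, 3.6])        # kHz (placeholder)
m_chirp = np.array([1.08, 1.17, 1.17, 1.27, 1.33])    # Msun (placeholder)
lam_til = np.array([1250., 900., 760., 520., 380.])   # placeholder

# Design matrix: one column per basis function of the fit.
A = np.column_stack([np.ones_like(f_peak), m_chirp * f_peak, f_peak ** -2])
b, *_ = np.linalg.lstsq(A, lam_til, rcond=None)

pred = A @ b
max_residual = np.max(np.abs(lam_til - pred))
r2 = 1.0 - np.sum((lam_til - pred) ** 2) / np.sum((lam_til - lam_til.mean()) ** 2)
print(b, max_residual, r2)
\end{verbatim}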
\subsection{Empirical relations for $\tilde{\Lambda}$} \label{LtildeEmp} For $\tilde{\Lambda}$ and using the subset of {\it equal-mass} configurations, the empirical relation using the frequency $f_{\rm peak}$ is \begin{equation} \tilde{\Lambda} = -1434 + 120.1 M_{\rm chirp} f_{\rm peak} + 18053 f_{\rm peak}^{-2}, \end{equation} with a maximum residual of 315.8 and $R^2=0.985$, whereas using the {\it whole set} of models, including both equal and unequal mass configurations, the empirical relation is \begin{equation} \tilde{\Lambda} = -1344 + 108.9 M_{\rm chirp} f_{\rm peak} + 17208 f_{\rm peak}^{-2}, \end{equation} with a maximum residual of 433.1 and $R^2=0.975$ (see top left panel of Fig. \ref{fig:LfM:Ltilde2D}). \begin{figure*} \includegraphics[width=16cm]{./figures-final/surface_LfM_all.png} \caption{\textit{Top row:} Multivariate empirical relations (left)\ and bivariate empirical relations (right) for $\tilde{\Lambda}$. Both have comparable accuracy. \textit{Bottom row:} Multivariate (left)\ and bivariate (right) empirical relations for $\Lambda_{1.6}$. The multivariate relation has a significantly smaller maximum residual than the bivariate relation. Red dots correspond to the CFC/SPH data. } \label{fig:LfM:Ltilde2D} \end{figure*} For $\tilde \Lambda$ we also construct a bivariate empirical relation of the form \begin{equation} \tilde{\Lambda} = b_0 e^{-z/b_1}, \label{tildeLuni} \end{equation} where the product $z=f_{\rm peak} M_{\rm chirp}$ is treated as a single variable (this is motivated by the existence of bivariate relations of the form $z( \kappa_2^T)$ in \cite{CORE1} and $z(\tilde \Lambda)$ or $z(\zeta)$ in \cite{2019PhRvD.100d4047T}, but we use a different functional form of the fit, which gave a smaller residual). For the subset of {\it equal-mass} configurations, we find $b_0=36014$ and $b_1=0.836$, with a maximum residual of 325.5 and $R^2=0.979$, whereas for the {\it whole set} of models, including both equal and unequal mass configurations, we find $b_0=37096$ and $b_1=0.817$, with a maximum residual of 403.1 and $R^2=0.969$ (see top right panel of Fig. \ref{fig:LfM:Ltilde2D}). We thus find that the bivariate empirical relation of the form (\ref{tildeLuni}) is of comparable accuracy to the multivariate empirical relation of the form (\ref{eq:LfM:Ltilde}) and the latter does not have an advantage over the former, as anticipated by the results of Section \ref{subsecempL}. \subsection{Empirical relations for $\Lambda_{x}$} \label{LaX} Next, we construct multivariate empirical relations of the form (\ref{eq:LfM:Lx}) for different $\Lambda_x$, where $x=1.4, 1.6$ and 1.8. Here, we present the relations only for the whole set of models, including both equal and unequal-mass models (restricting to equal-mass models only yields fits of comparable accuracy). The empirical relations we construct are:\begin{equation} \Lambda_{1.4} = 5083+ 1588M_{\rm chirp} - 3787 f_{\rm peak} + 535.7 f_{\rm peak}^2, \label{L1.4} \end{equation} (maximum residual of 185.4 and $R^2=0.958$), \begin{equation} \Lambda_{1.6} = 2417 + 770.2 M_{\rm chirp} - 1841 f_{\rm peak} + 262.9 f_{\rm peak}^2, \label{L1.6} \end{equation} (maximum residual of 99.85 and $R^2=0.964$, see bottom left panel of Fig. \ref{fig:LfM:Ltilde2D}), and\begin{equation} \Lambda_{1.8} = 1253 + 398.7 M_{\rm chirp} - 982.8 f_{\rm peak} + 143.2 f_{\rm peak}^2, \label{L1.8} \end{equation} (maximum residual of 74.35 and $R^2=0.933$). Notice that the maximum residual in $\Lambda_x$ becomes smaller as the target mass increases.
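As an illustration, the sketch below evaluates the multivariate relation (\ref{L1.6}) for a hypothetical measurement; frequencies are assumed to be in kHz and masses in $M_\odot$, and the inputs are placeholders rather than results of this work.
\begin{verbatim}
# Sketch: Lambda_1.6 from the multivariate fit of Eq. (L1.6),
# Lambda_1.6 = 2417 + 770.2*M_chirp - 1841*f_peak + 262.9*f_peak**2,
# with f_peak in kHz and M_chirp in solar masses.

def lambda_16(f_peak_khz, m_chirp):
    return 2417.0 + 770.2 * m_chirp - 1841.0 * f_peak_khz + 262.9 * f_peak_khz ** 2

# Hypothetical detection: f_peak = 2.8 kHz, M_chirp = 1.175 Msun; the quoted
# maximum residual (99.85) bounds the worst-case deviation of the prediction.
print(lambda_16(2.8, 1.175))
\end{verbatim}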
Finally, we construct bivariate empirical relations of the form $\Lambda_x(u)$, where the ratio $u=f_{\rm peak}/ M_{\rm chirp}$ is treated as a single variable. The empirical relations are \begin{equation} \Lambda_{1.4} = 12845 e^{-u/0.77}, \label{bL1.4} \end{equation} (maximum residual of 345.4 and $R^2=0.92$), \begin{equation} \Lambda_{1.6} = 7251 e^{-u/0.703}, \label{bL1.6} \end{equation} (maximum residual of 187.4 and $R^2=0.931$, see bottom right panel of Fig. \ref{fig:LfM:Ltilde2D}), and \begin{equation} \Lambda_{1.8} = 4977e^{-u/0.612}, \label{bL1.8} \end{equation} (maximum residual of 107.5 and $R^2=0.911$). The multivariate empirical fits (\ref{L1.4}) $-$ (\ref{L1.8}) have a maximum residual for $\Lambda_x$ that is consistently roughly half of the corresponding maximum residual for the bivariate fits (\ref{bL1.4}) $-$ (\ref{bL1.8}). This allows for an accurate determination of the tidal deformability at specific masses, $\Lambda_x$, through the observables $f_{\rm peak}$ and $M_{\rm chirp}$, which would then place direct constraints on the EOS. This is complementary (and of similar accuracy) to the accurate determination of radii at specific masses, $R_x$, which we presented in Sections \ref{sec:rad} and \ref{sec:mr}. We note that further reduction of the maximum residual can be attained for certain fixed chirp masses (or fixed total masses), essentially taking slices of the empirical surface in Fig. \ref{fig:LfM:Ltilde2D} for fixed $M_{\rm chirp}$ (which will be known to high accuracy from the inspiral phase). Such relations for fixed binary setups, as shown in \cite{Bauswein2019a}, should ultimately be used for constraints on the tidal deformability from $f_\mathrm{peak}$ because they yield the smallest scatter, which determines the systematic error. Binary masses can be accurately measured for events where postmerger GWs are detectable. \section{Discussion and conclusions}\label{sec:sum} In this paper we explore empirical relations for distinct postmerger GW frequencies of BNSs such that they can be directly implemented in GW data analysis procedures for parameter estimation. These frequencies are extracted from a large representative sample of BNS merger simulations for different binary mass configurations and model EOSs. We employ results from two different catalogues of simulations, which are based on different numerical codes. We focus on relations between postmerger GW frequencies, the chirp mass of the binary system and NS radii. The latter are determined by the incompletely known EOS, and we investigate radii of different fiducial NS masses to characterize different density ranges of the EOS. Since the binary mass ratio $q$ may not be measured with high precision, our complete set of models includes binaries within a relatively large range of mass ratios. To approximately assess the impact of the mass ratio, we derive empirical relations also for equal-mass mergers only and find, unsurprisingly, tighter relations. This demonstrates that, if available, information on $q$ should be included in such empirical relations. Because we aim at GW data analysis applications, we derive two separate sets of relations. In one set, the GW frequencies are the dependent variables. This type of relation can be implemented to predict the expected postmerger GW signal for given EOS models (Sect.~\ref{sec:freq}) and may be linked to EOS information from the GW inspiral phase. The maximum residuals found for our relations may be used to quantify the uncertainties (or to define priors in other types of analyses).
For another set of relations, NS radii are treated as dependent variables. These relations can be employed to determine NS radii from the measurement of postmerger GW frequencies (Sect.~\ref{sec:rad}). By using a large sample of BNS simulations we can assess the quality of the individual empirical relations, which we obtain by least-square fits. We quantify the accuracy of these relations by the maximum residual. This deserves a comment. The maximum residual is the most meaningful figure of merit for an empirical relation because any other statistical measure could be strongly biased by the chosen sample of underlying models. This is because the data for constructing the fits do not follow a statistical distribution, but they are simply given by the available models for the EOS and chosen simulation setups. We caution that even if one uses some sort of parametrization of the EOS, it is not obvious that one can employ other statistical measures to assess the quality of an empirical relation. It is not clear which distribution the parameters should follow in order to be representative unless they can be physically motivated. Moreover, the space of EOS parameters is mapped in a non-trivial way on NS properties and GW frequencies. Obviously, also the maximum residual depends on the underlying data. But we expect that by employing a very large sample of models, the data will contain the most extreme outliers. Then, the maximum residual provides a meaningful upper limit on the uncertainties and by how much the true value could at most deviate from the fit. Our main findings can be summarized as follows. (1) We find generally tight relations between postmerger GW frequencies, the chirp mass and NS radii. Typically maximum residuals are of the order of 300~Hz (or a few hundred meters if NS radii are the dependent quantity). (2) Apart from tight relations for the dominant postmerger GW frequency, we confirm the existence of two separate empirical relations for two distinct subdominant peaks of the postmerger GW spectrum, in agreement with~\cite{Bauswein2015,Bauswein2016}. These findings are in tension with the interpretation of~\cite{Takami2014,Takami2015} that a single universal function is sufficient to describe the behavior of subdominant peaks in the postmerger GW spectrum. (Slight disagreements of up to a few 100~Hz between the frequencies of secondary peaks predicted by fit formulae in~\cite{Bauswein2015} on one hand and the data in~\cite{Rezzolla2016} on the other hand are fully compatible with the scatter of the fit formulae in~\cite{Bauswein2015} and the maximum residuals we observe in this study for a larger set of models.) The existence of two distinct subdominant peaks, and thus corresponding relations, is impressively corroborated by a machine learning algorithm, which identified three different classes of postmerger spectra in remarkable agreement with the classification scheme introduced in~\cite{Bauswein2015}. The machine-learning method employed here may be used for an automated identification of the type of postmerger spectrum in numerical simulations or in future GW data analysis application. (3) For most relations investigated here those with the dominant postmerger frequency $f_\mathrm{peak}$ yield the smallest maximum residual in comparison to relations where the subdominant peaks were used. 
This stresses the importance of $f_\mathrm{peak}$ for EOS constraints, considering also that secondary peaks may be harder to measure (because of their lower signal-to-noise ratio) and may yield larger statistical errors in a measurement because of their generally larger width in comparison to $f_\mathrm{peak}$. (4) Our study also confirms that radii of high-mass NSs are more suitable to describe the EOS dependence of postmerger frequencies~\cite{Bauswein2012a}. We compare empirical relations for $R_{1.2}$, $R_{1.4}$, $R_{1.6}$ and $R_{1.8}$, i.e.\ we characterize a given EOS by the radius of nonrotating NSs with different masses of 1.2~$M_\odot$, 1.4~$M_\odot$, 1.6~$M_\odot$ and 1.8~$M_\odot$. The radii of nonrotating NSs with different masses represent integral characteristics of the EOS in different density regimes, i.e.\ high-mass NSs reflect the EOS behavior at higher densities. Empirical relations for postmerger frequencies with $R_{1.6}$ or $R_{1.8}$ lead to systematically smaller maximum residuals considering the full range of binary masses. This behavior had already been observed in~\cite{Bauswein2012a} and explained by the fact that during merging the densities increase, which is why high-mass NSs better represent the density regime of the postmerger remnant and thus its GW emission. The confirmation of this finding is important because the inspiral GW signal of a BNS constrains the EOS regime of the two coalescing stars, which in most cases are expected to be NSs with moderate masses. Moreover, the finite-size effects of high-mass NSs decrease in magnitude and are thus harder to measure with good accuracy. Hence, measuring radii of high-mass NSs through the postmerger phase provides complementary information on the high-density regime of the EOS. (5) Constructing different fits in this work, we recognize, not unexpectedly, that it is meaningful to restrict the parameter range, because this leads to tighter relations and thus implies smaller uncertainties in applications of these relations, e.g.\ for radius measurements. For instance, we find that considering only equal-mass mergers leads to tighter fits. In this context we briefly comment on the analysis of~\cite{Kiuchi2019a} who find a somewhat larger scatter between $f_\mathrm{peak}$ and NS radii compared to previous results. This observation, however, is entirely a consequence of including unequal-mass binary configurations as well as equal-mass binaries, i.e.\ of a variation of the binary mass ratio comparable to that inferred for GW170817 (i.e.\ between about 0.7 and 1.0). Considering, for instance, only the equal-mass results of~\cite{Kiuchi2019a} yields relations similarly tight to those in previous studies. In future events which will allow the extraction of postmerger GW frequencies, the binary mass ratio can be expected to be measured with significantly better precision than for GW170817~\cite{Farr2016}. We thus expect that empirical relations of the form we employ here, specialized to certain ranges for the binary mass ratio, would still have maximum residuals comparable to what is found in previous and the present works. We also emphasize that Ref.~\cite{Kiuchi2019a} uses a simplified description of the NS crust. The crust EOS in fact is known with better precision. This explains a quantitative bias between the GW frequencies in~\cite{Kiuchi2019a} and the previous fitting formulas in~\cite{Bauswein2012a}, which are based on a proper description of the crust material.
We expect that GW frequencies are, to good approximation, unaffected by the description of the crust EOS, while TOV solutions and thus NS radii do change by several 100 m if a simplified NS crust is employed. Correcting this systematic underestimation of NS radii by the crust treatment, one finds that the equal-mass data of~\cite{Kiuchi2019a} are in excellent agreement with the fit formula in~\cite{Bauswein2012a}. We remark that a similar issue arises for the relations presented in~\cite{Takami2014,Takami2015,Rezzolla2016}, where the simplified crust treatment also introduces a quantitative bias, which implies that the resulting fit formulae cannot be directly applied for comparisons or for radius/EOS constraints. Instead, the systematic shift of the TOV solutions should be removed for real applications. Restricting our sample of models to a smaller range in the chirp mass yields smaller maximum residuals. This is not unexpected considering previous results in the literature, which often focused on fixed binary mass configurations and found generally smaller deviations. Recall that the chirp mass is measured with very good precision from the GW inspiral phase. We also anticipate that including additional constraints on the possible EOSs will result in more accurate fits with smaller maximum residuals. We do not further elaborate on these considerations because in this study we want to quantify the maximum possible deviations from empirical relations for the postmerger GW emission. We expect to obtain robust upper limits by considering the largest possible set of models, which likely includes the most extreme, and possibly unrealistic, cases. We thus study here the worst-case scenario and stress that in future measurements significant improvements are anticipated if additional limits on the parameter range (mass ratio, chirp mass, EOS) are taken into account. Notice that a few of the EOS we use are somewhat (but not dramatically) disfavoured by the EOS constraints inferred from the GW170817 event (the radii for typical neutron star masses are about 1 km larger than the 90\% credibility constraints in \cite{PhysRevLett.121.161101}). When tighter EOS constraints from the inspiral phase (or from other observational methods) become available, this will reduce the available parameter space, leading to improved empirical relations for the post-merger phase. (6) As another important step to assess the maximum residuals and the quality and reliability of empirical relations for the dominant postmerger frequency $f_{\rm peak}$, we construct fits based on two independent catalogues of models (CFC/SPH and CoRe). We do not find significant systematic differences between the two data sets, which is important because the codes are based on different numerical methods and slightly differ with regard to the implemented physical model. We also observe that the maximum residuals do not appreciably change if we add the second data set to our baseline models (CFC/SPH). This may indicate that the maximum residuals determined in this study are approximately converged. (7) We confirm the existence of a bivariate empirical relation for $\tilde \Lambda$, e.g.~\cite{CORE1,Takami2015,2019PhRvD.100d4047T}. For the tidal deformability at specific masses $\Lambda_x$ (which is related to the radius at specific masses $R_x$) we find accurate \textit{multivariate} empirical relations, which can lead to tight constraints on the EOS.
The empirical relations involving the tidal deformability can actually be improved by fixing the chirp mass (or total binary mass), as demonstrated in \cite{Bauswein2019a}. We conclude by mentioning a few caveats of our study and by describing directions for future research. The data sets which we employ for constructing fits are based on a large sample of models but not on a systematic variation of the model parameters, which, in particular for the EOS, is not trivial to realize. Hence, the derived fits as well as the corresponding maximum residuals may be to some extent biased by the available models, for instance because 1.35-1.35~$M_\odot$ binary models are over-represented, as a very common configuration. It may be interesting to choose merger models in a more systematic way, to check whether the current study is prone to selection effects. We also emphasize that the occurrence of a strong first-order phase transition (not included in the present study) can lead to a significant increase of the postmerger frequencies and thus to deviations from the empirical relations which are based on models without strong phase transitions \cite{Bauswein2019a}. This also deserves more attention in future work. Finally, this study highlights the potential of machine learning for the recognition of specific types of postmerger spectra, which are linked to the underlying dynamics. Future work should explore whether these algorithms work in GW data analysis of actual events. \acknowledgements We are thankful to Luca Baiotti, Gabriele Bozzola, Tim Dietrich, Jocelyn Read and Kostas Kokkotas for comments. AB acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 759253, by the Sonderforschungsbereich SFB 881 ``The Milky Way System'' and the Sonderforschungsbereich SFB 1245 ``Nuclei: From fundamental interactions to structure and stars'' of the German Research Foundation (DFG) and the Klaus-Tschira Foundation. NS is supported by the ARIS facility of GRNET in Athens (GWAVES, GRAVASYM and SIMGRAV allocations). We are grateful for networking support through the COST\ actions CA16214 ``PHAROS'', CA16104 ``GWVerse'', CA17137 ``G2Net'' and CA18108 ``QG-MM''.
\section{Introduction} The quark-gluon plasma (QGP) has been established as a relativistic fluid through experimental and theoretical efforts and discoveries in the past two decades. The dynamical QCD system created in high-energy nuclear collisions is quantified as a nearly-perfect fluid with extremely small yet finite viscosity. The beam energy scan programs at the BNL Relativistic Heavy Ion Collider have been performed in recent years to explore the QCD phase diagram. For a comprehensive analysis of the quark matter in such collisions, one has to take account of finite net baryon density and off-equilibrium corrections in the hydrodynamic model \cite{Monnai:2012jc,Denicol:2018wdp}. On the other hand, relativistic hydrodynamics is known to have the long-standing issue of frame fixing in a system with non-vanishing conserved currents. The two most common frame choices are the Landau frame, where the local rest frame of energy flow is used for the hydrodynamic flow \cite{Landau:1959}, and the Eckart frame, where that of conserved charge flow is used \cite{Eckart:1940te}. In this study, I investigate relativistic dissipative hydrodynamics in the Landau and Eckart frames \cite{Monnai:2019jkc}. The stability and causality conditions are examined in both frames. Numerical simulations are then performed to quantify the effects of frame choice on hydrodynamic variables as well as rapidity distributions in nuclear collisions. \section{Stability and causality of relativistic dissipative hydrodynamics} The relativistic fluid with a single conserved charge is considered. The hydrodynamic equations of motion consist of conservation laws $\partial_\mu T^{\mu \nu} = 0$ and $\partial_\mu N^\mu = 0$, and constitutive relations. Here the energy momentum tensor and the conserved charge current can be decomposed as $T^{\mu \nu} = e_L u_L^\mu u_L^\nu - (P_L + \Pi_L) \Delta_L^{\mu \nu} + \pi_L^{\mu \nu}$ and $N^\mu = n_L u_L^\mu + V_L^\mu$ using the flow $u_L^\mu$ in the Landau frame. The projection operator is defined as $\Delta^{\mu \nu} = g^{\mu \nu} - u^\mu u^\nu$. $e$ is the energy density, $P$ is the hydrostatic pressure, $\Pi$ is the bulk pressure, $\pi^{\mu \nu}$ is the shear stress tensor, $n$ is the conserved charge density, and $V^\mu$ is the diffusion current. The subscript $L$ denotes the quantities defined in the Landau frame. Hereafter the shear and bulk viscosities are neglected to focus on the vector dissipative currents. The full second-order constitutive relations from the extended Israel-Stewart theory \cite{Israel:1979wp, Monnai:2010qp} read \begin{align} V_L^\mu &= \kappa_V \nabla_L^\mu \frac{\mu}{T} - \tau_V (\Delta_{L})^{\mu}_{\ \nu} D_L V_L^\nu + \chi_V^a V_L^\mu D_L \frac{\mu}{T} \nonumber \\ &+ \chi_V^b V_L^\mu D_L \frac{1}{T} + \chi_V^c V_L^\mu \nabla^L_\nu u_L^\nu + \chi_V^d V_L^\nu \nabla^L_\nu u_L^\mu + \chi_V^e V_L^\nu \nabla_L^\mu u^L_\nu, \label{eq:diffusion} \end{align} in the Landau frame, where $D = u^\mu \partial_\mu$ and $\nabla^\mu = \Delta^{\mu \nu} \partial_\nu$. $\kappa_V$ is the conductivity, $\tau_V$ is the relaxation time of the diffusion current, and $\chi_V$ are the other second-order transport coefficients. Hydrodynamic modes can be obtained by taking the plane wave perturbation $\delta Q = \delta \bar{Q} e^{i(\omega t - kx)}$ from global equilibrium where $Q$ is a macroscopic variable \cite{Hiscock:1987zz}.
The causality condition $| \partial \mathrm{Re} (\omega)/\partial k | \leq~1$ and the stability condition $\mathrm{Im} (\omega) \geq 0$ are satisfied when $\kappa_V\geq 0$ and $\tau_V \geq 0$ in the long wavelength limit. The detailed calculation can be found in Ref.~\cite{Monnai:2019jkc}. The energy-momentum tensor and conserved charge current in the Eckart frame, on the other hand, are decomposed as $T^{\mu \nu} = e_E u_E^\mu u_E^\nu - (P_E + \Pi_E) \Delta_E^{\mu \nu} + W_E^\mu u_E^\nu + W_E^\nu u_E^\mu + \pi_E^{\mu \nu}$ and $N^\mu = n_E u_E^\mu$. Here $W^\mu$ is the energy dissipation current. The subscript $E$ is used to denote the quantities defined in the Eckart frame. The second-order constitutive relation is \begin{align} W_E^\mu &= - \kappa_W \bigg( \nabla_E^\mu \frac{1}{T} + \frac{1}{T} D_E u_E^\mu \bigg) - \tau_W (\Delta_{E})^{\mu}_{\ \nu} D_E W_E^\nu + \chi_W^a W_E^\mu D_E \frac{\mu}{T} \nonumber \\ &+ \chi_W^b W_E^\mu D_E \frac{1}{T} +
\chi_W^c W_E^\mu \nabla^E_\nu u_E^\nu + \chi_W^d W_E^\nu \nabla^E_\nu u_E^\mu + \chi_W^e W_E^\nu \nabla_E^\mu u^E_\nu, \label{eq:dissipation} \end{align} where $\kappa_W$ is the energy conductivity, $\tau_W$ is the relaxation time of the energy dissipation, and $\chi_W$ are the second-order transport coefficients. The mode analyses indicate that the causality and stability conditions are satisfied when $\kappa_W\geq 0$ and $\tau_W - \kappa_W/(e+P)T\geq 0$ in the long wavelength limit. The correspondences between the transport coefficients are obtained by identifying the entropy production $\partial_\mu s^\mu$ in the two frames. They are expressed as \begin{align} \kappa_V &= \kappa_W \bigg( \frac{n}{e+P} \bigg)^2, \ \ \tau_V = \tau_W - \frac{\kappa_W}{(e+P)T}, \ \ \chi_V^a = \chi_W^a - \frac{\tau_W nT}{e+P}, \nonumber \\ \chi_V^b &= \chi_W^b + \tau_W T - \frac{\kappa_W}{e+P}, \ \ \chi_V^c = \chi_W^c + \frac{\kappa_W}{(e+P)T}, \ \ \chi_V^d = \chi_W^d + \frac{\kappa_W}{(e+P)T}, \ \ \chi_V^e = \chi_W^e . \label{rel} \end{align} They suggest that the causality and stability conditions in the Landau and the Eckart frames are strongly related because the conditions set on $\kappa_V$ and $\tau_V$ are equivalent to those on $\kappa_W$ and $\tau_W$. \section{Numerical simulations for nuclear collisions} \begin{figure}[tb] \includegraphics[width=.45\textwidth]{netbaryon.pdf} \includegraphics[width=.45\textwidth]{flow.pdf} \ \\[-28pt] \caption{(a) The net baryon density and (b) the difference between the flow and space-time rapidities at the initial time (thin solid line) compared to those during ideal (thick solid line), baryon diffusive (dashed line), and energy dissipative (dotted line) hydrodynamic evolutions at $\tau = 20$ fm/$c$.} \label{fig:1} \end{figure} The hydrodynamic model of relativistic nuclear collisions is developed in the Landau and Eckart frames in a (1+1)-dimensional geometry for numerical comparison. The conserved charge in the system is the net baryon number. The initial conditions at $\tau_\mathrm{th}=3$ fm are parametrically constructed in accordance with the SPS data of 17.3 GeV Pb+Pb collisions. The equation of state is from \textsc{neos} B \cite{Monnai:2019hkn,Monnai:2021kgu}. $\kappa_W = 10 (e+P)$, $\tau_W = 2 \kappa_W / (e+P)T$, and $\chi_W^{a,b,c,d,e} = 0$ are used in the Eckart frame and converted for the use in the Landau frame with the relations (\ref{rel}). Figure \ref{fig:1} shows the space-time rapidity distributions of the net baryon density and the deviation of the flow from boost-invariance at $\tau = 20$ fm. The latter quantity is defined using the flow rapidity $Y_f$ as $Y_f-\eta_s$. The effects of the baryon diffusion and the energy dissipation on the net baryon density distribution are similar, exhibiting little frame dependence. On the other hand, the flow is shown to be sensitive to the frame choice. Figure \ref{fig:2} illustrates the rapidity distributions of charged particles and net baryon number calculated using the Cooper-Frye formula with off-equilibrium corrections to the phase-space distributions \cite{Monnai:2010qp}. It indicates that the latter is visibly affected by diffusion and dissipation processes, though the frame dependence on the observable would be small. 
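As a compact check of the correspondences (\ref{rel}) and of the stability conditions quoted above, the sketch below converts the Eckart-frame coefficient choices used in the simulations ($\kappa_W = 10(e+P)$, $\tau_W = 2\kappa_W/(e+P)T$) to the Landau frame; the thermodynamic inputs are placeholder numbers in natural units, not values from the hydrodynamic evolution.
\begin{verbatim}
# Sketch: convert Eckart-frame transport coefficients to the Landau frame
# using Eq. (rel) and verify the long-wavelength stability/causality
# conditions quoted in the text. Thermodynamic inputs are placeholders.

def eckart_to_landau(kappa_w, tau_w, e, p, n, temp):
    h = e + p                                # enthalpy density e + P
    kappa_v = kappa_w * (n / h) ** 2         # charge conductivity
    tau_v = tau_w - kappa_w / (h * temp)     # relaxation time of V^mu
    return kappa_v, tau_v

# Coefficient choices of the text: kappa_W = 10 (e+P), tau_W = 2 kappa_W/((e+P)T)
e, p, n, temp = 10.0, 3.0, 1.0, 0.3          # placeholder fluid state
kappa_w = 10.0 * (e + p)
tau_w = 2.0 * kappa_w / ((e + p) * temp)

kappa_v, tau_v = eckart_to_landau(kappa_w, tau_w, e, p, n, temp)
print(kappa_v, tau_v)

# Landau frame: kappa_V >= 0 and tau_V >= 0;
# Eckart frame: kappa_W >= 0 and tau_W - kappa_W/((e+P)T) >= 0.
assert kappa_v >= 0.0 and tau_v >= 0.0
assert kappa_w >= 0.0 and tau_w - kappa_w / ((e + p) * temp) >= 0.0
\end{verbatim}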
\begin{figure}[tb] \includegraphics[width=.45\textwidth]{dnchdy_df.pdf} \includegraphics[width=.45\textwidth]{dnbdy_df.pdf} \ \\[-28pt] \caption{The rapidity distributions of (a) charged particles and (b) net baryon number at freeze-out after ideal hydrodynamic evolution (solid line) compared to those after baryon diffusive evolution in the Landau frame (dashed line) and energy dissipative evolution in the Eckart frame (dotted line).} \label{fig:2} \end{figure} \section{Discussion and summary} The stability and causality conditions are investigated for the full second-order relativistic hydrodynamics in the Landau and Eckart frames. Numerical analyses for heavy-ion collisions imply that the frame choice has visible effects on the hydrodynamic flow but not on the particle distributions. It is noteworthy that second-order accuracy is required for the discussion of stability and causality even in the first-order theories because the first-order terms may have second-order differences that could be interpreted as a relaxation term when the identity $(e+P)Du^\mu = \nabla^\mu P - W^\mu \nabla_\nu u^\nu - W^\nu \nabla_\nu u^\mu - \Delta^{\mu \nu} DW_\nu $ is used \cite{Monnai:2019jkc}. Future prospects include analyses of boosted systems, the interplay with shear and bulk viscosities, and (3+1)D hydrodynamic evolution.
\section{Introduction} Twisted photons are photons with a shaped wavefront with swirling local momentum or swirling Poynting vectors about a vortex line~\cite{Andrews-book,FrankeArnold_2017}. Due to the swirling wave vector, the intrinsic total angular momentum (AM) of the twisted photon along the direction of propagation is $m_\gamma \hbar$, where $m_\gamma$ can be any integer. Processes initiated by twisted photons follow enhanced AM selection rules~\cite{Afanasev:2013kaa,2014PhRvA..90a3425S} different from those for plane-wave photons. These selection rules have been confirmed by experiments with cold trapped $^{40}$Ca ions~\cite{2016NatCo...712998S, Afanasev_2018}. The swirling local momentum of the twisted photon can give significant transverse momentum to the final state, as pointed out by Barnett and Berry~\cite{Barnett_08,Barnett_2013}. Near a vortex in a monochromatic light beam, the length of the local wave vector, or local momentum, can in fact exceed the wavenumber of any of the plane waves in the superposition representing the beam. These large transverse momenta potentially impart what Barnett and Berry call ``superkicks'' to small particles located near the vortex, as those particles absorb light from the beam. It has been explicitly shown in a quantum formalism of twisted photon absorption by single atoms that the AM that does not go into internal electronic excitations is passed to the target atom's CM motion~\cite{Babiker2002,Afanasev:14} due to AM conservation. Thus the superkick follows as a result of AM conservation. The existence of the superkick, which adds to the kinetic energy of the final state, must lead to a modification of the threshold energies needed for a variety of physical processes. In the present paper, we consider the kinematics of twisted-photon absorption, on an atom or on another photon, and in particular how the energy threshold requirements vary with the distance of the target from the photon's vortex line. We will discuss the significance of the enhanced threshold requirements, and possible effects upon the reaction cross section. We will see that in an atomic situation the superkick effect is small but potentially laboratory observable. The effect becomes more pronounced for processes in nuclear physics and in some cases becomes striking for astrophysically interesting high-energy photon-photon collisions. Regarding the observability of the superkick, consider an atomic ion struck by visible light twisted photons. Approaching the vortex line, the density of the photon state decreases. However, the local momentum relative to the probability density in the same region can get very large. There is thus a region where densities are very low and the momenta very high. A sufficiently small probe, for example the ion, fitting in this region may interact rarely but upon interaction will receive a lot of transverse momentum, in some circumstances considerably more than the longitudinal momentum of the Fourier components of the twisted photon. Hence the name ``superkick.'' While an atom itself is small relative to the wavelength of visible light twisted photons, the size that matters is the scale of the confinement region in the trap that is holding the ion in place. That means that the relevant atomic size scale is of order tens of nanometers rather than tenths of nanometers. Nonetheless the confinement region appears small enough to see an effect, as we shall argue below.
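As a rough orientation (a back-of-the-envelope sketch, not part of the formalism developed below), one can estimate the transverse kick from the azimuthal phase gradient of the beam as $p_T \sim m_\gamma \hbar/b$ at a distance $b$ from the vortex line and compare it with the longitudinal photon momentum $2\pi\hbar/\lambda$; the numbers below are placeholder values for a visible-light beam and a trap-scale offset.
\begin{verbatim}
import math

# Rough order-of-magnitude sketch (an assumption-based estimate, not a
# calculation from the text): near the vortex line the azimuthal phase
# gradient gives a local transverse wavevector ~ m_gamma/b, so the
# "superkick" is p_T ~ m_gamma*hbar/b, to be compared with the
# longitudinal photon momentum p_z = 2*pi*hbar/lambda.

HBAR = 1.054571817e-34   # J s

def superkick_ratio(m_gamma, b_m, wavelength_m):
    """Ratio of transverse superkick to longitudinal photon momentum."""
    p_t = m_gamma * HBAR / b_m
    p_z = 2.0 * math.pi * HBAR / wavelength_m
    return p_t / p_z

# Visible light (400 nm), m_gamma = 2, ion confined ~20 nm from the vortex:
print(superkick_ratio(2, 20e-9, 400e-9))   # ~6, i.e. p_T exceeds p_z
\end{verbatim}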
The ion is trapped, and the superkick is not sufficient to free it, but it can be enough to push the ion into a higher level of the confining harmonic-oscillator potential, with visible consequences. Interest in the astrophysical situation lies in the fact that, using known physics and unpolarized or simply polarized light, observational estimates of the extragalactic background starlight (EBL) give no more than could be obtained from existing visible galaxies. There had been some expectation that early extragalactic stars existed and that, though they no longer exist today, their light would linger in the universe. The astrophysical data may then be interpreted as indicating either that these early stars never existed or that the universe is more transparent than supposed in the EBL estimates. The estimates of the EBL come from observations of Very High Energy (VHE) photons, or $\gamma$-rays, from distant sources \cite{Aharonian_2006,Albert_2008,Madejski_2016}. VHE $\gamma$-ray propagation is diminished by $\gamma\gamma\to e^+e^-$ interactions with the EBL, with $\gamma$-rays having energies above $\approx 100$ GeV interacting with visible light to produce electron-positron pairs. Comparing the fluxes of VHE photons as well as lower-energy photons from distant sources to the relative fluxes from similar nearby sources allows an observational estimate of the EBL. If there is such excess transparency, the existence of beyond-Standard-Model axion-like particles, or ALPs, has been offered as an explanation \cite{deAngelis_2007,Jaeckel_2010,Anantua:2010zz}. The transparency mechanism is that some photons oscillate into ALPs, which propagate unhindered and then oscillate back into photons. Twisted light can be an alternative explanation. While twisted light is commonly produced on Earth, more interesting for the present considerations is that there are mechanisms that produce twisted light in extreme astrophysical situations. Examples are nonlinear inverse Thomson scattering \cite{Taira_2018}, radiation from energetic electrons as they spiral in strong magnetic fields \cite{Katoh_2017,2019NatSR...9...51M,Maruyama_2019,Maruyama:2019bin}, or radiation from the warped space near a rotating black hole \cite{Tamburini_2011}. The reduced cross section engendered by the sometimes higher energy thresholds of twisted-photon reactions will lead to increased transparency. Other novel kinematic effects in collisions of twisted particles have been recently discussed in the literature \cite{Ivanov:2019vxe,Ivanov_20}. Throughout the paper, we use units where $\hbar=c=1$. \section{Kinematics of Twisted-Photon Absorption} \subsection{Angular-Momentum Conservation and a Superkick} Let us consider an atom, or another sub-wavelength-size target, that absorbs a twisted photon; the target is located at a distance $b$ away from the photon's axis, as shown in Fig.~\ref{fig:Wave}. The formalism for calculating individual quantum transition amplitudes for absorption of twisted photons can be found elsewhere~\cite{Afanasev:2013kaa,2014PhRvA..90a3425S,Afanasev_2016}; here we are concerned with the magnitude of the recoil momentum $p_T$ of the target after photo-absorption. \begin{figure}[h] \centering \includegraphics[height = 75 mm]{Superkick_figs/ccWaveAndRecoil} \hfil \caption{Twisted photon's helical wavefront and an atomic target located at an impact parameter $b$ from the photon's axis $z$ (or phase singularity).
The momenta $p_T$ and $p_z$ denote the transverse and longitudinal recoil, respectively.} \label{fig:Wave} \end{figure} The transverse kick given to a target atom offset a distance $b$ from the vortex line of the photon relates directly to the AM transferred to the atom's overall center of mass. Hence we start by considering the average angular momenta given to the internal electronic state and to the atomic c.m.~in a photoexcitation process. The expectation value $\langle \ell_z \rangle$ of AM transferred by a twisted photon with AM $z$-projection $m_\gamma$ to the internal degrees of freedom of an atom can be expressed in terms of the probabilities $w(m_f)$ for exciting the atom to states with magnetic quantum numbers $m_f$~\cite{curvenote,Afanasev_2020}, \begin{equation} \label{eq:Lz} \langle \ell_z \rangle =\sum_{m_f=-\ell}^{\ell} m_f \, w(m_f). \end{equation} From AM conservation, the initial AM not transferred to the internal excitation goes to the atom's c.m.~motion~\cite{Afanasev:14}, \begin{equation} \label{eq:LzAtom} \langle \ell_z \rangle_{c.m.} =m_{\gamma}-\langle \ell_z \rangle. \end{equation} Plots of AM transfer vs impact parameter $b$ are shown in Figs.~\ref{fig:Lz1} and~\ref{fig:Lz-1}. We considered $S \to P$, $S \to D$, and $S \to F$ atomic transitions for several choices of incoming twisted-photon quantum numbers, as labeled in the figures. With exceptions at some values of $b$, for $S \to P$ transitions the atoms still absorb just one unit of angular momentum into their electronic degrees of freedom, just as for plane waves (with $\Delta m=\pm1$ dipole selection rules), leaving the rest of the AM for the c.m. motion. For $S \to D$ and $S \to F$ transitions, the AM transfers to $\langle \ell_z \rangle_{c.m.}$ deviate from plane-wave selection rules for smaller, sub-wavelength, values of $b$, especially when the total incoming photon AM is greater than a single $\hbar$. The calculation applies to the case in which the magnetic quantum numbers of the excited atom are not resolved. There is also a possibility to measure individual transitions into Zeeman sublevels if these levels are split by an external magnetic field, as was done in Refs.~\cite{2016NatCo...712998S,Afanasev_2018}. \begin{figure}[h] \centering (a) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L1_Lam1} \\[2 ex] (b) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L2_Lam1} \\[2 ex] (c) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L3_Lam1} \caption{Mean angular momentum transfer $\langle \ell_z \rangle_{c.m.}$, Eq.~(\ref{eq:LzAtom}), along the beam direction, passed by twisted light of total angular momentum $m_\gamma$ to the atom's c.m. motion for (a) $S\to P$ transitions, (b) $S\to D$ transitions, and (c) $S\to F$ transitions. For all cases, $\Lambda = +1$, where $\Lambda$ is (paraxially) the spin angular momentum of the twisted photon. The horizontal axis shows the atom's position $b$ with respect to the vortex center, measured in units of the light's wavelength. See text for further comments. } \label{fig:Lz1} \end{figure} \begin{figure}[h] \centering (a) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L1_Lam-1} \\[2 ex] (b) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L2_Lam-1} \\[2 ex] (c) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccCM_dM_L3_Lam-1} \caption{Same as Fig.~\ref{fig:Lz1} but for $\Lambda = -1$. } \label{fig:Lz-1} \end{figure} While the longitudinal momentum of the atom's recoil equals the photon's longitudinal momentum $p_z=\hbar \omega /c$ (in the paraxial approximation, and where $\omega$ is the photon's angular frequency), the transverse recoil momentum $p_T$ can be evaluated through AM conservation as $p_T=\langle \ell_z \rangle_{c.m.}/b$ (at least for values of $b$ well in excess of the radius of the target~\cite{Barnett_2013}). Their ratio is therefore \begin{equation} \frac{p_T}{p_z}= \frac{\langle \ell_z\rangle_{c.m.}}{2\pi \hbar} \frac{\lambda}{b} \,, \end{equation} where $\lambda$ is the twisted photon's wavelength. For the case when the photon's spin ($\Lambda$) and total AM ($m_\gamma$) are aligned, and the atomic transition is $S\to P$, as shown in Fig.~\ref{fig:Lz1}a, the orbital AM is given by $\langle \ell_z\rangle_{c.m.}=\hbar(m_\gamma-\Lambda)$, and we obtain a simple formula for the transverse recoil momentum, \begin{equation} p_T=\hbar\frac{m_\gamma-\Lambda}{b}. \label{eq:pT} \end{equation} It follows that the longitudinal and transverse recoil momenta become equal at the value of impact parameter \begin{equation} b=\lambda \frac{m_\gamma-\Lambda}{2 \pi}. \end{equation} However, if some of the excess AM of the incoming twisted photon is passed to internal excitation of the target, then the c.m. recoil is damped at sub-wavelength distances near the vortex center. This effect is shown for the ratios $p_T/p_z$ in Figs.~\ref{fig:pRat1} and~\ref{fig:pRat-1}. Qualitatively, this effect was discussed in Ref.~\cite{Babiker_2018}, but specific predictions for non-dipole transitions are presented here for the first time, to the best of our knowledge. \begin{figure}[h] \centering (a) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=1_Lam=1.pdf} \\[2 ex] (b) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=2_Lam=1.pdf} \\[2 ex] (c) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=3_Lam=1.pdf} \caption{Ratio of transverse to longitudinal recoil momentum, $p_T/p_z$, of a free atomic target after absorbing a twisted photon in (a) an $S\to P$ transition, (b) an $S\to D$ transition, and (c) an $S\to F$ transition. For all cases, $\Lambda = +1$, i.e., the spin of the twisted photon is aligned with its orbital AM. The horizontal axis shows the atom's position $b$.
} \label{fig:pRat1} \end{figure} \begin{figure}[h] \centering (a) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=1_Lam=-1.pdf} \\[2 ex] (b) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=2_Lam=-1.pdf} \\[2 ex] (c) \\ \includegraphics[width = 0.85 \columnwidth]{Superkick_figs/ccpRat_lf=3_Lam=-1.pdf} \caption{Same as Fig.~\ref{fig:pRat1} but for $\Lambda = -1$.} \label{fig:pRat-1} \end{figure} The above results indicate that for $S\to P$ transitions (see Figs.~\ref{fig:Lz1}a and~\ref{fig:pRat1}a for $m_\gamma=2,3$), the approach of Barnett and Berry \cite{Barnett_2013} to the evaluation of atomic recoil for absorption of twisted light is justified. They allow general $m_\gamma$, but limit consideration to a single final-state level and to dipole transitions only. For more general cases, modifications are required, as shown here. \subsection{Twisted Photon Absorption on Cold Trapped Ions} Let us consider the atomic recoil of a $^{40}$Ca$^+$ ion after absorption of a 397 nm photon in an $S\to P$ transition or a 729 nm photon in an $S\to D$ transition; these transitions define the ``carrier'' frequency of the absorbed photons. In the presence of atomic target recoil, energy conservation is modified as follows, \begin{equation} \hbar \omega=\hbar \omega_0+\frac{p_z^2+p_T^2}{2M}, \label{eq:E_th} \end{equation} where $\hbar \omega_0$ defines the excited energy level. If, for example, a free $^{40}$Ca$^+$ ion absorbs a plane-wave photon of wavelength $\lambda=397$ nm and energy $E_\gamma = 3.12$ eV, corresponding to the $E1$ $S\to P$ transition, it acquires a longitudinal recoil energy of $p_z^2/(2M)=0.13$ neV, where $M$ is the target's mass. Twisted-photon absorption generates additional transverse recoil momentum that depends on the impact parameter $b$ but is independent of the photon's wavelength. It can be read off Figs.~\ref{fig:pRat1}a and~\ref{fig:pRat-1}a. The conversion into transverse recoil energy, $E_T=p_T^2/(2M)$, is plotted in Fig.~\ref{fig:40Ca}, with a comparison line for the longitudinal recoil energy $p_z^2/(2M)$. For the $S \to D$ electric quadrupole $E2$ transition at $\lambda = 729$ nm, the corresponding values of the transverse recoil momentum can be read off Figs.~\ref{fig:pRat1}b and~\ref{fig:pRat-1}b. \begin{figure}[t] \centering \includegraphics[width = 0.98 \columnwidth]{Superkick_figs/ccRecoil_Energy_40Ca} \hfill \caption{Recoil energy as a function of impact parameter for longitudinal recoil and transverse recoil at different values of the total AM of the absorbed photon (with $\Lambda=+1$), for the $\lambda = 397$ nm $E1$ $S\to P$ transition in the $^{40}$Ca$^+$ ion.} \label{fig:40Ca} \end{figure} In actual experiments the ions are held in electromagnetic traps; for example, a segmented Paul trap was used in Refs.~\cite{2016NatCo...712998S, Afanasev_2018} with a trap frequency of about $f=\omega^z_\text{trap} /2\pi=1.5$ MHz along the trap's axis $z$, which corresponds to an energy level spacing of 6.2 neV in a harmonic oscillator. Different frequencies $\omega^{x,y}_\text{trap}$ describe the transverse motion of the ions in the trap. From the above, we can estimate the impact parameter, $b\approx 10$ nm, for which the transverse recoil energy equals the energy level spacing of the trap. The recoil also affects the Lamb-Dicke parameter $\eta$, which is crucial for determining the ion's behavior in the trap. It can be obtained from $\eta=\sqrt{E_\text{rec}/(\hbar \omega_\text{trap})}$, where $E_\text{rec}$ is the recoil energy.
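As an illustrative cross-check of these estimates (a minimal sketch, assuming one excess unit of orbital AM, $\ell=m_\gamma-\Lambda=1$, and the paraxial recoil formulas above), the short Python script below reproduces the photon energy and longitudinal recoil of the 397 nm line, the 6.2 neV axial level spacing for $f=1.5$ MHz, the impact parameter of roughly 9--10 nm at which the transverse recoil matches that spacing, and the value $\eta\approx 0.15$ obtained from the longitudinal recoil energy.

\begin{verbatim}
import math

# Constants in eV-based units
hbar_c = 197.327            # eV nm
hc     = 2*math.pi*hbar_c   # ~1239.8 eV nm
h_eVs  = 4.135668e-15       # eV s
Mc2    = 40*931.494e6       # eV, approximate rest energy of a 40Ca+ ion

lam    = 397.0              # nm, E1 S->P line of 40Ca+
E_gam  = hc/lam             # photon energy, ~3.12 eV
E_z    = E_gam**2/(2*Mc2)   # longitudinal recoil energy, ~0.13 neV
E_trap = h_eVs*1.5e6        # axial level spacing for f = 1.5 MHz, ~6.2 neV

def E_T(b_nm, ell=1):
    """Transverse recoil energy p_T^2/(2M) with p_T = hbar*ell/b."""
    return (hbar_c*ell/b_nm)**2/(2*Mc2)     # eV

b_match = hbar_c/math.sqrt(2*Mc2*E_trap)    # b at which E_T = E_trap, ~9 nm
eta     = math.sqrt(E_z/E_trap)             # Lamb-Dicke parameter, ~0.15

print(f"E_gamma = {E_gam:.2f} eV, E_z = {E_z*1e9:.2f} neV")
print(f"E_trap = {E_trap*1e9:.1f} neV, E_T(10 nm) = {E_T(10.0)*1e9:.1f} neV")
print(f"b_match = {b_match:.1f} nm, eta = {eta:.2f}")
\end{verbatim}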