
Stationary solutions in AdS

Jie Ren (renphysics@adscft.org)

$$\boxed{R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=8\pi G\,T_{\mu\nu}}$$

A central lesson of AdS/CFT is that a strongly coupled quantum field theory can often be mapped to a classical gravitational system in a higher-dimensional asymptotically AdS spacetime. For stationary problems, this opens a remarkably rich landscape of geometries that are natural physically but rarely accessible analytically. The purpose of this page is to explain why stationary solutions in AdS are so rich, why the Einstein–DeTurck method is the standard tool for constructing them, and how one turns a physical question into a precise numerical boundary value problem.

| Aspect | Numerical relativity for astrophysics | Numerical relativity for AdS |
| --- | --- | --- |
| Physical motivation | Simulate compact-object mergers, collapse, and other strong-field events; understand general relativity and gravitational waves | Construct bulk geometries dual to strongly coupled QFT states; study phases, transport, and nonequilibrium phenomena in AdS/CFT |
| Global structure | Globally hyperbolic; one evolves Cauchy data on a spatial slice | Not globally hyperbolic; one must also prescribe data on the timelike AdS boundary, where the dual CFT lives |
| Stationary solutions | Strongly constrained by topology, rigidity, uniqueness, and no-hair theorems | Much richer landscape: black funnels, black droplets, lumpy black holes, black rings, black resonators, and more |
| Foundational period | 1990s | 2009–2013 |
| Examples | Binary black hole mergers, neutron-star mergers | Black funnels, black droplets, holographic superconductors, deformed AdS black holes |
| Common gauge choices | BSSN, generalized harmonic, moving puncture, etc. | Einstein–DeTurck, Chesler–Yaffe, Bondi–Sachs, etc. |

Commercial software is usually designed for standard equations on complicated domains, using the finite element method. Stationary AdS problems are often the opposite: the equations are highly nonstandard, while the domains are simple—or can be made simple by a clever coordinate choice. This is one reason pseudospectral methods are so effective here.
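For example, the workhorse of such pseudospectral codes is the Chebyshev differentiation matrix, which turns differentiation on a simple interval into a dense matrix multiply. Here is a minimal sketch following the standard construction from Trefethen's *Spectral Methods in MATLAB* (the function name `cheb` is illustrative):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x_j = cos(j*pi/N), j = 0..N."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                      # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                    # "negative sum trick" for the diagonal
    return D, x

# Spectral accuracy: differentiate a smooth function on only ~20 points
D, x = cheb(20)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

On roughly twenty grid points this differentiates a smooth function to near machine precision, which is why simple domains combined with spectral grids are such a powerful combination.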

1. Exact solutions of Einstein’s equations


A basic and practical way to classify gravitational solutions is by their symmetries: not only the symmetry group itself, but also the structure of its orbits. When we solve Einstein’s equations, one of the first questions is therefore: how many Killing vectors does the spacetime have? A familiar example is the stationary and axisymmetric case, where there are two Killing vectors.

Because Einstein’s equations are diffeomorphism invariant, the same geometry can be written in many different coordinate systems. This is why classification must ultimately rely on coordinate-independent data. Two especially useful ingredients are

  • the isometry group $G$ (continuous and discrete symmetries), and
  • the structure of its orbits.

Besides symmetry, there are several other important ways to classify spacetimes:

  • Petrov classification (algebraic type of the Weyl tensor);
  • classification by the Ricci tensor;
  • classification by the energy–momentum tensor (matter content, equation of state, conserved currents);
  • horizon topology and global structure, especially in $D\ge 5$.

A particularly useful invariant in practice is the cohomogeneity:

  • If the metric depends nontrivially on $n$ coordinates after quotienting by symmetries, we call it cohomogeneity-$n$.
  • Cohomogeneity-1 problems typically reduce to ODEs and are often analytically solvable or numerically straightforward.
  • Cohomogeneity-2 and higher generically require PDEs, and this is precisely where the methods on this page become essential.

In asymptotically flat 4D vacuum gravity, stationary solutions are heavily constrained—for example by Kerr uniqueness. In AdS, or in the presence of nontrivial boundary data or matter fields, these uniqueness theorems typically fail, and the space of stationary solutions becomes vastly richer.

Let us begin with the maximally symmetric case: Minkowski, de Sitter, and anti–de Sitter spacetime. These have the largest possible number of Killing vectors, but they are only the beginning. Once black holes or nontrivial boundary sources are present, the isometry group becomes smaller and the solution space becomes far more interesting.

In highly symmetric cases one can still write analytic AdS solutions explicitly. Very often these depend only on the radial coordinate. But once one drops to lower symmetry—say, to only two Killing vectors—analytic control usually disappears, except in algebraically special cases such as Petrov type D in four dimensions. By contrast, in the asymptotically flat vacuum stationary and axisymmetric case, Einstein’s equations are integrable. In AdS, one typically has to solve Einstein’s equations numerically.

An exact solution resembles a poem: both are concise, elegant, and capable of holding abundant meaning with very few words. Poetry and exact solutions both exhibit symmetry, though in opposite ways. In poetry, the more symmetry we demand, the harder the poem is to write, because the constraints become tighter. In exact solutions, the more symmetry we assume, the easier the solution is to find, because the problem becomes simpler. Wen Yiduo once said that writing poetry is like dancing in shackles: one must work within constraints and still achieve beauty. Likewise, when searching for exact solutions, imposing symmetry assumptions can greatly simplify the problem. But if the symmetry is too strong, spacetime becomes overly simple, so we must relax symmetry in order to seek more general solutions. Once the symmetry is reduced beyond a certain point, analytic solutions are no longer available, and numerical solutions must take their place.

An analytic solution is like a classical Chinese poem, and a numerical solution is like a modern poem. — Jie Ren

形骸已与流年老,诗句犹争造化功。
My body creaks under the weight of passing years; my poems aim still to rival the perfection of nature.
— Lu You, translated by C. N. Yang

境自远尘皆入咏,物含妙理总堪寻。
Far from the dust of the world, every scene invites verse; each thing holds a subtle truth, waiting to be sought.

In AdS/CFT, classical gravity in the bulk provides access to strongly coupled quantum field theory on the boundary. The simplest bulk geometries, Schwarzschild–AdS and Reissner–Nordström–AdS, already encode important equilibrium states of the dual theory. But once we want more general states, more general boundary metrics, or less symmetry, exact solutions become scarce and numerical solutions become indispensable.

Historically, when people spoke about numerical relativity—especially in the last century—they usually meant time evolution: binary black hole mergers, gravitational collapse, neutron-star mergers, and related hyperbolic problems. That emphasis made perfect sense. In asymptotically flat spacetimes, uniqueness, no-hair, topology, and rigidity theorems leave relatively little room for exotic stationary solutions. Roughly speaking, in 4D asymptotically flat Einstein–Maxwell theory the stationary black holes are Kerr or Kerr–Newman.

AdS is different. The negative cosmological constant, the physical role of the timelike boundary, and the failure of key uniqueness assumptions together lead to a much richer stationary sector.

A useful way to summarize the contrast with traditional numerical relativity is this:

  • In time-evolution problems one performs a $3+1$ split and emphasizes the distinction between evolution equations and constraints.
  • In stationary AdS problems one instead aims to formulate a well-posed boundary value problem for as many independent second-order equations as unknown fields.

The two viewpoints are complementary: time evolution and stationary boundary value methods answer different questions, and neither replaces the other.

An analytic solution: topological Schwarzschild–AdS


A simple but very instructive example is the topological Schwarzschild–AdS family. In asymptotically flat 4D gravity, under mild assumptions the horizon of a black hole must be compact and spherical. In AdS, by contrast, planar and hyperbolic black holes are perfectly allowed.

In $D$ spacetime dimensions one may write the metric as

$$ds^2=-f(r)\,dt^2+\frac{dr^2}{f(r)}+r^2\,d\Sigma_{k,D-2}^2,$$

with

$$f(r)=k-\frac{\mu}{r^{D-3}}+\frac{r^2}{L^2},$$

where $k\in\{+1,0,-1\}$ and $d\Sigma_{k,D-2}^2$ is the unit metric on $S^{D-2}$ ($k=1$), $\mathbb{R}^{D-2}$ ($k=0$), or $\mathbb{H}^{D-2}$ ($k=-1$).

For $D=4$ one often writes

$$f(r)=k-\frac{2M}{r}-\frac{\Lambda}{3}r^2,$$

which corresponds to $\mu=2M$ and $L^2=-3/\Lambda$.
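As a quick check of these formulas, the horizon radius $r_+$ (the largest positive root of $f$) can be located numerically. A minimal sketch for $D=4$ with $\mu=2M$ (the function name is illustrative):

```python
import numpy as np

def horizon_radius(k, M, L):
    """Largest positive root of f(r) = k - 2M/r + r^2/L^2 (D = 4, mu = 2M).
    Multiplying f by r gives the cubic r^3/L^2 + k*r - 2M = 0."""
    roots = np.roots([1.0 / L**2, 0.0, k, -2.0 * M])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return max(r for r in real if r > 0)

# k = +1, L = 1, M = 1: r^3 + r - 2 = (r - 1)(r^2 + r + 2), so r_+ = 1 exactly
r_plus = horizon_radius(+1, 1.0, 1.0)
```

For $k=1$, $M=1$, $L=1$ the cubic factorizes as $(r-1)(r^2+r+2)$, so the root finder should return $r_+=1$; the planar case $k=0$ gives $r_+=(2ML^2)^{1/3}$.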

If the cosmological constant is zero or positive, the standard black-hole interpretation is essentially restricted to the spherical case in four dimensions. But for negative cosmological constant one also has planar and hyperbolic horizons.

Even when Λ=0\Lambda=0, higher dimensions already admit richer horizon topologies such as black rings. In AdS, the cosmological constant and the boundary together enlarge the possibilities even further: black holes, black strings, black rings, black bottles, black globules, and boundary-forced horizon geometries all become natural.


More on hyperbolic black holes

Hyperbolic AdS black holes have several features that appear repeatedly in AdS/CFT and in practical computations:

  • Temperature-dependent causal structure.
    At high temperature, the causal structure resembles Schwarzschild–AdS. At sufficiently low temperature—even without charge—it can resemble Reissner–Nordström with multiple horizons.

  • AdS-Rindler slicing.
    The case $k=-1$ and $\mu=0$ gives a hyperbolic slicing of pure AdS, often called AdS-Rindler.

  • Rényi entropy and entanglement.
    Hyperbolic black holes play a central role in the standard mapping between Rényi entropies of a CFT and thermal physics on $\mathbb{H}^{d-1}$.

  • Convenient domains.
    In useful coordinate choices, hyperbolic horizons can intersect the conformal boundary in such a way that a geometrically complicated domain becomes a simple rectangle in compact coordinates—ideal for tensor-product spectral grids.


In AdS/CFT, a stationary bulk geometry is not merely “a nice solution of Einstein’s equation.” It is often the dual description of a stationary state of a strongly coupled QFT on a prescribed background spacetime.

Two facts make AdS a natural numerical laboratory:

  1. Boundary conditions are physical data.
    In asymptotically AdS spacetimes one specifies a boundary metric, more precisely a conformal class. In AdS/CFT this is the spacetime on which the QFT lives.

  2. The radial direction is not the only variable.
    Once the boundary geometry or sources depend on spatial coordinates, bulk fields depend on both the holographic direction and boundary directions. The problem therefore becomes genuinely multi-dimensional.

A convenient AdS parametrization is

$$\Lambda=-\frac{(D-1)(D-2)}{2L^2},$$

so the vacuum equation becomes

$$R_{\mu\nu}+\frac{D-1}{L^2}\,g_{\mu\nu}=0.$$
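This equation can be verified directly for a known solution with a few lines of computer algebra. The sketch below (using sympy, with the standard textbook definitions of the Christoffel symbols and Ricci tensor) checks that Schwarzschild–AdS$_4$ with $k=+1$ satisfies $R_{\mu\nu}+3g_{\mu\nu}/L^2=0$:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
M, L = sp.symbols('M L', positive=True)
x = [t, r, th, ph]
n = 4
f = 1 - 2*M/r + r**2/L**2                          # k = +1 Schwarzschild-AdS_4
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gam = [[[sp.simplify(sp.Rational(1, 2)*sum(
            ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                        - sp.diff(g[b, c], x[d])) for d in range(n)))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma*Gamma terms
def ricci(b, c):
    e = 0
    for a in range(n):
        e += sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        for d in range(n):
            e += Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
    return sp.simplify(e)

# Check R_{mu nu} + (D-1)/L^2 g_{mu nu} = 0 with D = 4 (upper triangle suffices)
ok = all(sp.simplify(ricci(b, c) + 3*g[b, c]/L**2) == 0
         for b in range(n) for c in range(b, n))
```

The same few lines, with a different metric ansatz, are the starting point for generating the Einstein–DeTurck equations symbolically before discretizing them.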

How can uniqueness be violated?

  1. Add matter fields.
    Holographic superconductors are the standard example: black holes can develop scalar or vector hair. Bulk scalars are also ubiquitous in holographic QCD.

  2. Even in pure gravity with $\Lambda<0$, the stationary sector is much larger.
    The horizon topology need not be spherical; it may be planar, hyperbolic, or disconnected. The rigidity theorem can fail. The horizon need not even be a Killing horizon. In flowing setups one may have heat transport along a spatial direction, and metric cross terms such as $dt\,dx$ can appear even when $x$ is not a Killing coordinate.

  3. The AdS boundary can be conformal to almost any spacetime.
    For example, if we want to study $\mathcal{N}=4$ super-Yang–Mills theory on a black hole background, we can impose boundary conditions so that the conformal boundary metric is Schwarzschild. The bulk solution is then asymptotically locally AdS rather than globally AdS.

Beyond analytic solutions: boundary black holes, funnels, and droplets


An interesting AdS/CFT application is to place the boundary CFT on a black hole spacetime. The boundary horizon then extends into the bulk in qualitatively different ways:

  • black funnels: the bulk horizon is connected to the boundary black hole horizon;
  • black droplets: a horizon hangs from the boundary but is disconnected from a separate bulk or planar horizon.

These are typically cohomogeneity-2 or higher problems, and in practice they are numerical.

From the numerical point of view, the key lesson is simple:

Choose coordinates so that the computational domain is as simple as possible—ideally a rectangle or a product of intervals—and factor out all known singular or vanishing behavior analytically so that the remaining unknown functions are smooth.

When facing a PDE system, the first question is usually: what kind of equation is it? Elliptic, hyperbolic, or parabolic? These classes have very different mathematical and numerical behavior.

  • For elliptic systems, discretization turns the differential operator into a matrix, and solving the PDE becomes an inversion problem with appropriate boundary conditions.
  • For hyperbolic systems, one usually has an evolution problem instead.
  • Parabolic systems have their own diffusion-type behavior.

Einstein’s equation is difficult partly because it does not have a fixed character by itself. This is not only because it is nonlinear, but because it is gauge invariant.

More precisely, Einstein’s equation is a quasi-linear second-order PDE system for $g_{\mu\nu}$. To determine its character one linearizes around a background,

$$g_{\mu\nu}\to g_{\mu\nu}+h_{\mu\nu},$$

and keeps only the highest-derivative terms. This defines the principal symbol.

For the Ricci tensor, the principal symbol acting on $h_{\mu\nu}$ can be written as

$$P_g h_{\mu\nu}=\frac12\,g^{\alpha\beta}\bigl(-\partial_\alpha\partial_\beta h_{\mu\nu}+2\,\partial_\alpha\partial_{(\mu}h_{\nu)\beta}-\partial_\mu\partial_\nu h_{\alpha\beta}\bigr).$$

In Fourier space, replacing $\partial_\alpha\to\xi_\alpha$ gives

$$P_g(\xi)\,h_{\mu\nu}=\frac12\bigl(-\xi^2 h_{\mu\nu}+2\,\xi^\alpha\xi_{(\mu}h_{\nu)\alpha}-\xi_\mu\xi_\nu h\bigr),$$

where $h\equiv g^{\alpha\beta}h_{\alpha\beta}$ and $\xi^2\equiv g^{\alpha\beta}\xi_\alpha\xi_\beta$.

For an elliptic system one would like

$$P_g(\xi)\,h_{\mu\nu}=0 \quad\Rightarrow\quad h_{\mu\nu}=0$$

for every nonzero covector $\xi$. Einstein’s equation fails this test because pure gauge perturbations lie in the kernel. Infinitesimally, a diffeomorphism generated by a vector field $v_\mu$ produces

$$h_{\mu\nu}=\nabla_{(\mu}v_{\nu)},$$

and such perturbations always annihilate the principal symbol.

This is the PDE manifestation of a familiar counting statement. The metric has $D(D+1)/2$ components, but Einstein’s equations do not determine all of them without gauge fixing, because the Bianchi identity removes $D$ independent combinations. In 4D one often summarizes this as $10 = 6 + 4$. So the conclusion is fundamental: ellipticity requires gauge fixing.
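Both statements are easy to verify numerically: a pure gauge mode $h_{\mu\nu}=\xi_{(\mu}v_{\nu)}$ is annihilated by $P_g(\xi)$, while a generic perturbation is not. A sketch on a flat background with random data (the helper name `P` is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.eye(4)                         # flat background metric for the symbol check
ginv = np.linalg.inv(g)
xi = rng.standard_normal(4)           # a nonzero covector xi_mu
v = rng.standard_normal(4)            # gauge parameter v_mu

def P(h):
    """Fourier-space principal symbol of the Ricci tensor acting on h_{mu nu}."""
    xi_up = ginv @ xi
    xi2 = xi_up @ xi
    tr_h = np.einsum('ab,ab->', ginv, h)
    xh = h @ xi_up                    # xi^alpha h_{nu alpha}
    # 2 xi^a xi_(mu h_nu)a = outer(xi, xh) + outer(xh, xi)
    return 0.5 * (-xi2 * h + np.outer(xi, xh) + np.outer(xh, xi)
                  - np.outer(xi, xi) * tr_h)

# Pure gauge mode h_{mu nu} = xi_(mu v_nu) lies in the kernel of the symbol...
h_gauge = 0.5 * (np.outer(xi, v) + np.outer(v, xi))
res = np.max(np.abs(P(h_gauge)))

# ...while a generic symmetric perturbation is not annihilated
h_rand = rng.standard_normal((4, 4))
h_rand = h_rand + h_rand.T
res_generic = np.max(np.abs(P(h_rand)))
```

The first residual vanishes to machine precision for any $\xi$ and $v$, confirming that the symbol has a nontrivial kernel, so the unmodified system cannot be elliptic.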

At the linearized level, harmonic (de Donder) gauge can be written as

$$\partial^\nu h_{\mu\nu}-\frac12\,\partial_\mu h=0.$$

A geometric way to implement this idea nonlinearly is to introduce a covector field built from the connection. In coordinates one may write

$$\xi^\mu = g^{\alpha\beta}\Gamma^{\mu}_{\alpha\beta} = -\nabla^2 x^\mu,$$

which is the harmonic-coordinate condition in a familiar form.

For stationary numerical relativity, the most useful formulation is the Einstein–DeTurck method, in which one introduces a fixed reference metric $\bar g_{\mu\nu}$ and defines the DeTurck vector

$$\xi^\mu = g^{\alpha\beta}\bigl(\Gamma^\mu_{\alpha\beta}(g)-\bar\Gamma^\mu_{\alpha\beta}(\bar g)\bigr).$$

Because the difference of two Levi-Civita connections is a tensor, $\xi^\mu$ is globally well defined.

One then defines the DeTurck-modified Ricci tensor

$$R^H_{\mu\nu}=R_{\mu\nu}-\nabla_{(\mu}\xi_{\nu)},$$

and solves the Einstein–DeTurck equation

$$R^H_{\mu\nu}-\frac{2\Lambda}{D-2}\,g_{\mu\nu}=0.$$

In AdS$_{d+1}$ notation this is

$$R^H_{\mu\nu}+\frac{d}{L^2}\,g_{\mu\nu}=0.$$

Now the counting works out correctly:

  • $g_{\mu\nu}$ has $\frac12 D(D+1)$ unknown components;
  • the Einstein equation has only $\frac12 D(D-1)$ independent equations before gauge fixing;
  • the Einstein–DeTurck equation supplies the missing gauge conditions and gives a determined system with $\frac12 D(D+1)$ independent equations.

Counting independent equations in a reduced ansatz

The slogan $10=6+4$ refers to the most general 4D metric. In a symmetry-reduced ansatz, the numbers change, but the logic is the same: unknown functions = independent field equations + gauge conditions.

For example, if one writes

$$ds^2=-A(r)\,dt^2+B(r)\,dr^2+C(r)\,d\Omega^2,$$

then there are three unknown functions, but not three independent Einstein equations: one combination is still pure gauge because of the reparameterization freedom $r\to f(r)$. One can fix this freedom directly, for example by setting $C(r)=r^2$ or $B(r)=1/A(r)$, or one can let the DeTurck method handle it automatically.


The reward is visible at the level of the principal symbol:

$$P_g^H h_{\mu\nu}=-\frac12\,g^{\alpha\beta}\partial_\alpha\partial_\beta h_{\mu\nu},$$

which is Laplace-like. In a Riemannian setting it is elliptic. For stationary Lorentzian problems it is elliptic after a suitable symmetry reduction, as discussed next.
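This collapse of the symbol can be checked numerically as well: in Fourier space the linearization of the gauge term $\nabla_{(\mu}\xi_{\nu)}$ is $\xi_{(\mu}\hat\xi_{\nu)}$ with $\hat\xi_\nu=\xi^\alpha h_{\alpha\nu}-\frac12\xi_\nu h$, and subtracting it from the Ricci symbol leaves exactly $-\frac12\xi^2 h_{\mu\nu}$. A sketch on a flat background (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
ginv = np.eye(4)                      # flat background (g and its inverse = identity)
xi = rng.standard_normal(4)
h = rng.standard_normal((4, 4))
h = h + h.T                           # generic symmetric perturbation

xi_up = ginv @ xi
xi2 = xi_up @ xi
tr_h = np.einsum('ab,ab->', ginv, h)
xh = h @ xi_up                        # xi^alpha h_{alpha nu}

# Ricci principal symbol P_g(xi) h, as in the formula of the previous section
P = 0.5 * (-xi2 * h + np.outer(xi, xh) + np.outer(xh, xi)
           - np.outer(xi, xi) * tr_h)

# Linearized DeTurck vector in Fourier space: xihat_nu = xi^a h_{a nu} - xi_nu h / 2
xihat = xh - 0.5 * tr_h * xi
gauge = 0.5 * (np.outer(xi, xihat) + np.outer(xihat, xi))   # xi_(mu xihat_nu)

# The gauge-fixed symbol collapses to the Laplace-like form -(1/2) xi^2 h_{mu nu}
res = np.max(np.abs((P - gauge) - (-0.5 * xi2 * h)))
```

Unlike the unmodified symbol, this one has trivial kernel for every nonzero $\xi$, which is precisely the ellipticity statement.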

After solving the Einstein–DeTurck equation, one must still check

$$\xi^\mu=0.$$

If $\xi^\mu=0$, the solution is an Einstein solution. If $\xi^\mu\neq 0$, the solution is a Ricci soliton: a solution of the DeTurck-modified system but not of Einstein’s equation. This is not a technicality; it is a genuine issue and must be monitored explicitly.

3.3 When stationary Lorentzian problems are elliptic


A subtle but central point is that we are usually solving Lorentzian stationary problems. Why, then, can they often be treated as elliptic boundary value problems?

The answer is that stationary symmetry reduction removes the time-evolution direction from the principal symbol. A standard decomposition is

$$ds^2= G_{AB}(x)\bigl(dz^A+A^A_i(x)\,dx^i\bigr)\bigl(dz^B+A^B_j(x)\,dx^j\bigr)+h_{ij}(x)\,dx^i\,dx^j,$$

where $z^A$ are coordinates along Killing directions (for example $t,\phi_1,\dots$) and $x^i$ are coordinates on the orbit space or base manifold.

If the base metric $h_{ij}$ is Riemannian (positive definite), then the principal symbol of the Einstein–DeTurck operator is controlled by $h^{ij}\partial_i\partial_j$, which is elliptic.

This gives a practical criterion:

  • For static solutions, one can often Wick rotate $t\to i\tau$, turning the problem into an explicitly Riemannian boundary value problem.
  • For rotating but stationary solutions with Killing horizons, the reduced base metric can still be Riemannian outside the horizon, and the Einstein–DeTurck problem is elliptic there.
  • For flowing or nonequilibrium steady states, especially with non-Killing horizons or heat transport, the reduced equations can become mixed elliptic–hyperbolic. In such cases the DeTurck method remains a powerful gauge-fixing method, but it is not a magical “ellipticizer.”

In practice, this means one should:

  1. choose an ansatz that is closed under residual reparameterizations of the base coordinates;
  2. monitor the conditioning of the linear system solved at each Newton step;
  3. verify the expected behavior a posteriori through robust convergence tests.
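The Newton step in item 2 has the same structure in every such code: discretize, form the residual and its Jacobian, solve the linear system, update. A toy sketch for the nonlinear boundary value problem $u''+u^2=f$ with Dirichlet conditions and a manufactured solution $u=\sin\pi x$ (second-order finite differences stand in for the spectral derivatives of a real code; all names are illustrative):

```python
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
u_exact = np.sin(np.pi * x)
f = -np.pi**2 * u_exact + u_exact**2       # manufactured source for u'' + u^2 = f

# Standard three-point Laplacian; rows 0 and N will be overwritten by BC rows
D2 = (np.diag(np.ones(N), -1) - 2.0 * np.eye(N + 1) + np.diag(np.ones(N), 1)) / h**2

u = np.zeros(N + 1)                        # initial guess
for it in range(30):
    R = D2 @ u + u**2 - f                  # residual of the discretized PDE
    J = D2 + np.diag(2.0 * u)              # Jacobian of the residual
    R[0], R[N] = u[0], u[N]                # Dirichlet conditions u(0) = u(1) = 0
    J[0, :] = 0.0; J[0, 0] = 1.0
    J[N, :] = 0.0; J[N, N] = 1.0
    du = np.linalg.solve(J, R)
    u -= du
    if np.max(np.abs(du)) < 1e-12:
        break

err = np.max(np.abs(u - u_exact))          # O(h^2) discretization error
```

The boundary rows of the Jacobian implement the boundary conditions directly; a singular or badly conditioned $J$ at this stage is the standard symptom of an under-determined formulation, such as unfixed residual gauge freedom.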

3.4 Einstein–DeTurck with matter fields (and gauge fields)


In holography one often solves Einstein’s equations coupled to matter. The DeTurck method extends directly.

In units where $8\pi G=1$, the trace-reversed Einstein equation is

$$R_{\mu\nu} = \frac{2\Lambda}{D-2}\,g_{\mu\nu} + T_{\mu\nu} - \frac{1}{D-2}\,T\,g_{\mu\nu},$$

and the Einstein–DeTurck version is

$$R_{\mu\nu}-\nabla_{(\mu}\xi_{\nu)} = \frac{2\Lambda}{D-2}\,g_{\mu\nu} + T_{\mu\nu} - \frac{1}{D-2}\,T\,g_{\mu\nu}.$$

For scalar fields, the principal symbol is already well behaved. For gauge fields, one must also fix gauge. A standard Maxwell example is

$$\nabla_\mu F^{\mu\nu}+\nabla^\nu\chi=0, \qquad \chi=\nabla_\mu A^\mu-\nabla_\mu \bar A^\mu,$$

where $\bar A$ is a reference gauge field. This makes the principal symbol Laplace-like and therefore suitable for stationary elliptic problems. In practice, it is often unnecessary to modify Maxwell’s equations; for a purely electric static ansatz, for example, the unmodified equations are already well behaved.

3.5 Ricci solitons and how to rule them out


Solutions to Einstein’s equation are solutions to the Einstein–DeTurck equation, but not vice versa. Thus, one must exclude the possibility of Ricci solitons. There are two standard approaches:

  1. A posteriori numerical check.
    Verify that

    $$|\xi|^2=\xi^\mu\xi_\mu$$

    is driven to zero with increasing resolution and tighter tolerances.

  2. Analytic nonexistence arguments.
    In certain classes of problems—especially static ones with $\Lambda\le 0$ and suitable boundary conditions—one can use maximum-principle arguments to show that nontrivial solitons do not exist.

3.6 Choosing the reference metric and comparing with direct gauge fixing


Choosing the reference metric $\bar g_{\mu\nu}$ is equivalent to choosing the gauge. Different $\bar g_{\mu\nu}$ correspond to different coordinate conditions, but the gauge fixing is encoded geometrically rather than imposed by hand.

Practical guidelines for $\bar g$ are:

  • it should share the same conformal boundary as the desired solution;
  • it should have the same fictitious boundaries (axes, horizons, asymptotic ends) in the chosen coordinate chart;
  • it should have the same topology and broadly the same causal structure;
  • near each boundary it is often advantageous to build $\bar g$ by patching together known local model geometries.

This is one of the main strengths of the Einstein–DeTurck method: it gives enormous coordinate flexibility without forcing a rigid coordinate gauge at the ansatz stage.

A useful comparison is the following. In a cohomogeneity-2 static problem, one might try to impose a direct gauge condition and reduce the number of unknown functions. For example, a gauge-fixed ansatz might look like

$$ds^2=-A(r,x)\,dt^2+B(r,x)\,(dr^2+dx^2)+C(r,x)\,dy^2.$$

A more flexible DeTurck ansatz keeps the most general base metric compatible with the symmetries, for example

$$ds^2=-T\,dt^2+A\,dr^2+B\,dx^2+S\,dy^2+2F\,dr\,dx.$$

The direct gauge-fixed approach can be elegant, but it carries two practical disadvantages:

  1. Residual gauge freedom.
    Even after simplifying the ansatz, there may remain coordinate transformations that preserve its form. Numerically, these often show up as a singular Jacobian in Newton’s method.

  2. Problem-dependent coordinates.
    A gauge that is convenient for one family of geometries may be badly adapted to another, especially when multiple scales or complicated boundaries are present.

The Einstein–DeTurck method asks for more unknown functions and therefore longer equations, but the gain in robustness is usually worth it. In practice, what matters is not whether the printed equations look elegant, but whether the numerical pipeline reliably finds the metric.

Bondi–Sachs gauge and other direct gauge choices are also useful in appropriate contexts, especially for characteristic evolution, but the Einstein–DeTurck method remains the most flexible general tool for stationary boundary value problems.

A brief historical remark: the original “DeTurck trick” arose in the study of Ricci flow, where it renders the flow strictly parabolic. Its adaptation to stationary Einstein problems is one of the key developments behind modern AdS numerics.

The traditional geometric definition of stationarity (the existence of a timelike Killing vector), the definition in terms of elliptic boundary value problems, and the definition in terms of well-posed numerical problems are not identical. But in the applications relevant here, they overlap strongly.

It is useful to distinguish:

  • Defining boundary conditions: the data that defines the physical problem, such as the boundary metric, external sources, temperature, and angular velocities;
  • Derived boundary conditions: the conditions forced by regularity and by the equations of motion once the defining data is chosen.

In practice, asymptotic AdS data is of the first type, while most axis and horizon conditions are of the second type.

Let $\partial\mathcal{M}$ be a boundary with outward unit normal $n^\mu$. The boundary geometry is described by the induced metric $\gamma_{ij}$ and the extrinsic curvature $K_{ij}$.

Common types of boundary conditions include:

  • Dirichlet: fix $\gamma_{ij}$, i.e. the boundary metric;
  • Neumann or mixed: fix $K_{ij}$ or a linear combination $a\,\gamma_{ij}+b\,K_{ij}$;
  • Modified Dirichlet: fix the conformal class $[\gamma]$ together with a scalar such as the trace $K=\gamma^{ij}K_{ij}$.

Which choice is appropriate depends on the variational principle and on the physical interpretation of the problem: what is regarded as source data, and what is regarded as response.

There are also asymptotic boundaries, such as spatial infinity in asymptotically flat gravity or the conformal boundary of AdS. These are not physical boundaries of the manifold, but from the point of view of the numerical problem one still imposes boundary conditions there.

For asymptotically AdS spacetimes, the boundary is at infinite proper distance, but one can compactify conformally. If $z$ is a defining function with $z=0$ at the boundary, then near $z=0$ one writes

$$g_{\mu\nu}\sim \frac{L^2}{z^2}\,\tilde g_{\mu\nu}, \qquad z\to 0,$$

where $\tilde g_{\mu\nu}$ is regular at the boundary. The boundary metric is $\tilde g|_{z=0}$, defined only up to a Weyl rescaling.

Numerically, the practical rule is:

  • factor out the known divergent conformal factor;
  • impose Dirichlet-type conditions on the remaining smooth fields so that the induced boundary metric is the one you want.

In AdS/CFT, this boundary metric—and likewise the boundary values of matter fields—is the physical input specifying the QFT background and its external sources.

The same logic applies to matter fields. For a second-order bulk field equation, such as that of a scalar, there are generically two independent asymptotic behaviors near the AdS boundary. A boundary condition is precisely the choice of one relation between those two pieces of data.

4.3 Fictitious boundaries: axes and horizons


Many “boundaries” in numerical coordinates are not genuine boundaries of the spacetime manifold. They are loci where a cycle shrinks smoothly or where a Killing vector degenerates. The correct conditions there are regularity conditions.

Near an axis where an angular coordinate $\phi$ degenerates, regularity requires that the $(\rho,\phi)$ plane look locally like flat polar coordinates,

$$d\rho^2+\rho^2\,d\phi^2,$$

up to smooth warping in the transverse directions. The period of $\phi$ must be $2\pi$ if one wants a smooth axis rather than a conical singularity. This implies:

  • no singular cross terms,
  • evenness of metric functions in $\rho$, which can be implemented by Neumann-type conditions $\partial_\rho(\cdots)=0$,
  • one algebraic relation among metric coefficients to avoid a conical defect.

In Euclidean signature, a non-extremal Killing horizon is locally the same as the origin of polar coordinates. This is why regularity fixes the periodicity of Euclidean time and therefore the temperature.

For a static black hole, after Wick rotation one has

$$\tau\sim \tau+\beta, \qquad \beta=\frac{2\pi}{\kappa}=\frac{1}{T},$$

where $\kappa$ is the surface gravity and $T$ is the Hawking temperature.
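For the Schwarzschild–AdS family above, this is a one-line computation: $\kappa=f'(r_+)/2$ for a metric of the form $-f\,dt^2+dr^2/f$, so $T=f'(r_+)/4\pi$. A small sketch for $D=4$, $k=+1$ (the function name is illustrative):

```python
import numpy as np

def hawking_temperature(k, M, L):
    """T = f'(r_+)/(4 pi) for f(r) = k - 2M/r + r^2/L^2, r_+ the largest root of f."""
    roots = np.roots([1.0 / L**2, 0.0, k, -2.0 * M])
    r_plus = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    fprime = 2.0 * M / r_plus**2 + 2.0 * r_plus / L**2
    return fprime / (4.0 * np.pi)           # kappa = f'(r_+)/2, T = kappa/(2 pi)

# k = +1, M = 1, L = 1: r_+ = 1 and f'(1) = 4, so T = 1/pi
T = hawking_temperature(+1, 1.0, 1.0)
```

In a DeTurck code the logic runs the other way: one fixes $T$ through the reference metric and the Euclidean period, and the mass of the resulting solution is read off afterwards.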

More generally, for a non-extremal Killing horizon generated by

$$K=\partial_t+\sum_a\Omega_H^{(a)}\,\partial_{\phi_a},$$

one expects near the horizon at $\rho=0$

$$ds^2\approx -\kappa^2\rho^2\,dt^2+d\rho^2+\gamma_{ab}(x)\,dx^a dx^b+\cdots.$$

Regularity therefore imposes:

  • matched vanishing behavior of the relevant metric functions;
  • absence of divergent cross terms in a corotating frame;
  • constancy of $\kappa$ over the horizon for a genuine Killing horizon.

In actual numerical work, the most efficient strategy is often:

  • choose a reference metric $\bar g$ with the desired horizon location and temperature;
  • factor out the expected $(1-y)$ or $\rho^2$ behavior in the ansatz;
  • impose the remaining regularity as boundary conditions on smooth functions.

This is a good illustration of an important principle: the physical statement “the horizon is smooth” must always be translated into explicit equations, and the explicit form depends on the chosen coordinates.

Non-extremal horizons are comparatively standard. Extremal (zero-temperature) horizons are harder.

Schematically, one may think of the difference as

$$g_{tt}\sim -(r-r_+) \qquad \text{(non-extremal)}$$

versus

$$g_{tt}\sim -(r-r_+)^2 \qquad \text{(extremal)}.$$

Extremal horizons are qualitatively different because

  • the surface gravity vanishes, $\kappa=0$;
  • the near-horizon region often develops an $\mathrm{AdS}_2$ throat or another scaling geometry;
  • the horizon can sit at infinite proper distance along suitable slices.

Numerically, extremal horizons often behave more like an additional asymptotic region than like a smooth axis-like boundary. Continuation from finite temperature down to $T\to 0$ may become increasingly stiff. Spectral methods can still work, but one typically needs coordinates adapted to the scaling region and boundary conditions derived from near-horizon expansions rather than naive smoothness alone.

Physically, extremal horizons are especially important in condensed-matter applications, because zero temperature corresponds to the ground state. In the language of holography, we often think of the AdS boundary as the UV and the deep interior as the IR. At zero temperature, the IR geometry can take different forms: it might be an extremal horizon, a “pinching” geometry, a singular geometry, and so on. Extremal horizons are therefore both difficult and diverse.

For a genuine Killing horizon, the temperature is constant. In flowing or nonequilibrium stationary states, the horizon need not be Killing, and an effective local temperature may vary along it. Such problems are typically harder than the standard finite-temperature Killing-horizon case and may cease to be elliptic in the strict sense discussed in section 3.

Boundary conditions and the DeTurck vector

Section titled “Boundary conditions and the DeTurck vector”

Boundary conditions must also be compatible with the requirement \xi=0.

The contracted Bianchi identity implies that on a solution of the Einstein–DeTurck system, the DeTurck vector satisfies a linear second-order equation of the schematic form

\nabla^2\xi_\nu + R_{\nu\mu}\xi^\mu = 0.

Therefore, if the boundary conditions force \xi to vanish on the boundary, or otherwise exclude nontrivial solutions of this auxiliary elliptic problem, then \xi must vanish everywhere.

In many practical coordinate systems, where a fictitious boundary sits at \rho=0, regularity implies boundary conditions of the form

\xi^\rho\big|_{\rho=0}=0, \qquad \partial_\rho \xi^{\tilde i}\big|_{\rho=0}=0,

where \tilde i denotes directions tangent to the boundary. In static AdS problems, this can sometimes be turned into a rigorous nonexistence theorem for solitons. More generally, it provides a stringent numerical diagnostic.


What happens if boundary conditions are missing or wrong?

This linear-algebra viewpoint is extremely useful in practice. A well-posed discretized boundary value problem should lead to an invertible linear system. A schematic equation

\mathcal{D}u=f

does not define a good boundary value problem by itself. The operator becomes invertible only after the correct boundary conditions are imposed.

A simple example is

\frac{d}{dx}u(x)=f(x).

Without a boundary condition, the solution is not unique, because u(x)+C is another solution. After discretization this appears as a singular matrix. If the boundary conditions are inconsistent, there may be no solution at all.
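
The linear-algebra statement can be seen directly in a few lines of numpy (a minimal finite-difference sketch; the grid size and matrix construction here are ours, chosen only to illustrate the point):

```python
import numpy as np

# d/dx on a uniform grid: centered differences inside,
# one-sided differences at the two endpoints.
n, h = 8, 1.0 / 7
D = np.zeros((n, n))
D[0, 0], D[0, 1] = -1 / h, 1 / h
D[-1, -2], D[-1, -1] = -1 / h, 1 / h
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -1 / (2 * h), 1 / (2 * h)

# D annihilates the constant vector, so D u = f alone is singular:
# u and u + C have the same image.
rank_no_bc = np.linalg.matrix_rank(D)    # n - 1: one-dimensional nullspace

# Imposing u(x_0) = 0 by replacing the first row restores invertibility.
A = D.copy()
A[0, :] = 0.0
A[0, 0] = 1.0
rank_with_bc = np.linalg.matrix_rank(A)  # n: the system is now solvable
```

The same rank deficiency appears for spectral differentiation matrices, which is why boundary conditions must always be imposed explicitly.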


5. Numerical implementation: pseudospectral method and Newton–Raphson

Section titled “5. Numerical implementation: pseudospectral method and Newton–Raphson”

For stationary solutions, the most useful numerical toolbox is the combination of

  1. the Einstein–DeTurck method for gauge fixing and for formulating a determined boundary value problem;
  2. a pseudospectral method for discretization;
  3. Newton’s method for solving the resulting nonlinear algebraic system.

Each ingredient has alternatives:

  • Einstein–DeTurck versus direct gauge fixing in the ansatz;
  • spectral methods versus finite differences versus finite elements;
  • Newton versus flow or relaxation methods such as Ricci–DeTurck flow.

A compact way to remember the distinction between PDE types is through the prototype equations

(\nabla^2+m^2)u=f \qquad\text{(elliptic)},

(\partial_t^2-\nabla^2)u=f \qquad\text{(hyperbolic)},

(\partial_t-\nabla^2)u=f \qquad\text{(parabolic)}.

The whole point of the stationary Einstein–DeTurck formulation is to design the problem so that, after symmetry reduction and gauge fixing, it behaves like an elliptic boundary value problem.

5.2 Spectral collocation and why it works so well here

Section titled “5.2 Spectral collocation and why it works so well here”

The philosophy of spectral methods is simple:

Expand the unknown functions in a complete basis on the domain, and try to capture a smooth solution with as few basis elements as possible.

This is the Hilbert-space point of view: a function is treated like a vector in an infinite-dimensional space, and the choice of basis is the analogue of choosing coordinates in ordinary Euclidean space.


The Hilbert-space viewpoint

In finite-dimensional linear algebra, a vector is expanded in a basis. In a Hilbert space, a function is expanded in a basis of functions. Orthogonality, normalization, and completeness play the same role in both settings.

This is why Fourier series are so natural for periodic functions and why orthogonal polynomials are so useful on finite intervals. Solving a differential equation numerically then becomes the problem of solving for the coordinates of the function in the chosen basis.


For smooth stationary geometries on simple domains, spectral collocation is often the most efficient choice:

  • Finite differences use local stencils. They are flexible and easy to implement, but high accuracy usually requires many grid points.
  • Finite elements use piecewise polynomial approximations on a mesh. They are excellent for complicated geometries, but for Einstein–DeTurck they often require weak-form infrastructure that is cumbersome to set up.
  • Spectral methods use global basis functions. For smooth solutions they often achieve exponential convergence.

On a finite interval, the most useful basis is usually the Chebyshev basis. The standard Chebyshev–Gauss–Lobatto points on [x_-,x_+] are

x_j = \frac{x_+ + x_-}{2} + \frac{x_+ - x_-}{2}\cos\!\left(\frac{j\pi}{N}\right), \qquad j=0,1,\dots,N.

These points cluster near the endpoints, avoid the Runge phenomenon, and resolve smooth boundary layers well.
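
A minimal numpy sketch (the helper name cgl_points is ours) computes the grid on a general interval and exhibits the endpoint clustering:

```python
import numpy as np

def cgl_points(N, xm=-1.0, xp=1.0):
    """Chebyshev-Gauss-Lobatto points on [xm, xp], in ascending order
    (the textbook formula with +cos gives the same set, descending)."""
    j = np.arange(N + 1)
    return (xp + xm) / 2 - (xp - xm) / 2 * np.cos(np.pi * j / N)

x = cgl_points(16)
edge_gap = x[1] - x[0]   # ~ O(1/N^2): points pile up at the ends
mid_gap = x[9] - x[8]    # ~ O(1/N) in the middle of the interval
```

The quadratic clustering near the endpoints is exactly what makes these grids good at resolving boundary behavior.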

At collocation points, derivatives are approximated by differentiation matrices:

f^{(n)}(x_i)\approx \sum_j D^{(n)}_{ij} f(x_j).

So after discretization, a differential operator really does become a matrix.

Consider a 1D example. The Chebyshev grid (8 points) is

x=(-1.,\, -0.9009,\, -0.6234,\, -0.2225,\, 0.2225,\, 0.6234,\, 0.9009,\, 1.)^T

The corresponding first-derivative differentiation matrix is

D\equiv\frac{d}{dx}= \left( \begin{array}{cccccccc} -16.5 & 20.19 & -5.311 & 2.572 & -1.635 & 1.231 & -1.052 & 0.5 \\ -5.048 & 2.392 & 3.603 & -1.473 & 0.8900 & -0.6559 & 0.5549 & -0.2630 \\ 1.327 & -3.603 & 0.5100 & 2.493 & -1.182 & 0.8019 & -0.6559 & 0.3079 \\ -0.6431 & 1.473 & -2.493 & 0.1170 & 2.246 & -1.182 & 0.8900 & -0.4089 \\ 0.4089 & -0.8900 & 1.182 & -2.246 & -0.1170 & 2.493 & -1.473 & 0.6431 \\ -0.3079 & 0.6559 & -0.8019 & 1.182 & -2.493 & -0.5100 & 3.603 & -1.327 \\ 0.2630 & -0.5549 & 0.6559 & -0.8900 & 1.473 & -3.603 & -2.392 & 5.048 \\ -0.5 & 1.052 & -1.231 & 1.635 & -2.572 & 5.311 & -20.19 & 16.5 \\ \end{array} \right)
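
This matrix can be reproduced with the standard barycentric construction (a numpy port of Trefethen's cheb.m recipe, adapted to the ascending grid shown above):

```python
import numpy as np

def cheb(N):
    """Differentiation matrix on the ascending Gauss-Lobatto grid
    x_j = -cos(j*pi/N), built from the barycentric weights."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :] + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / dX
    D -= np.diag(D.sum(axis=1))   # diagonal fixed so that D @ const = 0
    return D, x

D, x = cheb(7)   # the 8-point grid and matrix displayed above
```

Applying D to samples of a polynomial of degree at most N reproduces its derivative to machine precision, which is a useful first sanity check of any implementation.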

Once derivatives become matrix multiplications, a linear differential equation schematically becomes a linear algebra problem:

Lu=f \quad\Rightarrow\quad u=L^{-1}f.

For multi-dimensional problems on rectangular domains, tensor-product grids are especially convenient. If f(x_i,y_j) is represented as a vector, then

D_x = I_{N_y+1}\otimes D_x^{(1)}, \qquad D_y = D_y^{(1)}\otimes I_{N_x+1}.

This is one of the main reasons to choose coordinates that make the computational domain a rectangle.
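
The Kronecker-product construction is a one-liner in most languages; here is a numpy sketch (variable names ours, with y chosen as the slow index so the formulas above apply verbatim):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

Nx, Ny = 10, 12
D1x, x = cheb(Nx)
D1y, y = cheb(Ny)

# Flatten with y as the slow (outer) index, so that
# D_x = I (kron) D1x and D_y = D1y (kron) I act on the right slots.
Dx = np.kron(np.eye(Ny + 1), D1x)
Dy = np.kron(D1y, np.eye(Nx + 1))

Y, X = np.meshgrid(y, x, indexing="ij")   # X[j, i] = x_i, Y[j, i] = y_j
u = (X**2 * Y).ravel()                    # sample f(x, y) = x^2 y
fx = Dx @ u                               # should equal 2 x y
fy = Dy @ u                               # should equal x^2
```

Which factor carries the identity depends entirely on the flattening convention, so it is worth testing on a simple monomial, as done here.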

Boundary conditions enter the linear algebra problem by replacing the corresponding rows of the discretized operator and right-hand side. A practical rule in collocation methods is:

The PDEs are satisfied at interior points, and the boundary conditions are satisfied at boundary points.
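
As a concrete instance of this rule, here is a complete Dirichlet solve of u'' = f on [-1, 1] by row replacement (a manufactured example of ours, with exact solution sin(pi x) so the error can be measured):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
A = D @ D                                # discretized d^2/dx^2
b = -np.pi**2 * np.sin(np.pi * x)        # so the exact solution is sin(pi x)

# Interior rows carry the PDE; boundary rows carry u(+-1) = 0.
for k in (0, N):
    A[k, :] = 0.0
    A[k, k] = 1.0
    b[k] = 0.0

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # spectrally small
```

With only 25 points the error is already near machine precision, which is the exponential convergence advertised above.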

Eigenvalue problems versus ordinary boundary value problems: In linear algebra, a system Au=b with b\neq 0 is an ordinary solve. If b=0, nontrivial solutions exist only when \det A=0, which is an eigenvalue condition.

The same distinction appears for differential equations. Solving the Einstein–DeTurck equation is an ordinary (non-eigenvalue) boundary value problem; computing the quasinormal modes of a black hole is an eigenvalue problem.
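
The eigenvalue case can be illustrated with the same machinery (a toy problem of our choosing): imposing u(±1)=0 by deleting boundary rows and columns turns u'' = λu into a matrix eigenvalue problem, whose least-negative eigenvalue should approach -(π/2)²:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
# Impose u(+-1) = 0 by deleting the boundary rows and columns.
D2 = (D @ D)[1:-1, 1:-1]

# u'' = lambda u on [-1, 1] with Dirichlet conditions has
# exact eigenvalues lambda_k = -(k pi / 2)^2.
evals = np.linalg.eigvals(D2)
lam1 = evals[np.argmax(evals.real)]   # least-negative, best-resolved mode
```

Only the low-lying modes of such a discretization are accurate; the most negative eigenvalues are discretization artifacts, which is a standard caveat when hunting quasinormal modes this way.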

After discretization, the unknown fields become a finite-dimensional vector UU, and the PDE system becomes a nonlinear algebraic system

F(U)=0.

Newton’s method linearizes this at each step:

J(U^{(k)})\,\delta U^{(k)} = -F(U^{(k)}), \qquad U^{(k+1)}=U^{(k)}+\delta U^{(k)},

where J=\partial F/\partial U is the Jacobian.

A concise pseudocode summary is:

Given an initial guess U_0
for k = 0,1,2,...
compute residual F(U_k)
if ||F|| and ||ξ|| are below tolerance: stop
construct or apply the Jacobian J(U_k)
solve J δU = -F
optionally damp the step: U_{k+1} = U_k + λ δU, 0 < λ ≤ 1
end

In one variable this is just the familiar formula

x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}.

For large PDE systems, one usually solves the linear system JδU=FJ\delta U=-F directly or iteratively rather than forming an inverse matrix.

In practice, the most time-consuming part of stationary Einstein numerics is often the linear solve inside each Newton step.
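
The pseudocode above, specialized to a toy 1D problem (the equation and exact solution below are ours, manufactured so that convergence can be checked), looks like this:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
D2 = D @ D

# Manufactured problem: u'' = u^3 + f with u(+-1) = 0, where f is
# chosen so that u(x) = sin(pi x) is the exact solution.
f = -np.pi**2 * np.sin(np.pi * x) - np.sin(np.pi * x) ** 3

def residual(u):
    F = D2 @ u - u**3 - f
    F[0], F[-1] = u[0], u[-1]       # boundary conditions as equations
    return F

u = np.zeros(N + 1)                 # initial guess
for k in range(30):
    F = residual(u)
    if np.max(np.abs(F)) < 1e-8:    # residual below tolerance: stop
        break
    J = D2 - np.diag(3 * u**2)      # Jacobian of the discrete system
    J[0, :] = 0.0; J[0, 0] = 1.0    # boundary rows of the Jacobian
    J[-1, :] = 0.0; J[-1, -1] = 1.0
    u = u + np.linalg.solve(J, -F)  # undamped Newton step

err = np.max(np.abs(u - np.sin(np.pi * x)))
```

For the Einstein–DeTurck system the structure is identical; only the residual and Jacobian are much larger, which is why the linear solve dominates the cost.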

5.4 Initial guesses, continuation, and multiple branches

Section titled “5.4 Initial guesses, continuation, and multiple branches”

Every nonlinear solve needs an initial guess. This is not a peculiarity of Einstein–DeTurck; it is a universal feature of nonlinear numerical analysis.

Two standard strategies are:

  1. Reference-metric seed.
    Start from the reference metric gˉ\bar g (and reference matter fields, if present). This works when the target solution is close to the reference geometry.

  2. Parameter continuation.
    Start in a regime where the solution is easy to find—high temperature, weak deformation, small amplitude—and then vary the parameter gradually, using each converged solution as the initial guess for the next one.

Continuation is often essential near critical points or at low temperature, where Newton’s basin of attraction becomes small.

It is also important to remember that different initial guesses can converge to different solutions if the problem admits multiple branches. This is not a numerical nuisance; it is often the physics.
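
Strategy 2 can be sketched on a toy Bratu-type problem (our own example, not from the text): each converged solution seeds the Newton solve at the next parameter value, which keeps every solve inside the basin of attraction.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = D @ D

def solve_bratu(lam, u0):
    """Newton solve of u'' + lam * exp(u) = 0, u(+-1) = 0, seeded by u0."""
    u = u0.copy()
    for _ in range(40):
        F = D2 @ u + lam * np.exp(u)
        F[0], F[-1] = u[0], u[-1]
        if np.max(np.abs(F)) < 1e-8:
            return u
        J = D2 + lam * np.diag(np.exp(u))
        J[0, :] = 0.0; J[0, 0] = 1.0
        J[-1, :] = 0.0; J[-1, -1] = 1.0
        u = u + np.linalg.solve(J, -F)
    raise RuntimeError("Newton did not converge")

# Continuation: each converged solution is the guess for the next lam.
u = np.zeros(N + 1)
amps = []
for lam in np.linspace(0.05, 0.8, 16):
    u = solve_bratu(lam, u)
    amps.append(u.max())
```

This problem also has a fold at larger lam, beyond which the lower branch ceases to exist; exactly the kind of multi-branch structure the text warns about.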

A credible stationary solution should pass several independent checks:

  • DeTurck norm: verify that \|\xi\| is small and decreases with resolution;
  • Residual norm: verify that the Einstein–DeTurck residual converges to zero;
  • Convergence with resolution: for smooth problems, spectral convergence should be rapid and often exponential;
  • Constraint or identity checks: Smarr relations, first laws, boundary Ward identities, or any other independent consistency relation;
  • Regularity: inspect curvature invariants and confirm the absence of unphysical singularities.

Another practical check concerns working precision. As the grid is refined, the discretized matrices become larger and often more ill conditioned. It is common to see the error decrease rapidly and then plateau because machine precision is no longer sufficient. In that regime, increasing the working precision is necessary.
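
The plateau has a concrete, measurable cause: the condition number of the discretized operator grows rapidly with resolution, roughly like N^4 for a second-order Chebyshev operator. A numpy sketch:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and ascending Gauss-Lobatto grid."""
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

conds = {}
for N in (8, 16, 32, 64):
    D, x = cheb(N)
    A = D @ D
    A[0, :] = 0.0; A[0, 0] = 1.0     # Dirichlet rows, as in a real solve
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    conds[N] = np.linalg.cond(A)
# conds[N] grows roughly like N^4: each doubling of the grid costs
# about 1.2 decimal digits of working precision.
```

This is why high-resolution spectral runs of stiff problems are often done in extended precision rather than double precision.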

Spectral methods are extraordinarily powerful, but they are not magic. Their natural regime is a domain on which the unknown functions are smooth—ideally analytic after the known leading asymptotics have been factored out.

They can still tolerate mild non-analytic structures at the boundary, such as logarithmic terms or removable fractional powers, provided one rewrites the unknown functions appropriately. But if the genuine solution contains competing non-analytic branches that cannot be regularized, spectral methods become much harder to use.

This is why asymptotic analysis matters so much: one often factors out the singular, vanishing, or non-analytic behavior analytically and then asks the spectral method to solve only for the smooth remainder.

A useful general workflow for any problem in mathematical physics is:

  1. pose the physical problem;
  2. translate it into a precise mathematical problem;
  3. solve that mathematical problem;
  4. interpret the physical meaning of the solution.

For stationary AdS solutions, step 2 is often harder than it looks. Before any computation can begin, you must decide on the ansatz, the coordinate system, the reference metric, and the full set of boundary conditions.

6.1 Black funnels, droplets, tunnels, hammocks, binaries, and islands

Section titled “6.1 Black funnels, droplets, tunnels, hammocks, binaries, and islands”

If we place \mathcal{N}=4 super-Yang–Mills theory on a Schwarzschild black hole background, then there are two familiar phases:

  • the deconfined phase, dual to a black funnel [Santos & Way, arXiv:1208.6291];
  • the confined phase, dual to a black droplet [Santos & Way, arXiv:1405.2078].

A black funnel has a single connected bulk horizon reaching the boundary black hole horizon. A black droplet has a horizon suspended from the boundary and disconnected from a separate bulk or planar horizon. The droplet geometry is therefore significantly more intricate numerically.

If instead the boundary geometry has both a black hole horizon and a cosmological horizon—as in a Schwarzschild–de Sitter boundary metric—one finds analogous phases known as the black hammock and the black tunnel [Biggs & Santos, arXiv:2207.14306].

All of these examples share the same core lesson: once the boundary data breaks enough symmetry, the problem becomes a genuinely multidimensional elliptic boundary value problem.

6.2 A standard cohomogeneity-2 ansatz for a funnel

Section titled “6.2 A standard cohomogeneity-2 ansatz for a funnel”

For funnel-like geometries, a common strategy is to introduce compact coordinates (x,y) so that the computational domain is a rectangle. Different edges of the rectangle then represent different asymptotic or fictitious boundaries.

A standard ansatz is

ds^2=\frac{L^2}{x\,y(1+x)^2}\bigg\{ -x(1-y)(1+x\,y)\,T\,dt^2 +\frac{x(1+x)^2A\,dy^2}{4y(1-y)(1+x\,y)} +\frac{r_0^2B\,[dx+x(1-x)^2F\,dy]^2}{x(1-x)^4} +\frac{r_0^2S}{(1-x)^2}\,d\Omega_2^2 \bigg\},

where T, A, B, S, and F are functions of (x,y).

The prefactors have been chosen so that the known singular or vanishing behavior at each boundary has already been factored out. The unknown functions are therefore expected to be smooth on the closed rectangle.

A simple reference metric is obtained by setting

T=A=B=S=1, \qquad F=0.

Now let us inspect the boundaries.

The x=1 edge: asymptotically planar black hole region

Section titled “The x=1 edge: asymptotically planar black hole region”

With the reference values inserted, the metric near x=1 becomes

ds^2=\frac{L^2}{y}\left[ -\frac{1}{4}(1-y^2)\,dt^2 +\frac{dy^2}{4y(1-y^2)} +\frac{r_0^2\,dx^2}{4(1-x)^4} +\frac{r_0^2}{4(1-x)^2}d\Omega_2^2 \right].

After the coordinate transformation

t=2\tau,\qquad x=1-\frac{r_0}{2R},\qquad y=z^2,

this becomes

ds^2=\frac{L^2}{z^2}\left[ -(1-z^4)\,d\tau^2+\frac{dz^2}{1-z^4}+dR^2+R^2d\Omega_2^2 \right],

which is the standard planar AdS black hole.

This is why one often imposes at x=1

T=A=B=S=1,\qquad F=0.

Near y=0 one has

ds^2=\frac{L^2}{y}\left[ \frac{dy^2}{4y} +\frac{1}{x(1+x)^2}\,ds_\partial^2 \right],

with

ds_\partial^2= -x\,dt^2+\frac{r_0^2\,dx^2}{x(1-x)^4} +\frac{r_0^2}{(1-x)^2}d\Omega_2^2.

After the transformation

x=1-\frac{r_0}{r},

this becomes

ds_\partial^2= -\left(1-\frac{r_0}{r}\right)dt^2 +\frac{dr^2}{1-\frac{r_0}{r}} +r^2 d\Omega_2^2,

namely the Schwarzschild boundary metric.
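
The coordinate substitution is easy to check numerically; the following sketch (arbitrary sample values, variable names ours) verifies the three metric coefficients componentwise:

```python
import numpy as np

# Check the substitution x = 1 - r0/r in ds_boundary^2 componentwise
# at arbitrary sample points (the values of r0 and r are arbitrary).
r0 = 1.3
r = np.linspace(2.0, 10.0, 50)
x = 1 - r0 / r
dxdr = r0 / r**2                              # dx = (r0/r^2) dr

gtt = -x                                      # coefficient of dt^2
grr = r0**2 * dxdr**2 / (x * (1 - x) ** 4)    # dx^2 term written in dr^2
gSS = r0**2 / (1 - x) ** 2                    # coefficient of dOmega_2^2
```

Each coefficient lands exactly on its Schwarzschild counterpart, confirming the algebra above.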

This is the AdS/CFT input: the dual field theory lives on a Schwarzschild black hole background.

Expanding the equations near y=1 gives the regularity condition

T=A

together with additional conditions on \partial_y A, \partial_y B, \partial_y F, and \partial_y S at the horizon. This is a version of the general horizon regularity discussion in section 4.

The x=0 edge: asymptotically hyperbolic region

Section titled “The x=0 edge: asymptotically hyperbolic region”

Near x=0, the reference metric behaves as

ds^2=\frac{L^2}{y}\left[ -(1-y)\,dt^2+\frac{dy^2}{4y(1-y)} +\frac{dx^2}{4x^2} +\frac{1}{4x}d\Omega_2^2 \right].

After

y=z^2,\qquad x=e^{-2\eta},

this becomes

ds^2=\frac{L^2}{z^2}\left[ -(1-z^2)\,dt^2+\frac{dz^2}{1-z^2} +d\eta^2+\frac{e^{2\eta}}{4}d\Omega_2^2 \right],

which is asymptotic to the zero-energy hyperbolic black hole,

ds^2_{\mathbb{H}}=\frac{L^2}{z^2}\left[ -(1-z^2)\,dt^2+\frac{dz^2}{1-z^2} +d\eta^2+\sinh^2\eta\,d\Omega_2^2 \right].

The moral is that the ansatz has been engineered so that every boundary corresponds either to a known asymptotic geometry or to a regular fictitious boundary, and the unknown functions are smooth everywhere on the computational domain.

Once one is comfortable with the Einstein–DeTurck + spectral + Newton pipeline, the same backbone applies to many other problems:

  • rotating AdS black holes with deformed horizons and lumpy branches;
  • horizonless solutions such as AdS geons and boson stars;
  • black resonators and solutions with only one Killing field;
  • higher-dimensional black rings, black strings, and related topologies;
  • steady-state transport geometries, where one must pay special attention to mixed-type behavior;
  • black binaries in AdS, where one looks for static or stationary multi-black-hole configurations rather than time-dependent inspirals;
  • entanglement islands and brane-world constructions, where mixed boundary conditions on a brane backreact on the bulk geometry;
  • holographic quantum matter solutions, including superconductors, charge density waves, pair density waves, and holographic QCD backgrounds.

The details of the ansatz and boundary conditions change from case to case, but the strategy remains the same.

Mathematica’s built-in NDSolve can handle simple Dirichlet problems reasonably well. But once one moves to the Einstein–DeTurck equations with nonlinear Neumann or mixed boundary conditions, it becomes very difficult to use directly.

This reflects a broader distinction. Generic finite element workflows and commercial software are optimized for a different regime: standard equations on complicated domains. Stationary AdS gravity problems are often the opposite—highly nonstandard equations on domains that are simple, or that can be made simple by a good coordinate choice. This is why a dedicated pseudospectral package is natural here.

The motivation behind GRSpectral is therefore straightforward: input the equations and boundary conditions directly in strong form, without having to derive a weak form by hand, and solve the resulting boundary value problem spectrally. The package is available on GitHub. In spirit, it is a spectral analogue of NDSolve, but tailored to the structure of stationary gravitational boundary value problems.

The basic workflow has two stages:

  1. a symbolic preprocessing stage, where one specifies the unknown functions, equations, and boundary conditions;
  2. a numerical stage, where one specifies the coordinate maps, grid sizes, parameters, initial guesses, and desired outputs, and then solves the system.

In the package, this is implemented by a preprocessing step such as EqProcess and a spectral solve routine often referred to as spNDSolve.

The essential inputs are:

  1. the unknown functions;
  2. the equations;
  3. the boundary conditions, written explicitly as equations;
  4. an initial guess;
  5. the coordinate domains and mappings;
  6. the grid sizes and, if needed, periodic directions;
  7. any external parameters to be scanned or continued.

A typical workflow is:

  • map each finite coordinate interval to the Chebyshev interval [-1,1];
  • choose the collocation grid;
  • assemble the discretized equations and boundary conditions;
  • use Newton’s method to solve the nonlinear system;
  • verify convergence and physical regularity.

Because the equations and boundary conditions are treated on the same footing, one can handle nonlinear boundary conditions just as naturally as nonlinear field equations.

8. References for numerical relativity in AdS

Section titled “8. References for numerical relativity in AdS”

[1] M. Headrick, S. Kitchen and T. Wiseman, A new approach to static numerical relativity, and its application to Kaluza-Klein black holes, Class. Quant. Grav. 27 (2010) 035002 [arXiv:0905.1822].

[2] P. Figueras, J. Lucietti and T. Wiseman, Ricci solitons, Ricci flow, and strongly coupled CFT in the Schwarzschild Unruh or Boulware vacua, Class. Quant. Grav. 28 (2011) 215018 [arXiv:1104.4489].

[3] A. Adam, S. Kitchen and T. Wiseman, A numerical approach to finding general stationary vacuum black holes, Class. Quant. Grav. 29 (2012) 165002 [arXiv:1105.6347].

[4] O. J. C. Dias, J. E. Santos and B. Way, Numerical Methods for Finding Stationary Gravitational Solutions, Class. Quant. Grav. 33 (2016) 133001 [arXiv:1510.02804] (review).

A physics-oriented review of holographic thermal field theory and two standard textbooks on the pseudospectral method are also useful background.

[5] P. M. Chesler and L. G. Yaffe, Numerical solution of gravitational dynamics in asymptotically anti-de Sitter spacetimes, JHEP 07 (2014) 086 [arXiv:1309.1439].

[6] K. Balasubramanian and C. P. Herzog, Losing Forward Momentum Holographically, Class. Quant. Grav. 31 (2014) 125010 [arXiv:1312.4953].

[7] Z. Ning, Q. Chen, Y. Tian, X. Wu and H. Zhang, Spontaneous Deformation of an AdS Spherical Black Hole, Phys. Rev. D 109 (2024) 064082 [arXiv:2307.14156].