
Complexity Proposals: Volume, Action, and Beyond

The previous page explained why the thermofield-double state is dual to the eternal two-sided AdS black hole. A striking feature of that geometry is that the Einstein-Rosen bridge keeps growing even after ordinary thermodynamic quantities have reached equilibrium. The entropy of each boundary theory is constant, but the interior volume of the two-sided black hole continues to increase for a very long time.

This observation led to a new question:

What boundary quantity measures the growth of the black-hole interior after entropy has saturated?

The proposed answer is quantum computational complexity.

Roughly, the complexity of a quantum state is the minimal number of elementary gates required to prepare it from a simple reference state. Complexity is not as universal as entropy: it depends on the choice of reference state, gate set, tolerance, and allowed operations. Nevertheless, for chaotic large-$N$ systems, its qualitative behavior seems to match a key gravitational fact:

$$\boxed{\text{black-hole interiors keep growing long after ordinary entropy has equilibrated.}}$$

This page introduces the two standard holographic complexity proposals:

$$\text{complexity = volume}, \qquad \text{complexity = action},$$

and then explains how complexity enters the black-hole information problem through shock waves, the switchback effect, decoding hardness, and Python’s-lunch geometries.

The most important warning is this: holographic complexity is more conjectural than RT/HRT or QES. It is not a theorem with the same status as the area law for entropy in controlled semiclassical regimes. It is a research program, motivated by a remarkable collection of structural matches between quantum circuits and black-hole interiors.

Consider the eternal AdS black hole dual to the thermofield-double state

$$|\mathrm{TFD}(\beta)\rangle = \frac{1}{\sqrt{Z(\beta)}} \sum_n e^{-\beta E_n/2}\,|E_n\rangle_L |E_n\rangle_R.$$

The reduced density matrix on the right is thermal:

$$\rho_R=\frac{e^{-\beta H_R}}{Z(\beta)}.$$

The right entropy is therefore the thermal entropy,

$$S(R)=S_{\rm th}\approx S_{\rm BH},$$

for a large holographic black hole. This entropy is time-independent under the two-sided evolution generated by

$$H_{\rm total}=H_L+H_R.$$

Yet the two-sided geometry changes. A spatial slice passing through the Einstein-Rosen bridge grows longer behind the horizons. The exterior geometry is stationary, but the interior bridge grows.

This is the puzzle entropy cannot answer:

$$S_{\rm BH}=\frac{A}{4G_N} \quad\text{is fixed, but}\quad V_{\rm ERB}(t)\quad\text{grows.}$$

The horizon area counts a coarse measure of available black-hole degrees of freedom. It does not count how complicated the state has become under chaotic time evolution. In a chaotic quantum system, a state can remain thermal in all simple observables while becoming increasingly complex as a vector in Hilbert space.

A useful hierarchy of timescales is:

$$t_{\rm diss}\sim \beta, \qquad t_*\sim \frac{\beta}{2\pi}\log S, \qquad t_{\rm comp}\sim e^S, \qquad t_{\rm rec}\sim e^{e^S}.$$

Here $t_{\rm diss}$ is a local thermalization or dissipation time, $t_*$ is the scrambling time, $t_{\rm comp}$ is the timescale on which complexity saturates near its maximum, and $t_{\rm rec}$ is a rough recurrence timescale. Entropy is already close to equilibrium near $t_{\rm diss}$, while complexity can continue growing until exponentially late times.
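To get a feel for how dramatically these scales separate, here is a minimal numerical sketch; the entropy value $S=100$ and the use of $\beta$ as the time unit are illustrative choices, and the largest scales are reported through their logarithms to avoid overflow.

```python
import math

S = 100          # illustrative entropy (dimensionless, in units of k_B)
beta = 1.0       # inverse temperature; times are measured in units of beta

t_diss = beta                                 # dissipation time ~ beta
t_scr = beta / (2 * math.pi) * math.log(S)    # scrambling time ~ (beta/2pi) log S
log10_t_comp = S * math.log10(math.e)         # t_comp ~ e^S, reported as log10
log10_t_rec = math.exp(S) * math.log10(math.e)  # t_rec ~ e^(e^S): doubly exponential

print(f"t_diss ~ {t_diss:.2f} beta")
print(f"t_*    ~ {t_scr:.2f} beta")
print(f"t_comp ~ 10^{log10_t_comp:.0f} beta")
print(f"log10(t_rec) ~ 10^{math.log10(log10_t_rec):.0f}")  # even the exponent is huge
```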

This long growth is the central motivation for using complexity to describe black-hole interiors.

Let $|\psi_{\rm ref}\rangle$ be a simple reference state and let $\mathcal G$ be a chosen set of elementary gates. The state complexity of $|\psi\rangle$ at tolerance $\epsilon$ is schematically

$$\mathcal C_\epsilon(|\psi\rangle) = \min_U \left\{ \#\text{ gates in }U : \|U|\psi_{\rm ref}\rangle-|\psi\rangle\|<\epsilon \right\}.$$

Similarly, the complexity of a unitary $U$ is the minimal number of elementary gates required to approximate $U$:

$$\mathcal C_\epsilon(U) = \min_{V\in \langle\mathcal G\rangle} \left\{ \#\text{ gates in }V : \|U-V\|<\epsilon \right\}.$$
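For a single qubit, this definition can be evaluated directly by exhaustive search. The following is a minimal brute-force sketch, not an efficient algorithm; the gate set $\{H, T\}$, the tolerance, and the phase-insensitive distance are illustrative choices rather than anything fixed by the holographic proposals.

```python
import numpy as np

# Illustrative universal single-qubit gate set: Hadamard and T.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def distance(U, V):
    """Operator-norm distance between U and V, minimized over a global phase."""
    phase = np.exp(-1j * np.angle(np.trace(V.conj().T @ U)))
    return np.linalg.norm(phase * U - V, ord=2)

def unitary_complexity(U_target, eps=0.05, max_depth=10):
    """Brute-force C_eps(U): minimal number of gates from GATES whose
    product approximates U_target within eps. Cost is exponential in depth."""
    frontier = {(): np.eye(2, dtype=complex)}   # words of the current length
    for depth in range(max_depth + 1):
        for word, V in frontier.items():
            if distance(U_target, V) < eps:
                return depth, "".join(reversed(word))
        # extend every word by one more gate
        frontier = {word + (name,): G @ V
                    for word, V in frontier.items()
                    for name, G in GATES.items()}
    return None  # not reachable within max_depth at this tolerance

# Example: recover the minimal circuit for the product H T H.
print(unitary_complexity(H @ T @ H))   # -> (3, 'HTH')
```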

For a system with $K$ effective qubits, the Hilbert-space dimension is

$$\dim\mathcal H=2^K.$$

A generic state has exponentially large complexity,

$$\mathcal C_{\rm max}\sim e^K,$$

up to conventions about the gate set and tolerance. In a black-hole context, $K$ is of order the entropy:

$$K\sim S_{\rm BH}.$$

Thus the maximum complexity of a black-hole microstate is expected to scale roughly as

$$\mathcal C_{\rm max}\sim e^{S_{\rm BH}}.$$

This is qualitatively very different from entropy. The entropy of the black hole is of order $S_{\rm BH}$, but the maximum complexity is exponentially large in $S_{\rm BH}$. This makes complexity a natural candidate for a quantity that can keep growing for a time of order $e^{S_{\rm BH}}$.

There is no unique definition of complexity in continuum QFT. One must choose a reference state, a notion of elementary operation, and a regulator. The holographic proposals should therefore be understood as large-$N$, strongly coupled, geometrized notions of complexity, not as universal field-theoretic definitions valid in every context.

The first standard holographic proposal says that the complexity of the two-sided boundary state is proportional to the volume of a maximal bulk spatial slice connecting the two boundaries:

$$\boxed{\,\mathcal C_V(t_L,t_R) = \frac{\operatorname{Vol}(\Sigma_{\rm max})}{G_N L_*}\,}$$

up to an order-one convention-dependent coefficient. Here $L_*$ is a length scale of order the AdS radius, and $\Sigma_{\rm max}$ is a codimension-one bulk surface satisfying

$$\partial\Sigma_{\rm max}=\Sigma_L(t_L)\cup \Sigma_R(t_R),$$

where $\Sigma_L(t_L)$ and $\Sigma_R(t_R)$ are boundary time slices on the left and right CFTs.

The dimensions work as follows. In $d+1$ bulk dimensions,

$$[\operatorname{Vol}]=\text{length}^d, \qquad [G_N]=\text{length}^{d-1}, \qquad [L_*]=\text{length}.$$

Therefore

$$\frac{\operatorname{Vol}}{G_N L_*}$$

is dimensionless, as complexity should be.

Figure: the complexity=volume proposal for a two-sided AdS black hole. The proposal identifies boundary complexity with the volume of a maximal codimension-one slice stretching through the Einstein-Rosen bridge. The horizon area is constant, but the maximal slice can continue growing behind the horizons.

For the eternal black hole with $t_L=t_R=t/2$, the maximal slice grows approximately linearly at late times:

$$\operatorname{Vol}(\Sigma_{\rm max}) \sim v_{\rm th}\,t+\text{constant},$$

where $v_{\rm th}$ is a geometry-dependent rate set by the black-hole scale. Hence

$$\mathcal C_V(t)\sim \frac{v_{\rm th}}{G_N L_*}\,t.$$

Since $1/G_N$ is proportional to the number of degrees of freedom, the growth rate is of order entropy times temperature:

$$\frac{d\mathcal C_V}{dt} \sim S_{\rm BH}\,T.$$

This scaling is what one expects for a large chaotic quantum system whose many degrees of freedom are all performing computational work at a thermal rate.
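As a consistency check, the scaling can be assembled in one line. For a large AdS black hole the late-time volume growth rate is of order $G_N L M$ (quoted here only at the level of scaling, with all order-one factors dropped), so with $L_*\sim L$:

$$\frac{d\mathcal C_V}{dt} = \frac{1}{G_N L_*}\,\frac{d\operatorname{Vol}(\Sigma_{\rm max})}{dt} \;\sim\; \frac{G_N L\,M}{G_N L} \;\sim\; M \;\sim\; S_{\rm BH}\,T,$$

using $M\sim S_{\rm BH} T$, again up to order-one factors, for a large black hole.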

The volume proposal says that the black-hole interior is not primarily measured by entropy. Entropy measures the horizon area. Complexity measures how long and intricate the bridge has become.

In the thermofield-double state, increasing $t_L+t_R$ makes the wormhole longer. In the boundary theory, the same operation applies time evolution to both CFTs:

$$|\Psi(t_L,t_R)\rangle = e^{-iH_L t_L}\,e^{-iH_R t_R}\,|\mathrm{TFD}\rangle.$$

Although one-point functions and thermal correlators may look stationary, the state has moved to a more complex location in Hilbert space. The ER bridge is the geometric record of this motion.

The second standard proposal is complexity = action:

$$\boxed{\,\mathcal C_A(t_L,t_R)=\frac{I_{\rm WDW}(t_L,t_R)}{\pi\hbar}.\,}$$

Here $I_{\rm WDW}$ is the on-shell gravitational action evaluated on the Wheeler-DeWitt patch. The Wheeler-DeWitt patch is the domain of dependence of any bulk Cauchy slice anchored at the chosen boundary times. Equivalently, it is the bulk region swept out by all spacelike surfaces ending at the specified boundary time slices.

Figure: the Wheeler-DeWitt patch used in the complexity=action proposal. The action is evaluated on the Wheeler-DeWitt patch anchored at the chosen boundary times; the patch reaches behind the horizons, so it is sensitive to the growing black-hole interior.

The action includes bulk terms, boundary terms, joint terms, and null-boundary contributions. Schematically,

$$I_{\rm WDW} = I_{\rm bulk}+I_{\rm bdy}+I_{\rm joint}+I_{\rm null}.$$

For Einstein gravity with negative cosmological constant, the bulk term is

$$I_{\rm bulk} = \frac{1}{16\pi G_N} \int_{\rm WDW} d^{d+1}x\,\sqrt{-g}\,(R-2\Lambda).$$

The full definition is subtle because null boundaries and their joints require careful treatment. This is one reason the action proposal is technically more delicate than the volume proposal.

Nevertheless, the action proposal has an attractive feature: it does not require choosing an arbitrary length scale by hand. The conversion factor is simply $\pi\hbar$.

For many neutral AdS black holes, the complexity=action proposal gives a late-time growth rate

$$\frac{d\mathcal C_A}{dt} \longrightarrow \frac{2M}{\pi\hbar},$$

where $M$ is the black-hole mass. This resembles Lloyd’s proposed bound on the rate of computation,

$$\frac{d\mathcal C}{dt} \leq \frac{2E}{\pi\hbar}.$$

This match was one of the motivations for the action proposal and for the slogan that black holes are among the fastest computers in nature.
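To get a feel for the size of this rate, one can simply evaluate $2E/\pi\hbar$ with $E=Mc^2$ restored. A minimal numerical sketch; the masses are illustrative, and calling the result "operations per second" inherits every caveat about what counts as a gate.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def lloyd_rate(mass_kg):
    """Maximal complexity growth rate 2E/(pi*hbar) with E = m c^2,
    in elementary operations per second (schematic units)."""
    E = mass_kg * c**2
    return 2 * E / (math.pi * hbar)

# One kilogram of mass-energy: roughly 5e50 operations per second.
print(f"1 kg:         {lloyd_rate(1.0):.2e} ops/s")
# A solar-mass black hole (M_sun ~ 2e30 kg):
print(f"1 solar mass: {lloyd_rate(1.989e30):.2e} ops/s")
```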

A few cautions are important.

First, the precise meaning and general validity of Lloyd’s bound in quantum systems are subtle. Second, charged, rotating, time-dependent, or higher-derivative black holes require additional care. Third, holographic complexity itself is not defined as sharply as entropy, so one should not treat the late-time rate as a universal theorem.

The robust qualitative lesson is more modest:

$$\boxed{\text{large black holes exhibit long, nearly linear growth of interior-sensitive complexity.}}$$

This is the feature shared by the volume and action proposals.

A useful diagnostic is the complexity of formation. For the thermofield-double state, define

$$\Delta\mathcal C = \mathcal C(|\mathrm{TFD}\rangle) - \mathcal C(|0\rangle_L) - \mathcal C(|0\rangle_R),$$

where $|0\rangle_L$ and $|0\rangle_R$ are reference vacuum states or ground states for the two CFTs. In the bulk, this compares the complexity of the eternal black hole with that of two copies of empty AdS.

The complexity of formation is analogous to a thermodynamic formation quantity. It asks how hard it is to create the entangled two-sided black-hole state, not how fast complexity grows after it has been created.

At high temperature, one expects the complexity of formation to be proportional to the entropy:

$$\Delta\mathcal C\sim S_{\rm BH}.$$

This is much smaller than the maximum complexity $e^{S_{\rm BH}}$, but much larger than order-one complexity. It reflects the fact that preparing a large entangled black hole is already computationally expensive, even before the bridge begins its long linear growth.

One of the most beautiful tests of holographic complexity involves shock waves.

Consider perturbing the thermofield-double state by an operator $W$ inserted at an early boundary time $-t_w$. Evolving to the present produces a precursor operator schematically of the form

$$U_{\rm prec}(t_w) = e^{-iHt_w}\,W\,e^{iHt_w}.$$

The circuit has a forward time evolution, an insertion, and a backward time evolution. Naively, one might expect the complexity to grow like

$$\mathcal C\sim 2v\,t_w,$$

where $v$ is a complexity growth rate. But chaotic dynamics produces cancellations for a while. The forward and backward evolutions cancel except near the perturbation, until the perturbation has spread over the system.

The time needed for this spreading is the scrambling time:

$$t_*\sim \frac{\beta}{2\pi}\log S_{\rm BH}.$$

The resulting behavior is the switchback effect:

$$\mathcal C\!\left(e^{-iHt_w}W e^{iHt_w}\right) \approx \begin{cases} O(1), & t_w\lesssim t_*,\\ 2v\,(t_w-t_*), & t_w\gtrsim t_*. \end{cases}$$
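The piecewise form is easy to turn into a toy model. In the sketch below, the rate $v$ and the scrambling time $t_*$ are free parameters, and the smooth softplus interpolation between the two regimes is a cosmetic choice, not part of the proposal.

```python
import math

def switchback_complexity(t_w, t_star, v, smooth=True):
    """Toy model of precursor complexity C(t_w):
    O(1) before the scrambling time, then linear growth 2v(t_w - t_star)."""
    if not smooth:
        return 1.0 if t_w <= t_star else 2 * v * (t_w - t_star)
    # softplus interpolation between the two regimes (purely cosmetic)
    return 1.0 + 2 * v * math.log1p(math.exp(t_w - t_star))

# Scrambling time t_* = (beta/2pi) log S for illustrative beta and S
beta, S, v = 1.0, 1e4, 1.0
t_star = beta / (2 * math.pi) * math.log(S)
for t_w in [0.5 * t_star, t_star, 2 * t_star, 5 * t_star]:
    print(f"t_w = {t_w:5.2f}: C ~ {switchback_complexity(t_w, t_star, v):8.2f}")
```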

Figure: the switchback effect in the complexity of a precursor. A precursor circuit $e^{-iHt_w}W e^{iHt_w}$ has forward and backward time evolutions that largely cancel until the perturbation has scrambled; the delay by the scrambling time $t_*$ is the switchback effect.

In the bulk, an early operator insertion creates a gravitational shock wave near the horizon. The shock changes the geometry of the Einstein-Rosen bridge. The volume and action proposals reproduce the same scrambling-time delay that appears in the boundary circuit picture.

This is one of the strongest pieces of evidence that holographic complexity is capturing something real: the same delay appears in two very different languages.

The switchback effect is closely related to quantum chaos.

In a chaotic many-body system, a simple operator becomes complicated under Heisenberg evolution:

$$W(t)=e^{iHt}\,W\,e^{-iHt}.$$

At early times, $W(t)$ acts only on a few degrees of freedom. After scrambling, it acts effectively on order $S$ degrees of freedom. The scrambling time is therefore the time when a local perturbation has grown into a system-size perturbation.

In holographic systems, this growth is diagnosed by out-of-time-order correlators. A characteristic behavior is

$$\langle V(0)W(t)V(0)W(t)\rangle \sim 1-\frac{1}{S}\,e^{\lambda_L t}+\cdots,$$

with Lyapunov exponent

$$\lambda_L\leq \frac{2\pi}{\beta}.$$

Black holes saturate this chaos bound in the simplest Einstein-gravity regime. The scrambling time follows from the condition

$$\frac{1}{S}\,e^{\lambda_L t_*}\sim 1,$$

which gives

$$t_*\sim \frac{1}{\lambda_L}\log S \sim \frac{\beta}{2\pi}\log S.$$

This is the same timescale that appears in shock-wave geometries and in the switchback effect. Complexity, chaos, and near-horizon gravitational blueshift are therefore tightly linked.
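Numerically, the onset of scrambling is quite abrupt: the OTOC correction $(1/S)e^{\lambda_L t}$ stays tiny for most of the interval before $t_*$ and then becomes order one within a few thermal times. A minimal sketch, assuming the saturated exponent $\lambda_L = 2\pi/\beta$ and an illustrative entropy:

```python
import math

beta = 1.0
S = 1e4
lam = 2 * math.pi / beta          # maximal (saturated) Lyapunov exponent

def otoc(t):
    """Toy OTOC: F(t) ~ 1 - (1/S) exp(lambda_L t), valid while the correction is small."""
    return 1.0 - math.exp(lam * t) / S

# Scrambling time: where the correction (1/S) e^{lambda t} becomes O(1)
t_star = math.log(S) / lam        # = (beta / 2pi) log S
for t in [0.25 * t_star, 0.5 * t_star, 0.75 * t_star]:
    print(f"t = {t:5.3f} beta: F ~ {otoc(t):.6f}")   # barely below 1
print(f"t_* = {t_star:.3f} beta: F ~ {otoc(t_star):.3f}")  # drops to ~0
```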

The island formula says that after the Page time, the entanglement wedge of the Hawking radiation can include an island inside the black hole. In principle, this means that certain interior operators are encoded in the radiation.

But “encoded” is not the same as “easy to decode.”

The Harlow-Hayden argument says that distilling a particular qubit of information from Hawking radiation may require time exponential in the black-hole entropy:

$$t_{\rm decode}\sim e^{c\,S_{\rm BH}},$$

for some order-one constant $c$ in the relevant complexity model. Thus information can be present in the radiation while remaining computationally inaccessible to any realistic observer.

This distinction is crucial. The Page curve is an entropy statement. It tells us about the fine-grained correlations between the radiation and the remaining black hole. It does not provide an efficient algorithm for extracting a diary thrown into the black hole.

Holographically, the same distinction appears geometrically. Entanglement wedge reconstruction says that a region can reconstruct operators in its wedge. Complexity asks how hard that reconstruction is.

A modern geometric refinement of decoding hardness is the Python’s lunch proposal.

The rough picture is this. In some semiclassical geometries, a reconstruction region has an entanglement wedge containing the desired bulk point, but the path from the boundary to that point passes through a geometric obstruction. In entropy language, there may be more than one quantum extremal surface: a locally minimal QES and a larger “bulge” surface. The difference in generalized entropy behaves like a computational barrier.

Let $X_{\rm min}$ be the relevant locally minimal QES and let $X_{\rm bulge}$ be the larger intermediate surface. Define

$$\Delta S_{\rm gen} = S_{\rm gen}(X_{\rm bulge})-S_{\rm gen}(X_{\rm min}).$$

Then the decoding complexity is conjectured to scale schematically as

$$\mathcal C_{\rm decode} \sim \exp\!\left(\frac{1}{2}\,\Delta S_{\rm gen}\right),$$

up to model-dependent prefactors and polynomial corrections.
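Plugging in numbers makes the barrier concrete. A minimal sketch; the values of $\Delta S_{\rm gen}$ are illustrative, and the first one, $2\log 100$, matches the toy estimate worked out in the final exercise below.

```python
import math

def decode_complexity_log10(delta_S_gen):
    """log10 of the conjectured Python's-lunch decoding cost exp(delta_S_gen / 2)."""
    return 0.5 * delta_S_gen * math.log10(math.e)

# From a modest gap (2 log N with N = 100) to a gap of order a black-hole entropy
for dS in [2 * math.log(100), 100.0, 1e4]:
    print(f"Delta S_gen = {dS:10.1f}  ->  C_decode ~ 10^{decode_complexity_log10(dS):.1f}")
```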

Figure: Python’s lunch as a geometric obstruction to decoding Hawking radiation. A Python’s-lunch geometry contains an entropic bulge between a reconstruction region and the target bulk information; the generalized-entropy difference between the bulge and the locally minimal QES is interpreted as a decoding barrier.

This proposal is especially relevant for black-hole information. Islands say that the radiation contains the interior in its entanglement wedge. Python’s lunch says that reconstructing the interior from the radiation can nevertheless be exponentially hard.

The message is clean:

$$\boxed{\text{entanglement wedge inclusion is kinematic; efficient reconstruction is computational.}}$$

A region may contain the information in principle, but the recovery map may be too complex to implement in practice.

Because both entropy and complexity are information-theoretic words, it is easy to confuse them. They measure different things.

Entropy counts effective uncertainty or the logarithm of a number of accessible states. For a thermal system,

$$S\sim \log \dim\mathcal H_{\rm thermal}.$$

Complexity measures the minimal number of elementary operations needed to prepare a state or implement a unitary. For a typical state in a Hilbert space of entropy $S$,

$$\mathcal C\sim e^S.$$

Entropy can saturate quickly. Complexity can keep growing for exponentially long times.

In black holes, this distinction becomes geometric:

$$\begin{array}{ccl} \text{entropy} &\longleftrightarrow& \text{horizon area or extremal-surface area},\\ \text{complexity} &\longleftrightarrow& \text{interior volume, action, or a related bulk functional}. \end{array}$$

The RT/QES formula tells us which degrees of freedom are entangled with which. Complexity proposals try to tell us how hard it is to prepare, evolve, or decode those degrees of freedom.

The two proposals above are the classics, but they are not the end of the story. Many variants have been explored. Some replace volume by spacetime volume; some modify the bulk functional; some use generalized volumes; some try to connect holographic complexity to tensor-network costs, circuit geometry, path-integral optimization, Nielsen geometry, Krylov complexity, or operator growth.

A deliberately schematic family of proposals is

$$\mathcal C \sim \underset{\mathcal R}{\operatorname{ext}}\,\mathcal F_{\rm bulk}(\mathcal R),$$

where $\mathcal R$ is a bulk region or surface associated with the boundary time slice, and $\mathcal F_{\rm bulk}$ is a diffeomorphism-invariant functional.

This flexibility is both a strength and a weakness. It allows the subject to adapt to different physical questions, but it also means that complexity is less sharply pinned down than entropy.

A good working attitude is:

$$\boxed{\text{holographic complexity is a diagnostic of interior growth and reconstruction hardness, not yet a unique observable.}}$$

This attitude is particularly useful for students. One should learn the standard CV and CA proposals, understand the evidence from shock waves and late-time growth, and then treat newer proposals as attempts to isolate which features are universal.

Relation to the black-hole information problem


Complexity enters black-hole information in three logically distinct ways.

First, it explains why the black-hole interior can keep growing after entropy saturates. This is most transparent in the eternal two-sided black hole.

Second, it sharpens the difference between information being present and information being accessible. The Page curve and island formula can imply that the radiation contains information, while Harlow-Hayden and Python’s-lunch arguments suggest that extracting it may be exponentially hard.

Third, it helps diagnose the operational meaning of bulk reconstruction. Entanglement wedge reconstruction is an existence statement: an operator has a boundary representation. Complexity asks whether that representation is simple, nonlocal, exponentially complicated, or practically unreachable.

This gives a useful dictionary:

$$\begin{array}{ccl} \text{Page curve} &\longrightarrow& \text{how much information is in the radiation},\\ \text{islands} &\longrightarrow& \text{where the information is encoded},\\ \text{complexity} &\longrightarrow& \text{how hard the information is to recover}. \end{array}$$

This is why complexity belongs in a sequence on black-hole information even though it is not itself an entropy formula.

Pitfall 1: “Complexity is just another entropy.”


No. Entropy measures the size of an effective state space or the mixedness of a reduced density matrix. Complexity measures the minimal cost of preparing a state or implementing a unitary. They have very different saturation scales.

Pitfall 2: “The volume proposal and action proposal must both be exactly right.”


They cannot both be unique exact definitions in all regimes unless their differences are physically irrelevant in the appropriate limit. It is better to regard them as candidate bulk duals of related notions of complexity.

Pitfall 3: “If the radiation contains the island, decoding must be easy.”


Entanglement wedge reconstruction guarantees existence within a code subspace. It does not guarantee an efficient decoding algorithm. Complexity is the missing operational ingredient.

Pitfall 4: “A long Einstein-Rosen bridge means a traversable shortcut.”


No. The bridge can become very long while remaining nontraversable. Complexity growth is compatible with causal disconnection of the two boundaries unless an explicit coupling is added.

Pitfall 5: “The CA late-time rate proves Lloyd’s bound.”


The match is suggestive, but the bound and its precise domain of validity are subtle. The main robust lesson is the long linear growth of an interior-sensitive quantity.

Exercise 1: dimensions of the volume proposal

In $d+1$ bulk dimensions, show that

$$\mathcal C_V=\frac{\operatorname{Vol}(\Sigma)}{G_N L_*}$$

is dimensionless.

Solution

In $d+1$ bulk dimensions, a codimension-one spatial slice has dimension $d$, so

$$[\operatorname{Vol}(\Sigma)]=\text{length}^d.$$

The Newton constant has dimension

$$[G_N]=\text{length}^{d-1},$$

because the Einstein-Hilbert action contains

$$\frac{1}{G_N}\int d^{d+1}x\,\sqrt{-g}\,R,$$

and $R$ has dimension $\text{length}^{-2}$. Therefore

$$[G_N L_*]=\text{length}^{d-1}\cdot\text{length}=\text{length}^d.$$

Thus

$$\frac{\operatorname{Vol}(\Sigma)}{G_N L_*}$$

is dimensionless.
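The same bookkeeping can be checked symbolically. A minimal sketch using sympy, where `length` is a formal positive symbol standing in for the unit of length:

```python
import sympy as sp

length, d = sp.symbols("length d", positive=True)

vol = length**d           # [Vol(Sigma)] = length^d
G_N = length**(d - 1)     # [G_N] = length^(d-1)
L_star = length           # [L_*] = length

ratio = sp.simplify(vol / (G_N * L_star))
print(ratio)              # -> 1, i.e. the combination is dimensionless
```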

Exercise 2: entropy saturation versus complexity growth


Suppose a black hole has entropy $S$ and temperature $T\sim \beta^{-1}$. Explain why it is natural for entropy to saturate on a time of order $\beta$ or $\beta\log S$, while complexity can grow until a time of order $e^S$.

Solution

Entropy measures coarse information such as the effective number of thermally occupied states. Local thermalization occurs when simple observables lose memory of their initial values, which happens on a dissipation time of order $\beta$ in a strongly coupled thermal system. Scrambling of initially localized information across all degrees of freedom takes the longer time

$$t_*\sim \beta\log S.$$

After this, the system can already look thermal to simple probes.

Complexity is different. It measures the minimal circuit size needed to prepare the full state. A typical state in a Hilbert space with entropy $S$ has complexity of order

$$\mathcal C_{\rm max}\sim e^S.$$

If complexity grows at a rate polynomial in $S$, it takes an exponential time to approach this maximum. Hence entropy can saturate quickly while complexity continues to grow until times of order $e^S$.

Exercise 3: complexity of a precursor

Consider the precursor unitary

$$U_{\rm prec}(t)=e^{-iHt}\,W\,e^{iHt}.$$

Explain why its complexity is not simply the sum of the complexities of $e^{-iHt}$ and $e^{iHt}$ for all $t$.

Solution

If $W$ were absent, the product would be

$$e^{-iHt}e^{iHt}=1,$$

so the forward and backward evolutions would cancel exactly. With $W$ inserted, the cancellation is spoiled only in the part of the circuit affected by $W$.

At early times, the Heisenberg-evolved operator

$$W(t)=e^{iHt}\,W\,e^{-iHt}$$

has not yet spread through the whole system. Most of the forward and backward time evolutions still cancel. Only after the scrambling time

$$t_*\sim \frac{\beta}{2\pi}\log S$$

has $W(t)$ become a system-size operator. After that, the cancellation is inefficient, and the complexity grows approximately like the sum of the two time evolutions with a delay. This delay is the switchback effect.

Exercise 4: late-time action growth and Lloyd’s bound

Assume the complexity=action proposal gives

$$\frac{d\mathcal C_A}{dt}=\frac{2M}{\pi\hbar}$$

at late times for a neutral AdS black hole. Explain why this resembles a bound on computational speed.

Solution

A proposed quantum bound on computation says that a system with available energy $E$ cannot increase its complexity faster than roughly

$$\frac{d\mathcal C}{dt}\leq \frac{2E}{\pi\hbar}.$$

For a neutral black hole, the relevant energy is its mass $M$. The CA result

$$\frac{d\mathcal C_A}{dt}=\frac{2M}{\pi\hbar}$$

therefore saturates the same form of the bound. This motivates the slogan that black holes are extremely efficient computers.

The interpretation should be treated carefully because the general validity of such bounds and the precise definition of complexity are subtle. The important point is that the gravitational action growth has the same scaling as the fastest possible energy-limited computation.

Exercise 5: islands versus efficient decoding

The island formula says that after the Page time, the radiation entropy is computed by a saddle including an island. Does this imply that an observer can efficiently extract an infalling diary from the radiation? Explain.

Solution

No. The island formula is an entropy statement. It says that the fine-grained entropy of the radiation is computed as though part of the black-hole interior belongs to the radiation’s entanglement wedge. Equivalently, it implies that the radiation contains enough quantum information to reconstruct certain interior degrees of freedom within the appropriate code subspace.

But reconstruction need not be efficient. The boundary representation of an interior operator may be extremely complicated. Harlow-Hayden-type arguments suggest that distilling specific information from Hawking radiation can require time exponential in the black-hole entropy:

$$t_{\rm decode}\sim e^{c\,S_{\rm BH}}.$$

Thus the information can be present in principle but inaccessible in practice.

Exercise 6: a Python’s-lunch decoding estimate

Suppose a reconstruction problem has a locally minimal QES with generalized entropy $S_{\rm gen}(X_{\rm min})$ and a bulge surface with generalized entropy $S_{\rm gen}(X_{\rm bulge})$. If

$$S_{\rm gen}(X_{\rm bulge})-S_{\rm gen}(X_{\rm min})=2\log N,$$

estimate the Python’s-lunch decoding complexity using

$$\mathcal C_{\rm decode}\sim \exp\left(\frac{1}{2}\Delta S_{\rm gen}\right).$$

Solution

Here

$$\Delta S_{\rm gen}=2\log N.$$

The proposed scaling gives

$$\mathcal C_{\rm decode} \sim \exp\left(\frac{1}{2}\cdot 2\log N\right) = \exp(\log N) = N.$$

Thus the decoding cost is only of order $N$ in this toy estimate. If instead $\Delta S_{\rm gen}$ were of order $S_{\rm BH}$, the same formula would give an exponentially large decoding complexity.