# Large-order NSPT for lattice gauge theories with fermions: the plaquette in massless QCD

###### Abstract

Numerical Stochastic Perturbation Theory (NSPT) allows for perturbative computations in quantum field theory. We present an implementation of NSPT that yields results for high orders in the perturbative expansion of lattice gauge theories coupled to fermions. The zero-momentum mode is removed by imposing twisted boundary conditions; in turn, twisted boundary conditions require us to introduce a smell degree of freedom in order to include fermions in the fundamental representation. As a first application, we compute the critical mass of two flavours of Wilson fermions up to high orders in the gauge coupling. We also implement, for the first time, staggered fermions in NSPT. The residual chiral symmetry of staggered fermions protects the theory from an additive mass renormalisation. We compute the perturbative expansion of the plaquette with two flavours of massless staggered fermions up to high orders in the gauge coupling, and investigate the renormalon behaviour of this series. We are able to subtract the power divergence in the Operator Product Expansion (OPE) for the plaquette and determine the gluon condensate in massless QCD. Our results confirm that NSPT provides a viable way to probe systematically the asymptotic behaviour of perturbative series in QCD and, eventually, in gauge theories with fermions in higher representations.

###### Contents

- 1 Introduction
- 2 Lattice gauge theories in NSPT
- 3 Twisted boundary conditions and smell
- 4 Fermions in NSPT
- 5 The critical mass of Wilson fermions
- 6 Perturbative expansion of the plaquette
- 7 Gluon condensate
- 8 Conclusions
- A Group theory conventions
- B Optimisation of the fermion drift
- C Fourier transforms with twisted boundary conditions
- D Autocorrelations and cross-correlations
- E Twisted lattice perturbation theory
- F Code development for NSPT
- G The nearest covariance matrix

## 1 Introduction

The success of perturbation theory in High Energy Physics (HEP) can hardly be denied. In particular, in asymptotically free theories, field correlators at short distances are reliably approximated by perturbative expansions in the running coupling at a large momentum scale. At the same time, even in these (lucky) cases, it is mandatory to have some control on nonperturbative effects, i.e. contributions that scale like powers of the QCD scale $\Lambda_{\rm QCD}$. We will often refer to these as power corrections. A tool to take the latter into account was suggested back in the late seventies. This goes under the name of QCD sum rules, or Shifman-Vainshtein-Zakharov (SVZ) sum rules [1, 2]. One of the authors defined the method as “an expansion of the correlation functions in the vacuum condensates” [3]. These condensates are the vacuum expectation values of the operators that emerge in the Operator Product Expansion (OPE) for the relevant correlation function. In the OPE formalism the condensates are fundamental quantities, which are in principle supposed to parametrise power corrections in a universal way. By determining the value of a condensate in one context, one gains insight into different physical processes; this has in turn motivated several approaches to the determination of condensates. Having said all this, the sad news is that not all condensates actually have the same status. In particular, not all condensates can be defined in a neat way, which ultimately means disentangled from perturbation theory. While this is the case for the chiral condensate, the same cannot be said for the gluon condensate, which is the one we will be concerned with in this work.

Based on a separation of scales, the OPE makes it quite clear what can/must be computed in perturbation theory, i.e. the Wilson coefficients. Still, this does not automatically imply that perturbative and nonperturbative contributions are separated in a clear-cut way. The key issue is that perturbative expansions in HEP are expected to be asymptotic ones on very general grounds. In particular, the series in asymptotically free theories are plagued by ambiguities which are due to so-called infrared renormalons [4]. From a technical point of view, renormalons show up as singularities which are encountered if one tries to Borel resum the perturbative series. All in all, there is a power-like ambiguity in any procedure one can devise in order to sum the series, and this ambiguity unavoidably reshuffles perturbative and nonperturbative contributions in the structure of the OPE. Since the Wilson coefficients are affected by ambiguities that take the form of power corrections, the general strategy is to reabsorb the latter in the definition of the condensates. This amounts to a prescription to give a precise meaning both to the perturbative series and to the condensates that appear in the OPE.

The idea of determining the gluon condensate from nonperturbative (Monte Carlo) measurements in lattice gauge theories dates back to the eighties and early nineties [5, 6, 7]. Based on symmetry grounds and dimensional counting, the two leading contributions in the OPE for the basic plaquette are given by the identity operator and the gluon condensate. Both operators appear multiplied by Wilson coefficients that can be computed in perturbation theory, and in particular the coefficient that multiplies the identity operator is simply the perturbative expansion of the plaquette. Other operators that appear in the OPE are of higher dimension, and their contributions are therefore suppressed by additional powers of $a\Lambda_{\rm QCD}$. Subtracting from a nonperturbative (Monte Carlo) measurement of the plaquette the sum of the perturbative series, and repeating the procedure at different values of the coupling, the signature of asymptotic scaling, i.e. the signature of a quantity of (mass) dimension four, should become visible. With renormalons attracting more and more attention, it eventually became clear that such a procedure must be deeply affected by the ambiguities we discussed above, suggesting that a precise definition of the resummed perturbative expansion is necessary.

In the meantime Numerical Stochastic Perturbation Theory
(NSPT) [8] was developed as a new tool for computing high
orders in lattice perturbation theory. NSPT paved the way to the evaluation of
many more terms in the perturbative expansion of the plaquette, and in turn made
it at least conceivable that the behaviour of the series could be understood at
the level of pinning down the correct order of magnitude of the ambiguity
involved. Results of early investigations [9] were
interesting: for the first time, it was clear that very high order contributions
can be computed in perturbative series for lattice gauge theories. Unfortunately
the pioneering NSPT studies of that time were far from computing the series
up to the orders at which the renormalon growth actually shows up in its full
glory. With limited computing power available, a way out was sought in the form
of a change of scheme (i.e. a scheme in which the renormalon behaviour is best
recognised, possibly at lower orders than in the lattice scheme). Still, the
numerical results were in the end puzzling in their implications, since trying to
sum the series from the information available even suggested the idea that an
unexpected contribution from a dimension-two operator was
present [10]. Other attempts were made [11],
but it eventually took roughly twenty years before the renormalon behaviour was
actually captured [12, 13, 14], needless to
say, via NSPT ^{1}^{1}1One should note that one of the reasons why the renormalon
growth was correctly reproduced and the OPE correctly reconstructed is the
adoption of twisted boundary conditions: in this way zero modes are absent and
the theoretical picture is clear.. In Yang-Mills theory the IR
renormalon was indeed directly inspected, and the finite-size effects that are
unavoidable on finite lattices were assessed. The bottom line is that the victory is
twofold. On the one hand, the renormalon growth is indeed proved to be present as
conjectured (ironically, in a scheme, the lattice one, which one would have
regarded as the very worst for performing the computations). On the other hand, one now has a
prescription to sum the series and perform the subtraction: if sufficiently high
orders are available, one can look for the inversion point in the series, where
contributions start to grow and a minimum indetermination in summing the series
can be attained.

The present work is a first attempt at performing the determination of the gluon condensate from the plaquette in full QCD, i.e. with fermionic contributions taken into account. The main focus here is on developing the NSPT technology and presenting a first set of results, which allow a definition of the gluon condensate. In particular, for this first exploration, we use existing Monte Carlo simulations for the plaquette in full QCD, as detailed below. Having ascertained that the procedure is viable, a precise determination of the condensate in full QCD will require a dedicated Monte Carlo simulation, with a careful choice of the fermionic action. On top of being interesting per se, the methodology presented here opens the way to other applications, in which different colour groups and different matter contents can be investigated. The final goal would be to inspect whether in a theory that has an IR fixed point the renormalon growth is tamed, as one would expect in theories where the condensates vanish. We defer these questions to future investigations, hoping to gain extra insight into the task of identifying the boundaries of the conformal window.

The paper is organised as follows. In Sect. 2 we review briefly how NSPT can be applied to lattice gauge theories. In Sect. 3 twisted boundary conditions for fermions in the fundamental representation are introduced. In Sect. 4 we discuss how to take into account fermions with smell in NSPT. We present our results for the expansion of the critical mass of Wilson fermions in Sect. 5, and for the expansion of the plaquette with staggered fermions in Sect. 6. In Sect. 7 we investigate the asymptotic behaviour of the expansion of the plaquette and extract the gluon condensate in massless QCD. In Sect. 8 we draw our conclusions and present some possible future steps.

## 2 Lattice gauge theories in NSPT

Let us here summarise the main steps in defining NSPT for lattice gauge
theories. Rather than trying to give a comprehensive review of the method, we
aim here to introduce a consistent notation that will allow us to discuss the
new developments in the rest of the paper. For a more detailed discussion of the
NSPT formulation, the interested reader can consult e.g.
Ref. [15], whose notation we shall try to follow
consistently ^{2}^{2}2For convenience, we summarise our group theory conventions in
Appendix A.. In particular, we work with a hypercubic lattice
of volume $L^4$ and set the lattice spacing to $a=1$, unless
stated otherwise. We use $x,y,\dots$ for position indices,
$\mu,\nu,\dots$ for Lorentz indices and
$\alpha,\beta,\dots$ for Dirac indices.

The original formulation of NSPT is based on the Stochastic Quantization formulation of lattice field theories, in the case at hand lattice gauge theories. For the purposes of this study, we focus on gauge theories that are defined by the Euclidean Wilson action for the gauge group $SU(N_c)$:

$$S_G[U] \;=\; \beta \sum_{P} \left( 1 - \frac{1}{N_c}\, \mathrm{Re}\, \mathrm{Tr}\, U_P \right) \qquad (1)$$

where $U_P$ is the product of the link variables, denoted $U_{x,\mu}$, around the plaquette $P$, and the sum extends over all the plaquettes in the lattice. Introducing a stochastic time $t$, a field $U_{x,\mu}(t)$ can be defined that satisfies the Langevin equation

$$\frac{\partial}{\partial t}\, U_{x,\mu}(t) \;=\; -i \left[ \nabla_{x,\mu} S_G[U] + \eta_{x,\mu}(t) \right] U_{x,\mu}(t) \qquad (2)$$

As detailed in Appendix A, we have denoted by $\nabla_{x,\mu}$ the left derivative in the group; $\eta_{x,\mu}$ is a stochastic variable defined in the algebra of the group,

$$\eta_{x,\mu} \;=\; \eta^a_{x,\mu}\, T^a \qquad (3)$$

where $T^a$ are the generators of the group, and $\eta^a_{x,\mu}$ are Gaussian variables such that

$$\left\langle \eta^a_{x,\mu}(t)\, \eta^b_{y,\nu}(t') \right\rangle \;=\; 2\, \delta^{ab}\, \delta_{xy}\, \delta_{\mu\nu}\, \delta(t-t') \qquad (4)$$

The key point of Stochastic Quantization is that the large-$t$ distribution of observables built from the solution of the Langevin equation above corresponds to the distribution that defines the path integral of the quantum theory [16, 17]:

$$\lim_{t\to\infty} \left\langle O[U(t)] \right\rangle_\eta \;=\; \frac{1}{Z} \int \mathcal{D}U\; O[U]\; e^{-S_G[U]} \qquad (5)$$

In order to develop NSPT, the dynamical variables can be expanded in powers of the coupling constant $g$, which in the lattice formulation is given by $g^2 = 2N_c/\beta$:

$$U_{x,\mu} \;=\; 1 + \sum_{k\ge 1} g^k\, U^{(k)}_{x,\mu} \qquad (6)$$

Solving the Langevin equation, Eq. (2), order by
order in $g$ yields a system of coupled equations for the
perturbative components $U^{(k)}_{x,\mu}$ of the link variables.
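In practice, each link is stored as the ordered collection of its perturbative components, and every operation is performed on truncated series. As an illustration of this data structure, the order-by-order product of two such series can be sketched as follows (the truncation order `KMAX` and the matrix size are our own arbitrary choices, not taken from the paper):

```python
import numpy as np

KMAX = 4  # truncation order in g (hypothetical choice)
NC = 3    # number of colours

def series_mult(A, B):
    """Product of two truncated series of NC x NC matrices:
    (A*B)^(k) = sum_{j=0..k} A^(j) @ B^(k-j), dropping orders > KMAX."""
    C = np.zeros_like(A)
    for k in range(KMAX + 1):
        for j in range(k + 1):
            C[k] += A[j] @ B[k - j]
    return C

# a "link" starting from the identity at order g^0, as in Eq. (6)
U = np.zeros((KMAX + 1, NC, NC), dtype=complex)
U[0] = np.eye(NC)

# the identity series is the neutral element of the truncated product
I = np.zeros_like(U)
I[0] = np.eye(NC)
assert np.allclose(series_mult(U, I), U)
```

All NSPT operations (plaquettes, drifts, gauge transformations) reduce to compositions of this truncated product.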

Expanding the solution of the Langevin equation in powers of the coupling is a standard
approach to proving the equivalence of stochastic and canonical
quantisation, i.e. Eq. (5) [18], and was the starting
point for stochastic perturbation theory: in this respect, NSPT is just the numerical implementation
of the latter on a computer. The idea of studying the
convergence properties of a stochastic process order by order after an expansion in the
coupling is actually quite general. In this spirit different NSPT schemes can be
set up, also based on stochastic differential equations other than
Langevin [19, 20].

#### Euler integrator

Discretising the stochastic time in steps of size $\varepsilon$ allows a numerical integration of the Langevin equation,

$$U_{x,\mu}(t+\varepsilon) \;=\; e^{-F_{x,\mu}[U,\eta]}\; U_{x,\mu}(t) \qquad (7)$$

where the force driving the evolution is

$$F_{x,\mu} \;=\; \mathbb{P}_{\mathcal{A}}\!\left[\, i \left( \varepsilon\, \nabla_{x,\mu} S_G[U] + \sqrt{\varepsilon}\,\eta_{x,\mu} \right) \right] \qquad (8)$$

and the operator $\mathbb{P}_{\mathcal{A}}$ projects on the algebra (see Appendix A). Note that Eq. (2) does not lend itself to a perturbative solution in powers of $g$, since there is a mismatch between the deterministic drift term, which starts at order $g^{-1}$, and the stochastic noise, which is of order $g^0$. This is easily resolved by rescaling the integration step by a factor of $\beta$, so that both contributions start at order $g$. Denoting the new time step $\tau = \beta\varepsilon$, the force term becomes

$$F_{x,\mu} \;=\; \mathbb{P}_{\mathcal{A}}\!\left[\, i \left( \frac{\tau}{\beta}\, \nabla_{x,\mu} S_G[U] + \sqrt{\frac{\tau}{\beta}}\,\eta_{x,\mu} \right) \right] \qquad (9)$$

Expanding $F_{x,\mu}$ in powers of $g$,

$$F_{x,\mu} \;=\; \sum_{k\ge 1} g^k\, F^{(k)}_{x,\mu} \qquad (10)$$

leads to a system of coupled equations for the evolution of the coefficients of the perturbative expansion of $U_{x,\mu}$. Omitting Lorentz and position indices, we get

$$U^{(1)}(t+\tau) \;=\; U^{(1)}(t) - F^{(1)} \qquad (11a)$$

$$U^{(k)}(t+\tau) \;=\; U^{(k)}(t) + \sum_{j=1}^{k} \left[ e^{-F} \right]^{(j)} U^{(k-j)}(t)\,, \qquad U^{(0)} = 1 \qquad (11b)$$

where the noise $\eta$ only contributes to the $F^{(1)}$ term.
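A minimal sketch of one Euler step performed order by order on a single link, with a placeholder force in which, as noted above, the noise enters only the lowest order (the force used here is a random anti-hermitian matrix, not the actual derivative of the Wilson action):

```python
import numpy as np

KMAX, NC = 4, 3
rng = np.random.default_rng(1)

def series_mult(A, B):
    """Truncated product of perturbative series of matrices."""
    C = np.zeros_like(A)
    for k in range(KMAX + 1):
        for j in range(k + 1):
            C[k] += A[j] @ B[k - j]
    return C

def series_exp(F):
    """exp(F) for a series with F^(0) = 0, exact up to order KMAX."""
    E = np.zeros_like(F); E[0] = np.eye(NC)
    term = np.zeros_like(F); term[0] = np.eye(NC)
    for n in range(1, KMAX + 1):   # F^n/n! only contributes from order n on
        term = series_mult(term, F) / n
        E += term
    return E

# toy force: only the lowest order is populated (noise-driven),
# with an anti-hermitian traceless matrix mimicking an algebra element
F = np.zeros((KMAX + 1, NC, NC), dtype=complex)
eta = rng.normal(size=(NC, NC))
F[1] = (eta - eta.T) / 2

U = np.zeros_like(F); U[0] = np.eye(NC)
U_new = series_mult(series_exp(-F), U)   # one step of Eq. (7), order by order

# at first order the update is linear in the force, as in Eq. (11a)
assert np.allclose(U_new[1], U[1] - F[1])
```

The same structure, with the actual drift and one force per link, is all that a full NSPT sweep requires.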

#### Stochastic gauge fixing

The zero modes of the gauge action do not generate a deterministic drift term in the Langevin equation, and therefore their evolution in stochastic time is entirely driven by the stochastic noise, which gives rise to diverging fluctuations. This phenomenon has been well known since the early days of NSPT, see e.g. Ref. [21], and is cured by the so-called stochastic gauge fixing procedure [22] applied to the theory formulated on the lattice. The procedure implemented in this work alternates an integration step as described above with a gauge transformation:

$$U_{x,\mu} \;\longrightarrow\; e^{G(x)}\; U_{x,\mu}\; e^{-G(x+\hat\mu)} \qquad (12)$$

where the field $G$ is defined in the algebra of the group,

$$G(x) \;=\; \alpha \sum_\mu \bar\partial_\mu A_\mu(x)\,, \qquad A_\mu(x) = \log U_{x,\mu} \qquad (13)$$

$\alpha$ is a free parameter and $\bar\partial_\mu$ is the backward derivative in direction $\mu$. Note that there is nothing compelling in the choice for $\alpha$. In this work we make the same choice as in Ref. [21], which is slightly different from the one adopted in Ref. [15]: the corresponding gauge transformation does not lead, if iterated, to the Landau gauge. In NSPT the gauge transformation is expanded in powers of the coupling,

$$G(x) \;=\; \sum_{k\ge 1} g^k\, G^{(k)}(x) \qquad (14)$$

and the transformation in Eq. (12) is implemented order by order in perturbation theory.

The combined step for the integrator adopted in this work can be summarised as

$$U'_{x,\mu} \;=\; e^{-F_{x,\mu}}\; U_{x,\mu}(t) \qquad (15a)$$

$$U_{x,\mu}(t+\tau) \;=\; e^{G'(x)}\; U'_{x,\mu}\; e^{-G'(x+\hat\mu)} \qquad (15b)$$

where all the terms are expanded in powers of $g$, and the perturbative components are updated accordingly.

#### Runge-Kutta integrator

Higher order integrators, in particular Runge-Kutta schemes, have been used for the lattice version of the Langevin equation since the early days [17]. A new, very effective second-order integration scheme for NSPT in lattice gauge theories has been introduced in Ref. [12]. While we have tested Runge-Kutta schemes ourselves for pure gauge NSPT simulations, in this work we adhere to the simpler Euler scheme: when making use of the (standard) stochastic evaluation of the fermionic equations of motion (see Sect. 4), Runge-Kutta schemes are actually more demanding (extra terms are needed [23, 24]).

## 3 Twisted boundary conditions and smell

When a theory is defined in finite volume, the fields can be required to satisfy any boundary conditions that are compatible with the symmetries of the action. We adopt twisted boundary conditions (TBC) [25] in order to remove the zero mode of the gauge field and have an unambiguous perturbative expansion, which is not plagued by toron vacua [26]. The gauge fields undergo a constant gauge transformation when translated by a multiple of the lattice size; therefore twisted boundary conditions in direction $\nu$ are

$$U_\mu(x + L\hat\nu) \;=\; \Omega_\nu\, U_\mu(x)\, \Omega_\nu^\dagger \qquad (16)$$

where the $\Omega_\nu$ are a set of constant matrices satisfying

$$\Omega_\mu\, \Omega_\nu \;=\; e^{2\pi i\, n_{\mu\nu}/N_c}\; \Omega_\nu\, \Omega_\mu \qquad (17)$$

Fermions in the adjoint representation can be introduced in a straightforward manner; the boundary conditions with the fermionic field in the matrix representation read

$$\psi(x + L\hat\nu) \;=\; \Omega_\nu\, \psi(x)\, \Omega_\nu^\dagger \qquad (18)$$

The inclusion of fermions in the fundamental representation is not straightforward; indeed, the gauge transformation for the fermions when translated by a multiple of the lattice size reads

$$\psi(x + L\hat\nu) \;=\; \Omega_\nu\, \psi(x) \qquad (19)$$

leading to an ambiguous definition of the translated field: translating first in direction $\mu$ and then in direction $\nu$, or in the opposite order, yields results that differ by a phase, since the $\Omega$ matrices do not commute. An idea to overcome this
problem, proposed in Ref. [27] and implemented e.g. in
Ref. [28], is to introduce a new quantum number so that
fermions exist in different copies, or *smells*, which transform
into each other according to the antifundamental representation of
$SU(N_c)$. The theory has a new global symmetry, but physical
observables are singlets under the smell group. Thus, configurations
related by a smell transformation are equivalent, and in finite volume
we are free to substitute Eq. (19) with

$$\psi(x + L\hat\nu) \;=\; \Omega_\nu\, \psi(x)\, \left(\Omega^s_\nu\right)^\dagger \qquad (20)$$

where $\Omega^s_\nu$ acts on the smell index. It is useful to think of the fermion field as a matrix in colour-smell space. If the transformation matrices in smell space satisfy the same relations as in Eq. (17) (in particular we choose them to be equal to the $\Omega_\nu$s), then twisted boundary conditions are well-defined.

It is worth pointing out that, through a change of variables in the path integral [29, 30], twisted boundary conditions could be equivalently implemented by multiplying particular sets of plaquettes in the action by suitable elements of the centre of the group and considering the fields to be periodic. This change of variables works only for the pure gauge theory or for fermions in the adjoint representation. Thus, the explicit transformation of Eq. (20) is required when fermions in the fundamental representation with smell are considered.

## 4 Fermions in NSPT

If $S_F = \bar\psi D \psi$ is the action of a single fermion, then dynamical fermions in NSPT can be included thanks to a new term in the drift, as shown in Refs. [17, 31]: the determinant arising from $N_f$ degenerate fermions can be rewritten as

$$\left(\det D\right)^{N_f} \;=\; e^{\,N_f\, \mathrm{Tr} \ln D} \qquad (21)$$

and $-N_f\,\mathrm{Tr}\ln D$ can be taken into account by adding it to the gauge action. From the Lie derivative of the additional term, and recalling that a rescaled time step is used in the Euler update, we obtain the new contribution

$$F^{\rm f}_{x,\mu} \;=\; -\,i\,\frac{\tau}{\beta}\, N_f\; \mathrm{Tr}\!\left[ \left( \nabla_{x,\mu} D \right) D^{-1} \right] \qquad (22)$$

to be added to the pure gauge drift. It is important to note that the trace is purely real because the Wilson operator is $\gamma_5$-Hermitian and the staggered operator is antihermitian: this is consistent with the drift being an element of the algebra. The trace can be evaluated stochastically: Eq. (22) is replaced by

$$F^{\rm f}_{x,\mu} \;=\; -\,i\,\frac{\tau}{\beta}\, N_f\; \mathrm{Re}\left[\, \xi^\dagger \left( \nabla_{x,\mu} D \right) D^{-1} \xi \,\right] \qquad (23)$$

thanks to the introduction of a new complex Gaussian noise $\xi$ satisfying ^{3}^{3}3
Obviously $\xi$ does not have any Dirac structure in the staggered case. The noise can be built from the independent generation of real and imaginary parts with zero mean and variance $1/2$.

$$\left\langle \xi_x\, \xi_y^\dagger \right\rangle \;=\; \delta_{xy} \qquad (24)$$

The real part must be enforced, otherwise the dynamics would lead the links out of the group, since the drift would be guaranteed to be in the algebra only on average. In NSPT, the Dirac operator inherits a formal perturbative expansion from the links, $D = \sum_{k\ge 0} g^k D^{(k)}$, so the inverse can be computed efficiently from the knowledge of the inverse free operator via the recursive formula

$$\left( D^{-1} \right)^{(0)} \;=\; \left( D^{(0)} \right)^{-1} \qquad (25a)$$

$$\left( D^{-1} \right)^{(k)} \;=\; -\left( D^{(0)} \right)^{-1} \sum_{j=1}^{k} D^{(j)} \left( D^{-1} \right)^{(k-j)} \qquad (25b)$$

The inverse of the free operator is conveniently applied in Fourier space.
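The recursion of Eq. (25) can be sketched on a toy perturbative operator represented by explicit matrices (dimensions and truncation order are arbitrary choices of ours, and the operator entries are random stand-ins):

```python
import numpy as np

KMAX, N = 3, 6   # truncation order and (toy) operator dimension
rng = np.random.default_rng(2)

# toy perturbative operator: D^(0) invertible, higher orders random
D = rng.normal(size=(KMAX + 1, N, N))
D[0] += 5 * np.eye(N)   # make the free operator well conditioned

Dinv = np.zeros_like(D)
Dinv[0] = np.linalg.inv(D[0])          # Eq. (25a)
for k in range(1, KMAX + 1):           # Eq. (25b)
    acc = np.zeros((N, N))
    for j in range(1, k + 1):
        acc += D[j] @ Dinv[k - j]
    Dinv[k] = -Dinv[0] @ acc

# check: (D * Dinv)^(0) = 1 and (D * Dinv)^(k) = 0 for k > 0
for k in range(KMAX + 1):
    prod = sum(D[j] @ Dinv[k - j] for j in range(k + 1))
    target = np.eye(N) if k == 0 else np.zeros((N, N))
    assert np.allclose(prod, target)
```

In the actual simulation only the action of the free inverse on a vector is ever needed, which is why it can be applied cheaply in Fourier space.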

If fermions have smell, then the rescaling $N_f \to N_f/N_c$ is required in order to have $N_f$ flavours in the infinite-volume limit. In other words, this is the same as considering the $N_c$-th root of the determinant of the fermion operator. In principle such a rooted determinant could come from a nonlocal action, because twisted boundary conditions break the invariance under smell transformations. Nevertheless, this rooting procedure is sound since we know in advance that in the infinite-volume limit all the dependence on boundary conditions will be lost and the determinant will factorise as the fermion determinant of a single smell times the identity in smell space. It is also possible to show with arguments similar to those presented in Ref. [32] that, if the theory without smell is renormalisable, this operation leads to a perturbatively renormalisable theory as well. Below we describe in detail Wilson and staggered fermions in the fundamental representation, so we explicitly rescale $N_f \to N_f/N_c$. It is also important to remember that the fermion field, seen as a matrix in colour-smell space, is not required to be traceless, so its Fourier zero mode does not vanish: we require antiperiodic boundary conditions in the time direction in order not to hit the pole of the free propagator in the massless case. We avoid twisted boundary conditions in the time direction because in the massless case the free fermion propagator might develop a pole at some particular momenta.
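The stochastic trace evaluation underlying the fermion drift can be illustrated on a generic matrix: with complex Gaussian noise of unit covariance, the bilinear $\xi^\dagger M \xi$ averages to $\mathrm{Tr}\,M$. A sketch (matrix size and number of samples are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N, NSAMP = 8, 200000

# a generic complex matrix standing in for (grad D) D^{-1}
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# complex Gaussian noise with unit covariance: real and imaginary
# parts each have variance 1/2, so <xi_i xi_j*> = delta_ij
xi = (rng.normal(size=(NSAMP, N)) + 1j * rng.normal(size=(NSAMP, N))) / np.sqrt(2)

# per-sample bilinear xi^dagger M xi, then average over the samples
est = np.mean(np.einsum('si,ij,sj->s', xi.conj(), M, xi))
exact = np.trace(M)
assert abs(est - exact) < 0.1 * np.sqrt(N)   # agrees within statistical noise
```

In NSPT a single noise vector per update step is typically enough, since the Langevin average itself takes care of the remaining fluctuations.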

### 4.1 Wilson fermions

The Wilson Dirac operator and its Lie derivative are

$$D_W(x,y) \;=\; (m+4)\,\delta_{x,y} \;-\; \frac{1}{2} \sum_\mu \left[ (1-\gamma_\mu)\, U_\mu(x)\, \delta_{x+\hat\mu,y} + (1+\gamma_\mu)\, U_\mu^\dagger(y)\, \delta_{x-\hat\mu,y} \right] \qquad (26a)$$

$$\nabla^a_{z,\nu} D_W(x,y) \;=\; -\frac{i}{2} \left[ (1-\gamma_\nu)\, T^a U_\nu(x)\, \delta_{x,z}\, \delta_{x+\hat\nu,y} - (1+\gamma_\nu)\, U_\nu^\dagger(y)\, T^a\, \delta_{y,z}\, \delta_{x-\hat\nu,y} \right] \qquad (26b)$$

where the non-diagonal term has been expressed through

(27) |

We must give a perturbative structure to the mass to account for an additive mass renormalisation, see Sect. 5. The stochastic evaluation of the trace leads to

(28) |

where the fermion fields have been represented as matrices in colour-smell space. After taking the real part, the fermion drift can finally be written as

(29) |

In Appendix B the actual implementation of the fermion drift is described (only one of the two terms in the expression above is actually needed).

With the Fourier transform described in Appendix C, the inverse free Wilson operator with twisted boundary conditions is diagonal in momentum space and can be expressed as

$$\tilde D_W^{-1}(p) \;=\; \frac{ -i \sum_\mu \gamma_\mu\, \bar p_\mu + m(p) }{ \sum_\mu \bar p_\mu^2 + m(p)^2 }\,, \qquad \bar p_\mu = \sin p_\mu\,, \quad m(p) = m + \frac{1}{2}\, \hat p^2\,, \quad \hat p_\mu = 2 \sin \frac{p_\mu}{2} \qquad (30)$$
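The closed form of the free Wilson propagator can be checked numerically; the gamma-matrix representation and sign conventions below are one standard choice of ours (with $a=1$ and untwisted momenta) and need not coincide with those of Eq. (30):

```python
import numpy as np

# a Euclidean gamma-matrix representation built from Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.kron(s1, s) for s in (s1, s2, s3)] + [np.kron(s3, np.eye(2))]

def wilson_momentum(p, m):
    """Free Wilson operator in momentum space: i gamma.pbar + m(p)."""
    pbar = np.sin(p)
    mp = m + 0.5 * np.sum((2 * np.sin(p / 2)) ** 2)   # mass + Wilson term
    D = mp * np.eye(4, dtype=complex)
    for mu in range(4):
        D += 1j * pbar[mu] * gamma[mu]
    return D, pbar, mp

def wilson_propagator(p, m):
    """Closed-form inverse: (-i gamma.pbar + m(p)) / (pbar^2 + m(p)^2)."""
    D, pbar, mp = wilson_momentum(p, m)
    S = mp * np.eye(4, dtype=complex)
    for mu in range(4):
        S -= 1j * pbar[mu] * gamma[mu]
    return S / (np.sum(pbar ** 2) + mp ** 2)

p = np.array([0.3, -0.7, 1.1, 0.2])   # arbitrary non-zero momentum
D, _, _ = wilson_momentum(p, 0.0)
assert np.allclose(D @ wilson_propagator(p, 0.0), np.eye(4))
```

The Wilson term $m(p)$ keeps the massless denominator non-zero at all non-vanishing lattice momenta, which is exactly why antiperiodic (or twisted) boundary conditions suffice to avoid the free pole.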

### 4.2 Staggered fermions

We implemented staggered fermions in NSPT for the first time. The staggered field has no Dirac structure and describes four physical fermions in the continuum limit. Therefore, we rescale $N_f \to N_f/4$, and the staggered operator is understood to be rooted when the number of flavours is not a multiple of four. The staggered Dirac operator and its Lie derivative are

$$D_{\rm st}(x,y) \;=\; m\,\delta_{x,y} \;+\; \frac{1}{2} \sum_\mu \eta_\mu(x) \left[ U_\mu(x)\, \delta_{x+\hat\mu,y} - U_\mu^\dagger(y)\, \delta_{x-\hat\mu,y} \right] \qquad (31a)$$

$$\nabla^a_{z,\nu} D_{\rm st}(x,y) \;=\; \frac{i}{2}\, \eta_\nu(x) \left[ T^a U_\nu(x)\, \delta_{x,z}\, \delta_{x+\hat\nu,y} + U_\nu^\dagger(y)\, T^a\, \delta_{y,z}\, \delta_{x-\hat\nu,y} \right] \qquad (31b)$$

where the non-diagonal term has been expressed through

(32) |

and $\eta_\mu(x) = (-1)^{x_1 + \dots + x_{\mu-1}}$ is the staggered phase. The stochastic evaluation of the trace is analogous to the Wilson fermion case and Eq. (28) becomes

(33) |

with the analogous staggered definitions of the fields, leading to the final expression

(34) |

Again, the actual implementation of the staggered drift is shown in Appendix B.

With the Fourier transform described in Appendix C, the inverse free staggered operator with twisted boundary conditions is found to be

(35) |

where $\bar\delta$ is the periodic Kronecker delta. The propagator is not diagonal in momentum space because the action depends explicitly on the position through the staggered phases, but it is simple enough to avoid a complete matrix multiplication over all the degrees of freedom. If we aim to compute $D^{-1}\phi$ for some field $\phi$ in momentum space, it is useful to represent momenta as matrices with indices defined at each site (see again Appendix C). Then the non-diagonal terms become diagonal when shifting iteratively in momentum space. Incidentally, we must take $L$ to be even, so that at the same time the momentum shift is well defined and (in the massless case) no spurious pole is hit when Eq. (35) is evaluated in finite volume: this stems from the fact that the staggered action is only invariant under translations of two lattice spacings, so that twisted boundary conditions would be inconsistent for odd $L$.

## 5 The critical mass of Wilson fermions

The inverse of the Wilson fermion propagator in momentum space can be expressed as

$$S^{-1}(p, m, g) \;=\; i \sum_\mu \gamma_\mu\, \bar p_\mu \;+\; m(p) \;-\; \Sigma(p, m, g)\,, \qquad \bar p_\mu = \frac{1}{a} \sin (a p_\mu)\,, \quad m(p) = m + \frac{a}{2}\, \hat p^2 \qquad (36)$$

where $\hat p_\mu = \frac{2}{a} \sin(a p_\mu/2)$, and $\Sigma(p,m,g)$ is the self energy. In this section the lattice spacing $a$ is written explicitly. Wilson fermions are not equipped with chiral symmetry when the bare mass vanishes: the self energy at zero momentum is affected by a power divergence $\propto 1/a$, which has to be cured by an additive renormalisation. In an on-shell renormalisation scheme, the critical value of the bare mass, $m_{\rm cr}$, for which the lattice theory describes massless fermions, is given by the solution of

$$\left. S^{-1}(p, m_{\rm cr}, g) \right|_{p=0} \;=\; 0 \qquad (37)$$

As observed in Ref. [33], this prescription matches the one obtained by requiring the chiral Ward identity to hold in the continuum limit. Expanding Eq. (37) defines the critical mass order by order in perturbation theory. The perturbative expansion of the inverse propagator is

(38) |

where we have indicated explicitly the dependence of the coefficients on the bare mass $m$. These functions are matrices in Dirac space; since we are interested in the small-momentum region, where the relevant contribution is proportional to the identity, we treat them as scalar functions: a projection onto the identity is understood. Plugging the perturbative expansion of the critical mass

$$m_{\rm cr} \;=\; \sum_{n \ge 1} g^{2n}\, m_{\rm cr}^{(n)} \qquad (39)$$

into Eq. (38) results in

(40) |

where the dependence of the coefficients on the expansion of the critical mass has been made explicit: the coefficient at a given order depends only on the lower-order coefficients $m_{\rm cr}^{(j)}$. Therefore, the renormalisation condition in Eq. (37) becomes order by order

(41) |

For illustration, we can compute the recursive solution of Eq. (37) at first and second order in the expansion in powers of $g^2$, which yields

$$m_{\rm cr}^{(1)} \;=\; \Sigma^{(1)}(0, 0) \qquad (42a)$$

$$m_{\rm cr}^{(2)} \;=\; \Sigma^{(2)}(0, 0) \;+\; m_{\rm cr}^{(1)}\, \left. \frac{\partial \Sigma^{(1)}}{\partial m}(0, m) \right|_{m=0} \qquad (42b)$$

Both results are familiar from analytical calculations of the critical mass. The first equation encodes the fact that the mass counterterm at first order in perturbation theory is given by the one-loop diagrams computed at zero bare mass. The second equation states that the second-order correction is given by summing two-loop diagrams evaluated at vanishing bare mass, and one-loop diagrams with the insertion of the counterterm, see e.g. Ref. [34].
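The recursive determination can be sketched numerically with invented self-energy coefficients (the functions `sigma1` and `sigma2` below are hypothetical placeholders, not lattice results):

```python
# first- and second-order critical mass from (hypothetical) self-energy
# coefficients Sigma^(k)(p=0, m), following the recursion described above
def sigma1(m):
    return -0.27 + 0.11 * m        # invented one-loop coefficient

def sigma2(m):
    return -0.04 + 0.02 * m        # invented two-loop coefficient

m1 = sigma1(0.0)                   # one loop at zero bare mass

h = 1e-6                           # numerical derivative d Sigma^(1)/dm at m = 0
dsigma1 = (sigma1(h) - sigma1(-h)) / (2 * h)

# two loops at zero bare mass plus the one-loop counterterm insertion
m2 = sigma2(0.0) + m1 * dsigma1

assert abs(m1 - (-0.27)) < 1e-12
assert abs(m2 - (-0.0697)) < 1e-6
```

The same pattern continues at higher orders: each new coefficient is read off once all the lower-order ones have been fed back into the simulation.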

It should also be noted that, when working in finite volume, momenta are quantised. Unless periodic boundary conditions are used, $p=0$ is not an allowed value for the momentum of the states in a box. Therefore, condition (37) can only be imposed after extrapolating the inverse propagator to vanishing momentum. The detailed implementation is discussed below in Sect. 5.1.

Critical masses have been computed analytically up to two loops [34, 35], and in NSPT at three and four loops [36, 37]. High-order perturbation theory with massless Wilson fermions requires the tuning of the critical mass at the same order in the coupling, and it is possible to determine this renormalisation using NSPT. Let us illustrate the strategy in detail. We begin by collecting configurations for different time steps of the stochastic process; for each configuration the gauge is fixed to the Landau gauge [38, 39]. The propagator at momentum $p$ is computed by applying the inverse Dirac operator to a point source in momentum space,

(43) |

For each simulation at a given value of $\tau$, the error bars are computed as detailed in Appendix D. The propagator with periodic boundary conditions is a (diagonal) matrix in colour and momentum space and has a Dirac structure; it is important to stress again that with TBC there is no longer a colour structure and the momentum has a finer quantisation. The average over all the configurations gives the Monte Carlo estimate of the propagator. We can now extrapolate the stochastic time step to zero and invert the propagator to obtain the inverse propagator. Finally, the inverse propagator is projected onto the identity in Dirac space. All these operations are performed order by order in perturbation theory, keeping in mind that, after the measurement of the propagator, all perturbative orders odd in $g$ are discarded, since the expansion in powers of $g$ rather than $g^2$ is an artefact of NSPT. The errors can be estimated by bootstrapping the whole procedure.

The outcome of this process is the measurement of the functions appearing in Eq. (40). The renormalisation condition in Eq. (41) must then be imposed: this can be done iteratively, one order after the other. When all the coefficients of the critical mass up to some order are included in the simulation, all the functions up to that order extrapolate to zero; from the first non-vanishing one we can read the next coefficient. In order to move on and compute the following coefficient of the critical mass, a new set of configurations where the newly determined coefficient is taken into account must be generated.

The procedure we described is well defined and theoretically clean, since it clarifies the status of our critical mass as a perturbative additive renormalisation: once it is plugged in at a given order, the renormalised mass turns out to be zero at the prescribed order. On the other hand, it is not at all the only possible procedure. The prescription of the authors of Ref. [20] is to expand the solution of the stochastic process both in the coupling and in the mass counterterm. This is in the same spirit as Ref. [40]: the solution of the stochastic process can be expanded in more than one parameter and, once a precise power counting is in place, the resulting hierarchy of equations can be exactly truncated at any given order. There are pros and cons for both approaches, i.e. the one we followed and the double expansion. The latter can provide a better handle on estimating errors due to the critical mass value; on the other hand, it is expected to be numerically more demanding. All in all, we did not push Wilson fermions to very high orders: moving to the staggered formulation was by far the most natural option for the purpose of this work.

### 5.1 Zero-momentum extrapolation and valence twist

Since in finite volume it is possible to measure the propagator only for
discretised non-zero momenta, the data need to be extrapolated to zero
momentum using a suitable functional form. The strategy adopted in the
literature – see for example Eqs. (13) and (14) in
Ref. [37] – is based on expanding the quantities of
interest in powers of the momentum $p$. In the infinite-volume limit, such an expansion
leads to a hypercubic symmetric Taylor expansion composed of invariants in $p$,
logarithms of $p^2$ and ratios of invariants; an explicit one-loop computation
is shown e.g. in Eq. (24) of Ref. [41]. The
ratios and the logarithms arise because we are expanding a nonanalytic function
of the lattice spacing: infrared divergences appear when expanding the
integrands in $p$. On the other hand, working consistently in finite volume
does not cause any infrared divergence: the expressions will be just sums of ratios of trigonometric functions,
which we can expand in $p$, obtaining simply a combination of polynomial lattice
invariants ^{4}^{4}4Expanding in $p$ and sending the lattice size to infinity
are operations that do not commute; in particular this gives rise to different
series in the finite- and infinite-volume cases..

Still, this is not enough for a reliable extrapolation to vanishing momenta. In order to understand better the range of momenta that allow a reliable extrapolation, we computed the one-loop self energy in twisted lattice perturbation theory (see Appendix E). As a cross-check of our calculation we verified that the extrapolated value is gauge-invariant (this result must be true at all orders because of the gauge invariance of the pole mass [42]). It can be seen from the analytic expansion that even the lowest momentum allowed on our finite-size lattices is far from the convergence region of this series, and this happens even for reasonably big lattices. In order to increase the range of available momenta, we use $\theta$-boundary conditions [43] for the valence fermions,

$$\psi(x + L\hat\nu) \;=\; e^{i\theta_\nu}\, \psi(x) \qquad (44)$$

thereby reaching momenta which are within the convergence radius of the $p$-expansion. The hypercubic series becomes just a polynomial in the single non-vanishing momentum component once all the other components are set to zero.
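The extrapolation step can be sketched as follows, with synthetic data standing in for the measured values at the low momenta made accessible by the valence phases (all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic low momenta along a single direction, as reachable via
# theta-boundary phases; all other momentum components vanish
p = np.array([0.05, 0.08, 0.12, 0.16, 0.21])
p2 = p ** 2

true_intercept = -0.31                       # invented zero-momentum value
data = true_intercept + 0.8 * p2 + rng.normal(0, 1e-4, p.size)

coeffs = np.polyfit(p2, data, deg=1)         # linear fit in p^2
gamma0 = coeffs[-1]                          # value extrapolated to p = 0
assert abs(gamma0 - true_intercept) < 5e-3
```

In the actual analysis the fit is repeated on bootstrap samples, so that the statistical error of the intercept is propagated consistently.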

The agreement between the data and the analytic finite-volume calculations can be seen in Figure 1. It is worthwhile to emphasise that measuring such low momenta requires a careful analysis of the thermalisation. At the lowest order we can check directly when the measurements agree with the theoretical predictions. At higher orders, it is necessary to wait until the statistical average has clearly stabilised, as shown in Figure 2. This kind of analysis is computationally intensive: in the case at hand, we performed a large number of lattice sweeps, saving one propagator at a fixed interval of sweeps. The initial configurations have been discarded in the analysis.

### 5.2 A first attempt for high-order critical mass for SU(3),

We determined the first coefficients of the critical mass for and on a lattice with twisted boundary conditions on a plane. The twist matrices are

(45) |

corresponding to . Configurations are
collected at three different time steps, , , . Because
the volume and the number of colours are larger than in the earlier test of
Figure 1, it is computationally too expensive to replicate the
same statistics at all orders: we settled for sweeps at the
smallest , measuring the propagator every sweeps. At larger time steps, we rescale these numbers so as to keep the product constant.
The propagator is measured at the smallest available momentum, which has
in the time component and vanishes elsewhere; we choose three
different values for the phase of the valence twist,
, , . Extrapolations to zero momentum are performed
using a linear fit in . The analysis is repeated on different subsets of the data^5 to estimate systematic errors. The total error is the sum in quadrature of half the spread of the different fits around the central value and the largest error among the fits.

^5The different subsets are built by varying the number of initial configurations excluded from the analysis and by rejecting data at different rates.
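The total-error prescription can be sketched as follows; a minimal illustration under the stated prescription, where `fit_values` and `fit_errors` are hypothetical containers for the central values and errors of the subset fits.

```python
import numpy as np

def combined_error(fit_values, fit_errors):
    """Total error from repeating the fit on different data subsets:
    half the spread of the results, summed in quadrature with the
    largest individual fit error. (Sketch of the prescription above;
    the function and argument names are hypothetical.)"""
    spread = 0.5 * (np.max(fit_values) - np.min(fit_values))
    return float(np.hypot(spread, np.max(fit_errors)))
```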

The procedure described in Sect. 5.1, even though well defined, turns out to be numerically unstable at high orders. The number of propagators required to reach a clear plateau, like the ones shown in Figure 2, is beyond what can reasonably be collected with current NSPT implementations. Therefore, we decided to proceed with smaller statistics and to add a new systematic uncertainty to the extrapolated coefficients, as explained below. It must be emphasised that once a coefficient of the critical mass is determined, only its central value is used as input for the subsequent runs: even if we could collect enough statistics to reduce the error, that error would not enter the simulations. This makes the impact of the uncertainty of on and higher orders hard to assess; moreover, performing simulations for several values of each coefficient is not feasible. To be conservative, we adopted the following strategy. Once a critical mass is determined and inserted into the next-order simulation, the corresponding should extrapolate to zero. If it instead extrapolates to , we take as an estimate of the relative systematic error, to be added in quadrature to the determination of all the higher-order critical masses.
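The conservative strategy above amounts to a simple error inflation, which can be sketched as follows (a hypothetical illustration; `rel_shift` stands for the residual, in relative terms, of a lower-order extrapolation that should have vanished):

```python
import numpy as np

def inflate_higher_orders(errors, coeffs, rel_shift):
    """Add, in quadrature, a relative systematic 'rel_shift' to the
    errors of the higher-order coefficients. (Sketch of the strategy
    described above; names and signature are hypothetical.)"""
    errors = np.asarray(errors, dtype=float)
    coeffs = np.asarray(coeffs, dtype=float)
    return np.hypot(errors, abs(rel_shift) * np.abs(coeffs))
```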

Despite these instabilities, the lower-order results are close to the known coefficients (keeping in mind that we might be resolving finite-volume effects), as can be seen for example in Figure 3. We stopped the procedure at , when the errors started to dominate over the central value of the coefficient, see Figure 4. Our results are summarised in Table 1.

on | in infinite volume | |
---|---|---|
[34, 35] | ||
[36, 37] | ||
[37] | ||
- | ||
- | ||
- | ||

## 6 Perturbative expansion of the plaquette

Following Ref. [13], we define the average plaquette

(46) |

so that the value of ranges between 0, when all link variables are equal to the identity, and 1. The plaquette expectation value has the perturbative expansion

(47) |

where the coefficients are obtained from the Langevin process.

### 6.1 Simulation details

We run NSPT simulations of an gauge theory with massless staggered fermions in the fundamental representation, measuring the average plaquette after each Langevin update. Twisted boundary conditions are imposed on a plane, with twist matrices chosen as in Eq. (45). These simulations have mostly been run with the GridNSPT code on KNL and Skylake nodes provided by the Cambridge Service for Data Driven Discovery (CSD3); simulations on the smallest lattice have been run on the Skylake nodes of the Marconi system provided by CINECA in Bologna. The main features of our code are described in Appendix F. We simulate volumes up to order in the expansion of the links. Higher orders are switched on gradually, once the plaquette at lower orders has thermalised. Because of the instabilities discussed in Sect. 6.2, results are presented only up to the orders shown in Table 2. All simulations are run independently at three different time steps, and we have at least measurements of the largest order at the smallest time step. The length of the runs at larger time steps is rescaled so as to span approximately the same Langevin time history for all .
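The rescaling of the run lengths can be made explicit with a tiny (hypothetical) helper: keeping the total stochastic time fixed, the number of sweeps scales inversely with the time step.

```python
def sweeps_for_timestep(total_langevin_time, eps):
    """Number of Langevin sweeps covering the same total stochastic-time
    history at time step eps. (Hypothetical helper illustrating the
    rescaling of run lengths described above.)"""
    return round(total_langevin_time / eps)
```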

### 6.2 Numerical instabilities

The study of the NSPT hierarchy of stochastic processes is not trivial. While there are general results for the convergence of generic correlation functions of a finite number of perturbative components of the fields [44, 15], the study of variances is more involved, and many results can only come from direct inspection of the outcome of numerical simulations. In particular, one should keep in mind that in the context of (any formulation of) NSPT, variances are not an intrinsic property of the theory under study; in other words, they are not obtained as field correlators of the underlying theory. Big fluctuations, and correspondingly huge variances, were observed at (terrifically) high orders in toy models [44]: signals are plagued by spikes, and inspection shows that a fluctuation at a given order is reflected and amplified at higher orders. All in all, variances increase with the perturbative order (not surprisingly, given the recursive nature of the equations of motion). Moving to more realistic theories, a robust rule of thumb is that, as expected on general grounds, the larger the number of degrees of freedom, the less severe the problems with fluctuations. In particular, we have not yet found (nor has anyone else reported) serious problems with fluctuations in the computation of high orders in pure Yang-Mills theory.

We have now found that the introduction of fermions does cause instabilities at orders as high as the ones considered in this work. Once again, this effect can be tamed by working on increasingly large volumes. Once a fluctuation takes place, the restoring force should eventually bring the signal back around its average value, but in practice this mechanism is not always effective. At high orders the instabilities can be so frequent and large that the signal is actually lost, and the average value of the plaquette becomes negligible compared to its standard deviation, as illustrated in Figure 5. The order at which the signal is lost is pushed to higher values by increasing the volume, but eventually uncontrolled fluctuations dominate. Moreover, we find that spikes tend to occur more frequently at smaller . Roughly speaking, this is not surprising: at smaller time steps one has to perform a larger number of sweeps, thereby increasing the chances of generating large fluctuations when computing the force fields. The orders available at each volume and time step are listed in detail in Table 2.

### 6.3 Determination of the coefficients

The lowest coefficients have already been computed analytically. In particular, in twisted lattice perturbation theory we have that

(48) |

is volume independent [45]. The infinite-volume value of can be obtained by adding to the pure gauge contribution [46],

(49) |

the contribution due to staggered fermions [47],

(50) |

For the specific case , we find . We also
computed the fermion contribution to in twisted lattice
perturbation theory.^6 The finite-volume result is at , therefore we expect
finite-volume effects to be negligible on the lattices we are
employing. In particular, we improved the determination of in Eq. (50) by using the finite-volume calculation at
as the central value, and the variation between and as an estimate of its uncertainty, leading to for , and hence
for . When trying to extract and
from our data at , we realised that even effects in the
extrapolation must be considered, because of the very high precision of
the measurements. For these two coefficients, a dedicated study at
has been performed, which required new simulations at time steps and ; the agreement
with the analytic calculations is found to be excellent, see
Figure 6.

^6We are grateful to M. García Pérez and A. González-Arroyo for providing us with the gluon contribution in finite volume.

Therefore, and are set to their infinite-volume values and excluded from the analysis of the numerical simulations. The remaining orders are obtained from NSPT. The value of the plaquette at order and time step is computed from the average of the fields generated by the stochastic process, after discarding a number of thermalisation steps. The moving averages turn out to be stable, as can be seen in the two examples of Figure 8. In order to exploit all the available data, the thermalisation cut is set independently at each order. The covariance between and is computed taking into account autocorrelations and cross-correlations, as explained in detail in Appendix D. Clearly there is no correlation between different . In order to estimate the covariance when two orders have different thermalisation cuts, we take into account only the largest set of common values where both are thermalised. This pairwise estimation of the covariance matrix does not guarantee positive definiteness; we therefore rely on Higham's algorithm, described in Appendix G, to find the nearest positive definite covariance matrix. The procedure introduces some dependence on a tolerance . The extrapolation to vanishing time step is performed by minimising

(51) |

where the coefficients are the slopes of the combined linear fits. The fit results of interest are the values of the extrapolated plaquettes and their covariance matrix . In general, because of the available statistics and the intrinsic fluctuations of the observable, the lower-order values are measured more accurately than the higher-order ones; the same holds for the estimates of the entries of the covariance matrix. Since, in principle, the plaquette at a given order could be extracted without any knowledge of its higher-order values, we obtain the best estimate for a given by implementing the fit iteratively, increasing from to the maximum available order. At each iteration, we determine the order with the minimum number of measurements and rescale the entries of the covariance matrix so that there is a common normalisation ( in Eq. (83)) for all the matrix elements. In this way, all the data are exploited for the determination of the covariance of the process, and any non-positive definiteness of the covariance of the averages arises only from the presence of autocorrelations and cross-correlations. Higham's algorithm is then applied to restricted to orders. At this stage, minimising the allows us to extract with for . The tolerance of Higham's algorithm is tuned so that the covariance matrix is able to represent our data, i.e. so that the reduced chi-squared is close to . The combined fit also determines the plaquettes at orders lower than , which are always checked and found to agree, within errors, with their previous determination at smaller . An example of a correlation matrix extracted with this procedure is shown in Figure 8, where clear structures of correlated and anticorrelated coefficients are visible. The results of the combined extrapolations are summarised in Table 3.
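The two ingredients of this analysis, i.e. repairing the covariance matrix and the correlated linear extrapolation in the time step, can be sketched as follows. This is a simplified illustration: the positive-definiteness repair shown here is a plain eigenvalue clipping, standing in for the full Higham projection of Appendix G, and the fit is written for a single perturbative order, whereas the actual minimisation of Eq. (51) is combined across orders with cross-correlations.

```python
import numpy as np

def nearest_spd(cov, tol=1e-10):
    """Floor the spectrum of a symmetric matrix at 'tol' -- a simplified
    eigenvalue-clipping stand-in for the Higham projection of Appendix G."""
    sym = 0.5 * (cov + cov.T)              # enforce exact symmetry
    w, v = np.linalg.eigh(sym)
    return (v * np.clip(w, tol, None)) @ v.T

def extrapolate_to_zero_timestep(eps, means, cov):
    """Correlated (generalised least squares) linear fit
    P(eps) = p + c * eps for one perturbative order.
    Returns the fitted (p, c) and their 2x2 covariance."""
    X = np.column_stack([np.ones_like(eps), eps])   # intercept p, slope c
    cinv = np.linalg.inv(nearest_spd(cov))
    param_cov = np.linalg.inv(X.T @ cinv @ X)
    params = param_cov @ (X.T @ cinv @ means)
    return params, param_cov
```

The intercept `p` is the extrapolated plaquette coefficient, and the first diagonal entry of `param_cov` its variance.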