Efficient Bayesian inference for natural time series using ARFIMA processes

Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which not only potentially enhances their predictability but also hampers the detection of externally forced trends. Thus, it is important to identify reliably whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractionally integrated moving average (ARFIMA) model, which is widely used in time series analysis and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For the CET we also extend our method to seasonal long memory.


Introduction
Many natural processes are sufficiently complex that a stochastic model is essential, or at the very least an efficient description (Watkins, 2013). Such a process will be specified by several properties, of which a particularly important one is the degree of memory in a time series, often expressed through a characteristic autocorrelation time over which fluctuations decay in magnitude. In this paper, however, we are concerned with specific types of stochastic processes that are capable of possessing long memory (LM) (Beran, 1994a; Palma, 2007; Beran et al., 2013). Long memory is the notion of there being correlation between the present and all points in the past. A standard definition (Beran, 1994a; Palma, 2007; Beran et al., 2013) is that a (finite variance, stationary) process has long memory if its autocorrelation function (ACF) ρ(·) has power-law decay: ρ(k) ∼ c_ρ k^(2d−1) as k → ∞, for some non-zero constant c_ρ, and where 0 < d < 1/2. The parameter d is the memory parameter; if d = 0 the process does not exhibit long memory, while if −1/2 < d < 0 the process is said to be anti-persistent. The asymptotic power-law form of the ACF corresponds to an absence of a characteristic decay timescale, in striking contrast to many standard (stationary) stochastic processes, where the effect of each data point decays so fast that it rapidly becomes indistinguishable from noise. An example of the latter is the exponential ACF, where the e-folding timescale sets a characteristic correlation time. The study of processes that do possess long memory is important because they exhibit unusual properties, because many familiar mathematical results fail to hold, and because of the numerous examples of data sets where LM is seen.
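To make the contrast concrete, the two decay regimes can be written side by side; the spectral version of the LM condition below is the standard equivalent characterization near zero frequency (under mild regularity conditions; e.g., Beran, 1994a), not an additional assumption:

```latex
% Long memory: power-law ACF decay, equivalently a spectral pole at the origin
\rho(k) \sim c_\rho\, k^{2d-1} \;\; (k \to \infty)
\quad \Longleftrightarrow \quad
f(\lambda) \sim c_f\, \lambda^{-2d} \;\; (\lambda \to 0^{+}),
\qquad 0 < d < \tfrac{1}{2};
% versus short memory (e.g., AR(1)): exponential decay with e-folding time tau
\rho(k) = \mathrm{e}^{-k/\tau}.
```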
The study of long memory originated in the 1950s in the field of hydrology, where studies of the levels of the Nile (Hurst, 1951) demonstrated anomalously fast growth of the rescaled range of the time series. After protracted debates about whether this was a transient (finite time) effect, the mathematical pioneer Benoît B. Mandelbrot showed that if one retained the assumption of stationarity, novel mathematics would be essential to explain the Hurst effect. In doing so he rigorously defined the concept of long memory (Mandelbrot and Van Ness, 1968; Mandelbrot and Wallis, 1968).
Most research into long memory and its properties has been based on classical statistical methods, spanning parametric, semi-parametric, and non-parametric modeling (see Beran et al., 2013, for a review). Very few Bayesian methods have been studied, most probably due to computational difficulties. The earliest works are parametric and include Koop et al. (1997), Pai and Ravishanker (1998), and Hsu and Breidt (2003). If the computational challenges could be mitigated, the Bayesian paradigm would offer advantages over classical methods, including flexibility in the specification of priors (i.e., physical expertise could be used to elicit an informative prior). It would also offer the ability to marginalize out aspects of the model apparatus and data, such as short-memory or seasonal effects and missing observations, so that statements about long-memory effects can be made unconditionally.
Towards easing the computational burden, we focus on the autoregressive fractionally integrated moving average (ARFIMA) class of processes (Granger and Joyeux, 1980; Hosking, 1981) as the basis for developing a systematic and unifying Bayesian framework for modeling a variety of common time series phenomena, with particular emphasis on (marginally) detecting potential long-memory effects (i.e., averaging over short-memory and seasonal effects). ARFIMA has become very popular in statistics and econometrics because it is generalizable and because its connection to the autoregressive moving average (ARMA) family, and to fractional Gaussian noise, is relatively transparent. A key property of ARFIMA is its ability to simultaneously yet separately model long and short memory.
Here we present a Bayesian framework for the efficient and systematic estimation of the ARFIMA parameters. We provide a new approximate likelihood for ARFIMA processes that can be computed quickly for repeated evaluation on large time series, and which underpins an efficient Markov chain Monte Carlo (MCMC) scheme for Bayesian inference.
Our sampling scheme can best be described as a modernization of a blocked MCMC scheme proposed by Pai and Ravishanker (1998), adapting it to the approximate likelihood and extending it to handle a richer form of (known) short-memory effects. We then further extend the analysis to the case where the short-memory form is unknown, which requires trans-dimensional MCMC, in which the model order (the p and q parameters in the ARFIMA model) varies and, thus, so does the dimension of the problem. This aspect is similar to the work of Ehlers and Brooks (2008), who considered the simpler autoregressive integrated moving average (ARIMA) model class, and to Holan et al. (2009), who worked with a non-parametric long-memory process. Our contribution has aspects in common with Eğrioğlu and Günay (2010), who presented a more limited method focused on model selection rather than averaging. The advantage of averaging is that the unknown form of short-memory effects can be integrated out, focusing on long memory without conditioning on nuisance parameters.
The aim of this paper is to introduce an efficient Bayesian algorithm for the inference of the parameters of the ARFIMA(p,d,q) model, with particular emphasis on the LM parameter d. Our Bayesian inference algorithm has been designed in a flexible fashion so that, for instance, the innovations can come from a wide class of distributions, e.g., α-stable or t distributions (to be published in a companion paper). The remainder of the paper is organized as follows. Section 2 discusses the important numerical calculation of likelihoods, representing a hybrid between earlier classical statistical methods and our new contributions towards a fully Bayesian approach. Section 3 describes our proposed Bayesian framework and methodology in detail, focusing on long memory only. Then, in Sect. 4, we consider extensions for additional short memory and the computational techniques required to integrate it out. Empirical illustration and comparison of all methods is provided in Sect. 5 via synthetic and real data, including the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. In the case of the Nile data, we find strong evidence for long memory. The CET analysis requires a slight extension to handle seasonal long memory, and we find that the situation there is more nuanced in terms of evidence for long memory. The paper concludes with a discussion in Sect. 7 focused on the potential for further extension.

ARFIMA model
We provide here a brief review of the ARFIMA model. More details are given in Appendix A.
An ARFIMA(p,d,q) model for a process {X_t} with mean µ is given by

Φ(B) (1 − B)^d (X_t − µ) = Θ(B) ε_t,  (1)

where B is the backshift operator, defined by B X_t = X_{t−1}, with powers of B defined iteratively: B^k X_t = X_{t−k}. The polynomial Φ(B) = 1 − φ_1 B − ... − φ_p B^p is the autoregressive component, Θ(B) = 1 + θ_1 B + ... + θ_q B^q is the moving average component, and together they constitute the short-memory components of the ARFIMA model. These are defined in more detail in Appendix A and in Graves (2013).
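For readers who wish to experiment, realizations of such processes can be simulated in R with the fracdiff package (also used for one of the comparator estimators in Sect. 5); the parameter values below are arbitrary illustrations, not those used in the paper:

```r
library(fracdiff)  # provides fracdiff.sim() for ARFIMA simulation

set.seed(1)
## ARFIMA(1, d, 0) with d = 0.3 and phi_1 = 0.5 (illustrative values only)
x <- fracdiff.sim(n = 1024, ar = 0.5, d = 0.3)$series
acf(x, lag.max = 100)  # slow, power-law-like decay of the sample ACF
```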

Likelihood function
For now, we restrict our attention to a Bayesian analysis of an ARFIMA(0,d,0) process, having no short-ranged ARMA components (p = q = 0), placing emphasis squarely on the memory parameter d. As we explain in Appendix A, the resulting process is identical to a fractionally integrated process with memory parameter d.
Here we develop a new and efficient scheme for evaluating the (log-)likelihood via approximation. Throughout, the reader should suppose that we have observed the vector x = (x_1, ..., x_n) as a realization of a stationary, causal, and invertible ARFIMA(0,d,0) process {X_t} with mean µ ∈ R. The innovations will be assumed to be independent, and taken from a zero-mean location-scale probability density f(·; 0, σ, λ), which means the density can be written as f(x; δ, σ, λ) ≡ (1/σ) f((x − δ)/σ; 0, 1, λ). The parameters δ and σ are called the location and scale parameters, respectively. The m-dimensional λ is a shape parameter (if it exists, i.e., m > 0). A common example is the Gaussian N(µ, σ²), where δ ≡ µ and there is no λ. We classify the four parameters µ, σ, λ, and d into three distinct classes: (1) the mean of the process, µ; (2) the innovation distribution parameters, υ = (σ, λ); and (3) the memory structure, d. Together, ψ = (µ, υ, ω), where ω will later encompass the short-range parameters p and q.
Our proposed likelihood approximation uses a truncated autoregressive, AR(∞), approximation (cf. Haslett and Raftery, 1989). We first re-write the AR(∞) representation of ARFIMA(0,d,0) to incorporate the unknown parameter µ, dropping the (d) superscript on the coefficients for convenience: Σ_{k=0}^{∞} π_k (X_{t−k} − µ) = ε_t. Then we truncate this AR(∞) representation to obtain an AR(P) one, with P large enough to retain low-frequency effects, e.g., P = n. We denote Π_P = Σ_{k=0}^{P} π_k and, with π_0 = 1, rearrange terms to obtain the following modified model:

X_t = µ Π_P − Σ_{k=1}^{P} π_k X_{t−k} + ε_t.  (2)

It is now possible to write down a conditional likelihood. For convenience the notation x^k = (x_1, ..., x_k) for k = 1, ..., n will be used (and x^0 is interpreted as appropriate where necessary). Denote the unobserved P-vector of random variables (x_{1−P}, ..., x_{−1}, x_0) by x_A (in the Bayesian context these will be auxiliary, hence "A"). Consider the likelihood L(x|ψ) as a joint density, which can be factorized as a product of conditionals. Writing g_t(x_t|x^{t−1}, ψ) for the density of X_t conditional on x^{t−1}, we obtain L(x|ψ) = Π_{t=1}^{n} g_t(x_t|x^{t−1}, ψ). This is still of little use because the g_t may have a complicated form. However, by further conditioning on x_A, and writing h_t(x_t|x_A, x^{t−1}, ψ) for the density of X_t conditional on x^{t−1} and x_A, we obtain L(x|ψ, x_A) = Π_{t=1}^{n} h_t(x_t|x_A, x^{t−1}, ψ). Returning to Eq. (2), observe that, conditional on both the observed and unobserved past values, X_t is simply distributed according to the innovations' density f with a suitable change in location: X_t | x_A, x^{t−1} ∼ f(·; µ Π_P − Σ_{k=1}^{P} π_k x_{t−k}, σ, λ). Then, using the location-scale representation,

log L(x|ψ, x_A) = −n log σ + Σ_{t=1}^{n} log f((c_t − µ Π_P)/σ; 0, 1, λ), where c_t = Σ_{k=0}^{P} π_k x_{t−k}.  (3)

Evaluating this expression efficiently depends upon efficient calculation of c = (c_1, ..., c_n)^T and of log f(·). From Eq. (3), c is a convolution of the augmented data, (x, x_A), with coefficients depending on d, which can be evaluated quickly in the R language for statistical computing using convolve, which employs the fast Fourier transform (FFT). Consequently, evaluation of the likelihood conditional on x_A in the Gaussian case costs only O(n log n), a clear improvement over the exact method. Obtaining the unconditional likelihood requires marginalization over x_A, which is analytically infeasible. However, this conditional form will suffice in the context of our Bayesian inferential scheme, presented below.
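The following R sketch illustrates the approximate log-likelihood of Eq. (3) in the Gaussian case. The function and variable names are ours; stats::filter is used for the convolution for transparency, and the O(n log n) FFT route via convolve described above is a drop-in replacement:

```r
## Approximate ARFIMA(0,d,0) log-likelihood, conditional on the auxiliary
## pre-sample values x_aux = x_A (a minimal sketch, Gaussian innovations).
arfima0d0_loglik <- function(x, x_aux, d, mu, sigma) {
  P <- length(x_aux)
  ## pi_k coefficients of (1 - B)^d: pi_0 = 1, pi_k = pi_{k-1} (k - 1 - d) / k
  k <- seq_len(P)
  pi_coef <- c(1, cumprod((k - 1 - d) / k))
  x_aug <- c(x_aux, x)                     # augmented series (x_A, then x)
  ## c_t = sum_{k=0}^{P} pi_k x_{t-k}, as in Eq. (3)
  cc <- stats::filter(x_aug, filter = pi_coef, sides = 1)
  cc <- as.numeric(cc[(P + 1):length(x_aug)])   # keep t = 1, ..., n
  eps <- cc - mu * sum(pi_coef)            # residual innovations
  sum(dnorm(eps, sd = sigma, log = TRUE))  # = -n log sigma + sum log f(.)
}

## Example usage (illustrative):
## x <- fracdiff::fracdiff.sim(1024, d = 0.3)$series
## arfima0d0_loglik(x, x_aux = rep(mean(x), length(x)),
##                  d = 0.3, mu = mean(x), sigma = 1)
```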

A Bayesian approach to long-memory inference
We are now ready to consider Bayesian inference for ARFIMA(0,d,0) processes. Our method can be succinctly described as a modernization of the blocked MCMC method of Pai and Ravishanker (1998). Isolating parameters by blocking provides significant scope for modularization, which helps to accommodate our extensions for short memory. Pairing with efficient likelihood evaluations allows much longer time series to be entertained than ever before. Our description begins with the appropriate specification of priors, which are more general than previous choices, yet still encourage tractable inference. We then provide the relevant updating calculations for all parameters, including those for the auxiliary parameters x_A.
Update of d: updating the memory parameter d is far less straightforward than either µ or σ. Regardless of the innovations' distribution, the conditional posterior π_{d|ψ_{−d}}(d|ψ_{−d}, x) is not amenable to Gibbs sampling. We instead use random walk (RW) proposals from a truncated Gaussian, ξ_d ∼ N_{(a,b)}(d, σ_d²), whose density (Eq. 6) involves Φ^{(N)} and φ^{(N)}, the standard normal cumulative distribution function (CDF) and probability density function (PDF), respectively. In particular, we use ξ_d|d ∼ N_{(−1/2,1/2)}(d, σ_d²), implemented via rejection sampling from N(d, σ_d²) until ξ_d ∈ (−1/2, 1/2). Although this may seem inefficient, it is perfectly acceptable; for example, if σ_d = 0.5 the expected number of required variates is still less than 2, regardless of d. More refined methods of directly sampling from truncated normal distributions exist (see, for example, Robert, 1995), but we find little added benefit in our context.
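A minimal sketch of this proposal step (the function name is ours):

```r
## Truncated Gaussian RW proposal for d on (-1/2, 1/2) via rejection:
## draw from N(d, sd_d^2) until the draw falls inside the interval.
propose_d <- function(d, sd_d = 0.1) {
  repeat {
    xi <- rnorm(1, mean = d, sd = sd_d)
    if (xi > -0.5 && xi < 0.5) return(xi)
  }
}
```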
A useful cancellation in the proposal ratio q(d; ξ_d)/q(ξ_d; d), obtained from Eq. (6), simplifies the Metropolis–Hastings (MH) acceptance ratio. Denote ξ_{c_t} = Σ_{k=0}^{P} ξ_{π_k} x_{t−k} for t = 1, ..., n, where {ξ_{π_k}} are the AR coefficients implied by the proposed ξ_d; in the approximate case the acceptance ratio is then computed directly from these convolved quantities.

Optional update of x_A: when using the approximate likelihood method, one must account for the auxiliary variables x_A, a P-vector (e.g., P = n). We find that, in practice, it is not necessary to update all the auxiliary parameters at each iteration. In fact the method can be shown to work perfectly well, empirically, if we never update them, provided they are given a sensible initial value (such as the sample mean of the observed data x). This is not an uncommon tactic in the long-memory (big-n) context (e.g., Beran, 1994b); for further discussion refer to Graves (2013, Appendix C).
For a full-MH approach, we recommend an independence sampler that backward projects the observed time series. Specifically, first relabel the observed data: y_{−i} = x_{i+1}, i = 0, ..., n − 1; then use the vector (y_{−(n−1)}, ..., y_{−1}, y_0)^T to generate a new vector of length n, (Y_1, ..., Y_n)^T, where each Y_t is generated via Eq. (2), with the coefficients {π_k} determined by the current value of the memory parameter(s). Then take the proposed x_A, denoted ξ_{x_A}, as the reverse sequence: ξ_{x_{−i}} = y_{i+1}, i = 0, ..., n − 1. Since this is an independence sampler, calculation of the acceptance probability is straightforward: it is only necessary to evaluate the proposal density q(ξ_{x_A}|x, ψ), which is easy using the results from Sect. 2. For simplicity, we prefer a uniform prior for x_A.
Besides simplicity, justification for this approach lies primarily in its preservation of the autocorrelation structure; this is clear since the ACF is symmetric in time. The proposed vector has a low acceptance rate, but the potential remedies (e.g., multiple-try methods) seem unnecessarily complicated given the success of the simpler method.

Extensions to accommodate short memory
Simple ARFIMA(0,d,0) models are mathematically convenient but have limited practical applicability because the entire memory structure is determined by just one parameter, d. Although d is often of primary interest, it may be unrealistic to assume no short-memory effects. This issue is often implicitly acknowledged, since semi-parametric estimation methods, such as those used as comparators in Sect. 5.1, are motivated by a desire to circumvent the problem of specifying precisely (and inferring) the form of short memory (i.e., the values of p and q in an ARIMA model). Fully parametric Bayesian modeling of ARFIMA(p,d,q) processes represents an essentially untried alternative, primarily due to computational challenges. Related, more discrete alternatives show potential: Pai and Ravishanker (1998) considered all four models with p, q ≤ 1, whereas Koop et al. (1997) considered 16 with p, q ≤ 3.
Such approaches, especially ones allowing larger p and q, can be computationally burdensome, as much effort is spent modeling unsuitable processes towards a goal (inferring p and q) that is not of primary interest (d is). To develop an efficient, fully parametric, Bayesian method of inference that properly accounts for varying models, and to marginalize out these nuisance quantities, we use reversible-jump (RJ) MCMC (Green, 1995). We extend the parameter space to include the set of models (p and q), with chains moving between models (i.e., changing p and/or q) and within models (sampling φ and θ given particular fixed p and q), and focus on the marginal posterior distribution of d obtained by (Monte Carlo) integration over all models and the parameters therein. RJ methods, which mix so-called trans-dimensional, between-model moves with conventional within-model ones, have previously been applied to both autoregressive models (Vermaak et al., 2004) and full-ARMA models (Ehlers and Brooks, 2006, 2008). In the long-memory context, Holan et al. (2009) applied RJ to fractional exponential (FEXP) processes. However, for ARFIMA, the only related work we are aware of is by Eğrioğlu and Günay (2010), who demonstrated a promising if limited alternative.
Below we show how the likelihood may be calculated with extra short-memory components when p and q are known, and subsequently how Bayesian inference can be applied in this case. Then, the more general case of unknown p and q via RJ is described. The result is a Monte Carlo inferential scheme that allows short-memory effects to be marginalized out when summarizing inferences for the main parameter of interest: d, for long memory.

Likelihood derivation and inference for known short memory
Recall that the short-memory components of an ARFIMA process are defined by the AR and moving average (MA) polynomials, Φ(·) and Θ(·), respectively (see Sect. 2.1). Here, we distinguish between the polynomial, Φ(·), and the vector of its coefficients, φ = (φ_1, ..., φ_p). When the polynomial degree is required explicitly, bracketed superscripts will be used, e.g., Φ^{(p)}(·). We combine the short-memory parameters φ and θ with d to create a single memory parameter, ω = (φ, θ, d). For a given unit-variance ARFIMA(p,d,q) process, we denote its autocovariance (ACV) by γ_ω(·), with γ_d(·) and γ_{φ,θ}(·) those of the relevant unit-variance ARFIMA(0,d,0) and ARMA(p,q) processes, respectively. The spectral density function (SDF) of the unit-variance ARFIMA(p,d,q) process is written as f_ω(·), and its covariance matrix as Σ_ω.
An exact likelihood evaluation requires explicit calculation of the ACV γ_ω(·); however, there is no simple closed form for arbitrary ARFIMA processes. Fortunately, our proposed approximate likelihood method of Sect. 2 can be ported over directly. Given the coefficients {π_k^{(d)}} and the polynomials Φ and Θ, it is straightforward to calculate the {π_k^{(ω)}} coefficients required, by again applying the numerical methods of Brockwell and Davis (1991, Sect. 3.3).
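A sketch of this porting step is below. The helper name is ours, and the sign conventions assumed are Φ(B) = 1 − φ_1 B − ... − φ_p B^p and Θ(B) = 1 + θ_1 B + ... + θ_q B^q (cf. Brockwell and Davis, 1991):

```r
## AR(infinity) coefficients of an ARFIMA(p,d,q) process:
## pi^(omega)(B) = Phi(B) (1 - B)^d / Theta(B), truncated at lag P.
arfima_pi <- function(d, phi = numeric(0), theta = numeric(0), P = 1000) {
  k <- seq_len(P)
  pi_d <- c(1, cumprod((k - 1 - d) / k))     # coefficients of (1 - B)^d
  ## multiply by Phi(B): polynomial product via convolve()
  a <- stats::convolve(c(1, -phi), rev(pi_d), type = "open")[1:(P + 1)]
  ## divide by Theta(B): pi_j = a_j - sum_{i=1}^{min(j,q)} theta_i pi_{j-i}
  pi_w <- numeric(P + 1)
  for (j in 0:P) {
    s <- 0
    for (i in seq_len(min(j, length(theta)))) s <- s + theta[i] * pi_w[j - i + 1]
    pi_w[j + 1] <- a[j + 1] - s
  }
  pi_w
}
```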
To focus the exposition, consider the simple, yet useful, ARFIMA(1,d,0) model, where the full memory parameter is ω = (d, φ_1). Because the parameter spaces of d and φ_1 are independent, it is simplest to update each of these parameters separately: d with the methods of Sect. 3, and φ_1 similarly, with ξ_{φ_1}|φ_1 ∼ N_{(−1,1)}(φ_1, σ²_{φ_1}) for some σ²_{φ_1}. In practice, however, the posteriors of d and φ_1 typically exhibit significant correlation, so independent proposals are inefficient. One solution would be to reparametrize to some d* and orthogonal φ* aligned with the posterior correlation structure; this would ensure a reasonable acceptance rate and mixing.
To propose parameters in the manner described above, a two-dimensional, suitably truncated Gaussian random walk, with covariance matrix aligned with the posterior covariance, is required. To make proposals of this sort, and indeed for arbitrary ω in larger p and q cases, requires sampling from a hypercuboid-truncated multivariate normal (MVN), N_r^{(a,b)}(ω, Σ_ω), where (a, b) describe the coordinates of the hypercuboid. We find that rejection sampling based on unconstrained, identically parametrized MVN draws (e.g., using mvtnorm; Genz et al., 2012) works well, because in the RW setup the mode of the distribution always lies inside the hypercuboid. Returning to the specific ARFIMA(1,d,0) case, r = 2, b = (0.5, 1), and a = −b are appropriate choices. Calculation of the MH acceptance ratio A_ω(ω, ξ_ω) is straightforward; it simply requires numerical evaluation of the MVN CDF Φ_r^{(N)}(·; ·, Σ_ω), e.g., via mvtnorm, for the hypercuboid normalization terms, since the symmetric Gaussian kernel ratios cancel. We find that initial values φ^{[0]} chosen uniformly in C_1, i.e., the interval (−1, 1), and d^{[0]} from {−0.4, −0.2, 0, 0.2, 0.4} work well. Any choice of prior for ω can be made, although we prefer flat (proper) priors.
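A sketch of the hypercuboid rejection sampler (names ours; mvtnorm::rmvnorm provides the unconstrained draws):

```r
library(mvtnorm)

## Draw from the hypercuboid-truncated MVN N_(a,b)(omega, Sigma) by
## rejection; feasible because the RW mode omega lies inside (a, b).
propose_trunc_mvn <- function(omega, Sigma, a, b) {
  repeat {
    xi <- drop(rmvnorm(1, mean = omega, sigma = Sigma))
    if (all(xi > a & xi < b)) return(xi)
  }
}

## ARFIMA(1,d,0) case: omega = (d, phi_1), b = c(0.5, 1), a = -b
## xi <- propose_trunc_mvn(c(0.3, 0.4), diag(c(0.01, 0.01)),
##                         a = c(-0.5, -1), b = c(0.5, 1))
```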
The only technical difficulty is the choice of the proposal covariance matrix Σ_ω. Ideally, it would be aligned with the posterior covariance; however, this is not known a priori. We find that running a pilot chain with independent proposals via N_r^{(a,b)}(ω, σ²_ω I_r) can help choose a suitable Σ_ω. A rescaled version of the sample covariance matrix from the pilot posterior chain, following Roberts and Rosenthal (2001), works well (see Sect. 5.2).

Unknown short-memory form
We now expand the parameter space to include models M ∈ M, the set of ARFIMA models with p AR and q MA short-memory parameters, indexing the size of the parameter space Ω^{(M)}. For our trans-dimensional moves, we only consider adjacent models, on which we will be more specific later. For now, note that the choice of bijective function mapping between model spaces (whose Jacobian term appears in the acceptance ratio) is crucial to the success of the sampler. To illustrate, consider transforming from φ^{(p+1)} ∈ C_{p+1} down to φ^{(p)} ∈ C_p. This turns out to be a non-trivial problem, because (for p > 1) C_p has a very complicated shape. The most natural map would be (φ_1, ..., φ_p, φ_{p+1}) → (φ_1, ..., φ_p); however, there is no guarantee that the image will lie in C_p. Even if the model dimension is fixed, difficulties are still encountered: a natural proposal method would be to update each component of φ separately, but, because of the awkward shape of C_p, the allowable values for each component are a complicated function of the others. Non-trivial proposals are required.
A potential approach is to parametrize in terms of the inverse roots (poles) α_1, ..., α_p of Φ, as advocated by Ehlers and Brooks (2006, 2008): φ^{(p)} ∈ C_p ⟺ |α_i| < 1 for all i. This looks attractive because it transforms C_p into D^p = D × ... × D (p times), where D is the open unit disc, which is easy to sample from. But this method has serious drawbacks when we consider the RJ step. To decrease dimension, the natural map would be to remove one of the roots from the polynomial. But because it is assumed that Φ has real coefficients (otherwise the model has no realistic interpretation), any complex α_i must appear in conjugate pairs. There is then no obvious way to remove a root; a contrived method might be to remove a conjugate pair and replace it with a real root of the same modulus; however, it is unclear how this new polynomial is related to the original, and to other aspects of the process, like the ACV.

Reparametrization of Φ and Θ
We therefore propose reparametrizing Φ (and Θ) using the bijection between C_p and (−1, 1)^p advocated by various authors, e.g., Marriott et al. (1995) and Vermaak et al. (2004). To our knowledge, these methods have not previously been deployed towards integrating out short-memory components in a Bayesian analysis of ARFIMA processes. Monahan (1984) defined a mapping φ^{(p)} ⟷ ϕ^{(p)} recursively; in one direction, φ_k^{(k)} = ϕ_k and φ_i^{(k)} = φ_i^{(k−1)} − ϕ_k φ_{k−i}^{(k−1)} for i = 1, ..., k − 1, applied for k = 1, ..., p. If p = 1, the two parametrizations are the same, i.e., φ_1 = ϕ_1 (consequently the brief study of ARFIMA(1,d,0) in Sect. 4.1 fits within this framework). The equivalently parametrized form for θ is ϑ. The full memory parameter ω is parametrized over Ω = (−1/2, 1/2) × (the image of C_{p,q}). However, recall that in practice C_{p,q} will be assumed equivalent to C_p × C_q, so the parameter space is effectively Ω = (−1/2, 1/2) × (−1, 1)^{p+q}. Besides mathematical convenience, this bijection has a very useful property (cf. Kay and Marple, 1981), which helps motivate its use in defining RJ maps. If d = q = 0, using this parametrization for ϕ when moving between different values of p allows one to automatically propose processes that have very closely matching ACFs at low lags. In the MCMC context this is useful because it allows the chain to propose models that have a similar correlation structure to the current one. Although this property is nice, it may be of limited value for full-ARFIMA models, since the proof of the main result does not easily lend itself to the inclusion of either an MA or a long-memory component. Nevertheless, our empirical results similarly indicate a near match for full ARFIMA(p,d,q) models.
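A sketch of the ϕ → φ direction of this recursion (function name ours); the reverse direction is obtained by running the same recursion backwards:

```r
## Map Monahan-parametrized varphi in (-1,1)^p to AR coefficients phi in C_p.
## Levinson-Durbin-type recursion: phi_k^(k) = varphi_k and
## phi_i^(k) = phi_i^(k-1) - varphi_k * phi_{k-i}^(k-1), i = 1, ..., k-1.
monahan_to_ar <- function(varphi) {
  phi <- numeric(0)
  for (k in seq_along(varphi)) {
    phi <- c(phi - varphi[k] * rev(phi), varphi[k])
  }
  phi
}

monahan_to_ar(0.5)            # p = 1: parametrizations coincide, returns 0.5
monahan_to_ar(c(0.5, -0.3))   # p = 2: a stationary AR(2) coefficient vector
```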

Application of RJ MCMC to ARFIMA(p,d,q) processes
We now use this reparametrization to efficiently propose new parameter values. Firstly, it is necessary to propose a new memory parameter while keeping the model fixed. Attempts at updating each component individually suffer from the same problems of excessive posterior correlation that were encountered in Sect. 4.1. Therefore, the simultaneous update of the entire r = (p + q + 1)-dimensional memory parameter is performed using the hypercuboid-truncated Gaussian distribution defined above, ξ_ω|ω ∼ N_r^{H_r}(ω, Σ), where H_r defines the r-dimensional rectangle. The covariance matrix Σ is discussed in some detail below. The choice of prior p_Ω(·) is arbitrary. Pai and Ravishanker (1998) used a uniform prior for ω, which has an explicit expression in the ϕ-parametrization (Monahan, 1984). However, their expression is unnecessarily complicated, since a uniform prior over C_{p,q} holds no special interpretation once transformed. We therefore prefer a uniform prior over Ω: p_Ω(ω) ∝ 1, ω ∈ Ω. Now consider the between-model transition. We must first choose a model prior p_M(·). A variety of priors are possible; the simplest option would be a uniform prior over M, but this would of course be improper. We may in practice want to restrict the possible values of p and q to 0 ≤ p ≤ P and 0 ≤ q ≤ Q for some P, Q (say 5), which would render the uniform prior proper. However, even in this formulation, a lot of prior weight is put onto (larger) more complicated models that, in the interests of parsimony, might be undesired. As a simple representative of potential priors that give greater weight to smaller models, we prefer a truncated joint Poisson distribution with parameter λ: p_M(p, q) ∝ (λ^{p+q}/(p! q!)) I(p ≤ P, q ≤ Q). Now denote the probability of jumping from model M_{p,q} to model M_{p′,q′} by U_{(p,q),(p′,q′)}. U could allocate non-zero probability to every model pair, but for convenience we severely restrict the possible jumps (while retaining irreducibility) using a two-dimensional bounded birth-and-death process. Consider the subgraph of Z², G = {(p, q): 0 ≤ p ≤ P, 0 ≤ q ≤ Q}, and allocate uniform non-zero probability only to neighboring values, i.e., if and only if |p − p′| + |q − q′| = 1. Each point in the body of G has four neighbors, each point on the line boundaries has three, and each of the four corner points has only two. Therefore, the model transition probabilities U_{(p,q),(p′,q′)} are either 1/4, 1/3, 1/2, or 0. Now suppose the current (p + q + 3)-dimensional parameter is ψ^{(p,q)}, given by ψ^{(p,q)} = (µ, σ, d, ϕ^{(p)}, ϑ^{(q)}), using a slight abuse of notation. Because the mathematical details of the AR and MA components are almost identical, we consider only the case of decreasing/increasing p by 1 here; all of the following remains valid if p is replaced by q, and ϕ by ϑ. We therefore seek to propose a parameter ξ^{(p+1,q)} that is somehow based on ψ^{(p,q)}. We further simplify by regarding the other three parameters (µ, σ, and d) as having the same interpretation in every model, choosing ξ_µ = µ, ξ_σ = σ, and ξ_d = d; for simplicity we also set ξ_{ϑ^{(q)}} = ϑ^{(q)}. To specify a bijection, we match dimensions by adding a random scalar u. The most obvious map is to specify u so that its support is the interval (−1, 1), and then set ξ_{ϕ^{(p+1)}} = (ϕ_1, ..., ϕ_p, u). We either add or remove the final parameter, while keeping all others fixed with the identity map, so the Jacobian is unity. The proposal q(u|ψ^{(p,q)}) can be made in many ways; we prefer the simple U(−1, 1). A sketch of these model-move ingredients is given below.
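The model prior and neighbor structure just described are simple to express in code (a sketch; names ours):

```r
## Truncated joint Poisson model prior, log p_M(p, q) up to a constant
log_model_prior <- function(p, q, lambda = 1, P = 5, Q = 5) {
  if (p < 0 || q < 0 || p > P || q > Q) return(-Inf)
  (p + q) * log(lambda) - lfactorial(p) - lfactorial(q)
}

## Neighbors of (p, q) on the bounded grid G; the uniform jump
## probability U_{(p,q),(p',q')} is 1 / nrow(neighbors(p, q))
neighbors <- function(p, q, P = 5, Q = 5) {
  cand <- rbind(c(p - 1, q), c(p + 1, q), c(p, q - 1), c(p, q + 1))
  ok <- cand[, 1] >= 0 & cand[, 1] <= P & cand[, 2] >= 0 & cand[, 2] <= Q
  cand[ok, , drop = FALSE]
}
```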
With these choices, and with a unit Jacobian, the RJ acceptance ratio for a proposed move from ψ^{(p,q)} to ξ^{(p+1,q)} takes the standard form

A = min{1, [L(x|ξ^{(p+1,q)}) p_Ω(ξ) p_M(p+1, q) U_{(p+1,q),(p,q)}] / [L(x|ψ^{(p,q)}) p_Ω(ψ) p_M(p, q) U_{(p,q),(p+1,q)} q(u)]},

which applies (with the ratio inverted as appropriate) to both increasing and decreasing dimensional moves.

Construction of Σ
Much of the efficiency of the above scheme, including within- and between-model moves, depends on the choice of Σ ≡ Σ^{(p,q)}, the within-model RW proposal covariance matrix. We first seek an appropriate Σ^{(1,1)}, as in Sect. 4.1, with a pilot tuning scheme. That matrix is blocked out by the components d, ϕ_1, and ϑ_1 (each block a scalar), so that the idea extends to the (p, q) case in the obvious way, with, e.g., Σ_{ϑ,ϑ^{(q)}} a q × q matrix. If either (or both) of p, q = 0, then the relevant blocks are simply omitted. To specify the various sub-matrices, we propose ϕ_2, ..., ϕ_p with equal variances, and independently of d, ϕ_1, and ϑ_1 (and similarly for ϑ_2, ..., ϑ_q); in the blocked matrix the corresponding off-diagonal blocks are therefore row vectors of zeros, 0, and zero matrices, O. This choice of Σ_ϕ is conceptually simple, computationally easy, and preserves positive definiteness as required (see Graves, 2013).
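Concretely, one way to code this block extension (a sketch; names ours, with v an assumed common variance for the extra coefficients):

```r
## Extend a pilot-tuned covariance for (d, phi_1, theta_1) to the (p, q)
## case: extra coefficients phi_2..phi_p and theta_2..theta_q receive
## independent equal variances, giving a block-diagonal extension that
## preserves positive definiteness.
build_Sigma <- function(Sigma11, p, q, v = 0.01) {
  keep <- c(1, if (p > 0) 2, if (q > 0) 3)   # rows/cols for d, phi_1, theta_1
  core <- Sigma11[keep, keep, drop = FALSE]
  extra <- max(p - 1, 0) + max(q - 1, 0)
  if (extra == 0) return(core)
  rbind(cbind(core, matrix(0, nrow(core), extra)),
        cbind(matrix(0, extra, ncol(core)), diag(v, extra)))
}
```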

Empirical illustration and comparison
Here we provide empirical illustrations for the methods above: for classical and Bayesian analysis of long-memory models, and extensions for short memory. To ensure consistency throughout, the location and scale parameters will always be chosen as µ_I = 0 and σ_I = 1. Furthermore, unless stated otherwise, the simulated series will be of length n = 2^10 = 1024. This is a reasonable size for many applications; it is equivalent to 85 years of monthly observations. When using the approximate likelihood method, we set P = n.

Long memory
Standard MCMC diagnostics were used throughout to ensure, and tune for, good mixing. Because d is the parameter of primary interest, the initial values d^[0] will be chosen to systematically cover its parameter space, usually starting five chains at the regularly spaced points {−0.4, −0.2, 0, 0.2, 0.4}. Initial values for other parameters are not varied: µ starts at the sample mean of the observed series x, and σ at its sample standard deviation.

Efficacy of approximate likelihood method
We start with the null case; i.e., how does the algorithm perform when the data are not from a long-memory process? One hundred independent ARFIMA(0,0,0), or Gaussian white noise, processes are simulated, from which marginal posterior means, standard deviations, and credibility interval end points are extracted. Table 1 shows averages over the runs.
The average estimate for each of the three parameters is less than a quarter of a standard deviation away from the truth. Credibility intervals are nearly symmetric about the estimate, and the marginal posteriors are, to a good approximation, locally Gaussian (not shown). Upon applying a proxy credible-interval-based hypothesis test, one would conclude in 98 of the cases that d = 0 could not be ruled out. A similar analysis for µ and σ shows that the hypotheses µ = 0 and σ = 1 would each have been accepted 96 times. These results indicate that the 95 % credibility intervals are approximately correctly sized.
Next, consider the more interesting case of d_I ≠ 0. We repeat the above experiment, except that 10 processes are generated with d_I set to each of {−0.45, −0.35, ..., 0.45}, giving 100 series in total. Figure 1 shows a graphical analog of results from this experiment; the plot axes involve a Bayesian residual, the difference between the posterior mean estimate and the true value d_I. From the figure it is clear that the estimator for d is performing well. Figure 1a shows how tight the estimates of d are around the input value; recall that the parameter space for d is the whole interval (−1/2, 1/2). Moreover, Fig. 1b indicates that there is no significant change of posterior bias or variance as d_I is varied.
Next, the corresponding plots for the parameters σ and µ are shown in Fig. 2. We see from Fig. 2a that the behavior of the posterior for σ is largely unaffected by d_I. The situation is different, however, in Fig. 2b for the location parameter µ. Although the bias appears to be roughly zero for all d_I, the posterior variance is clearly affected by d_I. To ascertain the precise functional dependence, consider Fig. 2c, which shows, on a semi-log scale, the marginal posterior standard deviation of µ, σ_µ^(B), against d_I. It appears that σ_µ^(B) ∝ A^(d_I) for some constant A, which could be estimated via least-squares regression. Instead, however, inspired by asymptotic results in the literature concerning classical estimation of long-memory processes (Beran, 1994a), we set A = n and plotted the best-fitting such line (shown in Fig. 2c). Observe that, although not fitting exactly, the relation σ_µ^(B) ∝ n^(d_I) holds reasonably well for d_I ∈ (−1/2, 1/2). Indeed, Beran (1994a) motivated long memory in this way, and derived asymptotic consistency results for optimal (likelihood-based) estimators, finding that the standard error of µ is proportional to n^(d−1/2) while the standard errors of all other parameters are proportional to n^(−1/2).

Effect of varying time series length
We now analyze the effect of changing the time series length. For this we conduct a similar experiment, but fix d_I = 0 and vary n: for each of a range of dyadic lengths up to n = 2^14 = 16 384, 10 independent ARFIMA(0,0,0) time series are generated. The resulting posterior standard deviations are plotted against n (on a log-log scale) in Fig. 3.
Observe that all three marginal posterior standard deviations are proportional to 1/√n, although the posterior of µ is less reliable. Combining these observations with our earlier deduction that σ_µ^(B) ∝ n^(d_I), we conclude that, for an ARFIMA(0,d_I,0) process of length n, the marginal posterior standard deviations follow the asymptotic rates of Beran (1994a).

Comparison with common estimators
In many practical applications, the long-memory parameter is estimated using non- or semi-parametric methods. These may be appropriate in many situations where the exact form of the underlying process is unknown. However, when a specific model form is known (or at least assumed), they tend to perform poorly compared with fully parametric alternatives. Our aim here is to demonstrate, via a short Monte Carlo study involving ARFIMA(0,d,0) data, that our Bayesian likelihood-based method significantly outperforms other common methods in that case. We consider the following comparators: (i) rescaled adjusted range, or R/S (Hurst, 1951; Graves, 2013), using the R implementation in the FGN package (McLeod et al., 2007); (ii) the semi-parametric Geweke–Porter-Hudak (GPH) method (Geweke and Porter-Hudak, 1983), implemented in the R package fracdiff (Fraley et al., 2012); (iii) detrended fluctuation analysis (DFA), originally devised by Peng et al. (1994), in the R package PowerSpectrum (Vyushin et al., 2009); and (iv) wavelet-based semi-parametric estimators (Abry et al., 2003), available in the R package fARMA (Wuertz, 2012). Each of these four methods is applied to the same 100 time series with varying d_I as were used in the earlier experiments above, extending the idea of a residual, d_R, to each estimator. Observe that all four methods have a much larger variance than our Bayesian method, and moreover R/S is positively biased. Indeed, the bias in some cases appears to depend on d_I: R/S is significantly (i.e., > 0.25) biased for d_I < −0.3 but slightly negatively biased for d_I > 0.3 (not shown); DFA is only unbiased for d_I > 0; both the GPH and wavelet methods are unbiased for all d_I ∈ (−1/2, 1/2).
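As an illustration of one comparator, a GPH estimate can be obtained in a couple of lines (a hedged example; the simulated values are illustrative, not those of the study):

```r
library(fracdiff)

set.seed(42)
x <- fracdiff.sim(n = 1024, d = 0.3)$series  # an ARFIMA(0, 0.3, 0) realization
fdGPH(x)$d   # semi-parametric log-periodogram (GPH) estimate of d
```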

Extensions for short memory
Known form: we first consider the MCMC algorithm from Sect. 4.1 for sampling under an ARFIMA(1,d,0) model, where the full memory parameter is ω = (d, φ_1). Recall that the method involves proposals from a hypercuboid-truncated MVN using a pilot-tuned covariance matrix, and that it is a special case of the reparametrized method from Sect. 4.2.
In general, this method works very well; two example outputs are presented in Fig. 6, under two similar data-generating mechanisms. Figure 6a shows relatively mild posterior correlation (ρ = 0.21) compared with Fig. 6b, which shows strong correlation (ρ = 0.91). This differential behavior can be explained heuristically by considering the differing data-generating values. For the process in Fig. 6a, the short-memory and long-memory components exhibit their effects at opposite ends of the spectrum; see Fig. 7a. The resulting ARFIMA spectrum, with peaks at either end, makes it easy to distinguish between short- and long-memory effects, and consequently the posteriors of d and φ are largely uncorrelated. In contrast, the parameters of the second process express their behavior at the same end of the spectrum; with negative d, these effects partially cancel each other out, except very near the origin where the negative memory effect dominates (Fig. 7b). Distinguishing between the effects of φ and d is then much more difficult, and the posteriors are correspondingly highly correlated (Fig. 6b shows the joint posterior samples of (d, φ) with the 95 % credibility set for (d, φ) and the marginal 95 % credibility intervals for d and for φ).

Unknown form: the RJ scheme outlined in Sect. 4.2 works well for data simulated with p and q up to 3. The marginal posteriors for d are generally roughly centered around d_I (the data-generating value), and the modal posterior model probability is usually the correct one. To illustrate, consider again the two example data-generating contexts used above. For both series, kernel density estimates of the marginal posterior for d are plotted in Fig. 8a and b, together with the equivalent densities obtained via the reversible-jump algorithm with unknown model orders.
Notice how the densities obtained via the RJ method are very close to those obtained assuming p = 1 and q = 0. The former are slightly more heavy-tailed, reflecting a greater level of uncertainty about d. Interestingly, the corresponding plots for the posteriors of µ and σ do not appear to exhibit this effect; see Fig. 8c and d. The posterior model probabilities are presented in Table 2, showing that the correct modes are being picked up consistently. As a test of the robustness of the method, consider a complicated short-memory input combined with a heavy-tailed α-stable innovations distribution. Specifically, we use an ARFIMA(2,d,1) process, Eq. (10), with innovations ε_t ∼ S_{1.75,0}, i.e., a symmetric α-stable law with α = 1.75.
For more details, see Graves (2013, Sect. 7.1). The marginal posterior densities of d and α are presented in Fig. 9. Performance looks good despite the complicated structure. The posterior estimate for d is d^(B) = 0.22, with 95 % CI (0.04, 0.41). Although this interval is admittedly rather wide, it is reasonably clear that long memory is present in the signal. The corresponding interval for α is (1.71, 1.88), with estimate α^(B) = 1.79. Finally, we see from Table 3 that the algorithm is very rarely in the wrong model.

Observational data analysis
We conclude with the application of our method to two long data sets: the Nile water level minima and the CET. The Nile data are part of the R package "longmemo", and the CET time series can be downloaded from http://www.metoffice.gov.uk/hadobs/hadcet/.

The Nile data
Because of the fundamental importance of the Nile river to the civilizations it has supported, local rulers kept measurements of the annual maximal and minimal heights obtained by the river at certain points (called gauges). The longest uninterrupted sequence of recordings is from the Roda gauge (near Cairo), between AD 622 and 1284 (n = 663). These data are plotted in Fig. 10. We immediately observe the apparent low-frequency component of the data. The data appear to be on the "verge" of being stationary; however, the general consensus amongst the statistical community is that the series is stationary. The posterior summary statistics are presented in Table 5, density estimates of the marginal posteriors of d and µ are presented in Fig. 12, and the posterior model probabilities are presented in Table 6. We see that the model with the highest posterior probability is the ARFIMA(0,d,0) model with d ≈ 0.4. This suggests a strong, pure, long-memory feature. Our results compare favorably with other studies (Liseo et al., 2001; Hsu and Breidt, 2003; Ko and Vannucci, 2006a).
It is interesting to compare these findings with the other literature. Liseo et al. (2001) used a semi-parametric Bayesian method on the first 512 observations of the sequence and obtained an estimate for d of 0.278. Hsu and Breidt (2003) used a method similar to that of Pai and Ravishanker (1998) to estimate d (within an ARFIMA(0,d,0) model) at 0.416, with an approximate credibility interval of (0.315, 0.463). Ko and Vannucci (2006a), using wavelets, similarly found d^(B) = 0.379 with a credibility interval of (0.327, 0.427). Palma (2007) reported a credibility interval of (0.316, 0.475) from a Bayesian FEXP analysis. We note that the interpretation as persistence of the d ≈ 0.4 (H ≈ 0.9) value that we and others have obtained has been challenged by Kärner (2001). In his view the analysis should be applied to the increments of the level heights rather than the level heights themselves, giving an anti-persistent time series with a negative d value. The need for a short-range-dependent component that he argues for is, however, automatically included in the use of an ARFIMA model. Although ARFIMA was originally introduced in econometrics as a phenomenological model of LM, very recent progress has been made in statistics and physics on building a bridge between it and continuous-time linear dynamical systems (see, e.g., Slezak and Weron, 2015).
In conclusion, our findings agree with all published Bayesian long-memory results (except for the anomalous finding of Liseo et al., 2001). Moreover, these findings agree with numerous classical analyses (e.g., Beran, 1994a) that have found the best model fit to be an ARFIMA(0,d,0) model with d ≈ 0.4. We note that the negligibility of short memory here is a result of our data analysis method, rather than an a priori assumption.

Central England temperature
There is increasing evidence that surface air temperatures possess long memory (Gil-Alana, 2003; Bunde et al., 2014; Franzke, 2010, 2012), but long time series are needed to get robust results. The CET index is a famous measure of the monthly mean temperature in an area of southern-central England, dating back to 1659 (Manley, 1974). Given to a precision of 0.5 °C prior to 1699 and 0.1 °C thereafter, the index is considered to be the longest reliable known temperature record from station data. As expected, the CET exhibits a significant seasonal signal, at least some of which must be considered deterministic. Following the approach of Montanari et al. (2000), the index is first deseasonalized using the additive "STL" method (Cleveland et al., 1990); a sketch of this step is given below. This deseasonalized CET index is shown in Fig. 13.
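A sketch of the deseasonalization step in R, assuming cet is a numeric vector of monthly means starting January 1659 (variable names ours):

```r
## Additive STL decomposition of the monthly CET series and removal
## of the estimated seasonal component S(t) (Cleveland et al., 1990).
cet_ts <- ts(cet, start = c(1659, 1), frequency = 12)
fit <- stl(cet_ts, s.window = "periodic")
cet_deseas <- cet_ts - fit$time.series[, "seasonal"]
plot(cet_deseas)  # cf. the deseasonalized index in Fig. 13
```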
The estimated seasonal function S(t) that was removed is shown in Fig. 14a, and the spectrum of the deseasonalized process is shown in Fig. 14b, where D denotes the seasonal long-memory parameter. Notice that, in addition to the obvious spectral peak at the origin, there still remains a noticeable peak at the monthly frequency ω = π/6. However, there are no further peaks in the spectrum, which would appear to rule out a seasonal ARFIMA (SARFIMA) model. These observations therefore suggest that a simple two-frequency Gegenbauer(d,D;π/6)₂ process might be an appropriate model; see Appendix B for more details about seasonal long memory.
Applying this model, the marginal posterior statistics are presented in Table 7, and the joint posterior samples of (d, D) from this model are plotted in Fig. 15. These clearly indicate that both d and D are non-zero (albeit small in the case of D), suggesting the presence of long memory in both the conventional and the seasonal sense.
In order to compare these results with other publications', it is important to note that, to remove annual seasonality from the CET, the series of annual means is often used instead of the monthly series. This of course reduces the fidelity of the analysis. Hosking (1984) found (using rather crude estimation procedures) that the best-fitting model for the annual means of the CET was the ARFIMA(1,0.33,0) model with φ = 0.16. Pai and Ravishanker (1998) used the same series as test data for their Bayesian method; they fitted each of the ARFIMA models with p, q ≤ 1 and found that all models were suitable. Their estimates of d ranged from 0.24 for p = q = 0 to 0.34 for p = 0, q = 1. Of course, all these studies assume the time series is stationary and, in particular, has a constant mean. The validity of this assumption was considered by Gil-Alana (2003), who used formal hypothesis testing to consider models with a linear deterministic trend,

y_t = β_0 + β_1 t + X_t,  (11)

where {X_t} is an ARFIMA(0,d,0) process. For values of d = 0, 0.05, 0.10, 0.15, β_1 was found to be significantly non-zero (at about 0.23 °C per century), but for d ≥ 0.20 statistical significance was not found. Gil-Alana (2008) later extended this work by replacing the ARFIMA(0,d,0) process in Eq. (11) with a Gegenbauer(d;ω) process, obtaining similar results; however, the choice of ω was rather ad hoc, likely influencing the results. In order to consider the stationarity of the time series, we divided the series into four blocks of length 1024 months (chosen to maximize the efficiency of the fast Fourier transform) and analyzed each block independently. The posterior statistics for each block are presented in Table 8, with some results presented graphically in Fig. 16. It is interesting to note that the degree of (conventional) long memory is roughly constant over the last three blocks but appears to be larger in the first block. Of particular concern is that there is no value of d that is included in all four 95 % credibility intervals; this would suggest non-stationarity. Although this phenomenon may indeed have a physical explanation, it is more likely caused by the inhomogeneity of the time series. Recall that the first 50 years of the index are only given to an accuracy of 0.5 °C, compared with 0.1 °C afterwards; this lack of resolution clearly has the potential to bias in favor of strong autocorrelation when compared with later periods.
Interestingly, the seasonal long-memory parameter D has 95 % credibility intervals that include zero for both the second and third blocks. Finally, note that the 95 % credibility intervals for µ all include the range (9.314, 9.517); in other words, it is entirely credible that the mean is non-varying over the time period.

Conclusions
We have provided a systematic treatment of efficient Bayesian inference for ARFIMA models, the most popular parametric model combining long-and short-memory effects. Through a mixture of theoretical and empirical work we have demonstrated that our method can handle the sorts of time series data with possible long memory that we are typically confronted with.
Many of the choices made throughout, but in particular those leading to our likelihood approximation, stem from a need to accommodate further extension. For example, in future work we intend to extend them to cope with heavy-tailed innovation distributions. For more evidence of potential in this context, see Graves (2013, Sect. 7).
Finally, an advantage of the Bayesian approach is that it provides a natural mechanism for dealing with missing data, via data augmentation. This is particularly relevant for long historical time series, which may, for a myriad of reasons, have recording gaps. For example, some of the data recorded at other gauges along the Nile have missing observations although otherwise span a similarly long time frame. For a demonstration of how this might fit within our framework, see Sect. 5.6 of Graves (2013).
Appendix B: Seasonal long memory

Indeed, k-factor Gegenbauer models are very flexible, and include nearly all other seasonal variants of ARFIMA processes, such as the flexible-seasonal ARFIMA (Hassler, 1994) and fractional ARUMA (Robinson, 1994; Giraitis and Leipus, 1995) processes. Importantly, they also include SARFIMA processes (Reisen et al., 2006): a SARFIMA(0,d,0) × (0,D,0)_s process is equivalent to a (s+2)/2-factor Gegenbauer process with ω_j = 2π(j−1)/s for j = 1, ..., k, d_1 = (d+D)/2, and d_j = D for j = 2, ..., k, unless s is even, in which case d_k = D/2. Although k-factor Gegenbauer models are very general, one particular sub-model is potentially very appealing. This is the two-factor model, with one pole at the origin and one at a non-zero frequency. In order to conform with notation for ARFIMA(0,d,0) processes, we slightly redefine this model: a process {X_t} is a simple two-frequency Gegenbauer process with parameters d, D, and ω, denoted Gegenbauer(d,D;ω)₂, if for all t

(1 − B)^d (1 − 2 cos(ω) B + B²)^D (X_t − µ) = ε_t.

The Bayesian MCMC methodology developed here is easily extended to incorporate these seasonal fractional models. It is assumed that the frequency ω, or seasonal period s, is a priori known.