Regularization Destriping of Remote Sensing Imagery


Abstract. We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, spatially weighting the regularization term to preserve the other features of the image during the destriping process. The target functional penalizes "the neighborhood of stripes" (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.

Introduction
Striping is a persistent artifact in remote sensing images and is particularly pronounced in visible-near infrared (VNIR) water-leaving radiance products such as those produced by operational sensors including NPP VIIRS, Landsat 8 Operational Land Imager (OLI), and Geostationary Ocean Color Imager (GOCI), as well as airborne instruments such as NASA's JPL PRISM sensor. These sensors cover a temporal sampling range from daily (VIIRS) to hourly (GOCI) and spectral sampling from multi-spectral (VIIRS, GOCI) to hyperspectral (PRISM). Striping is pronounced in products from all these sensors because atmospheric correction for ocean color products typically removes at least 90 % of the signal recorded at the top of the atmosphere (TOA). Put another way, any artifacts in the TOA signal are amplified by at least a factor of 10 in any derived water products such as normalized water-leaving radiance of a specific spectral band (nLw(λ)), or in product fields such as total suspended sediment (TSS) concentration maps.
Striping is ubiquitous and difficult to remove because it has many possible origins. The detectors themselves are subject to small-amplitude variations in both sensitivity and calibration. The view angles (azimuthal and zenith) also vary from detector to detector and from pixel to pixel. Other differences in the instrument's optical path, components (e.g., mirrors), asynchronous readout, and so on also cause striping. Not unexpectedly, the magnitude of the striping varies from image to image. Striping is particularly problematic when comparing a sequence of images, since any difference in computations between images produces spurious results in the neighborhood of stripes.
Ocean products from NPP VIIRS have shown problematic striping since its launch, which has led to focused efforts at both NASA and NOAA to find correction methods. NASA created a vicarious destriping method for VIIRS images based on a collection of long-term on-orbit image data, including solar and lunar calibrations. NASA's Ocean Biology Processing Group (OBPG) began serving operational products with their vicarious calibrations and destriping for VIIRS in 2014 (Eplee et al., 2012). In contrast, a method for hierarchically destriping VNIR images based on a single scene was proposed in Bouali (2010). This particular variational method was more recently augmented with filtering using a hierarchical image decomposition (Bouali and Ignatov, 2013), and that algorithm has also been implemented by scientists at NOAA for operations with images from VIIRS (Mikelsons et al., 2014, 2015). Scene-based processing methods are advantageous for sensors which do not have dedicated calibration subsystems such as a solar diffuser, or where the data sets are limited in scope (such as in the case of airborne sensors) and do not include uniform scenes for vicarious calibration.
2 Regularization destriping: the functional and its minimization

The method described here is closely related to the destriping functional described in Bouali (2010). Our work differs in its exact functional form and its principle of solution. In particular, we formulate a solution for destriping in an inverse-problem framework, and keep only the part of the functional in Bouali (2010) that smooths the stripes. This new formulation allows us to provide an explicit numerical solution instead of an iterative one, the former being more efficient and thus better suited for operational codes. We explicitly introduce a regularization parameter that controls the relative balance between the data term ("fitting the original image") and the regularity term ("smoothing out the stripes"). Such regularization is a common practice in inverse problems (Vogel, 2002) and falls under the rubric of Tikhonov regularization theory. As a further refinement of the destriping functional, we specifically define weights for the regularization term so that the algorithm applies only to the stripes while preserving the other features.
Assuming that the stripes are parallel to one another in the image plane, we take the direction of the stripes as the x (horizontal) direction. Thus the data term representing the horizontal gradient difference between the original and the destriped images is given as

E_data(u) = ∫_Ω (u_x − f_x)² dΩ, (1)

where Ω is the image domain in the xy plane, f(x, y) is the original image with stripes, and u(x, y) is the destriped image.
The regularization term emphasizes the smoothness in the vertical direction, which is assumed to be free of stripes. This regularization term is given by

R(u) = ∫_Ω u_y² dΩ. (2)

The regularization parameter α > 0 balances the data term and the regularization term. The resulting destriping functional is

E(u) = ∫_Ω (u_x − f_x)² dΩ + α ∫_Ω u_y² dΩ. (3)

This is the x-directional destriping functional proposed by Bouali (2010), and it is equivalent to the basic form of our destriping functional when α = 1. The choice of α, as we show later, is key to achieving the balance between matching the original image and removing stripes. However, our approach differs from Bouali (2010): our goal is to develop a destriping method that is easy to implement while preserving the other features of the image. A drawback of scene-based destriping is unintended changes in the values of all the pixels, not just the stripes. If we apply the regularization term to the whole image as in Eq. (3), the entire image is affected. This could modify the original features of the image, in addition to removing stripes. Therefore, we further develop our functional in Eq. (3) to regularize only the stripes. We introduce a mask (L) into the regularization term to limit the smoothing effects to the stripes. To obtain L from the image, we first compute the slope of the image transverse to the stripes using first-order finite differences. Then we sum the absolute differences parallel to the stripes. This yields a total value (S) corresponding to each row. From the peaks of the graph of S versus r, where r is the row index, we can identify the stripes and select a "threshold" to separate the stripes from the other features. To compute the S curve, we do not need to consider the complete image; instead, we can consider a vertical image segment that contains some part of all the biased data. In this way, we can preserve actual features, such as roads or any other real signals, that are exactly aligned with stripes.
The mathematical expression for the computation of S for an image f, of size m × n, with a suitable boundary condition, can be written as

S(r) = Σ_{c=1}^{n} |f(r + 1, c) − f(r, c)|, (4)

where r = 1, 2, ..., m and f(m + 1, c) is the introduced boundary row. Now, defining the "threshold" value from the S curve, we obtain the sparse matrix L by assigning "ones" to the locations of the stripes. We will explain the procedure for selecting an appropriate "threshold" value using examples in Sect. 3. Any row r, where r = 1, 2, ..., m, of the matrix L of size m × n can be defined as

L(r, c) = 1 if S(r) > threshold, and L(r, c) = 0 otherwise, (5)

where c = 1, 2, ..., n.
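The row-sum S and mask L of Eqs. (4)-(5) can be sketched in a few lines of NumPy. This is an illustrative reading of the recipe above, not the operational code; the function name and the use of a reflected boundary row for f(m + 1, c) are our own choices.

```python
import numpy as np

def stripe_mask(f, threshold):
    """Sketch of Eqs. (4)-(5): S(r) sums, over the columns of row r,
    the absolute first-order differences taken transverse to the
    stripes; rows whose S value exceeds the threshold get "ones" in L."""
    m, n = f.shape
    fb = np.vstack([f, f[-1:]])                # reflected boundary row f(m+1, c)
    S = np.abs(fb[1:] - fb[:-1]).sum(axis=1)   # one total per row, shape (m,)
    L = np.zeros((m, n))
    L[S > threshold, :] = 1.0                  # "ones" at the stripe locations
    return S, L
```

Note that a bright stripe on row r raises S on both rows r − 1 and r, which is consistent with the functional penalizing "the neighborhood of stripes".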
Then the new destriping functional, with the spatially weighted regularization term, is written as

E(u) = ∫_Ω (u_x − f_x)² dΩ + α ∫_Ω L u_y² dΩ. (6)

The destriped image is obtained by minimizing this functional after choosing an appropriate regularization parameter. Note that the functional E(u) is invariant under a constant shift; that is, E(u + a) = E(u) for any constant a, implying that minimization of E(u) admits infinitely many solutions. In this work, we impose appropriate boundary conditions (as discussed below) that ensure uniqueness of the solution.
We create a destriped image by minimizing the energy functional in Eq. (6) using the Euler-Lagrange equation. For a functional of the form J(u) = ∫_Ω F(x, y, u, u_x, u_y) dΩ on the bounded domain Ω, the Euler-Lagrange equation is given as

F_u − (d/dx) F_{u_x} − (d/dy) F_{u_y} = 0, (7)

where subscripts represent the argument variable(s) of the partial derivatives. As explained in Vogel (2002) and Basnayake and Bollt (2014), the Euler-Lagrange equation

u_xx + αL u_yy = f_xx (8)

is obtained by applying Eq. (7) to Eq. (6). We can rewrite Eq. (8) as

(D_xx + αL D_yy) u = D_xx f, (9)

where the operators D_•• are two-dimensional arrays of size k × k used to compute the partial derivatives of a given vector of size k × 1 with respect to the indices ••. We use finite-difference approximations with suitable boundary conditions for each derivative to represent these differential operators directly. In this work, we apply "reflexive" boundary conditions parallel to the stripes and "zero" boundary conditions transverse to the stripes when we generate the derivative operators. These boundary conditions make D_xx + αLD_yy a full-rank operator, and hence we reach a unique solution. In Eq. (9) we stack the given image of size p × q into a k × 1 vector, where k = pq. We can now use the differential operators to write the linear Euler-Lagrange equation in the form Au = b and solve for u with an appropriate numerical method, rather than solving the Euler-Lagrange equation for u with an iterative scheme such as gradient descent.

Construction of the differential operator
We construct the operator D_xx using finite-difference approximations. The operator D_yy is built by taking the transpose of the finite-difference stencil. Suppose we have a function M(x, y) ∈ R^{p×q}, and we need to compute the second-order partial derivative of M with respect to x. We use a fourth-order finite-difference approximation and compute the pointwise second partial derivative of the array M with respect to x.
As an example, take an array M (x, y) of size 3 × 5 where we want to compute M xx (x, y). We index the elements in the array in the form of a column vector as shown in Table 1, with two added boundary columns for each side.
The boundary points are highlighted in bold. If we compute the partial derivative of m_1 with respect to x, the resulting approximation follows the fourth-order central stencil

M_xx ≈ [−M(x − 2h, y) + 16M(x − h, y) − 30M(x, y) + 16M(x + h, y) − M(x + 2h, y)] / (12h²),

applied at m_1 using the added boundary columns. Computing the finite-difference approximations for each element in the array, we obtain the differential operator D_xx corresponding to the 3 × 5 array in Table 1. The resulting operator is a sparse matrix with only five non-zero diagonals, and it is shown in Table 2 with a multiplication factor of 12h².
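A minimal SciPy sketch of such a five-diagonal operator is shown below. For simplicity this version just truncates the stencil at the edges, rather than padding with boundary columns and applying the reflexive/zero conditions used in the paper; the function name is ours.

```python
import numpy as np
from scipy.sparse import diags

def d_xx(k, h=1.0):
    """Fourth-order second-derivative operator with the five-diagonal
    stencil (-1, 16, -30, 16, -1)/(12 h^2).  Boundary rows here simply
    truncate the stencil (a stand-in for the boundary handling in the text)."""
    vals = [c / (12.0 * h ** 2) for c in (-1.0, 16.0, -30.0, 16.0, -1.0)]
    return diags(vals, offsets=[-2, -1, 0, 1, 2], shape=(k, k), format="csr")
```

Applied to samples of x², the interior rows of this operator return the exact second derivative, 2, which is a quick sanity check on the stencil coefficients.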

Solution to the Euler-Lagrange equation
Now we can determine the solution to Eq. (9). We rewrite Eq. (9) as

Au = b, (10)

where A = D_xx + αLD_yy and b = D_xx f. If the size of the given striped image f is p × q, then A is a k × k sparse array and b is a k × 1 array, where k = pq. Using a suitable value for the regularization parameter, Eq. (10) can be solved as a linear system. The system is sparse, and hence the computation time for an image with n pixels is O(n) per iteration. Clearly, at this stage, for a given α, Eq. (10) is straightforward to solve; however, in terms of the image processing, the specific choice of α plays an important role. To achieve "the most appropriate solution," we need to determine the best regularization parameter α.
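The assembly and solution of Eq. (10) can be sketched with Kronecker products of one-dimensional difference operators. This illustration swaps in second-order stencils and plain zero (Dirichlet) truncation everywhere, in place of the paper's fourth-order and reflexive operators, so it is a simplified stand-in rather than the exact operational solver; all function names are ours.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def second_diff(k, h=1.0):
    # Simple second-order stencil (1, -2, 1)/h^2; truncation at the
    # edges amounts to zero (Dirichlet) boundary conditions.
    return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k), format="csr") / h ** 2

def destripe(f, Lmask, alpha):
    """Solve (D_xx + alpha * L * D_yy) u = D_xx f for a p x q image f,
    stacked row-wise into a k = p*q vector."""
    p, q = f.shape
    Dxx = kron(identity(p), second_diff(q), format="csr")  # along the stripes (x)
    Dyy = kron(second_diff(p), identity(q), format="csr")  # across the stripes (y)
    Lw = diags(Lmask.ravel())                              # weights from Eq. (5)
    A = (Dxx + alpha * (Lw @ Dyy)).tocsc()
    b = Dxx @ f.ravel()
    return spsolve(A, b).reshape(p, q)
```

With an all-zero mask the regularization term vanishes and the solve returns the input image unchanged, which is a useful check that the data term alone reproduces f.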

Selection of the regularization parameter
Table 2. A discretized derivative operator D_xx × 12h² for a 3 × 5 matrix.

The condition number of the resulting matrix quantifies the amplification of computational errors seen while solving the problem by direct computation. The condition number may be computed as the ratio of the largest singular value to the smallest singular value of the coefficient matrix. If the condition number is large, then the coefficient matrix is said to be ill-conditioned, and hence the corresponding system is ill-posed. In an ill-posed system, the solution is highly sensitive to perturbations of the input data. Regularizing an ill-posed system, which emphasizes a desired property of the problem, introduces a stable way to define a desirable solution (Vogel, 2002; Hansen et al., 2006; Hansen, 2010). This is the standard trade-off between regularity and stability in Tikhonov regularization terms. We regularize our computed solution by emphasizing the expected physics. To damp the accumulated errors from the residuals, we must make sure that we add sufficient regularity. The balance between the data term and the regularization term is very important: if we add too much regularity, it will divert the solution from the desired one. Stated in terms of Tikhonov regularization, α serves to select a unique optimizer u from what would otherwise be an ill-posed system had only the data fidelity term been chosen. In terms of the images, the data fidelity states that the optimizer image u should "appear as" the original data image, measured here in terms of the gradient along the stripes, but "smoothed" according to the regularizer term, in this case transverse to the stripes. The question then is how to balance these two terms.
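The singular-value definition of the condition number is easy to verify numerically; the small diagonal matrix below is a hypothetical example of ours, not data from the paper.

```python
import numpy as np

# Condition number as the ratio of extreme singular values, checked
# against NumPy's built-in 2-norm condition number for a toy matrix.
B = np.array([[1.0, 0.0],
              [0.0, 1.0e-8]])
s = np.linalg.svd(B, compute_uv=False)   # singular values, descending
cond = s[0] / s[-1]
```

Here the ratio is 10⁸, so a direct solve with this matrix would amplify input perturbations by up to eight orders of magnitude, which is exactly the situation regularization is meant to stabilize.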
Some common methods to determine the regularization parameter in inverse problems are the L-curve method of Hansen (1992, 2000) and the U-curve method of Krawczyk-Stańdo and Rudnicki (2007, 2008). After trials with both methods, we selected the U-curve method. Starting with the functional

J(u) = ||Bu − v||² + α||u||², (11)

where B is a k × k array and u, v are k × 1 arrays for an integer k, a regularized solution u_α is computed for a fixed α. Then the squared norms of the residual and of the solution are written as

ρ(α) = ||B u_α − v||², η(α) = ||u_α||². (12)

In general, α ∈ (0, 1), and in this work we picked 100 equally spaced α values in the interval (10⁻¹², 1). Considering all the α values and computing the sum of the reciprocals results in the U-curve

U(α) = 1/ρ(α) + 1/η(α). (13)

The "best" α value corresponds to the minimum of U(α).
The "best" α value must lie in the interval α ∈ (σ_r^{2/3}, σ_1^{2/3}), where σ_1 is the largest singular value of the operator B and σ_r is the smallest non-zero singular value of B (Krawczyk-Stańdo and Rudnicki, 2007).
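A dense sketch of the U-curve selection for a generic Tikhonov problem is given below. It illustrates Eqs. (11)-(13) on a small matrix rather than the destriping operator itself, and it samples α logarithmically for numerical convenience; the function name and the test matrix are assumptions of ours.

```python
import numpy as np

def u_curve_alpha(B, v, alphas):
    """Pick alpha minimizing U(a) = 1/rho(a) + 1/eta(a), where
    rho = ||B u_a - v||^2 and eta = ||u_a||^2 for the Tikhonov
    problem min ||B u - v||^2 + a ||u||^2 (dense, for illustration)."""
    k = B.shape[1]
    U = []
    for a in alphas:
        # Normal equations for the Tikhonov-regularized least squares.
        u = np.linalg.solve(B.T @ B + a * np.eye(k), B.T @ v)
        rho = float(np.sum((B @ u - v) ** 2))   # squared residual norm
        eta = float(np.sum(u ** 2))             # squared solution norm
        U.append(1.0 / rho + 1.0 / eta)
    return alphas[int(np.argmin(U))]
```

For small α the residual term 1/ρ dominates U, and for large α the solution term 1/η dominates; the minimum sits in the "flat" bottom of the U between the two regimes.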

Inpainting data gaps
Unlike terrestrial images, which can show sharp edges, ocean color images typically appear continuous. This is because the water tends to diffuse any color agents in the water column, and the spatial resolution of the sensor is usually finer than the scale of those diffusive features. The same holds spectrally if the sampling wavelength spacing is shorter than the correlation scale of the spectra, as is the case for hyperspectral images. However, this continuity is broken if gaps appear in the data.
Clouds are very bright and often saturate the sensor. In normal processing, clouds are typically masked from the data since their large radiance values obscure the (relatively dark) ocean. These types of processing masks also cause large, irregular data gaps. There are other sources of data gaps as well, as we discuss here, that can be inherent in the sensor design.
As an example of data gaps introduced by system design, consider the VIIRS sensor, which uses 16 detector elements to generate each spatial image. The spatial footprint of each adjacent sensor element overlaps at the detector edges (Cao et al., 2013). To reduce data transmission from the satellite to ground stations, the overlapping regions are not transmitted, causing the so-called "bow-tie" effect. This appears as visible horizontal stripes in the Level 1 or Level 2 unmapped pixel values. These gaps are removed during projection to ground-based coordinates, the so-called Level 3 data (L3). However, stripes that are linear in the "unmapped" data become nonlinear after mapping and are more difficult to remove. Therefore we work with the Level 1 and Level 2 data, where the stripes are linear and often aligned with the focal plane array or detector elements.
We need to preprocess the image data in such a way as to ensure continuity across any data gaps. An obvious way to fix the gaps is to use "inpainting", a technique from image processing -rather than infer what the actual missing data might be, inpainting simply imposes continuity across the whole image when gaps are present. In a museum setting, inpainting refers to the process whereby a painter-restorer interprets a damaged painting by artistically filling in damaged or missing parts of a painting, smoothly bleeding in the colors of the painting that surrounds the damage (Bertalmio et al., 2011). In modern-day computational parlance, inpainting refers to an image-processing algorithm that mimics this idea. A wide variety of methods have been implemented (Bertalmio et al., 2000).
Missing data are filled in by solving the Laplace equation with Dirichlet boundary conditions, ∇²u = 0 in Ω and u = f on ∂Ω; specifically, we use the MATLAB routine roifill. Because the destriping algorithm approximates the partial derivatives using finite-difference approximations, inpainting should be done before applying the destriping algorithm to images.
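Since roifill is MATLAB-specific, a rough NumPy analogue is to relax the discrete Laplace equation over the gap pixels by Jacobi iteration. This is a sketch under the assumption that gaps do not touch the image border (np.roll wraps around); the function name and iteration count are ours.

```python
import numpy as np

def inpaint_laplace(f, gap, iters=500):
    """Fill masked pixels by relaxing the discrete Laplace equation:
    each gap pixel is driven to the average of its four neighbors,
    while known pixels stay fixed (Dirichlet data u = f around the gap).
    Assumes gaps are interior, since np.roll wraps at the border."""
    u = f.astype(float).copy()
    u[gap] = f[~gap].mean()                 # crude initial fill
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
                      + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1))
        u[gap] = avg[gap]                   # update only the gap pixels
    return u
```

On a linearly varying (harmonic) image, this relaxation reconstructs a corrupted interior pixel exactly, which matches the intent of imposing continuity rather than inferring the true missing data.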

Results and discussion
In this section, we first apply the destriping method to a simulated image of sea surface temperature (SST), and then apply the destriping algorithm to two real-world images, one from the multi-spectral NOAA imager VIIRS, and the second from the JPL PRISM hyperspectral sensor.

Benchmark data: sea surface temperature
A benchmark data set, sea surface temperature data off the Oregon coast, was used to test the codes. The original image data are shown in Fig. 1a, where red represents high temperature and blue represents low temperature. This simulated image is from a three-dimensional Regional Ocean Model System (ROMS) of the Oregon coast on 1 August 2002 (Osborne et al., 2011). The area covered is 41-46° N, 124-125° W.
The image in Fig. 1b shows the artificially added stripes on the image shown in Fig. 1a. The intensities of the stripes are assigned as the absolute value of the difference between the original intensity and the average of each striped row. However, the stripes at the 10th, 90th and 110th rows have some added noise so that they can be visualized properly.
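One reading of this striping recipe can be sketched as follows; the function name is ours, and the extra visualization noise on the selected rows is omitted for clarity.

```python
import numpy as np

def add_stripes(img, rows):
    """Mimic the striping recipe described above: on each selected row,
    offset every pixel by the absolute difference between its value and
    the mean of that row.  (The text additionally adds noise on a few
    rows purely so they remain visible in the figure.)"""
    f = img.astype(float).copy()
    for r in rows:
        f[r] += np.abs(img[r] - img[r].mean())
    return f
```

Because the offset is the deviation from the row mean, pixels far from the mean are perturbed most, producing a stripe whose strength varies along the row.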
Our next step is to apply the destriping method to the SST data and check the accuracy of the algorithm. We compare the solutions with regularization parameters α = 1 and α = 3 × 10⁻¹ using the functional with an unweighted regularization term, as shown in Eq. (3), and α = 7 × 10⁻¹ with a spatially weighted regularization term, as shown in Eq. (6). The idea of this comparison is to show the importance of the selection of our regularization method and how the weighted regularization term improves the destriping results. The destriped image from the unweighted regularization term with α = 1 is shown in Fig. 2a, and it is clearly over-smoothed, as most of the original features are destroyed by the destriping. Therefore, we chose an appropriate α value to balance the data term and the unweighted regularization term using the U-curve method. The U-curve solution is α = 3 × 10⁻¹, and the resulting destriped image is shown in Fig. 2b. However, it can be observed that some features of the destriped image have been smoothed in places, in addition to the stripes, during the destriping process. Hence, it is important to preserve the other features, where there are no stripes, while destriping. We achieve this by improving the regularization term, assigning different weights to the places where stripes are present and where they are not. The weight matrix L can be obtained from the expression in Eq. (5) by assigning zeros to the places where there are no stripes. In this manner, we can protect the features of the original image from smoothing, and the resulting image is shown in Fig. 2c. The next task is to check the accuracy of the estimated values for the stripes. The stripe at the 110th row was selected at random for this purpose. We plot the intensity values of the striped data (black), the reconstruction (red) and the actual values (blue) against the column index. The results from the functional in Eq. (3) with α = 1 are shown in graph (a) of Fig. 3. The reconstructions of the first 20 pixels and the last 10 pixels are close to the original values, but the rest show significant differences from the actual values. Graph (b) in Fig. 3 shows the destriped results for the 110th row from the functional in Eq. (3) using the U-curve solution, α = 3 × 10⁻¹. In this case, the reconstruction of some middle pixels (18-28 and 31-37) and the last 10 pixels is off to some extent, but the rest is much closer to the actual values than the reconstruction with α = 1. Graph (c) in Fig. 3 shows the best solution among these approaches, obtained from the weighted regularization term with α = 7 × 10⁻¹. This reconstruction is very close to the actual values and hence is the most accurate. These are the sort of results that we expect from a destriping algorithm. More importantly, if we compare "a stripe-like feature" at the 18th row, we can confirm the importance of weighting the regularization term, as the weighted regularization term focuses only on the stripes.
In addition to the reconstruction of the stripes, we need to pay attention to the rest of the features of the image. The idea of destriping is to remove artificial stripes while preserving the other original features of the image. Therefore, we randomly select the 67th row of the image to compare the before and after effects of destriping at a place where there is no actual stripe. Graphs (a) and (b) in Fig. 4 are from the functional in Eq. (3) with α = 1 and α = 3 × 10⁻¹, respectively, and graph (c) in Fig. 4 is from the functional in Eq. (6) with α = 7 × 10⁻¹. Graphs (a) and (b) in Fig. 4 show that the destriping has affected the other features of the image when the functional in Eq. (3) is applied, regardless of the value of the regularization parameter. However, graph (c) in Fig. 4 shows that the functional with the spatially weighted regularization term in Eq. (6) does not affect the other features of the image.

Figure 3. Reconstruction of the 110th row of the striped image in Fig. 1b. The graphs show the actual data in blue, the striped data in black and the destriped data in red. Graphs (a) and (b) represent the reconstructions from the functional in Eq. (3) with α = 1 and α = 3 × 10⁻¹, respectively. Graph (c) shows the reconstruction from the functional in Eq. (6) with α = 7 × 10⁻¹.
The S curve corresponding to the SST image is shown in graph (a) of Fig. 5. The "threshold" value for this problem, also shown on the same graph, is 18 in this case. In the units of the sea surface temperature data, the threshold value can be written as 842 °C longitude per latitude. To determine the best "threshold" value, one that does not affect the image regions with no stripes, we begin by assigning the "threshold" value to the maximum peak of the S curve. Then we apply the destriping algorithm and check the solution. If at least one stripe is still visible, we assign the next-highest peak of the S curve as the "threshold" value and check for stripes again. Continuing in this manner until all the stripes are removed, the best "threshold" value can be determined. The peaks that cross the threshold value are the stripes. As a comparison over the full destriped image, the absolute percentage error between the striped and destriped images is shown in image (b) of Fig. 5. This is computed by

error = |f − u| / |f| × 100 %,

where f and u are the striped and destriped images, respectively. The graph of the absolute percentage error shows that not only the 67th row but also all the other non-stripe rows are unaffected by this spatially weighted regularization method.

Example 2 -VIIRS images
A good example of VIIRS striping is shown (Fig. 6) in a chlorophyll concentration map near the Santa Monica region in southern California on 7 November 2014 (NAS, 2014). The chlorophyll concentration is given in milligrams per cubic meter. Green is the land, and the dark blue is the dropped data and the missing data. The full VIIRS data granule covers 122.09 to 116.90° W and 31.6 to 34.2° N (Fig. 6). We consider a subsetted (cropped) region of interest to highlight the stripes in Fig. 7a. The first step of applying this destriping method is to determine the threshold value separating the neighborhood of the stripes from the rest of the features. However, when we deal with real data, we may not always get nice, smooth images. For instance, if we compute the S-curve values using the whole image, we are unable to find any evidence for determining the threshold value, as the sum of the magnitudes of the forward differences in some non-stripe rows is higher than that of the stripes. Therefore, we need to carefully pick a subregion of the image that includes all the rows with some selected columns. The sum of the magnitudes of the forward differences in non-stripe rows should be less than that of the stripes in this region. Then the stripes can be easily highlighted. For this example, we select the region using columns 137 to 148, and it is shown in Fig. 7b with the S curve. The threshold value determined from the S curve is 0.558 for this image. If we include the proper units of the image data, the threshold value is 86.04 mg m⁻³.
The effects of the spatially weighted regularization destriping are shown in Fig. 7e. We apply our algorithm to the image shown in Fig. 7a. In this case, the regularization parameter was α = 10⁻² with the weighted regularization term. The intensity of the destriped image now varies smoothly, and the image is therefore smooth enough for further post-processing in other applications, such as computing optical flow.
The approach proposed in Bouali (2010) omits the transverse-direction functional, effectively setting α = 1 in our functional Eq. (3). However, our destriping functional, as shown in Eq. (6), has the weight matrix L inside the regularization term, and hence the two approaches are not the same. Panel (c) in Fig. 7 shows the destriped image when α = 1 without the weighted regularization term, where image (a) is the original image. The destriped image is blurred, and we lose some information from the original image due to "over-regularization". In this case, by over-regularization we mean that continuity in the direction transverse to the "stripes" has been emphasized so strongly in the functional of Eq. (3), when α is close to 1, that it overwhelms the need to also emphasize the image data in the first term, the "data fidelity".

Figure 4. This figure shows the effects of destriping in places where there are "no stripes". We randomly picked the 67th row for this comparison. When α = 1 in the unweighted regularization functional, the image is heavily smoothed and the destriping affects the whole image. Much better results can be obtained from the unweighted regularization functional with α = 3 × 10⁻¹, which is the U-curve solution and is shown in graph (b). A spatially weighted regularization term with α = 7 × 10⁻¹ has less of an effect on the other features of the image, as can be observed in graph (c).
On the other hand, image (d) is the destriped image of image (a) with α = 10⁻⁵ in the weighted regularization term. In this image, we can still observe some stripes, because there is too little regularity to smooth them out. Hence, a weighting factor in the regularization term is clearly necessary, and choosing an appropriate regularization parameter is also very important. We can observe this by comparing images (c), (d) and (e) in Fig. 7 as α varies from 10⁻⁵ to 1. Therefore, once we have an appropriate α, we do not need an extra functional to reduce the blurring effects, as explained in Bouali (2010), and hence our approach provides a simple algorithm to remove the stripes. The appropriate α is selected from the U-curve method, as explained in Sect. 2.3. Panel (f) in Fig. 7 shows the percentage error between the striped and destriped images. This shows that the effects of the destriping on the parts of the image where there are no stripes are negligible.

Figure 6. The image shows the chlorophyll concentration in milligrams per cubic meter near the Santa Monica region in southern California as viewed by VIIRS on 7 November 2014. Green represents the land, and dark blue represents the dropped data due to bow-tie effects and the missing data due to clouds. For a detailed discussion, we next consider the subset of the image that is covered by the pink square in the image.
Image-based methods such as variational destriping can be used together with other destriping methods. For instance, NASA's vicarious calibration of the L2 (*.nc) product uses a monthly moon calibration to monitor the striping and calibrations and to create up-to-date corrections using the entire image collection. Figure 8a shows the VIIRS image in Fig. 7a after the NASA correction; the data are publicly available at NAS (2014). However, striping artifacts are still visible in that image. Figure 8b shows the application of the weighted regularization destriping to the image from NASA's vicarious calibration of the L2 (*.nc) product. The product of applying both corrections is superior to either individual correction. The regularization parameter for this image was 5 × 10⁻³, and the destriped image is shown in Fig. 8b. The threshold value for columns 137 to 148 was 0.55. If we include the proper units of the chlorophyll data, the threshold value is 84.81 mg m⁻³.
When we compare the images (a) and (b) in Fig. 8, it can be observed that our method can be used to further improve upon the destriped images, after NASA's vicarious calibration of the L2 (*.nc) product method has already been applied.

Example 3 -JPL PRISM images
In the last example, we apply the variational destriping algorithm to another data set, from the JPL PRISM hyperspectral imager; the data are publicly available at JPL (2015). Stripes arise in this sensor because the focal-plane array read-out mechanism has cross-talk artifacts. An image from the band centered at 410 nm (band 22), taken during an airborne campaign around Monterey, CA, near the Elkhorn Slough (Pantazis et al., 2015), is shown in Fig. 9.
The regularization parameter used to destripe image (a) in Fig. 9 was α = 5 × 10⁻³. The destriped image is shown in image (b) of Fig. 9. We used columns 50 to 70 to determine the threshold value for assigning the weights in the regularization term; this value was 1.7 for this image. It can be clearly observed that the stripes are smoothed away by the destriping method while the other features of the image are preserved.

Conclusions
We present a variational destriping method by explicitly including a tunable regularization parameter with a weighted regularization term applied to a part of the destriping functional in Bouali (2010). In other words, we recast one piece of the destriping functional of Bouali (2010) into a standard variational approach employing the Tikhonov regularization theory of Vogel (2002). According to Tikhonov regularization theory, the tuning parameter allows us to properly balance the data fidelity against the smoothing effects of the regularization term, with the help of the given weights. The introduction of the weighted regularization term avoids altering the original features of the image during the destriping process. As a preprocessing step, we apply an image-processing technique (inpainting) to avoid error accumulation from the "bow-tie" effects and from missing data due to clouds. We also demonstrate alternative numerical aspects of implementing these methods, which allow us to write the solver in terms of common matrix manipulations. Lastly, we show that applying a scene-based method to the destriped VIIRS L2 product from NASA's calibration method results in additional improvements in scene uniformity.
Data availability. We used three data sets (SST, VIIRS and PRISM) in our paper. The VIIRS data are available at https://oceancolor.gsfc.nasa.gov/ and the PRISM data at https://prism.jpl.nasa.gov/. The SST data are not publicly accessible; they are available on request from John Osborn, Oregon State University.