[1]:

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authored by: Tommi Heikkilä (LUT)
# Review by: Margaret Duff (STFC-UKRI), Jakob Jørgensen (DTU)
[2]:

import cil
print(f"Using CIL version {cil.__version__}")
Using CIL version 24.2.0

Controlled Wavelet Sparsity using callbacks (2D data)#

This notebook showcases an automated method for tuning the regularization parameter towards a good value during the optimization process. It is organized as follows.

  1. Set up the data and the appropriate geometry

  2. Brief introduction to the wavelet representation system and an explanation of what wavelet regularization should do.

    • Traditional run with a fixed regularization parameter using FISTA

  3. Perform multiple runs with different (fixed) regularization parameter values, for comparison and to motivate the later steps.

  4. Brief explanation of the controlled wavelet sparsity method, including the original source and how it is implemented in CIL.

  5. Some comments on the limitations of the method and how to pick the important parameter values.

2D laboratory micro-CT, fan-beam data of a walnut#

This example uses the walnut dataset Data328.mat from https://doi.org/10.5281/zenodo.1254206 :

If running locally, please download the data and update the ‘path’ variable below.

[3]:
path = './data'

Set the logging level

[4]:
import logging
logger = logging.getLogger('dxchange')
logger.setLevel(logging.ERROR)

First import all of the modules we will need:

[5]:
import os
import numpy as np
import matplotlib.pyplot as plt

# cil imports
from cil.framework import ImageData, ImageGeometry
from cil.framework import AcquisitionGeometry, AcquisitionData

from cil.optimisation.functions import LeastSquares, L1Sparsity, ScaledFunction
from cil.optimisation.operators import WaveletOperator
from cil.optimisation.algorithms import FISTA

from cil.plugins.astra import ProjectionOperator

from cil.utilities.display import show2D, show_geometry

import scipy.io

Load the 2D fan-beam sinogram of walnut:

[6]:
filename = os.path.join(path, "Data328.mat")

matdata = scipy.io.loadmat(filename)
sinogram = np.float32(matdata["m"].transpose()) # Matlab is column-major, numpy is row-major
num_angles, num_pixels = sinogram.shape

Create geometry#

We have to manually set the values for the correct geometry. These can be found in the documentation of the data.

[7]:
# Acquisition geometry parameters
SOD = 110 # Source-Origin-distance
print(f'SOD: {SOD:.3f} mm')
SDD = 300 # Source-Detector-distance
print(f'SDD: {SDD:.3f} mm')
ODD = SDD - SOD # Origin-detector-distance
print(f'ODD: {ODD:.3f} mm')
pixelSize = 114.8 / num_pixels # Detector pixel width after binning
print(f'Detector pixel size: {pixelSize:.3f} mm')

angles = np.linspace(0., 360., 120, endpoint=False)
SOD: 110.000 mm
SDD: 300.000 mm
ODD: 190.000 mm
Detector pixel size: 0.350 mm
[8]:
ag = AcquisitionGeometry.create_Cone2D([0, -SOD], [0, ODD], units='mm', detector_direction_x=[1, 0])
ag.set_panel(num_pixels, pixel_size=pixelSize, origin='bottom-left')
ag.set_angles(-angles)
ag.set_labels(('angle','horizontal'))
print(ag)
2D Cone-beam tomography
System configuration:
        Source position: [   0., -110.]
        Rotation axis position: [0., 0.]
        Detector position: [  0., 190.]
        Detector direction x: [1., 0.]
Panel configuration:
        Number of pixels: [328   1]
        Pixel size: [0.35 0.35]
        Pixel origin: bottom-left
Channel configuration:
        Number of channels: 1
Acquisition description:
        Number of positions: 120
        Angles 0-9 in degrees: [ -0.,  -3.,  -6.,  -9., -12., -15., -18., -21., -24., -27.]
        Angles 110-119 in degrees: [-330., -333., -336., -339., -342., -345., -348., -351., -354., -357.]
        Full angular array can be accessed with acquisition_data.geometry.angles
Distances in units: mm
[9]:
show_geometry(ag);
../../_images/demos_controlledWaveletSparsity_2d_14_0.png

Image geometry and data#

Notice in particular how the voxel_size (or image pixel size) matches the reported value of 0.128 mm: it is the detector pixel size divided by the geometric magnification, 0.35 mm × (SOD/SDD) = 0.35 × (110/300) ≈ 0.128 mm.

[10]:
# Setup image geometry
n = num_pixels - 1
ig = ag.get_ImageGeometry()
ig.voxel_num_x = n
ig.voxel_num_y = n
print(ig)

# Setup data
data = AcquisitionData(sinogram, deep_copy=False, geometry=ag)
show2D(data);
Number of channels: 1
channel_spacing: 1.0
voxel_num : x327,y327
voxel_size : x0.12833333333333333,y0.12833333333333333
center : x0,y0

../../_images/demos_controlledWaveletSparsity_2d_16_1.png

Reconstructions#

[11]:
A = ProjectionOperator(ig, ag, 'cpu')

x_bp = A.adjoint(data) # Simple backprojection
show2D(x_bp, size=(10,10), title="Naive backprojection");
../../_images/demos_controlledWaveletSparsity_2d_18_0.png

Wavelet based regularization#

A classical approach in imaging problems (such as CT or denoising) is to try to represent the key information about the solution in a wavelet basis. By construction, a relatively sparse representation (i.e. only a few nonzero coefficients) should then suffice, as opposed to a pixel-based representation.

In short, we have the least squares data mismatch term of form

\[\frac{1}{2} \| A x - m_\delta \|_2^2,\]

where \(A\) is the forward operator, \(m_\delta\) is the noisy data and \(x\) is the unknown solution. Then we add a regularization term of the form

\[\alpha \| W(x) \|_1 = \alpha \sum_{j,k} | c(x)_{j,k} |,\]

where \(\alpha > 0\) is the regularization parameter (balancing our trust in the data against the need for regularization) and \(W\) is the discrete wavelet transform operator, which decomposes any input \(x\) (signal, image or volume) into a sequence of coefficients \(c(x)_{j,k}\) of varying scale (\(j \in \mathbb{N}\)) and location (\(k \in \mathbb{Z}^d\)), such that

\[x = \sum_{j,k} c(x)_{j,k} \psi_{j,k}.\]

Wavelet transform#

A classical choice is one of the orthogonal wavelets of Ingrid Daubechies, usually distinguished by their smoothness properties. For example, the Daubechies-2 wavelets always give continuous approximations.

[12]:
wname = 'db2'
level = 4

W = WaveletOperator(ig, level=level, wname=wname)
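
Since ‘db2’ is an orthogonal wavelet, the adjoint of W coincides with its inverse, so any image can be decomposed and reconstructed exactly. A quick sanity check (a minimal sketch of our own, not part of the reconstruction pipeline):

x_test = ig.allocate('random')   # random test image
c = W.direct(x_test)             # forward transform: wavelet coefficients
x_rec = W.adjoint(c)             # for orthogonal wavelets the adjoint inverts the transform
print((x_test - x_rec).norm())   # should be ~0 (up to floating point error)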

Fast Iterative Soft Thresholding Algorithm (FISTA)#

An equally classic choice of minimization algorithm is the (Fast) Iterative Soft Thresholding Algorithm, whose non-accelerated form (ISTA) was originally developed for wavelet-based denoising in
> Daubechies, I., Defrise, M., & De Mol, C. (2004).
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint.
Communications on Pure and Applied Mathematics, 57(11), 1413-1457.

We will run the accelerated version, FISTA.
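
For intuition, each (F)ISTA iteration alternates a gradient step on the data term with soft thresholding of the coefficients; that proximal step is what gives the algorithm its name. A minimal numpy illustration (the helper soft_threshold is ours for illustration; CIL’s L1Sparsity applies the equivalent proximal map to the wavelet coefficients internally):

import numpy as np

def soft_threshold(c, t):
    # Proximal map of t*||.||_1: shrink every coefficient towards zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

print(soft_threshold(np.array([-0.3, 0.05, 0.2]), 0.1))  # [-0.2  0.    0.1]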

[13]:
f = LeastSquares(A, data)
g = L1Sparsity(W)

tau = 1e-5  # Regularization parameter

fista = FISTA(initial=ig.allocate(0), f=f, g=tau*g)

Let’s run an arbitrary number of iterations (200):

[14]:
fista.run(200)
[15]:
x_fista = fista.get_output()
show2D(x_fista, size=(10,10), fix_range=(0,0.1), title=f"FISTA with {tau = }");
../../_images/demos_controlledWaveletSparsity_2d_26_0.png

Looks okay, but how do we know the choice of regularization parameter was good? Could we get a better result with another value? And do we have the patience to try out dozens or even hundreds of values?

Bulk reconstruction (slow)#

Let’s try reconstructing with a variety of different regularization parameters:

[16]:
J = 6
num_iter = 300  # avoid shadowing the built-in iter
regParams = 10**np.linspace(-6, 2, J, endpoint=True)
recons = []
L2Vals = np.zeros((J,1))
regVals = np.zeros((J,1))


for j, tau in enumerate(regParams):
    print(f"{tau = }")
    fista = FISTA(initial=ig.allocate(0), f=f, g=tau*g)
    fista.run(num_iter)
    x = fista.get_output()
    # Store reconstruction
    recons.append(x)
    # Evaluate data and regularization terms
    L2Vals[j] = f(x)
    regVals[j] = g(x)
tau = 1e-06
tau = 3.9810717055349695e-05
tau = 0.001584893192461114
tau = 0.06309573444801943
tau = 2.511886431509582
tau = 100.0
[17]:
# Plot reconstructions
show2D(recons, title=[f"tau = {tau:.1e}" for tau in regParams], fix_range=(0,0.1), size=(10,10));
# Plot data mismatch vs. regularization term of each minimizer
fig, ax = plt.subplots()
ax.loglog(L2Vals, regVals, linestyle='dashed', marker='o')  # x: data mismatch, y: regularizer
ax.set_title("Space of minimizers")
ax.set_xlabel("Data mismatch")
ax.set_ylabel("Regularizer");
../../_images/demos_controlledWaveletSparsity_2d_30_0.png
../../_images/demos_controlledWaveletSparsity_2d_30_1.png

Methods like the L-curve can help choose a suitable value from such curves, but they require many runs of the algorithm (most of which are “wasted” computations).
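
If one does want to read a value off such a curve, labelling each point with its tau makes the corner easier to spot. An optional snippet (our own sketch, reusing the arrays computed above):

# Optional: annotate the L-curve points with their regularization parameters
fig, ax = plt.subplots()
ax.loglog(L2Vals, regVals, linestyle='dashed', marker='o')
for tau_j, xj, yj in zip(regParams, L2Vals.ravel(), regVals.ravel()):
    ax.annotate(f"{tau_j:.0e}", (xj, yj))
ax.set_xlabel("Data mismatch")
ax.set_ylabel("Regularizer");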

Sparsity levels of the bulk reconstruction#

Let’s compute the ratio of “meaningful” wavelet coefficients in the minimizers:

[18]:
sparsityLevels = np.zeros((J,1))
Wlen = np.prod(W._shapes[0]) # Number of Approximation Coefficients
for l in range(W.level):
    Wlen += (2**W.domain_geometry().ndim - 1)*np.prod(W._shapes[l+1][W.domain_geometry().ndim*'d']) # Number of Detail Coefficients for level `l`
kappa = 1e-7 # Threshold for "meaningful" or "big" coefficients
for j in range(J):
    Wx = W.direct(recons[j])
    sparsityLevels[j] = (Wx.abs().as_array() > kappa).sum() / Wlen

fig, ax = plt.subplots()
ax.semilogx(regParams, sparsityLevels*100, linestyle='none', marker='o')
ax.set_title("Sparsity levels")
ax.set_xlabel("Regularization parameter")
ax.set_ylabel("Wavelet sparsity (%)");
../../_images/demos_controlledWaveletSparsity_2d_33_0.png

The not-so-good reconstructions seem to have either a very high (~100%) or a very low (~0%) share of “meaningful” wavelet coefficients, since they are either dominated by noise or lack all detail. Maybe we can use this to our advantage?

Automated Controlled Sparsity#

We follow the automated tuning strategy as introduced in
> Purisha, Z., Rimpeläinen, J., Bubba, T., & Siltanen, S. (2017).
Controlled wavelet domain sparsity for x-ray tomography.
Measurement Science and Technology, 29(1), 014002.

We implement it as a StepSizeRule and a callback. Note that we are NOT actually changing the step_size (which is based on the Lipschitz constant of the data mismatch term) but the regularization parameter tau; the StepSizeRule machinery simply provides a convenient hook for updating tau on every iteration (in fact we could also pass it an actual StepSizeRule to apply, but let’s stick with a constant step size for now). A rough sketch of the idea is given below.
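
The sketch below illustrates what such a rule could look like. It is not the shipped SparsityControl implementation; in particular the additive update of tau with weight beta is an assumption made here, in the spirit of Purisha et al. (2017):

import numpy as np
from cil.optimisation.utilities import StepSizeRule

class ControlledSparsitySketch(StepSizeRule):
    def __init__(self, W, desired_sparsity, beta=5.9e-5, kappa=1e-7):
        self.W = W                                # wavelet operator
        self.desired_sparsity = desired_sparsity  # target ratio in (0, 1)
        self.beta = beta                          # tuning speed (assumed update rule)
        self.kappa = kappa                        # threshold for "meaningful" coefficients
        self.sparsity_vals = []                   # histories, for plotting
        self.reg_param_vals = []

    def get_step_size(self, algorithm):
        # Ratio of "meaningful" wavelet coefficients in the current iterate
        c = self.W.direct(algorithm.solution).as_array()
        sparsity = (np.abs(c) > self.kappa).sum() / c.size
        # Nudge the weight of the scaled L1 term g = tau * L1Sparsity(W):
        # raise tau while too many coefficients are active, lower it otherwise
        algorithm.g.scalar += self.beta * (sparsity - self.desired_sparsity)
        self.sparsity_vals.append(sparsity)
        self.reg_param_vals.append(algorithm.g.scalar)
        # The actual FISTA step size stays at the usual constant 1/L
        return 1.0 / algorithm.f.L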

[19]:
from SparsityControl import ControlledSparsity, DesiredSparsity
[20]:
reg_weight = 1e-2 # Initial value
# Automated tuning of regularization parameter
desired_sparsity = 0.3 # Let's require that 30% of the wavelet coefficients must be "meaningful"
sparsity_ctrl = ControlledSparsity(desired_sparsity=desired_sparsity)
fista_controlled = FISTA(initial=ig.allocate(0), f=f, g=reg_weight*g, step_size=sparsity_ctrl, max_iteration=500)
[21]:
# Sparsity-based stopping rule
sparsity_stop = DesiredSparsity(print_stuff=2, rel_change_tol=5e-4)
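
A stopping rule can similarly be sketched on top of CIL’s Callback base class; raising StopIteration inside a callback terminates algorithm.run(). The sketch below checks only the relative change of the iterates, whereas the shipped DesiredSparsity also tracks the sparsity level:

from cil.optimisation.utilities.callbacks import Callback

class DesiredSparsityStopSketch(Callback):
    def __init__(self, rel_change_tol=5e-4):
        super().__init__()
        self.rel_change_tol = rel_change_tol
        self.x_prev = None

    def __call__(self, algorithm):
        x = algorithm.solution
        if self.x_prev is not None:
            rel_change = (x - self.x_prev).norm() / x.norm()
            if rel_change < self.rel_change_tol:
                raise StopIteration  # stops the run early
        self.x_prev = x.clone()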
[22]:
fista_controlled.run(500, callbacks=[sparsity_stop])
Iteration : 5 | Current sparsity: 0.973 | Regularization param.: 9.56e-03 | Desired sparsity: 0.300 | Difference: 0.673 | Beta: 5.90e-05 | Relative change: 8.82e-02
Iteration : 10 | Current sparsity: 0.965 | Regularization param.: 9.76e-03 | Desired sparsity: 0.300 | Difference: 0.665 | Beta: 5.90e-05 | Relative change: 5.13e-02
Iteration : 15 | Current sparsity: 0.950 | Regularization param.: 9.95e-03 | Desired sparsity: 0.300 | Difference: 0.650 | Beta: 5.90e-05 | Relative change: 3.04e-02
Iteration : 20 | Current sparsity: 0.931 | Regularization param.: 1.01e-02 | Desired sparsity: 0.300 | Difference: 0.631 | Beta: 5.90e-05 | Relative change: 1.78e-02
Iteration : 25 | Current sparsity: 0.908 | Regularization param.: 1.03e-02 | Desired sparsity: 0.300 | Difference: 0.608 | Beta: 5.90e-05 | Relative change: 1.17e-02
Iteration : 30 | Current sparsity: 0.884 | Regularization param.: 1.05e-02 | Desired sparsity: 0.300 | Difference: 0.584 | Beta: 5.90e-05 | Relative change: 8.16e-03
Iteration : 35 | Current sparsity: 0.857 | Regularization param.: 1.07e-02 | Desired sparsity: 0.300 | Difference: 0.557 | Beta: 5.90e-05 | Relative change: 6.29e-03
Iteration : 40 | Current sparsity: 0.826 | Regularization param.: 1.08e-02 | Desired sparsity: 0.300 | Difference: 0.526 | Beta: 5.90e-05 | Relative change: 5.30e-03
Iteration : 45 | Current sparsity: 0.792 | Regularization param.: 1.10e-02 | Desired sparsity: 0.300 | Difference: 0.492 | Beta: 5.90e-05 | Relative change: 4.57e-03
Iteration : 50 | Current sparsity: 0.757 | Regularization param.: 1.11e-02 | Desired sparsity: 0.300 | Difference: 0.457 | Beta: 5.90e-05 | Relative change: 3.99e-03
Iteration : 55 | Current sparsity: 0.722 | Regularization param.: 1.12e-02 | Desired sparsity: 0.300 | Difference: 0.422 | Beta: 5.90e-05 | Relative change: 3.58e-03
Iteration : 60 | Current sparsity: 0.687 | Regularization param.: 1.14e-02 | Desired sparsity: 0.300 | Difference: 0.387 | Beta: 5.90e-05 | Relative change: 3.27e-03
Iteration : 65 | Current sparsity: 0.653 | Regularization param.: 1.15e-02 | Desired sparsity: 0.300 | Difference: 0.353 | Beta: 5.90e-05 | Relative change: 3.03e-03
Iteration : 70 | Current sparsity: 0.621 | Regularization param.: 1.16e-02 | Desired sparsity: 0.300 | Difference: 0.321 | Beta: 5.90e-05 | Relative change: 2.84e-03
Iteration : 75 | Current sparsity: 0.592 | Regularization param.: 1.17e-02 | Desired sparsity: 0.300 | Difference: 0.292 | Beta: 5.90e-05 | Relative change: 2.66e-03
Iteration : 80 | Current sparsity: 0.565 | Regularization param.: 1.17e-02 | Desired sparsity: 0.300 | Difference: 0.265 | Beta: 5.90e-05 | Relative change: 2.50e-03
Iteration : 85 | Current sparsity: 0.540 | Regularization param.: 1.18e-02 | Desired sparsity: 0.300 | Difference: 0.240 | Beta: 5.90e-05 | Relative change: 2.38e-03
Iteration : 90 | Current sparsity: 0.518 | Regularization param.: 1.19e-02 | Desired sparsity: 0.300 | Difference: 0.218 | Beta: 5.90e-05 | Relative change: 2.26e-03
Iteration : 95 | Current sparsity: 0.498 | Regularization param.: 1.19e-02 | Desired sparsity: 0.300 | Difference: 0.198 | Beta: 5.90e-05 | Relative change: 2.15e-03
Iteration : 100 | Current sparsity: 0.480 | Regularization param.: 1.20e-02 | Desired sparsity: 0.300 | Difference: 0.180 | Beta: 5.90e-05 | Relative change: 2.05e-03
Iteration : 105 | Current sparsity: 0.464 | Regularization param.: 1.20e-02 | Desired sparsity: 0.300 | Difference: 0.164 | Beta: 5.90e-05 | Relative change: 1.97e-03
Iteration : 110 | Current sparsity: 0.449 | Regularization param.: 1.21e-02 | Desired sparsity: 0.300 | Difference: 0.149 | Beta: 5.90e-05 | Relative change: 1.89e-03
Iteration : 115 | Current sparsity: 0.436 | Regularization param.: 1.21e-02 | Desired sparsity: 0.300 | Difference: 0.136 | Beta: 5.90e-05 | Relative change: 1.82e-03
Iteration : 120 | Current sparsity: 0.425 | Regularization param.: 1.22e-02 | Desired sparsity: 0.300 | Difference: 0.125 | Beta: 5.90e-05 | Relative change: 1.75e-03
Iteration : 125 | Current sparsity: 0.415 | Regularization param.: 1.22e-02 | Desired sparsity: 0.300 | Difference: 0.115 | Beta: 5.90e-05 | Relative change: 1.69e-03
Iteration : 130 | Current sparsity: 0.405 | Regularization param.: 1.22e-02 | Desired sparsity: 0.300 | Difference: 0.105 | Beta: 5.90e-05 | Relative change: 1.63e-03
Iteration : 135 | Current sparsity: 0.397 | Regularization param.: 1.23e-02 | Desired sparsity: 0.300 | Difference: 0.097 | Beta: 5.90e-05 | Relative change: 1.58e-03
Iteration : 140 | Current sparsity: 0.390 | Regularization param.: 1.23e-02 | Desired sparsity: 0.300 | Difference: 0.090 | Beta: 5.90e-05 | Relative change: 1.53e-03
Iteration : 145 | Current sparsity: 0.383 | Regularization param.: 1.23e-02 | Desired sparsity: 0.300 | Difference: 0.083 | Beta: 5.90e-05 | Relative change: 1.48e-03
Iteration : 150 | Current sparsity: 0.377 | Regularization param.: 1.23e-02 | Desired sparsity: 0.300 | Difference: 0.077 | Beta: 5.90e-05 | Relative change: 1.43e-03
Iteration : 155 | Current sparsity: 0.371 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.071 | Beta: 5.90e-05 | Relative change: 1.39e-03
Iteration : 160 | Current sparsity: 0.367 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.067 | Beta: 5.90e-05 | Relative change: 1.34e-03
Iteration : 165 | Current sparsity: 0.363 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.063 | Beta: 5.90e-05 | Relative change: 1.30e-03
Iteration : 170 | Current sparsity: 0.358 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.058 | Beta: 5.90e-05 | Relative change: 1.26e-03
Iteration : 175 | Current sparsity: 0.354 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.054 | Beta: 5.90e-05 | Relative change: 1.22e-03
Iteration : 180 | Current sparsity: 0.350 | Regularization param.: 1.24e-02 | Desired sparsity: 0.300 | Difference: 0.050 | Beta: 5.90e-05 | Relative change: 1.18e-03
Iteration : 185 | Current sparsity: 0.347 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.047 | Beta: 5.90e-05 | Relative change: 1.15e-03
Iteration : 190 | Current sparsity: 0.343 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.043 | Beta: 5.90e-05 | Relative change: 1.11e-03
Iteration : 195 | Current sparsity: 0.341 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.041 | Beta: 5.90e-05 | Relative change: 1.08e-03
Iteration : 200 | Current sparsity: 0.337 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.037 | Beta: 5.90e-05 | Relative change: 1.04e-03
Iteration : 205 | Current sparsity: 0.335 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.035 | Beta: 5.90e-05 | Relative change: 1.01e-03
Iteration : 210 | Current sparsity: 0.332 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.032 | Beta: 5.90e-05 | Relative change: 9.84e-04
Iteration : 215 | Current sparsity: 0.329 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.029 | Beta: 5.90e-05 | Relative change: 9.56e-04
Iteration : 220 | Current sparsity: 0.327 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.027 | Beta: 5.90e-05 | Relative change: 9.28e-04
Iteration : 225 | Current sparsity: 0.325 | Regularization param.: 1.25e-02 | Desired sparsity: 0.300 | Difference: 0.025 | Beta: 5.90e-05 | Relative change: 9.03e-04
Iteration : 230 | Current sparsity: 0.323 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.023 | Beta: 5.90e-05 | Relative change: 8.78e-04
Iteration : 235 | Current sparsity: 0.321 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.021 | Beta: 5.90e-05 | Relative change: 8.51e-04
Iteration : 240 | Current sparsity: 0.320 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.020 | Beta: 5.90e-05 | Relative change: 8.27e-04
Iteration : 245 | Current sparsity: 0.318 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.018 | Beta: 5.90e-05 | Relative change: 8.01e-04
Iteration : 250 | Current sparsity: 0.317 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.017 | Beta: 5.90e-05 | Relative change: 7.79e-04
Iteration : 255 | Current sparsity: 0.315 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.015 | Beta: 5.90e-05 | Relative change: 7.57e-04
Iteration : 260 | Current sparsity: 0.314 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.014 | Beta: 5.90e-05 | Relative change: 7.35e-04
Iteration : 265 | Current sparsity: 0.313 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.013 | Beta: 5.90e-05 | Relative change: 7.13e-04
Iteration : 270 | Current sparsity: 0.312 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.012 | Beta: 5.90e-05 | Relative change: 6.93e-04
Iteration : 275 | Current sparsity: 0.310 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.010 | Beta: 5.90e-05 | Relative change: 6.73e-04
Iteration : 280 | Current sparsity: 0.310 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.010 | Beta: 5.90e-05 | Relative change: 6.53e-04
Iteration : 285 | Current sparsity: 0.309 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.009 | Beta: 5.90e-05 | Relative change: 6.33e-04
Iteration : 290 | Current sparsity: 0.308 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.008 | Beta: 5.90e-05 | Relative change: 6.16e-04
Iteration : 295 | Current sparsity: 0.307 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.007 | Beta: 5.90e-05 | Relative change: 5.99e-04
Iteration : 300 | Current sparsity: 0.306 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.006 | Beta: 5.90e-05 | Relative change: 5.83e-04
Iteration : 305 | Current sparsity: 0.305 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.005 | Beta: 5.90e-05 | Relative change: 5.68e-04
Iteration : 310 | Current sparsity: 0.305 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.005 | Beta: 5.90e-05 | Relative change: 5.55e-04
Iteration : 315 | Current sparsity: 0.304 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.004 | Beta: 5.90e-05 | Relative change: 5.43e-04
Iteration : 320 | Current sparsity: 0.303 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.003 | Beta: 5.90e-05 | Relative change: 5.30e-04
Iteration : 325 | Current sparsity: 0.302 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.002 | Beta: 5.90e-05 | Relative change: 5.17e-04
Iteration : 330 | Current sparsity: 0.302 | Regularization param.: 1.26e-02 | Desired sparsity: 0.300 | Difference: 0.002 | Beta: 5.90e-05 | Relative change: 5.05e-04
Desired sparsity reached after 332 iterations!
[23]:
x_controlled = fista_controlled.solution
show2D([x_fista, x_controlled], title=["Fixed reg. param.", "Automated reg. param."], fix_range=(0,0.1), size=(10,20));
../../_images/demos_controlledWaveletSparsity_2d_40_0.png
[24]:
N = fista_controlled.iterations[-1]
fig, ax = plt.subplots(2)
ax[0].plot(sparsity_ctrl.sparsity_vals, label='Current sparsity')
ax[0].hlines(sparsity_ctrl.desired_sparsity, xmin=0, xmax=N, colors='r', label='Desired sparsity')
ax[0].set_ylabel("Sparsity level")
ax[0].legend()

ax[1].semilogy(sparsity_ctrl.reg_param_vals)
ax[1].set_ylabel("Regularization parameter")
ax[1].set_xlabel("Iteration")
for a in ax.flat:
    a.label_outer()
../../_images/demos_controlledWaveletSparsity_2d_41_0.png

Here we can see how increasing the regularization parameter (lower plot) drives the sparsity level down towards the desired level (top plot). The iteration can be stopped once the desired level has been reached (and the relative change is small enough).

Notice also how the tuning is adaptive, in the sense that as we get closer to the desired sparsity level, the incremental changes in the regularization parameter get finer.

Limitations of the method#

The method does not fully automate the choice of regularization parameter, as the desired sparsity level and some of the tuning parameters still need to be chosen. However, choosing a good level of sparsity is much easier: the values are clearly bounded between 0 and 1 (corresponding to 0% and 100% of coefficients being “meaningful”) and the tuning parameters mostly affect the speed of the method. Moreover, with a bit of experience it is relatively easy to tell early on whether the choice will be good, and to restart the algorithm if needed.

How to pick desired sparsity?#

In the original paper it is assumed that a reference image or reconstruction can provide the desired sparsity level. In practice this depends a lot on the data and the reconstruction method, and an educated guess (cf. the Stetson-Harrison method) often works fine.
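
For instance, if a reference reconstruction were available, its sparsity level could be measured directly with the quantities defined earlier (x_ref below is hypothetical, e.g. a filtered backprojection image):

Wx_ref = W.direct(x_ref)  # x_ref: hypothetical reference reconstruction
desired_sparsity = (Wx_ref.abs().as_array() > kappa).sum() / Wlen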

For example, a TV-regularized solution will be much smoother than a filtered backprojection, and hence its reference sparsity will be lower. In general, values around 25-40% seem to work well; very rarely does the sparsity level exceed 60% with wavelets. Other representation systems (like shearlets) usually require higher levels.