Source code for cil.optimisation.functions.TotalVariation
#  Copyright 2020 United Kingdom Research and Innovation
#  Copyright 2020 The University of Manchester
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
#  Authors:
#  CIL Developers, listed at: https://github.com/TomographicImaging/CIL/blob/master/NOTICE.txt
#  Claire Delplancke (University of Bath)

from cil.optimisation.functions import Function, IndicatorBox, MixedL21Norm, MixedL11Norm
from cil.optimisation.operators import GradientOperator
import numpy as np
from numbers import Number
import warnings
import logging
from cil.utilities.errors import InPlaceError

log = logging.getLogger(__name__)
class TotalVariation(Function):

    r""" Total variation Function

    .. math:: \mathrm{TV}(u) := \|\nabla u\|_{2,1} = \sum \|\nabla u\|_{2},\, (\mbox{isotropic})

    .. math:: \mathrm{TV}(u) := \|\nabla u\|_{1,1} = \sum \|\nabla u\|_{1},\, (\mbox{anisotropic})

    Notes
    -----

    The :code:`TotalVariation` (TV) :code:`Function` acts as a composite function, i.e.,
    the composition of the :class:`.MixedL21Norm` function and the :class:`.GradientOperator` operator,

    .. math:: f(u) = \|u\|_{2,1} \Rightarrow (f\circ\nabla)(u) = f(\nabla u) = \mathrm{TV}(u)

    In that case, the proximal operator of TV does not have an exact solution and we use an iterative
    algorithm to solve:

    .. math:: \mathrm{prox}_{\tau \mathrm{TV}}(b) := \underset{u}{\mathrm{argmin}} \frac{1}{2\tau}\|u - b\|^{2} + \mathrm{TV}(u)

    The algorithm used for the proximal operator of TV is the Fast Gradient Projection (FGP) algorithm
    (a FISTA variant) applied to the *dual problem* of the problem above, see :cite:`BeckTeboulle_b`,
    :cite:`BeckTeboulle_a`, :cite:`Zhu2010`.

    See also "Multicontrast MRI Reconstruction with Structure-Guided Total Variation", Ehrhardt, Betcke, 2016.

    Parameters
    ----------

    max_iteration : :obj:`int`, default = 10
        Maximum number of iterations for the FGP algorithm used to solve the dual problem
        of the Total Variation Denoising problem (ROF). If warm_start=False, this should be
        around 100, or larger, with a set tolerance.

    tolerance : :obj:`float`, default = None
        Stopping criterion for the FGP algorithm used to solve the dual problem
        of the Total Variation Denoising problem (ROF). If the difference between iterates in the
        FGP algorithm is less than the tolerance, the iterations end before the max_iteration number.

        .. math:: \|x^{k+1} - x^{k}\|_{2} < \mathrm{tolerance}

    correlation : :obj:`str`, default = `Space`
        Correlation between `Space` and/or `SpaceChannels` for the :class:`.GradientOperator`.

    backend : :obj:`str`, default = `c`
        Backend to compute the :class:`.GradientOperator`.

    lower : :obj:`float`, default = None
        A constraint is enforced using the :class:`.IndicatorBox` function, e.g., :code:`IndicatorBox(lower, upper)`.

    upper : :obj:`float`, default = None
        A constraint is enforced using the :class:`.IndicatorBox` function, e.g., :code:`IndicatorBox(lower, upper)`.

    isotropic : :obj:`boolean`, default = True
        Use either the isotropic or the anisotropic definition of TV.

        .. math:: |x|_{2} = \sqrt{x_{1}^{2} + x_{2}^{2}},\, (\mbox{isotropic})

        .. math:: |x|_{1} = |x_{1}| + |x_{2}|,\, (\mbox{anisotropic})

    split : :obj:`boolean`, default = False
        Splits the Gradient into a spatial gradient and a spectral or temporal gradient for multichannel data.

    strong_convexity_constant : :obj:`float`, default = 0
        A strongly convex term weighted by the :code:`strong_convexity_constant` (:math:`\gamma`) parameter
        is added to the Total variation. Now the :code:`TotalVariation` function is :math:`\gamma` - strongly
        convex and the proximal operator is

        .. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2\tau}\|u - b\|^{2} + \mathrm{TV}(u) + \frac{\gamma}{2}\|u\|^{2} \Leftrightarrow

        .. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2\frac{\tau}{1+\gamma\tau}}\|u - \frac{b}{1+\gamma\tau}\|^{2} + \mathrm{TV}(u)

    warm_start : :obj:`boolean`, default = True
        If set to True, the FGP algorithm used to solve the dual problem of the Total Variation Denoising
        problem (ROF) is initialised with the final value from the previous `proximal` call and not at zero.
        This allows the max_iteration value to be reduced to 5-10 iterations.
    Note
    ----

    With warm_start set to the default, True, the TV function will keep in memory the range of the gradient
    of the image to be denoised, i.e. N times the dimensionality of the image. This increases the memory
    requirements. However, during the evaluation of `proximal` the memory requirements will be unchanged,
    as the same amount of memory would otherwise need to be allocated and deallocated.

    Note
    ----

    In the case where the Total variation becomes a :math:`\gamma` - strongly convex function, i.e.,

    .. math:: \mathrm{TV}(u) + \frac{\gamma}{2}\|u\|^{2}

    :math:`\gamma` should be relatively small, so that the second term above does not act as an additional
    regulariser. For more information, see :cite:`Rasch2020`, :cite:`CP2011`.

    Examples
    --------

    .. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2}\|u - b\|^{2} + \alpha\|\nabla u\|_{2,1}

    >>> alpha = 2.0
    >>> TV = TotalVariation()
    >>> sol = TV.proximal(b, tau = alpha)

    Examples
    --------

    .. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2}\|u - b\|^{2} + \alpha\|\nabla u\|_{1,1} + \mathbb{I}_{C}(u)

    where :math:`C = \{1.0\leq u\leq 2.0\}`.

    >>> alpha = 2.0
    >>> TV = TotalVariation(isotropic=False, lower=1.0, upper=2.0)
    >>> sol = TV.proximal(b, tau = alpha)

    Examples
    --------

    .. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2}\|u - b\|^{2} + (\alpha\|\nabla u\|_{2,1} + \frac{\gamma}{2}\|u\|^{2})

    >>> alpha = 2.0
    >>> gamma = 1e-3
    >>> TV = alpha * TotalVariation(isotropic=False, strong_convexity_constant=gamma)
    >>> sol = TV.proximal(b, tau = 1.0)

    """

    def __init__(self,
                 max_iteration=10,
                 tolerance=None,
                 correlation="Space",
                 backend="c",
                 lower=None,
                 upper=None,
                 isotropic=True,
                 split=False,
                 strong_convexity_constant=0,
                 warm_start=True):

        super(TotalVariation, self).__init__(L=None)

        # Regularising parameter = alpha
        self.regularisation_parameter = 1.

        self.iterations = max_iteration
        self.tolerance = tolerance

        # Total variation correlation (isotropic=Default)
        self.isotropic = isotropic

        # correlation space or spacechannels
        self.correlation = correlation
        self.backend = backend

        # Define orthogonal projection onto the convex set C
        if lower is None:
            lower = -np.inf
        if upper is None:
            upper = np.inf
        self.lower = lower
        self.upper = upper
        self.projection_C = IndicatorBox(lower, upper).proximal

        # Setup GradientOperator as None.
        # This is to avoid the domain argument in the __init__.
        self._gradient = None
        self._domain = None

        # splitting Gradient
        self.split = split

        # For the warm_start functionality
        self.warm_start = warm_start
        self._p2 = None

        # Strong convexity for TV
        self.strong_convexity_constant = strong_convexity_constant

        # Define Total variation norm
        if self.isotropic:
            self.func = MixedL21Norm()
        else:
            self.func = MixedL11Norm()

    def _get_p2(self):
        r"""The initial value for the dual in the proximal calculation - allocated to zero in the case of
        warm_start=False, or initialised as the last iterate seen in the proximal calculation in the case
        warm_start=True."""

        if self._p2 is None:
            return self.gradient_operator.range_geometry().allocate(0)
        else:
            return self._p2

    @property
    def regularisation_parameter(self):
        return self._regularisation_parameter

    @regularisation_parameter.setter
    def regularisation_parameter(self, value):
        if not isinstance(value, Number):
            raise TypeError("regularisation_parameter: expected a number, got {}".format(type(value)))
        self._regularisation_parameter = value

    def __call__(self, x):
        r""" Returns the value of the TotalVariation function at :code:`x`."""

        try:
            self._domain = x.geometry
        except:
            self._domain = x

        # Compute the Lipschitz constant provided that the domain is not None.
        # The Lipschitz constant depends on the GradientOperator, which is configured only if the domain is not None.
        if self._L is None:
            self.calculate_Lipschitz()

        if self.strong_convexity_constant > 0:
            strongly_convex_term = (self.strong_convexity_constant / 2) * x.squared_norm()
        else:
            strongly_convex_term = 0

        return self.regularisation_parameter * self.func(self.gradient_operator.direct(x)) + strongly_convex_term
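A minimal usage sketch of the evaluation above (not part of the library source; the 64x64 geometry and the random test image are illustrative assumptions):

# Evaluate the TV value of an image: the call configures the GradientOperator from
# x.geometry and then applies MixedL21Norm to the gradient of x.
from cil.framework import ImageGeometry
from cil.optimisation.functions import TotalVariation

ig = ImageGeometry(voxel_num_x=64, voxel_num_y=64)
x = ig.allocate('random', seed=10)

TV = TotalVariation()   # regularisation_parameter defaults to 1.0
value = TV(x)           # regularisation_parameter * ||grad(x)||_{2,1}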
    def proximal(self, x, tau, out=None):
        r""" Returns the proximal operator of the TotalVariation function at :code:`x`."""
        if id(x) == id(out):
            raise InPlaceError(message="TotalVariation.proximal cannot be used in place")

        if self.strong_convexity_constant > 0:
            strongly_convex_factor = (1 + tau * self.strong_convexity_constant)
            x /= strongly_convex_factor
            tau /= strongly_convex_factor

        solution = self._fista_on_dual_rof(x, tau, out=out)

        if self.strong_convexity_constant > 0:
            x *= strongly_convex_factor
            tau *= strongly_convex_factor

        return solution
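A hedged usage sketch of `proximal` as TV denoising, mirroring the docstring examples (the geometry, the noisy image `b` and the weight `alpha` are illustrative, not library defaults):

# Solve argmin_u 1/2 ||u - b||^2 + alpha * TV(u) via the proximal operator.
from cil.framework import ImageGeometry
from cil.optimisation.functions import TotalVariation

ig = ImageGeometry(voxel_num_x=64, voxel_num_y=64)
b = ig.allocate('random', seed=5)                 # stands in for a noisy image

alpha = 0.1
TV = alpha * TotalVariation(max_iteration=100)    # __rmul__ folds alpha into regularisation_parameter
denoised = TV.proximal(b, tau=1.0)

# Note: proximal cannot run in place; passing out=b (so that out is x) raises InPlaceError.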
    def _fista_on_dual_rof(self, x, tau, out=None):
        r""" Runs the Fast Gradient Projection (FGP) algorithm to solve the dual problem of the
        Total Variation Denoising problem (ROF).

        .. math:: \min_{\|y\|_{\infty}\leq 1} \frac{1}{2}\|\nabla^{*} y + x \|^{2} - \frac{1}{2}\|x\|^{2}

        """
        try:
            self._domain = x.geometry
        except:
            self._domain = x

        # Compute the Lipschitz constant provided that the domain is not None.
        # The Lipschitz constant depends on the GradientOperator, which is configured only if the domain is not None.
        if self._L is None:
            self.calculate_Lipschitz()

        # initialise
        t = 1

        # dual variable - its content is overwritten during iterations
        p1 = self.gradient_operator.range_geometry().allocate(None)
        p2 = self._get_p2()
        tmp_q = p2.copy()

        # multiply tau by -1 * regularisation_parameter here so it's not recomputed every iteration
        # when tau is an array this is done in place, so it is reverted at the end
        if isinstance(tau, Number):
            tau_reg_neg = -self.regularisation_parameter * tau
        else:
            tau_reg_neg = tau
            tau.multiply(-self.regularisation_parameter, out=tau_reg_neg)

        if out is None:
            out = self.gradient_operator.domain_geometry().allocate(0)

        for k in range(self.iterations):

            t0 = t
            self.gradient_operator.adjoint(tmp_q, out=out)
            out.sapyb(tau_reg_neg, x, 1.0, out=out)
            self.projection_C(out, tau=None, out=out)

            self.gradient_operator.direct(out, out=p1)

            multip = (-self.L) / tau_reg_neg
            tmp_q.sapyb(1., p1, multip, out=tmp_q)

            if self.tolerance is not None and k % 5 == 0:
                p1 *= multip
                error = p1.norm()
                error /= tmp_q.norm()
                if error < self.tolerance:
                    break

            self.func.proximal_conjugate(tmp_q, 1.0, out=p1)

            t = (1 + np.sqrt(1 + 4 * t0 ** 2)) / 2

            p1.subtract(p2, out=tmp_q)
            tmp_q *= (t0 - 1) / t
            tmp_q += p1

            # switch p1 and p2 references
            tmp = p1
            p1 = p2
            p2 = tmp

        if self.warm_start:
            self._p2 = p2

        if self.tolerance is not None:
            log.info("Stop at %d iterations with tolerance %r", k, error)
        else:
            log.info("Stop at %d iterations.", k)

        # return tau to its original state if it was modified
        if id(tau_reg_neg) == id(tau):
            tau_reg_neg.divide(-self.regularisation_parameter, out=tau)

        return out
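To make the dual iteration concrete, here is a self-contained NumPy sketch of FGP on the dual ROF problem for 2D isotropic TV. It is illustrative only (simple forward-difference gradient, fixed step 1/(8*alpha), no warm start, tolerance, box constraints or sapyb-style in-place arithmetic), not the library implementation above:

# Solve argmin_u 1/2 ||u - b||^2 + alpha * TV(u) with FGP on the dual problem.
import numpy as np

def grad(u):
    # forward differences with Neumann boundary; returns shape (2, H, W)
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    # negative adjoint of grad, so that <grad(u), p> = -<u, div(p)>
    px, py = p
    dx = np.zeros_like(px); dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy = np.zeros_like(py); dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def fgp_tv_denoise(b, alpha, iterations=100):
    b = np.asarray(b, dtype=float)
    p = np.zeros((2,) + b.shape)          # dual variable
    q = p.copy()                          # extrapolated dual variable
    t = 1.0
    step = 1.0 / (8.0 * alpha)            # 8 bounds ||grad||^2 for 2D forward differences
    for _ in range(iterations):
        u = b + alpha * div(q)            # primal estimate from the dual variable
        cand = q + step * grad(u)         # gradient ascent step on the dual
        norm = np.maximum(1.0, np.sqrt((cand ** 2).sum(axis=0)))
        p_new = cand / norm               # project onto per-pixel unit 2-norm balls (isotropic TV)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        q = p_new + ((t - 1) / t_new) * (p_new - p)   # FISTA extrapolation
        p, t = p_new, t_new
    return b + alpha * div(p)

denoised = fgp_tv_denoise(np.random.rand(64, 64), alpha=0.1)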
    def convex_conjugate(self, x):
        r""" Returns the value of the convex conjugate of the TotalVariation function at :code:`x`."""
        return 0.0
    def calculate_Lipschitz(self):
        r""" Default value for the Lipschitz constant."""

        # Compute the Lipschitz parameter from the operator if possible
        # Leave it initialised to None otherwise
        self._L = (1. / self.gradient_operator.norm()) ** 2
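As a rough point of reference (an assumption about a typical 2D geometry with unit voxels, not a value computed in this file): the norm of a 2D finite-difference GradientOperator is bounded by sqrt(8), so the stored value (1/norm)**2 is close to 1/8. A short check sketch:

# Illustrative check of the operator norm used for the Lipschitz value.
from cil.framework import ImageGeometry
from cil.optimisation.operators import GradientOperator

ig = ImageGeometry(voxel_num_x=128, voxel_num_y=128)
G = GradientOperator(ig)
print(G.norm())              # power-method estimate, approximately sqrt(8) ~ 2.83
print((1.0 / G.norm())**2)   # approximately 1/8, the value stored in self._L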
    @property
    def gradient_operator(self):
        r""" GradientOperator is created if it is not instantiated yet. The domain of the `_gradient`
        is set in the `__call__` and `proximal` methods.
        """

        if self._domain is not None:
            self._gradient = GradientOperator(self._domain, correlation=self.correlation, backend=self.backend)
        else:
            raise ValueError("The domain of the TotalVariation is {}. Please use the __call__ or proximal methods first before calling gradient.".format(self._domain))
        return self._gradient

    def __rmul__(self, scalar):
        if not isinstance(scalar, Number):
            raise TypeError("scalar: Expected a number, got {}".format(type(scalar)))
        self.regularisation_parameter *= scalar
        return self
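A short sketch of what `__rmul__` implies for usage (illustrative; the scalar 0.05 is arbitrary). Note that scaling folds the scalar into `regularisation_parameter` and returns the same object rather than creating a new scaled function:

from cil.optimisation.functions import TotalVariation

TV = TotalVariation()
f = 0.05 * TV                        # calls TotalVariation.__rmul__
print(f is TV)                       # True: the original object was modified in place
print(f.regularisation_parameter)    # 0.05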