[1]:
# -*- coding: utf-8 -*-
# Copyright 2024
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authored by: Jack S. Callaghan (Bangor University, UK)
# Reviewed by: Margaret Duff (STFC-UKRI), Franck Vidal (STFC-UKRI)
Simulation using gVXR and CPU reconstruction using CIL of a twisted hexagonal object with helical flow channels#
This notebook is used to simulate X-ray CT scan projections and perform reconstruction on the simulated data.
The scan object is a twisted hexagonal geometry with helical flow channels (object.stl), and the scan parameters are taken from an existing scan (scan_param.xml). The scan parameters in this example are extracted from a Diondo d5 XCT scanner.
The aim is to create a dataset of projections similar to the experiment to:
practice the reconstruction methods
prepare the scripting for data import and processing when the raw projection data arrives
gVXR is used to simulate the projections, then the data is written to unsigned 16-bit raw format, which is then read in again and reconstructed, without a GPU, using CIL.
This notebook differs from a previous gVXR notebook https://github.com/JSCallaghan/CIL-User-Showcase/blob/main/006_gVXR2CIL/CBCT-acquisition-with-noise-and-reconstruction.ipynb in that it demonstrates
imaging a single material object created from an stl file
reading in and using scan parameters from an existing scan xml file
saving and then importing and processing raw projection data
reconstructing using CIL's SIRT and CGLS iterative algorithms with CPU projectors.
Installation#
environment: CIL CPU > conda create --name cilCPU -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12 astra-toolbox=*=py* ipykernel ipywidgets
package: gVXR > pip install --upgrade gvxr
package: k3d > conda install -c conda-forge k3d
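Before moving on, it is worth checking that the key packages resolve in the new environment. A minimal sanity check, not part of the original notebook (astra is only needed by the CIL plugin used later):
import importlib
for pkg in ("cil", "gvxrPython3", "k3d", "astra"):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "(no __version__ attribute)"))
    except ImportError as err:
        print(pkg, "not found:", err)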
Import modules#
[2]:
# CIL - tested on version 24.2.0
import cil
print(cil.__version__)
from cil.framework import AcquisitionData, AcquisitionGeometry, DataContainer
from cil.utilities.display import show_geometry, show2D
from cil.utilities.jupyter import islicer
from cil.plugins.astra import ProjectionOperator
from cil.io import RAWFileWriter
from cil.optimisation.algorithms import CGLS, SIRT
from cil.optimisation.functions import IndicatorBox
from cil.plugins.astra import FBP
# gvxr
import k3d
from gvxrPython3 import gvxr
from gvxrPython3.utils import visualise
# other
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt
import numpy as np
cmapRecon = "inferno"
from IPython.display import Image
24.2.0
spekpy is not install, you won't be able to load a beam spectrum using spekpy
xpecgen is not install, you won't be able to load a beam spectrum using xpecgen
Tue Jan 7 15:01:02 2025 (WW) Spekpy is not installed, try Xpecgen instead.
Tue Jan 7 15:01:02 2025 (WW) Xpecgen is not installed either.
Read data#
The scan parameters and object parameters are found in the xml and stl files, respectively, in the repository. The experimental data is very large, so for the simulation a reduction factor is applied to the projection resolution, and a smaller number of projection angles are used.
First the user specifies the file locations and the data reduction factor:
[3]:
# xml name
xmlName = 'scan_param.xml'
# stl name
stlName = 'object.stl'
# data reduction factor (scale down)
dataRed = 15
The XML data is read, setting the scan geometry:
[4]:
# convert xml to a nested dictionary
tree = ET.parse(xmlName)
d = {}
for group in tree.getroot():
    d[group.tag] = {}
    for item in group:
        if len(item):  # item has children: store them in a sub-dictionary
            d[group.tag][item.tag] = {}
            for subitem in item:
                d[group.tag][item.tag][subitem.tag] = subitem.text
        else:
            d[group.tag][item.tag] = item.text
# geometry data
numPixX = int(d['ScanParameter']['DetectorPixelX'])
numPixY = int(d['ScanParameter']['DetectorPixelY'])
souPos = float(d['Geometrie']['SourceObjectDist'])
detPos = float(d['Geometrie']['SourceDetectorDist'])
pixSiz = float(d['Recon']['ProjectionPixelSizeX'])
detOffX = float(d['Recon']['ProjectionCenterOffsetX'])
detOffY = float(d['Recon']['ProjectionCenterOffsetY'])
# simulation data - resolution decreased by factor of dataRed
numPixX = int(numPixX / dataRed)
numPixY = int(numPixY / dataRed)
pixSiz = pixSiz * dataRed
# scan data
scaVol = float(d['ScanParameter']['Voltage'])
numPro = int(d['Recon']['ProjectionCount'])
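For reference, the parsing above assumes a two-level element layout in scan_param.xml. A hypothetical minimal file matching the tags read above would look like this (the root tag name and all values are illustrative placeholders, not the real Diondo parameters):
exampleXml = """<Root>
  <ScanParameter>
    <DetectorPixelX>3000</DetectorPixelX>
    <DetectorPixelY>3000</DetectorPixelY>
    <Voltage>210</Voltage>
  </ScanParameter>
  <Geometrie>
    <SourceObjectDist>100.0</SourceObjectDist>
    <SourceDetectorDist>1000.0</SourceDetectorDist>
  </Geometrie>
  <Recon>
    <ProjectionPixelSizeX>0.127</ProjectionPixelSizeX>
    <ProjectionCenterOffsetX>0.0</ProjectionCenterOffsetX>
    <ProjectionCenterOffsetY>0.0</ProjectionCenterOffsetY>
    <ProjectionCount>3000</ProjectionCount>
  </Recon>
</Root>"""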
The stl file is read and the object is viewable in an interactive window:
[5]:
plot = k3d.plot()
with open(stlName, "rb") as model:
    plot += k3d.stl(model.read(), color=0xfdea4f)
plot.display()
print("Screenshot for online viewing:")
Image(url='object_screenshot.png')
Screenshot for online viewing:
[5]:
Simulate Projection#
The user can set the number of projections and the end angle. They also set the material of the object being scanned.
[6]:
# projection
numPro = 300
endAng = 360
# material
matNam = "ZrO2"
matDen = 5.68 # g/cm3
Using the CIL function show_geometry, we can visualise the scan parameters:
[7]:
# create geometry
ag = AcquisitionGeometry.create_Cone3D(source_position=[0,-souPos,0],
detector_position=[detOffX,detPos,detOffY])\
.set_panel(num_pixels=[numPixX,numPixY],pixel_size=pixSiz)\
.set_angles(angles=np.linspace(0,endAng,numPro,endpoint=False))
# display geometry
show_geometry(ag)
# image geometry
ig = ag.get_ImageGeometry()
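As placed above, the source sits at -souPos and the detector at +detPos along the beam axis, so the source-to-detector distance is souPos + detPos. A quick sketch of the implied cone-beam magnification and the default reconstructed voxel size (ag.magnification and ig.voxel_size_x are CIL properties):
# magnification and voxel size implied by the geometry constructed above
mag = (souPos + detPos) / souPos
print("magnification:", mag, "(CIL reports:", ag.magnification, ")")
print("effective voxel size (mm):", pixSiz / mag, "(CIL reports:", ig.voxel_size_x, ")")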
The geometry looks sensible, so we can set up gVXR:
[8]:
# Create an OpenGL context
window_id = 0
opengl_major_version = 4
opengl_minor_version = 5
backend = "OPENGL"
visible = False
gvxr.createWindow(window_id, visible, backend, opengl_major_version, opengl_minor_version)
# Setup Detector
gvxr.setDetectorNumberOfPixels(numPixX, numPixY)
gvxr.setDetectorPixelSize(pixSiz, pixSiz, "mm")
gvxr.setDetectorUpVector(0, 0, 1)
gvxr.setDetectorPosition(0, detPos, 0, "mm")
# Setup Source
gvxr.setSourcePosition(0,-souPos,0,"mm")
gvxr.usePointSource()
gvxr.addEnergyBinToSpectrum(scaVol, "keV", 1)
# Setup Model
gvxr.removePolygonMeshesFromSceneGraph()
gvxr.loadMeshFile("cell", stlName, "mm")
gvxr.setCompound("cell",matNam)
gvxr.setDensity("cell", matDen, "g/cm3")
Tue Jan 7 15:01:05 2025 ---- Create window (ID: 0)
Tue Jan 7 15:01:05 2025 ---- Request an interactive OpenGL context
Tue Jan 7 15:01:05 2025 ---- Initialise GLFW
Tue Jan 7 15:01:05 2025 (WW) GLFW error: X11: The DISPLAY environment variable is missing
Tue Jan 7 15:01:05 2025 (WW) Cannot initialise GLFW.
gvxrWarning: OpenGLWindow cannot be created.
Tue Jan 7 15:01:05 2025 (WW) Could not create a context
Tue Jan 7 15:01:05 2025 ---- Request a EGL context
Tue Jan 7 15:01:05 2025 ---- EGL client extensions:
Tue Jan 7 15:01:05 2025 ---- EGL_EXT_platform_base EGL_EXT_device_base EGL_EXT_device_enumeration EGL_EXT_device_query EGL_KHR_client_get_all_proc_addresses EGL_EXT_client_extensions EGL_KHR_debug EGL_KHR_platform_x11 EGL_EXT_platform_x11 EGL_EXT_platform_device EGL_MESA_platform_surfaceless EGL_EXT_explicit_device EGL_KHR_platform_wayland EGL_EXT_platform_wayland EGL_KHR_platform_gbm EGL_MESA_platform_gbm EGL_EXT_platform_xcb
Tue Jan 7 15:01:05 2025 ---- EGL clients supported by this host:
Tue Jan 7 15:01:05 2025 ---- - Surfaceless platform
Tue Jan 7 15:01:05 2025 ---- - X11 platform
Tue Jan 7 15:01:05 2025 ---- - Wayland platform
Tue Jan 7 15:01:05 2025 ---- - GBM platform
Tue Jan 7 15:01:05 2025 ---- - Device platform
Tue Jan 7 15:01:05 2025 ---- Initialise EGL
Tue Jan 7 15:01:05 2025 ---- Try to build an EGL display.
Tue Jan 7 15:01:05 2025 ---- Go through all the EGL devices to find a suitable display.
Tue Jan 7 15:01:05 2025 ---- Surfaceless have the lowest priority.
Tue Jan 7 15:01:05 2025 ---- Query the number of EGL devices
Tue Jan 7 15:01:05 2025 ---- Success
Tue Jan 7 15:01:05 2025 ---- Detected 5 EGL devices.
Tue Jan 7 15:01:05 2025 ---- Print the details here of every EGL device.
Tue Jan 7 15:01:05 2025 ---- Success
Tue Jan 7 15:01:05 2025 ---- Device 1/5:
Tue Jan 7 15:01:05 2025 ---- Device Extensions: EGL_NV_device_cuda EGL_EXT_device_drm EGL_EXT_device_drm_render_node EGL_EXT_device_query_name EGL_EXT_device_persistent_id
Tue Jan 7 15:01:05 2025 ---- Device 2/5:
Tue Jan 7 15:01:05 2025 ---- Device Extensions: EGL_NV_device_cuda EGL_EXT_device_drm EGL_EXT_device_drm_render_node EGL_EXT_device_query_name EGL_EXT_device_persistent_id
Tue Jan 7 15:01:05 2025 ---- Device 3/5:
Tue Jan 7 15:01:05 2025 ---- Device Extensions: EGL_EXT_device_drm EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Device 4/5:
Tue Jan 7 15:01:05 2025 ---- Device Extensions: EGL_EXT_device_drm EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Device 5/5:
Tue Jan 7 15:01:05 2025 ---- Device Extensions: EGL_MESA_device_software EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Device 0's extensions: EGL_NV_device_cuda EGL_EXT_device_drm EGL_EXT_device_drm_render_node EGL_EXT_device_query_name EGL_EXT_device_persistent_id
Tue Jan 7 15:01:05 2025 ---- Device 1's extensions: EGL_NV_device_cuda EGL_EXT_device_drm EGL_EXT_device_drm_render_node EGL_EXT_device_query_name EGL_EXT_device_persistent_id
Tue Jan 7 15:01:05 2025 ---- Device 2's extensions: EGL_EXT_device_drm EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Device 3's extensions: EGL_EXT_device_drm EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Device 4's extensions: EGL_MESA_device_software EGL_EXT_device_drm_render_node
Tue Jan 7 15:01:05 2025 ---- Try to build an EGL display for Platform device 1/5.
Tue Jan 7 15:01:05 2025 ---- Success.
Tue Jan 7 15:01:05 2025 ---- EGL extensions:
Tue Jan 7 15:01:05 2025 (WW) Retrieving EGL extensions failed
Tue Jan 7 15:01:05 2025 ---- EGL version: 1.5
Tue Jan 7 15:01:05 2025 ---- Bind the OpenGL API to EGL
Tue Jan 7 15:01:05 2025 ---- Create the context
Tue Jan 7 15:01:05 2025 ---- Create the surface
Tue Jan 7 15:01:05 2025 ---- Make the context current
Tue Jan 7 15:01:05 2025 ---- Initialise GLEW
Tue Jan 7 15:01:05 2025 ---- OpenGL version supported by this platform 4.5.0 NVIDIA 550.120
Tue Jan 7 15:01:05 2025 ---- OpenGL vendor:NVIDIA Corporation
Tue Jan 7 15:01:05 2025 ---- OpenGL renderer:Tesla V100-PCIE-32GB/PCIe/SSE2
Tue Jan 7 15:01:05 2025 ---- OpenGL version:4.5.0 NVIDIA 550.120
Tue Jan 7 15:01:05 2025 ---- Use OpenGL 4.5.
Tue Jan 7 15:01:05 2025 ---- Initialise the X-ray renderer if needed and if possible
Tue Jan 7 15:01:05 2025 ---- Initialise the renderer
Tue Jan 7 15:01:05 2025 ---- file_name: object.stl nb_faces: 39806 nb_vertices: 119418 bounding_box (in cm): (-1.09503, -1.10892, -1.24934) (1.10467, 1.09054, 1.22056)
Run the virtual scan using gVXR:
[9]:
gvxr.computeCTAcquisition("", # Where to save the projections
"", # Where to save the screenshots
numPro, # Total number of projections
0, # First angle
False, # Include the last angle
endAng, # Last angle
0, # Number of flat images
0, 0, 0, "mm", # Centre of rotation
*gvxr.getDetectorUpVector()) # Rotation axis
projSim = np.array(gvxr.getLastProjectionSet())
# Don't forget, we need to use flatfield normalisation on the radiographs to get the correct attenuation values!
# Because our flatfield and darkfield are perfect, all we need to do is divide by the total energy
projSim /= gvxr.getTotalEnergyWithDetectorResponse()
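For real data the flat and dark fields are not perfect, and the standard per-pixel correction would be applied before this normalisation. A minimal sketch with synthetic placeholder arrays standing in for measured data (proj_raw, flat and dark are not variables defined elsewhere in this notebook):
# standard flat/dark-field correction: T = (I - dark) / (flat - dark)
rng = np.random.default_rng(0)
proj_raw = rng.uniform(0.2, 0.9, size=(numPixY, numPixX))  # fake measured projection
flat = np.full((numPixY, numPixX), 1.0)                    # fake flat field
dark = np.full((numPixY, numPixX), 0.01)                   # fake dark field
transmission = (proj_raw - dark) / (flat - dark)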
Using the k3d package again, we can visualise the scan setup alongside one projection:
[10]:
plot = visualise(use_log=True)
plot.display()
print("Screenshot for online viewing:")
Image(url='projection_screenshot.png')
Screenshot for online viewing:
[10]:
We export the projections. This allows us to demonstrate saving and loading projections from gVXR, and loading and reconstructing gVXR simulations in CIL.
[11]:
# bit depth
bitDep = 16
# bit depth multiplier
bitMul = (2**bitDep)-1
# admin
filNam = "proj"
# write data to raw - CIL method (RAWFileWriter compresses to uint16,
# recording the scale and offset it applies)
imData = DataContainer(projSim, False)  # False: wrap the array without deep-copying
rawWrite = RAWFileWriter(imData, filNam+"_CIL", 'uint16')
rawWrite.write()
# write to raw - numpy
projSimNP = projSim * bitMul
projSimNP.astype('uint16').tofile(filNam+"_np.raw")
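A quick optional check that each raw file has the expected size of numPro × numPixY × numPixX samples at 2 bytes per uint16 value (a sketch, not part of the original notebook):
import os
expected = numPro * numPixX * numPixY * 2  # bytes: 2 per uint16 sample
for f in (filNam + "_CIL.raw", filNam + "_np.raw"):
    print(f, os.path.getsize(f), "bytes; expected", expected)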
Read projections#
We now read in the raw projections from the file we just made:
[12]:
# use simulated or simulated read/write projection data
dataSource = "simRW" # sim, simRW, exp
dataWriteMethod = "np" # CIL, np
match(dataSource):
    case "sim":
        print('using sim data')
        proj = projSim
    case "exp":
        print('using experimental data')
    case "simRW":
        print('using simRW data')
        # get total number of pixel values across all projections
        num_pixels = numPixX * numPixY * numPro
        match(dataWriteMethod):
            case "CIL":
                # Read the .raw file as a numpy array - CIL data
                with open(filNam+"_CIL.raw", "rb") as f:
                    data = np.fromfile(f, dtype=np.uint16, count=num_pixels)
                # normalise data - undo the scale and offset that
                # RAWFileWriter applied when compressing to uint16
                data = (data.astype(float) + 16918.20416302011) / 82453.20416302011
            case "np":
                # Read the .raw file as a numpy array - NP data
                with open(filNam+"_np.raw", "rb") as f:
                    data = np.fromfile(f, dtype=np.uint16, count=num_pixels)
                # normalise data back to [0, 1]
                data = data.astype(float) / bitMul
        # Reshape the data into a 3D array (num_images, height, width)
        projRead = data.reshape((numPro, numPixX, numPixY))
        # reallocate projections
        proj = projRead
using simRW data
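Note the size of error to expect from the numpy route: astype truncates when casting to uint16, so each stored value can be off by up to one quantisation step, 1/65535 ≈ 1.5e-5 of full scale. A quick check of that bound, reusing projSim and projRead from above:
# worst-case quantisation error for truncation to uint16 on data in [0, 1]
step = 1 / bitMul
print("one quantisation step:", step)
print("max abs round-trip error:", np.abs(projSim - projRead).max())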
We can compare the data we read in to the data we simulated. The uint16 round trip has introduced a small amount of quantisation noise:
[13]:
projDiff = 100*(projSim[0]-projRead[0])/projSim[0]
limit = max(abs(projDiff.min()),projDiff.max())
show2D([projSim[0], projRead[0], projDiff],title=["sim","sim_read_write_"+dataWriteMethod,"difference (in %)"],\
cmap=['inferno', 'inferno', 'seismic'],num_cols=3,fix_range=[(0,1),(0,1),(-limit,limit)], size=(9,3))
plt.figure(figsize=(3,3))
plt.hist(projDiff.ravel(), bins=20)
plt.title("Histogram of % difference")
plt.grid(True)
plt.show()
Reconstruction#
We can now reconstruct the simulation in CIL. We first visualise the acquisition data using the CIL islicer:
[14]:
# colormap
cmapSino = "inferno" # gray, inferno
# reduce precision (float64 -> float32)
proj_red = proj.astype(np.float32)
aqData = AcquisitionData(proj_red,False,ag)
islicer(aqData, cmap=cmapSino, direction=1)
print("Screenshot for online viewing:")
Image(url='islicer_screenshot.png')
Screenshot for online viewing:
[14]:
Working on the CPU, we cannot reconstruct the whole 3D volume for a cone beam geometry, as ASTRA's 3D projectors require a GPU. Instead we reconstruct a single slice:
[15]:
# get slice
dataSlice = aqData.get_slice(vertical='centre')
# log correction
dataSlice.log(out=dataSlice)
dataSlice *= -1
# set up the 2D fan-beam geometry
datageom = dataSlice.geometry
imgeom = datageom.get_ImageGeometry()
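The log correction above implements the Beer-Lambert law: for normalised transmission T = I/I0, the line integral of attenuation along each ray is -ln T, which is the quantity the projectors model. CIL also packages this conversion as a processor; an equivalent alternative, assuming its default white_level of 1 matches our normalisation:
# alternative to the manual -log above, using CIL's processor
from cil.processors import TransmissionAbsorptionConverter
conv = TransmissionAbsorptionConverter()
conv.set_input(aqData.get_slice(vertical='centre'))
dataSliceAlt = conv.get_output()  # should match dataSlice computed above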
ASTRA has forward and back projectors for 2D fan-beam geometries on the CPU, but not an FBP algorithm. Thus we use two basic iterative algorithms, CGLS and SIRT, to reconstruct the image.
[16]:
A = ProjectionOperator(imgeom, dataSlice.geometry, device='cpu')
# 'CGLS':
cgls = CGLS(initial=imgeom.allocate(), operator=A, data=dataSlice)
cgls.run(20)
# 'SIRT':
constraint = IndicatorBox(lower=0)
sirt = SIRT(initial=imgeom.allocate(), operator=A, data=dataSlice, constraint=constraint)
sirt.run(500)
[17]:
show2D([cgls.solution,sirt.solution], title=["CGLS","SIRT"], cmap=cmapRecon, num_cols=2, origin="upper", fix_range=(0, 0.1), size=(9,3))
[17]:
<cil.utilities.display.show2D at 0x7f0016b83170>
CGLS takes many fewer iterations to converge, as expected, and seems to give a sharper image.
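We can make this comparison quantitative by plotting the objective values recorded while the algorithms run; a sketch, assuming the default update_objective_interval of 1 so the objective is stored every iteration:
# compare convergence histories of the two algorithms
plt.figure(figsize=(4, 3))
plt.semilogy(cgls.objective, label="CGLS (20 iterations)")
plt.semilogy(sirt.objective, label="SIRT (500 iterations)")
plt.xlabel("iteration")
plt.ylabel("objective value")
plt.legend()
plt.show()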
Conclusion#
We have demonstrated:
simulating a cone beam CT scan using real scan parameters, on a single material object defined in an stl file
saving and loading the raw projections
reconstructing the central 2D slice using CGLS and SIRT in CIL with CPU-only projectors
the difference in behaviour of the CGLS and SIRT algorithms.