Creating the animal mummy 3D viewer

The Allard Pierson, which houses the collections of the University of Amsterdam, owns a group of animal mummies from ancient Egypt. They had never been studied in detail, so a research project was initiated in the summer of 2021 to answer the question of what exactly is inside the mummy wrappings. In the first step of the project, the mummies were taken to the Amsterdam University Medical Centre (UMC), where they were run through a CT scanner. This resulted in high-resolution image data of the interior of the mummies, which allowed specialists to identify the animal species and study the bones, the wrapping and any items that may have been included during the process of mummification. Over the last four months of 2021 we worked on a 3D viewer that allows museum visitors to interact with the scan data themselves. This project is still in progress, but a prototype was quickly developed and was on display in the exhibition Dierenmummies Doorgelicht ('animal mummies exposed') from 11 December 2021 until the Netherlands went back into lockdown at the end of 2021. In this blog post I'll discuss the process of creation so far and include some tips for anyone looking to visualise volumetric data, such as CT scans, in a 3D web viewer.

Volumetric display of crocodile mummy with cross-section tool active in the pilot version.

The idea

The idea of displaying the interior of mummies in a digital application that allows visitors to unpack them layer by layer is not new. A company named Interspectral has made a name for itself in the museum world with applications combining beautiful visualisations and intuitive user interaction on touchscreens. In the Netherlands, they have worked with the National Museum of Antiquities in Leiden, which also had its animal mummies CT scanned recently. Needless to say, we were inspired by their work. Although Interspectral offers their software for sale, as a university lab we are interested in learning how to create such an application ourselves using available open-source tools. We also wanted to add an extra feature not seen in the application offered by Interspectral: we wanted to give users not just the possibility to unwrap the mummy layer by layer, but also to let them interact with the CT scan data and study it in a manner akin to how professionals do. In the same vein, we wanted people to be able to snapshot and annotate their discoveries and send them to the museum.

The pilot

For the pilot developed for the temporary exhibition we did not intend to include all the features from the conceptual design. We wanted to get acquainted with the technology and the workflow, and to create a basic application. In the resulting prototype, users can switch between exterior and interior viewing modes, create cross-sections of the data and adjust the density display of the CT scan data. The annotation, snapshot and send tools have yet to be developed. Given the time constraints, we could not create models for all the animal mummies, but chose two iconic specimens: the crocodile mummy (APM17925) and a bird mummy (APM16999).


Allard Pierson

  • Ben van den Bercken - curator Ancient Egyptian collection
  • Martine Kilburn - exhibition project manager

4D Research Lab

  • Jitte Waagen - coordination
  • Tijm Lanjouw - development, modeling, design
  • Markus Stoffer - development

Amsterdam UMC

  • Mario Maas - professor of radiology
  • Nick Lobé - radiology lab technician
  • Roel Jansen - radiology lab technician
  • Zosja Stenchlak - radiology lab technician

Volumetric display of CT-scan data of the crocodile mummy with high-density materials isolated with the density slider tool.

3D Viewer choices

The first choice to be made was whether to build the application in a game engine like Unity or Unreal, or in a web-based framework. We went for the second option, as a quick review showed us some usable examples, and with an eye on future use of the technology we prefer browser-based applications: they make sharing the 3D models resulting from our projects easier.

In fact, many solutions already exist for online viewing of medical data from MRI, CT or other types of scanners, many of them open-source. This showed us the potential. However, since we wanted a completely custom user interface and needed to display not only CT scan data but also regular 3D mesh models with high-resolution textures of the exterior, we needed a more general approach. A code library with an application programming interface (API) gives us this flexibility. There are several widely used libraries for 3D display, but not all offer many options for displaying volumetric 3D data. In the end we settled on X3DOM, a viewer developed to display all kinds of (scientific) 3D data, which showed some promise with regard to volumetric 3D. The website contained an example that already did most of what we wanted in terms of letting the user make cross-sections and modify density. We could easily take it from there, or so we thought.

Data processing

When working with CT data, it is important to understand what CT scanners actually do and what kind of data they output. A CT scanner takes X-ray images of an object from multiple sides and uses computer algorithms to reconstruct 'slices' (cross-sections) of the object at predetermined intervals. The slices can be in the order of 200 μm (= 0.2 mm) apart, so the resulting data is of very high resolution. Each slice is an image with a fixed resolution, such as 512x512 pixels. A CT scan dataset is thus a series of 2D images positioned in 3D space. Unlike in regular digital images, the pixel values do not represent colour (e.g. RGB), but Hounsfield values, which express material density. A CT scan therefore allows us to distinguish materials of different densities: bone from fabric or metal. This makes it a very powerful analytical tool for material studies.
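Conceptually, such a dataset is just a stack of 2D slices indexed by a third coordinate. A minimal sketch in JavaScript of this "stack of slices" structure (the dimensions are illustrative, not those of our scans):

```javascript
// A CT dataset as a stack of 2D slices: value (x, y, z) is pixel
// (x, y) in slice z. One Int16Array per slice, since Hounsfield
// values are signed integers.
function makeVolume(width, height, depth) {
  const slices = Array.from({ length: depth }, () => new Int16Array(width * height));
  return {
    width, height, depth, slices,
    get(x, y, z) { return this.slices[z][y * this.width + x]; },
    set(x, y, z, v) { this.slices[z][y * this.width + x] = v; },
  };
}

// Illustrative: 300 slices of 512x512 pixels.
const vol = makeVolume(512, 512, 300);
vol.set(10, 20, 5, -1000); // -1000 HU = air
```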

The raw data was delivered to us by the Amsterdam UMC as DICOM files, fortunately a global standard for medical data that is readable by many applications. For processing we used the open-source package 3D Slicer, a very powerful piece of software created to process and visualise medical 2D and 3D data. I highly recommend it. First, the parts of the CT scan data we did not want to show had to be cut off – for instance the support the mummy was placed on during the scan. Second, the data had to be reduced in resolution for fast loading in the app. Third, the images with Hounsfield values had to be converted to regular RGB. We only became aware of this requirement during the process. Unfortunately, this means an immense compression of the data, as one RGB channel can only hold 256 values, while the Hounsfield scale runs from -1000 (air) to +30,000 (metals).
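The conversion from Hounsfield values to an 8-bit channel boils down to a linear rescale with clamping. A sketch of that step, using the extremes mentioned above as default window bounds (in practice a narrower window would preserve more contrast in the range of interest):

```javascript
// Linearly rescale a Hounsfield value into 0..255 for one RGB
// channel. Values outside [min, max] are clamped; precision within
// the window is inevitably lost.
function hounsfieldTo8bit(hu, min = -1000, max = 30000) {
  const clamped = Math.min(max, Math.max(min, hu));
  return Math.round(((clamped - min) / (max - min)) * 255);
}
```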

The processed data was exported to NRRD format, another universal standard used to exchange medical imaging data. With a custom code extension written by the X3DOM community, NRRD data is directly loadable in the 3D viewer and displayed as a volume. Without this extension, you have to create texture atlases of the CT scan data, which combine the individual slices into one very large image. This is X3DOM's default mode of operation, which we did not use.
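A texture atlas simply lays the slices out in a grid inside one large image, so the mapping from slice index to tile position is plain arithmetic. A sketch of that layout logic (the grid width is an arbitrary choice here):

```javascript
// Map slice index i to its pixel offset in a texture atlas that
// packs `cols` slices per row, left to right, top to bottom.
function atlasTile(i, cols, sliceWidth, sliceHeight) {
  const col = i % cols;
  const row = Math.floor(i / cols);
  return { x: col * sliceWidth, y: row * sliceHeight };
}
```

A renderer (or a shader) can invert this mapping to sample the volume at a given depth from the single atlas image.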

3D viewers and libraries

Ready-made medical viewers:

Libraries with volume display:

Processing software:

3D Slicer is excellent open-source software with plenty of training material available and an active community of users.

Animated CT scan slices and volumetric reconstruction of the bones.

Scanning the exteriors

The CT scans can be used to create very high-resolution 3D models, but they do not contain the original colour of the surface. Since we wanted to show the relation between the exterior appearance and the interior of a mummy, we needed to make separate 3D colour scans of the exterior. The scans were made on location in the Allard Pierson using photogrammetry, and a second scan was made with our latest acquisition: the Artec LEO. The LEO is a mobile 3D scanner based on structured-light technology, and we wanted to test it on these objects and compare the results with the photogrammetric scans. It is very fast and flexible (no wires attached), but in terms of both texture quality and 3D mesh resolution it does not match the quality of photogrammetric 3D reconstruction. We reduced the photogrammetric mesh models to 200,000 faces and created a 4K texture, which resulted in good-quality models with ample detail.

Photoscanning the crocodile mummy.
The photoscanned exterior of the crocodile mummy displayed in the animal mummy viewer.

Displaying volume

As said, CT scan data is nothing more than a series of cross-sections at different positions in an object. To create a 3D representation, these images are converted, interpolated and displayed as voxels, or volumetric pixels. This works no differently in a web-based viewer like X3DOM than in a professional piece of software like 3D Slicer. Using a range of rendering techniques, the display quality of the volume can be improved, for instance by modifying opacity and enhancing outlines, or by darkening crevices and lightening protruding areas. However, compared to the quality attainable in software like 3D Slicer, the rendering of volumes in X3DOM lags behind. This is not just a rendering issue, but also attributable to the reduction in resolution and the compression of Hounsfield units to just 256 values. Although X3DOM does offer ways to improve volume display, these are not compatible with the NRRD loader mentioned above. We may therefore have to reconsider our approach and avoid using NRRD after all.
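Under the hood, volume renderers such as these typically march a ray through the voxel grid and accumulate colour and opacity front to back; the opacity-related render styles essentially tweak the per-sample values fed into this accumulation. A simplified sketch of the compositing step (greyscale samples, not X3DOM's actual shader code):

```javascript
// Front-to-back alpha compositing of the samples taken along one
// viewing ray. Each sample has a colour (0..1 grey here) and an
// alpha derived from the voxel's density via a transfer function.
function compositeRay(samples) {
  let colour = 0, alpha = 0;
  for (const s of samples) {
    colour += (1 - alpha) * s.alpha * s.colour;
    alpha += (1 - alpha) * s.alpha;
    if (alpha >= 0.99) break; // early ray termination: ray is opaque
  }
  return { colour, alpha };
}
```

Making low-density wrappings more transparent in the transfer function, as the density slider does, lets rays reach the dense bone behind them.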

Volume rendering comparison. Left: in data visualisation software 3D slicer, right: in X3DOM webviewer.


The application is basically a single XHTML web page, which loads the models by referencing the external model files. X3DOM is a framework built around the X3D file standard, so the models need to be in X3D format, although recent versions can also handle the more common .gltf/.glb formats. The user interface and interaction are a custom design using HTML, CSS and JavaScript. The X3DOM API allows for easy integration of all the functionality with the models and the user interface, although quite some time was spent on learning the right way to reference the X3DOM functions and properties. We ran into some complications that are common when working with open source. For instance, pieces of code found on websites like GitHub or in online examples were sometimes not compatible with recent versions of X3DOM, which required the occasional time-consuming deep dive into the source code to fix. We managed, but it is clear that we sometimes hit our limits as self-taught programmers with many other interests besides the technology itself.

Animal mummy app prototype in the exhibition room, with touchscreen and projection on wall.


Although we successfully created an application that received a lot of positive responses during the opening of the exhibition, there are some critical notes to make. These mainly concern the display quality of the CT scan data. As the purpose of the display is not aesthetic but mainly to create a visualisation that is easily understood and interpreted by the user, this is something we will need to focus on in the upcoming months. We furthermore noted that the touch interaction with the 3D models could be improved, although we are limited by X3DOM's functionality in this regard. We will also start adding the other planned features, such as the annotation and snapshot tools that allow the user to make independent observations and interpretations about ancient artefacts. Finally, we will keep adding new mummies until all scanned specimens are accessible through the animal mummies viewer.