3DWorkSpace (b)log II – evaluation is key

Hugo Huurdeman, Jill Hilditch, Jitte Waagen

Why evaluation matters

From the outset, the 3DWorkSpace project has focused on evaluating the platform. Why? As a multi-user environment that should ultimately benefit education and research by providing tools to learn about 3D datasets, it is of paramount importance to understand how well our platform actually performs in this respect. This information is essential for checking usability, fine-tuning functionality, discussing the platform's potential for implementation in education and research and, of course, eventually assessing our efforts and investments and determining future directions.

From the start, we envisioned that we would like to find answers to various questions, such as:

  • To what extent does the platform support easy interaction with 3D datasets?
  • To what extent does it encourage more in-depth engagement with 3D datasets?
  • To what extent is 3DWorkSpace a viable platform for education or research?
  • What are regarded as the most useful features of the platform?
  • What potentially useful features are not yet incorporated?

To this end, we designed an evaluation strategy that includes a full range of potential users, from researchers to teachers to students, as well as a selected group of experts in the domain of ICT and education, digital archaeology and also the 3D Program team of the Smithsonian itself – the creators of the Voyager 3D toolset integrated into 3DWorkSpace.

Fig 1. 3DWorkSpace models page

Evaluation design: a plan in two parts

In order to collect useful evaluation data, we designed both quantitative and qualitative evaluations.

The first part was an expert evaluation, planned rather early in the development process: after a first concept of the 3DWS platform was online, but before the final phase of development, bug-fixing and fine-tuning. We invited a combination of hand-picked and suggested reviewers, selected for their respective expertise within and outside archaeology and their different specializations. We asked our reviewers to follow a structured online survey (using LimeSurvey) that included a statement of the project aims and a set of dedicated screencasts walking them through the 3DWS platform in its state of development at that moment. After a round of demographic questions, the participants engaged with the platform themselves. Subsequently, they filled out additional questionnaires about their experience of using 3DWorkSpace. These included the System Usability Scale (SUS), a validated usability survey (Brooke, 1996), as well as a set of more qualitatively oriented questions.

Six experts participated in the usability survey, meeting the generally accepted minimum number of participants needed to generate a quantitative SUS score (see e.g. Virzi, 1992). The subsequent qualitative questions were aimed at generating expert opinions on whether the 3DWS platform was fit for purpose, as well as at receiving feedback on implementation possibilities and detailed information on potential shortcomings. In this way, we hoped to gather useful information from a group with a broad perspective and in-depth knowledge of 3D, digital methods and heritage.

The second part of the evaluation was a focus group with students, i.e. one of the intended end-user audiences. This was organized in a later phase of the project using a more refined version of the 3DWS platform, which contained elaborate examples of learning pathways (specific sequential guided activities aimed at achieving competence) using the 3D model collections within the platform. Participants were students who responded to an advertisement for participation. As the results from the evaluation's introductory questionnaire indicate, these students had varied experience with 3D datasets in the course of their studies. During the focus group session, the students were introduced to the project and the platform, watched screencasts of 3DWorkSpace, and were then presented with a case study on forming traces in pottery production in antiquity. Finally, they completed usability questionnaires similar to those given to the experts. More importantly, we held a plenary discussion on the 3DWS platform to get their perspective. The nature and role of the learning pathways presented to the students are the focus of Blogpost 3 of the 3DWS project.

Fig 2. Screenshot of the Voyager app

Results of the expert assessment

The expert participants in our survey formed a diverse group in terms of their fields of study (Earth Sciences, Archaeology, Heritage Management, Computer Science, Information Science and Engineering), but most participants had obtained postgraduate qualifications in their chosen fields and, ultimately, worked in applied computer science/visualisation or archaeological research. After trying out the 3DWorkSpace toolset, the participants first filled out the SUS, which consists of a set of 10 statements (for instance: "I think I would like to use this system frequently" and "I found the system unnecessarily complex"). From the answers, given on a scale of 1 to 5, the final System Usability Scale score is calculated. This score ranges from 0 to 100, where a minimum score of 52 is considered "OK" and a score above 70 is considered "Good" (see e.g. the empirical evaluation by Bangor et al., 2008).

The System Usability Scale score for the 3DWorkSpace platform was 74.64, based on the questionnaires completed by our group of six experts. Thus, the usability of the platform may be considered good, even though individual questionnaire items indicated further potential for improvement.
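For readers unfamiliar with how a SUS score is derived from the raw answers, a minimal sketch of the standard Brooke (1996) scoring follows. Odd-numbered items are positively worded, even-numbered items negatively worded; the summed contributions (0-40) are scaled to the 0-100 range.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are multiplied by 2.5 to yield 0-100.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent agreeing with every positive item (5) and disagreeing
# with every negative one (1) reaches the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A neutral respondent (all 3s) lands exactly in the middle, at 50.0, which helps put the 74.64 reported above in perspective.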

This was further explored in the next part of the questionnaire, with specific open questions about the usability of 3DWorkSpace. Participants indicated that their overall experience with 3DWorkSpace was "positive", that the system was "easy to navigate" and that it "made the artifacts and models tangible". Specific features, such as the learning pathways, annotation features and possibilities for collection making, were generally seen as the most useful parts of 3DWorkSpace. The 'live' 3D model views were also seen as a useful addition, although several participants reported that they slowed down their browser, due to the hardware resources needed to show multiple 3D viewer panels simultaneously. Some concerns were raised regarding the workflows (for instance when adding learning pathways) and the ability to change other people's collections in the current prototype. A number of concrete suggestions for improving features were provided, for instance adding more visual elements to the now largely textual learning pathways. Additionally, feedback for improving the user interface and user experience was given, pointing at the naming of functionality, the contents of menus and the organization of features. These suggestions provide a wealth of useful feedback for future improvements of 3DWorkSpace.

The next part of the survey looked at the purposes of the 3DWorkSpace project and the ability of the created tools to meet the project's goals: to develop an online platform for interacting with 3D datasets and to explore its potential to offer structured guidance, stimulate discussion and advance knowledge publication. Generally, the experts deemed the tool appropriate for reaching the project's goals. One participant mentioned that it "certainly makes it easier to engage with 3D datasets through the viewer and the rich annotation and documentation system". Another referred to the possibility of allowing "multiple people to create their own annotations and interpretations of the same datasets" as a crucial element. This was underlined by another participant: the tool facilitates "co-creation of and transfer of knowledge", in both didactic and science dissemination contexts.

Specific observations made by the experts on the placing and visibility of models, information texts, additional hyperlinked content and more, were useful for considering how to maximize engagement with the integrated datasets. One expert asked if gamification of the learning pathways (questions and scoring) might encourage wider or more in-depth engagement with the 3D models, in educational and heritage-based contexts. Another comment bridging usability and engagement potential suggested including ‘info-tips’ to briefly show the functions and capacities of the tools on offer, or prompts to remind users of the different ways they could interact with the models and collections. Further, it was emphasized that the structured guidance contained in the learning pathways needed to be tested more systematically in a pedagogical context. A first step in this regard will be described in Blogpost 3 focusing on the learning pathways in 3DWorkSpace and their evaluation.

Many comments looked ahead to scaling up the 3DWS prototype and raised concerns regarding data integrity and data visibility for different user groups. Maintaining the integrity of uploaded collections with curated annotations and navigation was a key issue for considering reuse of the models and collections, as well as publication rights. Developing different user profiles with greater or lesser editing powers, and restricting access to collections containing unpublished 3D models to authenticated collaborators only, were also suggested as future avenues for safeguarding integrity on the platform.

Overall, the broad interdisciplinary appeal of the 3DWS platform was commented upon: in any field where sharing or inspecting 3D datasets moves knowledge and collaboration forward (such as medicine and the geosciences, among others), such a platform holds importance. The commenting feature and the ability to add notes on the 3D models were also found to open up important space for new dialogues and knowledge sharing, giving such a platform future appeal across a broad range of contexts.

Fig 3. 3DWorkSpace - Learning Pathway

Conclusion and discussion

This blogpost outlined one of the key focal points of the 3DWorkSpace project: evaluation. An expert evaluation of the platform resulted in a SUS usability score of 74, which represents "good" usability. In addition, the qualitative parts of this study highlighted many positive aspects, for instance the ease of navigation and the platform's facilitation of co-creation of knowledge. Naturally, potential points for improvement were also identified, for example regarding editing workflows and technical aspects of the platform.

In the next blogpost, we will shift our focus to the evaluation conducted with students and discuss the nature and role of the learning pathways introduced in 3DWorkSpace.


Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574–594. https://doi.org/10.1080/10447310802205776

Brooke, J. (1996). SUS: A "quick and dirty" usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & I. L. McClelland (Eds.), Usability Evaluation in Industry. Taylor & Francis.

Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34(4), 457–468. https://doi.org/10.1177/001872089203400407

3DWorkSpace (b)log I – from idea to platform

Jitte Waagen, Hugo Huurdeman, Jill Hilditch

Introduction: 3D datasets and open science

In the field of material heritage we see an exponential increase in 3D datasets, which provides the scientific community with interesting new possibilities. One of these possibilities is to share 3D models using online platforms, making use of so-called '3D viewers'. Such platforms can really add value to 3D datasets, because they allow for the presentation of scientific data in real-world dimensions, provide the possibility of annotating the models, and often feature tools to interact with the models. All these factors increase the impact of 3D datasets by making them insightful and by creating a versatile medium for communicating in-depth knowledge about those datasets. Some useful platforms have already been developed specifically for archaeological collections, such as Dynamic Collections, a 3D web infrastructure created by DARK Lab, Lund University (https://models.darklab.lu.se/dynmcoll/Dynamic_Collections/), or PURE3D (https://pure3d.eu/), which focuses on the publication and preservation of 3D scholarship.

However, the availability of online 3D datasets on such platforms also presents challenges: 3D datasets can be complex to understand and interact with, and presentation tools often lack possibilities for bi-directional knowledge transfer, which means that the insights and narratives generated from the user perspective are difficult to integrate and often ignored. From an Open Science perspective, it would be very interesting if platforms such as these provided novel tools to create a rich multi-user workspace. These might include creating your own versions of 3D models and personal annotated collections, as well as multi-author 3D models or collections, tools to enable discussions on those models and collections, and creating 3D-illustrated learning pathways.

The project that we introduce here, 3DWorkSpace, is an Open Science project funded by NWO (https://www.nwo.nl/projecten/203001026, see also the project announcement https://4dresearchlab.nl/3dworkspace-project-announcement/) and led by Jill Hilditch, Jitte Waagen, Tijm Lanjouw and Hugo Huurdeman. The goal of the project is to develop an online platform for interacting with 3D datasets and to explore its potential to offer structured guidance, stimulate discussion and advance knowledge publication. This project is not so much aimed at creating yet another platform; rather, it is intended as a pilot study that combines realizing a platform with case studies that explore its potential, benefits and shortcomings. These case studies focus on deployment in the classroom as well as on peer interaction in research and professional contexts.


The 3D viewer technology is something quite different from the eventual user-facing platform in which you'd like to integrate it. Depending on the case, building a viewer from scratch might not be a good idea, especially when many good examples already exist. Since our goal was to explore the potential for creating the platform and to evaluate that, we decided to work with an existing viewer. In our explorations, both within the 4D Research Lab and in the Tracing the Potter's Wheel project (https://tracingthewheel.eu/), we evaluated various 3D viewers and 3D web technologies, such as 3DHOP (https://3dhop.net/), Aton (https://osiris.itabc.cnr.it/aton/), and Potree (https://potree.github.io/). Each of these has its specific benefits and drawbacks in terms of features, usability and technology, but eventually we chose Smithsonian Voyager (https://smithsonian.github.io/dpo-voyager/), an open-source 3D toolset. We found especially attractive Voyager's focus on providing both a web-based 3D model viewer (Voyager Explorer) and an extensive authoring tool (Voyager Story). This authoring tool allows a user without specific technical experience to enrich 3D models via a web browser, adding, for instance, annotations as well as articles, and combining these into tours. These enriched 3D models can subsequently be published, although this requires some technical expertise, by integrating Voyager Explorer into a website. Given this capacity, Voyager ticked quite a few boxes on our wishlist. A final important benefit of Voyager is that behind its development are professionals working hard to bring their product to as many users as possible and to increase its flexibility. Direct communication with the 3D Program team of the Smithsonian's Digitization Program Office has been of fundamental value to the 3DWorkSpace project.

Having decided to use Voyager as the 3D-viewing building block of our platform, we turned to designing a platform in which it could be integrated, allowing us to reach our goals related to the open science approach of multi-authoring, learning and discussing. The challenge was not to fall into the trap of 'featureism', i.e. thinking up as many cool features as possible to integrate into the single most fantastic tool; this approach can lead to usability problems and implementation difficulties. Instead, we opted for a theoretical and methodological discussion, which led to a baseline set of features that would facilitate the type of use and case studies that we were working towards. Thus, in addition to the basic browsing and search functions of such a platform, users should be able to:

  • create and use their own 3DWorkSpace account (user authentication)
  • compare 3D models side-by-side using multiple viewer panels (comparing models)
  • annotate specific 3D models (annotating models)
  • create and save personal or public collections of 3D models (collection making)
  • add basic metadata to collections (describing collections)
  • add comments to collections and reply to comments (discussing collections)
  • create learning pathways for collections, incorporating textual content and hyperlinks to custom views of specific models (creating collection learning content)


Components of the 3DWorkSpace platform

The final 3DWorkSpace platform integrates the basic features we defined, such as collection making, annotation of 3D models and detailed discussions about collections. These features were implemented using three main elements: a storage server for 3D assets, the 3D viewer and authoring tools, and the 3DWorkSpace system itself.

The first crucial element of the 3DWorkSpace platform entailed the storage and retrieval of the required 3D assets. These assets include 3D models, but also related annotations and additional metadata about the models. As we aimed to create a bi-directional platform, these files needed not just to be statically stored, but also to be dynamically editable. Voyager directly supports the WebDAV protocol (https://en.wikipedia.org/wiki/WebDAV), which allows files to be edited directly on a web server. This WebDAV server therefore provided the foundation of the 3DWorkSpace platform.
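Because WebDAV reuses plain HTTP verbs, updating a stored asset essentially amounts to an HTTP PUT to the file's URL. As a rough illustration (the server URL and file path below are hypothetical, not actual 3DWorkSpace endpoints), a minimal helper could prepare such a request like this:

```python
from urllib.parse import urlparse

def build_webdav_put(url: str, body: bytes):
    """Prepare the pieces of a WebDAV PUT request. WebDAV extends plain
    HTTP, so replacing a file needs no special client library: a PUT to
    the file's URL with the new contents suffices on most servers."""
    parts = urlparse(url)
    headers = {
        "Content-Type": "application/json",
        "Content-Length": str(len(body)),
    }
    return parts.netloc, parts.path, headers

# Sending it with the standard library (hypothetical URL):
# import http.client
# host, path, headers = build_webdav_put(
#     "https://example.org/webdav/models/vase/scene.svx.json", b"{}")
# conn = http.client.HTTPSConnection(host)
# conn.request("PUT", path, body=b"{}", headers=headers)
```

In practice authentication headers and locking (WebDAV LOCK/UNLOCK) would also come into play, but the core read/write cycle is this simple.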

Second, we integrated the Voyager toolset into the 3DWorkSpace platform. Specifically, we made use of two elements of the toolset: Voyager Explorer, the web-based viewer for 3D models, and Voyager Story, the authoring tool for creating the necessary files to display 3D models together with contextual information in Voyager Explorer (using Voyager’s structured SVX-format). Enrichments created using Voyager Story were automatically saved on the previously described storage server and could be visualized using the Explorer element.

Finally, the third crucial element was the 3DWorkSpace system itself, which seamlessly integrates the Voyager tools. The system makes use of the Firebase app development platform (https://firebase.google.com/) for features such as user authentication and the associated databases. The user interface ('front-end') was created with the React framework (https://react.dev/), which builds interfaces from individual pieces called 'components'. An advantage of React is that the created components are highly adaptable, resilient and reusable, further contributing to the goals of the Open Science program that 3DWorkSpace is part of.

3DWorkSpace overview 

The three components discussed above led to the platform illustrated in Figures 1, 2 and 3. Users can browse and search models, collections and learning pathways. A unique feature of 3DWorkSpace is that users can always directly interact with the 3D models: in search result lists, in collections and in detail views (Figure 1). Adding multiple models makes it possible to directly compare model features, which is potentially useful for educational, research and professional purposes.

Within the detailed views of collections (Figure 2), users can view and interact with the associated 3D models, for instance by rotating models or by toggling visible annotations. Logged-in users can also edit 3D models and metadata using Voyager Story. These edits are saved directly on the WebDAV server providing storage, offering a seamless experience.

On the right-hand side of a collection, various tabs allow for inspecting and editing collection metadata, notes and comments, as well as learning pathways. These features allow for unique possibilities in terms of bi-directional knowledge transfer: for instance, discussions with peers or teachers. Furthermore, learning pathways (Figure 3) allow for directly linking learning content with specific model views, such as close-ups of forming traces on ceramics. In this case, the multi-model view also allows for direct comparisons. Learning pathways will be further discussed in Blogpost 3.

Challenges, solutions and future work

While the 3DWorkSpace platform prototype provides various novel features, a number of challenges arose during its design and implementation, including user roles, potential system requirements and the authoring of enriched 3D models.

User authentication is an important issue. In the prototype version of the 3DWorkSpace platform, users can register and log in to access commenting and editing features. However, there is no differentiation between roles; any user can directly edit or even delete any model, collection or associated data. In a future version of the platform, different user roles should be distinguished, including administrators (full editing access), editors (able to upload and edit models or collections) and commenters (only able to comment on collections). This is especially important for use of the platform in educational settings.

The unique feature of displaying multiple editable models on search result pages facilitates model comparisons, but also led to high memory usage: a potential problem for users with older or limited computers. We resolved this by including only six models on any given page (e.g. in a search result list). In future work, model display via Voyager can be further optimized, for instance by initially showing low-polygon versions of models, or by showing thumbnails that only load the full model after being clicked.
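The page cap described above boils down to simple chunking of a result list, so that at most six viewer instances are ever instantiated at once. A minimal sketch of the idea (the function name and page size of six are ours, matching the text):

```python
def paginate(items, page_size=6):
    """Split a result list into pages so that at most `page_size`
    3D viewer panels need to be rendered simultaneously."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

pages = paginate(list(range(14)))
# three pages: 6 + 6 + 2 models
```

The same chunking would apply whether the front-end fetches pages eagerly or lazily; only the page currently in view needs live viewer instances.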

A final challenge was the inherent complexity of the authoring tool, Voyager Story, for enriching 3D models with metadata, annotations and tours. Voyager Story has many in-depth features that make it an incredibly useful tool, but this complexity creates difficulties for first-time users. It was not feasible to resolve this within the scope of 3DWorkSpace, but we were able to alleviate it by creating extensive screencasts explaining the authoring process.


We hope this blogpost has provided you with some insights into our ideas and how they steered the development of 3DWorkSpace. We will discuss the platform evaluation and practical case studies in the next few blogposts!


Tracing the Potter's Wheel

Project Lead

  • Jill Hilditch - ACASA
  • Jitte Waagen - ACASA / 4D Research Lab

Concept, development, evaluation

  • Hugo Huurdeman - Open Universiteit
  • Tijm Lanjouw - 4D Research Lab
  • Caroline Campolo-Jeffra - Immersive Heritage

Technical development

  • Markus Stoffer - 4D Research Lab
  • Ivan Kisjes - CREATE
  • Saan Rashid - CREATE

Funded by NWO Open Science Fund (203.001.026)

Fig 1. 3DWorkSpace models page
Fig 2. 3DWorkSpace side-by-side comparison on a collection page
Fig 3. 3DWorkSpace - Learning Pathway

Creating the animal mummy 3D viewer

The Allard Pierson, where the collections of the University of Amsterdam are housed, owns a group of animal mummies from ancient Egypt. They had never been studied in detail, so a research project was initiated in the summer of 2021 to answer the question of what exactly is inside the mummy wrappings. In the first step of the project, the mummies were taken to the Amsterdam University Medical Centre (UMC), where they were run through a CT scanner. This resulted in high-resolution image data of the interior of the mummies, allowing specialists to identify the animal species and study the bones, wrapping and any items that may have been included during the process of mummification. Over the last four months of 2021 we worked on a 3D viewer that allows museum visitors to interact with the scan data themselves. This project is still in progress, but a prototype was quickly developed and was on display in the exhibition Dierenmummies Doorgelicht ('animal mummies exposed') from 11 December 2021 until the Netherlands went back into lockdown at the end of 2021. In this blogpost I'll discuss the process of creation so far, and include some tips for anyone looking to visualise volumetric data, such as CT scans, in a 3D web viewer.

Volumetric display of crocodile mummy with cross-section tool active in the pilot version.

The idea

The idea of displaying the interior of mummies in a digital application that allows visitors to unpack them layer by layer is not new. A company named Interspectral has made a name for itself in the museum world with applications comprising beautiful visualisations and intuitive user interaction on touchscreens. In the Netherlands, they have worked with the National Museum of Antiquities in Leiden, which also had its animal mummies CT scanned recently. Needless to say, we were inspired by their work. Although Interspectral offers their software for sale, we as a university lab are interested in learning how to create such an application ourselves using available open-source tools. We also wanted to add an extra feature not seen in Interspectral's application: we wanted to give users not just the possibility to unwrap the mummy layer by layer, but also to let them interact with the CT scan data and study it in a manner akin to how professionals do. In the same vein, we wanted people to be able to snapshot and annotate their discoveries, and send them to the museum.

The pilot

For the pilot that was developed for the temporary exhibition, we did not intend to include all the features from the conceptual design. We wanted to get acquainted with the technology and the workflow, and to create a basic application. In the resulting prototype, users can switch between exterior and interior viewing modes, create cross-sections of the data and adjust the display of CT scan data density. The annotation, snapshot and send tools are still to be developed. Within the time constraints, we could not create models for all animal mummies, but chose two iconic specimens: the crocodile mummy (APM17925) and a bird mummy (APM16999).


Allard Pierson

  • Ben van den Bercken - curator Ancient Egyptian collection
  • Martine Kilburn - exhibition project manager

4D Research Lab

  • Jitte Waagen - coordination
  • Tijm Lanjouw - development, modeling, design
  • Markus Stoffer - development

Amsterdam UMC

  • Mario Maas - professor radiology
  • Nick Lobé - radiology lab technician
  • Roel Jansen - radiology lab technician
  • Zosja Stenchlak - radiology lab technician
Volumetric display of CT-scan data of the crocodile mummy with high density materials isolated with the density slider tool.

3D Viewer choices

The first choice that had to be made was whether we wanted to create the application in a game engine like Unity or Unreal, or in a web-based framework. We went for the second option, as a quick review showed us some usable examples, and with an eye on future use of the technology we prefer browser-based applications, since they make sharing the 3D models resulting from our projects easier.

In fact, many solutions already exist for online viewing of medical data from MRI, CT or other types of scanners, many of them open-source. This showed us the potential. However, since we wanted a completely custom user interface, and wanted to display not only CT scan data but also regular 3D mesh models and high-resolution textures of the exterior, we needed a more general approach. A code library with an application programming interface (API) gives us this flexibility. For 3D display there are several widely used libraries, but not all have many options when it comes to displaying volumetric 3D data. In the end we settled on X3DOM, a viewer developed to display all kinds of (scientific) 3D data, which showed some promise with regard to volumetric 3D. The website contained an example that already did most of what we wanted in terms of allowing the user to make cross-sections and modify density. We could easily take it from there, or so we thought.

Data processing

When working with CT data, it is important to understand what CT scanners actually do and what kind of data they output. CT scanners take X-ray images from multiple sides of an object, and use computer algorithms to create 'slices' (cross-sections) of the object at predetermined distances. The slices can be in the order of 200 μm (= 0.2 mm) apart, so the data is of very high resolution. Each slice is an image with a fixed resolution, such as 512x512 pixels. CT scan datasets are thus series of 2D images positioned in 3D space. Unlike regular digital images, the pixel values do not represent colour values (e.g. RGB), but Hounsfield values, which represent material density. The CT scan therefore allows us to discriminate between materials with different densities: bone from fabric or metal. This makes it a very powerful analytical tool for material studies.

The raw data was delivered to us by the Amsterdam UMC as DICOM files, which fortunately is a global standard for medical data that is readable by many applications. For processing we used the open-source package 3D Slicer, a very powerful piece of software created to process and visualise medical 2D and 3D data; I highly recommend it. First, the parts of the CT scan data that we did not want to show had to be cut off, for instance the support the mummy was placed on during the scan. Second, the data had to be reduced in resolution for fast loading in the app. Third, the images with Hounsfield values had to be converted to regular RGB. We only became aware of this requirement during the process. Unfortunately, this means an immense compression of the data, as one RGB channel can only take 256 values, while the Hounsfield scale runs from -1000 (air) to +30,000 (metals).
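The compression described above corresponds to linearly rescaling a chosen Hounsfield window into 8-bit values: everything between the window bounds is squeezed into 256 levels, and everything outside is clamped. A simplified sketch (the window bounds here are illustrative, not the ones used in the actual conversion):

```python
def hu_to_8bit(hu, lo=-1000.0, hi=3000.0):
    """Linearly map a Hounsfield value into the 0-255 range of one
    8-bit channel, clamping values outside the chosen window.
    All density variation between lo and hi is squeezed into just
    256 levels, which is where the loss of resolution comes from."""
    clamped = max(lo, min(hi, hu))
    return round((clamped - lo) / (hi - lo) * 255)

print(hu_to_8bit(-1000))  # 0   (air)
print(hu_to_8bit(3000))   # 255 (dense material at the top of the window)
```

Choosing a narrower window preserves more contrast within the materials of interest (e.g. bone versus fabric) at the cost of clipping everything denser.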

The processed data was exported to the NRRD format, another universal standard used to exchange medical imaging data. With a custom code extension written by the X3DOM community, NRRD data is directly loadable in the 3D viewer and displayed as a volume. Without this code extension, it is necessary to create texture atlases of the CT scan data, which combine the individual slices into one very large image. This is the default functionality of X3DOM, which we didn't use.
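NRRD itself is a simple container: an ASCII header of 'key: value' lines, a blank line, then the voxel data. To illustrate the structure, here is a deliberately minimal reader that handles only raw, 8-bit data (real files should of course be read with a full library such as pynrrd):

```python
def read_minimal_nrrd(blob: bytes):
    """Parse a minimal NRRD file: ASCII 'key: value' header lines,
    a blank line, then raw voxel bytes. Only 'raw' encoding and
    8-bit voxels are handled in this sketch."""
    header_blob, _, data = blob.partition(b"\n\n")
    lines = header_blob.decode("ascii").splitlines()
    if not lines[0].startswith("NRRD"):
        raise ValueError("not an NRRD file")
    fields = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    if fields.get("encoding") != "raw":
        raise ValueError("only raw encoding supported in this sketch")
    sizes = [int(s) for s in fields["sizes"].split()]
    n = 1
    for s in sizes:
        n *= s
    return sizes, list(data[:n])

# A tiny hand-made 2x2x1 'volume':
blob = (b"NRRD0004\ntype: uchar\ndimension: 3\nsizes: 2 2 1\n"
        b"encoding: raw\n\n\x00\x40\x80\xff")
sizes, voxels = read_minimal_nrrd(blob)
# sizes == [2, 2, 1]; voxels == [0, 64, 128, 255]
```

The `sizes` field is what tells the viewer how to reassemble the flat byte stream into a stack of slices, i.e. the voxel grid.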

3D viewers and libraries

Ready made medical viewers:

Libraries with volume display:

Processing software:

3D Slicer is excellent open-source software with much training material available and an active community of users.

Animated CT scan slices and volumetric reconstruction of the bones.

Scanning the exteriors

The CT scans can be used to create very high-resolution 3D models, but they do not contain the original colour of a surface. Since we wanted to show the relation between the exterior appearance and the interior of a mummy, we needed to make separate 3D colour scans of the exterior. The scans were made on location in the Allard Pierson using photogrammetry, plus another scan with our latest acquisition: the Artec Leo. The Leo is a mobile 3D scanner using structured-light technology, and we wanted to test it on these objects and compare it with the photogrammetric scans. It is very fast and flexible (no wires attached), but in terms of both texture quality and 3D mesh resolution it does not match the quality of photogrammetric 3D reconstruction. We reduced the photogrammetric mesh models to 200,000 faces and created a 4K texture, which resulted in good-quality models with ample detail.

Photoscanning the crocodile mummy.
The photoscanned exterior of the crocodile mummy displayed in the animal mummy viewer.

Displaying volume

As noted, CT scan data is nothing more than a series of cross-sections at different locations in an object. To create a 3D representation, these images are converted, interpolated and displayed as voxels, or volumetric pixels. This works no differently in a web-based viewer like X3DOM than in a professional piece of software like 3D Slicer. Using a range of rendering techniques, the display quality of the volume can be improved, for instance by modifying opacity and enhancing outlines, or by darkening crevices and lightening protruding areas. However, compared to the quality attainable with software like 3D Slicer, the rendering of volumes in X3DOM lags behind. This is not just a rendering issue, but also attributable to the reduction in resolution and the compression of Hounsfield units to just 256 values. Although X3DOM does offer ways to improve volume display, these are not compatible with the NRRD loader mentioned above. We may therefore have to reconsider our approach, and avoid using NRRD after all.
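The opacity modification mentioned above is typically done with a transfer function: each voxel intensity is mapped to an opacity, so air becomes transparent and dense material such as bone becomes opaque. A minimal sketch, with illustrative thresholds that are assumptions rather than values from our renderer:

```javascript
// Sketch of a simple opacity transfer function for volume rendering.
// Intensities are the 0-255 values after the RGB compression; the
// airMax/boneMin thresholds are illustrative assumptions.
function opacityTransfer(intensity, airMax = 40, boneMin = 180) {
  if (intensity <= airMax) return 0.0;  // air: fully transparent
  if (intensity >= boneMin) return 1.0; // bone/dense material: opaque
  // soft tissue: ramp linearly between the two thresholds
  return (intensity - airMax) / (boneMin - airMax);
}

console.log(opacityTransfer(10));  // 0 (air disappears)
console.log(opacityTransfer(200)); // 1 (bone stays visible)
console.log(opacityTransfer(110)); // 0.5 (semi-transparent tissue)
```

With only 256 input values, the ramp between "transparent" and "opaque" is coarse, which is part of why the web rendering cannot match 3D Slicer's output.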

Volume rendering comparison. Left: in the data visualisation software 3D Slicer; right: in the X3DOM web viewer.


The application is basically a single XHTML web page, which loads the models by referencing external model files. X3DOM is a framework created around the X3D file standard, so the models need to be in X3D format, although recent versions can also handle the more common .gltf/.glb formats. The user interface and interaction are a custom design using HTML, CSS and JavaScript. The X3DOM API allows for easy integration of all the functionality with the models and the user interface, although quite some time was spent on learning the right way to reference X3DOM functions and properties. We also ran into some complications that are common when working with open source. For instance, pieces of code found on websites like GitHub or in online examples were sometimes not compatible with recent versions of X3DOM, which required the occasional time-consuming deep dive into the source code to fix. We managed, but it is clear that we sometimes hit our limitations as self-taught programmers with many other interests besides the technology itself.
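For illustration, the skeleton of such a page might look as follows. This is a minimal sketch following the X3DOM documentation, not the actual application code; the file names are placeholders:

```html
<!-- Minimal sketch of an X3DOM page loading an external model.
     File names are placeholders. -->
<html>
<head>
  <script src="x3dom.js"></script>
  <link rel="stylesheet" href="x3dom.css" />
</head>
<body>
  <x3d width="800px" height="600px">
    <scene>
      <!-- Reference to the external model file; recent X3DOM
           versions also accept .gltf/.glb here -->
      <inline url="mummy.x3d"></inline>
    </scene>
  </x3d>
</body>
</html>
```

The custom user interface then talks to the scene through the X3DOM API from regular JavaScript.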

Animal mummy app prototype in the exhibition room, with touchscreen and projection on wall.


Although we successfully created an application that received a lot of positive responses during the opening of the exhibition, there are some critical notes to make. These mainly concern the display quality of the CT scan data. As the purpose of the visualisation is not aesthetic but to be easily understood and interpreted by the user, this is something we will need to focus on in the upcoming months. We furthermore noted that the touch interaction with the 3D models could be improved, although we are limited by X3DOM's functionality in this regard. We will also start adding the other planned features, such as the annotation and snapshot tools that allow users to make independent observations and interpretations about ancient artefacts. Finally, we will keep adding new mummies until all scanned specimens are accessible through the animal mummies viewer.

3DWorkSpace project announcement

The NWO Open Science application that the 4D Research Lab submitted with main applicant Dr. Jill Hilditch of the Tracing the Potter's Wheel project (TPW) has received funding!

The project is an interdisciplinary collaboration on developing and deploying a 3D viewer for education and research purposes. This project, which we have called 3DWorkSpace, is a collaboration with Loes Opgenhaffen, PhD researcher in the TPW project; Hugo Huurdeman, freelance designer (Timeless Future); and Leon van Wissen, scientific programmer at CREATE. It is being executed in cooperation with Paul Melis and Casper van Leeuwen from SURF and with the developers at the Smithsonian Institution, represented by Jamie Cope, computer engineer at the Smithsonian's Digitization Program Office. Here, we present an outline of the project, which will start in March 2022.

Open access 3D models are often placed on online platforms with limited options for interactive communication and education. Although some 3D collections are published with their associated metadata, paradata, annotations and interpretations, these currently provide no open tools for re-use or interactivity. The Voyager digital museum curation tool suite, developed by the Smithsonian Institution, allows for interactivity and enrichment of the data, but does not enable re-use or open content creation in a multi-user environment. Annotation (adding information to a 3D model without modifying the model itself) is possible for the creators of the content, but not for end users.

3DWorkSpace will facilitate re-use of 3D models through the addition of annotations and narratives, as well as side-by-side comparison of multiple models, within an online app environment adapted from the open source Voyager platform. It will allow data enrichment by enabling multi-authoring through the built-in annotation system, as well as through linkage to datasets (e.g., thesauri and museum catalogues) available as Linked Open Data (LOD). Two heritage-based case studies, production traces on experimental ceramics from the Tracing the Potter’s Wheel and a drone-based dataset from the 4D Research Lab, will allow full exploration of diverse 3D models for the implementation and testing of the adapted Voyager environment. Learning pathways, using the Voyager annotation feature, will train users in the necessary skills for guiding analysis of the 3D data models.

3DWorkSpace utilizes existing open access resources to realise a truly open science platform: from adaptation of the Voyager tool suite and testing with web-based open access 3D datasets, to technical support for data creation and access via Linked Open Data and Figshare. Evaluation will occur in tandem with the creation of training materials for technical set-up and 3D data curation. In this way we will lower the threshold for adoption, create best practice through training and demonstration, and create a roadmap for implementation and evaluation.

Although born in the material culture field, 3DWorkSpace is an initiative aimed at any field engaging with 3D data visualisation, and at users seeking to integrate interactivity and data re-use. It will open up new ways of communicating and debating the narratives in which 3D reconstructions function, for educators, researchers, students and general users.

We are really looking forward to starting this project!

Screenshot of the Voyager app