TagLab Segmentation

Author: Danilo Marco Campanaro - Last Update: 2026-03-05 

Campanaro, D. M., Dell'Unto, N., Callieri, M., & DARKlab. (2026, March 5). TagLab Segmentation. Zenodo. doi.org/10.5281/zenodo.18874282

Description of the tutorial

This tutorial presents the main segmentation tools available in TagLab and demonstrates how to begin working on a project by adding an orthographic image as a new map. It explains how to define image metadata such as acquisition date and pixel size, which are required for measurements and multi-temporal analysis. The video then introduces different annotation and segmentation approaches, including annotation points, AI-assisted segmentation tools (such as positive and negative click segmentation, watershed segmentation, and SAM-based segmentation), as well as manual freehand segmentation and region editing. These tools allow users to identify regions in the image, assign labels, and export the resulting annotations and statistics for further analysis.
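To illustrate why pixel size must be defined before segmenting, the sketch below shows how a region's pixel count converts to a real-world area. It is a minimal illustration, not TagLab's own code; the function name and the assumption that pixel size is expressed in mm/px are hypothetical values chosen for the example.

```python
# Hypothetical sketch: converting a segmented region's pixel count to a
# real-world area, assuming pixel size is given in mm per pixel.

def region_area_cm2(n_pixels: int, pixel_size_mm: float) -> float:
    """Return the area of a segmented region in cm^2.

    n_pixels      -- number of pixels inside the region
    pixel_size_mm -- side length of one pixel in mm (assumed square pixels)
    """
    area_mm2 = n_pixels * pixel_size_mm ** 2  # each pixel covers pixel_size^2 mm^2
    return area_mm2 / 100.0                   # 100 mm^2 = 1 cm^2

# Example with made-up numbers: 25,000 pixels at 2 mm/px
print(region_area_cm2(25_000, 2.0))  # → 1000.0 (cm^2)
```

The same logic underlies multi-temporal comparison: if two maps of the same surface are given consistent pixel sizes, region areas measured on each become directly comparable.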

This tutorial builds on two previously produced tutorials: one focusing on the georeferencing of 3D models, and another explaining how to generate vertical orthographic projections (e.g., building façades or architectural sections) in RealityScan.

Segmentation

The content presented within this section has been created by the Swedish Infrastructure for Digital Archaeology (Swedigarch) and is made available for reuse under the Creative Commons Attribution (CC-BY) licence.

Page Manager: nicolo.dellunto@ark.lu.se | 2026-03-18