Rapid and Precise Topological Comparison with Merge Tree Neural Networks

(a) Merge tree distances, in which trees are matched and edited, are computationally expensive; the resulting analyses can become a bottleneck in scientific workflows. (b) A multidimensional scaling (MDS) visualization of a test dataset, along with the significant runtime required to produce its pairwise distance matrix. (c) Rather than computing tree distances directly, we treat merge tree comparison as a learning task, using our novel Merge Tree Neural Network (MTNN) to calculate similarity. (d) MDS of the same data as (b) using MTNN, which reduces comparison times by orders of magnitude (five, here) with extremely low added error compared to the state-of-the-art merge tree distance.
Demi Qin
Brittany Terese Fasy
Carola Wenk
Brian Summa
Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparison are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison that enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks (GNNs), which have emerged as effective encoders for graphs, to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model, which further improves similarity comparisons by integrating tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data from different domains and examine its generalizability across datasets. Our experimental analysis demonstrates our approach's superiority in both accuracy and efficiency. In particular, we speed up the prior state of the art by more than 100x on benchmark datasets while maintaining an error rate below 0.1%.
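The core idea, embedding each merge tree with a graph neural network and then comparing trees in vector space, can be illustrated with a minimal sketch. The untrained averaging message passing, the (birth, persistence) node features, and the mean-pooling readout below are simplifying assumptions for illustration only; they stand in for, and are not, the trained MTNN encoder or its topological attention mechanism.

```python
import math

def embed_tree(adj, feats, rounds=2):
    """Embed a merge tree via simple (untrained) message passing:
    each node averages its neighbors' features with its own, and the
    tree embedding is the mean over all node embeddings.
    adj: node -> list of neighbor nodes; feats: node -> feature vector.
    Illustrative stand-in for a learned GNN encoder."""
    h = {v: list(f) for v, f in feats.items()}
    for _ in range(rounds):
        new_h = {}
        for v, nbrs in adj.items():
            msgs = [h[u] for u in nbrs] + [h[v]]
            new_h[v] = [sum(xs) / len(msgs) for xs in zip(*msgs)]
        h = new_h
    dim = len(next(iter(h.values())))
    return [sum(h[v][i] for v in h) / len(h) for i in range(dim)]

def cosine_similarity(a, b):
    """Similarity of two tree embeddings in vector space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two tiny merge trees with hypothetical (birth, persistence) node features.
t1_adj = {0: [1, 2], 1: [0], 2: [0]}
t1_feats = {0: [0.0, 1.0], 1: [0.2, 0.5], 2: [0.3, 0.4]}
t2_adj = {0: [1], 1: [0]}
t2_feats = {0: [0.0, 1.0], 1: [0.25, 0.45]}

sim = cosine_similarity(embed_tree(t1_adj, t1_feats),
                        embed_tree(t2_adj, t2_feats))
```

Once the embeddings are computed, each pairwise comparison reduces to a single vector operation, which is where the orders-of-magnitude speedup over exhaustive node matching comes from.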