Delaunay triangulation based text detection from multi-view images of natural scene

Roy, Soumyadip and Shivakumara, P. and Pal, Umapada and Lu, Tong and Hemantha Kumar, G. (2020) Delaunay triangulation based text detection from multi-view images of natural scene. Pattern Recognition Letters, 129. pp. 92-100.

Full text not available from this repository.
Text detection in the wild remains a challenging problem for researchers because of its many real-time applications, such as forensics, where CCTV cameras capture images of the same scene at different angles. Unlike existing methods, which consider a single, orthogonally captured view for text detection, this paper considers multi-view images (view-1 and view-2 of the same spot) of the same scene captured at different angles or at different heights and distances. For each pair of the same scene, the proposed method extracts features that describe the characteristics of text components based on Delaunay Triangulation (DT), namely the corner points, area and cavity of the DT. The features of corresponding DTs in view-1 and view-2 are compared through the cosine distance measure to estimate the similarity between the two components of the respective views. If a pair satisfies the similarity condition, its components are considered Candidate Text Components (CTC); in other words, these are the components common to view-1 and view-2 that satisfy the similarity condition. From each CTC of view-1 and view-2, the proposed method finds nearest-neighbor components to restore the components of the same text line, by estimating the degree of similarity between a CTC and its neighbor components using Chi-square and cosine distance measures. Furthermore, the proposed method uses a recognition step to detect correct text by comparing the recognition results of view-1 and view-2; the same recognition step is used to remove false positives and improve the performance of the proposed method. Experimental results on our own dataset, which contains pairs of images of different situations, and on the standard datasets, namely ICDAR 2013, MSRA-TD500, CTW1500, Total-Text, ICDAR 2017 MLT and COCO-Text, show that the proposed method outperforms the existing methods. (C) 2019 Published by Elsevier B.V.
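The pipeline the abstract outlines — build a Delaunay triangulation over a component's corner points, summarise it as a feature vector, and compare the vectors from the two views with cosine (and Chi-square) distance — can be sketched as below. This is an illustrative sketch, not the authors' code: the feature vector here (triangle count, mean area, total area) is a simplified stand-in for the paper's corner-point/area/cavity features, and the similarity threshold is an assumed value.

```python
# Illustrative sketch (assumptions noted above): DT-based feature comparison
# of a text component observed in two views of the same scene.
import numpy as np
from scipy.spatial import Delaunay

def dt_features(points):
    """Triangulate a component's corner points and summarise the DT as a
    small feature vector: (triangle count, mean triangle area, total area).
    A simplified stand-in for the paper's corner/area/cavity features."""
    tri = Delaunay(points)
    a, b, c = (points[tri.simplices[:, i]] for i in range(3))
    # Triangle areas via the shoelace formula.
    areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                         - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))
    return np.array([len(areas), areas.mean(), areas.sum()])

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def chi_square_distance(u, v):
    # Small epsilon guards against division by zero for empty bins.
    return 0.5 * np.sum((u - v) ** 2 / (u + v + 1e-12))

# Corner points of the "same" component in two views; view-2 is a mildly
# scaled and shifted copy, mimicking a different camera angle/height.
rng = np.random.default_rng(0)
view1 = rng.random((20, 2))
view2 = view1 * 1.1 + 0.05

d = cosine_distance(dt_features(view1), dt_features(view2))
# Matching components yield a small distance; 0.05 is an assumed threshold.
is_candidate_text_component = d < 0.05
print(is_candidate_text_component)
```

A pair of components whose distance falls under the threshold would be kept as a Candidate Text Component; the same distances (Chi-square alongside cosine) would then score a CTC against its nearest neighbors when regrouping components into text lines.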

Item Type: Article
Subjects: D Physical Science > Computer Science
Divisions: Department of > Computer Science
Depositing User: Mr Umendra uom
Date Deposited: 30 Jan 2021 10:52
Last Modified: 03 Jun 2023 10:18
