Fourth International Conference


Computer Vision & Image Processing

Organized by

Malaviya National Institute of Technology Jaipur

27 - 29 September 2019


Handwritten Text Recognition Challenge on CALAM[1] dataset
(Panel: Balasubramanian Raman, Neeta Nain, Partha Pratim Roy)

This competition aims to bring together researchers working on off-line HTR and to provide them with a suitable benchmark for comparing their techniques on the task of transcribing typical and difficult handwritten documents. The competition proposed for CVIP 2019 addresses a common scenario: collections for which transcripts exist at page level for many pages, and are therefore useful for training, but are not aligned with the line images.
The challenge is to evaluate the performance of algorithms that detect lines of handwritten text in paragraph form, drawn from handwritten documents, and then transcribe them into Unicode for the Devanagari and Urdu scripts respectively. In particular, we wish to investigate and compare general methods that can reliably and robustly identify Hindi/Urdu text lines in the presence of various noise conditions, interfering ligatures or modifiers, and the artifacts common to handwritten documents, and transcribe them character by character in order of appearance.

Two tracks with different conditions on the use of training data are proposed:
1. Devanagari Handwritten Text (Segmentation and) Transcription
2. Urdu Handwritten Text Segmentation (and Transcription)
The problem is first to segment the handwritten pages into lines, words per line, and characters. The segmented characters are then to be transcribed in that order. Segmentation-free approaches are also welcome, but they must still transcribe the characters in order.
The proposed competition is as follows:
  • Training: A dataset of pages with manually annotated lines and words, and the transcripts associated with them, has been carefully prepared. Ground Truths (GTs) for 50 pages, in terms of segmented lines per page and the corresponding words per line, are provided as training samples at the time of the challenge launch. The corresponding transcripts are provided at page level with line breaks.
  • This information is provided in XML format. Entrants may use only this training material in the competition; external material for training is not allowed.
    Test: Two tracks are defined:
  • Test-D: a simple track for Devanagari handwritten text (segmentation and) recognition and transcription with the usual evaluation. The Devanagari script is written from left to right. The best recognition accuracy reported on the Devanagari CALAM dataset is 93.4%, by [4].
  • Test-U: an advanced track corresponding to a more challenging scenario for Urdu handwritten text (segmentation and) recognition and transcription. The Urdu script is written from right to left. The best segmentation accuracy reported on the Urdu CALAM dataset is 94.12%, by [5].
  • Paper submissions are invited in this challenge if your accuracy exceeds the figures reported above. Click for Submission Guidelines. The two best-performing papers will receive awards.

The CALAM dataset
Urdu handwritten and Unicode corpus: a collection of handwritten forms scanned at a resolution of 300 dpi in gray level using a Canon flatbed scanner and saved in PNG format. The top block is a header containing the writer's demographics. Below the header is a printed-text block; the writer copies this printed text by hand into the space just below it. At the bottom is a footer block containing the writer's signature and related details.
The framework considers both handwritten and Unicode text and provides an aligned transcription between them. For the Urdu benchmarking platform we have provided ligature-level GTs of the Urdu handwritten documents.

For details, visit CALAM URDU.

The Devanagari corpus form has the same structure as described for Urdu. For each handwritten paragraph, sentence, line, and word, the corresponding transcribed electronic text is displayed in the side panel for easier understanding.
For details, visit Devanagari Framework and Hindi Characters.

To download sample Devanagari test data, click here.

Evaluation: A completely new batch of images, distinct from the training set, will be provided for evaluation. The information will be provided in the same PNG image format as in training, together with the corresponding XML, but without transcripts. Participants must submit XML files for all tests containing the detected lines and the corresponding recognition output: the extracted lines, the words per line, and the associated transcripts.
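For illustration, a per-page result file of this kind could be assembled with Python's standard library. The element names below (Page, TextRegion, TextLine, TextEquiv, Unicode) are assumptions modelled on PAGE-style XML; the authoritative schema is the one used in the provided training XML and should be matched exactly.

```python
# Hedged sketch: build a per-page submission XML carrying detected lines
# and their transcripts. Element names (Page/TextRegion/TextLine/
# TextEquiv/Unicode) are PAGE-XML-style assumptions -- check them against
# the training XML before submitting.
import xml.etree.ElementTree as ET

def build_page_xml(page_id, lines):
    """lines: list of (line_id, transcript) in detected reading order."""
    page = ET.Element("Page", id=page_id)
    region = ET.SubElement(page, "TextRegion", id=f"{page_id}_r1")
    for line_id, text in lines:
        line = ET.SubElement(region, "TextLine", id=line_id)
        equiv = ET.SubElement(line, "TextEquiv")
        ET.SubElement(equiv, "Unicode").text = text
    return ET.tostring(page, encoding="unicode")

xml_out = build_page_xml("p001", [("l1", "first line"), ("l2", "second line")])
print(xml_out)
```

Because the reading order is implicit in document order, the lines must be appended in the order the system detected them.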
WER/CER [2] will be used to measure performance and compare the systems. A linear combination of WER and CER will be used for evaluation of each track.
The winner will be decided solely on the results on the test batch. Organizers will check and publish results for both the segmentation test and the transcription test. Inconsistent results between these two subsets will prompt a request for explanation from the competitors and may lead to disqualification.
Evaluation will be performed with BLEU [3] at region level (or a variant of BLEU that accounts for character-level errors), concatenating the lines provided by the participants. The reading order of the lines affects the score, so participants must take care to include the lines in the reading order detected by their system. The reading order in the XML files is implicit in the order of the XML TextLine elements within each TextRegion.
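A character-level BLEU can be sketched in a few lines. This is a generic sentence-level BLEU with uniform n-gram weights and a brevity penalty, applied to strings as character sequences; it is not the organizers' scorer, and the exact variant used in the competition may differ.

```python
# Hedged sketch: character-level sentence BLEU (uniform weights, brevity
# penalty). A generic formulation, not the official competition scorer.
import math
from collections import Counter

def ngrams(seq, n):
    """Multiset of n-grams of a sequence."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def bleu(ref, hyp, max_n=4):
    """BLEU of hypothesis vs. a single reference, over characters."""
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_n, hyp_n = ngrams(ref, n), ngrams(hyp, n)
        overlap = sum((hyp_n & ref_n).values())  # clipped n-gram matches
        if overlap == 0:
            return 0.0
        log_prec += math.log(overlap / max(sum(hyp_n.values()), 1)) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(log_prec)
```

Concatenating a region's lines in the submitted order before calling `bleu` reproduces the order sensitivity described above: swapping two lines changes the n-grams at their boundaries and lowers the score.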
There will be two winners, one for each track.
The dataset used for training in this competition is the Cursive and Language Adaptive Methodology (CALAM) corpus.
For testing, the documents will also be drawn from CALAM, for both Devanagari and Urdu handwritten text, from the same period as the training material.
Each submission should be at most 8 pages in total, including bibliography and well-marked appendices, and must follow the Springer double-column publication format. You can download the Springer conference templates (LaTeX and MS Word, A4) at the following URL:
Paper submission will only be online, via EasyChair.
Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. All submitted papers will be carefully evaluated for originality, significance, technical soundness, and clarity of expression by at least two reviewers. The organizers will examine the reviews and make the final paper selections. Contact details: for sample test images, mail:

[2] The Character Error Rate (CER) on the test set is defined as CER = (i + s + d) / n, where i, s, d, and n are the numbers of character insertions, substitutions, deletions, and total characters in the reference, respectively. Likewise, the Word Error Rate (WER) is the same measure computed over words; it is a common metric for the performance of recognition and machine translation systems.
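The definition above can be turned into a small reference implementation. This is a plain Levenshtein-based sketch; it makes no claim about how the official scorer tokenises or normalises text, and the weights of the WER/CER linear combination are not specified by the organizers.

```python
# Hedged sketch of CER/WER as defined above: minimum edit distance
# (insertions + substitutions + deletions) divided by reference length.
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (one-row DP)."""
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                           # insertion
                       d[j - 1] + 1,                       # deletion
                       prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return d[len(hyp)]

def cer(ref, hyp):
    """Character Error Rate: edits over reference character count."""
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    """Word Error Rate: the same measure over whitespace-split words."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / max(len(r), 1)

print(cer("recognition", "recogniton"))  # one deletion over 11 characters
```

A combined score such as `0.5 * wer(r, h) + 0.5 * cer(r, h)` illustrates the announced linear combination, though the actual weights are not published.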

Important Dates

Submission Deadline May 15, 2019 [11:59 p.m. Indian Standard Time]
Supplementary Material Deadline May 20, 2019 [11:59 p.m. Indian Standard Time]
Challenge Open February 14, 2019 [00:01 a.m. Indian Standard Time]
Challenge Submission June 25, 2019 [11:59 p.m. Indian Standard Time]
Final Decision To Author July 25, 2019 [11:59 p.m. Indian Standard Time]
Camera Ready Paper August 05, 2019 [11:59 p.m. Indian Standard Time]
© Copyright 2018-19 MNIT JAIPUR