Outputs
For each task, the participants will provide:
1) a “distance matrix” containing the pairwise distances between the images of the data-set
2) a “belonging matrix” of the images of the data-set describing the degree to which each image is associated with each possible label (see the sketch after this list):
– For tasks 1 and 2, a CSV file with 13 columns (FILENAME, SCRIPT_TYPE1, …, SCRIPT_TYPE12) containing a normalized multi-weighted labeling for each image.
– For tasks 3 and 4, a CSV file with 16 columns (FILENAME, DATE_TYPE1, …, DATE_TYPE15) containing a normalized multi-weighted labeling for each image.
3) a two-page description of their method
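As an illustration only, the sketch below shows one way the belonging matrix for tasks 1 and 2 could be written. It assumes that “normalized multi-weighted labeling” means each row of weights sums to 1; the function name write_belonging_matrix, the use of NumPy, and the six-decimal formatting are illustrative choices, not requirements of the submission format.

```python
import csv
import numpy as np

def write_belonging_matrix(filenames, scores, out_path="belonging_task1.csv"):
    """Write a belonging matrix for tasks 1 and 2 (13 columns).

    `scores` is assumed to be an (n_images, 12) array of non-negative
    class scores; each row is normalized to sum to 1, one plausible
    reading of "normalized multi-weighted labeling".
    """
    scores = np.asarray(scores, dtype=float)
    # Normalize each row to sum to 1, guarding against all-zero rows.
    row_sums = scores.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    weights = scores / row_sums

    header = ["FILENAME"] + [f"SCRIPT_TYPE{i}" for i in range(1, 13)]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for name, row in zip(filenames, weights):
            writer.writerow([name] + [f"{w:.6f}" for w in row])
```

For tasks 3 and 4 the same scheme would apply with 15 DATE_TYPE columns instead of 12 SCRIPT_TYPE columns.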
Note: Participants are allowed to submit several independent proposals, depending on the features they use.
Evaluation Criteria
Based on the test data-sets, the evaluation will be given as follows:
– Accuracy per script type for tasks 1 and 2
– Accuracy per date type for tasks 3 and 4
The “accuracy per script type” is computed against the ground truth, which assigns one script-type label to each image in the evaluation data-set.
The “accuracy per date type” is computed against the ground truth, which assigns one date label to each image in the evaluation data-set.
If handwritings in different script styles or from different dates appear in one image, the ground truth records the majority class.
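For illustration, a minimal sketch of how per-class accuracy could be computed from a submitted belonging matrix is given below. It assumes the highest-weighted column is taken as the system’s predicted label for each image; the function name accuracy_per_class and the dictionary-based inputs are hypothetical, not part of the official evaluation tooling.

```python
from collections import defaultdict

def accuracy_per_class(predicted_weights, ground_truth):
    """Per-class accuracy, assuming the highest-weighted column of the
    belonging matrix is the predicted label for each image.

    `predicted_weights` maps FILENAME -> list of class weights
    (12 for script types, 15 for date types);
    `ground_truth` maps FILENAME -> correct class index (0-based).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for name, true_cls in ground_truth.items():
        weights = predicted_weights[name]
        pred_cls = max(range(len(weights)), key=lambda i: weights[i])
        total[true_cls] += 1
        if pred_cls == true_cls:
            correct[true_cls] += 1
    # One accuracy value per class that occurs in the ground truth.
    return {cls: correct[cls] / total[cls] for cls in total}
```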