By using a simple CSV file, the “Duplicate Check” module provides you with a way to check your entire data set.
We take care to maintain backward compatibility when extending the CSV import interface of the Duplicate Check module. This means you can always use the latest version without additional integration effort in your ERP system.
To ensure that the individual duplicate records can be matched to your master data, you can specify up to two unique keys in the import file.
The default separator between the individual elements for the Duplicate Check is the ‘|’ character (pipe); this can be changed via the settings. Bold field names are mandatory fields.
Please note that all fields must be specified in the Duplicate Check import file, even if you do not use Key_1 and Key_2.
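A minimal sketch of how such an import file could be produced, assuming the semicolon separator and the field order shown in the example below (the function name and use of Python's `csv` module are illustrative, not part of the Duplicate Check product):

```python
import csv

# Field order as used in the example import file below. All 14 columns
# must be present in every row, even when key1/key2 are left empty.
FIELDS = ["key1", "key2", "firstname", "lastname", "name1", "name2",
          "name3", "name4", "street", "number", "postcode", "town",
          "department", "country"]

def write_import_file(path, records):
    """Write records (dicts keyed by the field names above) to a
    Duplicate Check import file. Missing fields, e.g. an unused
    key1/key2, are written as empty columns via restval=""."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS,
                                delimiter=";", restval="")
        writer.writeheader()
        for rec in records:
            writer.writerow(rec)
```

Using `DictWriter` with `restval=""` guarantees that every row carries all 14 columns, which satisfies the requirement above even when the optional keys are not used.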
Example in the form of a CSV file:
key1;key2;firstname;lastname;name1;name2;name3;name4;street;number;postcode;town;department;country;
val_key1;val_key2;val_firstname;val_lastname;val_name1;val_name2;val_name3;val_name4;val_street;val_number;val_postcode;val_town;val_department;val_country;
… (more duplicate checks)
Please ensure the correct number of columns (14) when creating the CSV import file; an incorrect column count is a common cause of import errors. Alternatively, you can use the XLSX or JSON import format to eliminate this source of error.
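The column-count check described above can be performed before upload. The following sketch assumes the semicolon separator from the example; it tolerates a single trailing separator, which the example rows also carry:

```python
import csv

EXPECTED_COLUMNS = 14  # key1 .. country, as listed in the example above

def validate_import_file(path, delimiter=";"):
    """Return a list of (line_number, column_count) for rows that do
    not have exactly 14 columns. An empty list means the file is
    well-formed with respect to the column count."""
    bad_rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for lineno, row in enumerate(csv.reader(f, delimiter=delimiter),
                                     start=1):
            # A trailing separator, as in the example above, yields one
            # extra empty field; ignore it before counting.
            if row and row[-1] == "":
                row = row[:-1]
            if len(row) != EXPECTED_COLUMNS:
                bad_rows.append((lineno, len(row)))
    return bad_rows
```

Running this check locally surfaces malformed rows with their line numbers instead of leaving the error to appear during the import itself.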
The CSV export file of the Duplicate Check contains the transferred values as well as the cleaned values and the values marked as duplicates. The corresponding sections are marked with comment lines:

// cleaned data
// applied cleaners
// applied duplicates
The export file of the Duplicate Check in CSV format always includes an additional column containing the headings. Please take this into account for any automatic re-import of the check results.
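A sketch of how the extra headings column could be stripped before re-import, under the assumption that it is the first column of each row (the position of the headings column and the function name are assumptions, not confirmed by the export specification above):

```python
import csv

def read_export_for_reimport(path, delimiter=";"):
    """Read a Duplicate Check CSV export and drop the additional
    headings column before re-import (assumption: it is the first
    column of every row)."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter=delimiter):
            rows.append(row[1:])  # strip the headings column
    return rows
```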