Anonymisation Methods For Complex Data Based On Privacy Models
As the demand for personal data rises, so does the risk of de-anonymisation attacks. De-anonymisation is a breach of privacy in which an attacker identifies an individual in a published dataset despite the removal of personally identifying information. A risk analysis based on privacy models can be applied to assess de-anonymisation risks; the challenge lies in finding appropriate anonymisation methods based on its results. A large number of privacy-enhancing methods based on privacy models has already been proposed for non-complex data types, but only a few exist for high-dimensional and complex data types. This study therefore focuses on identifying methods for the anonymisation of such data types based on privacy models. To evaluate possible approaches and assess the associated challenges, a total of 9 prototypes were developed for 5 different data types. The result is a guideline for determining which method is suitable for each data type. Here, the data controller has to choose between more labour-intensive manual methods, which yield more accurate anonymisation results and thus a smaller loss in the privacy-utility trade-off, and faster automatic methods, which come at the cost of reduced utility. It was also shown that even after privacy-enhancing methods have been applied, the anonymised datasets should again be subjected to a risk analysis. The presented methods and guidelines support data controllers in complying with privacy regulations such as the GDPR.
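To illustrate the kind of risk analysis based on privacy models the abstract refers to, the following is a minimal sketch of a k-anonymity check, a common privacy model for tabular data. The function name, the choice of quasi-identifiers, and the toy records are illustrative assumptions, not taken from the study itself.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest equivalence class over the given quasi-identifiers.
    A higher k means a lower re-identification risk."""
    classes = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(classes.values())

# Toy, already-generalised dataset; ZIP code and age range
# act as quasi-identifiers (illustrative values only).
records = [
    {"zip": "130**", "age": "20-30", "disease": "flu"},
    {"zip": "130**", "age": "20-30", "disease": "cold"},
    {"zip": "148**", "age": "30-40", "disease": "flu"},
    {"zip": "148**", "age": "30-40", "disease": "asthma"},
]
print(k_anonymity(records, ["zip", "age"]))  # -> 2
```

A data controller would run such a check both before publishing and, as the abstract recommends, again after anonymisation, since generalising some attributes does not guarantee that every equivalence class is large enough.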