Title: Style obfuscation by invariance
Author:
Abstract: The task of obfuscating writing style using sequence models has previously been investigated under the framework of obfuscation-by-transfer, where the input text is explicitly rewritten in another style. A side effect of this framework is frequent major alterations to the semantic content of the input. In this work, we propose obfuscation-by-invariance, and investigate to what extent models trained to be explicitly style-invariant preserve semantics. We evaluate our architectures in parallel and non-parallel settings, and compare automatic and human evaluations of the obfuscated sentences. Our experiments show that the performance of a style classifier can be reduced to chance level, while the output is judged to be of equal quality to that of models applying style transfer. Additionally, human evaluation indicates a trade-off between the level of obfuscation and the observed quality of the output in terms of meaning preservation and grammaticality.
Language: English
Source (book): Proceedings of the 27th International Conference on Computational Linguistics
Publication: 2018
Volume/pages: (2018), p. 984-996
Full text (open access)