Title: Classifying without discriminating
Author:
Abstract: Classification models usually make predictions on the basis of training data. If the training data is biased towards certain groups or classes of objects, e.g., there is racial discrimination towards black people, the learned model will also show discriminatory behavior towards that particular community. This partiality of the learned model may lead to biased outcomes when labeling future unlabeled data objects. Often, however, impartial classification results are desired or even required by law for future data objects, in spite of having biased training data. In this paper, we tackle this problem by introducing a new classification scheme for learning unbiased models on biased training data. Our method is based on massaging the dataset by making the least intrusive modifications that lead to an unbiased dataset. On this modified dataset we then learn a non-discriminating classifier. The proposed method has been implemented, and experimental results on a credit approval dataset are promising: in all experiments our method is able to reduce the prejudicial behavior for future classification significantly without losing too much predictive accuracy.
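The "massaging" step described in the abstract can be sketched as follows. This is a minimal illustration of the idea of making the least intrusive label modifications to equalize outcomes across a sensitive group before training; the field names (`group`, `label`, `score`), the ranking by a score, and the pairwise flipping are illustrative assumptions, not the paper's exact procedure.

```python
def pos_rate(rows):
    """Fraction of rows with a positive label."""
    return sum(r["label"] for r in rows) / len(rows)

def massage(records):
    """Flip the fewest labels needed to (approximately) equalize the
    positive rate across the two values of a sensitive attribute.

    Promotes the highest-scoring negatives of the deprived group and
    demotes the lowest-scoring positives of the favoured group, one
    pair at a time, so the flips hit only borderline examples.
    """
    records = [dict(r) for r in records]          # don't mutate the input
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    # The sketch assumes a binary sensitive attribute (two groups).
    deprived, favoured = sorted(groups.values(), key=pos_rate)
    promote = sorted((r for r in deprived if r["label"] == 0),
                     key=lambda r: -r["score"])   # best-scoring negatives first
    demote = sorted((r for r in favoured if r["label"] == 1),
                    key=lambda r: r["score"])     # worst-scoring positives first
    i = 0
    while promote[i:] and demote[i:] and pos_rate(deprived) < pos_rate(favoured):
        promote[i]["label"] = 1                   # relabel borderline negative
        demote[i]["label"] = 0                    # relabel borderline positive
        i += 1
    return records
```

Any standard classifier can then be trained on the returned records; because only borderline examples (as judged by the score) are relabeled, the modification is kept as small as possible, which is why the abstract reports little loss in predictive accuracy.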
Language: English
Source (book): 2nd International Conference on Computer, Control and Communication, February 17-18, 2008, Karachi, Pakistan
Publication: New York, N.Y. : IEEE, 2009
ISBN: 978-1-4244-3312-4
Volume/pages: (2009), p. 337-342
ISI: 000267589700062