TY - GEN
T1 - Estimation of Individual Device Contributions for Incentivizing Federated Learning
AU - Nishio, Takayuki
AU - Shinkuma, Ryoichi
AU - Mandayam, Narayan B.
N1 - Funding Information:
This work is supported in part by the US NSF under Grant ACI-1541069, JST PRESTO Grant no. JPMJPR1854, and the KDDI Foundation. The research results were also obtained in part from research commissioned by the National Institute of Information and Communications Technology (NICT), Japan.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/12
Y1 - 2020/12
N2 - Federated learning (FL) is an emerging technique used to collaboratively train a machine-learning model using the data and computation resources of mobile devices without exposing private or sensitive user data. Appropriate incentive mechanisms that motivate data and mobile-device owners to participate in FL are key to building a sustainable platform. However, it is difficult to evaluate participants' contribution levels and determine appropriate rewards without large computation and communication overhead. This paper proposes a computation- and communication-efficient method of estimating participants' contribution levels. The proposed method requires only a single FL training process, which significantly reduces overhead. Performance evaluations using the MNIST dataset show that the proposed method estimates participant contributions accurately with 46-49% less computation overhead and no communication overhead compared with a naive estimation method.
AB - Federated learning (FL) is an emerging technique used to collaboratively train a machine-learning model using the data and computation resources of mobile devices without exposing private or sensitive user data. Appropriate incentive mechanisms that motivate data and mobile-device owners to participate in FL are key to building a sustainable platform. However, it is difficult to evaluate participants' contribution levels and determine appropriate rewards without large computation and communication overhead. This paper proposes a computation- and communication-efficient method of estimating participants' contribution levels. The proposed method requires only a single FL training process, which significantly reduces overhead. Performance evaluations using the MNIST dataset show that the proposed method estimates participant contributions accurately with 46-49% less computation overhead and no communication overhead compared with a naive estimation method.
KW - Contribution Estimation
KW - Contribution Metric
KW - Federated Learning
KW - Incentive Mechanism
UR - http://www.scopus.com/inward/record.url?scp=85102949224&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102949224&partnerID=8YFLogxK
U2 - 10.1109/GCWkshps50303.2020.9367484
DO - 10.1109/GCWkshps50303.2020.9367484
M3 - Conference contribution
AN - SCOPUS:85102949224
T3 - 2020 IEEE Globecom Workshops, GC Wkshps 2020 - Proceedings
BT - 2020 IEEE Globecom Workshops, GC Wkshps 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE Globecom Workshops, GC Wkshps 2020
Y2 - 7 December 2020 through 11 December 2020
ER -