Communication Optimization in Tiny Federated Learning
Tiny Federated Learning (TFL) combines Tiny Machine Learning (TinyML) with the Federated Learning (FL) approach. TinyML is a field of study at the intersection of machine learning and embedded systems that enables models to run on small, low-powered devices, while Federated Learning is an emerging approach for training a decentralized machine learning model across multiple edge devices. Tiny Federated Learning is thus a federated learning approach on tiny edge devices with low latency, low power consumption, and low computational cost. This approach is helpful in many ways: an edge device can learn continuously and update itself to perform its tasks better. TFL trains a shared machine learning model on embedded devices such as microcontrollers while keeping the training data local, never sharing it with the server. However, TFL suffers from the cost of transmitting updates from the edge devices to the server, as edge devices have limited computational power. To address this problem, we design an approach for reducing the transmission cost between edge devices and the server. More specifically, we build a model using a pruning approach to reduce its computational cost, and we find effective means of communicating the trained updates to the server. We focus on sending only the newly learned portion of the model update to the server rather than the whole model update. The proposed approach reduces the uplink transmission cost. Our overall objective is to build a model that performs and learns better on an embedded device while sending updates to the server at reduced computational and transmission cost.
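The two ideas in the abstract, pruning the model on the device and uploading only the newly changed parameters, can be illustrated with a minimal sketch. The function names, the flat-list weight representation, and the magnitude-pruning and sparse-delta encoding shown here are illustrative assumptions, not the project's actual implementation:

```python
def prune_weights(weights, sparsity=0.5):
    """Magnitude pruning sketch: zero out the smallest-magnitude
    fraction of weights to cut on-device compute and storage."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def sparse_delta(old, new, tol=1e-6):
    """Device side: encode only the entries that changed since the
    last round, as (index, delta) pairs, instead of the full model."""
    return [(i, n - o)
            for i, (o, n) in enumerate(zip(old, new))
            if abs(n - o) > tol]

def apply_delta(weights, delta):
    """Server side: apply the sparse update to its copy of the model."""
    updated = list(weights)
    for i, d in delta:
        updated[i] += d
    return updated
```

If only a few weights change in a round, the `(index, delta)` list is far smaller than the full weight vector, which is the uplink saving the abstract describes.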
Course Project 202310 CPS 595 P1
Ahmed El Ouadrhiri, Phu Phung
Stander Symposium, College of Arts and Sciences
"Communication Optimization in Tiny Federated Learning" (2023). Stander Symposium Projects. 3256.