Federated Learning: Concepts and Applications 

2023-11-28

An illustration of the digital twin concept: a digital twin implemented in virtual space alongside a real aircraft.

Machine learning for AI relies on vast amounts of data. Typically, you gather substantial data on a central server and then train a model on it using high-performance computing resources. The more data at your disposal, the more effectively the model learns, resulting in improved AI that can offer more personalized services to consumers or make more accurate predictions for industrial applications.

Simultaneously, there is a rising concern regarding privacy breaches and leaks of industrial data. For instance, global tech giants train AI models using a wide array of personal information, making it challenging for consumers to discern the exact extent of data collected and utilized. Consequently, regulators are increasingly tightening privacy and data security regulations. Notably, in May 2023, the European Union (EU) imposed a record $1.3 billion fine on Meta for breaching the European General Data Protection Regulation (GDPR) through unauthorized personal information transfers.

The issue at hand is that the demand for data utilization is continually on the rise. In this context, federated learning has recently emerged as a method to advance AI by harnessing diverse data sources without contravening existing regulations. In this article, we will explore the concept of federated learning and its potential implications for industries.

What is federated learning?

Federated learning is a technology that allows AI models to be trained on data stored in many different locations without centralizing it. According to market research firm Emergen Research, the global federated learning market reached $112.7 million in 2021 and is projected to grow at a compound annual growth rate (CAGR) of 10.5% through 2032.

(1) Concepts

In federated learning, data stored in distinct locations is not transmitted to a central server. Instead, a global AI model is dispatched from the central server to those locations. The local copies of the model are refined by training on the locally stored data, and the resulting values, known as the *weights, are then transmitted back to the central server. The central server averages these weights to improve the global AI model, and the entire process is repeated. As a result, the global AI model on the central server becomes increasingly generalized, while the models at the individual locations become progressively more accurate.

  • Weight: a parameter that determines how much influence each input value has on the output. Machine learning is the process of using inputs and outputs to find appropriate values for these parameters.
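To make the averaging step concrete, here is a minimal sketch of the server-side aggregation (in the spirit of the widely used FedAvg algorithm) written with plain NumPy. The function name, the tiny two-layer example, and the sample-count weighting are illustrative assumptions, not the API of any particular federated learning framework.

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """Combine client model weights into a new global model.

    client_weights: list of per-client weight lists (one np.ndarray per layer).
    client_sizes:   number of training samples each client used, so clients
                    with more data contribute proportionally more.
    """
    total = sum(client_sizes)
    new_global = []
    for layer_idx in range(len(client_weights[0])):
        # Weighted average of this layer across all clients
        layer = sum(
            (n / total) * client_weights[c][layer_idx]
            for c, n in enumerate(client_sizes)
        )
        new_global.append(layer)
    return new_global

# One round: two clients return weights for a tiny two-layer model
client_a = [np.array([[0.2, 0.4]]), np.array([0.1])]
client_b = [np.array([[0.6, 0.0]]), np.array([0.3])]
global_weights = federated_averaging([client_a, client_b], client_sizes=[100, 300])
print(global_weights)  # pulled closer to client_b, which had more data
```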

Google compares federated learning to “a conference organized by artificial intelligence.” In this analogy, doctors share their experiences in treating patients to provide insights to other doctors who may not have firsthand experience with certain symptoms or treatments. This exchange allows them to discuss and collaborate on developing improved treatments. It’s important to note that, in this context, the doctors are not sharing patients’ personal information; instead, they are exchanging information related to the disease and its treatment.

A global AI model is first trained with data stored on the server and then shared with multiple clients, which refine it using their own data. The resulting weight values are transmitted back to the server, which uses them to update the model; the updated model is sent back to the clients, and the process repeats. [7] -Image Credit: OpenFL [8]

(2) Benefits of federated learning

Most importantly, federated learning reduces concerns about data breaches because only model updates, such as weights, are transmitted to the central server rather than the raw data itself. Several other advantages come along with this.

-Resource Efficiency: Storing vast quantities of data in a single location is both time-consuming and costly. For example, the self-driving cars used in current research generate several terabytes of data per hour, so collecting data from every autonomous vehicle in the cloud for machine learning would be prohibitively slow and expensive. With federated learning, each car trains a model locally and transmits only the results to the network. This greatly reduces the volume of data a central server has to handle, cutting costs for data transmission, computation, and storage.

-Improved Reliability: Building unbiased and accurate AI models requires substantial and diverse data. By training on data from many different sources, federated learning improves the reliability of AI models.

-Convenience: Federated learning lets you keep datasets in multiple locations and train an AI model on each of them concurrently. Improved global models can then be deployed promptly from the central server to every location, so enhanced AI capabilities are available almost immediately.

Core Challenges

Federated learning still faces many technical challenges, with optimization being the most critical. Three key challenges stand out, and recent research is actively addressing them.

Challenge 1: Expensive Communication

Communication is a critical bottleneck in federated networks. Federated networks can comprise a massive number of devices, e.g., millions of smartphones, and communication over the network can be slower than local computation by many orders of magnitude. It is therefore necessary to develop communication-efficient methods. Two key levers for reducing communication in such a setting are: (i) reducing the total number of communication rounds, and (ii) reducing the size of the messages transmitted in each round.
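As a rough illustration of the second lever (smaller messages), clients can compress their updates before sending them. The sketch below quantizes a float32 update to int8 plus a scale factor, roughly a 4x payload reduction at the cost of some precision; the scheme and function names are assumptions for illustration, not a specific framework's compression API.

```python
import numpy as np

def compress_update(update: np.ndarray):
    """Quantize a float32 weight update to int8 plus a single scale factor."""
    max_abs = float(np.max(np.abs(update)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    quantized = np.round(update / scale).astype(np.int8)
    return quantized, scale

def decompress_update(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate float32 update on the server."""
    return quantized.astype(np.float32) * scale

update = np.random.randn(1000).astype(np.float32)       # a client's local update
q, s = compress_update(update)
restored = decompress_update(q, s)
print(q.nbytes, "bytes sent instead of", update.nbytes)  # 1000 vs 4000
print("max reconstruction error:", np.max(np.abs(update - restored)))
```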

Challenge 2: Systems Heterogeneity

Each device’s storage, computational, and communication capabilities may differ because of variability in hardware, network connectivity, and power. In addition, these constraints typically mean that only a small fraction of the devices is active at any given time, and an active device may be unreliable or drop out entirely. Federated learning methods must therefore: (i) anticipate low participation, (ii) tolerate heterogeneous hardware, and (iii) be robust to devices dropping out of the network.
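A hedged sketch of how a server-side round might satisfy these three requirements: sample only a small fraction of devices, accept whatever hardware happens to respond, and skip devices that drop out. The `client.train` call and the exception types are placeholders, not any real framework's interface.

```python
import random

def run_round(global_model, all_clients, aggregate, fraction=0.1):
    """One federated round that tolerates low participation and dropouts."""
    # (i) Expect only a small fraction of devices to participate each round
    num_sampled = max(1, int(fraction * len(all_clients)))
    sampled = random.sample(all_clients, num_sampled)

    updates, sizes = [], []
    for client in sampled:
        try:
            # Placeholder call: the device trains locally and returns
            # (weights, number_of_local_samples); it may fail mid-round.
            weights, n_samples = client.train(global_model)
            updates.append(weights)
            sizes.append(n_samples)
        except (TimeoutError, ConnectionError):
            # (iii) Be robust to dropped devices: simply skip them this round
            continue

    if not updates:                   # nobody responded; keep the old model
        return global_model
    return aggregate(updates, sizes)  # e.g. the weighted average shown earlier
```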

Challenge 3: Statistical Heterogeneity

Conventional distributed optimization typically assumes that data are independent and identically distributed (i.i.d.). In federated networks, however, devices frequently generate and collect data in a non-identically distributed manner, and the number of data points can vary significantly from device to device. This way of generating data adds complexity in terms of modeling, analysis, and evaluation.
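To see what statistical heterogeneity looks like in practice, researchers commonly simulate it by giving each device a skewed label mix, for example by drawing per-device class proportions from a Dirichlet distribution. The recipe below is a common simulation technique from federated learning experiments, not something prescribed by any specific framework; smaller `alpha` means more skew.

```python
import numpy as np

def non_iid_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet-skewed label mixes.

    Small alpha -> strong skew (each client sees only a few classes);
    large alpha -> close to an i.i.d. split.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]

    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Proportion of class c that each client receives
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

labels = np.random.default_rng(0).integers(0, 10, size=5000)  # fake label array
parts = non_iid_partition(labels, num_clients=8, alpha=0.3)
print([len(p) for p in parts])  # very uneven sizes and label mixes per client
```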

Applications

The key advantage of federated learning is data security. As a result, adoption is growing, especially in personalized services, healthcare, and finance, sectors that are particularly sensitive to data privacy.

(1) Personalized services

Google uses federated learning in Gboard, the Google keyboard for Android. Gboard learns the behavioral patterns of individual users and suggests relevant words as they type. When Gboard displays a suggested word, the smartphone records locally whether or not the user taps it, and the local model trains on this data. This federated learning step runs while the smartphone is charging, connected to Wi-Fi, and not actively in use, typically at night while the user is asleep.
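Conceptually, the scheduling condition described above amounts to a simple eligibility gate before any local training runs. The sketch below uses hypothetical device-status helpers purely to illustrate the idea; real on-device schedulers expose these checks differently.

```python
def eligible_for_training(device) -> bool:
    """Only train when it will not disturb the user or drain the battery."""
    return (
        device.is_charging()            # plugged in, so training is effectively free
        and device.on_unmetered_wifi()  # no mobile-data cost for sending weights
        and device.is_idle()            # user is not actively using the phone
    )

def maybe_run_local_round(device, global_model, local_data):
    if eligible_for_training(device):
        update = device.train(global_model, local_data)  # placeholder local step
        device.upload(update)  # only the update leaves the phone, not the data
```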

Looking ahead, we anticipate a proliferation of personal devices beyond smartphones, driving increased demand for personalized services. Federated learning is expected to emerge as a powerful tool to enhance companies’ competitiveness in an environment where data sharing is virtually impossible due to privacy concerns.

An example of federated learning using smartphones. Google initially deploys a global AI model (represented by the blue circle) on each smartphone. This model is refined by learning from the data stored on the user's device (A). The training parameters from each smartphone are then transmitted to a centralized server (B) to update the global AI model, and the resulting training parameters (depicted as green diamonds) are sent back to each individual smartphone (C). -Image Credit: Google Tech Blog

(2) AI for disease identification

Data accessibility has long been a challenge in the field of healthcare. Advancing artificial intelligence to combat diseases requires a significant amount of data, often beyond what a single hospital can generate. However, medical information is highly sensitive and subject to strict national regulations, such as the U.S. Health Insurance Portability and Accountability Act (HIPAA). This has historically hindered hospitals from collaborating on healthcare AI.

Recently, several studies have emerged that employ federated learning to enhance AI models for disease diagnosis without directly sharing personal medical data.

For example, a research team from 71 institutions, including Intel Labs and the Perelman School of Medicine at the University of Pennsylvania in the U.S., utilized federated learning to advance AI in identifying malignant brain tumors. This collaborative project also involved Yonsei University College of Medicine and Asan Medical Center in South Korea.

The research team employed Intel’s open-source framework, Open Federated Learning (OpenFL), to execute federated learning. The platform was trained on 3.7 million images collected from 6,314 brain tumor patients across six continents, making it the largest brain tumor dataset to date.

The results showed a 33% improvement in brain tumor detection compared with the same model trained only on the publicly available data from the International Brain Tumor Segmentation (BraTS) challenge.

In the healthcare sector especially, many different entities hold personal data: an individual's medical records are stored across multiple hospitals, and even data related to a single disease can be scattered across various institutions. Research and development using federated learning is therefore expected to become increasingly active in the medical field.

Penn Medicine and 71 international healthcare and research institutions used Intel’s federated learning hardware and software to demonstrate improved detection of rare cancer boundaries by 33% compared to an initial AI model trained using public data. -Image Credit: Intel Corporation

(3) Better gripping with intelligent picking robots

Festo, a global automation solutions company, collaborates with German and Canadian partners, including the Karlsruhe Institute of Technology and the University of Waterloo, on the FLAIROP (Federated Learning for Robot Picking) project. The team is researching techniques to enable robots to pick up objects intelligently using federated learning.

Without disclosing sensitive corporate data, the team builds federated models using data from various sources, including different workstations, factories, and companies. The result is a robot that can pick up items on workbenches it has never been specifically trained on.

The process unfolds as follows: the central server first dispatches an initial model to the local workstations. Each workstation then trains its local model on data from its own source, and the updated local weights are transmitted back to the central server, which aggregates the weights from all workstations to update the global model. The optimized model is then redistributed to the local workstations.
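For the client side of such a loop, a workstation's local update can be sketched roughly as follows: start from the global weights, run a few passes over the local data, and return the new weights together with the local sample count so the server can weight its average. The linear model and gradient step below are a deliberately simple stand-in, not FLAIROP's actual training code.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.01, epochs=5):
    """Train a simple linear model locally, starting from the global weights.

    Returns the updated weights and the local sample count so the central
    server can build a weighted average (as in the aggregation shown earlier).
    """
    w = global_w.copy()
    for _ in range(epochs):
        preds = X @ w                      # forward pass on local data only
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad                     # local gradient step
    return w, len(y)

# Each workstation keeps its own data; only the weights travel to the server.
X_local = np.random.randn(200, 4)
y_local = X_local @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * np.random.randn(200)
new_w, n_samples = local_update(np.zeros(4), X_local, y_local)
```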

(4) Predictive maintenance in manufacturing processes

Federated Learning for Predictive Maintenance and Anomaly Detection Using Time Series Data Distribution Shifts in Manufacturing Processes[14] 

In this paper, the team proposed a 1DCNN-BiLSTM model for time series anomaly detection and predictive maintenance in manufacturing processes. They then combined this model with a federated learning framework to account for distribution shifts in the time series data and to perform anomaly detection and predictive maintenance on that basis. The team evaluated combinations of several federated learning frameworks and time series anomaly detection models on a pump dataset. Experimental results show that the proposed framework achieves a test accuracy of 97.2%, demonstrating its potential for real-world predictive maintenance.
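The authors' exact architecture and hyperparameters are described in the paper cited above; purely as a rough illustration, a 1D-CNN followed by a bidirectional LSTM over windowed sensor data might be assembled as in the Keras sketch below. The window length, channel count, layer sizes, and sigmoid anomaly head are assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, FEATURES = 128, 4  # assumed window length and number of sensor channels

def build_1dcnn_bilstm():
    """A rough 1D-CNN + BiLSTM anomaly detector for windowed sensor data."""
    return tf.keras.Sequential([
        layers.Input(shape=(WINDOW, FEATURES)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),  # local temporal patterns
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(64)),                # longer-range context in both directions
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),                # anomaly probability
    ])

model = build_1dcnn_bilstm()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

In a federated setting, each factory would train a copy of such a model on its own sensor windows and send only the resulting weights to the aggregation server, as described earlier.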

Proposed framework structure. -Image Credit: https://doi.org/10.3390/s23177331

Transforming Manufacturing with Federated Learning

Optimizing manufacturing relies on making decisions with data and AI. However, this process can be time-consuming and costly for manufacturers, especially small and medium-sized businesses (SMBs). They may lack the resources to collect data or run cloud operations and are often hesitant to share raw data with public clouds. This is where federated learning comes in.

For instance, imagine a large company with factories scattered across an industrial estate. Having data from all of them for machine learning would be ideal, but centrally collecting such a vast amount of data would be inefficient. With federated learning, you can train the model at each factory and send only the weights to the cloud, which is far more efficient.

Now consider multiple companies with similar types of machines collaborating to train an AI model for predictive maintenance and quality control. Each company's data is never shared externally, preserving the security of proprietary data while improving predictive maintenance, process optimization, energy consumption optimization, and quality control. All companies involved benefit from more efficient production, higher product quality, and improved customer satisfaction.

We also see applications in the supply chain space. Supply chains consist of many stakeholders, each of which can help train a global AI model for demand forecasting, inventory optimization, and logistics optimization without exposing its data to the others. Scaling this up, you can build federated digital twins and digital threads.

Although we haven’t witnessed the widespread application of federated learning in manufacturing yet, as we’ve seen, it holds significant potential to transform the industry. Starting with data collection, you’ll need to take steps to prepare your organization to adopt federated learning.

Chloe Woo | Content Strategist
