
Privacy Enhancing Technologies

Finterai specializes in federated learning as a collaboration-enabling technology. We believe that this technology offers the best combination of ease of use, data privacy, and cost. We're leading the field with groundbreaking research in secure federated learning.

Below, you will find a summary of our perspectives on the advantages and disadvantages of the main privacy enhancing technologies used for collaboration projects.

The gist of federated learning

Federated Learning is a machine learning approach in which a shared global model is trained across many participating servers, each holding its own local data. The model learns from the data at each participant, and raw data is never exchanged, which preserves privacy.
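As a concrete illustration, here is a minimal sketch of one common federated training scheme, federated averaging, using simulated clients and a simple linear model. The client data, model, and hyperparameters are all hypothetical; the point is only that participants exchange model weights, never raw records.

```python
# A minimal, illustrative sketch of federated averaging (FedAvg) for a linear
# model with simulated clients. All data and hyperparameters are hypothetical;
# a real deployment would add secure channels, client selection, and more.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, NUM_FEATURES, LOCAL_STEPS, LR = 3, 5, 10, 0.1

# Each client holds its own local data; raw rows never leave this list.
true_w = rng.normal(size=NUM_FEATURES)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(20, NUM_FEATURES))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

def local_update(w_global, X, y):
    """Run a few gradient steps locally and return only the updated weights."""
    w = w_global.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= LR * grad
    return w

# Server loop: broadcast the global model, collect weight vectors (not data),
# and average them to form the next global model.
w_global = np.zeros(NUM_FEATURES)
for _ in range(5):
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print("distance to true weights:", np.linalg.norm(w_global - true_w))
```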

Comparing federated learning
Advantages of Federated Learning

Privacy: Raw data never leaves the participant's own servers; only model updates are shared, which provides excellent data privacy.

Cost: The costs of data centralization, in terms of both data transmission and storage, are reduced because each participant holds and processes its data locally, as the sketch below illustrates. Processing and compute costs are also significantly lower than with encryption-based methods.

Ease of Use: With proper infrastructure, federated learning is straightforward for organizations to deploy, requiring less stringent data preparation than other methods and fewer changes to existing workflows.
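To give a rough sense of the transmission savings mentioned above, the back-of-the-envelope sketch below compares the size of a hypothetical local dataset against the size of a single model update; both figures are assumptions chosen purely for illustration.

```python
# A back-of-the-envelope comparison: with federated learning only a model
# update crosses the network, not the raw data. All sizes here are assumed.
BYTES_PER_FLOAT = 8
NUM_ROWS, NUM_FEATURES = 1_000_000, 50    # hypothetical local dataset
MODEL_PARAMETERS = 10_000                 # hypothetical model size

raw_data_bytes = NUM_ROWS * NUM_FEATURES * BYTES_PER_FLOAT
update_bytes = MODEL_PARAMETERS * BYTES_PER_FLOAT

print(f"centralizing the raw data: {raw_data_bytes / 1e6:,.0f} MB")
print(f"sending one model update:  {update_bytes / 1e6:,.3f} MB per round")
```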

Alternative Technologies

The key alternatives include Anonymization, Pseudonymization, Homomorphic Encryption, and Secure Multi-Party Computation. Each of these methods has its unique strengths and weaknesses in terms of privacy preservation.

Anonymization & Pseudonymization: While these methods can enhance privacy, they have notable limitations. There is a risk of re-identification, and they often reduce model accuracy because useful signal is removed or masked along with identifying information.
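As a simple illustration of pseudonymization, the sketch below replaces a direct identifier with a keyed hash; the record fields and secret key are hypothetical, and the remaining quasi-identifiers show why re-identification risk does not disappear.

```python
# A minimal sketch of pseudonymization via keyed hashing. The record fields and
# secret key are hypothetical; in practice the key must be managed carefully,
# and remaining quasi-identifiers (e.g. postcode plus birth date) can still
# allow re-identification when linked with outside data.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"   # assumed key, illustration only

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "CUST-12345", "postcode": "1017", "amount": 250.0}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)   # direct identifier replaced; quasi-identifiers remain
```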

Homomorphic Encryption: While this method allows computations on encrypted data and models, it carries significant computational overhead, which can be impractical for large datasets or complex computations. It also severely limits a data scientist's flexibility in the choice of models and training paradigms.
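To make the idea concrete, here is a toy sketch of an additively homomorphic scheme (a simplified Paillier construction) with deliberately tiny, insecure parameters; it only shows that two encrypted values can be added without decrypting them. A real deployment would use a hardened library and far larger keys, which is where the overhead comes from.

```python
# A toy Paillier cryptosystem: additively homomorphic, so ciphertexts can be
# combined without decryption. Key sizes are deliberately tiny and insecure;
# this only illustrates the mechanism (and hints at why real HE is costly).
import math
import random

def keygen(p=2357, q=2551):
    # n = p*q; with generator g = n + 1 the scheme simplifies nicely.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n    # the "L" function
    return (l * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)     # multiplying ciphertexts adds plaintexts
print(decrypt(priv, c_sum))           # 42
```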

Secure Multi-Party Computation: SMPC can provide strong privacy guarantees, but it requires significant computation and communication resources, and its protocol complexity makes it difficult to implement correctly. It also severely limits the choice of models that can be trained.
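The sketch below shows the most basic SMPC building block, additive secret sharing, in which three hypothetical parties compute a joint sum without revealing their individual inputs; even this toy version hints at the coordination a full protocol requires for every additional operation.

```python
# A minimal additive secret-sharing sketch, the basic building block of SMPC.
# Party names and values are hypothetical; real protocols need many more
# rounds of communication per operation, which is where the cost comes from.
import random

PRIME = 2**61 - 1   # arithmetic is done modulo a large prime

def share(secret, num_parties):
    """Split a secret into random shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(num_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three banks each split their private exposure figure into shares.
exposures = {"bank_a": 120, "bank_b": 340, "bank_c": 95}
all_shares = {name: share(value, 3) for name, value in exposures.items()}

# Each party locally adds the share it received from every bank...
partial_sums = [sum(all_shares[name][i] for name in exposures) % PRIME
                for i in range(3)]

# ...and only the combined result reveals the total, never individual inputs.
print(reconstruct(partial_sums))   # 555
```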

The overall picture

In comparison to the alternatives, Federated Learning offers an appealing balance. It provides strong privacy preservation without the up-to-10x computational overhead of Homomorphic Encryption, the implementation complexity of SMPC, or the likely loss of data utility that comes with Anonymization and Pseudonymization. Ease of implementation and lower overall costs, thanks to reduced data transmission and storage needs, further enhance its appeal. Finally, freedom in the choice of models is well preserved, and compliance management is simpler.
