Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

TT-TFHE: a Torus Fully Homomorphic Encryption-Friendly Neural Network Architecture

Created by
  • Haebom

Authors

Adrien Benamira, Tristan Guerand, Thomas Peyrin, Sayandeep Saha

Outline

This paper presents TT-TFHE, a framework that efficiently performs homomorphic encryption inference of deep learning models using Torus FHE (TFHE). Built on a family of convolutional networks called Truth-Table Neural Networks (TTnet), it effectively scales the use of TFHE to tabular and image datasets. The framework ships with an open-source Python implementation on top of the Concrete library, offering a straightforward CPU-based lookup-table implementation and an automated TTnet-based design tool. Experimental results show that it outperforms existing homomorphic encryption set-ups in both time and accuracy on three tabular datasets, and outperforms other TFHE set-ups as well as other homomorphic encryption schemes such as BFV and CKKS on the MNIST and CIFAR-10 image datasets. In addition, its memory footprint is very small (tens of MB for MNIST), in contrast to the tens to hundreds of GB required by other homomorphic encryption set-ups. This is the first work to provide practical private inference (seconds of inference time, tens of MB of memory) on both tabular and MNIST image datasets, and it scales easily to multiple threads and users on the server side.
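The core building block here is TFHE's ability to evaluate an arbitrary lookup table on an encrypted value (programmable bootstrapping); TTnet layers are compiled down to exactly such tables. The snippet below is a minimal sketch of that idea using Zama's concrete-python API (the 4-bit truth table, function name, and input range are illustrative assumptions, not the paper's actual TTnet circuits, and the paper's code may target an earlier Concrete release):

```python
# Minimal sketch, assuming the public concrete-python API; the 4-bit truth
# table below is illustrative, not one of TT-TFHE's learned TTnet tables.
from concrete import fhe

# A learned TTnet gate can be exported as a plain integer lookup table;
# TFHE evaluates it on ciphertexts via programmable bootstrapping.
table_values = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
truth_table = fhe.LookupTable(table_values)

@fhe.compiler({"x": "encrypted"})
def apply_table(x):
    # One encrypted table lookup per (4-bit) input value.
    return truth_table[x]

# Compile over all possible 4-bit inputs, then run on an encrypted input.
circuit = apply_table.compile(range(16))
assert circuit.encrypt_run_decrypt(5) == table_values[5]
```

In TT-TFHE, this kind of CPU-based table evaluation is the primitive the framework parallelizes on the server side, which is what makes scaling to multiple threads and users straightforward.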

Takeaways, Limitations

Takeaways:
  • Presents an efficient framework for TFHE-based homomorphic encryption inference of deep learning models
  • Achieves practical performance on both tabular and image datasets
  • Delivers low memory usage and fast inference
  • Provides an open-source Concrete implementation and an automated design tool
  • Scales to multiple threads and users on the server side
Limitations:
  • Lacks a detailed description of the specific hardware and evaluation environment
  • Experimental evaluation covers a limited range of datasets and models
  • Further research is needed to improve and optimize the performance of the Concrete implementation
  • Comparative analysis against other state-of-the-art FHE schemes may be limited