
Event Details

Cooperative Activation Functions

Presenter: William Briguglio
Supervisor:

Date: Wed, August 14, 2024
Time: 11:00 a.m.
Place: Zoom, link below.


Note: Please log in to Zoom via SSO using your UVic Netlink ID.


Join Zoom Meeting:


Meeting ID: 843 6841 6374

One tap mobile

+16475580588,,84368416374# Canada

+17789072071,,84368416374# Canada


Dial by your location

        +1 647 558 0588 Canada

        +1 778 907 2071 Canada

Meeting ID: 843 6841 6374

Find your local number:  


Abstract


Sensitive information that cannot be shared presents an obstacle to gathering centralized datasets for machine learning tasks. Federated learning (FL) remedies this by leveraging data held across multiple clients without exchanging any private data. However, extra steps must be taken to ensure model privacy during the inference phase, in order to eliminate the risk that model weights will be reverse-engineered to compromise the privacy of the training data. Owners of a model, whether jointly trained with FL or conventionally trained, may also wish to keep their model private in order to retain a monopoly on its use and commercialization. Such models may further be intended for use on sensitive data that likewise cannot be shared with the model owner. In this presentation I will detail cooperative activation functions (CAFs), which enable inference on homomorphically encrypted data without requiring changes to model training or architecture, while maintaining the privacy of both the inference data and the model weights. This work was motivated by vulnerabilities we identified in a comparable private-inference approach; these vulnerabilities compromise the privacy of the model weights and, therefore, of the training data. We evaluated CAFs on the same dataset used by that approach to measure their effect on model accuracy and compute time. The results indicate that CAFs deliver lossless or near-lossless performance.
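To make the setting concrete: additively homomorphic encryption lets a server evaluate linear layers directly on ciphertexts, but nonlinear activations (e.g., ReLU) cannot be computed homomorphically, which is the gap that schemes like CAFs address. The toy sketch below is not the presenter's CAF construction; it is a minimal, insecure Paillier-style example (tiny primes, integer inputs, all names hypothetical) showing a linear layer computed on encrypted data, with the activation applied only after decryption.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic):
#   Enc(a) * Enc(b) mod n^2  ->  Enc(a + b)
#   Enc(a) ^ k     mod n^2  ->  Enc(k * a)
# Tiny primes, for illustration only -- NOT secure.
p, q = 10007, 10009
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because we use g = n + 1

def enc(m):
    r = random.randrange(1, n)
    return (pow(n + 1, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):
    x = pow(c, lam, n2)
    m = ((x - 1) // n * mu) % n
    return m - n if m > n // 2 else m  # center-lift to recover negatives

def linear_layer_enc(cxs, w, b):
    """Server-side: compute Enc(w.x + b) from ciphertexts, never seeing x."""
    acc = enc(b)
    for c, wi in zip(cxs, w):
        acc = (acc * pow(c, wi % n, n2)) % n2  # homomorphically add wi * xi
    return acc

x = [3, -1, 4]            # client's private input (integers for the toy)
w, b = [2, 5, -1], 7      # server's private model parameters

cxs = [enc(xi) for xi in x]           # client encrypts its input
c_out = linear_layer_enc(cxs, w, b)   # server computes on ciphertexts only
z = dec(c_out)                        # client decrypts the pre-activation
print("pre-activation:", z, "-> ReLU:", max(0, z))  # 4 -> 4
```

The point of the sketch is the asymmetry it exposes: the linear part of the network runs entirely under encryption, but the ReLU at the end had to be applied in the clear after decryption. Protocols for private inference differ precisely in how they evaluate that nonlinear step without leaking the inputs or the weights.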