
Commit c4628d5

shs037 authored and tensorflower-gardener committed
Skips adding noise when noise_multiplier is 0 for fast clipping.
PiperOrigin-RevId: 522396275
1 parent de98368 commit c4628d5

File tree

1 file changed: +11 −8 lines


tensorflow_privacy/privacy/keras_models/dp_keras_model.py

Lines changed: 11 additions & 8 deletions
@@ -231,16 +231,19 @@ def train_step(self, data):
               self._num_microbatches,
           )
       )
-      grads = gradient_clipping_utils.add_aggregate_noise(
-          self,
-          clipped_grads,
-          eff_num_microbatches,
-          self._l2_norm_clip,
-          self._noise_multiplier,
-      )
+      if self._noise_multiplier > 0:
+        grads = gradient_clipping_utils.add_aggregate_noise(
+            self,
+            clipped_grads,
+            eff_num_microbatches,
+            self._l2_norm_clip,
+            self._noise_multiplier,
+        )
+      else:
+        grads = clipped_grads
       output_metrics[privatized_loss_name] = weighted_loss
     else:
-      logging.info('Computing gradients using microbatching.')
+      logging.info('Computing gradients using original clipping algorithm.')
       # Computes per-example clipped gradients directly. This is called
       # if at least one of the layers cannot use the "fast" gradient clipping
       # algorithm.
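
The guard matters because DP-SGD noise is Gaussian with standard deviation proportional to l2_norm_clip * noise_multiplier: a multiplier of 0 yields identically zero noise, so the old code spent random-number generation just to add zeros. Below is a minimal sketch of the pattern, using a hypothetical add_gaussian_noise helper (an assumption for illustration, not the real gradient_clipping_utils.add_aggregate_noise API):

import tensorflow as tf

# Hypothetical stand-in for gradient_clipping_utils.add_aggregate_noise,
# shown only to illustrate why noise_multiplier == 0 makes noising a no-op.
def add_gaussian_noise(clipped_grads, num_microbatches, l2_norm_clip,
                       noise_multiplier):
  # DP-SGD-style noise: stddev = l2_norm_clip * noise_multiplier, scaled
  # by the number of microbatches the gradients are averaged over.
  stddev = l2_norm_clip * noise_multiplier
  return [
      g + tf.random.normal(tf.shape(g), stddev=stddev) / num_microbatches
      for g in clipped_grads
  ]

clipped_grads = [tf.ones([2, 2])]
noise_multiplier = 0.0  # model configured without privacy noise

# The pattern this commit introduces: skip the (all-zero) noise draw.
if noise_multiplier > 0:
  grads = add_gaussian_noise(clipped_grads, 1, 1.0, noise_multiplier)
else:
  grads = clipped_grads  # identical result, no random-number generation

With the else branch, grads is simply the clipped gradients, so training steps configured with noise_multiplier == 0 avoid sampling and adding zero-valued noise tensors entirely.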
