docs/ICML_workshop/db-icml.tex (1 addition, 1 deletion)
@@ -347,7 +347,7 @@ \section{Related work}
 
 \section{Conclusion}\label{sec:conclusion}
 
-$\partial\mathbb{B}$ nets are differentiable nets that are hard-equivalent to non-differentiable, boolean-valued functions. $\partial\mathbb{B}$ nets can therefore learn discrete functions by gradient descent. Ensuring hard-equivalence requires defining new kinds of activation functions and network layers. `Margin packing' is a potentially general technique for constructing differentiable functions that are hard-equivalent yet gradient-rich. An advantage of $\partial\mathbb{B}$ nets is that `hardening' to 1-bit weights has provably identical accuracy. At inference time $\partial\mathbb{B}$ nets are highly compact and potentially cheap to evaluate. Preliminary experiments demonstrate that $\partial\mathbb{B}$ nets are competitive with existing classification approaches.
+$\partial\mathbb{B}$ nets are differentiable nets that are hard-equivalent to non-differentiable, boolean-valued functions. $\partial\mathbb{B}$ nets can therefore learn discrete functions by gradient descent. Ensuring hard-equivalence requires defining new kinds of activation functions and network layers. `Margin packing' is a potentially general technique for constructing differentiable functions that are hard-equivalent yet gradient-rich. An advantage of $\partial\mathbb{B}$ nets is that `hardening' to 1-bit weights has provably identical accuracy. At inference time $\partial\mathbb{B}$ nets are highly compact and potentially cheap to evaluate. Preliminary experiments demonstrate that $\partial\mathbb{B}$ nets achieve comparable performance to existing approaches.
 
 % Acknowledgements should only appear in the accepted version.
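The paragraph touched by this diff compresses the paper's core claims: differentiable surrogates that are hard-equivalent to boolean functions, gradient-rich construction via `margin packing', and lossless `hardening' to 1-bit values. Below is a minimal sketch of the hard-equivalence idea in Python/NumPy, under stated assumptions: `soft_and`, `packed_and`, `harden`, and the particular scaling formula are illustrative guesses at the flavor of the technique, not the paper's actual definitions.

```python
import numpy as np

def harden(z):
    """Map a soft bit in [0, 1] to a hard boolean (threshold at 0.5)."""
    return z > 0.5

def soft_and(x, y):
    """Differentiable AND surrogate that is hard-equivalent to boolean AND:
    min(x, y) > 0.5 exactly when x > 0.5 and y > 0.5. Drawback: the
    gradient reaches only one argument at a time."""
    return np.minimum(x, y)

def packed_and(x, y):
    """A margin-packing-flavored variant (assumed form, for illustration
    only): scale the representative bit's signed margin by a strictly
    positive, input-dependent factor. The output never crosses the 0.5
    threshold relative to soft_and, so hard-equivalence is preserved,
    yet gradients now flow to both arguments."""
    margin = np.minimum(x, y) - 0.5        # signed margin in [-0.5, 0.5]
    scale = 0.5 + 0.25 * (x + y)           # positive factor in [0.5, 1]
    return 0.5 + margin * scale            # output stays in [0, 1]

# Hardening commutes with both surrogates, mirroring the flavor of the
# claim that hardening changes nothing about the discrete behavior.
rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(1000)
hard = harden(x) & harden(y)
assert np.all(harden(soft_and(x, y)) == hard)
assert np.all(harden(packed_and(x, y)) == hard)
```

The design point the sketch tries to capture: `soft_and` already satisfies hard-equivalence but is gradient-poor, while the packed variant keeps the hardened output identical (the multiplier is strictly positive, so the sign of the margin is preserved) and routes gradient signal through every input.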