Document Type

Article

Publication Date

11-2020

Publication Source

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Abstract

In this article, we present a layer-wise refinement method for neural network output range analysis. While approaches such as nonlinear programming (NLP) can directly model the high nonlinearity introduced by neural networks in output range analysis, they are known to be difficult to solve in general. We propose to use a convex polygonal relaxation (overapproximation) of the activation functions to cope with the nonlinearity. This allows us to encode the relaxed problem as a mixed-integer linear program (MILP) and to control the tightness of the relaxation by adjusting the number of segments in the polygon. Starting with a segment number of 1 for each neuron, which coincides with a linear programming (LP) relaxation, our approach selects neurons layer by layer and iteratively refines the relaxation. To tackle the increase in the number of integer variables under tighter refinement, we bridge propagation-based and programming-based methods by dividing and sliding the layer-wise constraints: given a sliding number s, for the neurons in layer l we encode only the constraints of the layers between l - s and l. We show that our overall framework is sound and provides a valid overapproximation. Experiments on deep neural networks demonstrate a significant improvement in output range analysis precision using our approach compared to the state-of-the-art.
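To illustrate the idea of controlling relaxation tightness through the number of polygon segments, the following minimal Python sketch (not taken from the article; the function names and the grid-based bounding construction are illustrative assumptions, and the MILP encoding that would select among segments with binary variables is omitted) builds a k-segment piecewise-linear band enclosing a sigmoid activation on a bounded input interval and reports how the looseness of the band shrinks as k grows.

```python
import numpy as np

def polygonal_relaxation(f, lb, ub, k, grid=1000):
    """Build a k-segment piecewise-linear band enclosing f on [lb, ub].

    On each of the k sub-intervals, the chord between the endpoints is shifted
    up and down by the extreme deviations of f from that chord (measured on a
    fine grid), so the resulting polygon encloses the graph of f on the grid.
    Returns a list of (x_left, x_right, slope, intercept_lower, intercept_upper).
    """
    breaks = np.linspace(lb, ub, k + 1)
    segments = []
    for xl, xr in zip(breaks[:-1], breaks[1:]):
        xs = np.linspace(xl, xr, grid)
        slope = (f(xr) - f(xl)) / (xr - xl)
        chord = f(xl) + slope * (xs - xl)
        dev = f(xs) - chord
        # Intercepts of the shifted chords y = slope * x + b.
        b_low = f(xl) - slope * xl + dev.min()  # lower bounding line
        b_up = f(xl) - slope * xl + dev.max()   # upper bounding line
        segments.append((xl, xr, slope, b_low, b_up))
    return segments

def relaxation_area(segments):
    """Total area between the upper and lower bounding lines (looseness)."""
    return sum((xr - xl) * (b_up - b_low) for xl, xr, _, b_low, b_up in segments)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
for k in (1, 2, 4, 8):
    segs = polygonal_relaxation(sigmoid, -4.0, 4.0, k)
    print(f"k={k}: relaxation area = {relaxation_area(segs):.4f}")
```

In such an encoding, a larger k gives a tighter overapproximation at the cost of more segments (and, in a MILP formulation, more integer variables), which is the trade-off the layer-wise refinement and sliding-window constraint scheme described in the abstract is designed to manage.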

Inclusive pages

3323-3335

ISBN/ISSN

0278-0070

Document Version

Published Version

Comments

This open-access article is provided for download in compliance with the publisher’s policy on self-archiving. To view the version of record, use the DOI: https://doi.org/10.1109/TCAD.2020.3013071

Publisher

IEEE (Institute of Electrical and Electronics Engineers)

Volume

39

Peer Reviewed

Yes

Issue

11

