Neural Networks are enjoying widespread adoption in several fields, including image classification, speech recognition, vision and robot control. State-of-the-art networks feature a rich structure with several layers specialized for different functions, including filtering and pooling besides the traditional fully-connected layers. While such complexity is demanded by applications and justified by the high accuracy achieved, these networks remain vulnerable to adversarial attacks, i.e., small variations of the input patterns that cause widely different responses. In scenarios requiring safety and security certifications, verification of neural networks remains a challenging but indispensable task to ensure that the systems they power do not exhibit unwanted behaviours.
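To make the notion of an adversarial attack concrete, the toy sketch below (a hypothetical example with made-up weights, not drawn from any benchmark) shows how a tiny, FGSM-style perturbation of the input can flip the decision of a linear classifier:

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0, else class 0.
# Weights and inputs are illustrative values, not from a real network.
w = np.array([1.0, -1.0])
b = 0.0

def classify(x):
    return int(w @ x + b > 0)

x = np.array([0.51, 0.50])    # reference input, classified as 1
eps = 0.02                    # small perturbation budget

# FGSM-style step: move each coordinate by eps against the score's gradient.
x_adv = x - eps * np.sign(w)

print(classify(x))                 # 1
print(classify(x_adv))             # 0 -- the label has flipped
print(np.max(np.abs(x_adv - x)))   # each coordinate moved by only 0.02
```

Although no coordinate changed by more than 0.02, the predicted class is reversed; verification tools aim to prove that no such perturbation exists within a given budget.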

VNN-LIB is devoted to supporting this challenge and to providing researchers from different communities with a cooperation platform. In particular, we expect users and developers of neural networks to contribute examples of networks and the properties that they must fulfil; researchers in automated verification and reasoning should contribute methods and tools that can effectively check useful properties. Interaction between these communities should foster research and contribute to the development of tools that scale to the size of networks required for real-world applications.
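As an illustration of the kind of property contributors might provide, the fragment below is a hypothetical local-robustness query written in the VNN-LIB exchange format, which builds on SMT-LIB2 syntax; the bounds and variable names are illustrative assumptions, not taken from any published benchmark:

```
; Hypothetical local-robustness query (VNN-LIB / SMT-LIB2 syntax).
; X_i denote network inputs, Y_j denote network outputs.
(declare-const X_0 Real)
(declare-const X_1 Real)
(declare-const Y_0 Real)
(declare-const Y_1 Real)

; Inputs confined to a small box around a reference point.
(assert (>= X_0 0.49))
(assert (<= X_0 0.51))
(assert (>= X_1 0.48))
(assert (<= X_1 0.52))

; Negation of the desired property: the verifier searches for a
; counterexample in which class 1 overtakes class 0.
(assert (>= Y_1 Y_0))
```

A verifier that reports "unsat" on this query proves the network keeps class 0 on top throughout the input box; a satisfying assignment, by contrast, is exactly an adversarial example.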