A robust deep learning algorithm capable of thwarting state-of-the-art adversarial examples/images with high accuracy

About

Classifier Resistant to Adversarial Example Attacks
University of Newcastle upon Tyne

The Challenge

Deep learning neural networks have had phenomenal success in complex problem-solving applications, but their susceptibility to adversarial attacks remains a primary safety and security concern for companies and nation-states alike. Delivering a robust deep learning algorithm capable of thwarting state-of-the-art adversarial examples/images with high accuracy remains an open problem for artificial intelligence research and development.

The Solution

We have developed a novel algorithm that provides security against adversarial examples/images by training multiple classifiers on randomly partitioned class labels. Classifiers are trained using meta-features derived from the outputs of each randomly partitioned class, which results in a much larger label space. Our approach maps the meaningful classes to a considerably smaller subset of that label space, significantly reducing the probability of an adversarial example/image being assigned a valid label (an illustrative sketch of this idea appears at the end of this section). The algorithm is highly robust: to ensure their adversarial examples/images receive a valid label, attackers must develop noise optimisation techniques that succeed across multiple classifier outputs simultaneously. The algorithm has produced excellent results against Carlini-Wagner (L2) and Projected Gradient Descent attacks, while retaining high accuracy on the MNIST (>97%) and CIFAR-10 (>80%) datasets.

Intellectual Property Status

Filed, awaiting publication. UK Patent Application No. 2117796.9.

The Opportunity

Application description: a randomized labelling and partitioning based method to defend against adversarial examples. We seek a partner who will invest in R&D to develop a solution to adversarial attacks, aiming for mass deployment through product, process, or service offerings. Enquiries about further technical and product development or licensing opportunities are encouraged. The technique could support:

  • Autonomous vehicles
  • Image recognition
  • Malware intrusion detection
  • Surveillance

Contact

Graeme Young, Business Development Manager
[email protected]
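The sketch below illustrates, in Python, the general partition-and-decode idea described under The Solution. It is not the patented implementation: the constants (10 classes, 4 classifiers, 4 groups per partition) and the reject-on-invalid-codeword rule are assumptions chosen purely for illustration, and both the training of the individual classifiers and the meta-feature stage are omitted.

```python
import numpy as np

# Illustrative sketch only -- NOT the patented algorithm. All constants
# (C, k, m) and the reject-on-invalid-codeword rule are assumptions.
rng = np.random.default_rng(0)

C = 10  # original classes (e.g. MNIST / CIFAR-10)
k = 4   # number of independently partitioned classifiers (assumed)
m = 4   # groups per random partition (assumed)

# Each of the k partitions randomly assigns every original class to one
# of m groups; row i is the label map used to train classifier i.
partitions = rng.integers(0, m, size=(k, C))

# Re-draw until every class has a distinct codeword (collisions are rare
# when m**k is much larger than C).
while len({tuple(partitions[:, c]) for c in range(C)}) < C:
    partitions = rng.integers(0, m, size=(k, C))

# Codeword for class c: its group ids across all k partitions. Only C of
# the m**k possible codewords are meaningful.
codewords = {tuple(partitions[:, c]): c for c in range(C)}

def decode(partition_preds):
    """Map the k partition-level predictions back to an original class,
    or return None (reject) when the combination is not a valid codeword
    -- the case a random adversarial perturbation is likely to land in."""
    return codewords.get(tuple(partition_preds), None)

# A prediction pattern matching class 3's codeword decodes to 3 ...
print(decode(partitions[:, 3]))            # -> 3
# ... while an arbitrary combination is usually rejected.
print(decode(rng.integers(0, m, size=k)))  # -> None, most likely
```

In a full system, each of the k classifiers would be a network trained on its partition's group labels, and the decode step (or a meta-classifier over the k outputs, as the description above suggests) would map the combined prediction back to one of the original classes or reject it.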

Key Benefits

  • Security against adversarial examples/images, achieved by training multiple classifiers on randomly partitioned class labels.
  • Classifiers trained using meta-features derived from the outputs of each randomly partitioned class.
  • Maps meaningful classes to a considerably smaller subset of the label space.
  • Reduces the probability of adversarial examples/images being assigned valid random labels (a worked example follows this list).
  • Highly robust: attackers must develop noise optimisation techniques for multiple classifier outputs to ensure that their adversarial examples/images receive a valid label.
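To make the probability claim above concrete, here is a back-of-envelope calculation using the same hypothetical constants as the earlier sketch (C classes, k classifiers, m groups per partition); none of these figures come from the patent:

```python
# Hypothetical figures for illustration only.
C, k, m = 10, 4, 4
valid_fraction = C / m**k   # fraction of the m**k codewords that map to a real class
print(f"{valid_fraction:.3%}")  # -> 3.906%, i.e. ~96% of codewords are rejected
```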

Applications

  • Autonomous vehicles
  • Validation of decisions
  • Web site design and optimisation
  • Safety-critical systems
