In this example, an algorithm correctly identifies people from photo input. But once a few pixels are changed in an adversarial attack, the algorithm can no longer recognize the humans. Credit: Jan Hendrik Metzen et al., provided by the author
Artificial intelligence algorithms are quickly becoming a part of everyday life. Many systems that require strong security are either already supported by machine learning or will soon be. These systems include facial recognition, banking, military targeting applications, robotics and autonomous vehicles, to name a few.
This raises an important question: How secure are these machine learning algorithms against malicious attacks?
In an article published today in Nature Machine Intelligence, my colleagues at the University of Melbourne and I discuss a possible solution to the weaknesses of machine learning models.
We propose that incorporating quantum computing into these models can produce new algorithms with strong resilience against adversarial attacks.
Risks of data manipulation attacks
Machine learning algorithms can be remarkably accurate and effective for many tasks, and they are particularly useful for image classification and identification. However, they are also highly vulnerable to data manipulation attacks, which can pose serious security risks.
Data manipulation attacks, which involve very subtle changes to image data, can be launched in several ways. One attack mixes corrupted data into the dataset used to train an algorithm, causing it to learn things it shouldn't.
Manipulated data can also be injected during the testing phase (after training is complete), in cases where the AI system continues to train the underlying algorithms while in use.
People could even carry out such attacks from the physical world. Someone could put a sticker on a stop sign to fool a self-driving car's AI into identifying it as a speed-limit sign. Or, on the front lines, troops might wear uniforms designed to trick AI-based drones into identifying them as ordinary landscape features.
Either way, the consequences of data manipulation attacks can be dire. For example, if a self-driving car uses a compromised machine learning algorithm, it might incorrectly predict that there are no humans on the road when, in fact, there are.
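To make the pixel-level attack concrete, here is a minimal sketch of the idea against a toy linear classifier. Everything here (the image, the weights, the step size `eps`) is invented for illustration; real attacks target deep networks, but the mechanics are the same: nudge each pixel a tiny amount in the worst-case direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64

# Toy linear classifier on a 64-pixel grayscale "image":
# score = w . (x - 0.5); score > 0 means "person detected".
# All numbers here are synthetic, for illustration only.
w = rng.normal(size=n_pixels)

# A clean input the classifier confidently labels "person":
# each pixel nudged 0.1 in the direction the classifier likes.
x = 0.5 + 0.1 * np.sign(w)

def score(img):
    return float(w @ (img - 0.5))

# FGSM-style attack: shift every pixel a small step eps against
# the gradient of the score (for a linear model, the gradient
# with respect to the input is just w).
eps = 0.15
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(score(x))      # positive: "person"
print(score(x_adv))  # negative: person no longer detected
print(np.abs(x_adv - x).max())  # every pixel moved at most eps
```

Although no pixel changes by more than 0.15 on a 0-to-1 scale, the prediction flips, which is exactly the kind of subtle manipulation described above.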
How can quantum computing help?
In our article, we describe how integrating quantum computing with machine learning can lead to secure algorithms called quantum machine learning models.
These algorithms are carefully designed to exploit special quantum properties that allow them to find specific patterns in image data that cannot be easily manipulated. The result would be algorithms that are resilient and secure against even powerful attacks, without the enormous cost of the "adversarial training" currently used to teach algorithms how to resist such attacks.
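For contrast, the "adversarial training" mentioned above, the expensive classical defence, can be sketched with a toy logistic-regression model: at every training step the inputs are perturbed in their worst-case direction before the model learns from them. All data, sizes, and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-class data: 200 sixteen-pixel "images" (all made up),
# labeled by a hidden linear rule the model must recover.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(d)
eps, lr = 0.1, 0.5

# Adversarial training loop: perturb each input a step eps in the
# direction that most increases the loss (FGSM), then take the
# gradient step on those perturbed inputs instead of the clean ones.
for _ in range(300):
    grad_x = np.outer(sigmoid(X @ w) - y, w)  # d(loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)         # worst-case inputs
    p = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p - y) / n           # learn from them

acc = ((X @ w > 0) == (y == 1)).mean()
print("clean accuracy:", acc)
```

The extra inner perturbation roughly doubles the work per step on this toy problem; on large deep networks the overhead is far worse, which is the cost the proposed quantum models aim to avoid.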
Moreover, quantum machine learning could allow faster algorithm training and more accurate learning of features.
How will it work?
Today's classical computers store and process information as "bits," or binary digits, the smallest unit of data a computer can process. Because classical computers follow the laws of classical physics, each bit holds exactly one of two values: 0 or 1.
Quantum computing, by contrast, follows the principles of quantum physics. Information in quantum computers is stored and processed as qubits (quantum bits), which can exist as 0, 1, or a combination of both. A quantum system that exists in multiple states simultaneously is said to be in superposition. Quantum computers can be used to design intelligent algorithms that exploit this property.
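The bit/qubit distinction can be made concrete with a few lines of linear algebra, here using NumPy as a stand-in for a real quantum simulator. A qubit's state is a normalized 2-component amplitude vector; a Hadamard gate (a standard quantum operation) turns the definite state |0⟩ into an equal superposition.

```python
import numpy as np

# A qubit's state is a 2-component complex vector of amplitudes
# for |0> and |1>. A classical bit is restricted to exactly
# [1, 0] ("0") or [0, 1] ("1"); a qubit can be any normalized mix.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0

# Measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(plus) ** 2
print(probs)  # [0.5, 0.5]: equally likely to read 0 or 1
```

Algorithms that manipulate many such amplitudes at once, across many entangled qubits, are what give quantum machine learning models their distinctive power.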
However, while there are great potential benefits in using quantum computing to secure machine learning models, it can also be a double-edged sword.
On one hand, quantum machine learning models will provide critical security for many sensitive applications. On the other hand, quantum computers could be used to generate powerful adversarial attacks, capable of easily fooling even state-of-the-art conventional machine learning models.
Moving forward, we'll need to think seriously about the best ways to protect our systems, as an adversary with access to early quantum computers would pose a major security threat.
Limitations to overcome
Current evidence suggests that we are still a few years away from quantum machine learning becoming a reality, due to limitations in the current generation of quantum processors.
Today's quantum computers are relatively small (fewer than 500 qubits), and their error rates are high. Errors can arise for a variety of reasons, including imperfect fabrication of qubits, faults in the control circuitry, or loss of information through interaction with the environment (called "quantum decoherence").
However, we have seen huge advances in quantum hardware and software over the past few years. According to recent quantum hardware roadmaps, the quantum devices built in the coming years are expected to contain hundreds of thousands of qubits.
These devices should be able to run powerful quantum machine learning models, helping to protect a wide range of industries that rely on machine learning and AI tools.
Around the world, governments and the private sector alike are increasing their investments in quantum technologies.
This month the Australian government launched a National Quantum Strategy, which aims to develop the country's quantum industry and commercialize quantum technologies. According to CSIRO, Australia's quantum industry could be worth about A$2.2 billion by 2030.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: From self-driving cars to military surveillance: Quantum computing can help secure the future of AI systems (2023, May 28), retrieved May 28, 2023 from https://phys.org/news/2023-05-self- driving-cars -military control-quantum.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.