Some studies estimate that artificial intelligence (AI) could increase business productivity by up to 40%. AI allows people to use their time more efficiently by automating tasks. From asking our phones about the weather to being chauffeured by our cars, AI is advancing quickly.
AI in everyday technologies
AI is all around us and used in almost every industry. It allows businesses to automate certain processes, forecast sales and help marketers with ad targeting. Social listening tools track how many times a company has been mentioned and create an overview of how customers react and respond to it. These different types of interactions can be turned into measurable data to be analysed.
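The "measurable data" idea can be sketched minimally. This is a hypothetical example, not any particular social listening product: the brand name, sample posts and reaction labels are all invented for illustration.

```python
from collections import Counter

# Hypothetical sample of social posts mentioning a brand.
posts = [
    {"text": "Loving the new Acme update!", "reaction": "positive"},
    {"text": "Acme support never replied to me.", "reaction": "negative"},
    {"text": "Has anyone tried Acme yet?", "reaction": "neutral"},
]

brand = "Acme"

# Count raw mentions and aggregate reactions into measurable data.
mentions = sum(1 for p in posts if brand in p["text"])
sentiment = Counter(p["reaction"] for p in posts)

print(mentions)          # 3
print(dict(sentiment))   # {'positive': 1, 'negative': 1, 'neutral': 1}
```

Real tools layer machine-learned sentiment models on top of this kind of aggregation, but the output is the same in spirit: counts and categories that can be tracked over time.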
While AI has many positive ways to enhance the world we live in, it can also be used for malicious purposes. Experts predict cyber-attacks ranging from automated hacking and targeted spam emails that use information gathered from social media, to speech synthesis used for impersonation. The physical world is also at risk, as compromised cyber-physical systems could let attackers seize control of autonomous vehicles.
How threat actors can use deepfakes for spear phishing attacks
Phishing and spear phishing are behind the majority of cybercrime attacks, and AI is accelerating these techniques. You can no longer believe what you see: deepfakes use AI to manipulate videos so that people appear to say or do things that never happened. The realism of these videos is growing at an alarming rate. It will not be long before they are used to bypass security controls and carry out social engineering attacks. Malicious AI has the potential to trick automated detection systems and violate privacy protections.
This shows that AI and machine learning have the potential to wreak havoc. Attackers will be able to generate automated, targeted content to pull off more advanced spear phishing attacks: personalized messages that are far more likely to succeed. AI can learn to imitate an individual's writing style, which could be used to target employees and manipulate them into clicking on malicious links. In the not-so-distant future, we could even see "vishing" attacks that emulate a specific person's voice.
DeepLocker: IBM's proof-of-concept attack tool
IBM Research has developed a proof-of-concept attack tool called DeepLocker. Its purpose is to explore what happens when AI merges with current malware capabilities. This new class of AI-powered malware remains dormant and undetected until it reaches its victim: an AI model identifies the target through facial or voice recognition, and only then does the malware unleash its malicious payload. DeepLocker is particularly dangerous because it could spread across millions of systems while remaining invisible to conventional scanners.
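The concealment idea IBM described can be illustrated with a benign sketch: the payload is encrypted, and the decryption key is derived from the trigger itself (for example, a face-recognition embedding), so the key never appears in the binary and scanners cannot inspect the payload. Everything below is hypothetical and harmless; the "payload" is just a string, and XOR with a hashed key stands in for real encryption.

```python
import hashlib

def derive_key(trigger_value: bytes) -> bytes:
    # The key is a hash of the trigger observation, so it only exists
    # when the trigger condition is actually met.
    return hashlib.sha256(trigger_value).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Harmless string standing in for concealed program logic.
payload = b"run_payload()"
target_attribute = b"target-face-embedding"  # hypothetical trigger value

ciphertext = xor_bytes(payload, derive_key(target_attribute))

# Only an observation that reproduces the trigger recovers the payload;
# any other input yields garbage, so static analysis sees nothing useful.
assert xor_bytes(ciphertext, derive_key(target_attribute)) == payload
assert xor_bytes(ciphertext, derive_key(b"someone-else")) != payload
```

This is why trigger-keyed concealment is hard to defend against: without the exact trigger condition, defenders cannot even determine what the hidden code would do.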
The future of AI
AI has changed the risk landscape for the entire world. Criminals will be able to leverage this technology to hack and phish with human levels of realism. The impact on security is vast, but AI will also be our greatest resource in defending against these very threats.
If you liked this article, you may also like:
The top 3 cloud security challenges
The best practices of Administrative Privilege Management
Incremental vs differential backup: which one is right for your company?