Machine learning: A key weapon in the cybersecurity battle

Since the dawn of the internet, companies have been fighting to stay ahead of cybercriminals. Artificial intelligence (AI) and machine learning have made this job easier by automating complex processes for detecting attacks and reacting to breaches. However, cybercriminals are also using this technology for their own malicious purposes.
More and more hackers are exploiting cognitive technologies to spy, take control of Internet of Things (IoT) devices and carry out malicious activities. CSO magazine called 2018 “the year of the AI-powered cyberattack”. For example, smart malware bots are now using AI to collect data from thousands of breached devices and can learn from that information to make future attacks more difficult to prevent and detect.
As hackers weaponize AI, cybersecurity professionals must fight fire with fire by using cognitive technology to identify and prevent attacks.
Sophisticated phishing at scale
Neural networks, modeled after the human brain, can be used to automate “spear phishing”: the creation of phishing emails or tweets that are highly personalized and target specific users. According to research presented at the Black Hat security conference, automated spear phishing achieved a success rate of 30 to 66 percent, which is 5 to 14 percent higher than large-scale traditional phishing campaigns and comparable to manual spear phishing campaigns.
Automation enables attackers to run spear phishing campaigns at an alarmingly large scale. However, companies are using the capabilities of AI as a countermeasure.
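To make that countermeasure concrete, here is a minimal sketch of how a text classifier could flag likely phishing messages. It illustrates the general approach only, not any vendor’s product; the example messages, labels and model choice (a naive Bayes classifier over word counts) are assumptions made for demonstration.

```python
# Hedged sketch: a simple text classifier that flags likely phishing messages.
# The example messages and labels below are invented for demonstration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password here immediately",
    "Congratulations, you won a prize, click this link to claim it",
    "Team meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Bag-of-words features feed a naive Bayes classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

incoming = ["Urgent: confirm your password to avoid account suspension"]
print("phishing" if classifier.predict(incoming)[0] == 1 else "legitimate")
```

In practice, a model like this would be trained on large labeled corpora of real messages and combined with other signals such as sender reputation and link analysis.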
According to a recent Ponemon study, 52 percent of companies are looking to add in-house AI talent to help them boost their cybersecurity efforts, and 60 percent said AI could provide deeper security than purely human efforts. That’s why new security solutions such as IBM QRadar use machine learning to automate the threat detection process, helping cyber incident investigation and response efforts get started as much as 50 times faster than before.
CAPTCHA and authentication concerns
Another area in which AI tools are already helping cybercriminals do their dirty work is in breaking complex codes, whether CAPTCHAs or usernames and passwords. Using processes such as optical character recognition, the software can identify and learn from millions of images, eventually gaining the ability to recognize and solve a CAPTCHA. Similarly, hackers are combining the same optical character recognition with automated login requests to test stolen usernames and passwords across multiple sites, a technique known as credential stuffing.
Fighting back against such large-scale attacks requires leaning on these same AI technologies. One way to do this is to use learning-enabled technology to understand what is normal for a system, then flag unusual incidents for human review. Security professionals need AI-based monitoring solutions to provide automated help and identify which alerts pose a real and immediate risk.
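As a rough illustration of that baseline-and-flag approach, the sketch below uses an isolation forest to learn what “normal” login activity looks like and to surface outliers for human review. The features, example events and model choice are hypothetical, not a description of any specific monitoring product.

```python
# Minimal anomaly-detection sketch: learn a baseline of "normal" activity,
# then flag unusual events for human review. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event: [hour_of_day, failed_attempts, bytes_transferred_mb]
normal_logins = np.array([
    [9, 0, 12.4],
    [10, 1, 8.1],
    [14, 0, 20.3],
    [11, 0, 5.7],
    [16, 1, 15.0],
])

# Fit the model on historical activity assumed to be mostly benign.
detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(normal_logins)

# New events: a typical login and a 3 a.m. login with many failures.
new_events = np.array([
    [10, 0, 10.2],
    [3, 25, 950.0],
])

# predict() returns 1 for inliers and -1 for anomalies to escalate.
for event, label in zip(new_events, detector.predict(new_events)):
    status = "flag for review" if label == -1 else "normal"
    print(event, "->", status)
```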
Malware
Smart malware, which “learns” how to become less detectable, also poses a significant threat. Ordinary malware is typically defeated by “capturing” a sample and reverse engineering it to figure out how it works. With smart malware, however, it is far more difficult to analyze how the embedded neural network decides whom to attack.
While reverse engineering smart malware remains challenging, neural networks have been successful at recognizing malicious domains created by a domain generation algorithm (DGA), which produces pseudo-random domain names. A smart DGA keeps changing to stay ahead of attempts to thwart it, but a defensive neural network can likewise keep learning the strategies hackers deploy and how to defeat them.
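A minimal sketch of that defensive idea, assuming a small feed of labeled domains, might train a neural network on character n-grams, which tend to capture the “randomness” of algorithmically generated names. The example domains below are invented; a real deployment would train on much larger labeled feeds of benign and DGA-generated domains.

```python
# Toy sketch of DGA detection with a small neural network: classify domain
# names as legitimate or algorithmically generated from character n-grams.
# The training examples below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

benign = ["google.com", "wikipedia.org", "ibm.com", "github.com"]
dga_like = ["xqzjvkpd.net", "uqwhzyrtko.info", "pzxkvjqw.biz", "rtyqkzxw.org"]

domains = benign + dga_like
labels = [0] * len(benign) + [1] * len(dga_like)  # 1 = suspected DGA

# Character n-grams capture the unusual letter combinations typical of
# generated domains; a small multilayer perceptron does the classification.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(domains, labels)

for candidate in ["mail.ibm.com", "kqzxwvjtrp.com"]:
    verdict = "likely DGA" if model.predict([candidate])[0] == 1 else "likely benign"
    print(candidate, "->", verdict)
```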
Fight security threats before they happen
One of the most powerful aspects of security enabled by AI and machine learning is the ability to uncover patterns and learn from unstructured data. As a result, these tools can provide security professionals with the means to combat attacks, as well as insights into emerging threats and recommendations on how to defend against impending incidents. Additionally, machine learning can help locate vulnerabilities that may be difficult for human security teams to find.
Cybercriminals are already using AI to launch larger-scale, more sophisticated attacks. Here’s the good news: companies can fight back by using these same technologies. If your organization has been considering implementing AI but hasn’t yet put a plan in place, the time is now, and the business case has arrived. Cognitive technologies such as neural networks and automated security monitoring solutions can help bring your business’s defenses into the cyber age and give you the most cutting-edge weapons to defend against emerging threats.
Discover the ways that IBM Cloud Private for Data can enable security by supporting the development and deployment of AI and machine learning capabilities.
Source: Thoughts on Cloud
