As the year ahead presents cybersecurity defenders with a host of new challenges and threat actors continue to evolve their tactics, Zaheer Ebrahim, Solutions Architect at Trend Micro MEA, has shared top insights from his organisation’s latest research and the cybersecurity developments on his radar for the year.
He shared these insights below:
Closing the security gap in cloud environments: Cloud adoption among organisations in Africa has grown exponentially in recent years. At the same time, this has expanded the attack surface, and organisations will need to ensure that, as their IT infrastructure has transformed, so too have their security parameters.
Trend Micro’s latest research shows that cloud environments will become the playground for tailor-made worms built to exploit cloud technologies. Threat actors use misconfigurations as easy entry points for these worms. One study by the Open Worldwide Application Security Project (OWASP) found that misconfigurations ranked among the top API risks for organisations. Because cloud environments are highly interconnected, worming capabilities have the potential to set off rapid propagation across them.
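To make the idea of misconfigurations as entry points concrete, the sketch below shows one simple, automated check a security team might run: flagging publicly readable Amazon S3 buckets via the boto3 library. This is an illustrative example only, not Trend Micro's tooling; it assumes AWS credentials are already configured and that public ACL grants are the misconfiguration of interest.

```python
# Minimal sketch: flag S3 buckets whose ACL grants read access to all users.
# Assumes AWS credentials are configured in the environment (illustrative only).
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            acl = s3.get_bucket_acl(Bucket=name)
        except ClientError:
            continue  # e.g. access denied or bucket owned by another account
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") == PUBLIC_GRANTEE:
                public.append(name)
                break
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"Publicly readable bucket: {name}")
```

A check like this is only one small piece of a cloud security posture review, but it illustrates how easily a single misconfiguration can be surfaced before an attacker finds it.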
The ideal approach to any cloud migration journey is to ensure cybersecurity is considered every step of the way. This requires security teams to look beyond the usual malware and vulnerability scans, and to proactively evaluate cloud environments in anticipation of these worm attacks.
Protect data to defend ML models: Poisoning the data behind cloud-based machine learning (ML) models is an emerging threat that will grow in prominence in 2024. Defenders will have to contend with an expansive attack surface, as a weaponised ML model can open the floodgates to severe consequences such as exposing confidential data to extraction, writing malicious instructions, and producing biased content that could lead to user dissatisfaction or legal repercussions.
While ML models are still an expensive integration for many businesses, 69% of IT leaders consider ML integration a top priority for operations. With this in mind, it will be crucial for those who do deploy this technology to validate and authenticate training datasets to prevent an attack. Companies might opt to cut costs by moving their algorithms off premises; however, because the training data is then sourced from third parties, this can leave an ML model vulnerable to data poisoning.
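One basic form that validating and authenticating a training dataset can take is integrity checking: comparing each data file against checksums published by a trusted source before it ever reaches the training pipeline. The sketch below is a minimal, hypothetical example of that idea; the file layout and manifest format are assumptions, not a prescribed standard.

```python
# Minimal sketch: verify training files against a trusted checksum manifest
# before training, to catch tampered or substituted data. Illustrative only;
# the "training_data" directory and "manifest.json" format are assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose hashes do not match the trusted manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"file.csv": "<sha256>", ...}
    return [
        filename
        for filename, expected in manifest.items()
        if sha256_of(data_dir / filename) != expected
    ]

if __name__ == "__main__":
    suspect = verify_dataset(Path("training_data"), Path("manifest.json"))
    if suspect:
        raise SystemExit(f"Possible tampering, do not train on: {suspect}")
    print("All training files match the trusted manifest.")
```

Checksums alone cannot detect poisoning that happens upstream of the manifest, which is why provenance of the data source matters as much as the verification step itself.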
Generative AI gives cybercriminals an advantage: AI continues to be a tool that aids the fight against cybercriminals, but it is also one used by bad actors themselves. In social engineering scams like spear phishing, harpoon whaling, and virtual kidnapping, generative AI has been key to criminals' increased success rates.
This, however, is just the tip of the iceberg, with 2024 expected to bring further advancements in these kinds of attacks. Voice cloning has already started to make its way into criminals' toolkits for identity theft and social engineering.
While the malicious large language model (LLM) WormGPT was shut down in August last year, it is unlikely to be the last tool of its kind. Cybercriminals often pivot quickly to alternatives, and AI researchers have demonstrated that it is possible to trick generative AI systems into circumventing their own censorship rules.