Artificial Intelligence (AI) will soon be able to assist hackers in carrying out cyberattacks, according to a recently published report from the United Kingdom's Government Communications Headquarters (GCHQ) titled "The near-term impact of AI on the cyber threat."
According to Reuters, the British agency warns that the rapid development of AI technologies will likely drive a global increase in cyberattacks such as ransomware and phishing over the next two years, because these tools make it easier for less experienced hackers to cause damage online.
More specifically, the report asserts that AI will primarily enhance the social engineering capabilities of threat actors. Generative AI (GenAI) can already be used to craft convincing communications with victims, including lure documents, without the translation, spelling, and grammar errors that are frequently indicative of phishing.
This activity is almost certain to increase over the next two years as new models are developed and usage grows. Over the same period, AI's ability to summarize data rapidly will likely enable threat actors to identify high-value assets for examination and exfiltration, making cyberattacks more targeted and significantly more effective.
AI-Assisted Cyberattacks
Ransomware remains the most serious cyber threat facing organizations and businesses in the United Kingdom, even as hackers refine their business models to boost efficiency and maximize profits.
AI is reportedly already being used for malicious cyber activity, and the frequency and severity of cyberattacks and cyber operations involving phishing, hacking, and reconnaissance will very likely increase as the technology advances. This pattern is expected to continue through 2025 and beyond.
The report states that by 2025, the use of GenAI and large language models (LLMs) will make it all but impossible for anyone to recognize phishing, spoofing, and social engineering attempts, let alone determine whether an email or password-reset request is legitimate. Advances in AI will therefore make it harder for everyone to distinguish legitimate practices from scams.
This also makes it more challenging for network administrators to patch known vulnerabilities before they can be exploited, and AI is likely to make the problem even more acute by giving attackers faster and more accurate tools for locating devices with inadequate cybersecurity protections.
An Expected AI Hacking Development
GCHQ's warning echoes Google's own forecast from the previous year, which predicted that LLMs and GenAI would be used in phishing, SMS, and other cyberattacks, alongside social engineering techniques that make information, including voice and video, appear more authentic.
According to ZDNET, the report also predicts the development of LLMs and other generative AI tools offered as commercial services, which could help attackers launch their assaults more effectively and with less effort.
Using generative AI to produce material such as an invoice reminder is not inherently harmful in itself, yet attackers can repurpose the same capability to target victims, meaning that purpose-built malicious AI or LLMs will not even be strictly necessary.