20 March 2024

AI has been enhancing cybersecurity tools for a long time. For example, machine learning has boosted the effectiveness of network security, anti-malware, and fraud detection software by spotting anomalies far faster than human analysts can. However, AI also poses threats to cybersecurity. AI-enabled risks include brute-force, denial-of-service (DoS), and social engineering attacks, among others.
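The anomaly-spotting idea mentioned above can be illustrated with a deliberately minimal sketch: a basic z-score test over synthetic request-rate data. This is not how production network-security products work (they use far richer models), and all names and numbers below are invented for illustration.

```python
# Minimal sketch: flagging anomalous network request rates with a
# simple statistical baseline. Data is synthetic and illustrative.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean (a basic z-score test)."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Synthetic requests-per-minute from one host; the spike at index 5
# mimics brute-force or DoS-like behaviour.
traffic = [52, 48, 50, 51, 49, 950, 50, 47, 53, 51]
print(find_anomalies(traffic))  # → [5]
```

The value of automating this is exactly what the paragraph describes: a machine can run such a check continuously across thousands of hosts, while a human analyst cannot.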

As AI tools become more affordable and accessible, cybersecurity risks linked to AI are expected to grow rapidly. ChatGPT can be manipulated into producing harmful code or a donation request that appears to come from a reputable organization or person. Moreover, various deepfake tools can generate highly convincing fake audio or video from minimal training data. Privacy concerns are also mounting as more users grow comfortable sharing sensitive data with AI.

Optimization of Cyberattacks

Attackers can use generative AI and large language models to escalate attacks to an unprecedented scale and level of complexity. They might exploit generative AI to come up with unique methods to undermine cloud infrastructure and use geopolitical tensions to carry out intricate attacks. Furthermore, they can enhance their ransomware and phishing strategies using generative AI.

Automated Malicious Code

AI tools such as ChatGPT can generate code as readily as they crunch numbers. Experts suggest that they may soon assist software developers, computer programmers, and engineers, or even take over some of their work. Although software like ChatGPT includes safeguards against generating malicious code, attackers have found clever prompting techniques that bypass them to produce malware with little effort.

This could be just the beginning: future AI-powered tools may enable developers with only basic programming skills to craft advanced malicious bots that can harvest data, corrupt networks, and sabotage systems without any human intervention.

Physical Safety

As AI is integrated into systems such as autonomous vehicles, manufacturing equipment, and medical devices, the risks to physical safety may rise. For instance, a cyber breach of an AI-powered autonomous vehicle could jeopardize the occupants' physical safety. Similarly, an adversary could tamper with the dataset used by a construction site's maintenance tools to create hazardous situations.

At a recent annual conference of the RBI ombudsman, Reserve Bank of India Governor Shaktikanta Das raised the issue of the looming cybersecurity threats that the rapid growth of artificial intelligence in the financial sector would entail. He said that safeguards must be robust enough to address the challenges the technology poses.

Das acknowledged that the use of AI could elevate the threat to consumers' information: “With the advent of AI, cybersecurity challenges can rise manifold. They can expose consumers to identity theft, fraud, and unauthorized access to personal information. Financial Institutions must dedicate substantial efforts to protect customer information and ensure that vulnerabilities exposing customers to risk are promptly identified and addressed.”

According to the RBI Governor, caution is necessary when adopting artificial intelligence, as the absence of essential safeguards could weaken data security between financial service providers and their customers. He noted, “Lax safeguards could create issues such as privacy invasion and subtle manipulations based on consumer profiling to nudge him into certain services that may not be the right fit.”

The identified threats notwithstanding, he stressed the benefits that AI could offer. In particular, Das highlighted the technology's usefulness in analyzing customer behaviour to detect anomalies. As an example, he pointed to the combination of technological tools and behavioural analysis in curbing fraudulent fund mobilization schemes.
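The behavioural analysis Das alluded to can be sketched in its simplest possible form: comparing a new transaction against a customer's own spending history. This is a toy illustration only; the threshold, function name, and figures are invented, and real fraud-detection systems combine many behavioural signals rather than a single rule.

```python
# Illustrative sketch: flagging a transaction that deviates sharply
# from a customer's own history. Threshold and data are invented.
from statistics import median

def flag_unusual(history, new_amount, factor=10):
    """Flag a transaction exceeding `factor` times the customer's
    median historical amount (a crude behavioural baseline)."""
    baseline = median(history)
    return new_amount > factor * baseline

past = [1200, 800, 950, 1100, 1000]   # synthetic past transactions (INR)
print(flag_unusual(past, 1500))       # → False: in line with history
print(flag_unusual(past, 250000))     # → True: worth a closer look
```

Even this crude baseline shows why behavioural analysis scales well: the check is cheap, personalized to each customer, and can run on every transaction in real time.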

In light of the growing embrace of AI, and the issues it raises, by the Indian government and governments around the globe, the RBI's advice on cybersecurity safety measures appears quite reasonable. In particular, the launch of the IndiaAI Mission illustrates the Indian government's enthusiasm for AI, as does MeitY's work on issuing advisories regarding AI models. Given the unique opportunities and challenges that AI offers, the RBI's urging of banks and other financial organizations to strengthen data security appears thoroughly justified.
