Cyber Crises: AI Risks, Black Box Vulnerabilities & Global Readiness

The cyber crisis is upon us. But are we ready?

Global cybersecurity remains fragile despite decades of warnings. In 2010, Richard Clarke’s Cyber War mapped cyberspace’s vulnerabilities and defences; by 2021, Eric Cole’s The Cyber Crisis declared everyone a target in an interconnected world. Just as industries and affluent populations have routinely dismissed such alarms, AI risks around bias and opacity have been ignored over the past decade. The result of this neglect is that ransomware now appears in 44% of breaches, AI-augmented attacks surge 33% year over year, and supply-chain exploits create cascading failures. In 2025, with sophisticated tools and frameworks at hand, nations and individuals alike remain exposed, while privacy becomes obsolete, mirroring the hollowing out of ethics and authenticity that underpins trust in technology and society.

Key Insights

  • Cybersecurity warnings have been ignored for decades, leaving everyone exposed despite advanced tools.
  • Algorithms erode cognitive discernment, enabling perception manipulation at scale.
  • AI infrastructure’s opacity creates blind spots where breaches go undetected by users.
  • Cyberspace has become a geopolitical battleground amplifying hybrid warfare risks.
  • Vulnerabilities only become strategic weaknesses through human error and poor monitoring.

Recent research suggests that when people are chronically overwhelmed by war, crisis, and polarised culture, many turn to endless feeds to numb or “zone out,” even though this doomscrolling is strongly associated with higher psychological distress and lower mental well-being. Algorithmic recommendations intensify the pattern: they learn from every micro-signal of engagement and feed users more of the same, effectively outsourcing curation while still implicitly placing the burden of discernment on individuals who are already cognitively overloaded. In principle, user choice should anchor these systems. In practice, once a population’s attention and affect are chronically burned out, the same architectures can be reoriented to steer perception and manipulate cognitive agency.

The digital landscape gets more complicated by the minute. LLMs, conversational agents, and autonomous AI systems are creating ‘digital minds’ that act as cognitive extensions, capable of understanding, generating, and responding to natural language and performing complex decision-making tasks. Early work on more agentic systems raises parallel questions about how societies will distribute epistemic authority and responsibility when synthetic interlocutors can both soothe and subtly steer users at scale. While we are in the early stages of what can be described as cognitive warfare, the core concern of this essay is readiness, or rather the lack of it, for the multi-faceted cyber crisis that these evolving technologies exacerbate.

Examining data breaches in AI ecosystems amplifies the concern. For instance, OpenAI does not make the code of its core ChatGPT models publicly available for scanning or modification, so users, whether tech-savvy or not, typically have no way to know if, and what, information was compromised. Security teams operate behind opaque “black boxes,” leaving users powerless yet responsible for protecting their data after the fact. But there is good news, tied to bigger concerns: OpenAI has recently released GPT Open-Source Series (GPT-OSS), a family of open-weight models available for public use, evaluation, and deployment.

Cybersecurity experts can download and run these models locally or on private infrastructure, allowing security scanning and customisation. The concern, however, remains that GPT-OSS is not the same model used in OpenAI’s main ChatGPT service, so an exploit in GPT-OSS does not imply the production model is vulnerable; we cannot know unless OpenAI announces it. Concerns only grow because, when a vulnerability related to prompt injection or model behaviour is found in GPT-OSS, it can inspire similar attack strategies against other models, including OpenAI’s production ones.

Spotlight on CVE-2024-27564: A Server-Side Request Forgery (SSRF) Vulnerability in OpenAI’s ChatGPT Infrastructure

CVE-2024-27564 was found in the pictureproxy.php component of OpenAI’s ChatGPT platform. SSRF generally allows attackers to trick a server into making unintended requests, often to internal, protected resources. In OpenAI’s case, the researcher Jacob Krut exploited the Custom GPTs “Actions” feature, an API-integration layer, through improper handling of a URL parameter, forcing the ChatGPT server to make unauthorised requests to internal or external systems controlled by the attacker. Although the core AI model was not compromised, the SSRF vulnerability in the Custom GPTs “Actions” feature allowed the attacker to:

  1. Access Azure’s Instance Metadata Service (169.254.169.254)
  2. Retrieve an Azure IMDS OAuth2 token for the Azure Management API
  3. Gain potential access to OpenAI’s cloud infrastructure, including resource management and configuration data

This means production secrets and cloud credentials were at risk. OpenAI patched the flaw promptly after disclosure; however, there is no absolute guarantee that a breach did not occur, only an absence of reported evidence of misuse. This is a paradoxical risk.
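The attack chain above hinges on a server endpoint that fetches a caller-supplied URL without validation. A minimal, deliberately vulnerable sketch illustrates the pattern (hypothetical code and naming, not OpenAI’s actual pictureproxy.php implementation):

```python
import urllib.request


def picture_proxy(url: str) -> bytes:
    """A deliberately minimal, VULNERABLE proxy: it fetches whatever URL
    the caller supplies, using the *server's* network identity."""
    # No scheme or destination checks, so internal-only addresses such as
    # the cloud metadata service become reachable through this server.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()


# From inside Azure, an attacker-controlled URL parameter like
#   http://169.254.169.254/metadata/identity/oauth2/token?...
# would be fetched from the server's privileged network position,
# exposing credentials the attacker could never reach directly.
```

Because nothing constrains the destination, even non-HTTP schemes pass straight through, which is exactly the class of flaw SSRF mitigations must close.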

Why it matters

The vulnerability has been exploited in the wild, with over 10,000 attack attempts recorded within a week, primarily targeting US government organisations and other critical sectors. Mitigation requires prompt patching, strict URL validation, and network segmentation that limits which resources a server can reach. However, because of this opacity, users, and even many administrators, may be unaware of such server-level requests, complicating detection and prevention. While direct exploitation is not guaranteed, given architectural and deployment differences, neither is it ruled out.
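The “strict URL validation” part of that mitigation can be sketched as a destination check that resolves the target host and rejects anything non-public, including the link-local metadata address 169.254.169.254. This is an illustrative sketch (the function name and policy are assumptions, not OpenAI’s actual fix):

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local
    addresses, e.g. the cloud metadata service at 169.254.169.254."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname; literal IPs resolve without a DNS lookup.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # is_global is False for private, loopback, link-local,
        # and reserved ranges alike.
        if not addr.is_global:
            return False
    return True
```

A real deployment would also pin the resolved address for the actual request (to avoid DNS-rebinding between check and fetch) and disable redirects, but even this minimal allow-list check blocks the IMDS access path described above.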

Applications and websites built on such complex and opaque algorithms defy full security scanning with existing tools, making proactive breach prevention nearly impossible. Worse, third-party integrations allow SSRF and similar vulnerabilities to pivot within protected networks, access sensitive data, or launch additional attacks, which makes them especially dangerous in complex AI infrastructure where full system scanning is infeasible. This situation is no longer theoretical. It is a global crisis that makes everyone a target, especially those with minimal defences.

So why is it a crisis?

International Infrastructure at Risk: Critical infrastructure, from power grids and water systems to transportation networks, now runs on interconnected digital systems that sophisticated actors increasingly target. Recent reports emphasise nation-state campaigns that exploit supply-chain vulnerabilities to disrupt entire economies, while ransomware groups hold hospitals and pipelines hostage. A notable example is the April 2025 cyberattack on the control system of Norway’s Lake Risevatnet dam: despite Norway being one of the most sophisticated cybersecurity practitioners among the middle powers, attackers forced a discharge valve open to 100% capacity for four hours.

Cyberspace as Geopolitical Battleground: Once imagined as borderless freedom, cyberspace has redefined the world as an abstract virtual space where data is generated, stored, transmitted, and accessed like currency. It has become a geopolitical battleground where state actors, criminals, and hacktivists wage asymmetric warfare. The 2022 Russian invasion of Ukraine showcased coordinated cyber warfare alongside kinetic operations, with Russia targeting Ukraine’s government, communication, and energy infrastructure. AI is accelerating this contest, enabling automated phishing at unprecedented scale and deepfake-driven influence operations that blur reality itself. We live in cyberspace.

The Only Time Vulnerability Becomes a Weakness: Technical vulnerabilities become strategic national weaknesses when exploited through human error, poor patching, or inadequate monitoring. Organisations that treat security as a checklist rather than a continuous capability remain perpetually exposed. Effective vulnerability management demands proactive intelligence and rapid response. Unfortunately, general users with limited technical knowledge lose defensive ground daily.

Information is Power, Misinformation is a Weapon of the Powerful: In an AI-amplified era, weaponised falsehoods erode trust faster than facts can counter them. State actors flood public discourse with highly tailored disinformation, while algorithms reward outrage over accuracy. Public attention itself has become the ultimate battlefield. This dynamic was exploited in past election-interference campaigns such as the Cambridge Analytica operation during the 2016 US election, driving dangerous political polarisation.

The confluence of advanced AI technologies, persistent vulnerabilities, and socio-political dynamics demands urgent attention and coordinated action.

Actionable Insights

Cyber readiness is no longer optional; it is essential for safeguarding individuals, nations, and the global digital commons. Especially as digital identity and the digital economy gain momentum, proactive coordination is critical to combating escalating threats.

  • Policymakers & Middle Powers must mandate transparency requirements for AI infrastructure (breach disclosure, model auditing) and fund supply-chain resilience programs targeting ICS/SCADA systems.​
  • Organisations & Tech Leaders must prioritise shifting from checklist compliance to continuous adaptive security with proactive intelligence, rapid patching, and behavioural monitoring over static vulnerability scans.​

There is a need to stop piling on fragmented strategies that invoke technology, policy, awareness, and ethics without operationalising them. General users should at least begin by enabling two-factor authentication, using password managers, limiting app permissions, and verifying urgent requests. As consumers of technology, users must recognise that they themselves are a vulnerability and, especially on social media, curate their feeds to preserve discernment. Only through holistic strategies, operational coalitions, and partnerships rooted in good faith and balanced reciprocity can we navigate the complex cyber crisis unfolding before us.


References & Recommended Reading

  1. Clarke, R. A. (2010). Cyber War: The next threat to national security and what to do about it.
  2. Cole, E. (2021). The Cyber Crisis: Protecting your digital future.
  3. World Economic Forum. (2025). Global Cybersecurity Outlook 2025. [PDF]
  4. CrowdStrike. (2025). 2025 Global Threat Report. [Link]
  5. Krut, J. (2025). CVE-2024-27564: Server-Side Request Forgery vulnerability analysis. [Technical report]. [Link]
  6. OpenAI. (2025). GPT Open-Source Series (GPT-OSS). [Link]
  7. Privacy International. (2022). Understanding algorithmic manipulation and doomscrolling effects on mental health. https://www.privacyinternational.org
  8. ACLED. (2023). Cyber operations and hybrid warfare: The Russia-Ukraine conflict. [Link]
  9. Norwegian National Cybersecurity Center. (2025). Report on the Lake Risevatnet dam cyberattack. [Link]
  10. Cambridge Analytica scandal. (2018). Investigative reports on election interference and misinformation. [Link]
  11. Satici, S. A., Gocet Tekin, E., Deniz, M. E., & Satici, B. (2023). Doomscrolling scale: Its association with personality traits, psychological distress, social media use, and wellbeing. Applied Research in Quality of Life, 18(2), 833–847. [Link]
  12. Jaycox, L. H., & Radovic, A. (2025). Resistance or compliance? The impact of algorithmic awareness on social media behaviors. Perspectives on Psychological Science, 20(4), 123–145. [Link]
