Fortune Views

― Advertisement ―

spot_img

Despite $75B in VC Investments in Q4, Startups Struggle to Secure Funding

Venture capital investment in U.S. startups surged to $74.6 billion in the fourth quarter of 2024, a significant rise from the $42 billion average...
HomeTechnologyArtificial Intelligence (AI)New AI security guidelines for critical infrastructure released by the U.S. government

New AI security guidelines for critical infrastructure released by the U.S. government

The U.S. government has introduced comprehensive security guidelines aimed at fortifying critical infrastructure against threats related to artificial intelligence (AI). Informed by a holistic assessment of AI risks across sixteen critical infrastructure sectors, the Department of Homeland Security (DHS) emphasized addressing threats both from and to AI systems.

The guidelines underscore the importance of promoting the responsible and trustworthy use of AI technology while safeguarding individuals’ privacy, civil rights, and liberties. They specifically address the augmentation of attacks on critical infrastructure, adversarial manipulation of AI systems, and potential shortcomings in AI tools that could lead to unintended consequences.

Covering four key functions—govern, map, measure, and manage—the guidance emphasizes the establishment of an organizational culture of AI risk management, understanding individual AI use contexts and risk profiles, developing systems to assess and track AI risks, and prioritizing actions to mitigate AI risks to safety and security.

Critical infrastructure owners and operators are urged to assess their sector-specific and context-specific use of AI and select appropriate mitigations accordingly. The guidance also emphasizes the need to understand dependencies on AI vendors and delineate mitigation responsibilities accordingly.

These developments come on the heels of the Five Eyes intelligence alliance’s cybersecurity information sheet, which highlighted the need for careful setup and configuration when deploying AI systems due to their attractiveness as targets for malicious cyber actors.

To bolster AI security, best practices recommended include securing deployment environments, reviewing AI model sources and supply chain security, validating AI system integrity, enforcing strict access controls, conducting external audits, and implementing robust logging.

Failure to adhere to robust security measures could lead to severe consequences, including model inversion attacks and trojanization of AI models, which can have cascading downstream impacts.

Furthermore, recent research has identified various prompt injection attacks targeting AI systems, with cybercriminals using carefully crafted questions or prompts to trick AI models into generating malicious content. Nation-state actors have also been observed weaponizing generative AI for espionage and influence operations.

Moreover, studies have shown that large language models (LLMs) can autonomously exploit one-day vulnerabilities in real-world systems using their descriptions, highlighting the need for heightened awareness and security measures in the AI landscape.