Google Cloud announces security platform powered by generative AI

On 25 April 2023 (Paris time), Google introduced Google Cloud Security AI Workbench, a platform powered by generative AI and built around a specialised security LLM, Sec-PaLM, at this year's RSA Conference in the U.S. The platform incorporates Google's security intelligence on the threat landscape together with Mandiant's frontline intelligence on vulnerabilities, malware, threat indicators, and threat actor behavioural profiles. It addresses three top security challenges: threat overload, toilsome tools, and the talent gap. Built on Google Cloud's Vertex AI infrastructure, it lets customers control their data with enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support.

Prevent threats from spreading beyond the first infection

Google combines threat intelligence with point-in-time incident analysis and novel AI-based detection and analytics to help prevent new infections. Two offerings were presented: (i) VirusTotal Code Insight, which uses Sec-PaLM to help analyse and explain the behaviour of potentially malicious scripts; and (ii) Mandiant Breach Analytics for Chronicle, which automatically notifies customers of active breaches in their environments, using Sec-PaLM to help contextualise critical findings and respond to them instantly.

Reduce toil 

Google also offers the two following solutions: (i) Assured OSS uses large language models (LLMs) to help add more open-source software (OSS) packages to Google's OSS vulnerability management solution; and (ii) Mandiant Threat Intelligence AI summarises threats relevant to a customer's organisation and helps act on them.

Close the talent gap 

Google embedded Sec-PaLM-based features in two solutions: Chronicle AI and Security Command Center AI. Chronicle AI lets customers search an enormous number of security events, interact conversationally with the results, ask follow-up questions, and quickly generate detections, all without learning a new syntax or schema. Security Command Center AI, meanwhile, translates complex attack graphs into human-readable explanations of attack exposure.

Security threats from generative AI platforms 

According to the WSJ, because generative AI platforms can generate working computer code, write essays, summarise long text documents, and reproduce other text-based outputs, they could enable new waves of cybercrime. For example, they can help create phishing emails as persuasive as human-written ones, or produce malware code.

Sources: 

For more news about cybersecurity on AstraIA Gear: https://www.astraiagear.com/category/cyber-security/ 

For more short news, follow our Page on LinkedIn, cheers!
