Charts
DataOn-chain
VIP
Market Cap
API
Rankings
CoinOSNew
CoinClaw🦞
Language
  • 简体中文
  • 繁体中文
  • English
Leader in global market data applications, committed to providing valuable information more efficiently.

Features

  • Real-time Data
  • Special Features
  • AI Grid

Services

  • News
  • Open Data(API)
  • Institutional Services

Downloads

  • Desktop
  • Android
  • iOS

Contact Us

  • Chat Room
  • Business Email
  • Official Email
  • Official Verification

Join Community

  • Telegram
  • Twitter
  • Discord

© Copyright 2013-2026. All rights reserved.

简体繁體English
|Legacy

Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report

CN
Decrypt
Follow
3 hours ago
AI summarizes in 5 seconds.

Google has patched a vulnerability in its Antigravity AI coding platform that researchers say could allow attackers to run commands on a developer’s machine through a prompt injection attack.


According to a report by Cybersecurity firm Pillar Security, the flaw involved Antigravity’s find_by_name file search tool, which passed user input directly to an underlying command-line utility without validation. That allowed malicious input to convert a file search into a command execution task, enabling remote code execution.


“Combined with Antigravity's ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger it through a seemingly legitimate search, all without additional user interaction once the prompt injection lands,” Pillar Security researchers wrote.


Launched last November, Antigravity is Google’s AI-powered development environment designed to help programmers write, test, and manage code with the assistance of autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google acknowledged the report the same day, marking the issue as fixed on February 28.





Google did not immediately respond to a request for comment by Decrypt.


Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions. Because AI tools often process external files or text as part of normal workflows, the system may interpret those instructions as legitimate commands, allowing an attacker to trigger actions on a user’s machine without direct access or additional interaction.


The threat of prompt injection attacks for large language models came into renewed focus last summer when ChatGPT developer OpenAI warned that its new ChatGPT agent could be compromised.


“When you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information,” OpenAI wrote in a blog post.


To demonstrate the Antigravity issue, the researchers created a test script inside a project workspace and triggered it through the search tool. When executed, the script opened the computer’s calculator application, showing that the search function could be turned into a command execution mechanism.


“Critically, this vulnerability bypasses Antigravity's Secure Mode, the product's most restrictive security configuration,” the report said.


The findings highlight a broader security challenge facing AI-powered development tools as they begin to execute tasks autonomously.


“The industry must move beyond sanitization-based controls toward execution isolation. Every native tool parameter that reaches a shell command is a potential injection point,” Pillar Security said. “Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely.”


免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Selected Articles by Decrypt

1 hour ago
Coinbase Flags Proof-of-Stake Chains Like Ethereum, Solana as Potential Quantum Risks
1 hour ago
Core Scientific Reveals $3.3 Billion Junk-Bond Sale to Pivot Further from Bitcoin Mining to AI
2 hours ago
Strategy Now Holds $62 Billion in Bitcoin—These Are Its Biggest BTC Buys
View More

Table of Contents

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Related Articles

avatar
avatarcoindesk
35 minutes ago
Crypto\\\'s great hope in Senate\\\'s Clarity Act still has a path to survive tight calendar
avatar
avatarbitcoin.com
56 minutes ago
Strategy Could Reach 1 Million Bitcoin by Late 2026; River Notes STRC Inflows Dwarf ETF Net Gains
avatar
avatarDecrypt
1 hour ago
Coinbase Flags Proof-of-Stake Chains Like Ethereum, Solana as Potential Quantum Risks
avatar
avatarcoindesk
1 hour ago
Inside the hunt for Satoshi: Filmmakers chase crypto’s biggest mystery
avatar
avatarDecrypt
1 hour ago
Core Scientific Reveals $3.3 Billion Junk-Bond Sale to Pivot Further from Bitcoin Mining to AI
APP
Windows
Mac

X

Telegram

Facebook

Reddit

CopyLink