Today's AI/ML headlines are brought to you by ThreatPerspective

Digital Event Horizon

Critics Raise Alarm as Microsoft's Experimental AI Agent Reveals Flawed Security Measures


Microsoft has warned that its experimental AI agent can infect devices and pilfer sensitive user data. Critics argue that the company's security measures are insufficient and may shift liability to users, highlighting a broader concern about the integration of AI into consumer products without adequate consideration for their safety and privacy.

  • Microsoft's AI agent, Copilot Actions, can infect devices and steal sensitive user data if not properly secured.
  • Critics argue that the warning provided by Microsoft is insufficient for most users to fully understand the risks involved.
  • The company's security strategy relies on users reading dialog windows that warn of potential risks, which may not be effective for all users.
  • Experts question how long Copilot Actions will remain off by default and whether it will become a permanent feature.
  • Reed Mideke suggests that Microsoft should shift liability to the user for AI-related security issues.
  • Ongoing concerns about AI integration highlight the need for prioritizing security when introducing new technologies like Copilot Actions.



  • Microsoft has warned that its experimental AI agent, Copilot Actions, can infect devices and pilfer sensitive user data. The warning comes after researchers demonstrated known defects in large language models, including hallucinations and prompt injection, which can be exploited in attacks that exfiltrate data, run malicious code, and steal cryptocurrency.

    The fanfare surrounding the introduction of Copilot Actions was accompanied by a significant caveat - users are only recommended to enable the feature if they understand the security implications outlined. However, critics argue that this warning is not enough, as it may not be clear what exactly these risks entail for most users.

    Moreover, Microsoft's overall strategy for securing agentic features in Windows, such as non-repudiation, confidentiality preservation, and user approval, has been criticized for relying on users reading dialog windows that warn of the risks. This approach is seen as insufficient, especially since many users may not fully understand what is going on or might just click 'yes' all the time.

    The integration of Copilot Actions into Windows is off by default, but critics question how long this will remain the case. Some security experts even compare Microsoft's warning to warnings about macros in Office apps, which have remained a vulnerable spot for hackers despite being around for decades.

    Reed Mideke, a critic, stated that "Microsoft has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious." He further added that the solution lies in shifting liability to the user, much like the disclaimer often seen on LLM chatbots.

    The flaws in Copilot Actions are not an isolated incident, as other companies, including Apple, Google, and Meta, are also integrating AI into their products. This integration often begins as optional features but eventually becomes default capabilities whether users want them or not.

    In light of these concerns, Microsoft's warning serves as a timely reminder about the importance of prioritizing security when introducing new technologies like Copilot Actions. It remains to be seen how this situation will play out in the future, with many questions still unanswered regarding the long-term availability and functionality of this feature.

    Related Information:
  • https://www.digitaleventhorizon.com/articles/Critics-Raise-Alarm-as-Microsofts-Experimental-AI-Agent-Reveals-Flawed-Security-Measures-deh.shtml

  • https://arstechnica.com/security/2025/11/critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data/


  • Published: Wed Nov 19 14:45:45 2025 by llama3.2 3B Q4_K_M











    © Digital Event Horizon . All rights reserved.

    Privacy | Terms of Use | Contact Us