(S03) (INFOSEC) Trust No Prompt: The New Frontier of LLM Security
Tuesday, October 14, 2025 11:15 AM to 11:45 AM · 30 min. (Canada/Eastern)
Information
As organizations adopt Large Language Models (LLMs) for automation, analysis, and interaction, new security challenges emerge—particularly at the prompt layer guiding model behavior. This 30-minute session explores the intersection of LLM and Model Control Plane (MCP) security, highlighting overlooked risks like prompt injection, manipulation, and exploitation. While LLMs’ flexibility enables dynamic responses, it also creates attack surfaces for data leaks, privilege escalation, and model subversion. Attendees will gain insight into how output manipulation, indirect chaining, and adversarial prompts can undermine both user safety and system integrity.
Type
Session
Stage
Infosec Stage



