Posts Tagged:

Agent Security and Prompt Injection: How to Safely Integrate AI Tools Video
Nov 20, 2025 3 min read

Agent Security and Prompt Injection: How to Safely Integrate AI Tools

🛡️ Agent Security and Prompt Injection The capabilities of Large Language Models (LLMs) to control applications via tool calls (functions) are revolutionary. However, this introduces serious security risks, primarily from Prompt Injection. Prompt injection occurs when a user or outside data source (like a LinkedIn profile’s “About” section) injects malicious...
Application Control via LLM Conversation: Fusing the UX/UI Boundary Video
Jul 21, 2024 3 min read

Application Control via LLM Conversation: Fusing the UX/UI Boundary

🗣️ Application Control via LLM Conversation Welcome to the recap of my July 2024 presentation at the Vegas Tech Alley AI Meetup. This talk explores a different paradigm for application design: making the LLM conversation the primary method of control and navigation, effectively fusing the boundaries between the user interface...