For my BNY experiential learning case study, I served as Prompt Engineer for a technical
project focused on designing and building an autonomous AI agent capable of detecting,
analyzing, and summarizing anomalies in a simulated banking application.
My system integrates directly with Prometheus, which continuously collects application
metrics. When an alerting rule fires, the alert flows through Alertmanager into my
custom-built AI agent. The agent appends the alert details to a structured system prompt
and sends it to my LLM engine (Gemini Flash), which is role-prompted to behave as an
expert error analyst.
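The prompt-construction step can be sketched roughly as follows. This is a minimal
illustration, not my production code: the `SYSTEM_PROMPT` text and the `build_prompt`
helper are hypothetical stand-ins, though the payload shape follows Alertmanager's
standard webhook format (a list of alerts, each with `labels` and `annotations`).

```python
import json

# Hypothetical role prompt; the real system prompt is more detailed.
SYSTEM_PROMPT = (
    "You are an expert error analyst for a banking application. "
    "Analyze the alert below and respond with JSON containing "
    "'thought_process', 'error_summary', and 'proposed_solutions'."
)

def build_prompt(webhook_payload: dict) -> str:
    """Append Alertmanager alert details to the structured system prompt."""
    sections = [SYSTEM_PROMPT, "\n--- ALERT DETAILS ---"]
    for alert in webhook_payload.get("alerts", []):
        # Keep only the fields the analyst prompt actually needs.
        sections.append(json.dumps(
            {
                "status": alert.get("status"),
                "labels": alert.get("labels", {}),
                "annotations": alert.get("annotations", {}),
                "startsAt": alert.get("startsAt"),
            },
            indent=2,
        ))
    return "\n".join(sections)

# Example Alertmanager-style payload (illustrative values only):
payload = {
    "alerts": [{
        "status": "firing",
        "labels": {"alertname": "HighErrorRate", "severity": "critical"},
        "annotations": {"summary": "5xx rate above 5% for 5 minutes"},
        "startsAt": "2024-01-01T12:00:00Z",
    }]
}
print(build_prompt(payload))
```

Keeping the alert details as embedded JSON, rather than free text, makes it easier for
the model to quote exact label values back in its analysis.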
Gemini returns a structured JSON response containing a detailed thought process, an error
summary, and proposed solutions, including actionable code-level fixes. I parse this output
into clean, readable Markdown reports that are automatically saved with timestamps for
traceability and review.
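The reporting step can be sketched like this. It is a simplified example assuming the
response fields named above (`thought_process`, `error_summary`, `proposed_solutions`);
the `render_report` helper and the `reports/` directory are hypothetical names.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def render_report(llm_json: str, out_dir: str = "reports") -> Path:
    """Parse the model's structured JSON and write a timestamped Markdown report."""
    data = json.loads(llm_json)
    lines = [
        "# Anomaly Report",
        "",
        "## Error Summary",
        data.get("error_summary", "(none)"),
        "",
        "## Thought Process",
        data.get("thought_process", "(none)"),
        "",
        "## Proposed Solutions",
    ]
    lines += [f"- {fix}" for fix in data.get("proposed_solutions", [])]

    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    # UTC timestamp in the filename gives each report a traceable, sortable name.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = out / f"report_{stamp}.md"
    path.write_text("\n".join(lines) + "\n")
    return path

# Example structured response (illustrative values only):
sample = json.dumps({
    "thought_process": "Error rate spiked immediately after the 12:00 deploy.",
    "error_summary": "HTTP 500s from the payments service.",
    "proposed_solutions": ["Roll back the deploy", "Add a null check in the handler"],
})
report_path = render_report(sample)
print(report_path.read_text())
```

Writing each report to its own timestamped file, rather than appending to a log, keeps
individual incidents easy to review and link to later.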
This resulted in an end-to-end autonomous pipeline that: