Field Notes: Knowledge Graphs and AI Agents

By Volodymyr Khrystynych · April 18, 2026

The Theme Nobody Stated Out Loud

Five talks at the Global Azure Bootcamp, ostensibly on different topics — API management, phishing, knowledge graphs, agents, context engineering. The theme that ran through all of them was the same: the things we used to govern at the application layer are now happening at the model layer, and most teams have not caught up.

Governing AI Traffic Like You Govern API Traffic

The first talk (Callon Campbell) framed this directly. Every team has shipped at least one AI feature. Most of them have done it by giving an agent unmoderated access to internal APIs. The result: cost explosions, no visibility, and the same outage modes that motivated API gateways in the first place.

The argument: treat agent traffic like external traffic. Token throttling scoped per user or per IP. Redis-backed caching for repeated queries. Circuit breakers when the backend wobbles. None of this is new — it is the API management playbook, applied to a new class of consumer.

The piece that stood out: Azure API Center now registers MCP tools alongside traditional APIs. The implication is that the gateway is the right place to discover, document, and govern model-callable tools. That feels right. The wrong place to govern this stuff is each agent's prompt.

Phishing Got Better While We Weren't Looking

Tamir Albalkni's talk was the one that should make engineering leaders nervous. The headline statistic: 54% of recipients click an AI-generated phishing email, compared to a much lower rate for the typo-ridden classics.

The mechanics have shifted too. Modern phishing does not chase passwords — it chases sessions. OAuth consent manipulation, device-code flow abuse, reverse-proxy session capture. The user types their credentials into a real-looking page, the attacker grabs the session cookie, and from that point on the attack is indistinguishable from legitimate access.

The defensive posture has to shift accordingly. MFA on the password is necessary but not sufficient. Conditional access, continuous session evaluation, and aggressive monitoring of unusual OAuth grants are where the leverage is now.

Knowledge Graphs as a Context Layer

The most useful talk, for our work, was Ashraf Ghonsim's piece on knowledge graphs (Fabric IQ). The premise: LLMs hallucinate when their context is fragmented. Pointing them at a relational schema with developer-oriented column names (customer_id, pkg_id, evt_ts_utc) does not help, because the names do not encode meaning.

A knowledge graph encodes meaning explicitly: "customer purchased product," "product stored in warehouse." The verbs are the relationships; the nouns are the entities. A model walking the graph can answer questions about meaning rather than questions about schema.

The architectural move that matters here is the ontology layer. Instead of dumping the schema into the prompt, you let the ontology narrow the context to the slice that is relevant to the user's question. The LLM sees a small, semantically coherent neighborhood instead of the whole world.
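The narrowing move can be made concrete with a toy triple store. The entities, relationships, and hop count below are all invented for illustration; the point is that the context handed to the model is the k-hop neighborhood of the question's entities, not the whole graph.

```python
from collections import defaultdict, deque

# A toy triple store: (subject, relationship, object).
TRIPLES = [
    ("Alice", "purchased", "WidgetPro"),
    ("WidgetPro", "stored_in", "Warehouse-7"),
    ("Bob", "purchased", "WidgetLite"),
    ("WidgetLite", "stored_in", "Warehouse-7"),
    ("Warehouse-7", "located_in", "Rotterdam"),
]

def neighborhood(seeds, triples, hops=2):
    """Return only the triples within `hops` edges of the seed entities —
    the small, semantically coherent slice handed to the LLM."""
    adjacency = defaultdict(list)
    for s, r, o in triples:
        adjacency[s].append((s, r, o))
        adjacency[o].append((s, r, o))
    seen, kept, kept_set = set(seeds), [], set()
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        entity, depth = frontier.popleft()
        if depth == hops:
            continue  # stop expanding past the hop budget
        for triple in adjacency[entity]:
            if triple not in kept_set:
                kept_set.add(triple)
                kept.append(triple)
            for neighbor in (triple[0], triple[2]):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
    return kept

# A question about Alice keeps her purchases and where they sit,
# and leaves Bob's side of the graph out of the prompt entirely.
context = neighborhood({"Alice"}, TRIPLES, hops=2)
```

In a real system the ontology, not a hop count, decides what counts as "relevant," but the shape of the operation is the same: seed on the question's entities, walk relationships, stop early.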

Two flavors of agent fall out of this naturally:

  • Data agents — answer questions by traversing the graph.
  • Operational agents — execute workflows against the graph (e.g., flag at-risk customers, hand off to a human for approval).

The split is useful because the safety properties are different. Data agents need to be accurate. Operational agents need to be reversible, with human gates at the right places.
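The human-gate property for operational agents can be sketched as a propose/approve/run split. Everything here is a hypothetical shape, not any product's API: the agent proposes actions, a reviewer approves, and only approved actions ever execute.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A proposed operational step; nothing runs until approved."""
    name: str
    target: str
    approved: bool = False

class OperationalAgent:
    """Sketch of the human gate: the agent proposes, a human approves,
    and run() executes only what was approved."""

    def __init__(self):
        self.queue: list[Action] = []
        self.log: list[str] = []

    def propose(self, name: str, target: str) -> Action:
        action = Action(name, target)
        self.queue.append(action)
        return action

    def approve(self, action: Action) -> None:
        action.approved = True  # the human gate, e.g. a review UI

    def run(self) -> None:
        for action in self.queue:
            if action.approved:
                self.log.append(f"{action.name}:{action.target}")
        # Unapproved actions are never executed; they stay queued
        # for review rather than needing a rollback.
        self.queue = [a for a in self.queue if not a.approved]
```

The design choice worth noting: safety comes from omission (nothing runs by default), which is cheaper to get right than reversibility after the fact.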

The Smaller Talks

A few quick notes:

  • OpenClaw on Azure (Kaan Turgut) — local-first AI assistant deployed through messaging platforms, useful when running on local infrastructure isn't an option. Architecture: chat → gateway → agent runtime → AI Foundry.
  • Context engineering on Azure (Tara Khani, Majid Fekri) — surveyed two products. Moorcheh.ai does vector search without re-indexing on update, with an integrated reranker. MemAnto.ai is an HNSW-backed memory system exposed as a remember/recall/answer API. Both interesting. Neither got enough technical depth to evaluate seriously from a talk.

What I'm Taking Back to the Practice

Three things:

  1. Govern agent traffic. If we are putting an agent in front of an internal API for a client, the gateway, the throttling, and the monitoring need to be part of the design, not bolted on after launch.
  2. Knowledge graphs are a real option. When a client has fragmented data and wants natural-language access, the right answer is sometimes "model the relationships explicitly" rather than "throw a bigger LLM at the schema."
  3. The threat model has moved. Anyone we ship for is operating in an environment where credible phishing is automated and cheap. Session-level controls are not optional anymore.

Volodymyr Khrystynych

Written by Volodymyr Khrystynych, partner at Khrystynych Innovations Inc., an AI and Web3 consultancy specializing in multimodal RAG, AI automation, AI training, and smart contract engineering on Ethereum and Solana.

Have a project in mind? Let's talk.