Field Notes: An AI Pet Health Platform

By Volodymyr Khrystynych · January 20, 2025

The Promise on the Marketing Page

PetPortal.ai sells a simple line: expert answers about your pet's health, in sixty seconds. Behind that line is a real product — a chat advisor backed by AI, a subscription, a community feed, transcription, recommendations, and the operational glue that has to hold all of it together when traffic shows up.

This is a brief tour of what it took to ship.

Where AI Actually Lives in the Product

Plenty of "AI products" are a chat box bolted to a marketing site. PetPortal isn't that. The model shows up in three places that genuinely change the user experience:

  • The advisor itself — the chat surface where a pet owner describes a symptom and gets a structured response. Not a diagnosis, by design; a triage and a set of next steps.
  • Recommendations — what to feed, what to try, what to monitor. These are personalized to the animal and the conversation history, not a static "things people like" list.
  • The community feed — surfaced posts and threads relevant to whatever the user is currently dealing with.

Each one is a different shape of AI: a real-time chat loop, a recommender, a relevance ranker. The temptation is to treat them as one system. They aren't. Building them as separate features that share a small amount of context worked far better than trying to make a single "intelligent layer" that did everything.
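The "separate features, small shared context" idea can be sketched in a few lines. Everything here is illustrative: the type and function names are hypothetical, not PetPortal's actual code, and the model calls are stubbed out.

```typescript
// A small shared context object - the only thing the three features have in common.
interface PetContext {
  species: string;
  ageMonths: number;
  recentSymptoms: string[]; // distilled from conversation history
}

// Feature 1: the advisor. A real-time chat loop with its own prompt and model call
// (stubbed here); returns triage and next steps, never a diagnosis.
async function adviseTriage(ctx: PetContext, message: string): Promise<string> {
  return `Triage for a ${ctx.species} (${ctx.ageMonths}mo) reporting: ${message}`;
}

// Feature 2: the recommender. Personalized to the animal, not a static bestseller list.
function recommendItems(ctx: PetContext): string[] {
  return ctx.recentSymptoms.includes("itching")
    ? ["oatmeal shampoo", "flea comb"]
    : ["standard grooming kit"];
}

// Feature 3: the relevance ranker. Orders the community feed by overlap with
// what the user is currently dealing with.
function rankCommunityPosts(ctx: PetContext, posts: string[]): string[] {
  const matches = (p: string) =>
    ctx.recentSymptoms.some((s) => p.toLowerCase().includes(s));
  return [...posts.filter(matches), ...posts.filter((p) => !matches(p))];
}
```

Each function can evolve, be load-tested, and be billed independently; only `PetContext` couples them.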

Subscriptions Make the AI Cheaper

A free chat advisor is a cost center. A subscription advisor is a unit-economics question.

We wired Stripe in early. Tier limits, plan upgrades, and usage tracking sat next to the AI calls so that the cost of inference was always visible against revenue per user. This is unglamorous infrastructure, but it is the thing that turns "we have an AI feature" into "we have a business." Without it, the model bill becomes a surprise.

A pattern worth repeating on any AI product with a per-user cost: the billing and the inference share a customer. They should share a code path.
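One way to sketch that shared code path, with every name here being illustrative rather than PetPortal's actual implementation: a metering wrapper that checks the tier limit before inference and records usage and cost right next to the model call.

```typescript
type Tier = "free" | "plus" | "pro";

// Hypothetical per-month message limits per tier.
const TIER_LIMITS: Record<Tier, number> = { free: 5, plus: 100, pro: 1000 };

interface Customer {
  stripeId: string;
  tier: Tier;
  usedThisMonth: number;
}

interface ModelResult {
  text: string;
  costUsd: number; // inference cost, surfaced alongside revenue per user
}

// Billing and inference share a code path: the limit check, the model call,
// and the usage increment all live in one function.
async function meteredInference(
  customer: Customer,
  prompt: string,
  callModel: (p: string) => Promise<ModelResult>
): Promise<string> {
  if (customer.usedThisMonth >= TIER_LIMITS[customer.tier]) {
    // Don't call the model at all - prompt an upgrade instead.
    throw new Error(`Tier limit reached for ${customer.stripeId}`);
  }
  const { text, costUsd } = await callModel(prompt);
  customer.usedThisMonth += 1;
  // In production this is where usage would be reported to Stripe's metering;
  // the point is that cost is visible at the call site, never a surprise.
  console.log(`customer=${customer.stripeId} cost=$${costUsd.toFixed(4)}`);
  return text;
}
```

Because the wrapper owns both the limit and the call, the model bill can never drift away from the subscription data.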

n8n.io as the Workflow Backbone

A surprising amount of the system runs through n8n. New users hit a workflow. Subscription events hit a workflow. Content moderation, email, and even parts of the recommendation pipeline are wired up as nodes.


This is a contrarian choice in a stack that already has TypeScript everywhere, and it earned its keep:

  • Things change weekly. A founder-led product reshapes its onboarding flow constantly. n8n lets non-engineers see and edit the flow without touching deployable code.
  • Integrations are the work. Stripe, OpenAI, email, the database, the notification provider — most of the flow is gluing services together. n8n's whole purpose is gluing services together.
  • Visual workflows are easier to debug. When a webhook stops firing, finding the broken step in n8n takes seconds. Finding the same break in a fan-out of TypeScript files takes longer.

The line we drew: n8n owns orchestration; the application owns user-facing logic. Anything that has to render in the UI lives in code; anything that fires off the back of an event can live in n8n.
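That boundary can be made concrete with a fire-an-event-and-move-on pattern. This is a sketch under assumptions: the webhook URL, event shape, and function names are invented for illustration, and the HTTP poster is injectable so the orchestration side stays swappable.

```typescript
interface DomainEvent {
  type: string;
  payload: unknown;
}

type Poster = (url: string, body: string) => Promise<void>;

// Default poster: a plain JSON POST to an n8n webhook node (URL is hypothetical).
const postJson: Poster = async (url, body) => {
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
};

// The application's only orchestration responsibility: emit the event.
// What happens next (welcome email, CRM sync, moderation) lives in n8n.
async function emitEvent(event: DomainEvent, post: Poster = postJson): Promise<void> {
  await post("https://n8n.example.com/webhook/petportal-events", JSON.stringify(event));
}

// User-facing logic stays in code: the UI response is rendered here,
// not inside a workflow.
async function onSignup(userId: string, post: Poster = postJson): Promise<string> {
  await emitEvent({ type: "user.signed_up", payload: { userId } }, post);
  return "Welcome aboard! Tell us about your pet.";
}
```

Anything that must render in the UI returns from `onSignup`; anything that merely fires off the back of the event is a workflow's problem.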

What Was Hard

Three things, in descending order of pain:

  1. Latency budget for the advisor. "Sixty-second answers" is a UX promise. Streaming helps, but transcription, retrieval, and generation each have their own tail. Watching p95 was a daily exercise.
  2. Accessibility on a chat surface. Real-time chat with screen reader support is genuinely tricky. We ended up rebuilding our own message renderer rather than wrestling a third-party one into compliance.
  3. Telling users what the AI is and is not. People bring real anxiety to a pet symptom. A confident-sounding wrong answer is worse than no answer. Most of our prompt engineering was actually safety engineering.
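The daily p95 watch mentioned in the first item above can be as simple as recording each advisor response time and reading off the 95th percentile. A minimal sketch, with hypothetical names and an in-memory sample buffer standing in for a real metrics store:

```typescript
// Rolling buffer of end-to-end advisor latencies (transcription + retrieval
// + generation), one sample per answered message.
const latenciesMs: number[] = [];

function recordLatency(ms: number): void {
  latenciesMs.push(ms);
}

// Nearest-rank percentile: sort a copy, index into it. Good enough for a
// dashboard; a production system would use a sketch like t-digest instead.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(sorted.length * p) - 1);
  return sorted[idx];
}

function p95(): number {
  return percentile(latenciesMs, 0.95);
}
```

The point is less the arithmetic than the habit: the tail, not the average, is what breaks a "sixty-second answers" promise.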

Takeaways

  • Different AI features want different shapes. Build them as different features.
  • Wire billing and inference together. Cost should be visible from day one.
  • Workflow tools earn their keep where the business logic moves faster than the code.
  • The hardest part of an AI product aimed at anxious humans is calibrating the voice.

Volodymyr Khrystynych

Written by Volodymyr Khrystynych, partner at Khrystynych Innovations Inc., an AI and Web3 consultancy specializing in multimodal RAG, AI automation, AI training, and smart contract engineering on Ethereum and Solana.

Have a project in mind? Let's talk.