

Principal Product Manager · inFeedo AI
Shikhar
Kesarwani
Nine years turning AI research into production.
Currently building @ inFeedo · Gurugram, India
9
Years · One Company
330+
Enterprise Deployments
80M+
User Interactions
<1%
Hallucination Rate
Multi-Agent Conversational AI Platform
Multi-step reasoning, tool calling, RAG-based retrieval. Automates 80%+ of repetitive HR, IT, and Finance queries across system-of-record apps.
30+ enterprises
5x QoQ growth
LLM Observability Infrastructure
Automated eval framework modelled on DeepMind's LLM-as-judge research. Classifies production failures by root cause and drives a continuous improvement loop.
<1% hallucination
95%+ accuracy
Attrition Prediction Model
Multi-signal behavioural classifier predicting employee churn 60–90 days in advance. Sentiment trends, response-rate anomalies, silence patterns — across 1M+ users.
1M+ user base
Primary revenue driver
May 2017 – Jan 2019
Software Engineer
Built Amber's analytics dashboard on the MEAN stack. Helped scale inFeedo from 10 to 100 enterprise customers.
Jan 2019 – May 2021
Data Scientist
Built inFeedo's sentiment engine and multi-label text classifier. Co-developed LexScore — core IP in HR-domain NLP.
Apr 2024 – Oct 2025
Senior Product Manager
Shipped production-grade GenAI features across 330+ enterprises. RAG, LLM evals, response personalisation.
Oct 2025 – Present
Principal Product Manager
Multi-agent automation infrastructure. LLM observability from the ground up. 5x QoQ conversation growth.
How
I Work
Most days start with a Notion doc and a context window. I write extensively before I prompt — the better the context, the better the output.
The rest is prototyping in code to pressure-test ideas faster than any meeting could, debugging Python pipelines, and reading everything Anthropic, OpenAI, and the scrappy labs in between are shipping.
- Writing context docs as LLM input before sessions start
- Prototyping in code to share with engineers and APMs
- Debugging Python. Writing scripts for things that shouldn't be manual
- Tracking what's happening at the frontier — research, model releases, company moves
Stack
Expertise
Let's talk.
If you're building something in AI and think there's a conversation worth having, I'm always open to it.
Favorite Model
Claude Opus
Prototypes In
design · code · docs
Current Streak
4 days active
Peak Hour
10:00 – 11:00
Avg Context
~80k tokens / session
Last Shipped
2 days ago
© 2026 Shikhar Kesarwani
shikhar k.