Adopters of enterprise AI are discovering how TP ICAP turned thousands of dusty CRM meeting notes into instant, actionable answers with Amazon Bedrock: a fast, secure way to build Retrieval Augmented Generation (RAG) and text-to-SQL workflows that cut research time from hours to seconds and keep audit trails intact.

  • Speed boost: ClientIQ reduced time spent on research tasks by around 75%, turning manual searches into near-instant results.
  • Hybrid retrieval: Uses metadata pre-filtering plus semantic search for precise, context-rich document hits.
  • Two-path query engine: LLM routes plain-English questions to either RAG for notes or text-to-SQL for structured analytics.
  • Enterprise security: Integrates Salesforce permissions via Okta claims so answers respect existing access controls.
  • Automated quality checks: Bedrock Evaluations are part of CI/CD, measuring relevance, factual accuracy and retrieval precision.

How TP ICAP made meeting notes useful again, fast and with a human touch

TP ICAP’s problem was one many companies recognise: tens of thousands of qualitative meeting records in Salesforce that held valuable context but were effectively invisible. The Innovation Lab wanted quick, trustworthy answers rather than pages of hit-or-miss search results. The result, ClientIQ, feels like talking to a knowledgeable colleague: it summarises, cites sources and links back to the original Salesforce record so you can verify the claim and follow up.

There’s a qualitative payoff here too: responses read naturally and include direct links to records, so the output doesn’t feel like a vague AI hallucination. Internally, users have said searches are noticeably less tedious and insights arrive with the context they need.

Why a two-path approach (RAG plus text-to-SQL) beats a one-size-fits-all assistant

Rather than forcing every question through a general LLM, ClientIQ first classifies the user’s intent. If the question concerns unstructured meeting notes, a RAG workflow retrieves relevant documents and builds a context-aware answer. If it’s an analytical or tabular request, the system generates SQL to query Athena and returns concrete figures. That means you get narrative insight when you want it and hard numbers when you need them, without confusing the two.
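For illustration, here is a minimal Python sketch of that routing pattern using the Bedrock Converse API via boto3. The model ID, prompt wording and downstream helpers are assumptions for the sketch, not TP ICAP’s actual implementation.

```python
# Minimal sketch of two-path routing, assuming boto3 credentials and an illustrative
# model ID/prompt; the real ClientIQ prompts and guardrails are not public.
import boto3

bedrock = boto3.client("bedrock-runtime")

def classify_intent(question: str) -> str:
    """Ask a fast model whether the question needs NOTES (RAG) or ANALYTICS (text-to-SQL)."""
    resp = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed ID; verify for your Region
        messages=[{"role": "user", "content": [{
            "text": "Reply with NOTES or ANALYTICS only.\nQuestion: " + question}]}],
    )
    return resp["output"]["message"]["content"][0]["text"].strip().upper()

def answer(question: str) -> str:
    if classify_intent(question) == "ANALYTICS":
        return answer_with_sql(question)   # hypothetical text-to-SQL + Athena path
    return answer_with_rag(question)       # hypothetical Knowledge Base RAG path

def answer_with_sql(question: str) -> str:
    # Stub: a model such as Nova Pro would draft SQL that is executed on Athena.
    raise NotImplementedError

def answer_with_rag(question: str) -> str:
    # Stub: retrieval against the meeting-notes Knowledge Base (see the hybrid search sketch below).
    raise NotImplementedError
```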

This split also helps control cost and latency: TP ICAP can run smaller, cheaper models where appropriate and only use heavier generation when needed. It’s a pragmatic design if you care about both speed and accuracy.

How hybrid search and metadata tagging lift retrieval from “noisy” to “helpful”

ClientIQ uses hybrid search: metadata filters narrow the universe first, then semantic embeddings find the best matches within that subset. For instance, asking for “executive meetings with AnyCompany in Chicago” first filters on Visiting_City_C = Chicago, then performs semantic matching, avoiding irrelevant hits from other divisions or cities.
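As a hedged illustration, the same filter-then-match pattern can be expressed with the Bedrock Knowledge Bases retrieve API in boto3; the knowledge base ID is a placeholder and the metadata key simply mirrors the example above.

```python
# Sketch of metadata pre-filtering plus semantic search against a Bedrock Knowledge Base.
# Your metadata keys, result count and filter shape may differ.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

def retrieve_meetings(query: str, city: str, kb_id: str = "YOUR_KB_ID"):
    resp = agent_runtime.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {
                "numberOfResults": 5,
                # The metadata filter narrows the candidate set before semantic matching.
                "filter": {"equals": {"key": "Visiting_City_C", "value": city}},
            }
        },
    )
    return resp["retrievalResults"]

# Example: "executive meetings with AnyCompany in Chicago"
# hits = retrieve_meetings("executive meetings with AnyCompany", city="Chicago")
```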

TP ICAP improved retrieval quality with custom chunking during ingestion: one CSV per meeting, enriched with topic tags generated by Nova Pro. Amazon Titan v1 embeddings index each meeting in OpenSearch Serverless, and Bedrock Knowledge Bases handle session context and source attribution so answers stay grounded and traceable.
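A sketch of what per-meeting ingestion might look like, assuming an S3 data source and the sidecar metadata format that Bedrock Knowledge Bases read for filtering; the bucket name, attribute names and helper are illustrative, not TP ICAP’s actual pipeline.

```python
# Illustrative ingestion sketch: one file per meeting plus a .metadata.json sidecar so the
# Knowledge Base can filter on fields such as division, region or visiting city.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "clientiq-meeting-notes"  # assumed bucket name

def ingest_meeting(meeting_id: str, note_csv: str, attrs: dict) -> None:
    key = f"meetings/{meeting_id}.csv"
    # One document per meeting keeps each chunk self-contained and easy to cite.
    s3.put_object(Bucket=BUCKET, Key=key, Body=note_csv.encode("utf-8"))
    # Sidecar metadata file in the format Bedrock Knowledge Bases use for filterable attributes.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"{key}.metadata.json",
        Body=json.dumps({"metadataAttributes": attrs}).encode("utf-8"),
    )

# ingest_meeting("00Q123", csv_text, {"Visiting_City_C": "Chicago", "division": "EMEA"})
```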

Why Amazon Bedrock made the build faster and more flexible

TP ICAP could have used a CRM vendor’s built-in assistant but chose Amazon Bedrock for flexibility, model choice and managed services. Bedrock exposes multiple foundation models through one API, so the team could test Anthropic, Mistral and Amazon models and pick the best tool for each task. They settled on Claude 3.5 Sonnet for classification and Nova Pro for text-to-SQL, balancing accuracy, latency and cost.
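A small sketch of why that single API matters in practice: the calling code stays the same and only the model ID changes per task. The model IDs below are indicative and should be checked against what is enabled in your account and Region.

```python
# One Converse call, any Bedrock model: swapping Claude for Nova Pro is a one-line change.
import boto3

bedrock = boto3.client("bedrock-runtime")

def ask(model_id: str, prompt: str) -> str:
    resp = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

# Classification with Claude, text-to-SQL with Nova Pro, same calling code:
# ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Classify this question: ...")
# ask("amazon.nova-pro-v1:0", "Write SQL for: ...")
```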

Because Bedrock is fully managed, the Innovation Lab avoided heavy infra work and moved from prototype to production in weeks. That’s a key takeaway: managed model services can compress delivery time for enterprise-grade AI.

How security and permissions remain central to usable enterprise AI

ClientIQ honours Salesforce’s permission model by mapping Okta group claims to metadata filters. When a user asks a question, their session carries division and region claims; queries are automatically constrained so only authorised documents are returned. In practice this means a user limited to EMEA never sees AMER notes, and admins can manage groups through an internal UI tied to Okta APIs.
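Conceptually, the claims-to-filter mapping can be sketched as below; the claim names and metadata keys are assumptions, and the real mapping is managed through TP ICAP’s internal admin UI and Okta APIs.

```python
# Minimal sketch: turn Okta group claims into a Bedrock Knowledge Base metadata filter
# so retrieval is scoped to what the user may see. Claim and key names are illustrative.
def permission_filter(okta_claims: dict) -> dict:
    """Build a Knowledge Base filter from the user's division/region claims."""
    conditions = []
    if "division" in okta_claims:
        conditions.append({"equals": {"key": "division", "value": okta_claims["division"]}})
    if "region" in okta_claims:
        conditions.append({"equals": {"key": "region", "value": okta_claims["region"]}})
    # andAll needs at least two conditions; fall back to a single condition otherwise.
    if len(conditions) >= 2:
        return {"andAll": conditions}
    return conditions[0] if conditions else {}

# A session carrying {"division": "Broking", "region": "EMEA"} only ever retrieves
# documents tagged with that division and region.
```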

This approach keeps governance tight without making the user experience clunky: answers still arrive naturally; they’re just scoped to what the user is allowed to see.

How they proved the system works: automated evaluations baked into CI/CD

TP ICAP built a measurement-led approach using Amazon Bedrock Evaluations. They created a 100-item ground truth set reflecting real questions, ran RAG evaluations to test different chunking, embedding models and FM choices, and used Bedrock’s evaluation reports to optimise retrieval precision and generation accuracy. Best bit: those tests run automatically in their CI/CD pipeline so every release is checked for regressions in quality.
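TP ICAP’s gate runs on Amazon Bedrock Evaluations, but the underlying idea can be sketched as a simple CI regression test that replays a ground-truth set and fails the build if retrieval quality drops. The file format, threshold and the retrieve_meetings helper (from the hybrid-search sketch above) are assumptions, not their actual harness.

```python
# Simplified stand-in for an automated retrieval quality gate run in CI.
import json

GROUND_TRUTH = "ground_truth.jsonl"  # one {"question", "city", "expected_source"} object per line
THRESHOLD = 0.9

def test_retrieval_regression():
    with open(GROUND_TRUTH) as f:
        cases = [json.loads(line) for line in f]
    hits = 0
    for case in cases:
        results = retrieve_meetings(case["question"], city=case["city"])  # assumed helper
        sources = [r["location"]["s3Location"]["uri"] for r in results]
        hits += any(case["expected_source"] in s for s in sources)
    assert hits / len(cases) >= THRESHOLD, "Retrieval precision regressed below threshold"
```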

This metric-driven loop isn’t glamorous, but it’s what makes the assistant reliable day-to-day and supports confident product iteration at scale.

Practical tips if you want to copy their playbook

If you’re planning a similar project, start with clear user stories and a ground-truth test set. Chunk documents sensibly (one meeting per file worked for TP ICAP) and add metadata that reflects real filtering needs (region, division, date, author). Use hybrid search to reduce ambiguity and pick models per task to balance cost and latency. Finally, automate evaluation in CI/CD so quality stays high as you iterate.

And remember: traceability matters. Include source links in responses so consumers of the AI output can validate details quickly.

Ready to make your CRM a searchable knowledge asset? Check Amazon Bedrock’s Knowledge Bases and Evaluations documentation and compare models to see which setup matches your data, privacy and cost needs.