Contents
Last updated: 24th Feb
Pigment is an AI-forward organization. That comes through in our product strategy, but we also apply that ethos to the way our internal teams operate.
Every team at Pigment is achieving more than they were last year, thanks to AI. It’s taken us plenty of experimentation - with much more to come - to get where we are today.
With that in mind, we’re setting up a library of internal AI use cases, with details on how we built them and what we learned, to inspire others.
Salesforce in your pocket
Teo Leventhal, Growth Analyst


What is it?
In the Growth team, we've built a Slack bot that lets GTM team members query Salesforce in natural language and receive accurate account information directly in Slack.
This is particularly handy for sales teams and executives who are often in the field and don’t have access to their computers or might not have the Salesforce app installed on their phones. The Slack plugin facilitates routine Q&As but can retrieve any type of account information stored on Salesforce, such as:
- Who owns X account?
- What’s the status of Y opportunity?
- Has contact Z been engaged?
- Has N customer signed a marketing agreement?
To make it work smoothly, the bot normalizes company names, selects the most relevant Salesforce account, retrieves the requested data, and responds in a clear, readable format, while ensuring the user is authorized to access Salesforce data.
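The normalize-then-rank step can be sketched roughly as follows. This is a minimal, self-contained Python illustration using fuzzy string matching; the account records, field names, suffix list, and scoring threshold are all illustrative assumptions, not the bot's actual implementation (which runs in n8n against the Salesforce API):

```python
from difflib import SequenceMatcher

# Illustrative in-memory stand-in for Salesforce account records.
ACCOUNTS = [
    {"name": "Acme Corporation", "owner": "J. Doe", "status": "Active"},
    {"name": "Acme Labs", "owner": "A. Smith", "status": "Prospect"},
    {"name": "Globex Inc.", "owner": "M. Lee", "status": "Active"},
]

def normalize(name: str) -> str:
    """Lowercase and strip common suffixes so 'Acme Corp.' matches 'Acme Corporation'."""
    name = name.lower().strip()
    for suffix in (" corporation", " corp.", " corp", " inc.", " inc"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip()

def best_match(query: str, accounts=ACCOUNTS, threshold=0.6):
    """Score every account against the query and return the best one,
    or None when nothing clears the similarity threshold."""
    scored = [
        (SequenceMatcher(None, normalize(query), normalize(a["name"])).ratio(), a)
        for a in accounts
    ]
    score, account = max(scored, key=lambda pair: pair[0])
    return account if score >= threshold else None

print(best_match("acme corp"))
```

Returning `None` below a threshold, rather than the least-bad match, is what keeps fuzzy inputs from silently resolving to the wrong account.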
Benefits
The main benefits are easy access to information, reduced context switching, fewer internal interruptions, and a consistent use of Salesforce as a single source of truth.
By handling fuzzy inputs and ranking accounts by relevance, it also reduces errors caused by duplicate records or inconsistent naming.
Before this automation, users relied on manual Salesforce searches or Slack back-and-forth with different teams - from Customer Success to Sales Ops and AEs - to get basic account information.
Overall, we think it's saving 3-5 minutes of work every time the bot is used to retrieve information. For most users, this happens multiple times a day. But beyond that, avoiding context switching and being able to access information at the moment it's needed is invaluable.
Learnings and limitations
While effective, the bot currently has limitations around response time, supported fields, and ambiguity handling, and it returns only a single best-match account.
Future development will focus on faster responses, broader field support, richer follow-up interactions, and more proactive account insights.
Tools used: Slack, n8n, ChatGPT, Salesforce, Google Sheets (for monitoring and analysis)
AI for engineering
Virginie Jugie, Engineering Program Manager Lead

Over the past year, Pigment has deployed AI-powered development tools across the entire R&D organization. This includes AI coding assistants, specialized tools for incident management (incident.io), technical documentation (using the OpenAI API), and code reviews (Auggie, Bugbot, Copilot, Claude Code).
Benefits
The project has been a clear success so far:
- Engineers report time savings ranging from 2+ hours a day to several days per week
- Accelerated development timelines for feature development and refactoring (code maintenance)
- Knowledge democratization: using AI to explore our codebase (in the long term, this lets us ramp up new joiners faster on complex codebases)
Learnings and limitations
AI development tools represent a significant departure from usual ways of working. To ensure engineers are able to use them properly, we're implementing tailored training. For each tool, we run demos and training on real-life use cases, while devs actively communicate with one another and document good practices.
Because we're experimenting with different tools (each one solves a different problem, and the landscape evolves quickly, so we don't want to close any doors), some tool fragmentation and license-management overhead is inevitable. In addition, it can be difficult to quantify productivity gains beyond self-reported data.
Our planned next steps are:
- Create specific playbooks and guidelines for different engineering tasks and roles
- Finalize our AI usage monitoring, after which we will consolidate and optimize our license spending
- Codify a quantitative benchmarking framework through which to evaluate AI development tools
We know enough to say that AI-driven development is delivering real value. The next step is to establish clear frameworks and guidelines to measure its impact and continuously improve how we use it.
Tools used: Cursor, GitHub Copilot, Augment, Claude Code, OpenAI
Sales scorecards
Guy Solomon, Revenue Enablement Specialist

Within Gong, we’ve configured scorecards to run automatically across calls made by Pigment reps.
Benefits
We can now inform managers where they should focus their coaching efforts based on the conversation transcripts and trends that surface over customer interactions.
For example, we can see how well BDRs are qualifying their sales opportunities before handoff, or how well AEs are sticking to best practice when running meetings.
Top-scoring calls automatically appear in the relevant Slack channels to celebrate the wins, as well as to benchmark 'what good looks like' for onboarding purposes.
Learnings and limitations
Over time, we will see where the execution gaps are across all customer-facing interactions depending on which scorecards we choose to build in Gong.
Tools used: Gong, Slack
Meeting tracker automation
Ed Gromann, Global Head of Analyst Relations

In my role as Analyst Relations manager, I need to track conversations with analysts in a Google Sheet, so that I know who attended, what was said, and any follow-ups required.
Using Prompt Cowboy, I wrote a prompt for ChatGPT, which in turn wrote me a Google Apps Script. The script is scheduled to run on the 1st of every month, which is saving me hours of time.
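The core of that script is simple: filter a month's worth of calendar events and turn each into a spreadsheet row. The actual automation is a Google Apps Script reading Google Calendar; the sketch below shows the same filter-and-build logic in Python, with hypothetical event fields and sample data standing in for the calendar:

```python
from datetime import date

# Illustrative stand-in for calendar events; the real script reads Google Calendar.
EVENTS = [
    {"date": date(2025, 1, 14), "title": "Analyst briefing", "attendees": ["E. Gromann", "Analyst A"]},
    {"date": date(2025, 1, 28), "title": "Analyst check-in", "attendees": ["E. Gromann", "Analyst B"]},
    {"date": date(2024, 12, 3), "title": "Old meeting", "attendees": ["E. Gromann"]},
]

def rows_for_month(events, year: int, month: int):
    """Build one spreadsheet row per meeting held in the given month."""
    return [
        [e["date"].isoformat(), e["title"], ", ".join(e["attendees"])]
        for e in events
        if e["date"].year == year and e["date"].month == month
    ]

for row in rows_for_month(EVENTS, 2025, 1):
    print(row)
```

In Apps Script, the equivalent runs on a time-driven trigger on the 1st of the month and appends the rows to the tracking sheet.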
Benefits
Doing this used to involve quite a lot of repetitive manual work each month - adding the details of each meeting from my calendar into the spreadsheet.
Learnings and limitations
Right now it's pulling some information that isn't actually required. I'm going to refine the script to be a little more selective.
Tools used: Prompt Cowboy, ChatGPT, Google Apps Script, Google Calendar
Customer Idea Agent
Lea Benyamin, Data Automation Engineer

The Customer Idea Agent (CIA) harnesses a pipeline of custom LLM agents to automatically extract, centralize, and quantify customer and prospect feature requests from across our feedback ecosystem:
- Pigment Community ideas
- Slack messages
- Freshdesk tickets
- Gong call transcripts
That means we're able to leverage data-backed signals for product development prioritization at scale.
Benefits
Our LLM pipeline extracts and clusters ideas and feature requests, then enriches them with data from our CRM. This allows us to dissect product feedback by customer use case, location, industry, and segment. We can track signals of interest that help Product Management and GTM teams identify features with the highest business impact.
Critically, we map these ideas to active sales opportunities, informing our roadmap with quantified assessments of revenue impact and enabling us to prioritize features that align with both customer needs and business outcomes.
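To make the clustering idea concrete, here is a toy sketch in Python. The real pipeline uses LLM agents and embeddings on Vertex AI; this version substitutes plain string similarity and a greedy grouping rule, and the example ideas and threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

def similar(a: str, b: str) -> float:
    """Crude similarity score standing in for embedding-based comparison."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_ideas(ideas, threshold=0.6):
    """Greedy clustering: attach each idea to the first cluster whose
    representative (first member) is similar enough, else start a new cluster."""
    clusters = []
    for idea in ideas:
        for cluster in clusters:
            if similar(idea, cluster[0]) >= threshold:
                cluster.append(idea)
                break
        else:
            clusters.append([idea])
    return clusters

ideas = [
    "Export dashboards to PDF",
    "Ability to export dashboard to PDF",
    "Dark mode for the app",
    "PDF export of dashboards",
]
for cluster in cluster_ideas(ideas):
    print(cluster)
```

Once duplicate requests are merged into one cluster, each cluster can be joined to CRM records, which is what makes the per-segment and per-opportunity counts meaningful.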
Learnings and limitations
Benefits of Vertex AI Integration with BigQuery
One of the most significant technical advantages was leveraging Vertex AI directly within our BigQuery data workflows. This integration eliminated complex data movement between systems and allowed us to apply LLM capabilities where our data already lives. By keeping everything within the BigQuery ecosystem, we achieved faster processing, reduced latency, and simplified our architecture.
This native integration also provided seamless scalability and maintained secure data governance within our existing warehouse infrastructure.
The Critical Role of Data Modeling and Quality
Even the most sophisticated LLM is only as good as the data it processes. Leveraging our existing data modeling pipeline proved essential to achieving meaningful results. Our pipeline standardizes feedback from disparate sources, each requiring careful preprocessing to create consistent, high-quality inputs.
Ongoing Challenges
Evaluating LLM output quality remains challenging. Despite using an LLM-as-judge agent in our pipeline to monitor and filter obvious inaccuracies, human review is essential for quality assurance. We've established quality audits where product managers review clustered ideas to validate accuracy and catch edge cases.
LLMs can occasionally misinterpret context or merge unrelated ideas. While our clustering algorithms perform well overall, they require ongoing tuning and monitoring. We've built feedback mechanisms that let users flag incorrect categorizations, which we use to continuously refine our prompts and models.
Tools used: Vertex AI, BigQuery, dbt Core, Gemini
