Enhancing SingleStore’s Credit-Based Billing System through Better Usability
Project Context
SingleStore, a managed database platform, was launching a new credit-based billing system for cluster creation and upgrades. The challenge: ensure new users could easily understand how credits worked, confidently create clusters, and seamlessly upgrade their accounts. Early prototypes raised questions about clarity, terminology, and billing flows. My research aimed to uncover usability issues, clarify user mental models, and inform product improvements that would reduce support calls, increase task completion rates, and improve user satisfaction.
My Role
As Lead UX Researcher, I owned the end-to-end research process: designing the study, recruiting participants, executing usability tests, analyzing data, and synthesizing actionable insights. I also mentored junior researchers, created reusable study templates, and instituted ethical guardrails to ensure compliance and data security.
Research Methodology
I chose remote, unmoderated usability testing via Maze.co for its efficiency and ability to capture authentic user interactions. This method allowed participants to engage with a Figma prototype in their own environments, using a think-aloud protocol. Open-ended surveys captured qualitative feedback, while task completion rates and error tracking provided quantitative metrics.
Why this approach?
Remote testing broadened the participant pool and reduced geographic and scheduling bias.
Unmoderated sessions revealed natural user behaviors.
Maze.co enabled rapid iteration and secure data handling.
Research Process
Recruitment & Panel Management:
I recruited five participants across industries (Financial Services, Aviation, IT, Communications) and seniority levels (from Data Analyst to Principal Engineer). Panel records were securely maintained, and incentives were provided as digital gift cards, ensuring GDPR compliance and ethical transparency.
Execution:
Participants were tasked with:
Signing up for a free trial using credits.
Upgrading their account when credits ran low.
I monitored completion rates, misclicks, and feedback, ensuring all data was anonymized and securely stored.
Data Analysis:
I triangulated quantitative metrics (task success, error rates) with qualitative insights (user quotes, open-ended feedback). I also reviewed the study with peers before sharing results, maintaining methodological rigor.
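The quantitative half of this triangulation reduces to a simple completion-rate calculation over per-session records. The sketch below illustrates the idea; the field names and session data are hypothetical, not actual study records:

```python
# Hypothetical session records — illustrative only, not real study data.
sessions = [
    {"participant": "P1", "completed": True, "expected_path": True,  "misclicks": 0},
    {"participant": "P2", "completed": True, "expected_path": True,  "misclicks": 2},
    {"participant": "P3", "completed": True, "expected_path": False, "misclicks": 5},
    {"participant": "P4", "completed": True, "expected_path": True,  "misclicks": 1},
    {"participant": "P5", "completed": True, "expected_path": True,  "misclicks": 3},
]

def completion_rate(sessions, via_expected_path=False):
    """Share of sessions completed, optionally counting only those
    that followed the expected (designed) path."""
    hits = [
        s for s in sessions
        if s["completed"] and (s["expected_path"] or not via_expected_path)
    ]
    return len(hits) / len(sessions)

print(f"Overall completion: {completion_rate(sessions):.0%}")
print(f"Via expected path:  {completion_rate(sessions, via_expected_path=True):.0%}")
```

With four of five hypothetical sessions on the expected path, the second figure comes out to 80%, matching the headline metric reported below.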
Key Findings
Successes:
The cluster creation flow was described by participants as “very clear” and easy to complete.
Users understood how credits were applied to compute costs.
Pain Points:
Confusion over whether credits applied to storage costs.
The term “Upgrade” was misleading; users expected plan changes, not credit purchases.
Users wanted clearer displays of remaining credits and the ability to “pause” service when credits ran out.
Metrics:
80% task completion rate via expected paths.
Flagged billing-flow ambiguities whose resolution was expected to reduce support calls.
Identified opportunities to decrease bounce rates and improve NPS by refining terminology and UI clarity.
Impact
My research directly informed product and design decisions:
The team replaced “Upgrade” with “Buy More Credits,” aligning with user expectations.
UI changes now display remaining credits more prominently and clarify credit application to compute vs. storage.
Recommendations led to a more intuitive billing upgrade screen, reducing confusion and anticipated support tickets.
Deliverables
Usability Testing Report: Detailed findings, user quotes, and actionable recommendations.
Reusable Study Templates: For future credit-based feature research.
Panel Management Toolkit: Ensuring secure, ethical recruitment and data handling.
Onboarding Materials: For new researchers joining the team.
Reflections
This project reinforced my belief in the power of user-centered research. By listening to users and iterating quickly, we delivered a product that was not only functional but delightful. I learned the importance of precise terminology and the value of showing empathy for users’ mental models. Next time, I’d expand the panel for broader coverage and pilot test terminology changes earlier.
Tooling & Ethical Considerations
Tech Stack: Maze.co, Figma, secure panel management tools.
Compliance: GDPR, anonymized data, transparent incentives.
Ethical Guardrails: Informed consent, data security, peer review of findings.
Results & Success Metrics
Increased task completion rate (80% via expected paths)
Reduced anticipated support calls
Improved clarity and user satisfaction
Direct impact on product roadmap and UI copy
Enhanced ROI through reduced friction and improved conversion