TrueCheck
Company · Jan 28, 2026 · 5 min read

Building a Culture of Developer Experience

Dog-Fooding Your Own APIs

The single most impactful practice we adopted at TrueCheck was requiring every internal tool and service to use our public API. Our admin dashboard, our monitoring systems, and even our internal testing infrastructure all go through the same endpoints that our customers use. This means that every rough edge, confusing error message, or performance issue is felt internally before it ever reaches a customer.

Dog-fooding creates a natural feedback loop that no amount of user research can replicate. When the engineer building the billing integration has to parse the same error responses that a customer would see, the quality of those error responses improves rapidly. Pain points that might languish in a backlog for months get fixed in days because someone on the team is personally frustrated by them.

We formalized this practice with a rule: any API change that breaks the dog-fooding experience is treated with the same severity as a customer-facing production incident. This ensures that internal convenience never comes at the cost of public API quality.

SDK Design Principles

Our SDKs are designed around three principles: zero configuration for the common case, progressive disclosure for advanced use cases, and idiomatic code for each language. A developer should be able to send their first verification with three lines of code: initialize the client, call verify, and check the result.
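A minimal sketch of what that three-line path might look like. The `truecheck`-style `Client` class, the `verify()` signature, and the `status` field are illustrative assumptions, not the real SDK; a local stub stands in for the API call.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    status: str  # e.g. "verified" or "failed"

class Client:
    """Hypothetical SDK client: an API key is the only required configuration."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def verify(self, email: str) -> VerificationResult:
        # The real SDK would call the public API here; stubbed for illustration.
        return VerificationResult(status="verified")

# The common case in three lines: initialize, verify, check the result.
client = Client(api_key="tc_test_key")
result = client.verify("dev@example.com")
verified = result.status == "verified"
```

The point of the shape, not the names: the zero-configuration promise means nothing beyond the API key is needed before the first call succeeds.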

Progressive disclosure means that advanced features like custom expiry times, callback URLs, and risk scoring thresholds are available but never required. Default values are chosen to be correct for 90 percent of use cases, so developers only need to configure what they explicitly want to change.
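Progressive disclosure maps naturally onto optional keyword arguments with sensible defaults. The parameter names and default values below are illustrative assumptions; the stub echoes the effective configuration rather than calling any API.

```python
from typing import Optional

def verify(
    email: str,
    expiry_seconds: int = 300,           # assumed default expiry
    callback_url: Optional[str] = None,  # no webhook unless requested
    risk_threshold: float = 0.8,         # assumed default risk cutoff
) -> dict:
    # Stub: return the effective configuration for illustration.
    return {
        "email": email,
        "expiry_seconds": expiry_seconds,
        "callback_url": callback_url,
        "risk_threshold": risk_threshold,
    }

# Common case: only the required argument, defaults do the rest.
default_call = verify("dev@example.com")

# Advanced case: override exactly the one setting you care about.
custom_call = verify("dev@example.com", expiry_seconds=60)
```

The design choice this encodes: advanced options are discoverable in the signature, but a developer who never reads past the first parameter still gets correct behavior.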

Writing idiomatic code means that our Python SDK feels like Python, our Go SDK feels like Go, and our TypeScript SDK feels like TypeScript. We do not generate SDKs from a single template; each one is hand-crafted by developers who are experts in that language. This attention to language conventions dramatically reduces the cognitive load of integration.

Documentation as Product

We treat documentation with the same rigor as product code. It lives in the same monorepo, goes through the same review process, and has its own test suite that verifies every code example actually compiles and runs. Stale documentation is a bug, and we track documentation accuracy as a team metric.

Every API endpoint has a complete reference page with request and response examples, error codes with explanations, and interactive try-it-now panels that let developers make real API calls from the browser. Guides are structured as progressive tutorials that build on each other, taking developers from first API call to production deployment in a logical sequence.

We also invest heavily in content that helps developers understand the why behind our design decisions. Understanding why rate limits exist, why certain fields are required, and why error codes are structured the way they are helps developers build more robust integrations and reduces support burden.

Developer Support Philosophy

Our support team is staffed entirely by engineers who have shipped production code. When a developer reaches out with an integration question, they get a response from someone who understands their codebase, their language, and their constraints. We do not use tiered support; every conversation starts with someone who can actually solve the problem.

We maintain a public Discord community where developers help each other and where our engineers actively participate. Questions asked in Discord often reveal documentation gaps or API usability issues that we track and address. The community has become one of our most valuable sources of product feedback.

Response time matters, but resolution quality matters more. We measure support success not by time-to-first-response but by time-to-resolution and customer satisfaction scores. A thoughtful response that arrives in 30 minutes and solves the problem is worth more than an automated acknowledgment in 30 seconds followed by days of back-and-forth.

Feedback Loops and Measuring DX Success

We track developer experience through a combination of quantitative metrics and qualitative feedback. Time-to-first-successful-API-call measures how quickly new developers can get started. SDK adoption rate tracks which languages and frameworks our customers prefer. Error rate by endpoint reveals which parts of the API are most confusing.

Qualitative feedback comes from support conversations, Discord discussions, developer interviews, and a quarterly NPS survey targeted at technical users. We synthesize this feedback into a monthly DX report that the entire company reviews, ensuring that developer experience stays visible at every level of the organization.

The most important metric, though, is one we cannot easily quantify: do developers enjoy working with our API? We look for signals like unsolicited positive feedback, developers recommending us to peers, and blog posts or tweets praising the integration experience. These organic signals tell us more about the true state of our DX than any dashboard metric.
