/Case Study

IDEXX - TeleMedicine

What is VetMedStat?

VetMedStat is IDEXX’s telemedicine platform that connects veterinary clinics with board-certified specialists for remote diagnostic consultations. Through VetMedStat, clinics can upload radiographs, lab results, and patient histories, then receive expert interpretations and treatment recommendations often within hours.

The TeleMedicine redesign introduced workflow intelligence into diagnostic submission by embedding image quality scoring and actionable feedback directly into clinician workflows. The goal was to reduce rework, improve turnaround time, and help clinics prepare higher-quality cases with less effort.

The platform streamlines the entire diagnostic workflow for veterinary practices by combining case submission, image management, and specialist reporting into one unified interface. With its 24.5 release, VetMedStat introduced powerful UX-driven features like Image Scoring, Smart Image Series Management, and Guided Radiology Histories that enhance image quality feedback, reduce submission errors, and improve turnaround times across the telemedicine network.

00/01

Outcome

  • Reduced diagnostic rework and delays by adding image quality scoring and actionable feedback

  • Improved specialist efficiency and turnaround time through structured, guided radiology histories

  • 🏆 Increased clinic confidence and cost transparency with smarter image series management

  • 🕶️ Shifted VetMedStat from transactional to instructional, helping clinics improve imaging quality over time

After a phased rollout, monitored usage, and a round of both quantitative and qualitative feedback, we had an early snapshot of the redesign's impact.

Metric | Before 24.5 | After 24.5 (6 months) | Δ
Cases flagged for resubmission | 18% | 10% | ↓ 44%
Clarification requests per case | 1.9 | 1.1 | ↓ 42%
Median turnaround time | 14.2 hrs | 12.0 hrs | ↓ 15%
Clinics citing Image Scoring as “useful feedback” | 28% | 65% | +37 pts

00/02

Discovering the Problem

In my role as Software Interaction Designer II, user interviews and platform metrics made a few things clear:

  • A meaningful proportion of submitted cases were technically suboptimal, e.g., under- or over-exposure, mis-framed regions, missing projections.

  • Specialists frequently flagged “insufficient image quality” or asked clinics to resubmit parts, adding delays.

  • Clinics were uncertain which series to include or drop; sometimes they over-submitted, incurring extra region fees, or under-submitted, missing key regions.

  • The freeform history text field often lacked structure; critical diagnostic details (vitals, prior diagnostics) were inconsistently supplied, resulting in back-and-forth clarifications.

These friction points were hurting speed, satisfaction, and diagnostic confidence.

Setting Goals & Success Metrics

We defined success around several key user- or business-level outcomes:

  1. Image Quality Feedback Loop: Ensure clinics can learn and improve over time via feedback (Image Scoring).

  2. Submission Efficiency: Reduce rates of “incomplete / resubmit” cases, lower extra region submissions, and reduce time wasted by specialists clarifying histories.

  3. Turnaround & Throughput: Every delay or rework is costly — we aimed to reduce cycle time for case completion.

  4. User Confidence & Satisfaction: Clinics feel the system is helping them get better and faster, not just policing them.

We set target metrics like:

  • Decrease “resubmission due to image issues” from X% → Y%

  • Decrease average specialist clarification requests per case

  • Reduce turnaround time (from submission → final report) by a meaningful percentage

  • Increase clinic user satisfaction / NPS for submissions

00/03

Designing the Features

Image Scoring

We defined a 4-tier scoring taxonomy (Exceptional / Good / Acceptable / Low) with clear, descriptive guidelines; a sketch of how a taxonomy like this could map to clinic-facing feedback follows the list below.

  • In the report view, the score appears in a “Notes to Clinic” section, accompanied by tailored feedback such as: “Your exposure is just slightly under target; consider adjusting technique next time.”

  • We built tooltips and help content so clinics can click to see what “Acceptable” versus “Low” quality means, with examples.

  • Internally, we designed specialist workflows so they can assign and annotate the quality score without undue extra burden.
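To make the taxonomy concrete, here is a minimal TypeScript sketch of how a four-tier score could drive the “Notes to Clinic” message; the type names, field names, and default feedback strings are illustrative assumptions, not the production VetMedStat schema.

```typescript
// Sketch of the four-tier Image Scoring taxonomy and the clinic-facing
// feedback it drives. Names and default strings are illustrative assumptions.

type ImageQualityTier = "Exceptional" | "Good" | "Acceptable" | "Low";

interface ImageScore {
  tier: ImageQualityTier;
  // Case-specific guidance written by the specialist, e.g.
  // "Your exposure is just slightly under target; consider adjusting technique next time."
  feedback: string;
}

// Hypothetical fallback guidance per tier, used when the specialist leaves
// the feedback field empty.
const defaultFeedback: Record<ImageQualityTier, string> = {
  Exceptional: "Image quality exceeded diagnostic requirements; no changes needed.",
  Good: "Image quality met diagnostic requirements with minor room for improvement.",
  Acceptable: "Images were interpretable, but small technique adjustments would improve confidence.",
  Low: "Image quality limited interpretation; please review the technique guidance before your next submission.",
};

// Builds the line that appears in the "Notes to Clinic" section of the report.
function buildNotesToClinic(score: ImageScore): string {
  const message = score.feedback.trim() || defaultFeedback[score.tier];
  return `Image quality: ${score.tier}. ${message}`;
}
```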

Smart Image Series Management

At submission time, images are auto-grouped by anatomical region (abdomen, thorax, spine, etc.), so clinics see clearly which regions are included; a sketch of the grouping and warning logic follows the list below.

  • Clinics can “drop” regions that don’t need review — that helps avoid extra region fees.

  • The UI warns or highlights if a region is either under-sampled (insufficient projections) or redundant.

  • We added clear messaging about why extra regions cost more (reflecting specialist effort) — this transparency reduces friction and surprise.
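As a rough illustration of the checks described above, the sketch below groups submitted images by anatomical region and flags regions that look under-sampled or padded with redundant views. The region names, projection thresholds, and warning copy are assumptions made for the example, not the shipped logic.

```typescript
// Sketch: group images by region, then flag regions that are under-sampled
// or likely redundant. Thresholds and wording are illustrative assumptions.

type Region = "abdomen" | "thorax" | "spine" | "extremity";

interface SubmittedImage {
  id: string;
  region: Region;
  projection: string; // e.g. "lateral", "ventrodorsal"
}

interface RegionSummary {
  region: Region;
  imageCount: number;
  warnings: string[];
}

const MIN_PROJECTIONS = 2; // assumed minimum views per region
const MAX_PROJECTIONS = 5; // assumed threshold before flagging redundancy

function summarizeRegions(images: SubmittedImage[]): RegionSummary[] {
  // Group images by anatomical region.
  const byRegion = new Map<Region, SubmittedImage[]>();
  for (const img of images) {
    const group = byRegion.get(img.region) ?? [];
    group.push(img);
    byRegion.set(img.region, group);
  }

  // Produce a per-region summary with any warnings the clinic should see.
  return [...byRegion.entries()].map(([region, group]) => {
    const warnings: string[] = [];
    if (group.length < MIN_PROJECTIONS) {
      warnings.push(
        `Only ${group.length} projection(s) for ${region}; specialists usually need at least ${MIN_PROJECTIONS}.`
      );
    }
    if (group.length > MAX_PROJECTIONS) {
      warnings.push(
        `${group.length} images for ${region}; consider dropping duplicates to avoid extra region fees.`
      );
    }
    return { region, imageCount: group.length, warnings };
  });
}
```

Summaries like these could feed the region list clinics see at submission, alongside the fee messaging, so dropping or keeping a flagged region becomes an informed decision rather than a guess.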

Guided Radiology Histories

Rather than a single free-type block, we structured the history section into fields: vitals, diagnostics, treatment, what you specifically want the specialist to comment on, and so on; a sketch of this kind of form definition follows the list below.

  • Some fields are required, some optional; dynamic branching prompts follow-up based on answers.

  • A “private notes to specialist” field lets the clinic add context not for the final report but for internal clarity.

  • Inline help and examples guide clinics to provide the right level of detail, reducing ambiguity.
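Below is a minimal sketch of how such a guided history could be defined, with required fields, optional fields, and a simple follow-up branch; the field list and the branching rule are assumptions chosen to mirror the description above, not the actual VetMedStat form model.

```typescript
// Sketch of a guided-history form definition with required/optional fields
// and dynamic branching. Field names and rules are illustrative assumptions.

interface HistoryField {
  id: string;
  label: string;
  required: boolean;
  // Follow-up prompts shown only once this field has an answer.
  followUps?: HistoryField[];
}

const radiologyHistory: HistoryField[] = [
  { id: "vitals", label: "Current vitals", required: true },
  {
    id: "priorDiagnostics",
    label: "Prior diagnostics (lab work, imaging)",
    required: false,
    followUps: [
      { id: "priorDiagnosticsFindings", label: "Key findings from prior diagnostics", required: true },
    ],
  },
  { id: "treatment", label: "Current treatment", required: false },
  { id: "question", label: "What would you like the specialist to comment on?", required: true },
  { id: "privateNotes", label: "Private notes to specialist (not included in the report)", required: false },
];

// Expands the form: a follow-up field becomes visible once its parent is answered.
function visibleFields(fields: HistoryField[], answers: Record<string, string>): HistoryField[] {
  return fields.flatMap((field) => {
    const answered = Boolean(answers[field.id]?.trim());
    const children = answered && field.followUps ? visibleFields(field.followUps, answers) : [];
    return [field, ...children];
  });
}

// Required fields that are still empty block submission.
function missingRequired(fields: HistoryField[], answers: Record<string, string>): string[] {
  return visibleFields(fields, answers)
    .filter((f) => f.required && !answers[f.id]?.trim())
    .map((f) => f.label);
}
```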

00/04

The Hardware Problem Nobody Was Talking About

While we were redesigning the submission workflow in VetMedStat, a parallel conversation was happening at the hardware level.

IDEXX's CardioPet ECG device — a physical unit that clips onto the patient and transmits cardiac data directly into the telemedicine pipeline — was growing in adoption across clinics. The subscription plans (Intro through Pro, ranging from 5 to 40 screens per month) meant more clinics were running routine preanesthetic ECG screens, sometimes for the first time.

That created a new class of problem we hadn't fully accounted for.

The device could produce a perfect signal. The software could receive it cleanly. And the case could still fail.

Not because of hardware error. Not because of a transmission fault. Because the human step in between — the clinic staff positioning the leads, framing the submission, contextualizing the reading — wasn't supported by the interface at all.

The CardioPet workflow assumed competence at the point of capture. The VetMedStat workflow assumed complete data at the point of submission. Neither assumption held at scale.

What this revealed about the real design surface

The CardioPet ECG use case made visible something that applied to the broader telemedicine platform:

The gap | What it looked like in practice
Capture ≠ quality | A 3-minute ECG workflow produced files that varied wildly in interpretability
Submission ≠ context | Cardiologists received traces without the clinical history that made them meaningful
Hardware adoption ≠ workflow integration | Clinics were adding the device faster than staff were trained to contextualize its output

This wasn't a training problem to solve with a PDF. It was a design problem to solve at the interface layer.

What we did about it

The Image Scoring system we built for radiology had a direct analog here: feedback that closes the loop between what the clinic submitted and what the specialist needed. The Guided Radiology Histories feature — structured fields instead of a free-text block — was equally applicable to cardiac submissions, where details like breed, weight, presenting signs, and current medications dramatically affect how a cardiologist interprets an arrhythmia trace.

The work we shipped in 24.5 wasn't designed specifically for ECG. But the design decisions we made were transferable to it: structured input over freeform, feedback over silence, transparency over ambiguity.

The insight the CardioPet context added was sharper: when a physical device is in the workflow, the software can't assume the human step happened correctly. The interface has to account for what the hardware can't check — context, intent, and clinical judgment at the moment of capture.

That's a different kind of UX problem. Not just: how do we design the screen? But: how do we design the handoff between a physical act and a digital workflow?

That's the problem I want to keep solving.

00/05

Reflection


Designing for telemedicine is less about user interface and more about shared understanding.

By embracing measured trade-offs (less automation, fewer rating levels, slightly longer forms), I arrived at a workflow that helped people think, learn, and trust the system.

The biggest UX win wasn’t visual polish; it was the alignment of intent between clinic and specialist. When both sides feel seen, heard, and guided, the entire diagnostic process improves.
