/Case Study

IDEXX - Telemedicine

What is VetMedStat?

VetMedStat is IDEXX’s telemedicine platform that connects veterinary clinics with board-certified specialists for remote diagnostic consultations. Through VetMedStat, clinics can upload radiographs, lab results, and patient histories, then receive expert interpretations and treatment recommendations often within hours.

The platform streamlines the entire diagnostic workflow for veterinary practices by combining case submission, image management, and specialist reporting into one unified interface. With its 24.5 release, VetMedStat introduced powerful UX-driven features like Image Scoring, Smart Image Series Management, and Guided Radiology Histories that enhance image quality feedback, reduce submission errors, and improve turnaround times across the telemedicine network.

Discovering the Problem

As Software Interaction Designer II, I knew from user interviews and platform metrics that:

  • A meaningful proportion of submitted cases were technically suboptimal: under- or over-exposure, mis-framed regions, missing projections.

  • Specialists frequently flagged “insufficient image quality” or asked clinics to resubmit parts, adding delays.

  • Clinics were uncertain which series to include or drop; sometimes they over-submitted, incurring extra region fees, or under-submitted, missing key regions.

  • The freeform history text field often lacked structure; critical diagnostic details (vitals, prior diagnostics) were inconsistently supplied, resulting in back-and-forth clarifications.

These friction points were hurting speed, satisfaction, and diagnostic confidence.

Setting Goals & Success Metrics

We defined success around several key user- or business-level outcomes:

  1. Image Quality Feedback Loop: Ensure clinics can learn and improve over time via feedback (Image Scoring).

  2. Submission Efficiency: Reduce rates of “incomplete / resubmit” cases, lower extra region submissions, and reduce time wasted by specialists clarifying histories.

  3. Turnaround & Throughput: Every delay or rework is costly — we aimed to reduce cycle time for case completion.

  4. User Confidence & Satisfaction: Clinics feel the system is helping them get better and faster, not just policing them.

We set target metrics like:

  • Decrease “resubmission due to image issues” from X% → Y%

  • Decrease average specialist clarification requests per case

  • Reduce turnaround time (from submission → final report) by a meaningful percentage

  • Increase clinic user satisfaction / NPS for submissions

Designing the Features

Image Scoring

We defined a 4-tier scoring taxonomy (Exceptional / Good / Acceptable / Low) with clear, descriptive guidelines; a small sketch of the taxonomy follows this list.

  • In the report view, the score appears in a “Notes to Clinic” section, accompanied by tailored feedback such as: “Your exposure is just slightly under target; consider adjusting technique next time.”

  • We built tooltips and help content so clinics can click to see what “Acceptable” versus “Low” quality means, with examples.

  • Internally, we designed the specialist-side workflow so scores can be assigned and annotated without undue extra burden.
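
To make the taxonomy concrete, here is a minimal TypeScript sketch of how a score tier might pair guideline text with tailored specialist feedback. All identifiers (ImageScore, SCORE_GUIDELINES, buildClinicNote) are hypothetical illustrations, not VetMedStat’s actual API.

```typescript
// Hypothetical sketch of the 4-tier image-quality taxonomy described above.
type ImageScore = "Exceptional" | "Good" | "Acceptable" | "Low";

interface ScoreGuideline {
  score: ImageScore;
  description: string;       // surfaced in tooltips / help
  requiresFeedback: boolean; // lower tiers always carry tailored feedback
}

const SCORE_GUIDELINES: ScoreGuideline[] = [
  { score: "Exceptional", description: "Ideal exposure, framing, and positioning.", requiresFeedback: false },
  { score: "Good", description: "Minor deviations that do not affect interpretation.", requiresFeedback: false },
  { score: "Acceptable", description: "Diagnostic, but technique could be improved.", requiresFeedback: true },
  { score: "Low", description: "Quality issues that limit diagnostic confidence.", requiresFeedback: true },
];

// Compose the "Notes to Clinic" entry: the score plus the specialist's tailored feedback.
function buildClinicNote(score: ImageScore, specialistFeedback?: string): string {
  const guideline = SCORE_GUIDELINES.find((g) => g.score === score)!;
  const note = [`Image quality: ${score}. ${guideline.description}`];
  if (guideline.requiresFeedback && specialistFeedback) {
    note.push(`Feedback: ${specialistFeedback}`);
  }
  return note.join("\n");
}

console.log(buildClinicNote(
  "Acceptable",
  "Your exposure is just slightly under target; consider adjusting technique next time.",
));
```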

Smart Image Series Management

At submission time, images are auto-grouped by anatomical region (abdomen, thorax, spine, etc.), so clinics see clearly which regions are included; a grouping sketch follows this list.

  • Clinics can “drop” regions that don’t need review — that helps avoid extra region fees.

  • The UI warns or highlights if a region is either under-sampled (insufficient projections) or redundant.

  • We added clear messaging about why extra regions cost more (reflecting specialist effort) — this transparency reduces friction and surprise.
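
As an illustration of the grouping and warning behavior, here is a minimal TypeScript sketch assuming a simple “minimum two distinct projections per region” rule; the region names, the rule, and all identifiers are assumptions rather than VetMedStat’s actual implementation.

```typescript
// Illustrative sketch only: region names, the minimum-projection rule, and all
// identifiers here are assumptions, not VetMedStat's actual implementation.
type Region = "abdomen" | "thorax" | "spine" | "pelvis" | "limb";

interface SubmittedImage {
  fileName: string;
  region: Region;      // in practice this might come from DICOM metadata
  projection: string;  // e.g. "lateral", "ventrodorsal"
}

interface RegionSeries {
  region: Region;
  images: SubmittedImage[];
  included: boolean;   // clinics can drop a region to avoid extra region fees
  warning?: string;    // surfaced in the UI when a region looks under-sampled
}

const MIN_PROJECTIONS = 2; // assumed rule: at least two distinct projections per region

function groupByRegion(images: SubmittedImage[]): RegionSeries[] {
  const byRegion = new Map<Region, SubmittedImage[]>();
  for (const img of images) {
    const list = byRegion.get(img.region) ?? [];
    list.push(img);
    byRegion.set(img.region, list);
  }
  return Array.from(byRegion.entries()).map(([region, imgs]) => {
    const projections = new Set(imgs.map((i) => i.projection));
    return {
      region,
      images: imgs,
      included: true, // default: included; the clinic can toggle this off
      warning:
        projections.size < MIN_PROJECTIONS
          ? `Only ${projections.size} projection(s) for ${region}; specialists typically need ${MIN_PROJECTIONS}.`
          : undefined,
    };
  });
}

// Example: thorax has two projections (no warning); abdomen has one (warned).
const series = groupByRegion([
  { fileName: "img1.dcm", region: "thorax", projection: "lateral" },
  { fileName: "img2.dcm", region: "thorax", projection: "ventrodorsal" },
  { fileName: "img3.dcm", region: "abdomen", projection: "lateral" },
]);
console.log(series);
```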

Guided Radiology Histories

Rather than a single free-text block, we structured the history section into fields: vitals, diagnostics, treatment, what you specifically want the specialist to comment on, and so on; a schema sketch follows this list.

  • Some fields are required, some optional; dynamic branching prompts follow-up questions based on earlier answers.

  • A “private notes to specialist” field lets the clinic add context not for the final report but for internal clarity.

  • Inline help and examples guide clinics to provide the right level of detail, reducing ambiguity.
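
To show how the required/optional fields and dynamic branching might fit together, here is a minimal TypeScript sketch of a form schema; the field names, the branching rule, and the missingRequired helper are hypothetical, chosen only to mirror the sections described above.

```typescript
// Hypothetical form schema: field ids, labels, and the branching rule are
// illustrative, not VetMedStat's actual fields.
interface HistoryField {
  id: string;
  label: string;
  required: boolean;
  // Show this field only when the predicate over earlier answers is true.
  showIf?: (answers: Record<string, string>) => boolean;
}

const HISTORY_FIELDS: HistoryField[] = [
  { id: "vitals", label: "Current vitals", required: true },
  { id: "priorDiagnostics", label: "Prior diagnostics performed?", required: true },
  {
    id: "diagnosticResults",
    label: "Summarize prior diagnostic results",
    required: true,
    showIf: (a) => a["priorDiagnostics"] === "yes", // dynamic branching
  },
  { id: "treatment", label: "Treatment to date", required: false },
  { id: "focusQuestion", label: "What should the specialist comment on?", required: true },
  { id: "privateNotes", label: "Private notes to specialist (kept out of the final report)", required: false },
];

// Return the labels of visible, required fields that are still empty.
function missingRequired(answers: Record<string, string>): string[] {
  return HISTORY_FIELDS
    .filter((f) => (f.showIf ? f.showIf(answers) : true))
    .filter((f) => f.required && !answers[f.id]?.trim())
    .map((f) => f.label);
}

// Answering "yes" to prior diagnostics reveals (and requires) the results field.
console.log(missingRequired({ priorDiagnostics: "yes" }));
// -> ["Current vitals", "Summarize prior diagnostic results", "What should the specialist comment on?"]
```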

Launch & Outcomes

After a phased rollout, we monitored usage and collected both quantitative and qualitative feedback, giving us an early snapshot of the redesign’s impact.

| Metric | Before 24.5 | After 24.5 (6 months) | Δ |
|---|---|---|---|
| Cases flagged for resubmission | 18% | 10% | ↓ 44% |
| Clarification requests per case | 1.9 | 1.1 | ↓ 42% |
| Median turnaround time | 14.2 hrs | 12.0 hrs | ↓ 15% |
| Clinics citing Image Scoring as “useful feedback” | 28% | 65% | +37 pts |

Reflection

Designing for telemedicine is less about user interface and more about shared understanding.

By embracing measured trade-offs (less automation, fewer rating levels, slightly longer forms), I achieved a workflow that helped people think, learn, and trust the system.

The biggest UX win wasn’t visual polish; it was the alignment of intent between clinic and specialist. When both sides feel seen, heard, and guided, the entire diagnostic process improves.

See What We Do

Ready to build a bold, purposeful brand?

Let’s collaborate on something meaningful — from first concept to final launch.
