EdenCare - AI Analytics POC

Overview

Client

Eden Care AI Researchers

Role

Product Designer (Validation & Experiment Design)

Scope

Rapid proof of concept

Team

AI team, Engineering, Internal stakeholders

Timeline

1 week

Outcome

Concept approved to move forward based on early validation

Project Context

Eden Care works with large volumes of medical, operational, and engagement data — from clinic visits and claims to HR records and wellness activity

Internal teams needed insights from this data, but most staff:

Didn’t have SQL or technical skills

Found dashboards slow and hard to use

Spent hours or days preparing reports

The AI team wanted to explore a new idea:

Could staff simply ask questions in plain English and get useful insights instantly?

This project was a short proof of concept, designed to help the company decide whether this idea was worth deeper investment.

What We Needed to Validate

This experiment focused on answering a small set of critical questions:

Would non-technical staff understand how to ask questions in natural language?

Would they trust AI-generated results enough to use them in real work?

Could this approach meaningfully reduce reporting time?

The goal was not to build a full analytics product — only to gather enough signal to decide whether to continue.

Constraints

One-week timeline

Needed something testable, not complete

Had to be simple enough for stakeholders to evaluate quickly

These constraints shaped every design decision.

Role & Ownership

I designed the experiment end-to-end.

I was responsible for:

Shaping the validation scope

Designing the minimum interface needed to test the idea

Making AI output understandable and trustworthy

Working closely with the AI and engineering teams to ensure feasibility

The focus was speed, clarity, and learning — not polish.

Design Strategy

I intentionally kept the flow small and focused on the three things that mattered for validation: asking questions, understanding the results, and building trust and transparency around how each result was generated.

The Experiment

Natural-Language Question Input

Staff could type questions in plain English or select from example prompts.

The goal was to see whether people could ask meaningful questions without training.
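
To make the mechanic concrete, here is a minimal sketch of how such a flow could work under the hood, assuming the POC translated questions into database queries via an LLM. The function names, prompt wording, and collection name below are illustrative assumptions, not the team's actual implementation.

```python
# Hypothetical sketch of the core mechanic: handing a plain-English
# question to an LLM and getting back a MongoDB aggregation pipeline.
# Names, prompt wording, and the `claims` collection are illustrative,
# not the actual Eden Care implementation.
import json
from typing import Callable

EXAMPLE_PROMPTS = [
    "How many claims were filed last month?",
    "Which clinics had the most visits this quarter?",
    "What share of staff completed a wellness activity this week?",
]

def question_to_pipeline(question: str, llm_complete: Callable[[str], str]) -> list:
    """Translate a natural-language question into a MongoDB aggregation
    pipeline, using whatever LLM completion function the caller supplies."""
    prompt = (
        "Translate this analytics question into a MongoDB aggregation "
        "pipeline over the `claims` collection. Respond with JSON only.\n"
        f"Question: {question}"
    )
    return json.loads(llm_complete(prompt))
```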

Transparency View (Trust Builder)

Trust was a major risk in this experiment, especially in a healthcare context.

To address this, I designed a transparency panel that showed:

How the AI interpreted the question

The steps taken to generate the answer

The underlying MongoDB query

This helped stakeholders feel confident the system wasn’t a “black box” and also helped the AI team debug early outputs.
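
As a rough reconstruction of what that panel surfaced, the answer payload could carry its own explanation alongside the result. The field names and example below are my assumptions, not the team's actual schema.

```python
# Assumed shape of a transparent answer: the interpretation, the steps,
# and the MongoDB pipeline that produced the result. Reconstructed for
# illustration; not the actual Eden Care schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TransparentAnswer:
    interpretation: str                                 # how the AI read the question
    steps: list = field(default_factory=list)           # reasoning steps shown in the panel
    mongo_pipeline: list = field(default_factory=list)  # the aggregation that actually ran

example = TransparentAnswer(
    interpretation="Count clinic visits per month for the last quarter",
    steps=[
        "Identified the visits collection as the data source",
        "Filtered records to the last three months",
        "Grouped by month and counted documents",
    ],
    mongo_pipeline=[
        {"$match": {"date": {"$gte": datetime(2024, 1, 1)}}},
        {"$group": {"_id": {"$month": "$date"}, "visits": {"$sum": 1}}},
    ],
)
```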

Results in Data Formats and Plain Language

Results were shown in three simple formats:

A table for raw data

A chart chosen automatically based on the dataset

A short written summary explaining what the data meant

This made insights easy to understand without dashboards or SQL.
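
The automatic chart choice can be as simple as a heuristic over the result's shape. The sketch below is an assumed approximation of that idea, not the POC's actual selection logic.

```python
# Assumed heuristic for picking a chart type from the result's shape.
# Illustrative only; the POC's real selection logic may differ.
def pick_chart(rows: list) -> str:
    if not rows:
        return "table"                                  # nothing to plot
    first = rows[0]
    has_time = any(key in ("date", "month", "week") for key in first)
    numeric = [k for k, v in first.items() if isinstance(v, (int, float))]
    if has_time and numeric:
        return "line"                                   # trend over time
    if numeric and len(rows) <= 12:
        return "bar"                                    # few categories, one measure
    return "table"                                      # fall back to raw data
```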

Impact and Validation Outcome

The product is still in active development. This case study reflects real design decisions made during build, not a polished post-launch story.

Early validation showed strong potential:

~70% of stakeholders responded positively and approved the concept for further development

A workflow that typically took ~3 days (SQL + reporting) could be reduced to ~5 minutes

Non-technical staff could get insights without relying on engineering teams

The transparency view significantly increased trust in AI outputs

This experiment gave the team enough confidence to decide whether to proceed with a full build.

Key Learnings

Experiments work best when they stay small and focused

Trust is just as important as accuracy in AI-driven tools

Showing how AI arrives at answers increases confidence and adoption

The biggest takeaway for me was learning to strip ideas down to the smallest version that clearly communicates value.

Why This Case Study Matters

I included this project to show how I approach early-stage validation, AI-powered products, and designing just enough to support decision-making. It reflects my ability to move quickly, design with intent, and avoid overbuilding before direction is confirmed.