Imagine you’re a pathologist uploading a complex medical case with multiple microscopy images. You click “Create Case” and expect to wait. AI analysis takes time, especially when processing high-resolution medical images. But what if, instead of a loading screen, you could simply continue your work and have the AI insights appear on your screen the moment they’re ready? No refresh, no waiting, just there.
That’s exactly what we built at PAICON. This is the story of how we integrated AI into our application in a way that feels magical to users, even though the underlying work is anything but simple.
The Challenge: Making Slow Things Feel Fast
When a pathologist creates a new case in iPath Forum, our AI assistant analyzes everything: patient demographics, medical history, lab results, clinical symptoms, and most importantly, the microscopy images. This analysis can take anywhere from 10 to 30 seconds depending on complexity.
The challenge? How do we keep users informed without making them stare at a spinner or, worse, forcing them to refresh the page to see results?
We needed:
- Immediate feedback - Show users that the AI is working from the moment processing starts
- Real-time updates - Display results the moment they’re ready, across all open browser tabs
- Graceful failure - Handle errors without breaking the user experience
- Personalization - Show each user relevant controls (like feedback forms) that adapt to their actions
The Architecture: Pieces Working Together
Our solution combines several moving parts that work in harmony. Think of it like a relay race where each component hands off to the next at exactly the right moment.
1. Multiple AI Services, One Interface
We designed our system to work with different AI services—including OpenAI (the same technology behind ChatGPT), our internal systems, and Amazon’s machine learning platform. Instead of scattering AI logic throughout our application, we created a unified interface.
Each AI service works differently under the hood, but we designed them to accept the same inputs and deliver results in the same format:
- They receive standardized case data (patient info, images, medical context)
- They process it through their AI models
- They return organized results we can use consistently
This means we can experiment with different AI models, compare their outputs, or switch services without rebuilding our entire system. It’s like having multiple translators who all understand the same input and deliver results in the same format.
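To make the idea concrete, here is a minimal Python sketch of what such a unified interface could look like. The names (`CaseData`, `AnalysisResult`, `CaseAnalyzer`, and the provider classes) are illustrative assumptions rather than our production code, and the actual API calls are stubbed out.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class CaseData:
    """Standardized case inputs, simplified for illustration."""
    demographics: dict
    history: str
    lab_results: dict
    image_urls: list[str] = field(default_factory=list)


@dataclass
class AnalysisResult:
    """Standardized output every AI backend returns."""
    summary: str
    findings: list[str]
    model_name: str


class CaseAnalyzer(ABC):
    """Common interface that every AI service implements."""

    @abstractmethod
    def analyze(self, case: CaseData) -> AnalysisResult:
        ...


class OpenAIAnalyzer(CaseAnalyzer):
    def analyze(self, case: CaseData) -> AnalysisResult:
        # Call the OpenAI API here; stubbed out for illustration.
        return AnalysisResult(summary="...", findings=[], model_name="openai")


class BedrockAnalyzer(CaseAnalyzer):
    def analyze(self, case: CaseData) -> AnalysisResult:
        # Call Amazon's managed ML platform here; stubbed out for illustration.
        return AnalysisResult(summary="...", findings=[], model_name="bedrock")


def run_analysis(analyzer: CaseAnalyzer, case: CaseData) -> AnalysisResult:
    # The caller never needs to know which backend sits behind the interface.
    return analyzer.analyze(case)
```

Because every backend satisfies the same contract, comparing models or swapping services is a change at the call site, not a rewrite of the application.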
2. Background Jobs: The Invisible Workers
When a user creates a case, we don’t make their browser wait for the AI. Instead, we immediately return control to them while the AI works behind the scenes.
The system handles the heavy lifting: calling the AI service, processing images, and handling any errors, all while the user continues their work. It’s like dropping off your laundry and getting a ticket. You don’t stand there watching the washing machine; you go do other things and come back when it’s ready.
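The pattern looks roughly like this in Python. A plain thread pool stands in for a real background-job system here purely for illustration, and every name below is a placeholder rather than our actual code.

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

# A thread pool standing in for a dedicated job queue.
executor = ThreadPoolExecutor(max_workers=4)


def save_case(case: dict) -> str:
    return str(uuid.uuid4())                  # placeholder for the database insert


def store_result(case_id: str, result: dict) -> None:
    print(f"stored result for {case_id}")     # placeholder for persistence


def analyze_in_background(case_id: str, case: dict) -> None:
    try:
        result = {"summary": "..."}           # in reality: call the unified analyzer interface
        store_result(case_id, result)
    except Exception:
        print(f"analysis failed for {case_id}")  # log it; the case page keeps working


def create_case(case: dict) -> str:
    case_id = save_case(case)                              # fast path: persist the case
    executor.submit(analyze_in_background, case_id, case)  # hand the slow AI work to a worker
    return case_id                                         # respond immediately; the browser never waits
```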
3. Real-Time Updates: The Magic Moment
Here’s where things get interesting. When the AI finishes analyzing a case, we don’t just save the results. We instantly send them to everyone who’s currently viewing that case page.
The moment the AI completes its analysis, we push the results to everyone viewing that case in real-time. Every pathologist sees the insights appear on their screen—no refresh needed.
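A stripped-down sketch of that push mechanism, again with made-up names: an in-memory registry of viewers per case, where each open tab holds a subscription and the background job broadcasts to all of them the moment results land. In production this would sit behind WebSockets or server-sent events.

```python
import asyncio
import json
from collections import defaultdict

# Registry of viewers per case: case_id -> set of per-tab queues.
viewers: dict[str, set[asyncio.Queue]] = defaultdict(set)


async def watch_case(case_id: str) -> None:
    """Each open browser tab subscribes with its own queue and renders whatever arrives."""
    queue: asyncio.Queue = asyncio.Queue()
    viewers[case_id].add(queue)
    try:
        while True:
            message = await queue.get()
            print(f"tab viewing case {case_id} renders: {message}")
    finally:
        viewers[case_id].discard(queue)   # clean up when the tab disconnects


def broadcast_result(case_id: str, result: dict) -> None:
    """Called by the background job the moment analysis finishes."""
    payload = json.dumps({"type": "ai_result", "case": case_id, "data": result})
    for queue in viewers[case_id]:
        queue.put_nowait(payload)         # every open tab gets the same payload at once
```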
The Tricky Part: Showing Everyone the Same Results, But Personal Controls
Sending updates to all viewers created an interesting challenge. When we send AI results to multiple people at once, everyone receives identical content. But we wanted each user to see personalized controls. Specifically, a feedback form that:
- Shows if they’ve already submitted feedback
- Includes their personal security credentials (to prevent tampering)
- Reflects their specific permissions
The problem? The system sending the results doesn’t know who’s viewing the page. It can’t personalize content for individual users because it’s not connected to any specific person.
Our Solution: A Two-Step Approach
We split the problem into two parts:
Part 1 - The Broadcast (Same for Everyone): When AI analysis completes, we send the AI-generated content to all viewers. This content is identical for everyone: the medical insights, recommendations, and analysis.
Part 2 - The Personal Touch (Unique per Person): Instead of including the feedback form in the broadcast, each user’s browser automatically requests their personalized version. The system responds with content tailored to them:
- For users who already submitted feedback: a form showing their previous response
- For users who haven’t: an active form ready for their input
- With security credentials unique to that user
This happens so fast that users never notice the two-step process. They just see the AI results appear with a form ready for them.
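In code, the second step amounts to a small per-user endpoint that each browser calls on its own after the shared broadcast arrives. This is an illustrative sketch; the in-memory store, `User` type, and token handling are stand-ins for the real session, database, and anti-tampering machinery.

```python
import secrets
from dataclasses import dataclass


@dataclass
class User:
    id: str


# Placeholder store; in production this is a database lookup.
_feedback_store: dict[tuple[str, str], dict] = {}


def find_feedback(case_id: str, user_id: str) -> dict | None:
    return _feedback_store.get((case_id, user_id))


def personalized_feedback_form(case_id: str, current_user: User) -> dict:
    # Called by each browser individually, so the request carries that user's own
    # session and the response can be tailored to them.
    previous = find_feedback(case_id, current_user.id)
    if previous is not None:
        return {"state": "submitted", "feedback": previous}        # read-only view of their answer
    return {"state": "open", "token": secrets.token_urlsafe(16)}   # fresh, per-user form token
```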
Handling the Real World: When Things Go Wrong
AI systems aren’t perfect. Models can time out, services can be unavailable, or responses might be malformed. We built our system to handle these gracefully:
Status Tracking: We track each analysis through its lifecycle and show users appropriate indicators at each stage: a spinner while working, and the insights when complete.
Graceful Degradation: If something goes wrong, we log it for our team to investigate, but the AI section simply doesn’t appear on that case. The system doesn’t break or show cryptic error messages—the rest of the case remains fully functional, and our team can investigate and retry later.
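Conceptually, the lifecycle is a small state machine, and the page simply renders whatever the current state allows. A sketch with hypothetical names and placeholder markup:

```python
from enum import Enum


class AnalysisStatus(Enum):
    PENDING = "pending"        # job queued, spinner shown
    RUNNING = "running"        # AI service is processing the case
    COMPLETED = "completed"    # insights stored and broadcast to viewers
    FAILED = "failed"          # logged for the team; the AI section is simply not rendered


def render_ai_section(status: AnalysisStatus) -> str:
    if status in (AnalysisStatus.PENDING, AnalysisStatus.RUNNING):
        return "<spinner>"         # placeholder markup: analysis in progress
    if status is AnalysisStatus.COMPLETED:
        return "<ai-insights>"     # placeholder markup: the rendered insights
    return ""                      # FAILED: render nothing; the rest of the case is untouched
```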
Why This Matters in Healthcare
In healthcare software, every second counts, but so do accuracy and reliability. Our approach delivers on all fronts:
Speed: Pathologists don’t wait for AI. They keep working while analysis happens in the background.
Reliability: If something goes wrong, the system handles it gracefully without disrupting the workflow.
Transparency: Users always know when AI is working and when results are ready.
Collaboration: Multiple experts can view and discuss AI insights at the same time, from anywhere.
Continuous Improvement: User feedback helps us refine how AI analyzes cases over time.
Lessons We Learned
Building this taught us valuable lessons about integrating AI into traditional web applications:
Don’t Block on AI: Never make users wait for AI processing. Run analyses asynchronously and surface incremental progress so the UI stays responsive.
Design for Failure: AI services will fail. Build your system to handle errors gracefully and allow retries.
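For transient failures such as timeouts, a simple retry with exponential backoff and jitter goes a long way. A generic sketch, not our exact retry policy:

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_retries(operation: Callable[[], T], attempts: int = 3, base_delay: float = 1.0) -> T:
    """Retry a flaky AI call with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                                            # out of retries: let the caller mark it failed
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)                                    # back off before trying again
    raise RuntimeError("unreachable")
```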
Keep AI Separate: Keep AI service logic isolated from your core application logic. This way, you can swap or test different models without affecting the rest of your system.
Real-Time Requires Resilience: Even with instant browser updates, you need to handle common scenarios—users who reload pages, navigate away, or have unstable internet connections. Make your system work smoothly in all these situations.
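One pattern that helps here: on every page load or reconnect, subscribe to live updates first and then fetch the current state, so results that completed while the user was away, or that arrive while the initial fetch is in flight, are never silently missed. Everything below is a placeholder sketch.

```python
def subscribe_to_updates(case_id: str) -> None:
    print(f"subscribed to live updates for case {case_id}")   # placeholder for a WebSocket subscription


def fetch_case_state(case_id: str) -> dict:
    return {"case": case_id, "ai_insights": None}              # placeholder for a plain HTTP fetch


def render(state: dict) -> None:
    print(f"rendering case {state['case']}")                   # placeholder for the actual UI


def open_case_page(case_id: str) -> None:
    # Subscribe first, then load: a user who reloaded mid-analysis still sees the
    # insights as soon as they exist, and updates arriving during the fetch are kept.
    subscribe_to_updates(case_id)
    render(fetch_case_state(case_id))
```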
Think About What’s Shared vs. Personal: When you send the same information to multiple users at once, think carefully about what can be shared and what needs to be personalized for each individual. Use a two-step approach when needed.
The Big Picture
At its heart, this project was about more than just technology. It was about designing software that respects users’ time and intelligence. Pathologists are highly skilled professionals; our job is to give them powerful tools that augment their expertise without getting in their way.
By making AI feel instant and seamless, we help them spend less time waiting and more time doing what they do best: providing expert medical insights that save lives.
That’s how we build at PAICON: powerful technology, invisible complexity, focused on what matters.