2023: A Year of Intentionality

’Tis the season for a year-in-review retrospective. 2023 has been an incredible and purposeful year, not only for Collette Health as a business but for the technology industry as a whole. We saw the rise of ChatGPT and broader AI tools in the public consciousness. While they can be fascinating tools at first, society has collectively begun to shift into conversations that challenge us in new ways: trust, truth, ownership, liability, and various ethical questions now stand at the forefront. Regardless of where those answers come to rest, we can all agree that the rapid pace of development and innovation in this year alone has been staggering.

 

As I reflect on the year Collette Health has had, four main themes define our 2023 in Product:

 

  • Brand Evolution
  • Expansion of our Patient-to-Observer ratio
  • Analytics via our Customer’s Voice
  • Artificial Intelligence

 

 

Brand Evolution

We are incredibly excited to be sporting our new colors as Collette Health. Our new brand reflects our commitment to focus and innovation in high-acuity virtual care. As of this writing, KLAS ranks us with the leading score (89.6) in the Virtual Sitting & Nursing industry segment. We were grateful to our customers for responding to the KLAS survey and interviews and for providing their feedback. Their feedback, featured in our Spotlight report and included in KLAS’s inaugural Virtual Sitting and Nursing survey, helped us focus our efforts on where to improve. We are, at the core, a customer-led growth company. As the leader of our product, I’m very grateful for all the feedback from our customers. The most rewarding pieces are the stories of “good catches”: of keeping someone’s loved one safe.

 

It’s one of many reasons we continue to iterate. Throughout 2023, we performed 17 formal product releases on our normal three-week release cycle. During these production updates, we release new features, enhance existing capabilities, and resolve lower-priority issues. Additionally, we performed 21 system patches for higher-priority and more time-sensitive support needs, bringing the total to 38 updates to our core production platform throughout the year. It’s important to emphasize that we are investing heavily in new enhancements while ensuring the best possible and most stable user experience for our customers. Our development and customer care teams are tightly aligned and empowered to take action to resolve client issues.

 

Because our application is cloud-native, we can rapidly and effectively deploy updates (planned or patches) across the entire platform. Those updates happen passively and do not disrupt ongoing active observations. As a result, we’ve enjoyed 100% platform-wide uptime in 2023 year to date.

 

Beyond the core application, we’ve released 11 updates to our Analytics portal and 5 updates to our Public API. Combined with the core product releases, that makes 54 updates to our production platform, averaging approximately once a week.

 

This high-velocity approach generally allows us to make frequent minor changes that don’t require retraining, reducing the burden on clinicians. We can also be highly nimble in responding to customer feedback, support cases, and emerging security needs.

 

Some Numbers of Note

Looking beyond the development metrics for the year, I thought it would be interesting to share a few metrics about patients observed on our platform this year. The first bullet is the one we are most proud of.

 

  • Observers on the platform reported 83,678 falls avoided year-to-date (as of December 23, 2023).

 

  • Observers engage to speak with patients on average every 43.7 minutes. Patient engagement and communication are key to improved outcomes and patient satisfaction. We see this value improve dramatically when customers transition from competitors to our platform, moving from simply sounding endless alarms to proactively engaging with patients.

 

  • Something I’m incredibly proud of is that our platform enabled observation for 89 Deaf patients this year and over 7,000 patients classified as “Hard of Hearing.” We are the only platform that offers cross-language communication at the scale we do, and no one else enables American Sign Language communication. These patients who need observation would otherwise be underserved or ineligible for observation based solely on their ASL needs.

 

  • Patients on the platform spoke 56 distinct languages, the top languages being English, Spanish, French, Russian, Vietnamese, and Haitian Creole. Much to our disappointment, no actual patients needed the Klingon language, but there will always be next year!

 

  • Of patients onboarded to the Collette Health platform, 54.75% were male, and 45.25% were female.

 

  • Cognitive Impairment took the top spot among reasons for observation this year at 28.43%, with Fall Risk in second place at 22.35%. However, our customers used our platform for 18 major and 31 minor use cases, demonstrating its broad utility in observation-based virtual care throughout the year.

 

  • We noted a drop in use, by percentage, for observation of communicable diseases; perhaps a hopeful sign that the pandemic of the past few years is in the rear-view mirror.

 

  • We’ve collected good catches from observers for years, but in the second half of 2023 we integrated them more tightly into the product and our datasets. Since that release, we have seen a monthly average of about 2,135 good catch stories reported directly by those performing the observation.

 

Platform Expansion

I wanted to briefly summarize some of the major capabilities we released in 2023. Many hundreds of minor enhancements and improvements were also released.

 

  • Presence Detection – Via some impressive new AI and ML tools, we track the observer and provide guidance if they lose focus during their observation; if the observer isn’t present or doesn’t return their focus, we escalate our attempts to redirect them and can ultimately notify a supervisor that the observer is absent or distracted. (For the technically curious, a simplified sketch of this escalation flow follows this list.)

 

  • Virtual Check-in – This workflow enhancement allows organizations to configure a timeframe; the system guides the observer if they haven’t checked in on a patient within the allotted time.

 

  • Acknowledge Patient or Clinician – Essential to good communication is having the tools to manage expectations. If an observer is engaged with a patient while their attention is requested in another room, this capability allows them to place a message on the screen of the second room, letting the requester know the observer sees them and will be right there.

 

  • Clinician Request Assistance – A workflow feature to allow staff in the patient room to signal the attention of the observer from the patient cart. This capability pairs well with the acknowledgment capability.

 

  • Integrated Good Catches – We have collected Good Catches within our resource center for years. Still, we’ve now integrated the collection of good catches within our notes and event logging, simplifying the process. Real-time counts are visible in the lobby. The Analytics portal will serve up data in 2024, providing deeper insights.

 

  • SMS on Alarm – The ability to automatically send a text message (SMS) to contacts when a patient’s in-room alarm sounds, reducing reliance on audio volume carrying over distance.

 

  • New Audio Controls – We introduced an option for patient audio to be muted or unmuted by default when a patient joins a pod, depending on the need. Additionally, this year we added the ability to “unmute all,” which enables the observer to hear audio from all patients; this is particularly useful in passive observation models or when most patients are resting.

 

  • Observation Report Enhancements – A continuation of our efforts to expand and improve data about a patient’s observation experience, either for mapping into an EMR or incident investigation.

 

  • Analytics Dashboard – Reports and data export capabilities have been included for years. Still, this year, we heard from our customers a desire for more visuals, interactivity, and deeper contextualization. The start of that journey was the release of our interactive analytics dashboard, and we have multiple updates planned throughout 2024 to continue to delight and amaze.

 

  • Good Catch & Fall Avoided data exports – We provide access to dashboards, reports, and exportable data within our service. Everyone has different ways to interact with data and different needs. We continue to enhance all forms of data access, including, in this case, exportable data to include Good Catches and Falls Avoided.

 

  • Cart-facility mapping – One of our more commonly requested metrics has been facility-specific utilization of carts; to support this, we released an administrative configuration to define location and will provide the drill-in in the analytics dashboard in Q1 2024.

 

  • Quick Notes – To help increase user focus, reduce distractions, and speed up the average time to complete a note, we introduced predefined notes that are quickly selectable. Observers can still enter free-form notes if desired.

 

  • Activity Logging – A new category of observation documentation and data collection, “Activities” join “Notes” and “Events” to help capture the holistic details of the observation. Activities are quick to select and versatile, allowing an observer to note a patient’s state of sleep, level of agitation, and details of physical activity. We noticed a trend of customers documenting this information by hand on paper and recognized it would be valuable to capture it in the product and make it reportable.

 

  • 12:1 observation ratio – Expanding the ratio from 10:1 to 12:1 for patient observation by a single observer. When appropriate, this provides a 20% increase in observational capacity and overall ROI of the system.

 

  • Security-related updates – User last login, random password generator on the user profile, session and inactivity alerts to support the latest NIST guidance, configurable password policies, and new user roles to support expanded capabilities.
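
For the technically curious, here is a rough sketch, in TypeScript, of how an escalation timer for Presence Detection could be structured. The class, stage names, and thresholds here are hypothetical and chosen purely for illustration; they are not our production values or implementation.

```typescript
// Minimal sketch of a presence-escalation timer (illustrative only).
// Stage thresholds, names, and notification hooks are assumptions, not product values.

type EscalationStage = "gentle-reminder" | "strong-prompt" | "supervisor-alert";

interface EscalationStep {
  afterSeconds: number; // how long the observer has been away or distracted
  stage: EscalationStage;
}

const STEPS: EscalationStep[] = [
  { afterSeconds: 15, stage: "gentle-reminder" },   // soft on-screen nudge
  { afterSeconds: 45, stage: "strong-prompt" },     // a more prominent redirect
  { afterSeconds: 120, stage: "supervisor-alert" }, // notify a supervisor
];

class PresenceEscalator {
  private awaySince: number | null = null;
  private fired = new Set<EscalationStage>();

  constructor(private notify: (stage: EscalationStage) => void) {}

  // Call on every presence-model result (e.g. once per second).
  update(observerPresent: boolean, now: number = Date.now()): void {
    if (observerPresent) {
      // Observer is back: reset all escalation state.
      this.awaySince = null;
      this.fired.clear();
      return;
    }
    if (this.awaySince === null) this.awaySince = now;
    const awaySeconds = (now - this.awaySince) / 1000;
    for (const step of STEPS) {
      if (awaySeconds >= step.afterSeconds && !this.fired.has(step.stage)) {
        this.fired.add(step.stage);
        this.notify(step.stage); // fire each stage at most once per absence
      }
    }
  }
}

// Example wiring: log each stage as it fires.
const escalator = new PresenceEscalator((stage) => console.log(`escalation: ${stage}`));
escalator.update(false);                      // observer looks away
escalator.update(false, Date.now() + 50_000); // ~50s later: gentle-reminder, then strong-prompt
```

The point of the sketch is the shape of the behavior: gentle first, progressively firmer, and only then involving a supervisor.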

 

 

User Experience Design

  • Administration Portal – We began the year by releasing our Admin User Interface (UI), which moved settings from within the core application, the lobby, and elsewhere – even a few settings only available to the CCC via support ticket – into a brand-new self-service user interface. We’ve greatly expanded the options, roles, and configurations available throughout the year.

 

  • Patient User Interface – Over the past few years, we’ve focused much of our time on the observer’s experience and the tools available to them. Limited functionality has always existed in-room because we don’t ask or expect the patient to physically interact with the cart. Still, it was time to address some critical updates and feedback. We all but rewrote the patient-side experience, added a side panel with a “Dim mode” option, and released the fish tank video, which has been very popular with patients. (The video would later be shown in the passive view rather than only while awaiting an observer; this was our single most requested change during this period.)

 

  • Reimagined Observer User Interface – We spent years observing observers, talking to customers, studying human behavior, and prototyping changes to fine-tune how observation should work. We first began showcasing design prototypes for the updated UI to customers 2.5 years ago. We relentlessly iterated on the concept based on their thoughts and perspectives. What was released reflects years of research and addressed approximately 85% of all user-submitted feedback.

 

  • Gallery Mode – As our customers’ use of our product has evolved, so have we. Gallery Mode is a product layout for a more passive, less proactive observation use case. This view favors larger video spanning the entire screen over the richness of the full feature suite. Some basic communication capabilities are available, but since the layout is intended for passive observation, the full breadth of tools isn’t needed.

 

  • Product Rebrand – Our team was thrilled to push a special release to sport our new colors and brand!

 

 

Expanding The Ratio

As we conclude 2023, we have made the option available to monitor up to 12 patients per observer versus our standard 10:1 ratio. For years, as an organization, we’ve held firm to a smaller patient ratio because we know that smaller ratios provide better outcomes for fall prevention. And we continue to believe that. We are not blind to the fact that some competitors tout high ratios, but in practice, we’ve found that their outcomes are lower and most of their customers don’t maximize capacity; it simply serves as overly idealistic marketing content. We have always been and will continue to be focused on outcomes and what is best for our customers. So, we’ve always chosen the more challenging path of being honest during sales, even if, on paper, our ratio puts us at a disadvantage. So why, then, are we expanding our ratio? Is it a heartless ploy to abandon principles in search of the almighty dollar? Hardly. The key to “why change” lies in why customers use our platform; those uses have expanded beyond fall reduction. We still feel that 10:1, or even 8:1 (where we started many years ago), depending on patient acuity, is optimal for fall-reduction outcomes. Still, as customers use our platform for lower-acuity patients and new passive observation models, it makes sense to modestly expand our ratio.

 

But in true fashion for us, we don’t simply “flip the switch.” We interviewed customers and spent a long time understanding the risks and concerns that come with an expanded ratio. Primarily, the concern was the observer’s ability to multi-task. A massive theme of 2023 has been putting the building blocks in place: new tools and modified capabilities that support the observer as observation ratios expand. Our goal is to enable a meaningful expansion of patient ratios without reducing outcomes. This year, we introduced our reimagined Observer User Interface (UI), sometimes called PodV2. In this interface, we built in as many of the lessons learned from the past 7 years as possible. We reduced the average number of clicks an observer needs to make during a shift by bringing many options and settings forward and shifting the display placement of information. We also moved the monitoring zone (the video tiles) from the right to the left of the screen for a more intuitive cognitive flow.

 

To support the ratio expansion more directly, we introduced drag-and-drop reordering of the video tiles, enabling observers to place higher-need or higher-acuity patients in preferred locations, or to sort by facility or another desired organization. We introduced clinician acknowledgment, which allows the observer to place a message on the screen in the patient’s room at the press of a button, sharing that they are currently with another patient; this helps the observer manage concurrent needs. Quick Notes was introduced to make the documentation process less time-consuming. We’ve introduced Activity Logging to help observers replace paper documentation processes or duplicate systems and screens. Between Activity Logging and Quick Notes, the goal is to keep the observer’s eyes and attention firmly directed at the patient video feeds rather than pulled away for documentation. We also introduced our Presence Detection capabilities this year, which help the observer, and those who manage observers, ensure proper attention is paid to patients. For the observer, Presence Detection offers multiple reminders to pay attention to the patients if their gaze drifts away; reminders start gently and progressively escalate, and ultimately a supervisor can be notified of prolonged distraction or absence. This ensures a higher degree of quality in observation by reducing distractions.

 

Virtual Check-in reminders have also been introduced, allowing organizations to set a timer by which an observer is reminded to check in with a patient if they haven’t done so within the given interval. Early in the new year, we will release “tagging,” a capability that lets an observer mark a moment in time and come back to it later to add notes and context when they are not actively engaged in patient care.
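
As a rough illustration of the check-in concept, here is a minimal TypeScript sketch of a per-patient reminder timer. The 30-minute interval, class name, and callback are hypothetical assumptions for illustration, not how our platform is actually configured or implemented.

```typescript
// Minimal sketch of a configurable check-in reminder (illustrative only).
// The interval, class, and callback names are assumptions, not product defaults.

interface CheckInConfig {
  intervalMinutes: number; // organization-configured check-in window
}

class CheckInReminder {
  private lastCheckIn = new Map<string, number>(); // patientId -> last check-in (ms)

  constructor(
    private config: CheckInConfig,
    private remind: (patientId: string) => void,
  ) {}

  recordCheckIn(patientId: string, now: number = Date.now()): void {
    this.lastCheckIn.set(patientId, now);
  }

  // Call periodically (e.g. once a minute) to surface overdue patients.
  poll(now: number = Date.now()): void {
    const limitMs = this.config.intervalMinutes * 60_000;
    for (const [patientId, last] of this.lastCheckIn) {
      if (now - last >= limitMs) {
        this.remind(patientId);               // guide the observer to this patient
        this.lastCheckIn.set(patientId, now); // restart the window after reminding
      }
    }
  }
}

// Example: remind the observer if a patient hasn't been checked on in 30 minutes.
const reminder = new CheckInReminder({ intervalMinutes: 30 }, (id) =>
  console.log(`Time to check in on ${id}`),
);
reminder.recordCheckIn("room-12");
reminder.poll(Date.now() + 31 * 60_000); // -> "Time to check in on room-12"
```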

 

All these features together are designed to boost and direct observer focus, remove obstacles that drain attention, and reduce any burdens the application itself may have placed on observers, whether clicking through menus to reach features or typing every note by hand rather than selecting standard options via Quick Notes or Activity Logging. Through all these measures, we feel confident that customers who see the need and choose to enable the 12:1 expansion will enjoy the benefits of a meaningful ratio increase without reducing outcomes.

 

Data & Analytics

Data and outcomes have always been important to us. Still, we heard from our customers, and saw validated in the KLAS report, that our data and analytics had room for improvement. We hope that you’ll find that we’ve listened! In August, we released the first iteration of our Analytics Dashboard. We’ve since released 3 additional updates to the dashboard alone, providing new metrics and adjustments based on conversations with our customers. We understand that data needs to tell a story, and that story should be more than numbers, graphs, and charts; in 2024, we plan to release multiple expansions to our reporting and dashboarding capabilities to increase the depth of data available and its contextualization.

 

Join us in January for our Analytics Webinar. We will have a candid conversation on data, how our customers leverage it, and our plans in 2024 to be the undisputed leader in our niche on data, dashboards, analytics, and reporting.

 

Artificial Intelligence and Machine Learning

This topic is quite fascinating, given all the energy and excitement, all the questions and fears, and the extreme complexity of the technology. It’s interesting that just as we were the first in our niche to be cloud-native and the first to use webRTC as a video backbone, we were also the first to make genuine use of modern machine learning. We were first, by many years, to introduce two-way video to patient observation; even today, others that have it struggle to make it a seamless component of their product rather than a kludgy bolt-on. The years before the pandemic were very interesting as we engaged hospitals with a webRTC-based cloud product. We faced a near-endless series of questions about cloud security, scalability, and “can it work?” In the end, it’s very secure and stable. Yes, it does work. A similar moment is occurring right now with Artificial Intelligence.

 

For the readers of this blog who may be technology purists: yes, there is a difference between Artificial Intelligence and Machine Learning. Still, the public consciousness so often blurs the lines with flagrant disregard that I’ll do the same as I continue to write this.

 

Within the niche of patient observation, the artificial intelligence capabilities available are still in their infancy and are evolving. The technology has limitless potential, but most health tech companies are still searching for meaningful applications. I’ve spoken to customers of AI-only or AI-first care platforms. In the end, those customers have largely turned off the AI features because they failed to deliver on the promises and high expectations that were set. Some fundamental things need to improve in the AI-only model, not the least of which is that it takes the humanity out of patient care.

 

We could certainly provide some broad, generalized “AI” that watches patients and tosses alerts at nurses faster than Sesame Street’s Cookie Monster drops crumbs. But that offloads observation responsibilities onto already busy, already short-staffed, already tired staff; deploying in this manner doesn’t make sense. Instead, we’ve focused heavily for years on using the right technology and intelligence to best enable human-centric observation and patient care. Our machine learning and AI tools provide better insights, notifications, and alerts to observers. We leverage machine learning for a variety of tasks, ranging from in-room luminosity detection (to determine whether night vision should be enabled) to natural language processing that translates and voices text entered by an observer into a patient’s preferred language, crossing the language divide.
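
As one concrete example from that list, here is a minimal sketch, assuming a browser context and an HTMLVideoElement carrying the room feed, of how mean luminance could be sampled to decide whether to enable night vision. The threshold, function names, and downsample size are illustrative assumptions, not our actual implementation.

```typescript
// Minimal sketch of in-room luminosity detection (illustrative only).
// The threshold, sample size, and callback are assumptions, not product values.

function meanLuminance(video: HTMLVideoElement): number {
  const canvas = document.createElement("canvas");
  canvas.width = 64; // downsample: a small grid is plenty for an average
  canvas.height = 36;
  const ctx = canvas.getContext("2d");
  if (!ctx) return 255; // fail "bright" so night vision isn't toggled spuriously
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let total = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Rec. 601 luma approximation from the RGB channels.
    total += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
  }
  return total / (data.length / 4); // 0 (black) .. 255 (white)
}

// Enable night vision when the room goes dark; the threshold is an assumption.
function evaluateNightVision(
  video: HTMLVideoElement,
  setNightVision: (on: boolean) => void,
): void {
  const DARK_THRESHOLD = 40;
  setNightVision(meanLuminance(video) < DARK_THRESHOLD);
}
```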

 

We use AI to monitor observers and remind them if their attention wanders. We use AI to dynamically crop the observer’s video, presenting them in the best light while reducing the risk of accidentally sharing PHI based on what may be behind the observer. We provide proactive motion detection. We leverage AI in our infrastructure to manage scale and alerting, and we leverage AI in our development and Quality Assurance processes to increase code quality and velocity as well as automate portions of our testing program.
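
The motion-detection idea can likewise be sketched with simple frame differencing; this is a hypothetical stand-in under assumed thresholds, far simpler than what a production system would actually use.

```typescript
// Minimal sketch of frame-difference motion detection (illustrative only).
// Both thresholds are assumptions, not production tuning.

function motionScore(prev: ImageData, curr: ImageData): number {
  const pixelDelta = 25; // summed per-channel difference that counts as "changed"
  let changed = 0;
  for (let i = 0; i < curr.data.length; i += 4) {
    const dr = Math.abs(curr.data[i] - prev.data[i]);
    const dg = Math.abs(curr.data[i + 1] - prev.data[i + 1]);
    const db = Math.abs(curr.data[i + 2] - prev.data[i + 2]);
    if (dr + dg + db > pixelDelta) changed++;
  }
  return changed / (curr.data.length / 4); // fraction of pixels that changed
}

// Flag motion when more than ~2% of the frame changes between samples.
function hasMotion(prev: ImageData, curr: ImageData): boolean {
  return motionScore(prev, curr) > 0.02;
}
```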

 

In the new year, we will begin piloting AI that controls the camera position and angle to keep patients in view. This will further reduce observer workload by removing the need to fine-tune and adjust their vantage point. It will also enable a partially or fully autonomous observation mode for select use cases such as elopement detection. Beyond this, we have multiple capabilities to release in 2024 to expand observation coverage on the platform and extend improved outcomes to as many patients as possible.
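
To give a feel for the idea (and only the idea; the pilot itself is far more involved), here is a hypothetical sketch of the control step that keeps a detected patient centered. The bounding box is assumed to come from a separate detector, and the deadband value is an illustrative assumption.

```typescript
// Minimal sketch of a framing-adjustment step (illustrative only).
// The Box shape, detector, and deadband are assumptions, not the pilot implementation.

interface Box { x: number; y: number; width: number; height: number } // normalized 0..1

// Returns pan/tilt nudges to move the camera toward the patient's center.
function framingAdjustment(patient: Box): { pan: number; tilt: number } {
  const centerX = patient.x + patient.width / 2;
  const centerY = patient.y + patient.height / 2;
  const deadband = 0.1; // ignore small offsets so the camera doesn't hunt
  const pan = Math.abs(centerX - 0.5) > deadband ? centerX - 0.5 : 0;
  const tilt = Math.abs(centerY - 0.5) > deadband ? centerY - 0.5 : 0;
  return { pan, tilt };
}

// Example: a patient drifting toward the right edge of the frame -> positive pan.
console.log(framingAdjustment({ x: 0.6, y: 0.4, width: 0.3, height: 0.4 }));
```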

 

Our ambitions with AI are to provide valuable expansions of our product that directly map back to observer enablement, patient experience, and, ultimately, clinical and patient outcomes. If you are considering a patient observation and virtual care solution that touts AI, I encourage you to think critically about the reality of how that system operates; where does it place the burdens of workflow? I see plenty of demos on trade show floors of a box drawn around a patient on a video feed, but in most cases, ask yourself, wouldn’t the fall have occurred by the time the system detected the patient standing? Human-centric AI-assisted observation will, for a time, provide the best patient outcomes despite the claims of AI-only proponents.

 

Recap

2023 has been a significant year for our product, with multiple major improvements and releases. We don’t see ourselves slowing down in 2024 either! With a continued focus on patient outcomes, we will lean more on analytics by expanding our datasets, increasing contextualization, and adding predictive insights. We will continue to drive for stellar patient outcomes by enabling humans and vastly expanding our already strong in-application uses of AI.