Kokua AI: Using AI to Support Emotional Wellbeing
by StratMinds
Full Transcript
Nanea Reeves
And we're gonna start at the first slide actually, it would be helpful. Let's go here. All right, so I wanna talk about how we're using AI at TRIPP to support emotional well-being and also the steps that we took to get there. But first, I wanna tell you a little bit about what kōkua means in Hawaiian. It's the Hawaiian word we use here that means help, to support. Hawaiian words always have several meanings. So this will be a challenge for the AI people to capture that language, because context is everything. And there's always a higher meaning, a spiritual meaning to the word. And then there's an unspoken meaning, which is also very interesting. But more specifically, it's this concept of how do you support people without even an expectation of reciprocity back. Very challenging for a business.
All right, so first though, what I wanna do is I'm gonna disrupt this conference and guide you through a three-minute meditation. And the purpose of meditation is to bring your awareness to the present. It's not necessarily to chill you out like a tranquilizer. So let's just go through this. And we'll... Do you need audio? Yes, audio. Sorry.
Welcome. Welcome to the start of a new day. Before we turn our attention to the next discussion, let's create some space to turn our attention inward. Set down any objects and adjust your posture to get as comfortable as possible. For just a moment, lower your eyes and let your gaze rest just past the bridge of your nose. Soften the face and shoulders, let the arms rest gently, and allow the belly to relax. Notice what it's like to be here right now, opening up to the moment just as it is. Slowly lift your gaze now and rest your attention on the light you see in front of you. Take a breath in.
Sorry, you guys. Sorry, it's so loud. But you'll get an idea in the video of our environments. It's a native use case when you think of UX for virtual reality. It can immediately capture your awareness, and you can also play with scale to trigger awe and wonder. And I just saw a paper online where they studied our app and compared it. We didn't even know this. I just search every night and look at TRIPP research, and I find all these published papers, because our app is so freely available, where they compare us to natural environments in VR. What we found when we tested natural environments early on was that you had a mental model of what a beach should feel like or how it should smell. And when that wasn't present, it actually wasn't relaxing and calming. So we found that giving you an environment you could only experience in VR was the right approach for that user experience.
So anyway, we'll skip the video for this one, but hopefully we can show some of the other ones I have coming up. A little bit about me: I grew up here, and it's really an amazing experience to be talking about tech in Honolulu. I shared a little bit about my feelings very emotionally yesterday about that, and it's kind of a miracle what I've been able to achieve in my life to date, but I had to leave here to do it. I've run tech at startups, mobile game startups, and have always loved these inflection points of new technology transitions. And so let me get this little guy and see if I can figure it out.
All right, so we built this company, TRIPP. It was really about nine years ago that I lost my husband to cancer. It was very sudden, and I started to think about what do I do with my life going forward? I've been number two at many companies, COO, president, CTO, and I felt like it was time for me to lead and support my own idea. And so this is what it was: how do we use technology to help people?
The thing I was most excited about: I was an investor in Oculus early, and I would use the applications, and I was more interested in two things. How I felt coming in and out of it; it felt like a respite that I wanted to lean into. So you think about UX. And then the other thing was that it was so easy to produce fear that it led me to the question, what else could you make someone feel? It was very organic. But I think that this is a major inflection point for the world, actually. We have spatial computing, we have AI, we have integrated sensors and biosignals that can be captured through pupil measurement, heart rate, all kinds, even voice sentiment analysis. And then you have cloud GPUs.
And I was at a company called Gaikai, which was a cloud gaming company that Sony acquired. And we had GPUs in the cloud in the early days, mostly optimizing for really low-latency turnarounds. And this is what's gonna drive the lighter weight devices and local models, but AI will demand it for low-latency turnarounds, and spatial will benefit from it. So everybody that says VR is dead, I don't think so. I think XR is gonna ramp as a result of this.
And so we wanted to see, could you use these technologies to support emotional and mental wellbeing? We quantify physical health, and we've seen that be very successful with WHOOP, with Fitbit, et cetera. Well, what is the emotional, the inner fitness version of that? And why wait until you're in a mental health crisis? Can we start looking at our emotional wellbeing proactively, like we know we need to eat well and sleep well and not eat so many malasadas and Spam musubis and take care of our physical health? The mind and body are very connected.
This is a slide from 2017, when I first started thinking about TRIPP. And we always knew that AI was gonna be the foundation for how we make these adaptive experiences that were personalized to the end user. But it required that we start collecting data and get to a certain scale. And we have done that. We are now the recognized leader in the XR space. We've had lots of coverage. We were a TIME Best Invention. But I think from a user experience, the way we think about it is layered reality. The industry still thinks very vertically about VR, AR, mobile, audio. We just wanna meet you where you're at.
And so we deliver cross-platform. When you're in VR, it's a deep connection. You are opting in to retreat from the physical reality that you're experiencing. But we're starting to experiment with an AI agent called Kōkua that can support you throughout the day. And now that it's audio, we can start to think about, could we put a one-minute road rage deescalation in Android Auto? When you just focus on what's the use case or the value proposition to the end user, to just support them, there's so much opportunity to figure out how to meet them where they're at with today's interfaces, and that will continue to evolve as the interfaces evolve.
So the initial value to the consumer is they come in, they wanna focus more, they wanna learn how to meditate, they wanna get help with their sleep, they wanna breathe. But what we find is the real value is this desire to connect and also ascend. There's kind of this spiritual, for lack of a better term, desire, I think, as we become more and more secular as a society, to have this expansion. And I think that we're achieving this already. If you look at our reviews compared to reviews for other mindfulness apps, people will say about those, oh, that's a really nice app. Over and over we get: this has changed my life, this has changed my life, this is helping me with my anxiety. We don't make any claims, but we do work with clinicians and we have lots of studies, and we hope to get to the point where we can start to target very specific mental health use cases. In the meantime, we train our models to help us be able to do that in the future.
Because we're so broadly available, people just pick up our app and study it. I do searches all the time. I consider this data more valuable to us than if we sponsored the research. I had one investor tell me you shouldn't do that. What if the data comes back and it's not good? I want to know that. I want to know what's working, what's not working in a very methodical way. How do you measure the impact of what you're doing?
The other thing that was super interesting was, because we needed users to collect data from, the only place that VR users existed was on the gaming platforms. So I went to the Oculus team and the PlayStation team because I knew both, and I said, would you let me launch my cute little meditation app on there, and they did me a favor. And they were surprised at the traction that we got. There aren't that many wellness apps out there that have an over 50% male audience. And so we find this a very unique attribute. And there are young men as well.
So the traditional approach, and this is how we launched as well: we worked with meditation experts. We have a connection with Ram Dass's team, Jack Kornfield, all those. You have the concept. We worked with mental health professionals, Stanford neuroscientists, all the experts. We developed the concept. We put data instrumentation in, which I'll talk about. And then we developed the content and we started distributing it. And I think this is how all content creation to date has been approached. But then we knew that we wanted to always tailor it to your own physical reality that we could detect, as well as how you said you feel. Because in mental health, how you think you feel, the self-reported data, is actually really important. And so, with every session when we launched the app in December of 2019, we asked at the beginning of the session and the end of the session, how do you feel, on an 11-point scale, zero to 10, as well as a mood board where we randomized the display each session to control against position bias.
And then we could start our analysis, and now ultimately our algorithms can go: oh, these moods were selected; if users can select more than one, we can weight the first one more heavily than the third one, et cetera. And then we started looking at what data we can detect, and I'll talk a little bit more about how we use the IMU. And I'll show you how we can also sync with wearables.
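As a concrete illustration of that weighting idea, here is a minimal sketch in Python. The mood list, the geometric decay, and the function names are all hypothetical, not TRIPP's actual implementation:

```python
import random

# Hypothetical mood vocabulary -- the real TRIPP mood board is not public.
MOODS = ["calm", "anxious", "focused", "sad", "energized", "grateful"]

def display_moods(moods):
    """Shuffle the mood board each session to control for position bias."""
    order = list(moods)
    random.shuffle(order)
    return order

def weight_selections(selected, decay=0.5):
    """Weight the first-selected mood most heavily, decaying for later picks."""
    weights = {}
    w = 1.0
    for mood in selected:
        weights[mood] = w
        w *= decay
    total = sum(weights.values())
    return {m: v / total for m, v in weights.items()}  # normalize to sum to 1

print(display_moods(MOODS))
print(weight_selections(["calm", "grateful", "focused"]))
```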
So the first thing we wanted to do was, could we detect the user's breath, and how could we do that across platforms? And how could we do that without a dependency on syncing a wearable to it? And so we did that. We identified that the IMU was sensitive enough to pick up the pattern. And then we could build models that could detect the inhale, the hold (that little pause between the inhale and the exhale), and the exhale, but also designed to filter out head movement and other movements. And we noticed it was actually sensitive enough to pick up your heartbeat, which was really interesting.
So with the kinematic data coming into the headset, we're now live on all three platforms and we've collected over a million breathing patterns to train the breath model. And those same filters for extraneous movements will apply to the heartbeat model that will start to roll out at the end of the year. And then we can sync wearable data for the 15 to 20% of users who have wearables, and we'll use that as another validating data set.
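To make the detection idea concrete, here is a toy sketch, not TRIPP's trained model: it just smooths a synthetic IMU channel and labels inhale, hold, and exhale from the slope. The sample rate, window size, and threshold are all made-up values:

```python
import numpy as np

def smooth(signal, window=25):
    """Moving-average filter to suppress head movement and sensor noise."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def label_breath_phases(imu, dt, hold_eps=0.02):
    """Label each sample inhale/hold/exhale from the slope of the smoothed signal."""
    slope = np.gradient(smooth(imu), dt)  # signal units per second
    labels = []
    for d in slope:
        if d > hold_eps:
            labels.append("inhale")
        elif d < -hold_eps:
            labels.append("exhale")
        else:
            labels.append("hold")  # the little pause between breaths
    return labels

# Synthetic chest-motion signal: a slow sine for breath plus jitter for head movement.
fs = 50.0                        # assumed 50 Hz IMU sample rate
t = np.arange(0, 30, 1 / fs)     # 30 seconds
signal = 0.05 * np.sin(2 * np.pi * t / 5) + 0.005 * np.random.randn(t.size)
print(label_breath_phases(signal, dt=1 / fs)[100:110])
```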
The interesting thing when we launched this was, well, one, we had some requirements on footprint and speed. So we put a local model actually within the app. And we decided, even though technically and legally, kinematic data is not biometric data, we thought one day it might be. And two, we should include the user in the decision to share and start to build trust. And we talked about this before. So we implemented a UX that told the user what we were doing. And since we had a local model, it would still work. But we gave them the option to help us train the model so it would be better to support people by submitting de-identified breath pattern data. And 70% said yes.
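A minimal sketch of that consent flow, with hypothetical names throughout: inference stays local either way, and only opted-in users contribute de-identified patterns:

```python
import uuid
from dataclasses import dataclass

@dataclass
class BreathSample:
    user_id: str   # known on-device only
    pattern: list  # e.g., a window of smoothed IMU values

def deidentify(sample):
    """Strip the user identifier; attach a random one-time submission ID instead."""
    return {"submission_id": str(uuid.uuid4()), "pattern": sample.pattern}

def maybe_submit(sample, user_opted_in, upload):
    """Local inference always works; sharing training data requires explicit opt-in."""
    if user_opted_in:
        upload(deidentify(sample))

submitted = []
maybe_submit(BreathSample("user-123", [0.1, 0.4, 0.2]), True, submitted.append)
print(submitted)
```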
You're talking about gamer dudes. They're the most skeptical audience, right? They're the best early product development audience because they will tell you how much you suck. And they told us that from day one. They will write you long missives on why, but they will also tell you how you can be better. And then we would report back: OK, we did this. Thank you. We even posted in the reviews. I would post personally. And they would go, these devs really care. And they would change their one-star reviews to five-star. And I think the best thing is to just get something out there and start learning.
So let me play the video so you can see how this looks, and I'll kind of walk you through it. You can select different breathing patterns. And then in this environment, and this was the first training experience that we launched, the particles get brighter as it starts detecting your breath. And then the rocks start glowing with your breathing pattern. We thought biofeedback just looks so lame in apps. I call this stuff lamification instead of gamification, right? Because why would I put a chart up there? How is that gonna connect more emotionally to people? But if you can have the sense that your breath is actually influencing the experience itself, it connects you more to your breath, which was the whole point. And so we do a little visual animation reward.
And then we launched a version, and this is on the Apple Vision Pro, and it's really beautiful on there, but it's also on the Quest. Oh, this is the marketing material thing. But it's underwater, and your breaths are kind of like psychedelic bubbles. And then at the end of it, this family of fish shows up. And we found that people love little baby animals showing up. So you put in, like, seahorses and a little baby seahorse, and it really, you know, it's like a connection to, I don't know, family. It's just been fun to experiment with this.
This is an experience that we did with Qualcomm, who's one of our investors, where we took heart rate variability from a Fossil watch. And this is an augmented reality focus game where you navigate, it's a metaphor for your thought stream, and you navigate words through these objects. And the environment starts showing you stress by speeding up with your heart rate, so you can start to self-regulate in real time with these visual cues.
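The talk doesn't describe the actual mapping, but a simple version of HRV-driven biofeedback might look like this sketch, where RMSSD (a standard HRV measure) is converted to a stress score that scales the environment's speed. The thresholds are placeholder values:

```python
def stress_from_rmssd(rmssd_ms, lo=20.0, hi=80.0):
    """Map RMSSD (ms) to a 0..1 stress score: lower HRV reads as higher stress."""
    clamped = max(lo, min(hi, rmssd_ms))
    return 1.0 - (clamped - lo) / (hi - lo)

def environment_speed(stress, base=1.0, max_boost=2.0):
    """Speed up the thought-stream visuals with stress so the player can self-regulate."""
    return base + max_boost * stress

print(environment_speed(stress_from_rmssd(35.0)))  # elevated stress -> faster scene
```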
So this is where we're headed now. We had to collect all that data. We collected mood data and built a mood vector database with over 10 million sessions of asking people, how do you feel. And now we can start to take these different inputs, whether they're self-reported or detected, and apply them to the experience generation. Particle effects are really easy to manipulate algorithmically and procedurally. And I'll show you a couple other things. And we have some patents, we're always filing patents. We just filed a patent on atomic delivery of voice content and voice caching, because when we launched our agent, we didn't like the turnaround time.
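As an illustration of how detected or self-reported mood might drive procedural visuals, here is a hypothetical mapping from a small mood vector to particle-system parameters; the axes and coefficients are invented for the example:

```python
# Hypothetical 3-axis mood embedding: valence, arousal, focus, each in [-1, 1].
def particle_params(valence, arousal, focus):
    """Map a mood vector onto procedural particle-system parameters."""
    return {
        "hue": 0.6 - 0.3 * valence,                     # cooler colors for lower valence
        "emission_rate": 50 + 150 * (arousal + 1) / 2,  # busier particles when aroused
        "turbulence": 0.8 * (1 - focus) / 2,            # calmer motion when focus is high
    }

print(particle_params(valence=0.4, arousal=-0.2, focus=0.7))
```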
So we started innovating. I mean, it's so fun; everybody's experiencing this right now in this room. The first thing that we did before we did the agent, though: in the past, I'm the voice in the app, because I was free and available. And our chief revenue officer is the male voice. So I used to go in the studio and record, and we would hire meditation teachers to write the content. And then we created a voice agent: we created a model based on the recordings that we had, to have a compassionate and empathetic voice.
And then we started authoring the content, and we trained, I think we use Bard, Gemini, Claude, GPT-4. We call them the council, and they rate our output and score it, and anything with a low score gets trashed before it gets promoted to the voice output. And then there's a closed loop with the user rating on that. And we start building a vector profile on the individual as well, so that, to reference the previous talk, we know what content they respond the best to. There are no mouth noises to clean up. There's surprisingly a lot of saliva when you record your voice. It's such a nightmare. And then you have to normalize room tone from one day of recording to the next. All that's gone now. It would take us eight weeks to do 10 new meditations. Now we're authoring like 400 an hour, rating and scoring.
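A rough sketch of that council pattern, with stand-in rater functions where the real system would call the different models; the threshold and scoring scale are assumptions:

```python
from statistics import mean

def council_review(script, raters, threshold=7.0):
    """Ask each rater to score the script 0-10; promote only if the mean clears the bar."""
    scores = [rate(script) for rate in raters]
    return mean(scores) >= threshold

# Stand-in raters -- the real council would call Gemini, Claude, GPT-4, etc.
raters = [
    lambda s: 8.0 if "breath" in s else 4.0,  # toy heuristic in place of an LLM call
    lambda s: 7.5,
]
print(council_review("Notice your breath as it rises and falls.", raters))
```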
And here's the beautiful thing when you think about it: yes, we are not hiring writers anymore, or sound engineers. We have different sound engineers now who, like, process the output. But a very junior person at our company was our meditation manager. She'd never worked at a tech company. She was a yoga teacher who really loved our product and said, could I come work for you? She project managed the delivery of that voice agent. She project managed the breath deployment, and then stepped into a product manager role on the Kōkua agent. And this was all within less than a year. And so I look at that as a symbol of hope for people who maybe didn't have the benefit of going to Stanford and getting a computer science degree, but how they can participate and do meaningful work in this new landscape. It really does democratize a lot. And I couldn't be more proud of her. And also, just the fact that our company enabled that opportunity for someone is amazing.
So once we did this, we realized we can do this in real time, right? We have 10 million sessions of data. And so we created our AI voice agent, and we launched it 60 days ago on mobile, because we have a smaller audience on mobile. When you launch anything in VR, you have to figure out what's happening here, what's happening behind you, what's happening all around you. So we thought, let's just get it out there and start training it. And we've been watching it learn over time. We've also developed new technologies for it. And it's been interesting, because we've seen our engagement increase 300% since we launched it. And that is really fascinating. Our mobile app has other features, like you can upload your images into the experience. And when we launched with that, we noticed that users who personalized the experience with their own images had 400% more engagement, and they still had active sessions six months after their account creation date. So hyper-personalization for us is the key.
So now we're bringing it into VR. And so we built a prototype. This is actually in headset, where that red ball acts as a microphone or something like that to encourage you to speak. And you throw your voice into the light and it generates that for you. But what we are seeing now is people need more help with their inputs from a UX standpoint. So on average, when we launched Kōkua on mobile, people were only entering one to three words. And then when we prompted them to give a deeper, more expansive description, we saw the engagement go up, because they were getting better responses.
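That input-coaching step could be as simple as this sketch: flag very short inputs and ask a follow-up question. The word threshold and the wording are hypothetical:

```python
def needs_expansion(user_input, min_words=4):
    """Flag very short inputs (one to three words) that tend to get weak generations."""
    return len(user_input.split()) < min_words

def expansion_prompt(user_input):
    return (f'You said "{user_input}". Can you describe it more fully: '
            "where you are, what you feel, what you want more of?")

text = "calm ocean"
if needs_expansion(text):
    print(expansion_prompt(text))
```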
So then Cody, our UX person, and there's audio, but I'll talk through it so we can move faster. After we saw it in headset, he said, oh, it needs more UX and we need more prompting. He works in an app called ShapesXR, which some of you here are investors in. We use it all the time, because designing spatially must be done spatially. Otherwise, I mean, the thing that kills me, even in our own app, are these menus floating in space and windows floating in space. You really have to think about it like, you know, if you play video games and you see a little glimmer over here and you're trying to figure out what to do next, then you go, oh, there's a glimmer.
Let me go look, oh, it's a health kit. That's exactly what I need, right? And so it's, you know, it's an iterative process anyway.
One thing I just want to end with, and we can answer questions, or I'm happy to do it during lunch if we're running long. The final Hawaiian word I'd like to share with you is kuleana, and it means responsibility. It's a privilege to be a steward of the land. It's a privilege to be here in this moment, and not in the negative sense that word has taken on recently; it's an honor.
You know, my cousin Kanakolu is here right now. Please introduce yourself, especially if you're working in tech here in Hawaii. She works with Kamehameha Schools and has lots of programs that she's involved with to help evolve and expand opportunities for the indigenous people, not only in Hawaii but across the entire Pacific Rim as well, because we're all cousins.
And so, you know, one of my goals for being here at this conference was to help create that awareness and that connection. Because, no offense, I went to St. Andrew's Priory, which is the sister school to Iolani, but Iolani and Punahou are a certain class. We need to also create opportunities for the people who are from here and of this place, you know, who are born of the earth of this place and are the stewards of this land.
So I just leave you with that word. It's a very powerful word for me. And I think words bring a lot of meaning and connection to our own lives, as was mentioned earlier, but it's often in the context of how they're said. And also what the higher meaning is, for us to just connect to that, even what is the higher meaning of our own names, and how we got here.
So anyway, I'm just honored to be here. Thank you so much to the organizers. Thank you. Thank you.