Harnessing Agentic Systems: Building Hyper-Personalized Products with Purpose
by StratMinds
Full Transcript
Hamudi Naanaa
I feel humbled by this introduction, so I guess the expectation management is now forcing me to have a good keynote. I'm Hamudi. I'm the CTO and co-founder of Portal.ai, as Richard pointed out.
A little bit about myself: I come from AI research and engineering. I spent the last seven years in big tech. I saw a few people here from Google, a few people here from Meta. Unfortunately, I wasn't at Meta, but I was at Apple, Google, and Amazon. And the overarching question I worked on was essentially: how do you build AI into the existing products that these companies have? I had a few learnings there, and over time I got to appreciate them, but then I figured it was time to build something on my own. And this is what Portal.ai is about.
But it's not about myself. It's about something that we all want to learn here. So first and foremost, I wanted to say thank you to all the speakers that have presented so far. You made my life really hard. Now I have to talk about something new, something different. And you did a really good job, which I really enjoyed.
So yeah, as a founder, I just do some hacky things. I went to this whiteboard and I figured out, well, what do people want to learn? Is there something that I can share? And there is a trend. The trend is in the agentic systems. Portal.ai, under the hood, that's the spoiler alert, uses a lot of agents. So this was a really good chance for myself to dive deeper into that and to share some of the things I know.
So a little bit more context also on something that Richard mentioned: AGI House. If you don't know what that is, it's a community of AI builders that I happen to emcee from time to time. It's a really good place with some great people. Since we have a lot of Google people here: we have Jeff Dean, we have Sergey Brin coming by. We get to chat, we get to hack together. But first and foremost, we have a lot of great builders, and that gives important context for this keynote. I've emceed more than 50 hackathons over the last year, and there are a lot of builders; each one draws around 100 people. So there are a lot of patterns I've seen, and you can imagine how many people build agentic systems. I think first place is your AI girlfriend, and second place is building AI agents.
So that being said, I think I have a few learnings from my experience that I would like to share. But first, let's even define what agents are. We had a really good discussion on the first day. This is a really beautiful screenshot of a really beautiful paper that is not mine. It's from Joon Sung Park's group at Stanford. If you want to read it, we have a QR code there. It's a really cool paper about building an AI village where you have a lot of different agents with different goals and motivations. They try to coexist, they try to seduce each other or hate each other, that kind of stuff. Really good read. It reads almost like a book.
So I think a lot of people, when they think about agents, they think about that: a personification of AI personas. But agents are a little bit more than that. And before we dive deeper, let's understand where we are right now. Coming from the same great folks at Stanford, there's this thing called the AI Index Report that comes out every year. It's a really good read. If you want to understand what's going on in the industry right now, without all the buzzwords and everything else, this is a really good place to start. It's long, I think around 70 pages. But if you have five hours on the flight back to San Francisco, this is a really cool one. Some technical words here and there, but it's really good.
So I have an excerpt from there. And I promise this is going to be the ugliest slide, because it's a lot of text. But we have three main keywords here. First, progress accelerated in 2023. I think we'll all agree; 2023 feels like the medieval ages now in AI terms. We have a lot of multimodality, we have Gemini, we have OpenAI succeeding there, we have Sora hopefully coming soon, and all that kind of stuff. So we all feel that progress in technology. Now everyone is racing to build products out of that technology. And I think we all agree now, from all these discussions, that technology alone is not a good product. So everyone is racing and trying to figure that out.
But there's a problem. 2023 was a really good year of nice demos like the Stanford village. But a demo is not a product, right? And the challenge is, and I'm going to quote, that AI cannot reliably deal with facts, perform complex reasoning, or explain its conclusions. So the technology is one limiting factor. But there is also another problem that is not mentioned here, and that is beautiful products. So I figured, let's first talk about the first problem and understand, at a high level, how to build agents in the first place. And then we'll talk about the second problem, which is: well, cool, we have agents, but what's next?
First and foremost, a little bit of context as well here. I'm the co-founder of Portal.ai, and our vision is essentially described in this one beautiful image. The world economy is about 100 trillion dollars, and 60% of it actually comes from small and medium businesses. However, 70% of them die out in the first five years. It's really hard to run a business. They have a lot of different problems, but one of them is that it's really hard to deal with the complexity of the business. You have so many different data points to look at, so many different areas of expertise you need, like logistics, ads, all these kinds of things. And we want to help them. We want to flip those numbers.
And for that, we are in this unique moment in history where all of a sudden we've managed to aggregate all the knowledge of humankind into this black box that seems to know some stuff. And there's a good opportunity to build something for these businesses. So that's what inspired me to co-found Portal.ai with my beloved co-founder, who couldn't make it, unfortunately, but will hopefully see the recording. Hi, we all love you.
So that being said, here's the thing. LLMs are really great, right? Imagine showing something like this to someone two years ago: writing a haiku about a UX AI conference. "Islands interface, AI whispers through poems, ocean of ideas." Absolutely beautiful, right? AI is really beautiful. I guess some of you are not impressed anymore because, you know, chatbots. But it used to be really cool and really impressive.
Now, here's the problem. Why can I not just go to a chatbot and tell it: run my business for me? You're all going to laugh. It's not going to work, right? That's not how things work. You get a bunch of "as an AI language model, I cannot help you," blah, blah, blah. Or "here are a few ideas." So it's not going to work like that. That's the challenge, right?
So we figured out there is some intelligence in these systems. And this intelligence, if you laser-focus it on a specific use case, works really well. In this case, I have an example: writing a description for my ad campaign that targets specific VCs living in Hawaii. Then I give a little bit of context: I sell AI that runs businesses. And all of a sudden we get a few really good descriptions.
So if you think about it, if you decompose the big problem into small, specific problems, then all of a sudden AI can solve all these problems. And then you just have to figure out how to put them back together. That was essentially my research in big tech: how to build these agentic systems. And with that knowledge in mind, I started building the technology behind Portal.ai.
So last year, and that's an image from last year, there was a lot of experimenting on how to build agents. You might have heard of the prompting techniques: you have zero-shot and few-shot prompting, you have chain of thought, then self-consistency over chains of thought, where you take a majority vote over multiple samples, or tree of thoughts, where you try to prune the tree of decisions, and stuff like that. So this is medieval stuff from last year, you know, nobody's doing it anymore. No, I'm kidding, everyone is actually still doing that, but everyone is building on top of it.
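The self-consistency idea mentioned here can be sketched in a few lines: sample several chain-of-thought answers at nonzero temperature and take a majority vote. This is a minimal illustrative sketch with a stubbed model call, not any particular product's implementation.

```python
from collections import Counter
from typing import Callable, List

def self_consistency(sample: Callable[[str], str], prompt: str, n: int = 5) -> str:
    """Sample n completions (e.g. chain-of-thought answers at temperature > 0)
    and return the majority answer."""
    answers: List[str] = [sample(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a stochastic LLM call: four samples agree, one is an outlier.
_samples = iter(["42", "42", "17", "42", "42"])
result = self_consistency(lambda _prompt: next(_samples), "What is 6 * 7?")
print(result)  # "42": the majority vote discards the outlier sample
```

The point of the pattern is exactly this: individual samples can be wrong, but the vote across samples is more stable.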
So there are a lot of ways to do these things. But one thing that comes out of this is the fact that we are trying to build something that is not linear anymore. If you think about ChatGPT, it's always: here's my input, here's my output, here's my input, here's my output. Here we start to reason, and that motivates the emergence of AI agents.
So we had this discussion on the first day about what an agent even is in the first place, and I took this really beautiful picture from the ReAct paper, which is a really great paper, I'll come back to it at the end, that tries to define agents in this robotics way. At the core, you have the LLM, right? It all starts with this stupid but very smart thing that has all the knowledge in the world, and you have to extract it from there.
Now, on top of that, agents need to live in an environment; they cannot live in a vacuum. You need to define what that environment is going to be. It can be the internet, it can be the Google Ads API, it could specifically be Shopify in our case, it could be a lot of things, right? That's the environment, and then you can act on that environment, and then you can observe it back.
From there, you can go straight back to the LLM, but it's good practice to also form some kind of memory, because a lot of agents are stateful. They don't live in the moment of one conversation; they live over time. So then you have memory, planning, all these kinds of things. Now, there are also a few hacks you can use to make the system more stable, like the few-shot prompt in this case. The idea is that you imitate with the LLM what a human being does in their head. Think about it: if I were to jump from this balcony, something bad would happen, right? And when I describe this scenario, you imagine in your head what happens to me jumping off the balcony and getting hurt. This imagining is something you can achieve with a few-shot prompt, because you use the tokens to give the context explicitly. LLMs don't have any brain; they don't store any representations on the inside. But that's a hack to work around it.
Finally, you need to give it a task, right? An agent without a task is like: cool, it's going to think a lot, and then, thank you, I guess. So that's the agent. And I think we've mentioned Groq a few times. I'm really excited about what Groq is doing with LPUs specifically. In this case, we have something like 800 tokens per second. And here's the point. My take on why this thing needs to exist is very simple: tokens are going to be the new unit of thinking. Agents built on LLMs don't have any implicit representational thoughts, so you need to be explicit, and being explicit costs tokens. But if your agent thinks for too long, it's not going to work. You want this 300-millisecond first response kind of thing. So that's what I'm really excited about. The technology is getting there for these agents, and I think that's why this is the year we'll get there.
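The loop described in the last few paragraphs, an LLM core that acts on an environment, observes the result, and writes it to memory until the task is done, can be sketched roughly like this. The function names and the toy "ad campaign" behavior are illustrative stand-ins, not a real agent framework.

```python
from typing import Callable, List

def run_agent(decide: Callable[[List[str]], str],
              act: Callable[[str], str],
              task: str, max_steps: int = 10) -> List[str]:
    """Minimal agent loop: decide an action from task + memory, act on the
    environment, observe the result, and remember it, until done."""
    memory: List[str] = [f"task: {task}"]
    for _ in range(max_steps):
        action = decide(memory)
        if action == "done":
            break
        observation = act(action)                     # act on the environment
        memory.append(f"{action} -> {observation}")   # observe and remember
    return memory

# Toy agent: fetch campaign stats once, then stop.
def decide(memory: List[str]) -> str:
    return "done" if any("fetch_stats" in m for m in memory) else "fetch_stats"

def act(action: str) -> str:
    return "clicks=120, spend=$30" if action == "fetch_stats" else "noop"

memory = run_agent(decide, act, "review my ad campaign")
print(memory)
```

Note that the memory persists across steps, which is what makes the agent stateful rather than a one-shot input/output call.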
Now, let's get back to Portal.ai. We build agents that are experts, in this example for Shopify. We take the whole Shopify API, we train them on some data, we set Shopify as the environment, we have tasks, all these kinds of interesting things. Now, an agent in a vacuum still doesn't make sense. You want a lot of these agents, because then you can orchestrate the whole system. It's this divide-and-conquer approach that seems really promising, because that's how it works for human beings. So we can mimic an org structure with AI agents as well.
So the approach we took was essentially: let's use different LLMs, different combinations of agents, different endpoints, different environments. In this case, say Google Ads, or a chatbot with Mistral, you get it. You build all these agents and then you build the orchestration. And that's essentially what we have built. It's this really elegant orchestration where a lot of different agents come together, decompose the task, and execute it together.
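The orchestration idea, decomposing a task and dispatching each piece to a specialist agent, can be sketched like this. The agent names and the "domain: task" convention are invented for illustration; a real orchestrator would itself be a model deciding the routing.

```python
from typing import Callable, Dict, List

class Orchestrator:
    """Route decomposed subtasks to specialist agents by domain."""
    def __init__(self, agents: Dict[str, Callable[[str], str]]):
        self.agents = agents

    def run(self, subtasks: List[str]) -> List[str]:
        results = []
        for sub in subtasks:
            domain, _, body = sub.partition(": ")
            results.append(self.agents[domain](body))  # dispatch to the expert
        return results

# Toy specialist agents standing in for LLM-backed experts.
orch = Orchestrator({
    "ads":  lambda t: f"ads agent handled '{t}'",
    "shop": lambda t: f"shop agent handled '{t}'",
})
out = orch.run(["ads: raise budget", "shop: check inventory"])
print(out)
```

The divide-and-conquer benefit is that each specialist only needs the context and tools of its own domain.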
Now, you might think: wow, Portal.ai, a universal agent orchestrator, beautiful technology. We all get it. It's cost-efficient, it's modular, it's going to execute all these actions for your customers. And the first learning we had was very simple: product does not equal technology. So far I've been talking a lot about the technology, and it's great. I mean, I'm an engineer, I love these things. But there is a challenge, and the challenge is that my customers don't really care what kind of agents I have, what kind of orchestration I have, how efficient it is, blah, blah, blah. It's a means to achieve their goal. And their goal is to survive and to thrive. That's the challenge for the businesses.
So we figured out that putting AI together into a product is going to be a challenge. In the first part, we talked about agents; now I want to share a few really hands-on learnings about what we did in the product. I started with this really beautiful image from Sarah yesterday. You know, I'm a founder, I hack my way through, and it fit perfectly because it makes my pitch a little bit stronger. Funny enough, we have quite a few things in our product that we implemented without really thinking about that framework, and that's what made it really easy for me to glue this together.
So, yeah, let me show you a few snippets to give you a feeling of how that maps to the product. First, a lot of the conversations we had were: OK, it's a chatbot, you have to go there; maybe it should not be a chatbot, all that kind of stuff. But I think something a lot of people have slept on so far is: why do you have to talk to your AI agent? Maybe the AI agent should talk to you, right? Because all of a sudden you don't have to solve the problem of, what do I even text that AI agent? What should we talk about? In this case, you have an agent that runs 24/7, monitors all your data, and then that agent talks to you.
So in this specific case, we're thinking about actions. We wanted to help our customers run their ads more autonomously. So we figured, let's bring in an agent that looks at all the data you have and helps you decide which campaigns are performing well. And then you just execute that. That was the first win we had with customers. We really envisioned this billboard in San Francisco saying: it's the first AI agent that texts you. So that's the vision behind it. And that was, I guess, the first one, following the "curate" part from Sarah.
The second part is, you know, this cold-start problem is so annoying. You go to ChatGPT with something less trivial than writing a haiku about Hawaii (and writing one is really hard for me, by the way; I guess it's easy for AI). You go there and then you have to explain yourself: I have to do this, here's my data, here's my blah, blah, blah. It gets really hard. Plus, in the context of Portal.ai, we have business owners, and business owners are very different people. Sometimes you have subject-domain experts who really understand what LTV is and talk in really fancy words. And some people are like: yeah, that's my business, I sell wooden toys on Amazon, that's kind of it. I have no idea what retention is and all these smart words.
So our thesis was that there's a lot of value in mining knowledge from all the conversations you have with your customers. In this case, we have Emily, who has a bakery. This is the part of the onboarding we had at the beginning, which is like: hi, let's meet. And from there, we extract a knowledge graph. That is a really elegant and beautiful way to explicitly represent the knowledge about your customer, or whatever is needed as context. Because, first, you can show it to your customer, and your customer understands what you know about them. And second, there are a lot of really cool algorithms to pull data back out of the knowledge graph and pass it into the context dynamically. You'll see an example a little later of how this enables something beautiful in the user experience. But the idea is really brilliant and we should all do it: mine the context, learn your customer, and then build something really personal.
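A knowledge graph like the one described can be represented minimally as (subject, relation, object) triples, with a retrieval step that turns facts about a customer back into prompt context. This is a toy sketch; the entities and relations are illustrative, and real extraction would be done by a model, not by hand.

```python
from typing import Set, Tuple

class KnowledgeGraph:
    """Store customer facts as (subject, relation, object) triples and
    serialize them back out as context for an LLM prompt."""
    def __init__(self) -> None:
        self.triples: Set[Tuple[str, str, str]] = set()

    def add(self, subj: str, rel: str, obj: str) -> None:
        self.triples.add((subj, rel, obj))

    def context_for(self, subj: str) -> str:
        # Sorted for deterministic output when building the prompt.
        facts = sorted(f"{r}: {o}" for s, r, o in self.triples if s == subj)
        return "; ".join(facts)

# Facts mined from an onboarding conversation like Emily's.
kg = KnowledgeGraph()
kg.add("Emily", "runs", "a bakery")
kg.add("Emily", "sells_on", "Shopify")
print(kg.context_for("Emily"))  # "runs: a bakery; sells_on: Shopify"
```

The appeal is exactly what the talk describes: the representation is explicit, so you can both show it to the customer and feed the relevant slice into the model's context dynamically.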
The next thing we have is explainability. A big challenge of AI is that you talk to it, and then one time it fails, and you think: I don't know if I can trust this thing. Especially with my business, my money, why would I trust this thing? So the challenge here, and what we overcame, is two things. First, referencing is a really cool idea. If we tell you that something on Google Ads is working in your bidding strategy, or something else in the keywords, there is a really quick win in just referencing the source. I think You.com does a really good job at building that. Richard is the CEO of You.com, and we love their product; it helps us a lot as well. Referencing things allows you to explain the reasoning behind your decisions. Second: if you're so smart, just tell me what to say next. That's a really funny thing we got to learn through our customers. It's just suggestions for how to continue the conversation. I think a few people mentioned that as an idea; I can confirm we implemented it, and it really helps.
You continue with follow-ups like, how do I improve a campaign, those kinds of things. Because all of a sudden it's not just this entity that does something for you. It's your partner that teaches you, without trying to be a smarty-pants about it. It's just like: hey, here's what you can learn as well.
That being said, we also pass a lot of context, because I think it's important. For any decision we make, especially monetary decisions, it's really important that we say: hey, we think this is a good campaign because it has this ROAS, it has contributed this much in net sales, that kind of stuff.
And if you think back to the agent architecture, everything I show here is enabled by agent systems. Agents having different memories for their use cases allows you to retrieve all these things without having to go through a database of all the sales and everything else, which would be super expensive.
That being said, there's also a beautiful thing that I really love, and our customers do too, judging from the heat maps we evaluated. It's this: I don't even want to text you; just let me click on something to follow up. In this case, we have this smart cursor, or whatever you want to call it. We're not so good with names, so please help us. It's a thing where you highlight something specific and then dive deeper into the conversation without having to type even three words about that specific thing. It's very simple, right? If you think about this partner you've built with AI, you just tell it: what is that? Elaborate. A lot of people really need that, because it makes it really easy to understand the reasoning behind an impression.
With that being said, that's my list of the features in our product that we got from customers. Now, coming back to agents: agents are not trivial. Don't be fooled by me being overly optimistic. Agents are really not trivial. There are a lot of things that can go wrong. One of them is the design. It's a very complicated system that is non-deterministic by nature. Even when you play with temperature and that kind of stuff, it's still going to spit out a few things that are not expected, especially in random environments where you have challenges.
So the first challenge you face is that you multiply the non-deterministic LLM by the non-deterministic environment, by the non-deterministic memory process, by the non-deterministic tasks. It gets really hard. That's why there's a trade-off we learned when building our agents: if you're too generic, you're going to have a lot of problems, because you're going to just swim in this whole thing. If you're too specific, there's no need for AI.
So, funny enough, one thing we had to learn in developing the product is not when to use AI, but when not to use AI. Because a lot of the things we do are just a set of SQL queries. You go to the database, you get the budget. There's no need to use an LLM to generate SQL code that's going to hallucinate and that kind of stuff. That seemed really great in 2023; I don't think we're there yet technologically in 2024. So learning how to hack your way through these systems to make them more stable is really valuable. That's the first thing, on design.
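The "don't use AI for this" point can be made concrete: a budget lookup is a plain parameterized query, deterministic and cheap, with nothing to hallucinate. The schema and column names here are invented for the example.

```python
import sqlite3

def get_campaign_budget(db: sqlite3.Connection, campaign_id: int):
    """Deterministic lookup: a parameterized SQL query instead of asking an
    LLM to generate SQL. Nothing to hallucinate, nearly free to run."""
    row = db.execute("SELECT budget FROM campaigns WHERE id = ?",
                     (campaign_id,)).fetchone()
    return row[0] if row else None

# Toy in-memory database standing in for the product's real store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE campaigns (id INTEGER PRIMARY KEY, budget REAL)")
db.execute("INSERT INTO campaigns VALUES (1, 500.0)")
print(get_campaign_budget(db, 1))  # 500.0
```

The design rule this illustrates: reserve the LLM for the parts of the pipeline that actually need judgment, and keep known-shape lookups deterministic.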
Now, orchestration is all the problems of design multiplied by a lot of agents. So it's even more hallucinations. This is an excerpt from AutoGen, which is a Microsoft paper, on how you can build these things. If I had to share just one learning from my experience, one of the best things that worked really well, coming from my research in big tech and also from Portal.ai, is this triangle architecture of agents.
I really loved GANs as a paper, back before diffusion models became a thing, so I was really inspired by them. You have one agent that is a generator; this is the one that gets to propose something. You have one agent that is a discriminator, which gets to say: no, that's not going to work, here's your problem. But these two still tend to drift together toward hallucinations. So then you have a third one, which stays persistent about the global goal these agents have to achieve. This is a kind of moderator. It's the agent that really says: no, stop, that's not it. So yeah, orchestration is also a challenge that we are solving.
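The generator/discriminator/moderator triangle can be sketched as a small loop: propose, critique, and only accept when the moderator confirms the draft still serves the global goal. The callables here are toy stand-ins for LLM-backed agents; the names are illustrative.

```python
from typing import Callable, Optional

def triangle(generate: Callable[[str, Optional[str]], str],
             critique: Callable[[str], Optional[str]],
             approve: Callable[[str, str], bool],
             task: str, max_rounds: int = 3) -> str:
    """Generator proposes, discriminator critiques, and a moderator checks
    the draft against the global goal before it is accepted."""
    draft = generate(task, None)
    for _ in range(max_rounds):
        problem = critique(draft)
        if problem is None and approve(task, draft):
            return draft                    # moderator signs off
        draft = generate(task, problem)     # revise using the critique
    return draft

# Toy agents: the first draft is too vague, the revision passes.
gen = lambda task, feedback: "draft v2" if feedback else "draft v1"
crit = lambda draft: "too vague" if draft == "draft v1" else None
mod = lambda task, draft: draft == "draft v2"
final = triangle(gen, crit, mod, "write ad copy")
print(final)  # "draft v2"
```

The moderator is the key addition over a plain generator/critic pair: it anchors the loop to the original goal so the two other agents cannot drift off together.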
And the third part is evaluation, especially when you build your own models or your own agents, with fine-tuning or custom tools. It gets really challenging. This is an excerpt from the AgentBench paper, I think it's called, with the thesis of having different environments and evaluating across them. It's a really good starting point. I don't want to give you the illusion that we have solved this problem. We have solved it well enough to deliver value to customers, and that, I think, is the main takeaway about AI technology right now. But I hope this is also something we'll figure out much better very soon, this year.
That being said, it's all fun and then you have costs. I think Richard pointed that out at the beginning. You know, it's all great: we help all the small and medium businesses, we have a lot of agents, they think, they talk, they use all these tokens, and it's so expensive. Especially at first, when you have GPT-4, because you want the state-of-the-art model. Just do something for me, please. And then you get $1.6K per week per customer. Good luck selling that to a small or medium business. We tried.
So we had two learnings here. First, and I'm sure I share this opinion with a lot of people we've met at AGI House: it's important to use the smallest model needed, because that works really well with agentic systems most of the time. Just moving to smaller models was the first step down. This is where we started exploring Mixtral, and later Llama and its 8-billion-parameter version. So that was a quick win.
The second one is designing orchestration that just makes sense: fine-tuning a separate orchestrator model that understands when to call whom, and why, instead of brute-forcing all the conversations. That was the second jump. And I'm proud to say we went from $1.6K to 70 bucks a week. I think that's fantastic. If you want to learn how, ask me. Happy to share everything I know, because it was painful.
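The "smallest model needed" learning amounts to a routing policy: send each task to the cheapest model that can handle it, escalating only when a cheap difficulty check says so. The heuristic and model stubs below are invented for illustration; in practice the router would itself be a small fine-tuned model, as described above.

```python
from typing import Callable

def route(task: str,
          small: Callable[[str], str], large: Callable[[str], str],
          is_hard: Callable[[str], bool]) -> str:
    """Send the task to the small model unless a cheap difficulty check
    says it needs the expensive one."""
    return large(task) if is_hard(task) else small(task)

# Stand-ins for model calls; the length heuristic is a deliberately crude example.
small = lambda t: f"small model: {t}"
large = lambda t: f"large model: {t}"
is_hard = lambda t: len(t.split()) > 8

easy = route("summarize yesterday's sales", small, large, is_hard)
hard = route("draft a full multi-channel marketing plan for the holiday "
             "season across all stores", small, large, is_hard)
print(easy)
print(hard)
```

Even this crude split captures the cost dynamic in the talk: most agent traffic is routine, so defaulting to the small model and escalating rarely is where the order-of-magnitude savings come from.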
That being said, in every keynote I give, I ask myself: what can I give back to you? I think we've learned a lot of things, and I had a few keywords here and there. On the left side are a few references to papers. I think these are the biggest milestones when you're trying to build agents. Learn how to build: LangChain is a good place to start with demos. Learn how to design: that's where the ReAct paper is a really cool thing, just to give you a feeling for prompting and the different options there. Then learn how to orchestrate, if you want a multi-agent system. But don't build a multi-agent system before you've figured out how to build a single agent that is good enough for the use case, because it's going to explode.
Then you have prompting, and the website is called the Prompt Engineering Guide. That's where you get a feeling for the common hacks around prompting, building agents, stuff like that. And AgentBench is the paper that gives you inspiration on how to evaluate. But I'll be honest with you: AutoGen was a paper from last September, so medieval times. There has been so much research since then, and there is so much research happening now.
So if you were to scan just one QR code, it should be this one. It's from AGI House, this hacker community we have built. You're going to make so many mistakes. Please just build, because you'll learn a lot more from building than from over-intellectualizing. Coming from academia, that's what I was paid for, right? Just sitting and over-intellectualizing.
So I want to finish on something positive. Agents are not trivial yet, but we will build them. I think this is going to be a really exciting year. So if you share my passion, my vision, and my beliefs, or the vision of the company, please join me. Have a good chat. Thank you very much. Any questions? Oh, there you go. Do you have any... Your voice is loud enough. Okay.
Do you have any thoughts about popular and viral models like, say, AutoGPT and Devin?
So I had a chance to chat with Devin's founders, and that was really great. I think it's a very early-stage product. Evaluating at 11% on the benchmark was really great compared to GPT getting around 3%. But my biggest thought: on May 4th, we will have OpenAI coming to AGI House to build agents with the Assistants API. And we have SWE-agent, which is an open-source implementation similar to Devin, also coming. I think they have reached around 10%, so almost state-of-the-art level. We're really early, and I think they only did that for debugging and fixing bugs. I'm really excited about that.
I think it's going to be a lot of engineering. There is the fundamental question of whether LLMs are even the best variant for agents in the first place. But even with everything we have now, there's a lot more than 11% on the benchmark that we can achieve. It's just clever engineering. So, excited, I guess. Building agent systems is hard, and therein lies opportunity.
Okay, next question. So I guess I'm just curious about your take on the reality of these agent systems, because we have things like the AI Pin and the Rabbit R1, these products that essentially allow people to cast a spell, and then things happen on the other end, like a flight to Hawaii or something, and you end up in first class and it's $30,000. So how do you deal with those types of parameters, where there's a lot more subjectivity in the preferences and the output? Because, like you said, there are SQL queries and then there's LLM stuff, and a broad range in between. So when it comes to designing these systems to be fault-tolerant for users, what are some methods by which you pursue that? Does that make sense?
That makes sense, yeah. I would split all the takeaways we had into different groups: explicit hacks and implicit hacks. The explicit hacks are guardrails. For example, you might have seen the actions thing, like increasing the budget. We envision a future where we will increase the budget and just report to you that we did it. But there's a lot that needs to be done on trust, on making sure it's not going to hallucinate, that nothing is going to drift, and it's not there yet. So designing the system so that the customer still has the final say is the explicit guardrail you put in place. And that works most of the time.
And the implicit one, though I guess it's also explicit, is in designing the architecture. On the context side, most people want this Google-like experience where you just say: buy me tickets to Hawaii. And then: first class or economy? Is morning okay, is a red-eye an option or not? That kind of stuff. I see a lot of value in building up this context. Knowledge graphs, for example, are one way to learn more about you. And then you design the conversations so that the agent always checks: do I know enough about you to really help you? And if not, I'm going to follow up. So again, it's this paradigm of the agent talking to you, not you talking to the agent, that I really believe in.
One thing we tested really early on, when we were testing knowledge mining, was this scenario where you say: order me some food. And it asks: actually, do you have any allergies? And you say: yeah, I cannot eat any garlic. Then the next time, when you say: help me organize my daughter's birthday party, it goes: okay, we need to decompose that into an execution plan. Food is one of the pieces. Oh wait, you have this allergy, we need to be careful here. So that's the cold start: learn as much as you can about the customer. And that's the implicit way of helping.
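The "follow up instead of guessing" behavior in this allergy example can be sketched as a context-completeness check before planning. The required keys and messages are invented for illustration; in the real system the check would be driven by the knowledge graph.

```python
from typing import Dict, List

def plan_or_ask(task: str, required: List[str], known: Dict[str, str]) -> str:
    """If any required fact about the customer is missing, ask a follow-up
    question; otherwise plan the task using the known context."""
    missing = [key for key in required if key not in known]
    if missing:
        return f"follow-up: could you tell me about your {missing[0]}?"
    return f"plan for '{task}' using {known}"

known = {"allergies": "no garlic"}
# First call: a key fact is missing, so the agent asks instead of acting.
first = plan_or_ask("order food", ["allergies", "cuisine"], known)
print(first)
known["cuisine"] = "italian"
# Second call: enough context, so the agent plans around the allergy.
second = plan_or_ask("order food", ["allergies", "cuisine"], known)
print(second)
```

This is the mechanical core of the paradigm in the answer above: the agent initiates the conversation when its model of the customer is incomplete, and reuses what it learned on every later task.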
Cool. Thank you so much, Hamudi. Thank you. If you want to engage him as part of the conference session to talk about this, I think there will be a...
At StratMinds, we stand by the conviction that the winners of the AI race will be determined by great UX.
As we push the boundaries of what's possible with AI, we're laser-focused on thoughtfully designing solutions that blend right into the real world and people's daily lives - solutions that genuinely benefit humans in meaningful ways.
Builders
Builders, founders, and product leaders actively creating new AI products and solutions, with a deep focus on user empathy.
Leaders
UX leaders and experts - designers, researchers, engineers - working on AI projects and shaping exceptional AI experiences.
Investors
Investors and VC firms at the forefront of AI.
AI × UX
Summit by:
StratMinds
Who is Speaking?
We've brought together a unique group of speakers, including AI builders, UX and product leaders, and forward-thinking investors.
Portal AI
Ride Home AI fund
Google Gemini
Metalab
Slang AI
Tripp
& Redcoat AI
Stanford University
Google DeepMind
Grammy Award winner
Google Empathy Lab Founder
Blossom
Lazarev.
Chroma
Resilient Moment
Metalab Ventures
of STRATMINDS