Monterey AI VoC 2024: Of Course It's About AI

Jul 29, 2024

This panel discussion brought together experts from various tech companies to share their insights on the use of AI in VoC, covering both current practices and future possibilities.

Chun and the guest speakers discussed:

  • VoC analysis for AI products versus general VoC analysis

  • Evaluation and validation of AI-generated insights from high-stakes data sources

  • Realistic expectations and team training for AI usage

  • Potential future applications of AI in VoC analysis

  • Ethical considerations and limitations of AI in VoC

  • Data privacy safeguards when developing AI-powered VoC tools

  • User expectations and adoption challenges for AI products

To dive deeper into the panel discussion, click here to watch the full video on YouTube.


Speakers:

Guest speakers:

  • Erica Solis - Product Quality Lead, LinkedIn

  • Chris Butler - Staff Product Operations Manager, GitHub

  • Kenji Hayward - Head of Support, Front

  • Mia Kuang - Reliability Engineering Manager, Comcast

Monterey AI:

Chun Jiang - CEO, Co-founder of Monterey AI

To learn more about Monterey AI, check our website and LinkedIn page.


Transcript

Chun:

All right. I'd like to start off with everyone spending one or two minutes talking about who you are, where you come from, and your journey with VoC. Kenji?

Kenji:

I can start. My name is Kenji Hayward. I am the head of support at Front. I'm probably one of the few people here who doesn't have a VoC role or function, so hopefully I can provide a perspective on how customer support teams are leveraged in that way.

Erica:

I'm Erica Solis. I lead a team at LinkedIn that's really focused on product quality and making LinkedIn products friction free, which is impossible, but it's fun. And AI is nothing new; we've all interacted with it for years, and we've all worked on products that use AI. But 2023 was like Taylor Swift's breakout year, and it was Gen AI's breakout year too.

Over the last year and a half, my team has helped launch about ten different Gen AI products for our end users and end customers, so we've learned a lot about how VoC is different for an AI product versus a non-AI product.

Mia:

Hi, my name is Mia Kuang, and I am a reliability engineering manager at Comcast for Xfinity Stream.

I studied scientific computing at school, and from there developed an interest in machine learning and data science. In my day-to-day right now, I spend a lot of time working with various types of data, from operations logs for observability, to product analytics, to customer feedback.

And even though in the world of SRE, or reliability engineering, we don't frequently encounter the exact term VoC, the concept and the practice are definitely not new to us. From a data generation perspective, I think you can look at analytics for most customer-facing technology products in two ways. On one hand there is customer-generated data: your ratings, reviews, survey results, et cetera. On the other hand, you have machine-generated data: your HTTP requests and responses, clicks, errors, and so on, all the measurables happening in your application or platform, whether in the background or customer-facing.

So I think it's very important to combine all of these to look at your product holistically, and to have an end-to-end data pipeline that serves all of these purposes efficiently and caters to your specific business goal setting and decision making. I'm excited to talk more about AI today.

Chris:

Hi, I'm Chris Butler. I'm the staff product operations manager at GitHub, and I work on the Copilot products. Basically, my job is to make sure that the two product VPs I partner with are clear, confident, and calm in the job they do day to day. I would also say my job is to help PM the PM experience at GitHub, especially for the Copilot PMs. My background includes places like Google's core machine learning group and Facebook Reality Labs' Portal device, where I worked on a bunch of different AI products.

What I'm really excited and interested about, as part of our product operations practice, is that we're leveraging large language models, AI, any type of classifier, in our day-to-day practice, especially around automation and understanding the voice of the customer. I work with a centralized team (they couldn't be here, unfortunately) that does a lot of the key feedback parsing for our customer service, but I can talk about all the other places we end up getting feedback from too. Thanks for having me.

Chun:

Awesome. The first question is something I'm personally most interested in. We're still trying to figure out how VoC works, or product in general, and now we have all these AI products coming. My first question, probably for Erica and Chris the most: when you're developing AI functions at LinkedIn or on Copilot (by the way, we all use Copilot, love it), what are some interesting findings you've had analyzing VoC for AI products versus VoC in general?

Chris:

Well, I don't think there should be a difference between an AI product and a non-AI product. There's this weird hype around what an AI PM is, which I think longer term is not a real thing. It should just be a product manager. So I wouldn't distinguish there, but I do think the tools coming out now are interesting. Not only are we talking about the idea of chatting with a body of knowledge: we have a repo that is full of all the qualitative research we've done, and people can ask it questions. We also use a tool called Unwrap to do newsletters; they pull together different topics. That idea of using LLMs to do topic creation, management, and monitoring is really interesting.

But longer term, I think we start to get into this realm where we want to have almost like tripwires. Tripwires are this idea that we want to monitor something, and when it goes beyond a threshold, or when a data set we're monitoring goes into some type of exceptional variation, we actually want to take action. We want to understand what's going on. The really interesting thing about large language models is that you can start to monitor things beyond just a Google Alert for a thousand keywords that then gives you a thousand emails.

Especially when it comes to the voice of the customer, of the people engaging with you. We get voice of customer through customer support. We get it through something called a CAB, a customer advisory board. We get it through our sales and revenue teams. And all of those places have different ways to escalate, prioritize, or filter the things the product team gets. But there's too much for the product team to actually manage. So there's something interesting in looking outside all those channels: even just giving the product manager the ability to understand more about the environment and the changing landscape over time means they can make better strategic decisions, especially in a very volatile environment.
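A tripwire like the one Chris describes needs very little machinery. Below is a minimal sketch, assuming a hypothetical classify_topic stub in place of whatever LLM classifier you actually use: it tracks feedback volume per topic per week and fires when a topic jumps past its rolling mean by a configurable number of standard deviations.

```python
from collections import defaultdict
from statistics import mean, stdev

def classify_topic(feedback_text: str) -> str:
    """Hypothetical stub: an LLM call that maps free-text feedback
    to one of a managed set of topic labels."""
    raise NotImplementedError

class Tripwire:
    """Flags topics whose weekly feedback volume is an outlier vs. history."""

    def __init__(self, sigmas: float = 3.0, min_history: int = 4):
        self.history = defaultdict(list)  # topic -> list of weekly counts
        self.sigmas = sigmas
        self.min_history = min_history

    def record_week(self, topic_counts: dict) -> list:
        """Record one week of counts (include zeros for tracked topics);
        return the topics showing exceptional variation this week."""
        tripped = []
        for topic, count in topic_counts.items():
            past = self.history[topic]
            if len(past) >= self.min_history:
                mu, sd = mean(past), stdev(past)
                if sd > 0 and (count - mu) / sd > self.sigmas:
                    tripped.append(topic)
            past.append(count)
        return tripped
```

The weekly counts would come from running classify_topic over the raw feedback; the same pattern works for CAB notes, sales escalations, or support tickets, which is what makes it broader than a pile of keyword alerts.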

Erica:

Yeah, and I'll add that I do think when you launch Gen AI products, there are different VoC themes that come up, so I differ a little bit there. I'd say voice of the customer is really about going upstream as far as you can to prevent negative feedback. So when you're launching a net new Gen AI product, you have to think about your product principles for Gen AI, and they're slightly different from your product principles for non-Gen AI. One is a product principle around responsible AI: it shouldn't be biased, it shouldn't promote self-harm, it shouldn't do anything illegal. If you're using a recruiter copilot and you say, "Find me a computer scientist who is a US citizen," and it gives you candidates, that's illegal; you're not supposed to hire someone based on citizenship. So you have to think about responsible AI as a product principle.

The second product principle that I think is unique to Gen AI is that it has to have a productivity value add. It could be saving time, but it could also be reducing human error or improving creativity. Another product principle that's helpful is a delight factor. A delight factor is not always relevant to a Gen AI product, but it could be taking something that wasn't possible before and making it possible. Or it could be an entertainment factor: art generated by Gen AI isn't really saving the artist time, and some people argue it's taking over artists' jobs, but it's entertaining to look at. So product principles, before you even write the spec or decide what model you're going to build on top of, are very, very helpful.

The second learning is that after you launch the Gen AI product and start to get feedback, the feedback themes tend to be very similar across launches, though the details will be specific to yours. One theme will be around inaccuracy, because it hallucinates and makes things up, or forgets important keywords you gave it. You'll also see feedback around low relevancy: it's not personalized to the user or the context, and it's too generic to be helpful. Those are common things you'll see over and over again once you launch your Gen AI product to external customers.

Chun:

Awesome. Erica touched a little bit on the whole validation and trust piece, and my second question, to Mia and Kenji, follows from that. We're using AI to analyze unstructured data, structured data, support tickets, incidents. Those are all high-stakes data that drive high-stakes decisions. Using AI in your analysis, how do you evaluate it? How do you validate accuracy, and how do you set the right expectations to trust the insights?

Mia:

I think about context. To address the concern with context and reliability in AI solutions: number one, probably the most obvious answer, is providing that context to your AI models. Beyond prompt engineering, you can leverage techniques like retrieval-augmented generation (RAG) and fine-tuning to make your model more capable. Second, combine your AI solutions with a degree of human intervention and manual effort as necessary: think about predefining your unique themes and categories, labeling some of your training data set, and verifying the results and accuracy. Thirdly, make sure there is sufficient transparency into your data and your solutions, and have an understanding of their potential limitations and biases. And finally, make sure there's flexibility and adaptability in your AI solution, so you can augment or modify your techniques as necessary and keep them customized to your business decision making.
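Mia's first two points, grounding the model in context and seeding it with human-labeled examples, can be combined in one small pattern: retrieve your own verified labels as few-shot context. A minimal sketch follows, where embed and complete are hypothetical stubs for whatever embedding model and LLM endpoint you use; the labels are illustrative.

```python
import math

def embed(text: str) -> list:
    """Hypothetical stub: call your embedding model here."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Hypothetical stub: call your LLM endpoint here."""
    raise NotImplementedError

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Team-verified examples for predefined themes (illustrative).
LABELED = [
    ("Stream crashes when I rewind a live game", "reliability"),
    ("Why was I billed twice this month?", "billing"),
]

def build_index(labeled):
    """Embed the labeled examples once, up front."""
    return [(embed(text), text, label) for text, label in labeled]

def classify(feedback: str, index, k: int = 3) -> str:
    """Retrieve the k most similar labeled examples and classify the new
    feedback with that retrieved context (a small RAG prompt)."""
    v = embed(feedback)
    nearest = sorted(index, key=lambda e: cosine(v, e[0]), reverse=True)[:k]
    shots = "\n".join(f'"{text}" -> {label}' for _, text, label in nearest)
    prompt = (
        "Classify the feedback into one of our predefined themes.\n"
        f"Labeled examples:\n{shots}\n"
        f'Feedback: "{feedback}"\nTheme:'
    )
    return complete(prompt).strip()
```

Verifying a sample of the outputs against the human labels then gives a concrete accuracy number, which is the transparency Mia's third point asks for.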

Kenji:

Mine's a little bit different of an answer, on a smaller scale. Just to give you an idea, my support team is 26 folks, and I've got one person focused on data land. (Stand up. I told you I'd get to embarrass you: hardest worker right here.) For me, this is where the human element is important. There are things that make complete sense to automate, like parsing large amounts of data and coming up with trends, and this is where I really rely on Lemuel to be that last checkpoint. It's not sophisticated, but it's enhancing him to do his job. I really rely on the feedback of my team to tell me, "This doesn't make sense, let's dig in deeper." So I keep it pretty simple.

Chun:

Yeah, that's one interesting thing I've noticed when I talk with a lot of C-level folks. People will say, "Oh yeah, AI can do this," or, when they're using the tech, "We can just stuff large-scale data into it and believe whatever the AI tells us." So I feel like setting the right expectations, and training a team on how to use AI, is extremely hard. I'm curious, from your perspective, what have been some useful tips or practices on the training side?

Chris:

Oh, for training a full model and everything?

Chun:

Yeah, like training humans.

Chris:

Ah, training humans, yeah. That's actually much harder.

There's a lot of intuitive, tacit knowledge that people bring into their job roles, which is really important. Really great customer reps understand how to show the right levels of empathy and compassion; they understand how to decode what is actually going on with that person. And it's really hard for machines to get that; they're very surface level in what they do. Change inside organizations is also very hard a lot of the time, because people try to do it at very large scale at first, a really big-bang change.

So when using these types of systems, one of the things we started to do at GitHub, just with our operations team, which is 20-something people, is a monthly session we call an AI playground, where we focus on a particular type of technology just to get used to the terminology. I don't need them to be technical. I may mention something like RAG, but I don't actually need them to know how it really works; I just want them to be familiar with the terminology. Then we look at a problem we're dealing with. The last one we did was around meeting summarization. There are a lot of tools that do an okay job of it right now, but part of the reason they don't do a great job is probably that they don't have enough context. So we get people used to using not only things like the equivalent of ChatGPT that we have internally, but also raw model access, and doing things like prompt engineering: taking a transcript, the agenda, the goals of the meeting, the actual document we're talking about, combining all of those, and seeing what level of benefit we get out of that. It ends up giving people a better idea of what to expect.

And actually, it was very hard. At first, I would say three quarters of the people couldn't get a good summary out of it. So if anything, it's about resetting those expectations, because the executives who think we can do everything with this technology are usually also setting deadlines that don't make any sense. I would argue for slow adoption of this change, because the fast versions of it will just break, not work, and ruin all the expectations around the technology. That's what I'd argue.
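The summarization exercise Chris describes comes down to stacking context before asking for the summary. Here is a minimal sketch of that prompt-engineering step; the template wording is illustrative, not GitHub's.

```python
def build_summary_prompt(transcript: str, agenda: str = "",
                         goals: str = "", document: str = "") -> str:
    """Combine every piece of meeting context available; empty sections
    are simply omitted."""
    sections = [
        ("Agenda", agenda),
        ("Meeting goals", goals),
        ("Document under discussion", document),
        ("Transcript", transcript),
    ]
    body = "\n\n".join(f"{name}:\n{text}" for name, text in sections if text)
    return (
        "Summarize the meeting below. Tie each takeaway to an agenda item "
        "and flag any stated goal that was not addressed.\n\n" + body
    )

# With only the transcript the summary tends to be generic; each extra
# section gives the model more to ground the summary in.
print(build_summary_prompt("...transcript...", agenda="1. Q3 roadmap review"))
```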

Kenji:

For my team, it's really about bringing them along for the ride and demonstrating value. For support teams, AI is really scary. I think every AI company in here is probably looking at us like, "Oh, you guys are the perfect target market; we can automate everything you do." So with my group, it's small, feature by feature: let's use summarize, let's do automatic tagging, and then we build off of that. We also have a really tight feedback loop with product, and they rely on us. That's been really helpful: we're not forcing this down your throat, we really value your feedback, and we're going to do it step by step together. The whole change management piece has been really huge with AI.

Erica:

Yes.

Chun:

I'm also looking at Erica, because she is amazing: building a whole team to understand AI while also deploying AI products. I guess my last question is this: all of you, your customer base is huge, right?

Chris:

it could be bigger.

Chun:

It's pretty big.

Your customers are pretty loud, too.

Chris:

We're trying to get to a billion-minute service. Yeah, fair, fair.

Chun:

Awesome. Beyond trends and clustered themes, all these practices AI can already provide, what are some other things you're hoping to get out of AI for VoC, now or in the future?

Erica:

I'm excited for when AI can look at an image and tell you what that image is about. Right now it's really great for text, but imagine if a customer sent you an image and it could say, "Yes, there's a bug here; I've already filed the Jira ticket for you." There are so many ways other than text that we get information, and I'm really excited to see images and videos start getting read by Gen AI for insights: this chart, this Tableau dashboard. We're not there yet, but I think it will also truly transform how we communicate with a customer. Do they have to write in, or can they just share a screen recording of what happened? So I'm excited for that, if it comes.

Mia:

Definitely, I want to plus-one that: support for multimodal feedback formats, and multilingual as well. You already see interactive transcripts in Apple Podcasts; even though that's not VoC, think about the capabilities of the technologies and tools that can accomplish that. And outside of the advanced chatbot, we can think about leaning into operations and reliability engineering with AI: AIOps, which is near and dear to me, for automated incident response and predictive analytics. To take proactive measures against potential issues before they occur, you can use AI algorithms to analyze the data and spot potential incidents, and anomaly detection, which I know Monterey AI has as a feature. I really think you guys have thought of a lot and checked a lot of boxes: you're not just looking at customer reviews, you're also looking at metrics from all the different perspectives, aggregating and combining them to correlate the relationships between them.
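The anomaly detection Mia mentions doesn't have to start with a model. A rolling z-score over any metric (playback errors, ticket volume, one-star reviews per day) catches the obvious spikes and makes a sensible baseline to judge fancier detectors against. A minimal sketch:

```python
from collections import deque
from statistics import mean, stdev

def anomalies(values, window: int = 30, sigmas: float = 3.0):
    """Yield (index, value, z-score) for points that deviate from the
    rolling window by more than `sigmas` standard deviations."""
    recent = deque(maxlen=window)
    for i, v in enumerate(values):
        if len(recent) == window:
            mu, sd = mean(recent), stdev(recent)
            if sd > 0 and abs(v - mu) / sd > sigmas:
                yield i, v, (v - mu) / sd
        recent.append(v)

# e.g., daily counts of "playback error" feedback; the final spike trips it
daily = [12, 9, 11, 10, 13, 8, 10, 11, 9, 12] * 3 + [41]
for day, count, z in anomalies(daily):
    print(f"day {day}: {count} reports (z = {z:.1f})")
```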

Kenji:

Yeah, I'm excited about how roles are going to evolve on my team, with companies like Monterey AI and others really allowing us to do more with less. I know that sounds cool from a business perspective, but I think I'm most excited about how this is actually going to help the customer. Because right now (sorry, I'm going to put you on the spot again) we're always working backwards. We're collecting ticket data, customer data, Gong calls, and we're processing it, and even with AI's help it's not fast enough. So I'm really excited about getting to the point where it's real time and the customers can feel that. That'll be a nice day when it happens.

Chris:

One other thing I would include: the reason I joined GitHub is not just the idea of changing the way development happens, but changing the way all types of product building happen. I imagine a future where, yes, we can write something like a specification, we can work with a cross-functional team, and a lot of the boilerplate gets taken care of automatically. But this translation from anything to anything means that when there's some type of trade-off to make in the code, it can be put into more regular language for non-technical people to decide. And when we start to include voice of customer, it means we're always checking in with what's going on and including the actual customer, maybe as a virtual entity, but one that sits inside that cross-functional team in a much bigger way. I think that's the future: a lot of this is about decision-making capabilities and how we actually look at options. It's going to be much easier to include that voice inside the process, rather than it being something a PM rifles through when they need to validate a product decision they're trying to make.

Chun:

In five or ten years, what is one thing about VoC that you don't want an AI to do? Not that AI can't do it, but that you don't want an AI to do.

Chris:

One thing is that machines are very good at figuring out rules and creating a way to calculate something, while human beings, let's say, are really good at breaking the rules and being lazy. From that perspective, I think people should be able to understand the nuance of a situation and decide to break the rules for someone. This may be less about customer support intake and more about customer support handling an issue. I still want human beings to be able to break the rules for that person in some way, because it's that connection that is really valuable in those circumstances. That's what I don't want to be taken over.

Kenji:

Totally, huge plus-one to that, selfishly, as somebody who works in support. I'd like to see us here in five to ten years. I've been to a lot of support conferences, and maybe a year ago we were really worried: oh my god, AI is coming, it's going to take all of our jobs. But I think we know now that good human support is going to be even more valuable. When a customer actually needs an interaction with us, that's when we step up. I hope that doesn't change.

Erica:

I think mine is a little bit different. Speaking as a mother, the first thought that came to my head is that I don't want AI teaching my children about the world. I want them to experience the world, and even though the world today is biased, I really want to make sure they don't think that that's how it's supposed to be forever. So mine is more from a mother of two toddlers right now. That's what I don't want AI to do in the future.

Chris:

It won't be AI; it will be virtual reality that does that.

Erica:

Whatever it is, yes.

Mia:

Oh, sorry. I'm not sure there's anything within VoC that I think we shouldn't use AI for. It's really good to start inspiring teams and engineers to think about all the daily tasks they perform in traditional ways. Just think about setting up alerts: if an AIOps team comes to your team and says, "How do we use AI to set up alerts?", you may say, "We already have alerts; we don't need that." But I think it's always good to start thinking about some of the more advanced techniques to increase the effectiveness and efficiency of the ways you're already conducting your daily tasks. You may be able to set up your alert configurations the simple, traditional way, but that additional level of consideration and those more advanced techniques always bring more options. It's good to lean into that and feel comfortable with it.

Chun:

I love it. Selfishly, I've been thinking about the diversity of VoC a lot. The most beautiful thing about voice of the customer is that you all are the core of all the customer voices across the world, across your product lines. And AI, to be honest, is trained to an extent by a lot of engineers and researchers in Silicon Valley. So as people start adopting AI, they're learning from, and being affected and impacted by, how AI voices opinions and ideas. The most beautiful thing about humanity is our diversity of voices, and selfishly, that's what I want to keep.

Awesome. I will open this up for questions.

Q&A:

I have a question about data privacy. When you upload voice of the user to an AI, to answer your earlier question, something I don't want to happen is my competitor also asking Monterey, "What are some product features people are asking for?" and exposing our product gaps.

So how do you think about building privacy into your AI product? 

Chris:

One of the ways we think about this: the large language models we base a lot of things on, like GPTs, are usually, and should be, based on public data. What I would argue, though, is that when you start doing that type of thing, you're probably not doing much fine-tuning, which is what would customize the model with your data before that model gets distributed to a bunch of people. There are some groups that need highly specific fine-tuned models rather than just working at the prompt level, and for those people there are techniques now, from a federated learning standpoint, that allow for either fully privatized fine-tuned models or separate enclaves of data that are isolated from each other.

This is something most cloud services have learned how to do over time. Everybody trusts AWS, even though competitors are probably on the same machine at some point doing the same thing. It's a technique and practice thing from an ops perspective, and there's a bit of R&D that still needs to happen to do it really effectively. But most of the things you're doing today are probably not fine-tuned or fully trained models built on your data that then get given to someone else. It's usually at the prompt and context level, so it's more about what happens with the information at that point. Is it collected to then retrain? In our case, we have internal tools that specifically do not capture any of our internal data, and we have policies around PII, customer data, and personal data that gets pushed into these tools. You shouldn't have to read the TOS; when you're using these systems, especially at an enterprise level, the provider should tell you what they're doing with that data. Some do a better job than others. It's something that needs to be figured out a bit more, but I usually don't think it's a problem right now from that perspective.

Erica:

Yeah, I'll just add: hopefully it's part of your principles, right? Responsible AI, starting from the underlying data you use to train the model. Is that data you can use, or not? I can't really answer that, because it's so dependent on your data regulations and your regional laws. But it should be part of everything you think of when you're building. Put it in the product spec, put it in the RFC your engineering team has, and then continuously monitor it, because nothing is foolproof. There is no such thing as 100 percent assurance, but you should be continually monitoring your own product while it's live: that it's not being hijacked, that the information's not being leaked. You have to continuously monitor AI; it's not something that will 100 percent never have a privacy issue. It's just like our existing products: there could be a bug that's a privacy leak, somebody could hack into the system. It's not fundamentally different, but it's maybe a different process for how you regulate it and make sure you're abiding by your regulations.

Chun:

Yeah, from our side, at the end of the day, it's about trust. Sometimes I joke, look at all the amazing big companies using our tool. We spend a lot of time just telling you: these are all the models we're using, these are all the license terms, and we're doing private cloud, PII handling, all the best practices out there. But at the end of the day, it's about communicating trust and transparency with customers. And if a company wants to, say, encrypt their customer emails, we can do that; we'll tell you, "Hey, without customer email, we might not get this function." We want you to feel really good about sending us data.

Chris:

Also, by the way, open source models are only about six months to a year behind most of the state of the art, and they're cheaper: they're more optimized or quantized, a bunch of different things like that. So if you want, you can use these open source models, and that means you control what happens to the data. Most services will eventually allow open source or other models. If you look at Perplexity, they offer a bunch of different options, including their own Llama that they trained.

Q&A:

One thing I've experienced rolling out Monterey internally is users' expectations for an AI product, expectations for a product that doesn't even exist yet; it's wildly crazy. Some days people are like, "Oh my God, this is amazing," and some days they're like, "Oh, this isn't accurate," or, "Why didn't it predict this three months ago?" Do you have any tips on how to not become an alcoholic while you're trying to drive adoption of these tools and deal with the ups and downs of expectations?

Erica:

I can definitely relate. I would note that when you have a free member versus a paying customer, the paying customer's expectations are much higher, especially if they had a legacy experience and now AI is taking over that flow. So none of it should be surprising or shocking, but think about the bigger picture. A lot of AI products have a disclaimer, right? Maybe make that a little more well known, or think about how you can iterate on your feedback collection process to see what's happening underneath. Is there a new feature or new functionality that was rolled out? I'd say spend some time in the data on why it is the way it is. Is it related to news articles, because it's all over the news, or because you just did a big PR splash? There might be some correlation there on why the sentiment swings so much.

Chris:

I think it's our sales team's fault. In all seriousness, though, we need to set the right expectations for these systems. There's a really great paper I reference all the time (I'm also teaching a Maven course on AI product design, if people want to take that) by Parasuraman about use, disuse, misuse, and abuse, which is basically about trust in automation. The problem is that we set expectations incredibly high. We over-personify these systems. We jump on the AI hype bandwagon as a marketing moniker. We put that little sparkly icon all over our interfaces, and what that does is raise expectations so high that we're never going to be able to meet them. It's fine to say something is AI-powered, but what we should really talk about is the problem that's actually being solved, the person's problem. That could be the job to be done, it could be a straight-up problem and solution, it could be any of those things, but that's what we should be focusing on, because that's why customers buy: the value they get out of it in the first place. So I think it's our own fault, a little bit, but there are ways to repair that: focus on the problem you're solving, focus on the value you're giving.

Erica:

Yeah, and I know we all talk about wanting a quick turnaround on this feedback, but AI is not something you can just fix, right? If you have feedback that it's inaccurate, you probably have to retrain the whole model or adjust the prompt instructions, and you might make it worse. So sometimes you have to take a step back and realize that product development for AI is totally different from traditional product development. You need to tease out the most important thing to focus on, what your baseline is today, and whether you can improve it. It might not be possible to ever get to Gen AI with 0 percent hallucination; maybe less than 5 percent is what we want to achieve. So be realistic that the product development life cycle is totally different. R&D says, "Yeah, we'll fix that." They might make it worse, and you have to monitor whether they made it worse or not.
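Erica's "establish a baseline, then monitor whether the fix made it worse" maps directly to a regression-style eval over a fixed test set. A minimal sketch, where the judge stub is hypothetical (in practice it might be human review or an LLM grader):

```python
def judge(response: str, reference: str) -> bool:
    """Hypothetical stub: True if the response asserts something
    not supported by the reference (a hallucination)."""
    raise NotImplementedError

def hallucination_rate(responses, references) -> float:
    """Fraction of responses flagged as hallucinations on a fixed test set."""
    flags = [judge(r, ref) for r, ref in zip(responses, references)]
    return sum(flags) / len(flags)

def regressed(before: float, after: float, tolerance: float = 0.01) -> bool:
    """A 'fix' counts as a regression if the rate worsened beyond noise."""
    return after > before + tolerance

# e.g., a prompt change moves the rate from 7% to 11% on the same test set:
print(regressed(before=0.07, after=0.11))  # True -> roll it back
```

Running the same fixed test set before and after every model or prompt change is what turns "R&D says they fixed it" into something you can actually verify.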

Chun:

Awesome. Well, thank you, everyone, for the AI panel. I've learned a lot.
