Intelligent Data Exploration

Interview with Shreshth Sharma: People, Processes and Technology

Caitlin Bigsby Season 4 Episode 14

On our latest podcast episode, Caitlin Bigsby, Virtualitics' Head of Product Marketing, sits down with Shreshth Sharma, Senior Director of Strategy and Operations at Twilio, to talk about his journey into the world of data and analytics from management consulting. Drawn by the intersection of business, technology, and data analytics, Shreshth has built a great reputation in analytics.

[00:18] Caitlin: This is Intelligent Exploration, a Virtualitics podcast. Welcome to the Intelligent Exploration Podcast. My name is Caitlin Bigsby. I'm head of product marketing. And I'm joined today by Shreshth Sharma. Shreshth, can you tell me a little bit about yourself, the kind of work you do?

[00:34] Shreshth: All right, well, thank you for having me, Caitlin. Shreshth Sharma. I am currently with Twilio, a Senior Director on the Strategy and Operations team. I also lead the enterprise data team here. Prior to this, I was in management consulting for a long time. I worked across different industries and different functions, doing projects across strategy, operations, and data. But over the past five to seven years, I have focused a lot on data analytics: how I can help companies become more data driven, drive decisions through data, and set up the right organization and right technology to enable the best use of data in companies.

[01:19] Caitlin: Great. What I find so interesting about your background is that you didn't start out with the intent to end up in data and analytics. You didn't start out in tech. You started out kind of in that management consulting and that business oriented approach. So what was it that led you to the data side of things? What did you observe? What drew you?

[01:42] Shreshth: Yeah, I think there was an inflection point around 2013, 2014 where I really became interested. A lot of management consulting is about solving problems, and especially in the early stages of your career, the client part is there, the relationship-building part is there, the influence part is there, but a major part of the job is analytics. I remember back in very early days, I had done a pricing and promo project using a Microsoft Access database, and at that time there was SAS. SAS is still around, but it was the platform we were using at the time, and that had piqued my interest, but that wasn't the inflection point. I think right around 2012, 2013, 2014, data science started to become a thing, and there were online courses coming up. I still remember the first course that I ever saw was a Coursera course on R, and it was so rudimentary. It wasn't the best of courses if you look at the courses that are offered today, but I did it out of interest, and I went, this is very interesting. Then, through a series of events, I somehow landed on this project where we were building a mobile data analytics platform. And that was the inflection point, around 2015, 2016, where I said, okay, this seems to be a place which brings together the three pieces that I like, which are business, technology, and data analytics. And that's what drew me in. So it wasn't a specific conscious decision, but over the years, through projects, through observing that this is something that I like doing, I gravitated towards it.

[03:30] Caitlin: Right. That makes a lot of sense. And I think what was interesting about the introduction of data and analytics into business decision making is that, in a way, it sort of democratized management decisions. Whereas before, people were being driven by their gut and experience. It was like, well, I have the experience, therefore my gut is right. Suddenly bringing in data starts to say, well actually, maybe your gut isn't. What do you think?

[03:58] Shreshth: Yeah, that is certainly true. Although in reality, does it actually end up helping make a decision against the gut? That still doesn't always happen. But you're right, and I'm bringing management consulting back in again. If you think about the business of management consulting, right, the fact that these firms were speaking with multiple businesses and working with multiple businesses meant they had a knowledge set which was just not available elsewhere, because information was scarce. Right? And that was the key value driver. But over the 90s it started to change, and from the mid-2000s onwards it has completely changed. There are a few fundamental shifts, right? No one has a monopoly on information, and information is not the driver of value now. It's the insight that drives the value. And I read a quote yesterday somewhere that said, in times of change, it's the one who is faster who succeeds, not the one who's bigger.

[05:10] Caitlin: Right?

[05:11] Shreshth: So some of those things have emerged. And you're right. Earlier, it would have been really hard to get management to change their viewpoint, or the leadership to change their direction, if it was against their gut. But today there is a wealth of data, and the other thing is that things are evolving and changing so rapidly. One of the changes in tone that I now see is that leadership and executives really want to know, what does the data say? What are the insights, what are the trends? Because they also don't want to just purely go by their gut and end up making the wrong decision. So they might still decide by their gut, but they want to be informed, and they want to have the backup and support of knowing there is some objectivity to the decision.

[06:00] Caitlin: One of the things you've just alluded to now, and you mentioned it in an article that you wrote that I read before I met you, is that people are still leading with their gut. But the question is how much will they use the data to validate their decision to move forward and how much are they sort of looking at it and throwing it away or shaping it? Do you think there has been a shift in how people are making decisions or are leaders still primarily gut driven?

[06:29] Shreshth: I think there's a shift. It's obviously less than one would hope for. And the reasons for that are less rooted in the willingness of leaders to use data or trust data. I think the reasons are more rooted in organization dynamics and people dynamics. Even though someone might recognize that the right decision is A, the organization dynamics and the competitive dynamics might put one in a position to take decision B instead of A, even though they might realize that A is still the best data-driven decision. So I think we are still farther away from it. I don't think we'll ever get to a world where we are making purely objective data-driven decisions, because organizations are run by people, and people have their biases, their agendas and motives. So I think it's that softer part, the organization dynamics, which makes this difficult. But there's certainly been a shift. A decade ago, people weren't focusing on data, on data being something that has to be central to the organization. So at least that vocabulary is now there, that data is central to our organization. Whether it truly ends up being treated as central is a different question. But at least it's a journey. We've got to a place where everyone recognizes that this is important, right? And the organizations and leaders who actually do make it important will be able to make better decisions and drive better outcomes in the market.

[08:10] Caitlin: Right. What position do you think this puts the data and analytics teams in, knowing that they're trying to work within a world of objectivity but the surrounding organization isn't always ready for what they have to say?

[08:28] Shreshth: I think it's important for the data teams to understand the context that they're operating in. If you think about any team, whether it's a sales team or a marketing team, when a salesperson goes into a client meeting, right, they cannot just pitch the same way to every client. They have to contextualize it to that client and serve their needs. Right? Data teams, inherently, are in an internal service business. Most of the time they are an internal service team. So your internal customers are your executives, your partner teams, your other stakeholders, and you have to really contextualize where they are coming from, what their objectives are, and what success means to them, and then shape your analysis and your activities that way. Now, that might sometimes mean that you do an analysis which is less important than what you think, objectively, is the most important analysis. But if the most important analysis is not going to help your business stakeholder move the needle, then it doesn't matter, right? For example, in my role with Sony, a lot of our focus was on packaging and pricing. Now, in a certain context, the product selection and packaging might not be the most important thing to the client. If you continue to focus on that because objectively it's the better answer, instead of focusing on pricing, then of course you will not get traction, because maybe the client has a certain preference for the selection of products that they want, and that's their choice. What you should try to optimize is the pricing, not their inventory management. Right? So you always have to contextualize it. And the data teams who are effective and good partners are the ones who contextualize it and understand that, as you were saying, our job is objective, but the context we operate in is very subjective. So there is some subjectivity that you need to bring in, right?

[10:30] Caitlin: That makes sense. What I do find interesting, when we talk about data and AI and data-driven decisions: with analyses and reports and insights that some business leader then has to act on, there's a degree of, do I take it? Do I not take it? What I observe with people when we introduce AI and an AI recommendation is that people seem, in some ways, a lot more willing to accept that the AI is right. Is that something that you see? Maybe this is just me looking at general pop culture AI. What is your take on that?

[11:10] Shreshth: My personal experience has been a little different.

[11:13] Caitlin: Okay.

[11:15] Shreshth: So there are shades of what you're saying. What I have seen is there is a lot of excitement about, hey, can we use this? Can I somehow use it to create an advantage for the company? But there's a fair dose of skepticism about whether it actually functions correctly.

[11:37] Caitlin: Right.

[11:38] Shreshth: Is it reliable? And even if it's reliable right now, if I change my processes and everything, would it still be reliable in six months? I think we are at a point where generative AI has suddenly made all of this very tangible for people, right? All of this was happening; it's not like it hasn't been happening for a decade. There are companies who have been working in this space. But suddenly, in the last two, three months, it has become very tangible, and people are thinking, hey, how can I make use of it? And I feel like there is a lot of skepticism when it comes to using it for actual action, actually making different decisions or investing differently. Leaders still want to validate it through a traditional process.

[12:36] Caitlin: So maybe it's the difference between when the AI is almost invisible and making recommendations, and people kind of take it, versus when we explicitly know that AI has generated a recommendation for us. That's when our skepticism pops up.

[12:50] Shreshth: I think that's a very nice way to put it. Many of these decision makers are probably using AI day to day in their lives in fully baked products, whether they realize it or not. Right. In many of the apps that we use, many of the recommendation engines, ad serving, all of that, there is AI running there. Then there is a component of what is at stake, and a component of which product or industry you're operating in.

[13:23] Caitlin: Right.

[13:23] Shreshth: If you're in a B2C company where you're serving customers and your competitors are using this, then it's much easier to do it. Whereas if it's about how you're making investing decisions or budget allocations, then probably there will be a healthy dose of skepticism, or if you're trying to forecast numbers to show the street, there will be skepticism, because there's also risk-reward. Right? If someone serves a wrong ad to Shreshth Sharma, fine, it will impact the ROI, but it's not going to have a major negative consequence. I think there's that balance to it. So I feel like there is a lot more appetite in the B2C space when you're trying to serve customers, because you're trying to stay on the cutting edge, whereas when you are operating in B2B spaces, or you're trying to apply these to internal company operations, there's more hesitation. And it's fairly evident when I speak to some of my colleagues at some of the big tech companies who, from a consumer perspective, from the outside, are at the cutting edge and leading edge of deploying AI, and you speak of their internal data and internal processes, they are in shambles. Right. Forget AI, it's hard to find the correct data. So there is that duality to it, I feel.

[14:48] Caitlin: Yes, I'm chuckling with recognition. But I think there's an interesting distinction between what you outwardly do with AI, the chatbots on your customer support page or the recommendations, versus the AI built to really change how a company does business internally: optimizing the supply chain, making recommendations about which customers to target because they've been identified as flight risks, or preventative maintenance. There's a whole bunch of ways to go. But it's harder, because you're not just talking about technology, you're talking about changing how you do business, and the implications. The risk-reward is really high, because if it isn't, there's not much point doing it, right? AI has to have a bang to be worth doing. So what do you see as that decision making? How does that look?

[15:47] Shreshth: Yeah, I think it's generally the divide we were talking about, external versus internal. Right? The risk-reward on the external side is really high, and not doing it has really high consequences. In today's world, in two or three years you might become, not completely redundant, but an also-ran company if you're not on the leading edge. Whereas in certain areas, and that's where core competence comes in, right? If I were Amazon, then investing in my supply chain automation and using AI there for operational cost optimization is super essential. Right. But if I were a SaaS company whose operations are primarily software development, do I really want to invest in some DevOps AI thing and automate some of that? Maybe I do, but it's probably not the first place I would. I would probably do it for my end customer, as you're saying, like in customer support I might deploy the chatbot. Because even though these technologies exist, they are still nascent, and a huge amount of organizational bandwidth goes into adopting them, because no one is doing native AI today. No reasonably sized company is natively AI. They haven't built their core products or operations or anything to be using AI. They're still built on traditional platforms and working in traditional processes. So it's a change that we have to make, or something new we have to introduce. And whenever you do that, there is a loss of efficiency, and more investment of resources is needed. So companies, I think, choose where they want to do it and do it just there. That's how you get this interesting thing I was pointing at: your customer might be saying, oh my God, this company is so advanced, their chatbot is so great, it's able to converse with me and give me the right answers and solve my problem so quickly. But for management reporting, there are two poor analysts who are punching away numbers in Excel to create a PowerPoint presentation.

[17:57] Caitlin: Right.

[17:57] Shreshth: There is zero AI in any internal reporting. Forget AI, there is zero automation. The process is writing SQL queries and then manually collecting the data and putting it into a PowerPoint. There will be, though. People will invest in the places where they feel they will get better revenue or margin outcomes.
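[Editor's note: a minimal sketch of the automation Shreshth says is missing from internal reporting, capturing the recurring SQL query as code that can run on a schedule instead of analysts re-punching numbers by hand. The table and query here are illustrative, not from any system discussed in the episode.]

```python
# A minimal, illustrative sketch: run the recurring "management reporting"
# query once as code, instead of hand-collecting numbers into PowerPoint.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES ('EMEA', 120.0), ('EMEA', 80.0), ('AMER', 200.0);
""")

# The query an analyst would otherwise rerun manually every reporting cycle.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()

for region, revenue in rows:
    # Feed this into slides or email instead of hand-copying cells.
    print(f"{region}: {revenue:,.2f}")
```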

[18:17] Caitlin: Yeah, I think you're right. One of the things I've observed as we look at different use cases for AIOps is that in some ways it's the older kinds of organizations, the supply chain and manufacturing places, or banking, where banking is really cutting edge with AI and reporting, of course, while also having a very traditional structure. I see the applications of AI in kind of old-school, person-heavy processes. And those are the exact kinds of organizations that maybe don't feel like they're ready for AI, even though AI could have the biggest impact.

[19:00] Shreshth: Right. It's a people challenge, because the older organizations get, the stronger their processes, the more set-in-stone their ways of doing things, and it's hard to change that. The traditional larger corporations will actually face a challenge in making some of these transitions. That's why in studies, and you can find the numbers, I think somewhere from 70% to 98% of digital transformations fail. And they fail because it's hard to get people to do things in a new way, because inherently people are worried about their jobs. Unless there is a path of low resistance where they are upskilled and reskilled into doing things in a new way, it won't happen. And then you had mentioned the human-machine topic. Over the next five years it will begin to change how companies operate. Until now, we always had a unidirectional interaction with our machines, regardless of how smart they were. We told them what to do, and they did exactly that and gave results back to us. Now, with generative AI, it's a two-way process. That line has blurred. Right. And I foresee that in a few years we'll be at a place where some of these technologies will be taking up a lot of the manual tasks that we do. And that has always happened. Even in highly skilled areas like surgery, surgical robots are now helping. Right? And the thing is, those doctors can then actually focus either on research or on the really complicated cases where a lot of human judgment is needed. We've always done that. When calculators came, there was probably someone whose job was manually maintaining the books. Right. And it's not like that job became redundant; the nature of it changed. As for the big corporations, I feel what will happen is the newer, smaller companies will start to take advantage of these technologies. For example, the early cases that we are seeing are on the marketing automation side, through generative AI on the copywriting side. We'll start to see places where operations management will be almost fully automated, supply chains or inventory keeping, the machines will just be doing it. And the newer businesses who adopt this will have a massive cost advantage over the older businesses who are not using it. And not just a cost advantage, they will have quicker time to market. So the older businesses, the larger corporations, I think they will continue to struggle. And they will struggle because it's really hard to change people and processes at a large scale.

[22:08] Caitlin: Yeah, 100%. I'm always reminded of the stories from when they introduced bank machines, how bank tellers were out pouring honey on the keyboards, certain they were going to lose their jobs. And yet if we look at the numbers of bank branches, there are actually more bank branches today than there were then. It's the work the tellers are doing that has changed. They're not just pulling out money, they're helping, advising, upselling. So it's a mindset shift. So do you feel like your role as a former management consultant has positioned you to help advise your leadership on that bigger picture, or do you look at potential projects through a different lens?

[22:57] Shreshth: Yeah, I think one of the things that management consulting does is it gives you an appreciation of the complexities and the various angles involved. Right? So when advising on this, bringing together business, technology, data: anything is typically three things, right? It's people, process, and, you can call it technology, or the hard process, or if it's manufacturing it's a different thing, but people, process, and some sort of hard technology of doing things, and you have to bring them together and manage them. Consulting does a really good job of teaching you how to bring that together and how to surface the failure modes, the things that will go wrong and break down, and how to try to prevent those. So today, I would say the majority of my mindshare actually goes into the people and process part, not the technology part. Technology is relatively simple, in the sense that there are ample numbers of extremely smart, talented technology folks who are building these things. The technology will do its job, but it's the people and process things which will not let it fly. You can put something in place, but if no one uses it, even simple things like dashboards, right? You can build the nicest dashboard, the most insightful, where in three clicks you can get to the answer you want, but if no one gets into it, how will you get them to use it? In my jobs, I've seen places where some executives and some salespeople weren't using a dashboard because they just didn't know how to open it. So the first thing we did in those workshops was go around and bookmark the page for them, and pull the bookmark onto the Chrome bookmarks bar, because a lot of them just didn't know how to create a bookmark. What that meant was that in their Outlook, they had to go and search for the mail, find the link, and click on it, and that's just too much work, right? You could wonder, why is my usage not high? This is such a great dashboard. And the reason is they don't know how to bookmark it and they don't know how to get to it. So, to your question about management consulting: it has given me an appreciation that the people and process part will always be the biggest failure mode. So I focus a lot on that.

[25:40] Caitlin: Yeah, I think you're wise to. It's really remarkable how people get into their ruts of doing business. You can give them a great new tool and they can't move out of the rut into that tool.

[25:54] Shreshth: And it has always been so. There are some really very senior leaders who I have seen get on the podium, but then their assistant or someone from their team had to come up and hit F5 to get the PowerPoint into presentation mode, because they come from the era of transparency slides. So they never got to a place where they learned how. By the time PowerPoint came along, they were in a position where there was always someone to do it for them. So they never learned how to do it.

[26:29] Caitlin: Oh my gosh. Learned helplessness. See, that's why senior leaders should never have assistants. They should always have to do their own work. Keeps them fresh, that's what I like to say. But that ship has sailed. I mean, we're talking about people who can't bookmark a dashboard, and at the same time we're saying, well, if we make the AI transparent, business users will trust it. Clearly we have a lot of ground to cover to get business users to understand not just where to find the information, but how the recommendation got there and how to act on it. If you were to design an information process to familiarize business users with an AI app for their particular business, what would you focus on? What do you think is most important?

[27:25] Shreshth: Right. I think usability is a very important thing, and I'm actually quite excited by all the advancements that are happening there. In a lot of places, the reason it has been hard to get traction and buy-in is the multiple steps that one has to take. And when I say usability, right, I see a lot of dashboarding in a few years becoming very different in nature to what it is today. I hope we'll have tools where one can ask questions and get clean answers. The thing to focus on there would be to build trust, because there are some half-baked solutions already out there, and they don't work very well, and they very quickly break trust. If I remember right, Tableau did this thing a few years ago where, if you uploaded a dashboard to Tableau Server, there was this ask-Tableau feature where you could type in a question and it would try to do the job. It was a very cute attempt, but it could only handle something very basic. Right. And it's hard for it to understand, in the traditional setup, which metric we are actually asking for if it's not very clearly defined and doesn't match the label of the column, and all of that. But I see a world where that works much better, and there would be trust. Right. And trust, interestingly, is not a very hard thing to build. If that machine or that bot gives you an answer that, four out of five times, matches your gut, you'll be all in. You'll be like, yeah, this thing works. Right? Because you'll test it, and if it's giving you useful information, and information which logically makes sense, you start trusting it.
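[Editor's note: a toy illustration, not Tableau's actual implementation, of the failure mode Shreshth describes: early "ask a question" features matched question words against column labels, so any metric phrased differently went unrecognized. Column names and questions here are hypothetical.]

```python
# Toy sketch of naive natural-language metric lookup over a dashboard schema.
# If the question's wording doesn't match a column label, the tool is stuck.

COLUMNS = {"rev_amt": "Revenue", "new_cust_cnt": "New Customers"}

def naive_metric_lookup(question: str) -> str:
    q = question.lower()
    for col, label in COLUMNS.items():
        if label.lower() in q:
            return f"Matched '{label}' -> column {col}"
    return "No metric matched; cannot answer."

print(naive_metric_lookup("What was revenue last quarter?"))  # matches the label
print(naive_metric_lookup("How much did we book in sales?"))  # fails: 'sales' != 'Revenue'
```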

[29:19] Caitlin: Yeah.

[29:19] Shreshth: And actually, I think it will almost help people leapfrog that gap, because right now, ramping up on a new technology is manual learning. It's a steep learning curve. You have to manually learn the nuts and bolts of how to operate it. There's almost this big leapfrog that will happen, where all you need to do is behave like a normal human, and the machine will respond to you and act like a human being. So it would be like having an on-call analyst who is doing the job for you. Very specific, very tailored.

[29:56] Caitlin: Yeah, like somebody in your chat, in your Slack. Hey, which customers should I be focusing on today? And it just sends a list: well, we think you should call these three people.

[30:06] Shreshth: Yeah, it would be like a personal analytics assistant.
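[Editor's note: a minimal sketch of the "personal analytics assistant" idea: a bot that answers "which customers should I focus on today?" by ranking accounts with a hypothetical churn-risk model. All account names, scores, and the prioritization rule are illustrative assumptions, not a real integration.]

```python
# Illustrative sketch: rank accounts by expected value at risk
# (hypothetical churn risk x contract value) and reply with a short list.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    churn_risk: float    # 0.0 (safe) to 1.0 (likely to leave); assumed model output
    annual_value: float  # contract value in dollars

def top_accounts_to_call(accounts: list[Account], n: int = 3) -> str:
    ranked = sorted(accounts, key=lambda a: a.churn_risk * a.annual_value, reverse=True)
    lines = [f"{i + 1}. {a.name} (risk {a.churn_risk:.0%}, ${a.annual_value:,.0f}/yr)"
             for i, a in enumerate(ranked[:n])]
    return "We think you should call these accounts today:\n" + "\n".join(lines)

if __name__ == "__main__":
    book = [
        Account("Acme Corp", 0.72, 120_000),
        Account("Globex", 0.15, 400_000),
        Account("Initech", 0.55, 90_000),
        Account("Umbrella", 0.40, 250_000),
    ]
    print(top_accounts_to_call(book))  # in practice, a chat bot would post this
```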

[30:11] Caitlin: You said something to me when we spoke last time, and you sort of touched on it in this conversation too, about how AI has been developing quietly in the background, but what has really changed with a bang is our access to it, the general public's access to it, which of course ChatGPT just blew wide open. And I think it has proven that we are willing to interact with it when it's something that we can kind of talk to. I've been thinking about that for a while, that it's this accessibility thing. But I've got to say it also makes me a little nervous, because there's no way to validate what comes out of ChatGPT. And you said just now that it doesn't take much to build trust: a couple of right answers and we're good to go. So do we then have the opposite worry, about people trusting when they shouldn't?

[31:06] Shreshth: There would be that, right. And that's where ChatGPT is, for lack of a better word, a general-purpose toy for the public. Very little harm if it gives out a wrong answer or a biased answer. But I think when companies start to implement these technologies, they will probably, and hopefully, be a lot more thoughtful about it and do a lot more testing. Even today, right, when you have your data tables and everything, you run processes, automated or semi-automated or manual, on top of them to make sure that what you are propagating through the organization is actually correct. There are controls that you put in place. So I'm sure as organizations take this up, they will put controls in place. The problem will remain that people will start to trust it, and at some point it will break that trust with a wrong prediction. So you have to coach people in how to use it, where to trust it, when to trust it, and when to pressure test it. Right. One of the challenges would be this, and it's going to be interesting to see how this pans out: the execs and the leadership will see the potential of the toys out there and be like, oh, this is such a cool thing, why can I not use that? Well, you cannot use it because you've never focused on data quality and building a robust stack. Right. You have really bad data; you can run whatever you want on top of it, it's garbage in, garbage out. You have five different definitions for the same metric. You have the same data lying around in five different places, no one owns anything properly, and there are going to be all these fundamental issues, and you just won't be in a place to take advantage of any of these technologies. Because what is probably hard to see from the outside is that at some point OpenAI would have done a massive amount of data cleanup and data structuring for training the algorithm. Right? It probably wasn't as simple as, oh, some engine went and read through the whole web, fed it to the algorithm, and voila. There would have been a lot of manual effort that went into structuring it the right way, doing the right feature engineering, shaping the data the correct way, which hasn't happened inside most companies. And that's the other risk. One risk is people trusting it inherently when there are biases and wrong answers in there. And the second risk is people moving too quickly without actually building the right foundation, and using these technologies on top of a fragile data layer. Then, of course, you'll get incorrect answers. You might build it to be correct for a certain use case, but as soon as someone asks anything outside those boundaries, it will start to fall apart very quickly.
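[Editor's note: a minimal sketch of a guard against the "five definitions of the same metric" problem Shreshth describes: a reconciliation check that computes the same metric from two sources and flags disagreement before anything is built on top. Source names, figures, and the tolerance are illustrative.]

```python
# Illustrative reconciliation check between two sources of "revenue".
# If the definitions differ (e.g., bookings vs. recognized revenue),
# surface it before the numbers propagate through the organization.

finance_revenue = {"2023-Q1": 1_200_000, "2023-Q2": 1_350_000}
sales_revenue   = {"2023-Q1": 1_185_000, "2023-Q2": 1_350_000}  # a different definition

def reconcile(a: dict, b: dict, tolerance: float = 0.01) -> list[str]:
    issues = []
    for period in sorted(set(a) | set(b)):
        va, vb = a.get(period), b.get(period)
        if va is None or vb is None:
            issues.append(f"{period}: present in only one source")
        elif abs(va - vb) / max(va, vb) > tolerance:
            issues.append(f"{period}: sources disagree ({va:,} vs {vb:,})")
    return issues

for issue in reconcile(finance_revenue, sales_revenue):
    print("METRIC MISMATCH:", issue)
```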

[34:16] Caitlin: Yeah, you really need to know where the guardrails are on the questions and keep them focused. I do think, and this is my personal bias, that people tend not to pay much attention to their data until they actually want to start using it. So undertaking big projects is a great forcing activity, and analytics is a good place to start with that. One more question for you: what do you think the future of data science is, based on where we are right now? What do you see, say, five years from now or more?

[34:51] Shreshth: Right. I think, one, the field will continue to exist; it will not die out. What will happen is that a lot of the complexion of the job will change. The data scientists who will be successful, on successful data science teams, will be spending a lot more of their day-to-day time than they do today on interacting with the organization, understanding what's going on, building the right set of capabilities, and enabling people. Right now, I would say that's a minority of their job; the focus of the job is highly technical. A lot of the technical stuff will be taken care of. Even something as early stage as ChatGPT can do a fairly good job writing code, working code. We will see tools which will take away those tasks. If I look a decade ago, anyone who understood the basics of algorithms and could write code in Python or R, that was one fundamental skill, and model tuning was a big skill. Right. I remember in those days, creating all these boosted models, there was a lot of focus on understanding all the hyperparameters, tuning the hyperparameters, and understanding the interplay of hyperparameters. You don't need to do any of that today. Tools just do that for you. Similarly, we'll reach a stage where these hard skills of knowing how to code and tuning the models will probably not be the most valuable skills. The most valuable skills will be knowing the foundations of it, to understand which one to deploy when. Maybe even that will become less important, and the machines will be doing it for you. So it will be setting the context right, interacting with the people, translating the business requirements into the right tasks and right prompts for these machines to execute effectively and quickly, and then enabling people to use it. The complexion of it will change. The skill sets that people need to develop will change. But some things will still remain the same, right? First, the fundamentals of understanding how the technology works and what the algorithms are will remain the same. Second, the fundamentals of how you interact with the organization, how you build relationships, and how you understand the needs will still be there. And third, the collaboration within the company and the technology teams: there will still be a data platform team, there will still be an analytics team, there will still be data engineering. Right. And you'll still have to work effectively with all of them. So those will remain the same. More and more, it will become like management consulting, I guess.
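[Editor's note: a short sketch of the shift Shreshth describes: a decade ago you tuned boosted-model hyperparameters by hand; now a search utility explores the space for you. This uses scikit-learn's RandomizedSearchCV; the dataset and parameter grid are illustrative.]

```python
# Illustrative automated hyperparameter tuning for a boosted model.
# The tool samples the parameter space instead of you hand-tuning each knob.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "learning_rate": [0.01, 0.05, 0.1, 0.2],
        "max_depth": [2, 3, 4],
    },
    n_iter=10,  # tries 10 sampled combinations with cross-validation
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("Best params found automatically:", search.best_params_)
```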

[37:45] Caitlin: Yeah, I think you're right. So if I'm hearing you right, it sounds like the advice to the baby data scientists just starting out today is to stretch, to really work on practicing those people skills and interacting with their business counterparts, so that they're fleshing out their skill sets, and that hiding in the code is maybe not the best bet for a successful long-term career.

[38:08] Shreshth: Yeah, I wouldn't think so, because, and it's very early days, this whole concept of prompt engineering seems to be taking off, and I'm not entirely sure how that will pan out. But there will be something there where it won't be about writing native code. It would be like Copilot. What I hear from a lot of engineers who interact with Copilot is that it has been taking away anywhere from 50% to 90% of their manual coding effort. So being able to write the right syntax is going to be a redundant skill. It's almost like, when I was in school, having good handwriting gave you an edge, because a lot of your examinations required you to write subjective answers, and if you had good handwriting it was easier for the examiner to read and grade. Today it's not a differentiator at all. And similarly, writing good code syntax and all of that is not going to differentiate you. You really need to focus on the more complex, subjective things.

[39:24] Caitlin: Yeah, so like I said, technology is not just taking away our jobs, it's changing them.

[39:29] Shreshth: Definitely true for all of us.

[39:33] Caitlin: That's great. Well, thank you so much for sitting down and talking with me today. I thought this was really interesting and I think will be informative to people who are thinking about technology and data science at their organizations.

[39:45] Shreshth: Well, thank you for having me on, Caitlin. This was a really fun chat, and I look forward to talking with you again.

[39:54] Caitlin: Yeah, me too.