Watch CCC’s October NXTUP⬆️ LinkedIn Live Episode
CCC’s new LinkedIn Live original content series NXTUP⬆️ continued this month with episode 2, featuring guest speaker Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, professor at the Harvard Kennedy School of Government, professor of computer science at the Harvard School of Engineering and Applied Sciences, director of the Harvard Law School Library, and co-founder and director of Harvard’s Berkman Klein Center for Internet & Society. Co-hosting the event were CCC’s Chief Product Officer Shivani Govil and Vice President of Ecosystems and Alliances Manju Bansal. Episode 2 featured a lively discussion on tech policy, data privacy, AI ethics, and the evolving regulatory impact on the P&C insurance economy. Catch the replay (or read the transcript) below, and be sure to follow us on LinkedIn for future episodes!
Shivani Govil: Good morning, good afternoon, good evening. Welcome, everyone, and thank you for joining us from wherever you are. We’re so excited to have you with us today for our second episode of NXTUP. As you may recollect, what we do in these episodes is bring together domain experts, industry thought leaders, and visionaries to talk about the impact of technology, regulations, policies, and other aspects on the property and casualty insurance industry. Our guest speaker today is Jonathan Zittrain, and we’re super excited to have him here. I will introduce him in a quick second, but before that, let me hand over to Manju for some housekeeping items.
Manju Bansal: Thank you, Shivani. And thank you, everybody, for joining us online. Really appreciate it. Just a couple of quick housekeeping notes. While we’re having this conversation, it would be great if you could join the discourse online as well. So please do not hesitate to write your comments or your thoughts on the things we’re discussing, and also, of course, to ask your questions there. We will be taking your questions and answering them at the end of the conversation, so please do bear with us; we appreciate your patience. Shivani, anything else before we kick things off?
SG: No, let’s do it. So let me introduce our guest speaker for today, Jonathan Zittrain. And I’m going to have to read out his bio, because he has such an impressive bio it was hard for me to memorize everything. Jonathan is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, a professor of computer science at the Harvard School of Engineering and Applied Sciences, director of the Harvard Law School Library, and co-founder of the Berkman Klein Center for Internet & Society. His research interests span technology policy, the ethics and governance of artificial intelligence, new privacy frameworks, and more. He’s an author, a Forum Fellow of the World Economic Forum, a trustee of the Internet Society, and a member of the Board of Directors of the Electronic Frontier Foundation, among many other distinguished honors. Such an impressive background. Jonathan, welcome to our show.
Jonathan Zittrain: Thank you so much for having me, Shivani and Manju. It’s great to be here. And hello, everybody.
SG: And I’m so excited. I think we’re going to have a very interesting discussion on a variety of topics, and how they relate back to our industry. So with that, Manju, why don’t you go ahead and ask the first question.
MB: Thank you, Shivani. So, Jonathan, the automotive industry is at the forefront of the technology revolution in many ways. Cars are transforming from what they used to be, metal cages on wheels, to what you could argue they’re progressing towards: essentially mobile phones on wheels, right? They have their own operating system, there’s an app landscape coming up, and arguably even a UX/UI layer, if you saw the Apple CarPlay announcements last month at the developer conference. As cars get smarter, more connected, perhaps more autonomous, there are lots of implications for data privacy and the transmission of data. I can imagine payment systems embedded in the car, so if I’m going through a McDonald’s drive-thru, I just press a button and things get sorted out, right? How do you see the regulatory framework that needs to be put in place to drive the adoption of these smarter vehicles in the future?
JZ: Well, what a great question. And I can’t help but think, as is often the case, totally understandably, that the stuff percolating up to the regulatory environment is stuff that is kind of already happening. It’s like the fierce urgency of now. And if we’re talking about autonomous vehicles, that might be, first and foremost, a bunch of safety standards, wanting to make sure that cars are held to the right standard: do they just need to be better than the average driver, or do they need to be perfect, or somewhere in between, better than the best driver but less than perfect? And that all makes sense to be taking up. And if it isn’t formally regulated, to take a United States example, in a broad and rules-based way, then the tort system will fill in to see about how good these products are. But I think it’s also worth looking ahead to what autonomous vehicles, for example, could really start to unleash. I don’t know how far ahead this is. I think actually having truly autonomous cars is a really challenging problem if they have to share the road with anybody other than other autonomous cars; if they could just be on their own test track, and you made the interstates the test track, that would make life a lot easier for the autonomous cars. But I think the key is first not to think of it just as a cool chauffeur that drives your car for you so you don’t have to pay as much attention to the road if you don’t want to.
But I remember my colleague here, Clay Christensen, started thinking about, well, if you don’t even have to be looking at the road, maybe the entire interior of the car should be redesigned. You could put a poker table in the middle with four seats each facing one another, or a bed, or whatever; you’re in a moving box that could actually be a kind of fun place to hang out. And that’s certainly interesting, but my mind – you mentioned the regulatory framework – starts going in other directions. So, for example, I think about what it could mean, with a nicely connected and autonomous vehicle, if the police issue a warrant for someone’s arrest, and then we identify that they’re in a particular autonomous vehicle. All right: push the button, transmit the warrant, the doors to the car lock themselves, and the car starts driving to the nearest police station, as if it were dropping off a package, and waits for the police to come out and collect the person. I don’t know if that’s going to drive adoption of the cars, but it could turn out to be a useful thing once people have them…
SG: And then it may prevent some of those high-speed chases on the highways.
JZ: That’s right! Well, speaking of high-speed chases, imagine being able to declare an onboard emergency in your car, at which point all the other autonomous vehicles part like the Red Sea and slow down, and your car zooms to the nearest hospital, where I imagine at some point you have to collect a voucher to prove you really had an emergency, rather than that you were just kind of in a hurry and wanted to press the turbo button. Or consider this free idea – a Facebook for Uber. Which is to say, imagine that instead of having to pay for a ride when you don’t own your own car and would normally take an Uber or a Lyft to get somewhere, you could take a sponsored ride. It’ll be free, but you’ll go to a surprise intermediate destination, like that McDonald’s you mentioned, Manju, and the car waits outside the drive-thru window and you have an opportunity to order a Whopper or something, courtesy of McDonald’s. You certainly get the idea that you could end up having an entirely new economic model for transportation when you’ve got all these variables in play around a highly connected car that can drive itself.
Some of them are scary to me. Some of them are quite fascinating. One last example I can’t help but share: imagine when Hurricane Harvey was about to hit Houston in 2017, and officials in the area decided not to declare an evacuation because they were worried that there’d be big traffic jams, with people following the order and getting stuck, and then the hurricane would catch them on the highway. Well, with autonomous vehicles that could smartly route themselves so they don’t end up jamming up where they shouldn’t, you could declare an evacuation, and your smart car would come up to your front step and say: you have 20 minutes to load your packages and your family in the car, and then I’m outta here. And that could help with an evacuation. So, these are things that really stand to change the locus of control, which kind of by necessity has been focused on the person right behind the steering wheel, and to put it in all sorts of other places: local officials, or wherever it might be, the police issuing that warrant. And I think we have a lot of thinking to do. As you might guess, this is not on the regulatory landscape at all yet; it will be a regulation of convenience once everybody has the cars and somebody has a eureka moment and says, gosh, we could do this.
MB: Do you see this as a classic situation where the economy, the corporate sector, and innovation overall gallop forward and regulation eventually catches up, as and when it can?
JZ: Well, that’s been the classic Silicon Valley model, and something that, with respect to innovation in the digital space and the Internet, I’ve traditionally found myself quite happy about: the idea, to put it as directly as possible, that whatever isn’t prohibited is permitted, and if the law is silent, go for it. And you’ve seen some Silicon Valley companies, possibly including ones I have already mentioned, that were just like, even if it isn’t permitted, go for it, and create a reality on the ground so compelling, or irresistible, that the law will then conform to match the thing – and there might be a number of taxi drivers not so happy about that mode of doing things. Whether that is suitable in each sub-area, I think, is a question really worth reflecting upon. And there are some areas, including those in AI, that really do bear great possibility for innovation, but where there can be failure modes with enough societal implication that we desperately need a model by which to kind of collectively think it through. And by the “we” here, I don’t know if that means regulators in the first instance, but a “we” greater than just the two people in a garage saying, I’ve got nothing to lose, let’s unleash this thing on the Internet on Tuesday and see what the world looks like on Friday.
MB: There are a lot of things you mentioned there. So thank you, and we’ll pick those up in subsequent questions. But Shivani, why don’t you take over the next question, please?
SG: Yeah, I wanted to go back to one of the things you mentioned around connected cars, and, in general, connected devices and IoT. Some studies say there are over 46 billion connected devices today, of which more than 13 to 16 billion are active. And it’s not just the connected cars themselves generating data: people have their mobile devices, and the devices themselves generate data. We saw the Apple announcement at the developer conference, and Google and the Android systems have been doing crash detection for a while through mobile devices. Today, your car can tell you when a crash happens and potentially summon help. Tomorrow, there could be even more functions and capabilities on offer. I’m curious to know your thoughts, Jonathan: as you see the increase of IoT devices, and as you see computing move to the edge, where is all this going? What’s going to be available for us in the next three to five years specifically, and maybe we can also tie it back to insurance?
JZ: Yes. So, a question that kind of has the answer in its premise: everything everywhere all at once. And is that good or bad? What does it mean for us? I agree that we are fitfully moving in the direction of so many connected things. It’s funny, when you talk about how many things are connected, to reflect on how decentralized the Internet is: we kind of have to sample the water with a ladle and then extrapolate to know how many things are connected. If this were all one big, you know, AT&T or Verizon network, they’d know exactly how many things were connected at any given instant; on the Internet, it’s like… there’s a bunch of stuff, it’s all IP addresses. It’s funny also to think there was a time when we thought that with Internet Protocol version 4 we’d have enough IP addresses forever. There was also the era in which MIT was accorded more reserved IP addresses than all of China. And we’ve been, as you probably know, going to IP version 6, which at last has more IP addresses than there are stars in the known universe – which is a very impressive statistic until you realize that there’s an infinite number of numbers, and can’t you just have one whenever you need one? That was the great insight of IP version 6.
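As a quick aside for readers who want to check the scale of that comparison, a few lines of Python make the IPv4-versus-IPv6 gap concrete. The star count below is a rough, commonly cited estimate, assumed here purely for illustration:

```python
# Back-of-the-envelope check on the address-space comparison above.
# The star count is a rough published estimate, assumed only for scale.
ipv4_addresses = 2**32    # IPv4 uses 32-bit addresses (~4.3 billion)
ipv6_addresses = 2**128   # IPv6 uses 128-bit addresses (~3.4e38)
stars_estimate = 10**24   # rough estimate, observable universe

print(f"IPv4: {ipv4_addresses:.2e} addresses")
print(f"IPv6: {ipv6_addresses:.2e} addresses")
print(f"IPv6 addresses per estimated star: {ipv6_addresses / stars_estimate:.2e}")
```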
So, take the idea that every girder in a building could have a sensor embedded in it and know if it’s been subjected to stress – in an era of climate change, when there are a lot more events outside the 500-year expectation band, or whatever it might be, that insurance companies think about and for which the building was designed. You can get all sorts of great and salutary uses out of the collective, emergent use of data from those sensors. In my girder example, that sensor was embedded there for exactly that purpose. In your iPhone example, somebody got the iPhone for whatever reason – they wanted to call somebody, or they wanted to text – and it turns out it comes along with all these sensors that can generate telemetry that can be enormously helpful, whether for that individual, to detect a crash, or collectively, so that there is a bird’s-eye view available to whomever is in a position to view all that data and say something like: Cleveland is looking restless tonight. If I compare the movement of phones in Cleveland, and I’ve got a little bit of insight into totally anonymized and aggregated data – so, no privacy issue – from everybody’s Oura rings and Fitbits, I can say there’s something up in Cleveland. Is there a sense of group privacy around that? Or should we all be entitled to that? Oh, gosh, it’s the World Series – that’s why Cleveland is upset, or whatever it might be. It does open up tons of possibilities.
In the vehicle space in particular, I think it’s worth noting that in the most immediate term, because of supply chain issues and the deficit of new cars traced back to a chip shortage, all sorts of things, the rate of adding new connected cars into fleets has actually slowed recently. So we were kind of right on the brink, maybe, of a state change, and that’s been delayed a little bit. You see connectivity happening primarily through cell phones or dash cam devices, not built-in technology. And in some ways, that’s forestalling things for just a little while longer.
There’s another question latent in your question, which is how to think about interoperability. If we see any virtue in having a collective view on something, properly regulated and circumscribed – who gets to access that and share it, and to what uses it would be put – and if we see a virtue in allowing people to have a Hyundai for one life phase and then move to a Ford F-150 for the next and then onward to a smart car, whatever it is, a Rivian, you don’t want them to have to be like, oh darn, I was in the blah-blah family of smart devices, and I’ve got to keep it all within the family. And that’s a real problem, because it’s just too much to expect, in a market sense, of consumers – that at the time they buy the first thing that inducts them into, you know, the Corleones of the IoT, that’s it, they’re in forever. That leaves you a real problem.
SG: I think that’s an excellent point about interoperability, because you’re right: you get that first device and then everything else connects in. So, let’s say you’re on an Amazon Alexa system versus a Google one – you’re kind of in that channel, and everything you do ends up being in that channel, versus being able to mix and match and plug and play. And I was thinking about the comment you made about the sensor in the girder. I live in Northern California; we had the earthquake recently, and you could imagine having a sensor or IoT device in the home that allows you to understand what the impact of that earthquake was on your home and its foundation, and how that then translates into the insurance side. As we think about connected vehicles and connected cars, there was a lot of telematics data being collected, originally used largely for underwriting and policy purposes. But more and more we’re starting to see that feed into crash detection and claims resolution: not just how do you provide support, but also how do you accelerate the whole claims resolution cycle, because you have a lot of good data that can help inform that.
MB: Just one thing to add on the insurance side – there’s a big annual trade show called InsureTech Connect that happens in September in Vegas. This year, when I was there, there were a handful, maybe half a dozen to a dozen, companies that were all building management systems integrated with IoT devices – essentially, what you said about the chip on the girder. And the whole idea, connecting back to what Shivani mentioned, was that, hey, something is happening in the building – the temperature is rising, there’s water where it shouldn’t be, moisture detection – and therefore, at some point in the future, maybe not too far out, the claim gets fired off and all the rest of the activity happens on its own.
JZ: Yeah, and in some ways I could see finding it advantageous – the sense that more data is always better, and feeling like you’ve got a better bead on what’s going on. Particularly for casualty events, in the insurance sense, where you have an earthquake or something that is going to affect a lot of people simultaneously, possibly in the same region, to have a sense so you can triage and know, both for public safety purposes and for remediation purposes, what are the things that can wait a couple of days or a week and what are the things that need an immediate intervention – even to be able to telegraph back to the folks in the apartment building or in the house what’s going on. This is a wonderful question, even, of the division of responsibility between public and private.
Is this a governmental function, the way we think of the public safety function – emergency services and such? Or is this the kind of thing where you could see a consortial approach, where a bunch of entities that have the telemetry could come together for the purpose of intervening during natural disasters – of which, again, we might have reason to think there will be more over time, not fewer – and model how to respond in ways that right now are still handled in an individualistic or, you know, “when do we summon the Red Cross to set up a tent here” kind of way? In that sense, I think it carries a lot of hope and possibility, and I would love to see something like that modeled, possibly without waiting for a government to do it, if that consortium can be in a position to say, “here’s what we’re going to do with the data; here’s what we aren’t going to do with the data.” It’s much harder for individuals to plan for that extra-rainy day when it’s not raining. So to have structural incentives for others to help them do it, and to be ready to be there for them when that extra-rainy day arrives – that seems like a pretty noble calling.
MB: I want to pick up on what you mentioned about data, and AI is clearly the tip of the spear. Almost every innovation happening now uses AI in some way, shape, or form. Not only is it everywhere, the implications for end consumers or population segments are fairly profound – whether your mortgage gets denied or your cancer gets detected, there are all kinds of examples. If you’re a company, a profit-making enterprise, and you’re trying to do all this stuff, and you want to innovate purposefully, you’ve also got to be mindful of where this regulatory field is heading. It’s not that the hammer is coming down on you; it’s almost like, how do I make sure that what I’m doing today will keep working for me in the future without getting in trouble with any potential regulation that comes down the line?
Just as an analogue, the European Union’s GDPR law in 2018 was pretty landmark and was subsequently emulated by a lot of other countries, and California, for example, was an early analogue of that. Do you see a similar situation happening for the algorithmic world we’re now trending towards, or almost living in? Will we need a similar regulatory regime to help people frame this – something that spells things out: I cannot go there; if I do go here, I need to take care of these five things? Where do you see this playing out?
JZ: Well, it reminds me of how I’ve been summarizing governance in the digital space lately, which is that we only have two problems. The first is we don’t know what we want, and the second is we don’t trust anybody to give it to us. If we just solved those two problems, we’d be in great shape. The “we don’t know what we want” part of things is a little bit different from the GDPR and CCPA kinds of frameworks, because with privacy, I actually think for a good 15 or 20 years there has been pretty good consensus among the disinterested parties and privacy advocates about what they want. There’s argument around the margins, but it has to do with vindicating personal choice around the use of sensitive data, or non-sensitive data from which inferences can be made. It has to do with not having people be exploited through so-called dark patterns or other things that extract consent from them when they don’t really know what they’re signing up for. These kinds of things have a classic consumer protection quality to them; the biggest trouble over the past while in making progress there has just been persuading regulators that this is worth their time and that this is a constituency worth paying attention to versus the others. And with privacy you also have at least a sense of the basic unit: information privacy, until recently, has just been data – a datum about somebody. When we start talking about AI and machine learning, we don’t know what we want. We have clear examples of terrible failures – of things that were, maybe the polite way to say it, so innovative that the world just wasn’t ready for them yet.
I think of the UK car insurance company that did a little bit of a so-called supervised machine learning thing, where they had a bunch of data about car accidents and who was in them, possibly even who was accorded responsibility for having caused them, and they had Facebook accounts for those folks that they could train up as a dataset. They went ahead and trained it up, and it seems this company had the idea that if you write in short, concrete sentences, and you use lists, and you arrange to meet friends at a set time and place rather than just “tonight,” you’re probably a safe driver. Now, first of all, it could be wrong – at which point, if I have hooked up my Facebook account for a potential discount with the insurance company, and, who knew, because I use short sentences, even though I’m very careful, I’m in trouble. That’s not great, because I as the customer don’t even know the rules I’m supposed to play by. It’s different from having something in my car that I agreed to, where so long as I don’t speed terribly, I’m getting discounts. But there’s also the issue of: what if it gets it right? What does that even say about the use of this? How much do I now start watching my own Facebook use, because I’m realizing it can affect my insurance rates if I too hastily advertise an event on Facebook using short sentences and say “Tonight!”? So that’s the kind of thing where everybody acting rationally – for a company, generally speaking, profit-maximizingly – for better or worse, everybody doing that could still lead us together into places that nobody really wants to be. And that’s the kind of thing where you say, great, we need some industry standards, potentially we need wise regulation, but we’re still trying to figure out: what’s the unit of AI? I can give you, quickly, a first cut at it, for which, if we come back and have this conversation in five or 10 years, I’m going to have a lot of stern words for myself about how wrong I am.
First, I would include data: the datasets that are the unit machine learning models inhale and get trained upon – either all at once, after which they’re released and don’t alter their behavior, or through so-called online learning, which means they’re somehow getting data back after they’re put into use, possibly through their use, which means they will evolve in the wild in ways that the people who built the dataset can’t expect. And we are still in a really rudimentary time, especially for businesses on the demand side – those who aren’t in the business of building AI systems to sell to others but are potentially considering acquiring or rolling their own for use in a substantive business of their own – in figuring out: what is the dataset I am trying to cultivate or borrow? Often these datasets are passed around like Christmas fruitcakes. If you are trying to build some kind of smart email program, one of the most common datasets of emails to train it on is the Enron emails, which were released by the Federal Energy Regulatory Commission because, as part of Enron’s $63 billion bankruptcy and the associated fraud investigations, they got all of these emails and then released them to the public. The idea that you would train up an email system on that company’s email is, you know, ludicrous – it’s a great example. So anyway, data is one. Models, and the PhDs behind them, are the next, and compute power is the third. Back to you, Manju.
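For readers curious what the “train it up as a dataset” idea looks like in practice, here is a minimal sketch of supervised learning in the spirit of the insurer example above: text-style features from posts used to predict a label. Everything in it – the toy posts, the labels, the choice of model – is invented for illustration and assumes scikit-learn; it is not drawn from any real insurer’s system:

```python
# A minimal, hypothetical sketch of supervised learning on text style:
# posts are featurized and a classifier learns to predict a label.
# Toy data throughout; the "safe driver" premise is the (dubious) one
# described in the conversation, not an endorsed methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Meet at Joe's Diner, 7pm Thursday. Bring the receipts.",
    "tonightttt whos around?? lets gooo",
    "Grocery list: eggs, milk, bread. Pickup at 5.",
    "no plans just vibes, hit me up whenever",
]
labels = [1, 0, 1, 0]  # 1 = "safe driver" per the example's premise

# Pipeline: turn raw text into TF-IDF features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Predict on a new, unseen post.
print(model.predict(["Dinner with friends, 6:30 sharp on Saturday."]))
```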
MB: This Enron thing triggered a thought: the whole issue of ethics comes in very quickly, because if everybody’s using the same Enron emails to train their email algorithms, or what have you, once that flawed thinking, for lack of a better word, is embedded in that model, it is then suddenly replicated across hundreds of other places where that same bias has crept into the thinking. You probably heard about that credit card, with Goldman Sachs, that was flagged recently, like six months ago, where, with the same level of qualification, males were getting a far higher credit limit assigned to them than corresponding female applicants. You could argue it’s not the end of the world, but when far more critical use cases are being handled by AI, this could have more far-reaching implications. So the question is: where does ethics come into this whole thing? Ethics is such a slippery thing to define. And if you’re a company that, as you mentioned earlier, is in the business of consuming a certain model as opposed to building it, you probably wouldn’t even know what’s in the bottle and whether it passed certain ethical checks and guidelines…
JZ: Yes – sometimes, by this telling, you only really become aware of the problems after you’ve gone into prime time, released this into the wild, and then, through a measure of disparate outcomes, you realize you’ve touched a third rail: you’ve discriminated against sensitive categories of people, and now what? So I agree there are lots of issues here, and of course there are stories of emergent bias from these systems, often ones that don’t need a dataset as clearly, you know, iffy as the Enron one to generate these kinds of outcomes.
There are only recently emerging some practices by which you can try to tiger-team your model and the data within it – try to clearly label the data so you know exactly what’s going in, so that if it’s garbage in, it’s garbage out. And even if you’re simply making more efficient, and replicating, discrimination already out there that is reflected in the data collected for the model, you might have a chance to try to correct for that before you go live. Now, you ask the question of, okay, ethically, how might this be handled? Some of the most direct approaches from some of the companies that think about this stuff are, again, I think understandably, kind of top-down. It’s like, well, let’s have an ethics committee, let’s have some central node through which we run everything. It’ll be like the kidneys of the company, trying to detect and cleanse out impurities, whatever’s going on. And that might be a worthy thing.
But I think there needs to be a counterpart to that, precisely because of your point that it’s not like the CFO is going to know how this thing was trained. I sometimes think of AI as like asbestos: it’s wholesale, not retail. It gets embedded in a bunch of infrastructure, and only later do you realize you have a problem, at which point you don’t even have an inventory of where it was installed. And that’s a problem. That’s why I think the counterpart to those top-down approaches is to really invite and incentivize the data scientists and the engineers – and if it’s an outside vendor, you’ve got to figure out how to incentivize that outside vendor – to flag these issues, to really sensitize them to it at the outset, because right now nobody owns detecting and dealing with this category of problems. So, when somebody says, “I’ve got some emails here, from Enron, let’s put those in,” somebody in that workshop might, if invited, have a think about it.
SG: You know, Jonathan, that’s really interesting. We’ve been talking about ethics and intention – where you’re intentionally wanting to do the right thing, and you put the processes in place to be able to do that and be ethical. But what about mis-intentions? Think about people coming in with an active mindset of fraud, or deepfakes: there could be fraud from a photo perspective, fraud by actors, fraud in network rings, and so on. In fact, in our industry we know there is a significant cost to fraud, which ends up resulting in individual premiums going up almost $400 to $700 per year. So, we talked about the intentionality of trying to do the right thing, but what do you think about the intentionality of doing the wrong thing – of being fraudulent or malicious in some way? How important is trust, and how do you protect against some of those aspects?
JZ: Well, I think you’re really helpfully pointing to two pretty distinct threat models. One threat model is the bad actor who is ready to take a few risks to try to get away with something. And that can proliferate if there’s a sense that the system is a bit of a sieve – a “it’s probably not worth it to the company to double-check this, I’m just going to fudge it” kind of thing. I don’t know if deepfakes, for example, represent a huge state change in that. It may be that the same people who were forging receipts – “oh yeah, here’s my repair receipt, this is how much it cost” – okay, now they have a dot matrix printer and it can look like a more official receipt, or they can, you know, Photoshop something. If you’re really out to have no ethics at all, that can be a problem, and deepfake generators might make it harder to catch. Then there’s the cat-and-mouse game: oh, well, with these deepfake generators, here’s a deepfake detector, and then it’s like, ah, they’re going to fudge that too. It’s almost like any industry that has issues at the margins – Walmart or somebody thinking about shoplifting: how much do they consider it, structurally, a cost of doing business? They can turn the dial up on enforcement, which will alienate some customers and carry other risks with it, or they can dial it down, and they just try to find where to set that dial. And that might happen with deepfakes too, if they’re used in those kinds of malicious ways.
And again, I think if it became common enough, there would then be a marketplace in detecting it. Or, if I’ve got a claim above a certain amount: you know, download this app to your iPhone to take the picture, rather than using the iPhone camera app. This picture will have a watermark embedded throughout the pixels, so that if you use some online thing to, you know, insert another picture, it will stand out – a CSI kind of thing. Those things could become opportunities within the industry to work with the adjustment and claims process. But I think the other question has to do more broadly with incentives and mistakes that, again, could be emergent – not from people who are out to do wrong, but from people who were never told, or incentivized to think, that something was their responsibility to be helpful with or to notice. It’s about thinking through the systems more generally and where to accord responsibility within them. I think there’s a lot to be studied on that front that is distinct from the bad actor model of ethical stuff. There’s just so much of this moment that is ethically freighted, compared with jobs that were maybe a lot more similar to their counterparts 50 or 100 years ago, for which the ethical dimensions were worked through in a process that just hasn’t happened here yet.
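The embedded-watermark idea lends itself to a toy illustration. The sketch below hides a keyed pseudorandom bit pattern in the least significant bit of each pixel, then measures how much of the pattern survives; pasting over a region destroys it locally. Real photo-authentication schemes are far more robust than this, and every name in the sketch is hypothetical – it only shows the concept:

```python
# Toy pixel-watermark sketch: embed a keyed pseudorandom bit pattern
# in each pixel's least significant bit, then check how much survives.
# Purely illustrative; not a real photo-authentication scheme.
import numpy as np

def embed_watermark(pixels: np.ndarray, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=pixels.shape, dtype=np.uint8)
    return (pixels & 0xFE) | bits  # overwrite each LSB with a keyed bit

def watermark_score(pixels: np.ndarray, key: int) -> float:
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=pixels.shape, dtype=np.uint8)
    # Fraction of LSBs matching the keyed pattern: ~1.0 intact, ~0.5 absent.
    return float(np.mean((pixels & 1) == bits))

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, key=42)
print(watermark_score(marked, key=42))    # ~1.0: watermark intact

tampered = marked.copy()
tampered[16:48, 16:48] = 128              # paste over a region
print(watermark_score(tampered, key=42))  # noticeably lower: edit detected
```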
SG: Now that makes sense. And I know we’re running out of time. Manju, maybe we should open it up to some questions from the audience.
MB: Absolutely. Thank you, Shivani, and thank you, Jonathan. I see one right here; I’m going to read it: since regulation and policy are evolving so quickly, how can an insurance company, or the industry, stay on top of it all, especially considering the variances that might happen at the regional and the local level?
JZ: Well, the insurance industry, as a comparatively regulated industry within the US, has some sectoral advantages and disadvantages. Anybody watching who is deeply enmeshed within the industry will know whether it’s like, “my friendly local insurance commission – it’s just great, I can call them up and see what they’re thinking!” or like, “oh no, they’re doing another rule, and it doesn’t make sense to me; I don’t know why it makes sense to them.” So there are avenues of regulation that exist, and of course it’s an industry that’s inherently, actuarially speaking, statistical and about managing risk.
MB: It’s all about data.
JZ: Yeah. So this is just another kind of risk to manage. And I think it’s fair to ask, well, how innovative do you want to be? If you’re generally trying to think over a period of months and years rather than just tomorrow, in your portfolio, maybe it’s not as important to be at the front of the line experimenting with stuff as it is in other industries. I remember once asking a colleague who’s an insurance law professor, “Gosh, this form that I just filled out in the Commonwealth of Massachusetts is so confusing. I’m surprised it hasn’t improved.” He said, “Trust me, there are a couple of cases that form has weathered. The attitude is: this form works, we are never changing it again.” That’s certainly not a, you know, Palo Alto attitude, but there’s a reason why the dial is set differently, sectorally, towards innovation. And if we’re talking about not the underwriting process but the claims management process, that might be somewhere you’ve got more flexibility.
MB: There’s also litigation risk tolerance, right? If you buy a house in California, for example, you sign a stack of papers this thick, because every form you sign has had a lawsuit preceding it, which is what necessitated the form to exist in the first place. Which leads to the next question: since the insurance industry is so embedded, particularly from an automotive perspective, in what AI will or won’t do for adoption and the expansion of the industry, how can insurance companies participate in designing and being part of that AI governance process, as opposed to being on the outside, waiting for things to be decided by somebody else?
JZ: I think my advice to the industry and its constituents is: make your mistakes early and often, and ideally in a sandbox environment – the kind of environment national security folks use when they do their war gaming, because they don’t want to actually fight the war. There are ways to really try this – through human beings, not through AI – to simulate the environment and invite people in, including perhaps some of the public advocates who are often seen as at odds with the industry, or with industries generally, and see what it would take to red-team some of these ideas, with the baseline being: we find this in all our interests, this is a public interest exercise, and if we’re helping people, that’s earning money for us and helping the people we’re helping. That would be my advice to the industry: figure out ways to surface mistakes and vulnerabilities, including structural ones in what you’re doing, early, and don’t worry about getting called out for something that hasn’t even happened or been deployed yet.
Your choice is really just whether it’s going to be sooner or later. And my advice to the regulatory side of things would be: think about how to incentivize that kind of forthcoming behavior. Could it be through the careful use of liability caps and immunities, if a company comes forward with something it’s thinking of doing – or maybe that it already did but is coming clean on – where it’s not even clear if it’s against the law, but the company wants to know, because something gives it a Spidey sense, not a great feel? In the tax realm, you can get these things called PLRs, private letter rulings, where under the right circumstances you can write to the IRS in the United States and say, so, I was just thinking of doing this – what do you think? And you get your personalized ruling before you take the plunge, and then those PLR letters pile up. I think there’s some creative work to be done on the regulatory side, because we don’t fully know what we want. We can point to clearly bad things and disasters by bad actors and close off those loopholes, but there’s a whole middle zone of stuff that’s just confusing and new and unclear, and the more we can ventilate that, rather than concentrate it in pockets where it’s really hard to figure out, the better.
SG: I think that’s great advice: going out early, testing things, both good and bad, what’s working and what’s not, and doing it as a partnership between the companies that are trying to build or use these types of technologies and the regulators who are trying to make sure we have the right regulations in place to protect everyone at large. So, great advice, Jonathan. I know we’re out of time. Thank you so much for joining us. This was a fascinating discussion, and I’m sure we could have gone on for hours more; unfortunately, our time was limited. Thank you so much again for joining us, and for the rest of you, we look forward to having you at our next session, which we will be announcing shortly on LinkedIn Live. Thanks so much.
MB: Thank you, Jonathan. Thank you, Shivani.
SG: All right. Bye, everyone.
MB: Bye.