PwC's Tech While You Trek: Responsible AI

March 22, 2021 | Season 1, Episode 25

Tune in to another episode of Tech While You Trek to hear PwC Director Ilana Golbin on the importance of taking a step back and developing artificial intelligence (AI) responsibly to mitigate risks and reduce bias, as well as to prepare for the future. PwC recently published its AI Predictions report to share the top five AI trends that businesses are facing.

Adam (00:08):
Hello everyone and welcome to another episode of PwC’s Tech While You Trek. I am your host, Adam. And today I have with me Ilana Golbin to talk about responsible AI. So please introduce yourself and tell us a little bit about yourself, how you started working for PwC, et cetera.

Ilana Golbin (00:23):
Thank you for having me, Adam. It's a pleasure to be here. I'm a director at PwC in our emerging tech practice, and I've been a data scientist with the firm, specializing in AI and machine learning for just about a decade. I also lead our work globally in the space of Responsible AI. I came into this space because I've had the pleasure of growing as a data scientist while AI and machine learning have been adopted in new and unique ways by companies, and out of curiosity: how do these systems work? How do we explain them? What are some of their unintended consequences and repercussions?

Adam (01:02):
So I need to know right away, then, what it means to have Responsible AI, and I need you to put my mind at ease. Because to me, Responsible AI sounds like making sure the robots don't come to kill us all.

Ilana Golbin (01:13):
Well, personally I think we're a very long way from artificial general intelligence, which is really the robots-becoming-sentient-and-taking-over-our-world type of perspective. But there are real problems that we're facing today with these systems. They're right now making decisions on whether or not people get benefits from certain types of social infrastructure and social safety nets, not just in the US but globally. They're being used to determine who gets through the first few rounds of a screening process meant to replace resume reviews. They're determining who gets access to loans and other types of capital. They're being used to determine who gets access to vaccines, as we've seen in a few instances. So when I think about what it means to be building systems responsibly, it's first and foremost thinking about how we can address and mitigate some of the risks around AI that are here today, as well as the ones that might be around in the future.

Adam (02:07):
An automated system is only as good as the people who automated it, right?

Ilana Golbin (02:11):
The people who created the data that feeds it, the overall process, and how those decisions are then relied upon to inform other processes. So it's not as simple as removing humans from decision making and suddenly you have parity; it's much more complex than that. How do you understand how a model comes to its conclusions? Is that important? How well does the model work over the long run? Is it robust? Is it secure? Does it have the right type of governance and controls? It's these types of questions that we really need to address in order to have trusted systems. And this is increasingly important because we see it over and over again: we do an annual survey called AI Predictions, and we know that companies are continuing to invest in AI and continuing to push for the adoption of AI and other types of automated decisioning.

Adam (02:59):
So could you expand on that? Talk to me more about regulations, the need for better governance. What kind of impact could this have on AI in the immediate future?

Ilana Golbin (03:09):
There are a lot of different regulations that have been proposed over the last few years, some stemming from the data privacy side, others increasingly coming from questions about bias, fairness, and accountability. And especially with the change in administration, it's likely that we're going to be seeing more and more of these. So we're really at this pivotal point where, as organizations, how do we get ahead of that? And how do we appreciate what responsibility means for us as individual enterprises?

Adam (03:39):
So I've heard you mention bias now a couple of times. Talk to me about bias. Why is it such a big problem for companies to solve?

Ilana Golbin (03:46):
I don't think that when we go about building systems we think, "Oh, how can I penalize a specific group today?" or "How can I exclude a specific set of individuals from some type of benefit?" Be it access to mortgages, being hired for a specific job, or even just how well a specific product works on my face versus someone else's. But the reality of the world we live in is that data is not impartial. And the data that has historically been collected doesn't necessarily reflect how we think the system should operate in the future. Even applications like facial recognition can seem innocuous on the surface; they're used to unlock your phones. Research has shown that most facial recognition systems don't perform equally well on Black faces as they do on white faces, and that becomes a much bigger problem when the technology is used in contexts that might have really difficult repercussions for individuals, like policing. We've seen over and over again that this is starting to happen: people are being flagged as potential criminals based on their faces, and they've actually been flagged incorrectly. It was an error generated by the model.

Adam (05:09):
Well, that's interesting though, because in something like facial recognition, the data is there. It's just that the system was programmed in a biased way. There would be a way to create a system that could collect enough data from disparate faces to tell them apart; it's the system itself that has been designed in a way that doesn't do that properly. Is it not?

Ilana Golbin (05:28):
Not necessarily. We don't necessarily know where a lot of this data comes from. There are open source data sets that have faces specifically for the purpose of training these models, but they were created for academic purposes, not for commercial purposes, and so they really don't have parity. So data is a problem. The model is also a problem, because you have to think about the development practices. Who are the people creating these systems? They're more likely to test whether or not these systems work well for people who look like them. And so if you don't have diversity, not just of demographics but also of schools of thought and perspectives, you're not as likely to catch some of these issues in the models themselves. We don't ask ourselves enough, "Should we build this?" or "What could go wrong?" We go, "Hey, wouldn't it be cool if..."

Adam (06:15):
So then how does one go about building an AI system responsibly?

Ilana Golbin (06:20):
It comes down to governance and to accountability, ultimately. From the conception of a specific use case, we need to be thinking about how this could go wrong. What data are we using? What are some of the controls we need to enact? And we need to flag sensitive use cases so that those applications go through a more rigorous review. Who is ultimately responsible for this application? Who is going to be overseeing how it was developed? Is there a second line of defense, meaning someone who is tasked with independently reviewing this system?

And then does someone take a third-line perspective and say, "What are the actual controls we have within the organization to assess whether or not these systems were developed properly?" It's oftentimes referred to as the three lines of defense structure. Do we have that in place? In the vast majority of applications, most of that structure does not exist. It's, "Hey, we have this cool use case, let's push it in, and now we have a great model in production, and we'll just see where it goes from here." And that's not how things should operate.

Adam (07:23):
The old fingers crossed approach.

Ilana Golbin (07:24):
Yeah, fingers crossed. And the governance model is hugely critical. Models also need to be robust. They're probabilistic, not deterministic, systems, and that inherent uncertainty might cause concern in the long run, but we need to ensure that these systems do actually perform well. That they can't be manipulated, misconstrued, or even misapplied so that an undesirable outcome is produced, and that they won't be too sensitive to small changes in the environment.

There are other types of questions around privacy, which I think we will continue to have to address. There's always a delicate balance between the privacy of systems and the data that they collect, and being able to validate whether or not a system is bias aware. I don't like the term "bias free" because I don't believe bias could ever be fully removed. But without collecting that data, how can you then provide confidence that systems are in fact working in a manner that achieves parity across a variety of attributes?

So those are the general tenets of responsibility, but there's an increased focus as well on the general societal impact of systems and appreciating some of the harms. So one of the first things we can do as organizations is institute a culture around asking, "Should we do this?" You need to take a more measured approach. How do we take a step back and incorporate that as part of our development process, so we can still move quickly but in a more methodical way that's more appreciative of the risks?
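To make the parity idea above concrete, here is a minimal sketch of the kind of check a team might run on an automated decision system's outputs. The column names ("group", "approved"), the data, and the 0.8 threshold are all hypothetical assumptions for illustration, not a standard or tool Ilana cites.

```python
# Illustrative demographic parity check across a sensitive attribute.
# Assumes a pandas DataFrame with hypothetical "group" and "approved" columns.
import pandas as pd

def parity_report(df: pd.DataFrame, group_col: str, outcome_col: str,
                  threshold: float = 0.8) -> pd.DataFrame:
    """Compare positive-outcome rates across groups and flag large gaps."""
    rates = df.groupby(group_col)[outcome_col].mean()   # positive-outcome rate per group
    ratios = rates / rates.max()                         # each group's rate vs. the best-served group
    return pd.DataFrame({
        "positive_rate": rates,
        "ratio_to_max": ratios,
        "flagged": ratios < threshold,                   # groups falling below the chosen threshold
    })

# Example usage with made-up decisions:
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0],
})
print(parity_report(decisions, "group", "approved"))
```

Demographic parity is only one possible definition of fairness; which definition (and which threshold) an organization should use is exactly the governance question Ilana raises later about vendor defaults.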

Adam (09:08):
So you've already led me very nicely into my next question, which was talk about some approaches companies can take to implement such things. I know you started with the, should we build this? But what else?

Ilana Golbin (09:17):
We have seen a push where organizations are starting to adopt sets of ethical principles around AI. I think we've seen well over 120 different organizations adopt ethical principles around AI, which is great; it's a fantastic development. However, we're now at the point where organizations are trying to figure out what to do with these principles and how to put actual practices behind them. Translating these principles is one of the first things organizations really need to do. And I would say that there are a few developments that are going to help clients and organizations generally make that jump.

One is that you have various standards bodies and industry consortiums that are starting to adopt requirements around these principles and trying to give examples of what they mean. But I do think there are a few other things that need to be enabled, and it comes down to governance. Having a robust governance structure that is specific to AI, analytics, and automated decisioning systems, so not necessarily data governance or privacy governance, but a governance structure specifically designed to address the risks around AI and other analytics systems, will really be crucial.

Adam (10:32):
It sounds like what you're saying is they need to create new governance systems and what they're trying to do right now is force square pegs into round holes. They're overlapping existing things, none of which are quite doing the job.

Ilana Golbin (10:43):
I think what's happening is that there's an appreciation that something needs to change, but no clear guidance on how to do it. So organizations are approaching this from a few different angles, not necessarily knowing who within their organization is already doing something, and you'll have disparate practices that emerge. As an example, on the tech side, I would say the vast majority of cloud vendors with AI and machine learning capabilities that people use for development are starting to incorporate bias assessment tools, explainability tools, and robustness tools, which is great. But are they using the same definitions of fairness that the organization would want? Probably not, because no one has told them what that means. So that all needs to come together, and it's going to be painful for sure.

But a lot of companies have some existing governance models that they can build off of. And if they just take a step back, appreciate what's already being done, and bring the right people to the table to have a cohesive approach, I think they'll be better off in the long run.

Adam (11:43):
So talk to me about the concerns I would have, the concerns a consumer would have.

Ilana Golbin (11:47):
I think that's really important, because consumers nowadays, in the absence of standards and regulations, are really the ones pushing companies to adopt better practices around AI and machine learning. The court of public opinion is quite harsh when it comes to applications that don't work well for a wide variety of audiences. And that's not just on questions of bias. There are also questions about accessibility: do these systems work, and are they open and accessible to people who have different sets of abilities? That's another area where we see an opportunity for consumers to really have a voice. I would say that as consumers, more and more, we are demanding representation. We are demanding that systems work for us in a wide variety of contexts. And this is good for all of us in the technology space, because it gives us a North Star that we can drive toward.

Adam (12:44):
So I'm going to ask you one final fun question before I let you get out of here. I try to ask it of all of our guests. Are you quite prepared?

Ilana Golbin (12:50):
I'm prepared.

Adam (12:51):
So, the you of 10 years ago, what would they be surprised that the you of today is using, from a technological standpoint?

Ilana Golbin (12:58):
I don't know why all of my appliances are now Wi-Fi enabled and I cannot avoid it. I don't want my oven connected to the internet, but I don't have a choice anymore. I just find it very interesting that enough people have vocalized a desire to have a Wi-Fi-connected oven.

Adam (13:15):
You don't have the fridge with the camera that can show you, while you're at the grocery store, what is or isn't in your fridge at that exact moment in time?

Ilana Golbin (13:22):
You know what, if I had that fridge, I'd probably forget to look at the app when I was grocery shopping anyway. So I don't know what the purpose is, but I'm sure there are a few people out there getting some awesome value out of this. If you had told me 10 years ago that my oven, my lights, and everything else would be Wi-Fi enabled, I would have thought you were crazy.

Adam (13:39):
It's because they can, right?

Ilana Golbin (13:41):
Because they can.

Adam (13:42):
Well listen, Ilana Golbin, thank you so much for taking the time and stopping by today.

Ilana Golbin (13:45):
Thank you for having me.

Adam (13:47):
Thank you all for listening to another episode of PwC’s Tech While You Trek. I've been your host, Adam, and we will talk to you again next time.

Speaker 3 (13:58):
This podcast is brought to you by PwC. All rights reserved. PwC refers to the US member firm or one of its subsidiaries or affiliates, and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further details. This podcast is for general information purposes only and should not be used as a substitute for consultation with professional advisors.