In: Operations Management
WATCH THE VIDEO ---> Transcript of the video.
>> I think one of the things we've all gotten a little obsessed about at the moment around AI is what's technically possible. And I think we need to be paying much more attention to what's culturally appropriate, socially acceptable, and works inside our laws and governments.
>> So, you're focused on will robots share our values, not will robots take our jobs. Will they? Will they share our values?
>> Oh, another really good question, right? So, listen, we know that artificial intelligence is going to go to scale. We know it's going to end up in lots of different places. The question becomes, how do we ensure that that's something that we're comfortable with, something that we feel good about, something that reflects the things we care about. And that means asking questions beyond just what can we do technically, but to ask questions about what are the values we want these objects to enshrine, who gets to decide what those values are, and how do we regulate them.
>> Are these questions being asked as often as they should be?
>> Well, I'm an anthropologist so, you know, I think the answer is no. We should ask them all the time. At least the good news is I think they're starting to resurface. So, the more you hear talk about AI and ethics, AI and public policy, AI and governance, those are at least the beginnings of conversations about what's the world we want to build and how we're going to live in it.
>> So, let's take the pro side. Let's say these questions are asked as often as they should be. What is the potential of AI to affect our lives in positive ways?
>> So, I think if you manage to kind of think through the, where are the places that AI can be most useful, and frankly for me, again, as a social scientist, the question I always want to ask is not can we do it technically, but should we do it socially. So, are there places where AI makes better sense, not because it's about an efficiency, but because it has a way of making decisions that's a little less messy than humans making it. By the same token, depending on who programs it, depending on what data they use, sometimes we have the potential of these technologies to reproduce and enshrine really longstanding inequities and bias. And that seems like not a good trend at all.
>> Right. So, what are the gravest dangers? What are the gravest dangers if these questions do not get asked?
>> I think the gravest dangers are we take the world that we live in now and we make it the world in perpetuity moving forward. So, all the things about the current world that don't feel right is what the data reflects, right? It's a world where women aren't paid as much as men, where certain kinds of populations are subject to more violence, where we know that certain decisions get made in manners that are profoundly unfair. If you take all the data about the way the world has been, and that's what you build the machinery on top of, then we get this world as our total future. And I don't know about you, but I'd like something slightly different.
If AI were to replicate the current state of affairs in hiring practices, it is possible that various discriminatory practices would be copied. As a result, future hiring would be strongly biased against women, minorities, and older people. This would violate existing laws and jeopardize the reputation of the company.
The worst part is that since AI learns from large sets of data, it would automatically learn what we practice today. As a result, any change in hiring practices in the future would be an unplanned one.
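The mechanism described above can be sketched with a toy example. The data, group labels, and hire rates below are entirely hypothetical, made up only to illustrate the point: a model trained on historically skewed outcomes simply learns the skew and carries it forward.

```python
import random

random.seed(0)

# Hypothetical historical hiring records: (group, was_hired).
# In this made-up history, group "A" was hired ~80% of the time
# and group "B" only ~20% -- an illustrative disparity, not real data.
history = ([("A", random.random() < 0.8) for _ in range(500)] +
           [("B", random.random() < 0.2) for _ in range(500)])

def train(records):
    """A naive 'model' that learns the historical hire rate per group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The learned scores reproduce the historical disparity: group "B"
# candidates score far lower than group "A", even though nothing about
# actual qualifications ever entered the data.
print(model)
```

Nothing in this sketch "decides" to discriminate; the bias in the output exists only because it was already present in the training records, which is exactly why an unexamined AI would lock today's practices in place.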
Hiring and promotion practices already act as an internal force for organizational change. There have been many instances of discriminatory practices in workplaces over many decades. The laws and acts that exist today provide some protection to minorities, women, and the elderly. Now, we should ask the question, "If there had been no discrimination to begin with, would these laws have been necessary?"
Since there has been (and continues to be) discrimination, these laws were put in place. This has made many organizations recognize their mistakes, make diversity hiring an internal force, and change their HR practices. If AI learns from our current discriminatory practices, then it is possible that there will be no such internal force tomorrow, because the workforce may become completely homogeneous.
If there were a correction in the AI and it were handled as a change process, the resistance would likely come from the currently dominant group, i.e., white males. If the AI changes its hiring process and logic, it will likely no longer prefer white males as it does now, and their opportunities will shrink as a result. The key reasons to resist the change would be loss of status or job security in the workplace.