AI: Uses, Ethics and Limitations

May 1, 2025 | Podcast

Todd Price: Welcome to Short Talks from the Hill, a research and economic development podcast of the University of Arkansas. My name is Todd Price.

Everyone is talking about AI, and AI is talking back to us. Artificial intelligence has been integrated into our phones, our internet searches, our cars, and even some high-end kitchen appliances. Researchers are using AI to make new discoveries in medicine, chemistry and physics. While some worry that AI will replace white collar workers, others see a future where AI benefits everyone.

Varun Grover, a Distinguished Professor of information systems in the Sam M. Walton College of Business, studies the impact of technology on people, companies and society. And he has thought a lot about AI in research papers, editorials and a regular series of philosophical posts on LinkedIn.

Varun Grover, welcome to Short Talks.

Varun Grover: Thank you. Glad to be here.

TP: AI is a huge topic, and the conversation around AI changes constantly as the technology evolves and new uses for AI are found. I want to start this conversation with your own experiences. As a researcher, how do you use AI? How does the current technology help you do your work and where does it fall short?

VG: So when I started using AI, and that’s with the introduction of ChatGPT a couple of years ago, I didn’t find it that compelling. The outputs were not something that I found terribly useful. But over the last year there have been dramatic improvements. And now when I look at my work, a lot of my work deals with technology and how it impacts companies. And so we study the relationship between technology impacts and outcomes for corporations. And so I look at dilemmas, of confusing things in practice that I just don’t understand. And sometimes feeding it into AI — for example, we’re seeing technology everywhere. But technology budgets in corporations are going down. So that’s a puzzle. That’s something that I don’t understand. So feeding it into AI allows me to get some informed speculation, maybe even some theoretical insight into why this puzzle exists and why it’s happening. And that can trigger some interesting research. So I find it useful on that side.

And then on the data side, I can collect data and feed it into AI, and it gives me insight into the types of analysis that are possible. It allows me to do some summary analysis and presentation of data. So it’s useful in that regard too, as an advisor on how I should tackle the data. So I’m finding it increasingly useful now, particularly with the new generations of AI.

TP: Are there some uses, though, where you thought AI would be a tool that would help you, but you’re finding it’s just not there yet, or it’s not giving you the kind of results that you need?

VG: There are. It's not going to be able to write a research paper for me, but I can consider it as a tool and use it for specific tasks. So, for example, if I have the introduction of a paper written, I can feed it into the AI and say, you know, shorten it, or tighten it, or give me a more compelling conclusion. It can set me on the right path, but it's really important that you take everything AI gives you with a grain of salt. You have to make sure that you're cognitively engaged, because without that, you're basically using it as a tool to replace researchers, and that's not where it is right now.

TP: What advice do you give your students about how they can productively use AI?

VG: I basically say, don't use it as a crutch. Be a critical thinker. If you are using AI passively, then you are basically feeding the algorithms and providing data to AI. AI is winning and the human is losing. On the other hand, if you are critically engaged with AI, you take the outputs and think about them, you look for the sources and the credibility of those sources, you're actively engaged. Then the win goes to the human, not to the AI, because the human is cognitively engaged and learning through that process.

TP: As consumers, do we misunderstand AI because we focus so much on these consumer-facing products? Are companies using AI in ways that we don't see, that we might not even recognize, but that still benefit us as consumers?

VG: Oh yes. Certainly. I mean, there's a lot of behind-the-scenes stuff going on with AI. The vast majority of apps on your phone have AI, but it's not in your face. It's kind of in the background. There are companies that are using AI routinely. So, for example, when your utility company analyzes data and provides guidance on reducing electricity bills or optimizing your overall utility consumption, AI is there, but it's not that visible. It's behind the scenes. And now with digital technologies embedded in almost everything, including so many physical objects with the Internet of Things, all of these are spitting out data. And data is the oil that fuels AI.

So we're seeing massive amounts of data, and massive amounts of generative AI use through this data. I don't think that's going to abate. In fact, I saw a study that was done recently that basically said the complexity of problems being tackled with AI is going to double every six months. So every six months we're dealing with far more complex problems, with more variables being considered. AI is here to stay.

TP: Clearly tech companies have bet big on AI, but consumers are not always that enthusiastic about what these companies are doing. And there's a sense that some companies — Microsoft, Google — are really pushing AI on us. They're insisting that we use it in their products, but these aren't always things we necessarily need, and sometimes they're wonky and give strange results. Do you think tech companies, in their enthusiasm, have misjudged the current abilities of AI and the value of the technology? Are they too far ahead of where the technology is right now?

VG: I think the technology is advancing at an incredible pace. It actually boggles my mind, because I've seen many technological revolutions, but this one is moving fast. The problem is, you're absolutely right: it's tough for companies to deal with this pace, and so the challenges, the limitations, are largely in what I would call alignment and integration.

It takes training to get AI to do what you want it to do. You can always throw tools at companies, and everyone has access to these chatbots and these general tools. But if everyone has access to them, no one really differentiates themselves, and most of the use cases are individual. So the question is, can you get alignment? And alignment means you don't just take a tool and let it spit out a creative output; you continue to engage with the tool until you get it to do what you want it to do. That requires some degree of prompt engineering. It requires some degree of training. If you're generating code, it's an iterative process. And companies are not quite there. That requires time. So that's the individual getting alignment.

But then the company has to integrate AI into business processes. You can, again, add AI to an existing process and give it to people, and some people may use it well, some people may not. But can companies actually integrate it into their business processes? Just as corporations went through massive re-engineering many years ago, we probably need to go through a massive change of business processes so they can truly, effectively deploy AI. So yes, companies are pushing AI aggressively, but the companies that are using AI are not quite seeing the business value case yet. If you look at AI investments in companies and you look at the value outputs, like profitability, those relationships are weak. They're almost non-existent. And partially it's because of alignment and integration. These kinds of things take time.

TP: Well, one thing in your writing on AI that you talk about is the need to use active intelligence when we work with AI, and you’ve touched on that already in this conversation — that we need to have the right prompt, we need to understand what AI is good for and what it’s not good for, and to be skeptical when we should be skeptical. Do you think, though, that definition of what it means to have active intelligence is going to evolve as AI evolves, and intelligence perhaps takes on a different meaning as we engage with these AI machines constantly?

VG: Yes, certainly. I mean, the fact is we are at a point where we have access to intelligence that's actually externalized. Our intelligence usually sits in our skulls; now we are externalizing it, and we have access to extra intelligence. How do we effectively leverage that? To do that, we have to understand and critically evaluate all our engagements with AI. What is it doing? How is it producing outputs? How do we evaluate those outputs?

But this intelligence is getting better and better very rapidly, so over time there will be increasing movement from critical engagement to some level of dependence on AI. And that has its own dangers for society. If we get too dependent on AI, just as an airline pilot might get too dependent on the autopilot, or a driver on a self-driving car, then we forget how to drive or how to deal with emergency situations. So there is some careful planning and adjustment that needs to be made.

But as of now, I advise my students, or anyone engaging with AI, to be cognitively present. We see so many people aimlessly scrolling, doomscrolling as they call it, on their iPhones and social media, and that is what I call passive intelligence. You're actually giving more to the machine than the machine is giving to you, and it's not very productive. The human element has to stay in the loop; they call it human in the loop. So that is what I mean by active intelligence and being cognitively engaged.

TP: So also in your writing you talk about the difference between using AI for augmentation and using it for substitution. And I think it’s a really important distinction. Could you explain for our listeners, first of all, what is the difference between the two approaches, substitution and augmentation?

VG: In my mind, when you say substitution, we're basically automating. We're replacing the human with the AI, and that means the human is actually competing with the AI. And when the human competes with the AI, the AI wins, because AI is usually lower cost. So when you have humans competing with AI, the value of the human goes down and labor costs go down. I don't think that's as productive for society. When you automate, when you substitute, you're not really innovating. And companies have incentives to push this idea of automation and substitution very aggressively, because most companies are driven by profit motives. If AI can reduce the cost side of the equation, they're incentivized to increase profit by automating and reducing costs. But that only reduces the one side of the equation; it doesn't increase the revenue side. And it reduces the value of the human, because humans are competing with AI.

Augmentation, on the other hand, is enhancing the value of the human through the AI. In the same terminology, it's a human with AI competing with a human without AI. And the human with AI will do better, because the AI and the human interact to augment value. So the value of the human goes up. Companies need to be very aware of this distinction and not follow the path of least resistance, which is to automate, reduce costs, increase profitability through cost reduction, and subsequently reduce the value of humans. Companies need to think in terms of augmentation, because augmentation leads to innovation. People and AI can interact to create more valuable, more innovative, more creative outputs that can also enhance the revenue side of the equation.

TP: There's a lot of fear around AI. Do you think that's because of this fear of substitution, that the machine will replace us, or do you think there's something deeper causing that fear?

VG: Yeah, I think there is, simply because all the incentives are kind of pushing the idea that labor will be reduced, that costs will be lowered. Also, the innovations are trying to make AI more and more human-like, and when you make a technology more and more human-like, the general tendency is to think: let's make it like a human so it can replace humans, because it's so similar, or it's getting there. And so that's why people are worried. But as for the replacement of humans, certain jobs that are more repetitive will be, and are being, replaced by AI. Jobs that involve empathy, jobs that involve social interaction, will probably not be replaced, nor will jobs that involve a lot of manual labor, like plumbing. You can't see plumbing being replaced by AI in the near term. But yes, there is concern.

I mean, just look at a company like Waymo, the robot taxi service that's in San Francisco, Austin, L.A. It's doing extremely well. This is an automated car, and obviously, when these cars are successful, they're replacing taxi drivers. When you have automated trucks, they are going to be replacing truck drivers. So there is some genuine concern that's valid. But I think the replacement of most jobs will happen over a longer time horizon, and certain kinds of jobs will always be needed. The interesting thing is, humans augmented by AI can just do those jobs better. So you often don't have to replace those jobs; you can make them better through interacting with the AI.

TP: When you look at the future, are you optimistic or pessimistic about what AI will do to our society?

VG: I'm generally a technical optimist. I'm optimistic about the technology, so I have no doubt it will advance as it has been; the pace of advancement is going to be incredible. I'm less optimistic about the humans and their ability to react, and by humans I mean both individuals and social institutions. That includes government and its ability to regulate, and the ability of companies to effectively absorb these technologies and to prevent nefarious players from taking advantage of them. So I'm optimistic about the technology advancing in leaps and bounds. I'm not as optimistic, but I'm hopeful, that we'll figure it out as humans.

TP: Well, Varun, thanks for coming on Short Talks.

VG: Well, thank you. Thank you, Todd. Appreciate it.

TP: Short Talks from the Hill is now available wherever you get your podcasts. For more information and additional podcasts, visit ArkansasResearch.Uark.Edu, the home of research and economic development news at the University of Arkansas. Music for Short Talks from the Hill was written and performed by local musician Ben Harris.