Employers balance the deployment of AI between their workforce and customers


If a TIAA client visits the company’s website and types a question about retirement planning into the search bar, they get two sets of responses.

The first is the general list of responses that TIAA has traditionally offered to clients and the general public. But more recently, a second tab has appeared with another set of answers, generated by a beta version of TIAA’s “gAIt,” which uses generative artificial intelligence to provide more tailored and specific information about retirement services.

Initially, this tool was meant to help TIAA employees make sense of the jargon of retirement financial services. But it was so successful that the generative AI tool is now offered publicly, too.

“This is where we are now taking it—from what was a truly internal use case to educate our folks on this complex set of knowledge management—and starting to put that in the hands of our prospects and clients,” says Sastry Durvasula, chief information and client services officer at TIAA.

As companies like TIAA explore dozens of use cases for generative AI, one pattern has emerged: projects initially deployed to help workers are spilling over into external uses. Since Durvasula joined TIAA a little over two years ago, he’s helped the company set up an incubation arm that lets clients test AI use cases internally first.

“I would say there’s a large portion of our use cases that start internal and graduate into an external realm as well,” says Durvasula.

“The best place to start is with your employees,” says Sabry Tozin, VP of productivity engineering at social media platform LinkedIn. Colleagues are more willing than customers to offer feedback about what tech works—or doesn’t work. 

LinkedIn started using AI chatbots internally to help with knowledge management, such as helping workers learn about the benefits the company offers to those having a child. LinkedIn is now applying the lessons learned from that tool to share content with customers more effectively.

And while LinkedIn says colleagues are considered “customer zero” for the products it develops, that group is also a bit more tolerant when AI makes a mistake. “From that perspective, we’re a little bit more strict on the customer-facing stuff,” says Tozin.

Ryan Bulkoski, a partner at executive search and management consulting firm Heidrick & Struggles, says companies need to balance how they tackle both internal and external AI use cases and shouldn’t tilt too far in one direction. 

“If you were 90% focused on using AI in your product and making sure that everything is AI-enabled externally, that can be a source of frustration for employees internally [who may be] saying, ‘Why don’t we have the same access to these tools?’” Bulkoski notes. Conversely, organizations can lose a competitive advantage if they are too focused on deploying AI tools for their workforce and not embedding the technology in their products. 

Ultimately, Bulkoski says, the AI strategy should be a board-level discussion, including the need to appoint an executive who will serve as a champion of AI and will be responsible for the execution of that technology. Traditionally, chief information and chief technology officers would be tapped for the next technology revolution. But Bulkoski says that’s changing. 

“More frequently, I’d say in the last 18 to 24 months, we’ve seen a new role created: a chief AI officer who’s reporting directly to a CEO,” says Bulkoski.

At e.l.f. Beauty, chief digital officer Ekta Chopra is exploring internal use cases for generative AI, including an employee support chatbot named Alfred. The cosmetics brand is also exploring generative AI prototypes that are meant to enhance the consumer experience. But e.l.f. Beauty has been more cautious on that front, and the tech isn’t yet live externally. 

“It’s really important to us as we think about our AI ecosystem [that we] keep our purpose in mind: Ethics and AI are so crucial,” says Chopra. “Especially when it comes to beauty; we have to be even more responsible because the way we want to use AI, we want to be transparent, we want to be accountable, and we want to be inclusive.”

A little over a year ago, chipmaker Advanced Micro Devices (AMD) consolidated AI efforts internally to focus on two different audience segments: engineers and everyone else. “We firmly believe that AI as a tool has a lot of potential in terms of harvesting more productivity out of our employee base,” says chief information officer Hasmukh Ranjan.

The internal use cases AMD is exploring today include assistance tools that can make a worker’s job easier and the use of AI to automate certain responsibilities. While the internal use cases of AI aren’t meant to go to market, AMD has established a review committee that includes Ranjan alongside the company’s president, chief software officer, and chief legal officer.

“We have a test framework that we apply to every project, and you have to follow this test framework before it goes into production,” says Ranjan.
