Ready or not, artificial intelligence (AI) is already
permeating the business world, presenting a host of opportunities and, if AI isn't
approached intelligently, an accompanying host of risks. AI's lure lies in its
capacity to collect and learn from data, which is indeed revolutionary, but its
implications extend well beyond having the right data at the right time and
deploying it well.

NACD, in partnership with Grant Thornton, hosted an October 29 roundtable discussion in Naples, Florida, for directors wanting to better understand the implications of this rapidly expanding technology and the board's role in overseeing how it is implemented and managed within an organization. Over the next two weeks, the NACD BoardTalk blog will feature highlights from this discussion.

Nichole Jordan, Grant Thornton’s national managing partner of
markets, clients, and industry, led the conversation by breaking down the
concept of AI into three questions boards should consider:

1. What is our understanding of our company's digital transformation strategy?
2. Are we leveraging technology for our board work?
3. How is our company staying ahead of regulations?

A digital transformation strategy hinges on the people a
company has in place to deliver on it, according to Jordan, and AI can be a
differentiator in a marketplace clamoring to attract and retain top talent. For
example, some companies are using artificial emotional intelligence to monitor
employee engagement, make better-informed decisions, and drive greater
business value.

In the financial services industry, for instance, the responsible
company must pay financial penalties when trading errors occur. These
errors are common, and understandable, because the people responsible for
executing trades constantly operate under high-stress conditions.
Innovations in wearable technology could be used to notify employees when
they are in a heightened state of stress and encourage them to slow down, or
to wait before making a decision, in the interest of avoiding an error.

That same wearable technology could be used to monitor an
employee's facial expressions and vocal cadence, which could lead to better
business outcomes and, as one director observed, more effective coaching and
feedback in a call-center context. Other directors noted that AI could support
employee safety and compliance, such as monitoring time on
the road in the trucking industry, where drivers are limited to no
more than 11 hours of driving per day.

These possibilities do raise ethics and compliance issues,
however: the same capabilities could also be seen as invasions
of privacy. Many of the AI programs now being piloted to improve employee
performance are opt-in only, meaning the employee must consent before the company
collects their personal information in this way. Multiple attendees also
expressed concerns about the hiring phase, in which AI could ostensibly be used
to screen for people who fit the company's current mold, potentially perpetuating
or introducing discriminatory hiring practices and denying the company
the game-changing talent it had hoped to attract.

Here, it’s critical to remember that AI is only as good as
the algorithms that underpin the system. “This is the risk here—and also one of
the reasons why these systems are in pilot mode,” Jordan said. “But it’s also
why the combination of the human and the machine leads to the very best
outcome.”

Jordan emphasized the need to mindfully temper technology
with human discretion and judgment:

“AI provides data points for a hiring manager to consider or
can reduce a significant volume of applications, and industries where
there is a high job application volume are where we see this technology being
tested right now.”

“But,” as one director observed, “there are so many
mom-and-pop shops that don’t bring enough sophistication to the table that they
run a huge risk of making some significant errors.”

“And it’s not just hiring,” another director added. “It’s
promotions from within and making judgment calls. I’m concerned about biases
and missed opportunities.”

Jordan noted that this risk is precisely why an AI strategy is
required at the board level. “While the company may not be engaging with AI
today, there should be a discussion about when it will be incorporated into the
strategy,” Jordan said, “or, at least, have some outside organizations come in
and talk with you, because it’s good for boards to get that outside
perspective.”

Visit the NACD
BoardTalk blog next week for additional coverage of this discussion, including
insights on how boards are using AI to approach their work and the regulatory
concerns around this rapidly evolving technology.