The rapid pace of technological advancement is causing tectonic shifts in the business risk landscape. Social media and artificial intelligence (AI) in particular are prompting directors to reconsider how they think and talk about risk. Consequently, these topics were the focus of the first part of a roundtable discussion on the next generation of risk hosted by EisnerAmper LLP and the National Association of Corporate Directors (NACD) in New York last week.

Examples abound of companies that sustained severe reputational damage after being caught at the center of a social media storm. Most recently, credit reporting company Equifax made headlines after disclosing a major data breach that compromised the information of roughly half of the U.S. population. The company’s offer of free credit monitoring to affected customers only made matters worse: several print and digital news outlets, including The New York Times, analyzed the terms of the offer, which suggested that by signing up for the service, a person relinquished his or her right to take legal action against Equifax. While the company later changed the legal language in another effort to assuage public concern, reestablishing its trustworthiness may prove an uphill battle.
“Some of these things would have always been in the news, but the amount of time and the quickness with which news reaches an audience is unbelievable,” EisnerAmper Audit Partner Steven Kreit observed. “Boards need to make sure there’s a social media strategy throughout the company. Boards need to ask management what it has planned for and make sure they can react to those issues as they come up. It’s also important to have policies around social media. What is the CEO allowed to say? Are they allowed to have personal accounts and use that to disseminate company information?”
When attendees were asked if they knew their company’s social media policy backwards and forwards, few indicated that they did, though there was some debate over how necessary that level of familiarity is. “I don’t think it’s appropriate for a board member to know the details of what the policy is,” one director opined. “What the board needs to know is that there’s a policy and that employees know what they can and cannot say about the company.”
Kreit agreed. “You don’t want to get too far into the weeds,” he said, “but a CEO may react to something in the middle of the night and that response may harm the company. And board members need to make sure the company doesn’t get hurt.”
While most of the discussion focused on preparing for the worst, one attendee observed that a response plan deployed effectively against negative feedback on social media can not only curb a damaging situation but also help restore trust in the company.
Discussion then turned to AI. Some companies are ahead of the curve in applying technology that can parse massive amounts of data and draw conclusions from it. Consider, for example, IBM’s Watson, the supercomputer that famously competed on the game show Jeopardy!; facial recognition software; and self-driving cars. The risk is that AI is advancing so rapidly as a disruptor across nearly every industry that a company not paying attention now will be left in the dust by the competition later. But AI is a broad subject area, and identifying the elements most relevant to a board agenda, namely the risks, can initially seem daunting.
“These are conversations I rarely hear discussed around the boardroom table,” Kreit remarked. “And these are risks that keep changing.”
“An interesting exercise is to look at risk factors in public disclosures,” one attendee said. “We look at competitors and it’s easy to see what risks they are identifying in the same industry.”
“In the conversations I’ve had, it isn’t so much about whether the machine will do its own thing and crush humans as much as asking what fundamental technology are we not using to help us be more competitive and customer-focused,” one attendee offered. “The other thing is, technologists sometimes rely too much on technology. At some point, a human being has to put subjectivity in the mix to make sure the automated methodology you employed doesn’t come back and bite you. This conversation comes through the CISO [chief information security officer] on my board as well as the CTO [chief technology officer] together.” Another director remarked that these discussions take place on the audit committee level.
“It’s important to not think about technology and risk without it being an integral part of the strategy discussion,” another director chimed in. “If it isn’t, I think it becomes an academic conversation and you’re walking ahead with one eye open and one eye closed.”
To this end, and in closing this portion of the roundtable, another attendee remarked on how board composition is critical in positioning the board to oversee this issue in the years ahead. “If you don’t have enough forward-looking people with experience from other industries, you’re doomed. Look at who you’re working with and have some sense of what you are [as an organization], what you want to be, and how you’re going to get there.”
Next week, the NACD Board Leaders’ Blog will feature roundtable discussion highlights that explore geopolitical and regulatory risks.