OpenAI and Building Trust
One of the fascinating things about artificial intelligence vis-à-vis communications is that, for all intents and purposes, it's an entirely new field to the public and thus a blank slate for comms—no rule book, no best practices or case studies, no preconceptions (or few, if we're counting Hollywood movies). The comms people at Google, Microsoft, OpenAI, and a handful of startups that have broken through to the public consciousness are building the playbooks for AI, and the public's mental models for the technology, as they go. Pretty fun work.
If you're the head of comms at one of these companies, the playbook you're building isn't (or shouldn't be) most focused on name recognition or brand awareness. Those are important, but your product and coverage of it are most likely driving that for you without you having to do very much.
No, you are (or should be) most focused on trust: earning it, maintaining it, and ultimately banking on it as your company navigates what is sure to be (and already has been) an uneven path with lots of weird twists (say, attempted board coups), turns (perhaps a dispute with a famous actor), and inevitable errors along the way.
To put it more pithily: brand awareness will get you subpoenaed for a Congressional hearing, but trust will get you through the hearing.
One of the questions fundamental to trust-building in AI is, "How'd you train your models, and on what did you train them?" A good answer here should be simple enough to be understood by a wide array of audiences but detailed enough not to look evasive. You don't have to address every individual source of content in follow-up questions, but you should at least be ready to be asked about the most obvious sources of content online.
OpenAI seems to have chosen a different path. CTO Mira Murati was asked by the Wall Street Journal's Joanna Stern in March, and COO Brad Lightcap was asked by Bloomberg's Shirin Ghaffary on stage at Bloomberg Tech this month: Did the training set for your Sora video model include YouTube videos? Neither exec's attempt at answering the question went over well, though for different reasons: Murati for her visceral reaction to the question and vacillation in response, and Lightcap for his prepared dodge.
Now let's be fair: every company has had its moment of an exec fumbling a question and having to clean it up later. OpenAI's comms team is not staffed by neophytes, and incoming head Chris Lehane is as grizzled a veteran of intense public scrutiny as they come. I imagine there were fierce debates among the executive team, the comms team, and probably a gaggle of highly paid outside counsel about how to navigate this question.
Where they seem to have landed, judging by Lightcap's answer, is, "Make lots of soothing noises resembling words about responsibility until the interviewer gets bored, gets the picture we're not going there, and moves on."
That's an understandable choice legally, the kind of best-practice advice you'd get from any Big Law partner whose job is to give the most conservative point of view. But it's a huge whiff when it comes to trust-building. OpenAI should be taking this question and ones like it head on.
The first and most obvious reason to take this particular bull by the horns: everyone knows they did it. Everyone. YouTube seems to have known it. The press has reported on both sides’ knowledge in detail. If Alphabet ever went to court over it, there would likely be voluminous discovery about the technology developed specifically to ingest YouTube videos. (This prospect seems highly unlikely, given how nervous Google and YouTube execs reportedly were about the limits of their own terms of service.) Professing ignorance or stonewalling makes you look incompetent at best and guilty at worst.
The second reason is more integral to OpenAI's ethos. It's clear that at the center of OpenAI's operating beliefs about AI, content, and the open web is the conviction that any content on the public web that isn't explicitly blocking its scrapers is fair game as training data, YouTube included. This is not a universally shared belief, as evidenced by the thicket of lawsuits involving OpenAI. The company is eventually going to have to mount a comprehensive public defense of its approach to training. Why not make that argument publicly before you have to do it in court? To borrow a phrase from sports and culture Substack writer Ethan Strauss, don't apologize: double down.
But let's say I'm being too cavalier here about legal risk, and that all these arguments have been made and rejected by OpenAI's leadership and counselors. The next move for comms is obvious, then: don't put your execs in positions where they have to field hard questions! There is a bevy of outlets that will give you far more sympathetic treatment than the Journal or Bloomberg and won't push you into uncomfortable territory. This is probably something OpenAI should be doing anyway, given how new to top-tier media its non-Sam Altman execs are. Get them reps, build their comfort, and road-test your answers.
You also don't have to do any media at all if you don't want to, though that choice carries reputational downsides, which may be worth exploring in a future post about going direct and eschewing the press.
Lastly, OpenAI addressing this question head-on matters for more than its own sake. Because of its leadership position in AI, its actions and decisions shape public perception of the entire field. If the conventional wisdom becomes that OpenAI isn't trustworthy, it casts a pall, rightly or wrongly, over every other similar project. Most people just aren't that nuanced in their thinking.
I've zoomed in here on just one very specific trust issue OpenAI is facing. The only thing that’s certain is it won’t be the last.