The Elevation of “Trust”

In the United States, we have never had less confidence in companies and institutions. Across the 9 institutions Gallup has rated every year since it began its annual poll, average confidence fell from 48% in 1979 to 26% in 2023. The confidence Americans expressed last year in big business (14%), banks (26%), large technology companies (26%), newspapers (18%) and television news (14%) is abysmal.(+)

That said, we are about to enter a new era where “trust” becomes more important and far more technology sits at the center of it. Humans are not going away, but more firms will benefit as increased speed, intelligence and automation augment interactions with their customers, and as AI technologies streamline and improve their business practices… but AI will also be used to attack our companies and institutions. The way technology companies have changed their orientation around trust and safety may provide some useful insight into what will happen in the coming years.

At large tech companies, ‘trust and safety’ or ‘integrity’ used to be seen as an understaffed backwater. The small engineering teams who toiled in relative quiet in this less-prestigious area often cared deeply about solving these problems, but many of these groups had been under-resourced for a long time. The area was seen as a cost center, diverting resources that could instead be used to develop core product features and functionality that drove user growth and engagement. With fewer constraints and checks from integrity teams slowing them down, product teams could ship new features and cycle through experiments more quickly. The work was heavily operations- and outsourcing-driven, relying on vendors like Accenture, Cognizant or Genpact, whose thousands of workers reviewed content according to guidelines given to them by the platforms(*).

The view of this area’s importance started to shift in 2017, as platforms responded to pressure from the public, advertisers, and policymakers to crack down on platform abuse, especially on social networks that took a “default to open” approach to connecting people with each other and with businesses. Misinformation spread during the 2016 US election contributed heavily to this shift. Companies realized they were facing:

  1. Reputational damage: Neglecting integrity issues like harassment, misinformation, and illegal activity could lead to negative press, erode user trust, and damage a company's reputation.

  2. Regulatory risks: As governments scrutinized tech more closely, lack of strong integrity controls left companies vulnerable to fines, lawsuits, and regulatory action.

  3. Negative societal impact: Failure to responsibly manage online platforms could enable real-world harms like violence, election interference, and erosion of democracy.

  4. Unsustainable growth: Focusing only on engagement without safeguards against abuse was short-sighted and could make platforms less appealing to advertisers and lead to churn as users got fed up with toxicity.

At Facebook we called all these teams fill-in-the-blank “integrity,” but I myself worked more on “trust” (ads and businesses) than on “safety” (societal harms, organic content): I led product for a team called Business Integrity, and we had to grow very quickly. In 2017 and 2018, as an increasing number of technologists worked to build out these integrity teams, we had to convince great engineers, machine learning PhDs, product managers (PMs) and others to join our efforts instead of working on things like camera filters, growth, newsfeed ranking and shipping more “fun” features. Early on I was lucky enough (I needed the help!) to get senior company leaders to help ‘sell’ PM candidates on working in this area. For a while, Facebook also required generalist engineers and PMs not yet attached to a specific team to at least talk to and consider integrity teams. Eventually the pendulum swung and we got a lot of inbound interest from smart people who knew how important this area was to the company (and the world).

[Note: Over the last three years amidst widespread tech layoffs, there have been media reports of people leaving integrity/trust & safety teams, and companies reducing some of their resourcing in these areas(**). Although I hear anecdotes from industry peers and former team members from time to time about lowered standards and staffing, I don’t have any special analytic insights on this.]

Automation by machines, combined with AI-generated content streaming into connected touchpoints from outside parties, will become far more widespread. As I wrote in April 2023, a much greater range of companies will “need to detect not only the use of AI/fake/auto-generated content, but also the automation of interactions into many places they may not have been monitoring before”.
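To make that concrete, here is a minimal, hypothetical sketch of what such detection might look like: it combines a simple per-account velocity check with a score from an AI-content classifier. The names, thresholds, and the classifier itself are illustrative assumptions, not any particular platform’s implementation.

```python
# Hypothetical sketch: flag interactions that look automated or AI-generated.
# Names, thresholds, and the classifier are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Interaction:
    account_id: str
    text: str
    timestamp: datetime


def ai_content_score(text: str) -> float:
    """Placeholder for an AI-generated-content classifier (returns 0.0-1.0)."""
    return 0.0  # assume a real model is plugged in here


def looks_automated(history: list[Interaction],
                    max_per_minute: int = 10,
                    ai_threshold: float = 0.9) -> bool:
    """Combine a simple velocity check with a content-classifier score."""
    if not history:
        return False
    latest = history[-1]
    window_start = latest.timestamp - timedelta(minutes=1)
    recent = [i for i in history if i.timestamp >= window_start]
    too_fast = len(recent) > max_per_minute            # burst of activity
    likely_ai = ai_content_score(latest.text) >= ai_threshold
    return too_fast or likely_ai
```

In practice a team would tune the thresholds per surface and feed flagged interactions into a review or enforcement pipeline; the point is simply that the monitoring has to cover both the content and the behavior.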

Firms will need a real in-house understanding of how these interactions affect their business, their users, and their customers. That means engineering and product management leadership for trust initiatives will move to the forefront at large enterprises. As large language models (and other AI) become increasingly useful, the center of gravity for trust and safety initiatives will move even further away from humans and towards automation. The leaders who run these teams will need stronger technical chops than in the past, when operations experience was at more of a premium. Furthermore, since product changes that prioritize user trust or integrity often come with growth or usage tradeoffs, those engineering and product leaders will need a better understanding of the “revenue side” of their employer’s business, and will need to be in closer sync with their counterparts there.

For many larger companies, this could imply:

  • Some operations teams may need to shift to report to engineering 

  • Companies will need more cross-cutting product reviews that account for safety earlier in the process. A cross-functional product review structure that includes representatives from legal, policy, operations, engineering, and product on the revenue side of the company would help ensure that trust and safety measures are balanced with the company's growth and innovation objectives

  • Growth/revenue-oriented teams may need ‘trust’ representatives embedded to more quickly address potential concerns 

  • Pragmatic thinkers become crucial. Someone who has worked on both monetization and abuse prevention can help a team iterate faster; being dogmatic is not useful.

CEOs and other company leaders will have to address trust issues publicly. They’ll need trust/integrity leaders reporting to them, or other mechanisms that let them stay abreast of the latest developments there. Company leaders will need to know how their trust initiatives (both defensive and trust-building) compare to their peers’, so measurement and data science will be key to ensuring machines and people work optimally together.
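As one illustration of the measurement side, here is a minimal sketch of a common trust and safety metric: estimating the prevalence of violating content from a random sample of human-labeled items, with a rough confidence interval. It uses a simple normal approximation and is not any specific company’s methodology.

```python
# Hypothetical sketch: estimate the prevalence of violating content from a
# random sample of labeled items, with a rough 95% confidence interval.
import math


def prevalence_estimate(labels: list[bool]) -> tuple[float, float, float]:
    """labels[i] is True if the sampled item was judged violating."""
    n = len(labels)
    if n == 0:
        raise ValueError("need at least one labeled sample")
    p = sum(labels) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
    return p, max(0.0, p - margin), min(1.0, p + margin)


# Example: 40 violating items found in a sample of 2,000 reviewed posts
estimate, low, high = prevalence_estimate([True] * 40 + [False] * 1960)
print(f"prevalence is about {estimate:.2%} (95% CI {low:.2%} to {high:.2%})")
```

Tracking a number like this over time, and comparing it across surfaces or against peers, is one way leadership can see whether the combination of machines and people is actually working.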

Abuse cycles will become faster and more adversarial. As bad actors increasingly leverage AI tools themselves, they’ll be able to iterate their abuse strategies quickly. Teams will have to react more quickly and have proactive strategies they can deploy to mitigate risks. The tools our teams use will benefit greatly from ML- or AI-assisted workflows that can make our people superhuman.
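As a sketch of what an ML-assisted workflow could look like, the snippet below shows a review queue that surfaces the highest-risk reports to human reviewers first, using a score from an abuse classifier. The class names, fields, and scoring model are hypothetical.

```python
# Hypothetical sketch: an ML-assisted review queue that surfaces the
# highest-risk reports to human reviewers first.
import heapq
from dataclasses import dataclass, field
from typing import Optional


@dataclass(order=True)
class QueuedReport:
    priority: float                      # negated risk score, so highest risk pops first
    report_id: str = field(compare=False)


class ReviewQueue:
    def __init__(self) -> None:
        self._heap: list[QueuedReport] = []

    def add(self, report_id: str, risk_score: float) -> None:
        """risk_score in [0, 1], e.g. from an abuse classifier."""
        heapq.heappush(self._heap, QueuedReport(-risk_score, report_id))

    def next_for_review(self) -> Optional[str]:
        """Return the highest-risk report still awaiting human review."""
        return heapq.heappop(self._heap).report_id if self._heap else None


queue = ReviewQueue()
queue.add("report-123", risk_score=0.97)
queue.add("report-456", risk_score=0.15)
print(queue.next_for_review())  # report-123 goes to a reviewer first
```

The design choice here is simply triage: models do the ranking so humans spend their limited review time on the items most likely to cause harm.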

When we connected all the machines and all the people as the Internet rose to prominence, security evolved from a back-office function within “information technology” into a C-level role (Chief Information Security Officer) at many companies, sometimes instead of, or in addition to, the broader CIO (Chief Information Officer) role. Over the last 20+ years, the CISO role evolved from a largely technical position focused on securing IT infrastructure into a strategic, business-oriented role responsible for managing risk, ensuring compliance, and aligning security initiatives with organizational goals. Over time, I could see a senior role emerge focused on “Trust” more broadly - and I like this idea more than a role focused solely on AI. AI will often be a mechanism or tool that affects trust rather than an end in itself.

The entire concept of “trust” is going to undergo a transformation, and we’ll need smart leaders who can combine deep technical understanding with a pragmatic business orientation to help companies evolve and adapt to this fast-moving new world.


-----

Notes:

(+) As an aside, the poll finds “small business enjoying the most public trust, with 65% of Americans having a great deal or fair amount of confidence in it”. We likely have an overly romanticized, “Americana” view of the mom-and-pop businesses that serve our local communities, and we tend to trust them more.

(*) Reporters like Casey Newton (then writing for The Verge) covered the working conditions of many of these reviewers in articles like “The Trauma Floor” (2019).

(**) CNBC, “Tech layoffs shrink ‘trust and safety’ teams, raising fears of backsliding efforts to curb online abuse” (2/10/23); Bloomberg, “Tech layoffs ravage the teams that fight online misinformation and hate speech” (5/26/23); Bloomberg, “Google Trims Jobs in Trust and Safety While Others Work ‘Around the Clock’” (3/1/24)
