Direct answer: Whether Sam Altman can be trusted is subjective and depends on how you weigh safety, transparency, and long-term AI governance. Recent reporting has raised serious questions, but experts and institutions remain divided, with strong arguments on both sides.
Context and key points to consider:
- Trust is often tied to governance and oversight. Critics argue that concentrated leadership in fast-moving AI labs can outpace safety measures, while supporters point to the board, independent reviews, and public accountability efforts as essential checks.[2][8]
- High-profile investigations have framed Altman as a highly persuasive leader with ambitious goals. Some interpret that persuasiveness as a risk if it outstrips accountability; others view it as a driver of responsible innovation when paired with external scrutiny.[8][2]
- Public and regulatory responses vary: lawmakers in multiple regions have signaled interest in more transparency and potential oversight, which could either constrain or legitimize leadership decisions depending on implementation.[3][9]
- The reliability of character judgments in online discourse varies widely: reputable journalism has raised concerns about ambition, governance, and safety trade-offs, while other sources emphasize Altman's achievements and contributions to the AI-safety discourse.[1][8]
What to read to form your own assessment:
- The New Yorker investigation “Sam Altman May Control Our Future—Can He Be Trusted?” for a detailed, investigative perspective on leadership, governance, and safety concerns.[9][10]
- OpenAI governance discussions and board dynamics reported in major outlets for insights into checks and balances and how decisions are made in practice.[2][8]
- Reactions from policymakers and industry analysts on transparency and safety safeguards to understand the regulatory context shaping trust.[7][3]
Illustration of how trust is evaluated:
- Trust is higher when there are independent audits, transparent decision-making, and enforceable safety commitments; lower when there’s perceived secrecy or unilateral power without accountability. For example, discussions around internal memos, board dynamics, and safety reviews feature prominently in recent reporting.[10][8]
If you’d like, I can summarize specific passages from the cited articles, compare viewpoints from different outlets, or pull in more recent developments to help you form a nuanced view. I can also provide a balanced pro/con table about leadership in AI labs and the implications for public trust.
Sources
While I do not have sources to hand (so I will not assert this as fact, only as my recollection), I recall Sam Altman himself saying that he did not think he should have control over our future, and that the board was supposed to protect against that; since the 'blip', though, it has been evident that another mechanism is required. I also recall an interview in which Helen Toner suggested that the board effectively ambushed Altman because, had he been given time to respond to the allegations, he could have...
- news.ycombinator.com — "This comes as public fears about the potential impact on humanity by AI continue to grow"
- www.uniladtech.com — "This is the question that The New Yorker asked in their latest investigative article (https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)"
- www.irrationalchange.com — "From the daily newsletter: an in-depth investigation into the OpenAI head Sam Altman by Ronan Farrow and Andrew Marantz."
- www.newyorker.com — "New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI, Ronan Farrow and Andrew Marantz write. In the fall of 2023, Ilya Sutskever, OpenAI's chief scientist, sent secret memos to three fellow-members of the organization's board of directors. For..."
- www.scoop.it — "OpenAI's chief executive Sam Altman is once again under the spotlight, this time after The New Yorker published a damning dossier that combines fresh interviews with a cache of internal memos previously kept under wraps. The piece, co-authored by Ronan Farrow and Andrew Marantz, paints Altman as a c..."
- aipulsen.com — "New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."