Sam Altman Answers Lawmakers’ Questions on Regulating AI at US Senate Hearing

Administrator
Apr 20, 2025




Opening the Senate Hearing on Artificial Intelligence

The Senate Judiciary Subcommittee on Privacy, Technology, and the Law called the hearing to order and welcomed the witnesses and audience. The focus was on both the promise and the dangers of artificial intelligence (AI). Lawmakers described AI as one of the most powerful technologies of our time, capable of both great benefit and serious harm, and said their goal is to ensure it is used safely, transparently, and responsibly, so that people in America and around the world understand what the future may hold.

The main witnesses at the hearing were Sam Altman, CEO of OpenAI; Christina Montgomery, IBM's Chief Privacy and Trust Officer; and Gary Marcus, Professor Emeritus at New York University. Each was thanked for participating.

Sam Altman’s Opening Statement

Sam Altman introduced himself as the CEO of OpenAI. He said OpenAI was started because AI could improve almost all parts of life, but it also brings serious risks that must be managed by working together. He said, "People love this technology," and compared it to the printing press. He stressed that everyone should work as a team to make AI beneficial.

Altman explained that OpenAI is a "capped-profit" company. This means there is a limit on how much profit investors can make. The company was founded to build safe and helpful artificial general intelligence. He emphasized that AI should reflect democratic values, and that its benefits should be shared widely.

Altman said government regulation is essential to manage the risks of increasingly powerful AI models. He suggested that the US could require licensing and testing before the release of models above a certain capability threshold, along with strong incentives for safety and security. Companies, he said, should have to test AI systems before release, disclose known risks to the public, and allow independent audits. He expressed his willingness to help lawmakers regulate AI systems.

Demonstration: AI Voices and Statements

The hearing continued with a demonstration. The subcommittee chair, Senator Richard Blumenthal, played an audio recording in which a computer-generated clone of his voice read remarks drafted by ChatGPT, OpenAI's language model. He explained that neither the voice nor the words were his own; both were produced by AI. He called it a sign of the realities ahead: AI can do positive things but also cause real harm, so rules are needed to govern its development.

Senators Ask About AI Risks

Senator Blumenthal asked Sam Altman about the biggest risks of AI. Altman answered that his worst fear is that the technology causes significant harm to the world. He said it could go quite wrong if not managed well, and that OpenAI wants to work with the government to prevent that. He stressed the need to be clear-eyed about the possible harms and to work hard to reduce them.

Blumenthal then asked what Congress should do. Altman repeated that if AI goes wrong, serious trouble could follow. He again suggested that Congress consider licensing and testing requirements for strong AI models, plus safety and security incentives. He repeated the need for testing systems before release, disclosing known risks, and independent audits.

Questions from Other Senators

Senator Hawley said Altman had compared AI’s impact to the invention of the printing press or electricity. He asked what this means. Altman said he believes AI will be transformative, creating new jobs, improving current ones, and making life better. But he added that new risks will come, and careful management is needed.

Hawley followed up on what the risks are. Altman listed disinformation, job loss, bias, discrimination, and use by bad actors. He said dealing with these risks requires teamwork.

Hawley asked if Congress should regulate AI. Altman said yes—he supports rules for safety and transparency, as well as independent checks on AI systems.

Senator Klobuchar asked about threats to democracy, such as deepfakes and disinformation. Altman answered that these are real concerns, especially for elections and information spreading. He said OpenAI is working on tools to detect and reduce these dangers. He added that oversight and regulation are important.

Senator Graham asked whether a new agency should be created to regulate AI. Altman said the idea is worth considering; whether a new or an existing agency takes on the role, the most important thing is that the rules stay flexible enough to keep pace with the technology.

Senator Booker asked about privacy risks. Altman said privacy is very important to OpenAI. The company works to protect user privacy and be transparent about data use. He asked for clear rules and standards about privacy.

Senator Kennedy questioned whether AI will take jobs from people. Altman replied that there will be effects on jobs—some will disappear, some new ones will arise, and many will evolve. He said it’s important to help people adjust to these shifts and to offer support to affected workers.

Senator Padilla asked about risks to national security. Altman responded that national security matters a lot. OpenAI is working with the government and others to prevent AI from being misused. Strong protections and oversight are needed.

Senator Blackburn brought up risks to children. Altman said keeping children safe is very important, and that OpenAI wants its systems to be safe for kids and is working with parents, teachers, and lawmakers on this issue.

Conclusion of the Hearing

The committee chair thanked all the witnesses for their answers and their willingness to work together on these important issues. The hearing closed with a reminder that the goal is to make sure AI is developed and used in ways that are safe, open, and responsible.