Tech CEOs Told by White House They Have ‘Moral Duty’ Regarding AI
Senior executives from major US technology companies were told at a White House meeting that they have a "moral" duty to ensure that artificial intelligence does not harm society.
At the invitation of US Vice President Kamala Harris, the chief executives of Google, Microsoft, OpenAI, and Anthropic attended a two-hour meeting on Thursday focused on the development and regulation of artificial intelligence.
US President Joe Biden made a brief appearance at the gathering and told the CEOs that the work they were doing had "enormous potential and enormous danger."
In a video later posted by the White House, Biden can be heard saying, "I know you understand that," adding that he hoped the executives could educate the administration on what is most needed to protect society while still advancing the technology.
Following the meeting, Harris said in a statement that technology companies must "comply with existing laws to protect the American people" and "ensure the safety and security of their products."
According to the White House, the meeting featured a "frank and constructive discussion" of the need for technology companies to be more transparent with the government about their AI systems, to evaluate the safety of their products, and to protect them from malicious attacks.
After the meeting, OpenAI CEO Sam Altman told reporters that "we're surprisingly on the same page on what needs to happen."
The meeting coincided with the Biden administration's announcement that it would invest $140 million in seven new AI research institutes, establish an independent committee to carry out public assessments of existing AI systems, and develop guidelines on the federal government's use of AI.
The astonishing pace of progress in artificial intelligence has generated excitement in the technology world alongside fears that the technology could slip beyond its developers' control and cause harm to society.
Although AI is still in its infancy, it has already been implicated in several controversies, including the spread of fake news, the creation of non-consensual pornography, and the case of a Belgian man who reportedly died by suicide after being encouraged to do so by an AI-powered chatbot.
More than one-third of the 327 natural language processing experts surveyed in 2022, in results cited by Stanford University, said they believed AI could lead to a "nuclear-level catastrophe."
Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among the 1,300 signatories of an open letter in March calling for a six-month pause on the training of powerful AI systems. The letter argued that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."