Biden, Harris meet with CEOs about AI risks

Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us" on what is most needed to protect and advance society. "What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT - even Biden has given it a try, White House officials said Thursday - has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code. But the ease with which it can mimic humans has prompted governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes. In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress. "We're at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms," Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT. Harris said in a statement after the closed-door meeting that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."

ChatGPT has led a flurry of new "generative AI" tools adding to ethical and societal concerns about automated systems trained on vast pools of data. Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That has made it harder to understand why a chatbot produces biased or false answers to requests, or to address concerns about whether it is drawing on copyrighted works.

Companies worried about being liable for something in their training data might also lack incentives to rigorously track it in a way that would be useful "in terms of some of the concerns around consent and privacy and licensing," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

Source: Azerbaijan State News Agency
