
Amidst controversies, OpenAI insists safety is mission critical


OpenAI has addressed safety concerns following recent ethical and regulatory backlash. The statement, published on Thursday, was a rebuttal-apology hybrid that simultaneously aimed to assure the public its products are safe and admit there's room for improvement.

OpenAI's safety pledge reads like a whack-a-mole response to the multiple controversies that have popped up around it. In the span of a week, AI experts and industry leaders including Steve Wozniak and Elon Musk published an open letter calling for a six-month pause on developing models like GPT-4, ChatGPT was flat-out banned in Italy, and a complaint was filed with the Federal Trade Commission accusing GPT-4 of posing dangerous misinformation risks, particularly to children. Oh yeah, there was also that bug that exposed users' chat messages and personal information.
SEE ALSO:

Nonprofit files FTC complaint against OpenAI’s GPT-4

OpenAI asserted that it works "to ensure safety is built into our system at all levels." The company said it spent more than six months on "rigorous testing" before releasing GPT-4 and that it is looking into verification options to enforce its age requirement of 18 and older (or 13 and older with parental approval). It stressed that it doesn't sell personal data and only uses it to improve its AI models. It also asserted its willingness to collaborate with policymakers and its continued collaborations with AI stakeholders "to create a safe AI ecosystem."

Toward the middle of the safety pledge, OpenAI acknowledged that developing a safe LLM relies on real-world input. It argues that learning from public use will make its models safer and allow OpenAI to monitor misuse: "Real-world use has also led us to develop increasingly nuanced policies against behavior that represents a genuine risk to people while still allowing for the many beneficial uses of our technology."

OpenAI promised "details about [its] approach to safety," but beyond its assurance to explore age verification, most of the announcement read like boilerplate platitudes. There was not much detail about how it plans to mitigate risk, enforce its policies, or work with regulators.
OpenAI prides itself on developing AI products with transparency, but the announcement provides little clarification about what it plans to do now that its AI is out in the wild.


