In response to criticism, OpenAI will no longer use customer data to train its models by default

As the ChatGPT and Whisper APIs launch this morning, OpenAI is changing the terms of its API Developer Policy, aiming to address developer and user criticism.
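For context, the newly launched ChatGPT API uses a chat-style request format. A minimal sketch of a request body is below; the model name and endpoint match what OpenAI documented at launch, while the prompt text is purely illustrative (actually sending the request requires an API key, which is omitted here).

```python
import json

# Minimal request body for the ChatGPT API's chat completions endpoint.
# "gpt-3.5-turbo" is the model name OpenAI announced at launch.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Say hello."},
    ],
}

# Sending it is an HTTP POST to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <API_KEY>" header; shown here as data only.
print(json.dumps(payload, indent=2))
```

Under the revised policy described below, data submitted through this endpoint is no longer used for model training by default.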

As of today, OpenAI says it will not use data submitted through its API for “service improvements,” including AI model training, unless a customer or organization explicitly opts in. The company is also implementing a 30-day data retention policy for API users, with options for stricter retention “depending on user needs,” and simplifying its terms around data ownership to make clear that users own the inputs and outputs of the models.

Greg Brockman, the president and chairman of OpenAI, argues that some of these changes are not really changes at all: it has always been the case that OpenAI API users own their input and output data, whether text, images or otherwise. But emerging legal challenges around generative AI, along with customer feedback, prompted a rewrite of the terms of service, he says.

“One of our biggest focuses is figuring out: How do we become super-friendly to developers?” Brockman told AapkaDost in a video interview. “Our mission is to really build a platform for others to build businesses on.”

Developers have long objected to OpenAI’s (now outdated) data handling policies, which they said posed a privacy risk and allowed the company to profit from their data. In one of its own helpdesk articles, OpenAI advises against sharing sensitive information in conversations with ChatGPT because it “cannot remove specific prompts from [users’ histories].”

By allowing customers to opt out of having their data used for training and by offering more data retention options, OpenAI is clearly trying to broaden its platform’s appeal. It is also trying to scale up massively.

To that last point, in another policy change, OpenAI says it will be removing its current developer pre-launch review process in favor of a largely automated system. Over email, a spokesperson said the company felt comfortable moving to this system because “the vast majority of apps were approved during the review process” and because the company’s monitoring capabilities have “significantly improved” since this time last year.

“What has changed is that we have moved from a forms-based pre-screening system, where developers wait in a queue to have their app idea draft approved, to a post-hoc detection system where we identify and investigate problematic apps by monitoring their traffic and investigating as warranted,” the spokesperson said.

An automated system eases the burden on OpenAI’s review staff. But it also allows the company — at least in theory — to approve developers and apps for its APIs in higher volumes. OpenAI is under increasing pressure to turn a profit after a multibillion-dollar investment from Microsoft. The company reportedly expects to earn $200 million in 2023, a pittance compared to the more than $1 billion poured into the startup to date.
