OpenAI brings fine-tuning to GPT-3.5 Turbo



OpenAI customers can now bring custom data to GPT-3.5 Turbo, the lightweight version of GPT-3.5, making it easier to improve the text-generating AI model’s reliability while building in specific behaviors.

OpenAI claims that fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, the company’s flagship model, on “certain narrow tasks”.

“Since the release of GPT-3.5 Turbo, developers and enterprises have been asking for the ability to customize the model to create unique and differentiated experiences for their users,” the company wrote in a blog post published this afternoon. “This update gives developers the ability to customize models that perform better for their use cases and run those custom models at scale.”

With fine-tuning, companies using GPT-3.5 Turbo through OpenAI’s API can make the model follow instructions better, for example by having it always respond in a given language. Or they can improve the model’s ability to consistently format responses (e.g., for completing snippets of code), as well as hone the “feel” of the model’s output, such as its tone, so that it better fits a brand or voice.
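To make that concrete, here is a minimal, hypothetical sketch of what chat-formatted training data for this kind of customization could look like, following the JSONL structure OpenAI documents for GPT-3.5 Turbo fine-tuning (a `messages` list of `system`, `user` and `assistant` turns). The behavior being taught here, always answering in German with a terse, formal tone, and the file name are invented for illustration.

```python
import json

# Hypothetical training examples: each line of the JSONL file is one chat
# transcript demonstrating the behavior the fine-tuned model should adopt
# (here: always answer in German, in a short, formal tone).
examples = [
    {
        "messages": [
            {"role": "system", "content": "Du bist ein knapper, formeller Assistent."},
            {"role": "user", "content": "What time does the store open?"},
            {"role": "assistant", "content": "Das Geschäft öffnet um 9 Uhr."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Du bist ein knapper, formeller Assistent."},
            {"role": "user", "content": "Can I return an item without a receipt?"},
            {"role": "assistant", "content": "Nein, eine Rückgabe ist nur mit Beleg möglich."},
        ]
    },
]

# Write one JSON object per line, as the fine-tuning endpoint expects.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```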

Additionally, fine-tuning allows OpenAI customers to shorten their text prompts, which can speed up API calls and reduce costs. “Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself,” OpenAI claims in the blog post.

Fine-tuning currently requires preparing the data, uploading the necessary files and creating a fine-tuning job through OpenAI’s API. All fine-tuning data must pass through a “moderation” API and a GPT-4-powered moderation system to determine whether it conflicts with OpenAI’s safety standards, the company explains. OpenAI plans to launch a fine-tuning UI in the future, with a dashboard to check the status of in-progress fine-tuning workloads.
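As a rough sketch of that flow, the snippet below uploads a prepared JSONL file, starts a fine-tuning job on GPT-3.5 Turbo and polls its status. It assumes the 1.x interface of OpenAI’s Python SDK and an `OPENAI_API_KEY` in the environment; the file name and the polling loop are illustrative, not taken from OpenAI’s announcement.

```python
import time
from openai import OpenAI  # assumes the openai Python package, 1.x client interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the prepared training data (JSONL of chat examples).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job against GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Poll until the job finishes; the resulting model name can then be used
#    with the chat completions endpoint like any other model.
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)

print(job.status, job.fine_tuned_model)
```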

Fine-tuning pricing is as follows:

  • Training: $0.008 / 1,000 tokens
  • Usage input: $0.012 / 1,000 tokens
  • Usage output: $0.016 / 1,000 tokens

Tokens represent chunks of raw text, for example “fan”, “tas” and “tic” for the word “fantastic”. A GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens, or about 75,000 words, would cost about $2.40, according to OpenAI.
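As a back-of-the-envelope check against the rates above, the sketch below reproduces that figure. Note that the $2.40 total only works out if the 100,000-token file is passed over about three times during training, so an epoch count of three is assumed here; the per-token rates are the published ones, the epoch assumption is ours.

```python
# Back-of-the-envelope fine-tuning cost estimate using the published rates.
TRAINING_RATE = 0.008 / 1_000   # $ per training token
INPUT_RATE = 0.012 / 1_000      # $ per input token when calling the model
OUTPUT_RATE = 0.016 / 1_000     # $ per output token when calling the model


def training_cost(file_tokens: int, epochs: int = 3) -> float:
    """Cost of the fine-tuning job: tokens in the file times passes over the data."""
    return file_tokens * epochs * TRAINING_RATE


def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one call to the resulting fine-tuned model."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE


# 100,000-token training file (~75,000 words), three assumed epochs -> $2.40.
print(f"${training_cost(100_000):.2f}")
# Illustrative inference call: 500 prompt tokens in, 200 tokens out.
print(f"${usage_cost(500, 200):.4f}")
```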

Separately, OpenAI today released two updated GPT-3 base models, babbage-002 and davinci-002, which can also be fine-tuned and come with pagination support and “more extensibility”. As previously announced, OpenAI plans to retire the original GPT-3 base models on January 4, 2024.

OpenAI said fine-tuning support for GPT-4, which, unlike GPT-3.5, can understand images in addition to text, will arrive later this fall, but didn’t provide details beyond that.
