Get Started with GPT-4o in Tune Studio: 3 Easy Steps

May 14, 2024

2 min read

OpenAI has just released GPT-4 Omni, or GPT-4o, the newest addition to the GPT family. We are proud of our tradition of providing these advanced LLMs within hours of their release, and today is no different.

Aimed at more natural human-computer interaction, the new unified model accepts text, audio, and image inputs and can generate outputs in any of these formats. With its signature improvements in audio synthesis and generation, the model achieves near-human response times.

Are you excited to explore GPT-4o’s capabilities? Visit Tune Studio to seamlessly deploy and experiment with this advanced model in three simple steps!

Step 1: Get your OpenAI API Key

Visit the OpenAI Dashboard to find your API Key.

Step 2: Deploy GPT-4o in Tune Studio

Next, head to the "Models" tab in Tune Studio and click "Deploy model". Choose "OpenAI" as the source, select "gpt-4o", and give your model a meaningful name, such as "my-gpt-4o". Click "Add Model" to complete the deployment process.

Step 3: Experiment with GPT-4o in the Playground

Find your newly added "my-gpt-4o" model and click "Open in Playground" to experiment with its capabilities.
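Once deployed, the model can also be queried programmatically. The sketch below assumes Tune Studio exposes an OpenAI-compatible chat-completions endpoint; the URL, header name, and `TUNE_API_KEY` environment variable are assumptions for illustration, so check your Tune Studio dashboard for the exact values.

```python
import os
import json
from urllib.request import Request, urlopen

# Assumed OpenAI-compatible chat-completions endpoint (check your dashboard).
TUNE_API_URL = "https://proxy.tune.app/chat/completions"


def build_request(model: str, prompt: str, api_key: str) -> Request:
    """Build a chat-completion request for a deployed Tune Studio model."""
    payload = {
        "model": model,  # the name chosen at deploy time, e.g. "my-gpt-4o"
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        TUNE_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": api_key,  # header name is an assumption
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    key = os.environ.get("TUNE_API_KEY", "")
    req = build_request("my-gpt-4o", "Hello, GPT-4o!", key)
    if key:  # only send the request when a key is configured
        with urlopen(req) as resp:
            body = json.loads(resp.read())
            print(body["choices"][0]["message"]["content"])
```

The request/response shape mirrors OpenAI's Chat Completions format, so any OpenAI-compatible client library should work as well if you point it at the Tune endpoint.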

Try Now: Tune Studio

For the more inquisitive, we will shortly publish a blog post comparing GPT-4o's capabilities against other leading models in text, image, and audio generation!

Written by

Aryan Kargwal

Data Evangelist