With its latest update, ChatGPT is gaining a set of high-end features.
OpenAI, the San Francisco-based artificial intelligence company behind the large language model ChatGPT, has announced new features for the chatbot. Thanks to these additions, ChatGPT can now see, hear and even speak. We have compiled exactly what each feature does under the headings below.
Voice response
The new voice feature is powered by a text-to-speech model that can produce human-like voices from text and just a few seconds of sample speech. This model lets ChatGPT read its answers aloud.
With it, you will be able to hold a voice conversation with ChatGPT: ask it a question out loud, for example, or have it tell you a story.
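For developers, text-to-speech of this kind is exposed through OpenAI's audio API. The sketch below only constructs the request body and makes no network call; the model name "tts-1" and the voice "alloy" are assumptions drawn from OpenAI's public TTS API, not details given in the announcement.

```python
# Sketch: the request body a developer might send to OpenAI's
# text-to-speech endpoint to turn text into spoken audio.
# No API call is made here; "tts-1" and "alloy" are assumed values.

def build_speech_request(text: str, voice: str = "alloy", model: str = "tts-1") -> dict:
    """Return a text-to-speech request body as a plain dict."""
    return {"model": model, "voice": voice, "input": text}

req = build_speech_request("Once upon a time, there was a curious robot.")
```

The returned dict mirrors the JSON body the endpoint expects, so it can be serialized and sent with any HTTP client.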
Interpret images
OpenAI states that ChatGPT can now interpret the images you send it. For example, it can identify objects in a photo or describe what a scene shows.
According to OpenAI, it can even help you repair a bicycle, telling you which parts need to be replaced or how to carry out the fix.
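Image input of this kind is exposed to developers through OpenAI's Chat Completions API, where an image is attached to a user message as an `image_url` content part. The sketch below only builds the request payload and makes no API call; the model name "gpt-4o", the example URL and the prompt text are illustrative assumptions, not details from the announcement.

```python
# Sketch: constructing a Chat Completions payload that attaches an image
# so a vision-capable model can interpret it. No network call is made;
# the model name and the image URL below are illustrative assumptions.

def build_vision_payload(image_url: str, question: str, model: str = "gpt-4o") -> dict:
    """Return a Chat Completions request body with an image attached."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_payload(
    "https://example.com/bike.jpg",  # hypothetical image URL
    "Which part of this bicycle looks worn and may need replacing?",
)
```

Mixing a text part and an image part in one message is what lets the model answer questions about the picture rather than just describe it.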
Will be able to do reverse image search
With Google Lens, Google's well-known feature, it has long been possible to trace an image's source on the web via reverse image search. ChatGPT is reported to do the same, searching the internet to find the source of a photo sent to it, for example, where it was taken or who took it.
It will be available in two weeks
It was noted that the voice and image features will roll out to Plus and Enterprise users within two weeks.