GPT-4 has just been released, and it already has a mobile app!
GPT-4 can help visually impaired people “see” the world in front of them: they upload an image and ask questions about it by voice.
Real-time interpretation, anytime and anywhere, as natural as a chat conversation.
For example, say you want to change your clothes but don’t know what color they are.
Just take a photo and upload it to GPT-4, and in no time it can describe the garment’s texture, color, and shape:
Previously, visually impaired people had to rely on people around them or on volunteers to identify objects beyond what they could tell by touch.
This is one of the most amazing GPT-4 apps we’ve ever seen!
So, with the addition of image understanding, what capabilities does GPT-4 demonstrate?
Built on GPT-4’s image-reading ability
The new GPT-4-based feature is called Virtual Volunteer from the app Be My Eyes.
So far, with GPT-4 on board, Virtual Volunteer handles tasks such as asking for directions, navigation, menu reading, and product search well.
For example, asking for directions.
Simply take a picture of your surroundings and ask GPT-4 by voice how to get where you’re going; it produces a complete route and reads it aloud to the user:
Or product search: show GPT-4 a description or even just the shape of a product, and it can look up and describe the product’s function, instructions, and usage.
Another example is public navigation.
If you’re at the gym but can’t tell where the free equipment is, just take a picture of the room and GPT-4 will guide you to an empty machine.
Of course, there’s ordering food, using vending machines to buy drinks, searching for the name of a certain plant, giving fashion advice…
Tell GPT-4 what you need, and it can help visually impaired users get it done.
For now, the feature is in beta; iOS users can join the waiting list in the App Store, and an Android version is coming soon.
Its app, Be My Eyes, is a social app for the blind.
It started as a support community in 2012 and was launched for iOS in 2015, followed by an Android app.
The app connects two groups: volunteers and blind users. Volunteers receive photos or videos sent by blind users and help them through live voice calls. So far it has served over 450,000 visually impaired users with the help of more than 6.3 million volunteers.
As a volunteer, you just need to stay online and be ready to answer calls from visually impaired users.
As a visually impaired user, you can call a volunteer, or ask a professional for help, whenever you need it:
Now, with Virtual Volunteer, blind users can also call on GPT-4 as a “virtual volunteer,” with no worry that no one will pick up late at night.
The company even worked in a pun, AI→Eyes: “Let AI be your eyes.”
Also an answer bot on the “American Zhihu,” Quora
Of course, Be My Eyes isn’t the only app vying to plug into GPT-4.
For example, it is now possible to chat with GPT-4 on Poe, the chatbot app from Quora (the “American Zhihu”), though replies are limited to one sentence:
DoNotPay, an AI lawyer software, has also plugged into GPT-4 and plans to use it to launch a “one-click litigation” service.
With this service, you can fight back against unwanted scam calls with one click.
If you receive a scam call, you just press a button: the entire call is recorded, and a 1,000-word lawsuit seeking $1,500 in damages is generated. (For now, it’s only available in the U.S.)
It’s worth noting that DoNotPay CEO Joshua Browder said they had previously built a similar feature on GPT-3.5, but it didn’t work well enough; GPT-4 finally does.
There have even been attempts to use GPT-4 for drug discovery
In addition to the applications above, some netizens see promise in GPT-4 for developing small games.
The new GPT-4 also seems more reliable at programming, whether it’s a little Pong game written in 60 seconds:
Or a complete Snake game in 20 minutes:
It’s flexible, too: changed requirements are handled promptly, and writing a simple program rarely required going back to fix bugs.
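To give a sense of the scale of program involved, here is a minimal sketch of Snake-style game logic of the kind described above. This is an illustrative example, not GPT-4’s actual output; the class and method names are our own.

```python
class SnakeGame:
    """Minimal Snake game logic on a grid (no rendering)."""

    def __init__(self, width=10, height=10):
        self.width, self.height = width, height
        # The snake starts as a single segment in the middle, moving right.
        self.snake = [(width // 2, height // 2)]
        self.direction = (1, 0)
        # Food placed a fixed spot ahead; a real game would randomize it.
        self.food = (width // 2 + 2, height // 2)
        self.alive = True

    def turn(self, dx, dy):
        # Ignore direct reversals: the snake can't flip back onto itself.
        if (dx, dy) != (-self.direction[0], -self.direction[1]):
            self.direction = (dx, dy)

    def step(self):
        if not self.alive:
            return
        hx, hy = self.snake[0]
        nx, ny = hx + self.direction[0], hy + self.direction[1]
        # Hitting a wall or the snake's own body ends the game.
        if not (0 <= nx < self.width and 0 <= ny < self.height) \
                or (nx, ny) in self.snake:
            self.alive = False
            return
        self.snake.insert(0, (nx, ny))
        if (nx, ny) == self.food:
            # Eating food: the tail stays, so the snake grows by one;
            # for this sketch the next food just appears at (0, 0).
            self.food = (0, 0)
        else:
            self.snake.pop()
```

A game loop would call `turn` on keyboard input and `step` on a timer; the point is that the whole ruleset fits in a few dozen lines, which is why it makes a handy quick benchmark for code generation.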
Do you have any other interesting applications in mind for GPT-4?