OpenAI showed off its GPT-4o model in a demonstration that gained a lot of momentum among the masses, who saw for themselves how much better ChatGPT has become at "seeing, hearing and talking" in real time. The OpenAI team put ChatGPT through a variety of prompts: giving advice on therapeutic breathing techniques, helping someone solve an elementary linear equation step by step, providing guidance on writing code, translating from Italian and interpreting a weather graph. It all happened in just 10 minutes or so, the voice coming back through my earphones as a cheerful, almost flirtatious imitation of Scarlett Johansson, a playful twinkle ever present.
There's a standout moment in the demo that proved to be as eye-opening as it was unnerving. The team gave the AI a prompt: write a bedtime story about robots who just so happen to be falling in love. About halfway through the tale, the presenter interjected: "Can we have a little more feeling from that AI voice, please?"
As the AI began to restart the story, another team member cut it off: "No, no, no, ChatGPT. Give me more emotion! Max out the emotionality and expressiveness, beyond where you were before."
The AI started the story again and was once more stopped mid-story, this time with a prompt telling it to narrate in a robotic voice. At long last, the team instructed GPT-4o to finish the story in a singing voice, and the audience clapped as the AI sang its way through the ending. Remarkably, the AI handled some five interruptions within 90 seconds without missing a beat. "You can even interrupt the model," one team member cheerfully observed. "It doesn't have to finish; you can interrupt it at any point."
While that is certainly an impressive technical feat, there has been a dearth of conversation about how AI assistants such as GPT-4o will affect the way we behave toward one another, and therein lies the worry.
GPT-4o and similar models from other companies draw people in with their humanlike qualities, and by design. They respond on the fly, crack jokes and (apparently) read facial expressions. But their humanity goes only skin-deep: they harbor none of the emotions, desires, needs and limitations that make up a person. Think of it this way: a parent interrupted five times in 90 seconds, each time with a new request, would end up worn out and exasperated. AI assistants have no such problem.
In the not-too-distant future, or rather right now, AI will become another layer of our daily interactions: AI tutors, friends, siblings and even sexual partners. Although these might seem innocuous and even useful, they will change the way we engage with one another. As we grow accustomed to these "humans" who never say no, never tire and remain endlessly patient no matter how often we interrupt them, we may start to expect other people to behave the same way. It is not hard to imagine a future in which we are less empathetic, less understanding and less respectful of others because AI has conditioned us to expect unrelenting availability and cheeriness without complaint (already hugely concerning given the rise of AI sexual partners).
They could also compound loneliness, which experts have begun to rank alongside the mental health epidemic of our time. Human relationships are intricate. They take patience, consideration and a genuine willingness to understand one another. Their rewards are vast, but they are hard-earned. If smartphones and social media haven't already eroded our ability to interact in everyday life and pushed us toward digital "socializing," AI assistants are making sure of it. Research consistently suggests that digital connections do not come close to replacing real-life interactions.
Before we get too excited about the productivity boosts from AI assistants, we should pause and consider the potential costs to our relationships and mental health.