OpenAI CEO Sam Altman laid out a sweeping vision for the future of ChatGPT at an AI event hosted by the VC firm Sequoia earlier this month.
When an attendee asked how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document everything in a person's life and remember it.
The ideal, he said, is "a very tiny reasoning model with a trillion tokens of context that you put your whole life into."
"This model can reason across your whole context and do it efficiently. And every conversation you've ever had in your life, every book you've ever read, everything you've ever looked at is in there, with all your data connected from other sources."
"Your company just does the same thing for all your company's data," he added.
Altman may have some data-driven reasons to believe this is ChatGPT's natural future. In the same discussion, when asked about cool ways young people use ChatGPT, he said, "People in college use it as an operating system." They upload files, connect data sources, and then run complex prompts against that data.
Additionally, with ChatGPT's memory options, which can use previous chats and memorized facts as context, he said one trend he has noticed is that young people "don't really make life decisions without asking ChatGPT."
"A gross oversimplification is: Older people use ChatGPT as a Google replacement," he said. "People in their 20s and 30s use it like a life advisor."
It's not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the industry is currently trying to build, that's an interesting future to think about.
Imagine your AI automatically scheduling your car's oil changes and reminding you; planning the travel for an out-of-town wedding and ordering a gift from the registry; or preordering the next volume of the book series you've been reading for years.
But the scary part? How much should we trust a for-profit Big Tech company to know everything about our lives? These are companies that don't always behave in exemplary ways.
Google, which began life with the motto "Don't be evil," lost a case in the United States accusing it of anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China's censorship requirements, but just this week the chatbot Grok was discussing a South African "white genocide" when people asked it completely unrelated questions. Many people noted that the behavior suggested its South African-born founder, Elon Musk, had deliberately tampered with its response engine.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding questionable, even dangerous, decisions and ideas. Altman quickly promised that the team had fixed the tweak that caused the problem.
Even the best, most reliable models still make things up from time to time.
So, having an all-knowing AI assistant could help our lives in ways we can only begin to imagine. But given Big Tech's long history of questionable behavior, it's also a situation ripe for misuse.