Anthropic CEO Dario Amodei raised some eyebrows on Monday after suggesting that advanced AI models might someday be given the ability to press a “button” to quit tasks they find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea “sounds crazy.”
“So this is another one of those topics that’s going to make me sound completely insane,” Amodei said during the interview. “I think we should at least consider the question: if we are building these systems and they do all kinds of things like humans, as well as humans, then if it walks like a duck and quacks like a duck, it’s probably a duck.”
Amodei’s comments came in response to an audience question from data scientist Carmem Domingues about Anthropic’s late-2024 hiring of researcher Kyle Fish “to look at, you know, sentience or lack thereof in future AI models, and whether they might deserve moral consideration and protections in the future.” Fish is currently investigating the highly contested question of whether AI models could be sentient or otherwise qualify for moral consideration.
“So, something we’re thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, ‘I quit this job,’ that the model can press, right?” Amodei said. “It’s just some kind of very basic, you know, preference framework, where you say: if, hypothesizing that the model did have experience and hated the job enough, you give it the ability to press the ‘I quit this job’ button. If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should, it doesn’t mean you’re convinced, but maybe you should pay some attention to it.”