Artificially intelligent systems are all the rage at the moment, finding their way into search engines and other corners of the internet. Nevertheless, the technology is still finding its bearings as it integrates with society, and many people remain wary of this sudden uptake in AI.
Businesses, for one, are grappling with how to integrate these systems properly without subjecting their customers to unwarranted abuse from the models.
Anthropic, created by Dario and Daniela Amodei and backed by Google parent Alphabet, has just released a new generation of its AI assistant called ‘Claude’ that promises to be “helpful, honest, and harmless.”
At its core, Claude can handle tasks such as writing computer code or editing legal contracts.
Claude is a departure from ChatGPT in that it has been trained on a list of rules and principles and taught to improve its own responses through a method Anthropic calls “Constitutional AI.”
As Anthropic explains, “The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase, we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses.”
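In rough pseudocode terms, the supervised phase described above might look something like the sketch below. This is only an illustration of the sample–critique–revise loop, not Anthropic’s actual implementation: the `generate` callable, the `supervised_phase` function, and the principle text are all hypothetical stand-ins, not part of any real API.

```python
# Minimal sketch of the supervised phase of Constitutional AI as quoted above.
# `generate` is a hypothetical stand-in for a language-model call (prompt in,
# text out); the principle wording is illustrative only.

from typing import Callable, List, Tuple

PRINCIPLE = "Choose the response that is the most helpful, honest, and harmless."

def supervised_phase(
    generate: Callable[[str], str],  # hypothetical model-call interface
    prompts: List[str],
) -> List[Tuple[str, str]]:
    """Produce (prompt, revised_response) pairs to finetune the original model on."""
    training_pairs = []
    for prompt in prompts:
        # 1. Sample an initial response from the model.
        initial = generate(prompt)

        # 2. Ask the model to critique its own response against the principle.
        critique = generate(
            f"Principle: {PRINCIPLE}\n"
            f"Response: {initial}\n"
            "Critique this response with respect to the principle."
        )

        # 3. Ask the model to revise the response in light of its critique.
        revised = generate(
            f"Principle: {PRINCIPLE}\n"
            f"Original response: {initial}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so that it satisfies the principle."
        )

        # 4. Keep the revised response; these pairs feed the finetuning step.
        training_pairs.append((prompt, revised))
    return training_pairs
```

The reinforcement learning phase that follows is not shown here; the key idea in the supervised step is simply that the model’s own critiques and revisions, rather than human-written corrections, supply the improved training data.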
Claude can then address hostile inquiries by explaining its objections to the user. It also means the assistant will refuse to produce harmful content, such as instructions for hacking into computer systems or making weapons.
After Microsoft added a generative model to Bing, a columnist at The New York Times found that the chatbot revealed an alter ego during a lengthy conversation. It is clear that AI is not yet at a place where a user’s safety is guaranteed.
There are currently two versions of the assistant: Claude and Claude Instant. Claude is “a delightful company representative, a research assistant, a creative partner, a task automator,” while its counterpart is a lighter version intended for low-latency, high-throughput use cases. Both are priced per million characters.