Anthropic introduced a Claude Enterprise plan that helps organizations collaborate with its artificial intelligence assistant, Claude, using internal knowledge.
The new plan includes an expanded 500K context window, more usage capacity, a native GitHub integration and enterprise-grade security features, according to a Wednesday (Sept. 4) press release.
“With Claude, your organization’s knowledge is easier to share and reuse, enabling every individual on the team to quickly and consistently produce their best work,” the release said. “At the same time, your data is protected. We do not train Claude on your conversations and content.”
To protect a company’s data, the Enterprise plan includes single sign-on (SSO) and domain capture as well as role-based access with fine-grained permissioning, according to the release. In the coming weeks, it will add audit logs for security and compliance monitoring, as well as a system for cross-domain identity management (SCIM).
Other features of the Enterprise plan include the expanded 500K context window, which enables Claude to ingest more knowledge and provide guidance, and the native GitHub integration, which helps engineering teams develop new features, debug issues and onboard new engineers, per the release.
Early customers of Claude for Work have used it for brainstorming, streamlining internal processes, creating and translating content, and writing code, according to the release.
Amazon has invested $4 billion in Anthropic, having made an initial investment of $1.25 billion in September 2023 and an additional investment of $2.75 billion in March.
“Anthropic’s visionary work with generative AI, most recently the introduction of its state-of-the-art Claude 3 family of models, combined with Amazon’s best-in-class infrastructure like AWS Trainium and managed services like Amazon Bedrock further unlocks exciting opportunities for customers to quickly, securely and responsibly innovate with generative AI,” Dr. Swami Sivasubramanian, vice president of data and AI at Amazon Web Services (AWS), said in March when announcing the additional investment.
In August, Anthropic debuted a new feature for its large language models aimed at reducing costs and improving performance for businesses using its AI.
The company’s “Prompt Caching” capability reportedly allows users to store and efficiently reuse specific contextual information within prompts, without recurring costs or increased latency.
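In practice, prompt caching works by flagging a large, stable block of context (such as a reference document) so the API can reuse it across requests rather than reprocessing it each time. The sketch below is a minimal illustration of that request shape, assuming the Anthropic Messages API's `cache_control` field as documented at the feature's launch; the model name and document text are placeholders, and no network call is made.

```python
# Sketch: marking a large, stable system block as cacheable so repeated
# requests can reuse it. This only builds the request payload; sending it
# requires the Anthropic SDK and an API key. Model name and document text
# are placeholders, not values from the article.

LARGE_REFERENCE_DOC = "(full text of a large internal reference document)"

def build_cached_request(question: str) -> dict:
    """Build a Messages API payload whose system block is marked cacheable."""
    return {
        "model": "claude-3-5-sonnet-20240620",  # placeholder model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LARGE_REFERENCE_DOC,
                # Marks this block for caching; later requests sending the
                # identical prefix can hit the cache instead of paying to
                # reprocess it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }

payload = build_cached_request("Summarize the refund policy.")
print(payload["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

With the real SDK, these same fields would be passed to `client.messages.create(...)`; during the feature's beta, requests also had to include the `anthropic-beta: prompt-caching-2024-07-31` header.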