Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude. Unveiled on Monday, Anthropic’s program will dole out grants to third-party organizations that can, as the company puts it in a blog post, “effectively […]