Anthropic credential test switches to models endpoint
Credential validation for Anthropic nodes now works reliably — switching from a model-specific messages endpoint to a read-only models lookup eliminates 404 errors for users whose API plan doesn't include the hardcoded model.
The Anthropic credential test was broken by design. It used a specific model endpoint with a hardcoded model name, which would return 404 for users whose API plan didn't include that exact model — even with a valid API key. This was inconsistent with how OpenAI and Mistral credentials are tested, which both use lightweight, model-agnostic endpoints.
The fix swaps the credential test from POST /v1/messages (which requires model access) to GET /v1/models, a read-only endpoint that only requires a valid API key. This change costs nothing in tokens, future-proofs the test against model deprecation, and makes credential validation behave consistently across all LLM providers in n8n.
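The shape of the new test can be sketched as follows. This is a hedged illustration, not n8n's actual implementation: `buildCredentialTestRequest` is a hypothetical helper, though the endpoint, the `x-api-key` header, and the required `anthropic-version` header follow Anthropic's documented REST API.

```typescript
interface CredentialTestRequest {
  method: 'GET';
  url: string;
  headers: Record<string, string>;
}

// Hypothetical helper: builds the model-agnostic credential check.
function buildCredentialTestRequest(apiKey: string): CredentialTestRequest {
  return {
    // GET /v1/models is read-only and lists the models the key can
    // access, so it succeeds for any valid key regardless of plan --
    // unlike POST /v1/messages, which 404s if the hardcoded model
    // isn't available to the account.
    method: 'GET',
    url: 'https://api.anthropic.com/v1/models',
    headers: {
      'x-api-key': apiKey,
      // Anthropic requires a version header on every request.
      'anthropic-version': '2023-06-01',
    },
  };
}
```

A 200 response means the key is valid; a 401 means it is not. No body is sent and no tokens are consumed, which is what makes the check zero-cost and immune to model deprecation.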
Original GitHub description
Summary
The Anthropic credential test was failing with "The resource you are requesting could not be found" even with a valid API key. The root cause is that the test used POST /v1/messages with a hardcoded model (claude-haiku-4-5-20251001) which returns 404 for users whose API plan doesn't include that specific model.
This switches the credential test to GET /v1/models, a read-only endpoint that only requires a valid API key — no model access needed. This is consistent with how OpenAI and Mistral credentials are tested. The fix is zero-cost (no tokens consumed) and future-proof (no hardcoded model to go stale).
Related Linear tickets, GitHub issues, and Community forum posts
Review / Merge checklist
- PR title and summary are descriptive. (conventions)
- Docs updated or follow-up ticket created.
- Tests included.
- PR labeled with release/backport (if the PR is an urgent fix that needs to be backported).