Erroneous AI workflow runs now correctly report errored status
A bug in n8n's AI instance runtime was causing failed workflow executions to appear as successful. When errors occurred mid-run, the system was incorrectly marking them as completed instead of errored — making debugging and monitoring significantly harder. This fix corrects that status reporting.
The fix introduces a three-way status determination: a run is marked suspended when paused, errored when something goes wrong, and completed only on a clean finish. Alongside this, the credential editing interface in the frontend was simplified by consolidating duplicated project-resolution logic into a single reusable reference.
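The three-way status logic described above can be sketched as follows. This is an illustrative reconstruction, not n8n's actual implementation: the names `ExecutionStatus`, `RunState`, and `determineFinalStatus` are hypothetical.

```typescript
// Hypothetical sketch of the three-way status determination.
// All identifiers here are illustrative, not n8n's real API.
type ExecutionStatus = "suspended" | "errored" | "completed";

interface RunState {
  paused: boolean;   // run is waiting (e.g. on human input or a wait node)
  error?: Error;     // set if any node in the run threw
}

function determineFinalStatus(state: RunState): ExecutionStatus {
  if (state.paused) return "suspended"; // paused runs are not finished
  if (state.error) return "errored";    // any error means the run failed
  return "completed";                   // only clean runs count as completed
}
```

The key point of the fix is the middle branch: before it, an errored run fell through to the "completed" path, which is why failures showed up as successes.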
Original GitHub Description
Summary
A bit of cleanup that was left out of the main feature merge but flagged on review.
Related Linear tickets, Github issues, and Community forum posts
Review / Merge checklist
- PR title and summary are descriptive. (conventions)
- Docs updated or follow-up ticket created.
- Tests included.
- PR labeled with `Backport to Beta`, `Backport to Stable`, or `Backport to v1` (if the PR is an urgent fix that needs to be backported).