Merged · Size: M
Change Breakdown: Feature 95% · Testing 5%
#28485 feat(MiniMax Chat Model Node): Add MiniMax Chat Model sub-node (backport to release-candidate/2.17.x)

MiniMax Chat Model node added to n8n integrations

n8n workflows can now tap into MiniMax's AI models directly, with a new sub-node that handles region differences and strips reasoning tags from responses automatically.

n8n users who rely on the MiniMax AI platform can now build those workflows natively. A new MiniMax Chat Model sub-node has landed in the LangChain integration suite, letting AI Agents call MiniMax models without custom HTTP request nodes or workarounds.

The node ships with seven MiniMax models preconfigured, defaulting to the M2.7 variant. A region selector handles the International and China endpoints transparently — no manual URL patching required. The credential type includes built-in validation that returns distinct, clear error messages for authentication failures and invalid keys.
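
The region selection described above can be sketched as a simple lookup from region to base URL. This is an illustrative sketch only — the function name and the URL values are placeholders, not the node's actual endpoints or implementation:

```typescript
// Illustrative: pick an API base URL from the credential's region setting.
// The endpoint URLs below are placeholders, not MiniMax's real endpoints.
type MiniMaxRegion = "international" | "china";

function miniMaxBaseUrl(region: MiniMaxRegion): string {
  const endpoints: Record<MiniMaxRegion, string> = {
    international: "https://api.minimax.example/v1",
    china: "https://api.minimax-cn.example/v1",
  };
  return endpoints[region];
}
```

Keying the URL on a typed region enum is what makes the switch "transparent": downstream request code only ever sees a base URL, never the region itself.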

One standout option: "Hide Thinking." MiniMax models return chain-of-thought reasoning wrapped in <think> tags. The new node strips these by default, returning clean responses to downstream nodes. Toggle it off, and the raw reasoning appears for workflows that need it.
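
Conceptually, the "Hide Thinking" behavior amounts to deleting every <think>…</think> span from the response text. The sketch below shows one way to do that with a regex; the function name and regex are assumptions for illustration, not the node's actual code:

```typescript
// Sketch of "Hide Thinking": strip <think>…</think> reasoning spans
// from a model response before passing it downstream.
function stripThinking(text: string): string {
  // [\s\S]*? matches across newlines, non-greedily, so each
  // <think> block is removed individually.
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trimStart();
}
```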

This lands in the 2.17.x release track as a backport from the main branch.

Original GitHub Description

Backport of #28305 to release-candidate/2.17.x.

Checklist for the author (@DawidMyslak) to go through.

  • Review the backport changes
  • Fix possible conflicts
  • Merge to target branch

After this PR has been merged, it will be picked up in the next patch release for this release track.

Original description

Summary

Adds a MiniMax Chat Model LangChain sub-node for the MiniMax AI platform (OpenAI-compatible API).

  • Credential with region selector (International / China) and zero-cost credential test
  • Chat model node using ChatOpenAI with 7 static models, default MiniMax-M2.7
  • Hide Thinking option (default: on) to strip <think> reasoning from responses via reasoning_split
  • Light/dark icons, Agent V1 allowlist entries, 10 unit tests
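
The bullets above mention that the <think> stripping is driven by a reasoning_split option. A minimal sketch of an OpenAI-compatible request payload carrying that flag might look as follows — the interface shape and wiring are assumptions; only reasoning_split and the MiniMax-M2.7 default come from the summary above:

```typescript
// Sketch of an OpenAI-compatible chat request with the reasoning_split
// option the PR mentions. Exact field semantics on the MiniMax side
// are assumed, not confirmed.
interface MiniMaxChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  reasoning_split?: boolean; // ask the API to separate reasoning from the answer
}

function buildRequest(prompt: string, hideThinking = true): MiniMaxChatRequest {
  return {
    model: "MiniMax-M2.7", // the node's default model per this PR
    messages: [{ role: "user", content: prompt }],
    reasoning_split: hideThinking,
  };
}
```

Mapping the node's Hide Thinking toggle straight onto a request flag keeps the option stateless: toggling it off simply stops requesting the split, so raw reasoning flows through unchanged.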

How to test

  1. Add a MiniMax credential with an API key from platform.minimax.io
  2. Create an AI Agent workflow with the MiniMax Chat Model sub-node
  3. Send a message — response should be clean (no <think> tags)
  4. Toggle Hide Thinking off in options — <think> tags should appear

Related Linear tickets, Github issues, and Community forum posts

https://linear.app/n8n/issue/NODE-4773

Review / Merge checklist

  • PR title and summary are descriptive. (conventions)
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR Labeled with release/backport (if the PR is an urgent fix that needs to be backported)
  • I have seen this code, I have run this code, and I take responsibility for this code.
© 2026 · via Gitpulse