Overview

GWI Spark MCP lets an MCP client (e.g., Claude Desktop, Cursor, Copilot Studio, or your own MCP-compatible runtime) call GWI Spark tools via a single MCP endpoint using JSON-RPC. It's designed so a host LLM/agent can pull focused, insight-ready consumer data from GWI while following the standard MCP lifecycle (not "just another API endpoint"). Recommended pre-read: MCP Lifecycle (initialize → notifications/initialized → tools/list → tools/call).

How to use

Spark MCP uses JSON-RPC and follows the MCP lifecycle. In practice, an MCP client should call/invoke the following:
  1. initialize
  2. notifications/initialized (signal that initialization is complete)
  3. tools/list (discover what tools are available and how to use them)
  4. tools/call (invoke a specific tool)
This is important because tool discovery is part of how an LLM learns what’s available and how to use it correctly. If you bypass the lifecycle and treat MCP like a raw API, you’ll miss key pieces (e.g., tool definitions / usage guidance).

Lifecycle essentials

1. Initialize

Your MCP client should start by initializing the session (per the MCP lifecycle spec), sending your token on every request:

Authorization: Bearer YOUR_TOKEN

(Exact params/fields depend on the MCP client runtime; follow the lifecycle doc linked above.)
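As a sketch, a raw HTTP initialize request could be built like this. The endpoint URL, protocol version string, and clientInfo values are placeholders (your MCP client runtime normally fills these in); the point is that the Authorization header rides along on the JSON-RPC POST:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- substitute your real values.
ENDPOINT = "https://example.com/mcp"
TOKEN = "YOUR_TOKEN"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",  # required on every request
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
```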

2. Notifications/initialized

After initialize, send notifications/initialized before requesting tools (per MCP lifecycle ordering). Reference: https://modelcontextprotocol.io/specification/2025-11-25/basic/lifecycle#initialized
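The key detail is that notifications/initialized is a JSON-RPC notification, not a request: it carries no id and the server sends no response to it. A minimal payload:

```python
import json

# A JSON-RPC notification: note the absence of an "id" field.
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}
print(json.dumps(initialized))
```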

3. Tools/list

Next, list available tools so your host LLM can “see”:
  • tool names
  • tool descriptions
  • expected arguments
  • built-in guidance (including prompt decomposition expectations)
This is how you avoid hard-coding tool names/behaviors in your own docs or integration.
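A sketch of listing tools and reading their definitions. The request shape follows the MCP spec (result.tools is a list of objects with name, description, and inputSchema); the sample response below is illustrative only, with an invented description -- real definitions come from the server:

```python
import json

request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Illustrative response -- real tool definitions come from tools/list itself.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "chat_gwi",
                "description": "Ask a focused consumer-insight question.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"ARG1": {"type": "string"}},
                },
            },
        ]
    },
}

# Surface names + descriptions so the host LLM can choose and use tools.
for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```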

4. Tools/call

When calling a tool, use:
  • method: "tools/call"
  • params.name: the tool name
  • params.arguments: the tool inputs
Example (shape only — tool name/arguments come from tools/list):
{
  "jsonrpc": "2.0",
  "id": "req-123",
  "method": "tools/call",
  "params": {
    "name": "TOOL_NAME_FROM_TOOLS_LIST",
    "arguments": {
      "ARG1": "What marketing channels work best for Audi drivers in the US?"
    }
  }
}

Spark MCP works best when it’s used by a host LLM/agent that can do tool calling.

When your LLM connects to the MCP server, it will typically first discover the tools it can use (via the standard MCP tool discovery flow). The tool definitions include guidance on how to use each tool effectively, including the expectation that complex user requests should be broken into smaller, Spark-style questions. In practice, this means:
  • A user can ask a broad question (e.g., “Build me a profile of Gen Z skincare buyers in the UK and how to reach them”)
  • Your host LLM reads the available tool descriptions and orchestrates the work by:
    • splitting the request into a small number of focused queries, then
    • calling chat_gwi multiple times (and explore_insight_gwi where needed), then
    • summarising the results back to the user in your UI/app
You don't need to hard-code decomposition logic: if your LLM runtime supports tool calling and you pass the tool definitions through properly, the orchestration is handled by the host LLM using the instructions embedded in the tools.
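If you ever need to reason about what that orchestration looks like, here is a rough sketch. Both decompose() and call_tool() are hypothetical stand-ins: in a real integration the host LLM performs the decomposition (guided by the tool descriptions) and your MCP client performs the tools/call round-trips:

```python
# Sketch of host-LLM-style orchestration. decompose() and call_tool() are
# hypothetical stand-ins, not part of the Spark MCP surface.

def decompose(broad_question: str) -> list[str]:
    # In practice the host LLM does this, guided by the tool descriptions.
    return [
        "Who are Gen Z skincare buyers in the UK?",
        "What media channels do they use most?",
        "What brand attitudes distinguish them?",
    ]

def call_tool(name: str, arguments: dict) -> str:
    # Stand-in for a tools/call round-trip; returns the text content.
    return f"[result for: {arguments['ARG1']}]"

def answer(broad_question: str) -> str:
    focused = decompose(broad_question)
    parts = [call_tool("chat_gwi", {"ARG1": q}) for q in focused]
    # The host LLM would summarise these results back to the user.
    return "\n".join(parts)

print(answer("Build me a profile of Gen Z skincare buyers in the UK"))
```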

Common errors (and how to fix)

Unauthorized (401)

What happened: Your token is missing or invalid. Fix: Ensure you're sending Authorization: Bearer YOUR_TOKEN.

You got a shallow answer

What happened: The prompt is too broad. Fix: Break the request into 2–4 narrower questions and re-run.

Your app expects “metadata fields”

What happened: Your parser assumes a custom response structure. Fix: Treat the response as JSON-RPC and read outputs from result.content[] (text).
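A sketch of reading the text output from a tools/call response. The sample response body below is illustrative; the shape (result.content as a list of typed blocks) follows the MCP spec:

```python
import json

# Illustrative tools/call response -- real content comes from the server.
raw = """{
  "jsonrpc": "2.0",
  "id": "req-123",
  "result": {
    "content": [
      {"type": "text", "text": "Example insight text from Spark."}
    ]
  }
}"""

response = json.loads(raw)
# Read text blocks from result.content[] rather than assuming custom fields.
texts = [b["text"] for b in response["result"]["content"] if b["type"] == "text"]
print("\n".join(texts))
```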