Overview
GWI Spark MCP lets an MCP client (e.g., Claude Desktop, Cursor, Copilot Studio, or your own MCP-compatible runtime) call GWI Spark tools via a single MCP endpoint using JSON-RPC. It’s designed so a host LLM/agent can pull focused, insight-ready consumer data from GWI while following the standard MCP lifecycle, rather than treating it as just another API endpoint.
Recommended pre-read: MCP Lifecycle (initialize → notifications/initialized → tools/list → tools/call)
Quick start checklist
- Use POST to https://api.globalwebindex.com/v1/spark-api/mcp for Spark MCP calls
- Authenticate with Authorization: Bearer YOUR_TOKEN
- Follow the MCP lifecycle (initialize → notifications/initialized → tools/list → tools/call)
- Let the host LLM use tool definitions from tools/list to orchestrate (decompose broad questions into focused queries)
- Avoid “direct API-style” MCP integrations that skip the protocol steps
How to use
Spark MCP uses JSON-RPC and follows the MCP lifecycle. In practice, an MCP client should call the following, in order:
- initialize
- notifications/initialized (signal that initialization is complete)
- tools/list (discover which tools are available and how to use them)
- tools/call (invoke a specific tool)
Lifecycle essentials
1. Initialize
Send the Authorization: Bearer YOUR_TOKEN header. Your MCP client should start by initializing the session, per the MCP lifecycle spec. (Exact params/fields depend on the MCP client runtime; follow the lifecycle doc linked above.)
2. Notifications/initialized
After initialize, send notifications/initialized before requesting tools (per MCP lifecycle ordering).
Reference: https://modelcontextprotocol.io/specification/2025-11-25/basic/lifecycle#initialized
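The two lifecycle messages above can be sketched as JSON-RPC payloads. The field names (`protocolVersion`, `capabilities`, `clientInfo`) follow the MCP specification linked above; the version string and client info values here are illustrative, not Spark-specific requirements.

```python
# Sketch of the two MCP lifecycle messages, per the spec linked above.
# The protocolVersion and clientInfo values are illustrative placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",
        "capabilities": {},
        "clientInfo": {"name": "my-mcp-client", "version": "0.1.0"},
    },
}

# Sent as a notification: note there is no "id" field, so no response is expected.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```

The missing `id` on the second message is what makes it a notification rather than a request in JSON-RPC.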
3. Tools/list
Next, list available tools so your host LLM can “see”:
- tool names
- tool descriptions
- expected arguments
- built-in guidance (including prompt decomposition expectations)
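A tools/list exchange might look like the sketch below. The request shape follows the MCP spec; the response is an invented example showing the fields listed above (name, description, input schema), not actual Spark MCP output.

```python
# Sketch: a tools/list request and the fields a host LLM reads from the result.
tools_list_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Illustrative response only; real tool descriptions come from the server.
example_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "chat_gwi",
                "description": "Ask a focused, Spark-style question.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"question": {"type": "string"}},
                },
            }
        ]
    },
}

for tool in example_response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```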
4. Tools/call
When calling a tool, use:
- method: "tools/call"
- params.name: tool name
- params.arguments: tool inputs
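Putting those fields together, a tools/call payload might look like this. The `chat_gwi` tool name appears later in this doc; the `question` argument name is an assumption for illustration, so check the schema returned by tools/list for the real argument names.

```python
# Sketch of a tools/call request: method, params.name, params.arguments.
# The "question" argument key is assumed; confirm it via tools/list.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "chat_gwi",
        "arguments": {"question": "What share of UK Gen Z buy skincare monthly?"},
    },
}
```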
Spark MCP works best when it’s used by a host LLM/agent that can do tool calling.
When your LLM connects to the MCP server, it will typically first discover the tools it can use (via the standard MCP tool discovery flow). The tool definitions include guidance on how to use each tool effectively, including the expectation that complex user requests should be broken into smaller, Spark-style questions. In practice, this means:
- A user can ask a broad question (e.g., “Build me a profile of Gen Z skincare buyers in the UK and how to reach them”)
- Your host LLM reads the available tool descriptions and orchestrates the work by:
- splitting the request into a small number of focused queries, then
- calling chat_gwi multiple times (and explore_insight_gwi where needed), then
- summarising the results back to the user in your UI/app
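The orchestration pattern above can be sketched as a simple loop. In a real integration the host LLM performs the decomposition and the MCP client performs the round-trips; `decompose` and `call_tool` here are hypothetical stand-ins showing only the shape of the flow.

```python
# Sketch of the decompose-then-call pattern described above.
# decompose() and call_tool() are hypothetical placeholders, not real APIs.
def decompose(broad_question: str) -> list[str]:
    # A host LLM would generate these focused queries; hard-coded for illustration.
    return [
        "Profile Gen Z skincare buyers in the UK (demographics, attitudes)",
        "Which media channels reach Gen Z skincare buyers in the UK?",
    ]

def call_tool(name: str, arguments: dict) -> dict:
    # Placeholder for a real tools/call round-trip to the MCP endpoint.
    return {"tool": name, "arguments": arguments}

def answer(broad_question: str) -> list[dict]:
    results = [call_tool("chat_gwi", {"question": q}) for q in decompose(broad_question)]
    # A real app would summarise these results back to the user in its own UI.
    return results

results = answer("Build me a profile of Gen Z skincare buyers in the UK and how to reach them")
```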

