In this post, we'll connect GPT (via the OpenAI API) to MCP (the Model Context Protocol) and learn how MCP works.
Introduction
Since Anthropic announced the Model Context Protocol (MCP) in November 2024, MCP has gradually been gaining attention in the community.
I'd seen it mentioned in various places, but hadn't tried it myself yet. This time, following the official MCP Quickstart, I'll use the OpenAI API and GPT instead of Claude to connect to MCP and explore how it works.
Note: This article was translated from my original post.
Recap: What is MCP?
Let's quickly review the basics of MCP.
MCP is an open protocol that defines how LLMs interact with external tools.
It's often compared to USB-C as a metaphor. If every peripheral device had its own connector standard, a PC would need a unique socket for each one. But by using a common USB-C interface, a PC can connect to many devices with just one port.
In the same way, MCP standardizes how LLMs connect with external tools, so we no longer need to build custom implementations each time we use a tool.
For a more technical analogy, it's similar to LSP (Language Server Protocol). Even Anthropic’s MCP lead has said they were inspired by LSP’s success when designing MCP.
Here’s an image that shows how MCP works:
An MCP client can be a script that calls the LLM via API, an IDE connected to an LLM, or even the Claude Desktop app. An MCP server is also a locally running process (though there’s ongoing discussion about running it remotely—see the roadmap for details). The server receives tool calls from the LLM and executes external processes or web APIs accordingly.
Let’s go ahead and implement an MCP server and a GPT-based MCP client, based on the MCP Quickstart, and dig into the code.
Connecting GPT to an MCP Server
The full code is available here: github.com/bioerrorlog/mcp-gpt-tutorial
Implementing the MCP Server
We'll begin with the MCP server implementation.
The server-side logic doesn’t change whether the client is Claude-based or GPT-based.
So the server code is the same as in the official Quickstart.
```python
from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"


async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {
        "User-Agent": USER_AGENT,
        "Accept": "application/geo+json"
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None


def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get('event', 'Unknown')}
Area: {props.get('areaDesc', 'Unknown')}
Severity: {props.get('severity', 'Unknown')}
Description: {props.get('description', 'No description available')}
Instructions: {props.get('instruction', 'No specific instructions provided')}
"""


@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)


@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # First get the forecast grid endpoint
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    # Get the forecast URL from the points response
    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the periods into a readable forecast
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:  # Only show next 5 periods
        forecast = f"""
{period['name']}:
Temperature: {period['temperature']}°{period['temperatureUnit']}
Wind: {period['windSpeed']} {period['windDirection']}
Forecast: {period['detailedForecast']}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)


if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport='stdio')
```
Ref. mcp-gpt-tutorial/weather at main · bioerrorlog/mcp-gpt-tutorial · GitHub
Although it looks like a lot is happening, the actual server logic is very straightforward.
This MCP server exposes two tools:
- `get_alerts()`: Gets weather alerts for a US state
- `get_forecast()`: Gets a weather forecast for a given latitude and longitude
These tools are registered via the `@mcp.tool()` decorator. Registered tools can be listed using `list_tools()` and invoked using `call_tool()`.
The server is started with `mcp.run(transport='stdio')`, which specifies standard input/output (stdio) as the transport between client and server.
MCP supports stdio and Server-Sent Events (SSE) as transports by default. When running locally, stdio is a simple and effective choice.
Ref. Transports - Model Context Protocol
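As a side note, switching the same server from stdio to SSE only changes the `run()` call. Here's a minimal sketch (assuming the FastMCP defaults; with SSE the server exposes an HTTP endpoint that clients connect to instead of talking over stdin/stdout):

```python
# Sketch: run the same FastMCP server over SSE instead of stdio.
# With the SSE transport, FastMCP serves an HTTP endpoint for MCP clients
# rather than communicating over the launching process's stdin/stdout.
if __name__ == "__main__":
    mcp.run(transport="sse")
```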
Other than that, the server just queries the National Weather Service API and formats the results.
This is pure Python unrelated to MCP.
Implementing the MCP Client
Next, we’ll build the client. This is based on the official Quickstart, but we’ll modify it to use GPT instead of Claude.
```python
import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from openai import OpenAI
from dotenv import load_dotenv
import json

load_dotenv()  # load environment variables from .env


class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.openai = OpenAI()

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server

        Args:
            server_script_path: Path to the server script (.py or .js)
        """
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")

        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command,
            args=[server_script_path],
            env=None
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])

    async def process_query(self, query: str) -> str:
        """Process a query using OpenAI and available tools"""
        messages = [
            {
                "role": "user",
                "content": query
            }
        ]

        response = await self.session.list_tools()
        available_tools = [{
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema
            }
        } for tool in response.tools]

        # Initial OpenAI API call
        response = self.openai.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=available_tools,
            tool_choice="auto",
        )

        # Process response and handle tool calls
        final_text = []

        while True:
            reply = response.choices[0].message

            if reply.content and not reply.tool_calls:
                final_text.append(reply.content)
                messages.append({
                    "role": "assistant",
                    "content": reply.content
                })

            if reply.tool_calls:
                # Add the assistant message that triggered the tool calls
                messages.append({
                    "role": "assistant",
                    "tool_calls": [
                        {
                            "id": tool_call.id,
                            "type": "function",
                            "function": {
                                "name": tool_call.function.name,
                                "arguments": tool_call.function.arguments
                            }
                        } for tool_call in reply.tool_calls
                    ]
                })

                for tool_call in reply.tool_calls:
                    tool_name = tool_call.function.name
                    tool_args = tool_call.function.arguments

                    # Execute tool call
                    parsed_args = json.loads(tool_args)
                    result = await self.session.call_tool(tool_name, parsed_args)
                    final_text.append(f"[Calling tool {tool_name} with args {parsed_args}]")

                    # Add tool response message
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "name": tool_name,
                        "content": result.content,
                    })

                # Get next response from OpenAI
                response = self.openai.chat.completions.create(
                    model="gpt-4o",
                    messages=messages,
                )
            else:
                break

        return "\n".join(final_text)

    async def chat_loop(self):
        """Run an interactive chat loop"""
        print("\nMCP Client Started!")
        print("Type your queries or 'quit' to exit.")

        while True:
            try:
                query = input("\nQuery: ").strip()

                if query.lower() == 'quit':
                    break

                response = await self.process_query(query)
                print("\n" + response)

            except Exception as e:
                print(f"\nError: {str(e)}")

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()


async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()


if __name__ == "__main__":
    import sys
    asyncio.run(main())
```
Ref. mcp-gpt-tutorial/client at main · bioerrorlog/mcp-gpt-tutorial · GitHub
Let's start by looking at the `main` function:
```python
async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()
```
The flow is:
- Parse command-line arguments
- Start and connect to the MCP server
- Start the chat loop
- Clean up on exit
When started, it launches the server script passed as an argument and connects to it:
```python
async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server

    Args:
        server_script_path: Path to the server script (.py or .js)
    """
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    command = "python" if is_python else "node"
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )

    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

    await self.session.initialize()

    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
```
Here’s what’s happening:
- Launch the server as a subprocess using `stdio_client`
- Set up stdio-based communication between client and server (the messages exchanged are JSON-RPC, as shown below)
- Use `list_tools()` to retrieve and print the available tools
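Under the hood, what travels over stdin/stdout is JSON-RPC 2.0: the SDK serializes requests like `tools/list` and `tools/call` into JSON messages and parses the responses for you. As a rough illustration (the argument values below are made up):

```python
# Illustrative JSON-RPC 2.0 messages exchanged over stdio.
# The MCP SDK builds and parses these for you; you never write them by hand.
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",
        "arguments": {"latitude": 38.58, "longitude": -121.49},
    },
}
```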
Once connected, the client enters a chat loop:
```python
async def chat_loop(self):
    """Run an interactive chat loop"""
    print("\nMCP Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        try:
            query = input("\nQuery: ").strip()

            if query.lower() == 'quit':
                break

            response = await self.process_query(query)
            print("\n" + response)

        except Exception as e:
            print(f"\nError: {str(e)}")
```
Type `quit` to exit the loop. Any other input is passed to `process_query()`, which sends it to GPT:
```python
async def process_query(self, query: str) -> str:
    """Process a query using OpenAI and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.inputSchema
        }
    } for tool in response.tools]

    # Initial OpenAI API call
    response = self.openai.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=available_tools,
        tool_choice="auto",
    )

    # Process response and handle tool calls
    final_text = []

    while True:
        reply = response.choices[0].message

        if reply.content and not reply.tool_calls:
            final_text.append(reply.content)
            messages.append({
                "role": "assistant",
                "content": reply.content
            })

        if reply.tool_calls:
            # Add the assistant message that triggered the tool calls
            messages.append({
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": tool_call.id,
                        "type": "function",
                        "function": {
                            "name": tool_call.function.name,
                            "arguments": tool_call.function.arguments
                        }
                    } for tool_call in reply.tool_calls
                ]
            })

            for tool_call in reply.tool_calls:
                tool_name = tool_call.function.name
                tool_args = tool_call.function.arguments

                # Execute tool call
                parsed_args = json.loads(tool_args)
                result = await self.session.call_tool(tool_name, parsed_args)
                final_text.append(f"[Calling tool {tool_name} with args {parsed_args}]")

                # Add tool response message
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "name": tool_name,
                    "content": result.content,
                })

            # Get next response from OpenAI
            response = self.openai.chat.completions.create(
                model="gpt-4o",
                messages=messages,
            )
        else:
            break

    return "\n".join(final_text)
```
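The bridging step worth highlighting is how each MCP tool definition is converted into OpenAI's function-calling format: MCP's `inputSchema` is already JSON Schema, so it can be passed through as the `parameters` field. For the `get_forecast` tool, one entry of `available_tools` ends up looking roughly like this (illustrative; the actual schema and description are generated by FastMCP from the function signature and docstring):

```python
# Illustrative example of one converted entry in `available_tools`.
# The exact schema comes from the MCP server (tool.inputSchema); the field
# names and description shown here are assumptions for the sake of example.
example_tool = {
    "type": "function",
    "function": {
        "name": "get_forecast",
        "description": "Get weather forecast for a location. ...",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
            },
            "required": ["latitude", "longitude"],
        },
    },
}
```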
This is function calling via the OpenAI API using MCP:
- Get the list of tools registered with the server
- Call GPT with the tools
- If GPT requests a tool call, execute it via MCP
- Feed the result back to GPT for the final response
That’s how the MCP client works.
Let’s now run the client and server.
Running the MCP Client and Server
We’ll use the Python package manager “uv”, also used in the MCP Quickstart, to run the client.
Since the client launches the server as a subprocess, all we need is one command:
```
uv run client.py path/to/server.py
```
Make sure your OPENAI_API_KEY is available (the client loads environment variables from a .env file via `load_dotenv()`), then ask about weather alerts for a US state or a forecast by latitude/longitude, and you'll get a response via the MCP server.
Conclusion
In this post, we followed the official MCP Quickstart but used GPT instead of Claude to implement the client.
Recently, OpenAI added MCP support to the Agents SDK, which brings MCP closer to becoming the de facto standard for AI agents.
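For reference, with the Agents SDK the manual bridging we wrote above is handled for you: you register an MCP server on the agent, and the SDK takes care of listing tools and routing tool calls. Here's a rough sketch based on the SDK documentation (treat the exact class and parameter names as assumptions and check the current docs; the server script path is a placeholder):

```python
# Rough sketch: using an MCP server from the OpenAI Agents SDK.
# Names are taken from the Agents SDK docs at the time of writing and may
# change; verify against the current documentation before relying on them.
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    # Launch the weather MCP server as a subprocess over stdio
    async with MCPServerStdio(
        params={"command": "python", "args": ["path/to/weather.py"]}
    ) as weather_server:
        agent = Agent(
            name="Weather assistant",
            instructions="Answer weather questions using the available tools.",
            mcp_servers=[weather_server],
        )
        result = await Runner.run(agent, "What's the weather forecast for Sacramento?")
        print(result.final_output)


asyncio.run(main())
```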
There’s a lot to look forward to in the future of AI agents.
I hope this was helpful to someone!
References
- Introduction - Model Context Protocol
- Model Context Protocol · GitHub
- GitHub - modelcontextprotocol/quickstart-resources: A repository of servers and clients from the Model Context Protocol tutorials
- Introducing the Model Context Protocol | Anthropic
- Roadmap - Model Context Protocol
- What is Model Context Protocol (MCP)? How it simplifies AI integrations compared to APIs | AI Agents That Work
- https://youtu.be/kQmXtrmQ5Zg?si=WJJbvyFrX0K0iDjS
- GitHub - modelcontextprotocol/python-sdk: The official Python SDK for Model Context Protocol servers and clients
- Model context protocol (MCP) - OpenAI Agents SDK