Tools, toolsets, and environments: Gram concepts and why you should care

In the previous post, we saw how Gram turns any OpenAPI document into a fully scoped MCP server and serves it from a stable endpoint – all in just a few clicks. This approach simplifies internal and public-facing MCP servers, lets you create multiple servers per team or feature, and keeps things organized across clients like Cursor and Claude.

This post introduces the core concepts that make Gram’s agent-ready capabilities organized, secure, and ready to use: tools, toolsets, and environments.

Individual tools are useful on their own, but without structure, they’re scattered and hard to manage. Gram solves this by organizing tools into toolsets and linking them to environments for configuration and control.

We’ll walk through these concepts using a complete example workflow:

  1. Create – Generate tools from OpenAPI documents or pre-built integrations.
  2. Curate – Bundle related tools into toolsets and configure environments for authentication.
  3. Host – Test the setup in the playground, then deploy it as an MCP server or connect using the SDK.

Let’s take a big-picture view of how Gram’s core concepts work together. The diagram below shows how tools, toolsets, and environments combine in a hosted MCP server you can test and use.

Gram concepts

Here’s a quick summary of each stage:

  • Generate tools: Use an OpenAPI document or a pre-built third-party integration (like GitHub or Slack) to create callable tools.
  • Bundle into toolsets: Group related tools for a specific team or use case.
  • Configure environments: Define the variables (like API keys, OAuth tokens, or base URLs) the toolset needs at runtime.
  • Create an MCP server: Combine a toolset and environment to generate a hosted MCP server.
  • Test and publish: Use the playground to validate your server, then connect to it using an MCP client or the Gram SDK.

Gram hosts the MCP server for you, so you can start using it right away without any configuration or local setup.

To see how Gram’s core concepts work together in practice, we’ll build an MCP server to help an agent decide whether it’s safe to push code to production and check the status of our GitHub pull requests and issues.

Our example combines the GitHub integration with a custom “Push Advisor” API, a basic Cloudflare Worker that makes deployment decisions based on the day of the week. (Spoiler: It’s never safe on Fridays!)

Here are the steps we’ll follow:

  • Creating tools using the Cloudflare Worker’s OpenAPI document and the GitHub integration.
  • Bundling tools in a toolset.
  • Configuring an environment with a GitHub token and the Push Advisor API’s base URL.
  • Testing the toolset in the playground.
  • Publishing it as a hosted MCP server.

We’ll explain each concept as we go along.

The Push Advisor API is live at canpushtoprod.abdulbaaridavids04.workers.dev and has two endpoints:

  • /can-i-push-to-prod – Returns “yes” Monday-Thursday, “no” Friday-Sunday
  • /vibe-check – Returns a random deployment vibe message

Push Decision API
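
If you want to poke at the API before wiring it into Gram, a plain fetch call is enough. This is just a quick sketch; the response fields shown in the comments are assumptions based on the endpoint descriptions, not a documented schema.

// Quick manual check of the live Push Advisor API.
const BASE_URL = "https://canpushtoprod.abdulbaaridavids04.workers.dev";

const decision = await (await fetch(`${BASE_URL}/can-i-push-to-prod`)).json();
console.log(decision); // assumed shape: something like { answer: "no", reason: "It's Friday" }

const vibe = await (await fetch(`${BASE_URL}/vibe-check`)).json();
console.log(vibe); // a random deployment vibe message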

In Gram, a tool represents a single callable API action.

You can create tools by uploading an OpenAPI document or selecting a ready-made integration. Both methods generate well-described MCP tools that agents can invoke.

With Gram, you can create tools directly from OpenAPI documents – ideal for using existing APIs in your agentic applications.

When you upload an OpenAPI document, Gram parses each operation into a callable tool. In our example, we’ll use the Push Advisor OpenAPI document, which generates tools like can_i_push_to_prod and vibe_check.

To help Gram interpret the API, we’ve added a few x-gram tags. Learn more about these tags in the guide to optimizing OpenAPI documents for Gram.

openapi: 3.1.0
info:
  title: Push Decision API
  description: A simple API to help decide when it's appropriate to push code
  version: 1.0.0
servers:
  - url: https://canpushtoprod.<username>.workers.dev
    description: Production server
tags:
  - name: decision
    description: Push decision endpoints
  - name: meta
    description: API metadata and documentation
paths:
  "/can-i-push-to-prod":
    get:
      summary: Check if it's safe to push to production
      description: Returns yes for Monday-Thursday, no for Friday-Sunday
      operationId: check_push_safety
      tags:
        - decision
      x-gram:
        name: can_i_push_to_prod
        summary: "Determine if it's safe to push code to production"
        description: |
          <context>
          This endpoint helps developers make informed decisions about when to push code to production by checking the current day of the week. It follows the common practice of avoiding Friday deployments.
          </context>
          <prerequisites>
          - No prerequisites required for this endpoint
          </prerequisites>
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/PushDecisionResponse'
        '500':
          description: Internal server error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'

Let’s create some tools from our OpenAPI document.

To create new tools from your OpenAPI document, go to the Home tab and click New OpenAPI Source. Upload your OpenAPI document, and Gram will parse it and create tools for each operation.

Creating tools from pre-built integrations


Gram offers pre-built integrations that are fully configured and ready to use, so you can start creating tools right away.

We’ll use the GitHub integration to access the GitHub API.

Once the integration is enabled, Gram automatically adds its tools, and they’ll be available when you create a new toolset.

A toolset is a curated bundle of tools for a specific use case or team. Toolsets solve a critical problem: Dumping too many tools into an LLM’s context window can exhaust the available space or cause the agent to make poor choices about which tools to call. Some language models even place hard caps on the total number of tools they can handle.

When creating a toolset, focus on the specific task you want an agent to perform, and include only those tools that directly help with that task. A focused toolset gives your agent a much better chance of success.

For example, the GitHub integration provides tools for managing issues, pull requests, and repositories. Rather than including everything, you might create a toolset with only issue-related tools. This ensures the agent gets exactly what it needs — nothing more, nothing less.

Toolsets are perfect for experimentation. You can create multiple toolsets to split-test different tool combinations, or compose multiple agents that each use distinct toolsets for specialized tasks.

Let’s create our toolset by adding both the GitHub tools and our custom Push Decision API tools.

Go to the Toolsets tab and click New Toolset. Give it a name and add the tools you want to include. Click Save to create the toolset.

An environment stores API keys, tokens, server URLs, and other runtime settings, keeping secrets separate from logic. This separation is crucial for managing different deployment contexts (for example, production and staging), multi-tenant APIs, or team-specific credentials.

APIs typically require authentication and a server URL before an AI agent can access them. You might also need different configurations for:

  • Production and staging environments with different API endpoints.
  • Multi-tenant APIs, where each customer has their own subdomain.
  • Team-specific credentials, where different groups use different API keys.

When you attach an environment to a toolset, every API call automatically includes the correct authentication details and server configuration.

For example, for a toolset that includes GitHub tools for managing issues, you might create environments with different permission levels:

  • A support-readonly environment that uses a GitHub token with read-only access, allowing agents to view issues and pull requests but not modify them.
  • A support-manager environment that uses a token with write permissions, enabling senior support staff’s agents to close issues, add labels, and update milestones.
  • A development environment that uses a token with full repository access for development team agents.

This ensures that each agent operates within its intended scope, preventing accidental modifications or unauthorized access.
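
Gram performs this substitution on its hosted servers, so there’s nothing to implement yourself. Purely to illustrate the idea, here’s a minimal TypeScript sketch in which the tool logic reads everything from an attached environment; the variable names are hypothetical placeholders, not the names Gram generates for you.

// Hypothetical environment shape: swapping the environment swaps credentials
// and server URLs without touching the tool logic.
interface Environment {
  GITHUB_TOKEN: string; // e.g. a read-only token in "support-readonly"
  PUSH_ADVISOR_SERVER_URL: string; // e.g. a staging or production Worker URL
}

// The tool never hardcodes secrets; it only reads from the environment it was given.
async function listOpenIssues(env: Environment, owner: string, repo: string) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues?state=open`,
    { headers: { Authorization: `Bearer ${env.GITHUB_TOKEN}` } }
  );
  return res.json();
}

Attach a read-only environment and the same tool can only view issues; attach one with write permissions and it can modify them, with no change to the toolset itself.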

To set up an environment, switch to the Environments tab and click New Environment. Give the environment a name (for example, demo-environment).

Gram can automatically populate environment variables based on your selected toolsets. When you click Fill for toolset, Gram analyzes the chosen toolset, identifies all required environment variables, and creates empty placeholders for each one. You can then set values for the relevant variables and remove any that aren’t needed.

Click Save to create the environment.

Testing, publishing, and integrating your toolset


Gram offers several ways to interact with your toolset: the Playground for quick experiments, hosted MCP endpoints for use in MCP clients, and an SDK for integrating toolsets into your own applications.

In the Playground, you can test your toolsets with different LLMs and environments to see how they behave under various authorization scopes.

To test your toolset, go to the Playground tab, select your toolset and environment, then click Run. You can now interact with your toolset using natural-language prompts, for example, “Is it safe to push to production today, and are there any open pull requests?”

Gram hosts your toolset as a remote MCP server, so you can use it immediately in Claude, Cursor, or any MCP-compatible client. Click MCP Config, copy the MCP configuration, and paste it into your client.

Publishing an MCP server

You can also Publish your toolset. Enter an endpoint name and choose whether to make the server public or private.

Publishing an MCP server publicly

You can now share the MCP config privately with your team or make it public. Since Gram hosts the MCP server for you, there’s no need to manage infrastructure yourself.

MCP config

Use the Gram SDK to give agents access to toolsets in your account.

In the SDK tab, select the language and LLM provider you’re working with.

Code snippet

Gram will generate a code snippet for you to paste into your application.
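
Gram generates the exact snippet for your chosen language and provider. If you’d rather connect to the hosted endpoint directly with the official MCP TypeScript SDK instead of the Gram SDK, a minimal sketch looks something like this. The URL is a placeholder for the endpoint shown in your MCP config, and a private server will also need whatever credentials that config specifies.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL: use the endpoint from your Gram MCP config.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example-gram-mcp-endpoint/mcp")
);

const client = new Client({ name: "push-advisor-demo", version: "1.0.0" });
await client.connect(transport);

// List the tools exposed by the toolset (e.g. can_i_push_to_prod, vibe_check, GitHub tools).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Call a tool by name; the arguments depend on the tool's input schema.
const result = await client.callTool({ name: "can_i_push_to_prod", arguments: {} });
console.log(result);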