Cole McIntosh

AI & Full Stack Engineer

Connecting LLMs to Your Data

Model Context Protocol (MCP) Resources provide a powerful mechanism for exposing data to Large Language Models (LLMs). Unlike tools that let models take actions, resources are application-controlled data sources that enrich LLM context with external information.

Understanding MCP Resources

Resources in MCP represent a fundamental shift in how LLMs interact with external data:

  • Application-Controlled Access: Unlike tools that LLMs can invoke directly, resources are explicitly selected by applications or users
  • Rich Data Representation: Support for both text and binary data types
  • Dynamic Updates: Real-time notifications when resources change
  • Flexible Discovery: Both direct resources and templated resource patterns

Think of resources as portals that let LLMs see into your data without necessarily giving them the ability to modify it. They bridge the gap between isolated AI and the wealth of information in your systems.

Resource Types and URIs

Resource URIs

Resources are identified by unique URIs following a simple format:

[protocol]://[host]/[path]

This flexible scheme allows for a wide variety of resource types:

  • file:///home/user/documents/report.pdf
  • postgres://database/customers/schema
  • screen://localhost/display1

The protocol prefix indicates how the resource should be accessed and interpreted, while the path structure is defined by each MCP server implementation.

Text vs. Binary Resources

MCP supports two primary resource types:

Text Resources:

  • UTF-8 encoded text data
  • Ideal for code, configuration, logs, and structured data
  • Directly readable by LLMs without conversion

Binary Resources:

  • Base64-encoded binary data
  • Perfect for images, PDFs, audio files, and videos
  • Requires additional processing by LLMs with multimodal capabilities
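
To make the distinction concrete, here is a rough sketch of what one entry of each kind could look like inside a resources/read response (the URIs, file path, and log line are made up for illustration):

import { readFile } from "node:fs/promises";

// A text resource entry: UTF-8 content travels in `text`
const textEntry = {
  uri: "file:///logs/app.log",
  mimeType: "text/plain",
  text: "2024-05-01 12:00:01 INFO server started"
};

// A binary resource entry: raw bytes are base64-encoded into `blob`
const pdfBytes = await readFile("/reports/q1.pdf");   // hypothetical path
const binaryEntry = {
  uri: "file:///reports/q1.pdf",
  mimeType: "application/pdf",
  blob: pdfBytes.toString("base64")
};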

Discovering Available Resources

MCP provides two complementary approaches for resource discovery:

Direct Resources

Concrete, immediately available resources are exposed via the resources/list endpoint:

{
  uri: string;           // Unique identifier
  name: string;          // Human-readable name
  description?: string;  // Optional description
  mimeType?: string;     // Optional MIME type
}

This allows LLMs to see exactly what data sources are available to them.
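
Filled in with hypothetical entries (the URIs reuse the earlier examples), a resources/list result might look like this:

{
  resources: [
    {
      uri: "file:///home/user/documents/report.pdf",
      name: "Quarterly Report",
      description: "Latest PDF report from the documents folder",
      mimeType: "application/pdf"
    },
    {
      uri: "postgres://database/customers/schema",
      name: "Customers Table Schema",
      mimeType: "application/json"
    }
  ]
}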

Resource Templates

For dynamic resources that follow patterns, MCP supports URI templates:

{
  uriTemplate: string;   // URI template following RFC 6570
  name: string;          // Human-readable name
  description?: string;  // Optional description
  mimeType?: string;     // Optional MIME type
}

These templates allow for parameterized access to entire classes of resources, like database tables or API endpoints, without having to list each one individually.
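
For example, a single hypothetical template could cover the schema of every table in a database; clients expand the RFC 6570 variables into concrete URIs before reading:

{
  uriTemplate: "postgres://database/{table}/schema",
  name: "Table Schema",
  description: "Schema for any table in the connected database",
  mimeType: "application/json"
}

Expanding {table} to customers yields postgres://database/customers/schema, which can then be passed to a normal resources/read request.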

Reading and Updating Resources

Basic Reading

To access a resource, clients make a resources/read request with the target URI. The server responds with the resource contents:

{
  contents: [
    {
      uri: string;        // Resource URI
      mimeType?: string;  // Content type
      text?: string;      // For text resources
      blob?: string;      // For binary resources (base64)
    }
  ]
}

Importantly, servers can return multiple related resources in a single response, such as all the files in a directory when that directory is read.
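
As a sketch, a request for a directory URI and its multi-file response could look like this (every URI and log line here is invented):

// resources/read request parameters
{
  uri: "file:///logs"
}

// resources/read response with one entry per file in the directory
{
  contents: [
    {
      uri: "file:///logs/app.log",
      mimeType: "text/plain",
      text: "2024-05-01 12:00:01 INFO server started"
    },
    {
      uri: "file:///logs/error.log",
      mimeType: "text/plain",
      text: "2024-05-01 12:03:44 ERROR connection refused"
    }
  ]
}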

Real-time Updates

MCP supports dynamic resources through two notification mechanisms:

  1. List Changes: Servers notify clients when available resources change
  2. Content Updates: Clients can subscribe to specific resources for real-time updates

This subscription model enables truly interactive experiences, with LLMs responding to changing data in real time.
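
On the server side, a minimal sketch might track subscribed URIs and push the spec's notifications/resources/updated message when a watched file changes. This assumes the TypeScript SDK's SubscribeRequestSchema and its generic notification() method; check both against the SDK version you're using:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SubscribeRequestSchema } from "@modelcontextprotocol/sdk/types.js";
import { watch } from "node:fs";

// Declare subscribe/listChanged support in the resources capability
const server = new Server(
  { name: "watch-server", version: "1.0.0" },
  { capabilities: { resources: { subscribe: true, listChanged: true } } }
);

const subscriptions = new Set<string>();   // URIs clients have subscribed to

// resources/subscribe: remember which URIs this client cares about
server.setRequestHandler(SubscribeRequestSchema, async (request) => {
  subscriptions.add(request.params.uri);
  return {};
});

// When the underlying file changes, emit a content-update notification
watch("/logs/app.log", () => {
  const uri = "file:///logs/app.log";
  if (subscriptions.has(uri)) {
    void server.notification({
      method: "notifications/resources/updated",
      params: { uri }
    });
  }
});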

Building MCP Resources in Practice

Let's look at a basic implementation of resources in TypeScript:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server({
  name: "example-server",
  version: "1.0.0"
}, {
  capabilities: {
    resources: {}
  }
});

// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: "file:///logs/app.log",
        name: "Application Logs",
        mimeType: "text/plain"
      }
    ]
  };
});

// Read resource contents
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const uri = request.params.uri;

  if (uri === "file:///logs/app.log") {
    // readLogFile() stands in for whatever log-loading logic your app uses
    const logContents = await readLogFile();
    return {
      contents: [
        {
          uri,
          mimeType: "text/plain",
          text: logContents
        }
      ]
    };
  }

  throw new Error("Resource not found");
});

This simple server exposes application logs as a resource that LLMs can read to troubleshoot issues or analyze patterns.
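
To actually serve requests, the server still needs a transport. With the SDK's stdio transport, wiring it up looks roughly like this (import path per the SDK at the time of writing):

import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Expose the server over stdio so an MCP client (e.g. a desktop app) can launch it
const transport = new StdioServerTransport();
await server.connect(transport);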

Real-World Applications

The flexibility of MCP resources enables countless use cases:

Development Context

  • Code repositories: Connect LLMs to your entire codebase
  • Documentation: Link technical specifications and API references
  • Logs and diagnostics: Provide real-time system information

Business Intelligence

  • Database schemas: Share database structure without write access
  • Analytics dashboards: Connect visualization data
  • Reports and documents: Make business intelligence available

Media and Content

  • Document libraries: Access knowledge bases and wikis
  • Images and diagrams: Share visual information
  • Audio transcripts: Work with recorded conversations

Security Best Practices

When implementing MCP resources, security should be a primary concern:

  1. Validate resource URIs to prevent path traversal and injection attacks (see the sketch after this list)
  2. Implement proper access controls for sensitive resources
  3. Consider rate limiting to prevent overuse of expensive resources
  4. Audit resource access to track usage patterns
  5. Sanitize and validate all data being exposed
  6. Encrypt sensitive data in transit
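
As one hedged illustration of point 1, a file-backed server could canonicalize the path from an incoming file:// URI and refuse anything that escapes its root directory (the ALLOWED_ROOT value and error messages are illustrative):

import path from "node:path";

const ALLOWED_ROOT = "/srv/mcp-data";   // illustrative root this server is allowed to expose

// Resolve a file:// URI to an absolute path, rejecting traversal attempts
function resolveFileUri(uri: string): string {
  const url = new URL(uri);
  if (url.protocol !== "file:") {
    throw new Error(`Unsupported scheme: ${url.protocol}`);
  }
  const resolved = path.resolve(decodeURIComponent(url.pathname));
  if (resolved !== ALLOWED_ROOT && !resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    throw new Error("Access outside the allowed root is not permitted");
  }
  return resolved;
}

// resolveFileUri("file:///srv/mcp-data/../etc/passwd") throws,
// while resolveFileUri("file:///srv/mcp-data/logs/app.log") returns a safe path.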

The Future of MCP Resources

As MCP continues to evolve, we can expect resources to become even more powerful:

  • Enhanced filtering capabilities for large resource collections
  • Improved binary handling for multimodal LLMs
  • Standardized permission models across implementations
  • Better caching mechanisms for frequently accessed resources

Conclusion

MCP resources represent a fundamental building block for connecting LLMs to your data. By providing a standardized way to expose information to AI models, they enable richer, more contextual interactions without compromising security or control.

Whether you're building a coding assistant that needs access to your codebase, a data analysis tool that works with your business intelligence, or a creative assistant that needs reference materials, MCP resources provide the bridge between your data and the AI models that can help you work with it.

