# MCP - Titan Memory Server implementation
Collaboration between @jasonkneen and @ExpressionsBot.
Follow us on X
An implementation inspired by Google Research's paper "Generative AI for Programming: A Common Task Framework". This server provides a neural memory system that can learn and predict sequences while maintaining state through a memory vector, following principles outlined in the research for improved code generation and understanding.
## Research Background

This implementation draws on the concepts presented in the Google Research paper (Muennighoff et al., 2024), which introduces a framework for evaluating and improving code generation models. The Titan Memory Server implements key concepts from the paper:
- Memory-augmented sequence learning
- Surprise metric for novelty detection
- Manifold optimization for stable learning
- State maintenance through memory vectors
These features align with the paper's goals of improving code understanding and generation through better memory and state management; a minimal sketch of the surprise idea follows.
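To make the surprise metric concrete (as an illustrative sketch only, not this server's actual internals), surprise can be scored as the distance between the model's predicted next state and the state actually observed; the function name below is hypothetical.

```typescript
// Hypothetical sketch: surprise as the Euclidean distance between the
// predicted next state and the observed next state. A large value means
// the input was poorly predicted, i.e. novel.
function surprise(predicted: number[], observed: number[]): number {
  if (predicted.length !== observed.length) {
    throw new Error('Vector dimensions must match');
  }
  let sumSquares = 0;
  for (let i = 0; i < predicted.length; i++) {
    const diff = predicted[i] - observed[i];
    sumSquares += diff * diff;
  }
  return Math.sqrt(sumSquares);
}
```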
## Features
- Neural memory model with configurable dimensions
- Sequence learning and prediction
- Surprise metric calculation
- Model persistence (save/load)
- Memory state management
- Full MCP tool integration
## Installation

```bash
# Install dependencies
npm install

# Build the project
npm run build

# Run tests
npm test
```
## Available MCP Tools
### 1. init_model

Initialize the Titan Memory model with custom configuration.

```typescript
{
  inputDim?: number;   // Input dimension (default: 64)
  outputDim?: number;  // Output/Memory dimension (default: 64)
}
```
### 2. train_step

Perform a single training step with current and next state vectors.

```typescript
{
  x_t: number[];    // Current state vector
  x_next: number[]; // Next state vector
}
```
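For example, a single step might be invoked as below, using the same callTool helper style as the Example Usage section; the exact shape of the returned training feedback (loss, surprise, and so on) depends on the implementation, so treat the field names as unknown until inspected.

```typescript
// Hypothetical call: teach the model that `next` follows `current`.
// Both vectors must match the configured inputDim (64 by default).
const current = new Array(64).fill(0);
const next = new Array(64).fill(0);
current[0] = 1;
next[1] = 1;

const stepResult = await callTool('train_step', { x_t: current, x_next: next });
console.log(stepResult); // inspect the returned training feedback
```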
### 3. forward_pass

Run a forward pass through the model with an input vector.

```typescript
{
  x: number[]; // Input vector
}
```
### 4. save_model

Save the model to a specified path.

```typescript
{
  path: string; // Path to save the model
}
```
### 5. load_model

Load the model from a specified path.

```typescript
{
  path: string; // Path to load the model from
}
```
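A persistence round trip might look like the following sketch; the file path is illustrative only and not prescribed by the server.

```typescript
// Hypothetical round trip: persist the trained weights, then restore
// them later (for example after a process restart).
await callTool('save_model', { path: './titan-memory-model' });
await callTool('load_model', { path: './titan-memory-model' });
```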
### 6. get_status

Get current model status and configuration.

```typescript
{} // No parameters required
```
### 7. train_sequence

Train the model on a sequence of vectors.

```typescript
{
  sequence: number[][]; // Array of vectors to train on
}
```
## Example Usage

```typescript
// Initialize model
await callTool('init_model', { inputDim: 64, outputDim: 64 });

// Train on a sequence
const sequence = [
  [1, 0, 0, /* ... */],
  [0, 1, 0, /* ... */],
  [0, 0, 1, /* ... */]
];
await callTool('train_sequence', { sequence });

// Run forward pass
const result = await callTool('forward_pass', {
  x: [1, 0, 0, /* ... */]
});
```
## Technical Details
- Built with TensorFlow.js for efficient tensor operations
- Uses manifold optimization for stable learning
- Implements surprise metric for novelty detection
- Memory management with proper tensor cleanup
- Type-safe implementation with TypeScript
- Comprehensive error handling
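To illustrate the tensor-cleanup point (independent of this server's actual code), TensorFlow.js encourages wrapping intermediate tensor work in tf.tidy() so temporaries are disposed automatically; the helper below is a minimal sketch under that assumption.

```typescript
import * as tf from '@tensorflow/tfjs';

// Illustrative only: compute a mean-squared error between two vectors
// inside tf.tidy(), so every intermediate tensor is released afterwards.
function meanSquaredError(a: number[], b: number[]): number {
  return tf.tidy(() => {
    const ta = tf.tensor1d(a);
    const tb = tf.tensor1d(b);
    // dataSync() copies the scalar value out before tidy() disposes
    // the tensors created inside this callback.
    return tf.losses.meanSquaredError(ta, tb).dataSync()[0];
  });
}
```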
## Testing
The project includes comprehensive tests covering:
- Model initialization and configuration
- Training and forward pass operations
- Memory state management
- Model persistence
- Edge cases and error handling
- Tensor cleanup and memory management
Run tests with:

```bash
npm test
```
## Implementation Notes

- All tensor operations are wrapped in `tf.tidy()` for proper memory management
- Implements proper error handling with detailed error messages
- Uses type-safe MCP tool definitions
- Maintains memory state between operations
- Handles floating-point precision issues with epsilon tolerance
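On the epsilon-tolerance point, floating-point vectors are typically compared against a small threshold rather than with exact equality. The helper below is a hypothetical illustration of that idea, not code taken from this server.

```typescript
// Hypothetical helper: two vectors are "equal" when every component
// differs by less than a small tolerance (epsilon).
function nearlyEqual(a: number[], b: number[], epsilon = 1e-6): boolean {
  if (a.length !== b.length) return false;
  return a.every((value, i) => Math.abs(value - b[i]) < epsilon);
}
```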
## License
MIT License - feel free to use and modify as needed!