SceneIntruderMCP

AI-Powered Immersive Interactive Storytelling Platform

English | 简体中文
Project Overview
SceneIntruderMCP is an AI-driven interactive storytelling platform that combines traditional text analysis with modern AI technology to deliver an immersive role-playing and story-creation experience.
Core Features
Intelligent Text Analysis
- Multi-dimensional Parsing: Automatically extract scenes, characters, items, and plot elements
- Bilingual Support: Intelligent recognition and processing of both Chinese and English content
- Deep Analysis: Professional-grade text type identification based on literary theory
AI Character System
- Emotional Intelligence: 8-dimensional emotional analysis (emotion, action, expression, tone, etc.)
- Character Consistency: Maintain long-term memory and personality traits
- Dynamic Interaction: Intelligently triggered automatic dialogues between characters
- Character Memory: Persistent knowledge base that characters remember across interactions
- Relationship Mapping: Dynamic relationship tracking between characters
- Personality Modeling: Comprehensive personality profiles affecting dialogue and behavior
Dynamic Story Engine
- Non-linear Narrative: Support complex story branching and timeline management
- Intelligent Choice Generation: AI dynamically creates 4 types of choices based on context (Action/Dialogue/Investigation/Strategy)
- Story Rewind: Complete timeline rollback and state management
- Branch Visualization: Visual representation of story branches and pathways
- Progressive Storytelling: Continuous story development across sessions
- Context Preservation: Maintain story context when returning to scenes
- Timeline Management: Sophisticated handling of non-linear story timelines
Interactive Game Mechanics
- Inventory System: Rich object management with interactive items
- Skill System: User-defined abilities affecting story outcomes
- Character Relationships: Track evolving relationships between characters
- World Building: Dynamic scene and location management
- Quest Tracking: Mission and objective management system
- Achievement System: Recognition for story exploration and interaction milestones
Gamified Experience
- User Customization: Custom items and skills system
- Creativity Control: 3-level creativity control (Strict/Balanced/Expansive)
- Progress Tracking: Real-time story completion and statistical analysis
User Items & Skills Management
- Custom Items: Users can define unique items with customizable properties
- Custom Skills: Users can create and manage skills with different effects and levels
- Property System: Items can have multiple properties (attack, defense, magic, durability, etc.)
- Rarity Levels: Items support different rarity tiers: common, rare, epic, legendary
- Skill Trees: Hierarchical skill system with prerequisites and requirements
- Character Interaction: Items and skills can affect character interactions and story outcomes
- API Integration: Full CRUD operations available via API for managing user-defined content
Multi-LLM Support
- OpenAI GPT: GPT-4.1/4o/5-chat series
- Anthropic Claude: Claude-3.5/4.5 series
- DeepSeek: DeepSeek-chat series
- Google Gemini: Gemini-2.5/3.0 series
- Grok: xAI's Grok-4/3 series
- Mistral: Mistral-large/small series
- Qwen: Alibaba Cloud Qwen3 series
- GitHub Models: Via GitHub Models platform (GPT-4o/4.1, etc.)
- OpenRouter: Open source model aggregation platform with free tiers
- GLM: Zhipu AI's GLM-4/4-plus series
Technical Architecture
Project Structure
> **Tip**: The `default_model` value for the active provider is now respected across the backend. Any AI call that doesn't explicitly pass a model name will automatically fall back to this configuration, so you can centrally switch models without touching code.
SceneIntruderMCP/
├── cmd/
│   └── server/           # Application entry point
│       └── main.go
├── internal/
│   ├── api/              # HTTP API routes and handlers
│   ├── app/              # Application core logic
│   ├── config/           # Configuration management
│   ├── di/               # Dependency injection
│   ├── llm/              # LLM provider abstraction layer
│   │   └── providers/    # Various LLM provider implementations
│   ├── models/           # Data model definitions
│   ├── services/         # Business logic services
│   └── storage/          # Storage abstraction layer
├── frontend/
│   └── dist/             # Built frontend assets
├── data/                 # Data storage directory
│   ├── scenes/           # Scene data
│   ├── stories/          # Story data
│   ├── users/            # User data
│   └── exports/          # Export files
└── logs/                 # Application logs
Core Technology Stack
- Backend: Go 1.21+, Gin Web Framework
- AI Integration: Multi-LLM provider support with unified abstraction interface
- Storage: File system-based JSON storage with database extension support
- Frontend: React, responsive design
- Deployment: Containerization support, cloud-native architecture
Release Highlights (v1.2.0 · 2025-11-27)
- Scene deletion cleanup: DELETE /api/scenes/{id} now synchronously removes the matching data/stories/<scene_id> timeline, ensuring no orphaned story files remain after a scene is removed.
- GitHub Models fallback fixes: Provider bootstrap now respects the configured default_model even when only GitHub Models credentials are supplied, eliminating the previous "connection failed" errors.
- Operational readiness upgrades: Documented the persistent encryption key (data/.encryption_key), refreshed the API/deployment guides, and added a pre-release data cleanup checklist so release artifacts stay tidy.
Pre-release Data Cleanup Checklist
Before packaging a new build or resetting a shared demo environment, wipe transient data while preserving configuration secrets.
Remove before releasing
- `data/scenes/*`: per-scene caches, characters, and context files
- `data/stories/*`: story timelines (v1.2.0+ deletes these automatically alongside scenes)
- `data/items/*`: scene item caches
- `data/exports/*`: exported archives and interaction summaries
- `data/stats/usage_stats.json`: accumulated telemetry
- `temp/*`: temporary uploads and scratch files
- `logs/*.log`: runtime logs (archive first if you need them)

Keep (or rotate with care)
- `data/config.json`: persisted runtime settings and encrypted API keys
- `data/.encryption_key`: AES-GCM key required to decrypt stored LLM credentials; deleting it forces you to re-enter every API key
- `data/users/*.json`: built-in accounts such as admin.json and console_user.json

> **Note**: Scenes deleted prior to v1.2.0 may have left residual data/stories/scene_* folders. You can safely remove those directories manually to reclaim disk space.
Quick Start
System Requirements
- Go 1.21 or higher
- At least one LLM API key (OpenAI/Claude/DeepSeek, etc.)
- 2GB+ available memory
- Operating System: Windows/Linux/macOS
Installation Steps
- Clone the Project
git clone https://github.com/Corphon/SceneIntruderMCP.git
cd SceneIntruderMCP
- Install Dependencies
go mod download
- Configure Environment
On first start, the server initializes a configuration file at data/config.json (or ${DATA_DIR}/config.json).
You can configure the LLM provider/API key either:
  - via the Settings UI at http://localhost:8080/settings, or
  - by editing data/config.json directly.
- Start Service
# Development mode
go run cmd/server/main.go
# Production mode
go build -o sceneintruder cmd/server/main.go
./sceneintruder
- Access Application
Open browser: http://localhost:8080
Configuration Guide
data/config.json Configuration Example
{
  "port": "8080",
  "data_dir": "data",
  "static_dir": "frontend\\dist\\assets",
  "templates_dir": "frontend\\dist",
  "log_dir": "logs",
  "debug_mode": true,
  "llm_provider": "openrouter",
  "llm_config": {
    "default_model": "mistralai/devstral-2512:free",
    "base_url": "",
    "api_key": ""
  },
  "encrypted_llm_config": {
    "api_key": "<encrypted_api_key_here>"
  }
}
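For illustration, the example above could be parsed with a small Go struct. The field names below are inferred from the sample JSON and may not match the actual types in internal/config.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LLMConfig mirrors the "llm_config" block in the sample above.
type LLMConfig struct {
	DefaultModel string `json:"default_model"`
	BaseURL      string `json:"base_url"`
	APIKey       string `json:"api_key"`
}

// Config mirrors the data/config.json example; inferred, not the real struct.
type Config struct {
	Port         string    `json:"port"`
	DataDir      string    `json:"data_dir"`
	StaticDir    string    `json:"static_dir"`
	TemplatesDir string    `json:"templates_dir"`
	LogDir       string    `json:"log_dir"`
	DebugMode    bool      `json:"debug_mode"`
	LLMProvider  string    `json:"llm_provider"`
	LLMConfig    LLMConfig `json:"llm_config"`
}

// loadConfig parses a config.json payload into the struct above.
func loadConfig(data []byte) (Config, error) {
	var cfg Config
	err := json.Unmarshal(data, &cfg)
	return cfg, err
}

func main() {
	sample := []byte(`{"port":"8080","llm_provider":"openrouter","llm_config":{"default_model":"mistralai/devstral-2512:free"}}`)
	cfg, err := loadConfig(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.LLMProvider, cfg.LLMConfig.DefaultModel)
}
```

Since `default_model` lives under `llm_config`, any call site that omits a model name can fall back to `cfg.LLMConfig.DefaultModel`, which is the centralized switching behavior the tip above describes.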
Configuration Encryption & .encryption_key
- When CONFIG_ENCRYPTION_KEY isn't provided, the backend generates a random 32-byte key and stores it in data/.encryption_key so encrypted API keys keep working between restarts.
- The file must stay alongside data/config.json; deleting it invalidates every encrypted credential until you re-enter them through the settings UI.
- To rotate the key intentionally, delete the file, restart the server, and immediately update the API keys; new data will be re-encrypted with the regenerated key.
- Keep .encryption_key out of version control and deployment artifacts that are meant to be shared publicly.
User Guide
Creating Scenes
- Upload Text: Supports various text formats, including novels, scripts, and stories
- AI Analysis: System automatically extracts characters, scenes, items, and other elements
- Scene Generation: Create interactive scene environments
Character Interaction
- Select Character: Choose interaction targets from analyzed characters
- Natural Dialogue: Engage in natural language conversations with AI characters
- Emotional Feedback: Observe character emotions, actions, and expression changes
Story Branching
- Dynamic Choices: AI generates 4 types of choices based on current situation
- Story Development: Advance non-linear story plots based on choices
- Branch Management: Support story rewind and multi-branch exploration
Data Export
- Interaction Records: Export complete dialogue history
- Story Documents: Generate structured story documents
- Statistical Analysis: Character interaction and story progress statistics
Export Functionality Details
- Multiple Formats: Export data in JSON, Markdown, HTML, TXT, CSV, and PDF formats
- Comprehensive Scene Data: Export full scene information including characters, locations, items, themes, atmosphere, and settings
- Character Interactions: Export detailed interaction records between characters with timestamps and emotional context
- Story Branches: Export complete story trees with all possible branches, choices, and outcomes
- Conversation History: Export all character conversations with metadata
- Progress Statistics: Export story progress metrics, interaction statistics, and timeline data
- User Preferences: Export user customization settings, items, and skills
- Batch Export: Support for exporting multiple scenes or stories simultaneously
- Scheduled Exports: Option for automated periodic exports
- Filtered Exports: Export based on time range, character participation, or interaction type
- Rich Metadata: Include timestamps, version information, and export configuration
- Export Status Tracking: Monitor ongoing export tasks with progress indicators
- Export History: Maintain history of all performed exports
- File Organization: Automatic organization of exported files in structured directories
- Export Quality Assurance: Validation of exported data integrity
- Performance Optimization: Efficient export processing for large datasets
API Documentation
Available API Endpoints
Scene Management
GET /api/scenes # Get scene list
POST /api/scenes # Create scene
GET /api/scenes/{id} # Get scene details
GET /api/scenes/{id}/characters # Get scene characters
GET /api/scenes/{id}/conversations # Get scene conversations
GET /api/scenes/{id}/aggregate # Get scene aggregate data
Story System
GET /api/scenes/{id}/story # Get story data
POST /api/scenes/{id}/story/choice # Make story choice
POST /api/scenes/{id}/story/advance # Advance story
POST /api/scenes/{id}/story/rewind # Rewind story to a specific node
GET /api/scenes/{id}/story/branches # Get story branches
Export Functions
GET /api/scenes/{id}/export/scene # Export scene data
GET /api/scenes/{id}/export/interactions # Export interactions
GET /api/scenes/{id}/export/story # Export story document
Interaction Aggregation
POST /api/interactions/aggregate # Process aggregated interactions
GET /api/interactions/{scene_id} # Get character interactions
GET /api/interactions/{scene_id}/{character1_id}/{character2_id} # Get character-to-character interactions
Batch Operations
POST /api/scenes/{id}/story/batch # Batch story operations
User Management
GET /api/users/{user_id} # Get user profile
PUT /api/users/{user_id} # Update user profile
GET /api/users/{user_id}/preferences # Get user preferences
PUT /api/users/{user_id}/preferences # Update user preferences
User Items and Skills Management
# User Items
GET /api/users/{user_id}/items # Get user items
POST /api/users/{user_id}/items # Add user item
GET /api/users/{user_id}/items/{item_id} # Get specific item
PUT /api/users/{user_id}/items/{item_id} # Update user item
DELETE /api/users/{user_id}/items/{item_id} # Delete user item
# User Skills
GET /api/users/{user_id}/skills # Get user skills
POST /api/users/{user_id}/skills # Add user skill
GET /api/users/{user_id}/skills/{skill_id} # Get specific skill
PUT /api/users/{user_id}/skills/{skill_id} # Update user skill
DELETE /api/users/{user_id}/skills/{skill_id} # Delete user skill
Configuration and Health Checks
GET /api/config/health # Get configuration health status
GET /api/config/metrics # Get configuration metrics
GET /api/settings # Get system settings
POST /api/settings # Update system settings
POST /api/settings/test-connection # Test connection
WebSocket Management
GET /api/ws/status # Get WebSocket connection status
POST /api/ws/cleanup # Clean up expired WebSocket connections
Text Analysis & File Upload
POST /api/analyze # Analyze text content
GET /api/progress/{taskID} # Get analysis progress
POST /api/cancel/{taskID} # Cancel analysis task
POST /api/upload # Upload file
Character Interaction & Chat
POST /api/chat # Basic chat with characters
POST /api/chat/emotion # Chat with emotion analysis
POST /api/interactions/trigger # Trigger character interactions
POST /api/interactions/simulate # Simulate character dialogue
LLM Management
GET /api/llm/status # Get LLM service status
GET /api/llm/models # Get available models
PUT /api/llm/config # Update LLM configuration
WebSocket Support
WS /ws/scene/{id} # Scene WebSocket connection
WS /ws/user/status # User status WebSocket connection
API Usage Examples
Story Interaction Flow
// 1. Get story data
const storyData = await fetch('/api/scenes/scene123/story');

// 2. Make a choice
const choiceResult = await fetch('/api/scenes/scene123/story/choice', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    node_id: 'node_1',
    choice_id: 'choice_a'
  })
});

// 3. Export story
const storyExport = await fetch('/api/scenes/scene123/export/story?format=markdown');
Character Interaction
// 1. Basic chat
const chatResponse = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    scene_id: 'scene123',
    character_id: 'char456',
    message: 'Hello, how are you?'
  })
});

// 2. Trigger character interaction
const interaction = await fetch('/api/interactions/trigger', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    scene_id: 'scene123',
    character_ids: ['char1', 'char2'],
    topic: 'Discussing the mysterious artifact'
  })
});
User Customization
// 1. Add custom item
const newItem = await fetch('/api/users/user123/items', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'Magic Sword',
    description: 'A legendary sword with mystical powers',
    type: 'weapon',
    properties: { attack: 50, magic: 30 }
  })
});

// 2. Add skill
const newSkill = await fetch('/api/users/user123/skills', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'Fireball',
    description: 'Cast a powerful fireball spell',
    type: 'magic',
    level: 3
  })
});
WebSocket Integration
Scene WebSocket Connection
// Connect to scene WebSocket
const sceneWs = new WebSocket(`ws://localhost:8080/ws/scene/scene123?user_id=user456`);

sceneWs.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Scene update:', data);
};

// Send character interaction
sceneWs.send(JSON.stringify({
  type: 'character_interaction',
  character_id: 'char123',
  message: 'Hello everyone!'
}));

// Send story choice
sceneWs.send(JSON.stringify({
  type: 'story_choice',
  node_id: 'story_node_1',
  choice_id: 'choice_a',
  user_preferences: {
    creativity_level: 'balanced',
    allow_plot_twists: true
  }
}));
User Status WebSocket
// Connect to user status WebSocket
const statusWs = new WebSocket(`ws://localhost:8080/ws/user/status?user_id=user456`);

statusWs.onmessage = (event) => {
  const data = JSON.parse(event.data);
  switch (data.type) {
    case 'heartbeat':
      console.log('Connection alive');
      break;
    case 'user_status_update':
      console.log('User status changed:', data.status);
      break;
    case 'error':
      console.error('WebSocket error:', data.error);
      break;
    default:
      console.log('Received:', data);
  }
};
Supported WebSocket Message Types
- character_interaction: Character-to-character interactions
- story_choice: Story decision-making events
- user_status_update: User presence and status updates
- conversation:new: New conversation events
- heartbeat: Connection health checks
- pong: Heartbeat response messages
- error: Error notifications
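The message types above can be modeled as a small JSON envelope. This Go sketch infers field names from the JavaScript examples in this section and may not match the server's exact schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WSMessage is an illustrative envelope covering the message types listed
// above. Field names are inferred from the client examples, not the server.
type WSMessage struct {
	Type        string `json:"type"`
	CharacterID string `json:"character_id,omitempty"`
	Message     string `json:"message,omitempty"`
	NodeID      string `json:"node_id,omitempty"`
	ChoiceID    string `json:"choice_id,omitempty"`
}

// encode serializes a message for sending over the WebSocket.
func encode(msg WSMessage) (string, error) {
	b, err := json.Marshal(msg)
	return string(b), err
}

func main() {
	s, _ := encode(WSMessage{Type: "story_choice", NodeID: "story_node_1", ChoiceID: "choice_a"})
	// prints {"type":"story_choice","node_id":"story_node_1","choice_id":"choice_a"}
	fmt.Println(s)
}
```

The `omitempty` tags keep lightweight messages such as `heartbeat` down to the single `type` field, matching the payloads the client examples send.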
Client-Side Realtime Management
The frontend uses the RealtimeManager class to handle WebSocket communications:
// Initialize scene realtime functionality
await window.realtimeManager.initSceneRealtime('scene_123');

// Send character interaction
window.realtimeManager.sendCharacterInteraction('scene_123', 'character_456', 'Hello!');

// Subscribe to story events
window.realtimeManager.on('story:event', (data) => {
  // Handle story updates
  console.log('Story event:', data);
});

// Get connection status
const status = window.realtimeManager.getConnectionStatus();
console.log('WebSocket status:', status);
Standard Success Response
{
  "success": true,
  "data": {
    // Response data
  },
  "timestamp": "2024-01-01T12:00:00Z"
}
Error Response
{
  "success": false,
  "error": "Error message description",
  "code": "ERROR_CODE",
  "timestamp": "2024-01-01T12:00:00Z"
}
Export Response
{
  "file_path": "/exports/story_20240101_120000.md",
  "content": "# Story Export\n\n...",
  "format": "markdown",
  "size": 1024,
  "timestamp": "2024-01-01T12:00:00Z"
}
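A client can decode these envelopes with matching structs. The following sketch mirrors the export response example above; field names are taken from the sample payload, not from the server source.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// ExportResponse mirrors the export response sample above.
type ExportResponse struct {
	FilePath  string    `json:"file_path"`
	Content   string    `json:"content"`
	Format    string    `json:"format"`
	Size      int       `json:"size"`
	Timestamp time.Time `json:"timestamp"`
}

// parseExport decodes an export response body.
func parseExport(data []byte) (ExportResponse, error) {
	var r ExportResponse
	err := json.Unmarshal(data, &r)
	return r, err
}

func main() {
	sample := []byte(`{"file_path":"/exports/story_20240101_120000.md","format":"markdown","size":1024,"timestamp":"2024-01-01T12:00:00Z"}`)
	r, err := parseExport(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(r.Format, r.Size)
}
```

Using `time.Time` for the timestamp works because the samples use RFC 3339 strings, which Go's JSON decoder parses natively.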
Authentication & Security
Currently, the API uses session-based authentication for user management. For production deployment, consider implementing:
- JWT Authentication: Token-based authentication for API access
- Rate Limiting: API call frequency limits
- Input Validation: Strict parameter validation and sanitization
- HTTPS Only: Force HTTPS for all production traffic
For detailed API documentation, see: API Documentation
Development Guide
Running Tests
# Run all tests
go test ./...
# Run tests with coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
# Run specific package tests
go test ./internal/services/...
Adding New LLM Providers
- Implement Interface: Create a new provider in internal/llm/providers/
- Register Provider: Register it in an init() function
- Add Configuration: Update the configuration file template
- Write Tests: Add corresponding unit tests
Code Structure Explanation
- models/: Data models defining core entities in the system
- services/: Business logic layer handling core functionality
- api/: HTTP handlers exposing RESTful APIs
- llm/: LLM abstraction layer supporting multiple AI providers
Performance Optimization
- Concurrent Processing: Support for multiple simultaneous users
- Caching Mechanism: Intelligent caching of LLM responses
- Memory Optimization: Load on demand, prevent memory leaks
- File Compression: Automatic compression of historical data
Monitoring Metrics
- API Usage Statistics: Request count and token consumption
- Response Time: AI model response speed monitoring
- Error Rate: System and API error tracking
- Resource Usage: CPU and memory usage monitoring
Security Considerations
Data Security
- API Keys: Secure storage with environment variable support
- User Data: Local storage with complete privacy control
- Access Control: User session and permission management support
- Data Backup: Automatic backup of important data
Network Security
- HTTPS Support: HTTPS recommended for production environments
- CORS Configuration: Secure cross-origin resource sharing configuration
- Input Validation: Strict user input validation and sanitization
Data Security & API Key Encryption
- AES-GCM Encryption: API keys are encrypted with the AES-GCM algorithm before storage
- Environment Variable Priority: API keys are primarily loaded from environment variables (e.g., OPENAI_API_KEY)
- Encrypted Storage: When stored in configuration files, API keys are kept in encrypted form in the EncryptedLLMConfig field
- Runtime Decryption: API keys are decrypted only when needed for API calls
- Automatic Migration: Legacy unencrypted API keys are automatically migrated to encrypted storage
- Secure Backward Compatibility: The system handles the transition from unencrypted to encrypted API key storage
- Configuration Security: For best security, set the encryption key via the CONFIG_ENCRYPTION_KEY environment variable
- Fallback Protection: Fallback mechanisms prevent storing API keys as plain text
- Key Derivation: In the absence of an environment-provided encryption key, the system derives one from multiple entropy sources
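To illustrate the AES-GCM flow described above, here is a minimal sketch using Go's standard library. The actual key handling, encodings, and storage format in the backend may differ.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// encrypt seals plaintext with AES-GCM and returns base64(nonce||ciphertext).
func encrypt(key, plaintext []byte) (string, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	// Prepend the nonce so decrypt can recover it.
	sealed := gcm.Seal(nonce, nonce, plaintext, nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}

// decrypt reverses encrypt and authenticates the ciphertext.
func decrypt(key []byte, encoded string) ([]byte, error) {
	sealed, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // zero key for demo only; the real key lives in data/.encryption_key
	enc, err := encrypt(key, []byte("sk-demo"))
	if err != nil {
		panic(err)
	}
	dec, err := decrypt(key, enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(dec)) // prints sk-demo
}
```

Because GCM authenticates the ciphertext, tampering with the stored value makes `decrypt` return an error instead of silently yielding garbage, which is why losing or rotating the key invalidates every stored credential at once.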
Contributing
We welcome all forms of contributions!
Ways to Contribute
- Bug Reports: Use GitHub Issues to report problems
- Feature Suggestions: Propose ideas and suggestions for new features
- Code Contributions: Submit Pull Requests
- Documentation Improvements: Help improve documentation and examples
Development Process
- Fork the project repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Commit your changes: git commit -m 'Add amazing feature'
- Push the branch: git push origin feature/amazing-feature
- Create a Pull Request
Code Standards
- Follow official Go coding style
- Add necessary comments and documentation
- Write unit tests covering new features
- Ensure all tests pass
License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details
Acknowledgments
Core Technologies
- Go - High-performance programming language
- Gin - Lightweight web framework
- OpenAI - GPT series models
- Anthropic - Claude series models
Thanks to all developers and users who have contributed to this project!
If this project helps you, please consider giving it a Star!
Made with ❤️ by the SceneIntruderMCP Team