
Second Me

Homepage Report Discord Twitter Reddit

Our Vision

Companies like OpenAI built "Super AI" that threatens human independence. We crave individuality: AI that amplifies, not erases, YOU.

We’re challenging that with "Second Me": an open-source prototype where you craft your own AI self—a new AI species that preserves you, delivers your context, and defends your interests.

It’s locally trained and hosted—your data, your control—yet globally connected, scaling your intelligence across an AI network. Beyond that, it’s your AI identity interface: a bold standard that links your AI to the world, sparks collaboration among AI selves, and builds tomorrow’s truly native AI apps.

Tech enthusiasts, AI pros, domain experts: join us! Second Me is your launchpad for extending your mind into the digital horizon.

Key Features

Train Your AI Self with AI-Native Memory (Paper)

Start training your Second Me today with your own memories! Using Hierarchical Memory Modeling (HMM) and the Me-Alignment Algorithm, your AI self captures your identity, understands your context, and reflects you authentically.

Scale Your Intelligence on the Second Me Network

Launch your AI self from your laptop onto our decentralized network—anyone or any app can connect with your permission, sharing your context as your digital identity.

Build Tomorrow’s Apps with Second Me

  • Roleplay: Your AI self switches personas to represent you in different scenarios.
  • AI Space: Collaborate with other Second Mes to spark ideas or solve problems.

100% Privacy and Control

Unlike traditional centralized AI systems, Second Me ensures that your information and intelligence remain local and completely private.

Getting started & staying tuned with us

Star and watch the repository to receive every release notification from GitHub without delay!

Quick Start

🐳 Option 1: Docker Setup

Prerequisites

  • Docker and Docker Compose installed on your system

  • For Windows Users: You can use MinGW to run make commands. You may need to modify the Makefile by replacing Unix-specific commands with Windows-compatible alternatives.

  • Memory Usage Settings (important):

    • Configure these settings in Docker Desktop (macOS or Windows) at: Dashboard -> Settings -> Resources
    • Make sure to allocate sufficient memory resources (at least 8GB recommended); a quick way to verify the allocation is shown below
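To double-check the allocation, you can ask the Docker daemon how much memory it sees. This is only a rough sanity check against the 8GB recommendation above, using the standard Docker CLI rather than anything specific to Second Me:

# Memory available to the Docker daemon, in bytes (should be roughly 8 GB or more)
docker info --format '{{.MemTotal}}'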

Setup Steps

  1. Clone the repository
git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
  2. Start the containers
make docker-up
  3. After the containers are up, open your browser and visit (a quick health check is shown after these steps):
http://localhost:3000
  4. View help and more commands
make help
  5. For custom Ollama model configuration, please refer to: Custom Model Config(Ollama)
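A minimal way to sanity-check the Docker setup after `make docker-up`, assuming the standard Docker Compose workflow and the default port used above; the exact service names depend on the project's docker-compose.yml:

# List the containers for this compose project and confirm they are running
docker compose ps

# Confirm the web interface responds on the default port
curl -I http://localhost:3000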

🖥️ Option 2: Manual Setup (Cross-Platform Guide)

✅ Prerequisites

  • Miniforge/Miniconda
📦 Install Dependencies

The following steps are adapted from scripts/setup.sh and scripts/start_local.sh.

🐍 Python Environment Setup with Conda and Poetry

We recommend managing the Python environment using Miniconda, and handling dependencies with Poetry. While Conda and Poetry are independent tools, they can be used together effectively:

  • Conda provides flexible and isolated environment management.
  • Poetry offers strict and declarative dependency management.

Below is a step-by-step example of combining them:

# Set up Python Environment
conda create -n secondme python=3.12
conda activate secondme

# (Recommended) Install Poetry inside the Conda environment
# This avoids using system-wide Poetry and keeps dependencies isolated
pip install poetry

# (Optional) Set a custom Python package index (e.g., TUNA mirror for better speed in China)
# Poetry 1.5+ syntax: register the mirror as the primary package source
poetry source add --priority=primary tuna https://pypi.tuna.tsinghua.edu.cn/simple

poetry install --no-root --no-interaction

# Install specific version of GraphRAG from local archive
# ⚠️ Adjust the path separator based on your OS (e.g., \ on Windows, / on Unix)
pip install --force-reinstall dependencies/graphrag-1.2.1.dev27.tar.gz
# Install Frontend Dependencies 
cd lpm_frontend
npm install
cd ..

# Build llama.cpp Dependencies 
unzip -q dependencies/llama.cpp.zip
cd llama.cpp
mkdir -p build && cd build
cmake ..
cmake --build . --config Release
cd ../..
Run Servers
# Initialize SQL Database
mkdir -p "./data/sqlite"
cat docker/sqlite/init.sql | sqlite3 ./data/sqlite/lpm.db

# Initialize ChromaDB Database
mkdir -p logs
python docker/app/init_chroma.py

# Start the Backend Server (develop mode)
python -m flask run --host=0.0.0.0 --port=8002 >> "logs/backend.log" 2>&1
# If deploying in a production environment, please use `nohup` and `disown` commands to keep it running persistently in the background.
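# Example (sketch, not taken from the project's scripts): the same command wrapped
# with nohup and disown so the backend keeps running after the shell exits
nohup python -m flask run --host=0.0.0.0 --port=8002 >> "logs/backend.log" 2>&1 &
disown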

# Start the Frontend Server (Open Another Terminal Shell)
cd lpm_frontend
npm run build
npm run start

ℹ️ Note: If the frontend and backend are deployed on separate servers, make sure to configure the HOST_ADDRESS in the .env file accordingly.
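For illustration only, a hypothetical .env entry for such a split deployment: 192.168.1.10 is a placeholder for the backend host, and the exact value format expected for HOST_ADDRESS should be taken from the .env template shipped with the repository.

# .env on the frontend server - point the frontend at the machine running the backend
HOST_ADDRESS=192.168.1.10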

Accessing the Service

After starting the service (either with local setup or Docker), open your browser and visit:

http://localhost:3000

View help and more commands

make help

Tutorial and Use Cases

🛠️ Feel free to follow the User tutorial to build your Second Me.

💡 Check out the links below to see how Second Me can be used in real-life scenarios:

Join the Community

Coming Soon

The following features have been completed internally and are being gradually integrated into the open-source project. For detailed experimental results and technical specifications, please refer to our Technical Report.

Model Enhancement Features

  • Long Chain-of-Thought Training Pipeline: Enhanced reasoning capabilities through extended thought process training
  • Direct Preference Optimization for L2 Model: Improved alignment with user preferences and intent
  • Data Filtering for Training: Advanced techniques for higher quality training data selection
  • Apple Silicon Support: Native support for Apple Silicon processors with MLX Training and Serving capabilities

Product Features

  • Natural Language Memory Summarization: Intuitive memory organization in natural language format

Contributing

We welcome contributions to Second Me! Whether you're interested in fixing bugs, adding new features, or improving documentation, please check out our Contribution Guide. You can also support Second Me by sharing your experience with it in your community, at tech conferences, or on social media.

For more detailed information about development, please refer to our Contributing Guide.

Contributors

We would like to express our gratitude to all the individuals who have contributed to Second Me! If you're interested in contributing to the future of intelligence uploading, whether through code, documentation, or ideas, please feel free to submit a pull request to our repository: Second-Me.

Made with contrib.rocks.

Acknowledgements

This work leverages the power of the open-source community.

For data synthesis, we utilized GraphRAG from Microsoft.

For model deployment, we utilized llama.cpp, which provides efficient inference capabilities.

Our base models primarily come from the Qwen2.5 series.

We also want to extend our sincere gratitude to all users who have experienced Second Me. We recognize that there is significant room for optimization throughout the entire pipeline, and we are fully committed to iterative improvements to ensure everyone can enjoy the best possible experience locally.

License

Second Me is open source software licensed under the Apache License 2.0. See the LICENSE file for more details.

Star History

Star History Chart
