Data Sources

Memory ingests your digital footprint from 50+ sources. Every email, document, code commit, and conversation becomes part of your personal knowledge base.

Privacy First

All data stays on your machine. Memory never sends your personal information to external servers. Processing happens locally using your own LLM, and you control exactly what gets ingested.

Local processing only
No cloud uploads
Encrypted storage
Selective ingestion

Email & Communication

Ingest your email history, chat messages, and communication patterns

Gmail / Google Workspace

Active

Import emails, drafts, and labels. Preserves thread context and attachments.

Data captured: Subject, body, sender, recipients, timestamps, labels, attachments (text extracted)

Microsoft Outlook / 365

Active

Connect to Outlook.com or Office 365. Supports personal and work accounts.

Data captured: Emails, calendar items, contacts, folder structure

Slack Workspaces

Active

Import channel messages, DMs, and threads. Requires workspace admin approval.

Data captured: Messages, threads, reactions, file mentions, channel context

Microsoft Teams

Beta

Import Teams chat and channel messages. Requires Microsoft Graph API permissions.

Data captured: Chat messages, channel posts, meeting chat

Discord

Beta

Import DMs and server messages from your Discord account.

Data captured: DMs, server messages, threads

WhatsApp Export

Active

Parse WhatsApp chat exports. Export chats manually from the app.

Data captured: Messages, timestamps, participants
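WhatsApp exports are plain text, roughly one message per line, but the date layout varies by platform and locale. A minimal parsing sketch for a common US-style Android layout (the regex and date format are assumptions; this is not Memory's actual parser):

```python
import re
from datetime import datetime

# Matches Android-style export lines: "12/31/23, 9:45 PM - Alice: Hello"
# (format varies by locale/platform; adjust the pattern and date format as needed)
LINE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2}), (\d{1,2}:\d{2}\s?[AP]M) - ([^:]+): (.*)$")

def parse_whatsapp_export(text: str) -> list[dict]:
    """Parse an exported chat into message records, merging multi-line messages."""
    messages = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            date, time, sender, body = m.groups()
            messages.append({
                "timestamp": datetime.strptime(f"{date} {time}", "%m/%d/%y %I:%M %p"),
                "sender": sender,
                "text": body,
            })
        elif messages:
            # A line with no header is the continuation of the previous message
            messages[-1]["text"] += "\n" + line
    return messages
```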

Documents & Notes

Import your documents, notes, and written content

Google Drive / Docs

Active

Import Google Docs, Sheets, Slides, and uploaded files from Drive.

Data captured: Document content, comments, revision history

Notion

Active

Import pages, databases, and wikis from Notion workspaces.

Data captured: Page content, database entries, properties, links

Obsidian Vaults

Active

Parse Obsidian markdown files with full support for wikilinks and metadata.

Data captured: Notes, frontmatter, links, tags, folder structure
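For illustration, a minimal sketch of the kind of parsing involved: pulling raw YAML frontmatter, `[[wikilinks]]`, and `#tags` out of a note (a simplified stand-in, not Memory's actual parser):

```python
import re

def parse_obsidian_note(text: str) -> dict:
    """Extract raw YAML frontmatter, [[wikilinks]], and #tags from a note."""
    frontmatter = None
    body = text
    if text.startswith("---\n"):
        end = text.find("\n---", 4)
        if end != -1:
            frontmatter = text[4:end]
            body = text[end + 4:]
    # [[Note]], [[Note|alias]], [[Note#heading]] all resolve to "Note"
    links = re.findall(r"\[\[([^\]|#]+)", body)
    # #tag or #nested/tag, but not the "#" of a markdown heading
    tags = re.findall(r"(?<!\S)#([\w/-]+)", body)
    return {"frontmatter": frontmatter, "links": links, "tags": tags, "body": body}
```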

Confluence

Active

Import pages and spaces from Atlassian Confluence.

Data captured: Page content, comments, attachments, space hierarchy

Local Files

Active

Scan local directories for markdown, text, PDF, and Office documents.

Supported: .md, .txt, .pdf, .docx, .xlsx, .pptx, .html
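As an illustration of what such a scan involves, a small sketch that walks a directory tree and keeps only files with supported extensions (not Memory's actual implementation):

```python
from pathlib import Path

# Extensions the local-files connector accepts
SUPPORTED = {".md", ".txt", ".pdf", ".docx", ".xlsx", ".pptx", ".html"}

def find_ingestible_files(root: str) -> list[Path]:
    """Recursively collect files whose extension is supported, case-insensitively."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```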

Zotero Library

Beta

Import research papers, annotations, and citation metadata from Zotero.

Data captured: Papers, annotations, notes, tags, collections

Code & Development

Import your coding history and development patterns

GitHub

Active

Import repositories, commits, issues, PRs, and discussions.

Data captured: Code, commit messages, PR discussions, issues, reviews

GitLab

Active

Import from GitLab.com or self-hosted instances.

Data captured: Repositories, merge requests, issues, snippets

Local Git Repos

Active

Scan local repositories for code patterns and commit history.

Data captured: Source files, commit history, branch structure
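For example, commit metadata can be read from any local repository with `git log`; a sketch (not Memory's actual connector) using a machine-readable pretty format with unit-separator delimiters:

```python
import subprocess

def recent_commits(repo_path: str, limit: int = 20) -> list[dict]:
    """Read recent commit metadata from a local repository via `git log`."""
    # %x1f emits a unit separator, which cannot appear in the other fields
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}",
         "--pretty=format:%H%x1f%an%x1f%aI%x1f%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    commits = []
    for line in out.splitlines():
        sha, author, date, subject = line.split("\x1f")
        commits.append({"sha": sha, "author": author, "date": date, "subject": subject})
    return commits
```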

Jira

Active

Import issues, comments, and project context from Jira.

Data captured: Issues, comments, transitions, attachments

Claude Code History

Active

Import your Claude Code CLI conversation history.

Data captured: Prompts, responses, code changes, file context

Linear

Beta

Import issues and project tracking from Linear.

Data captured: Issues, comments, project context

Browsing & Research

Import your browsing history and research patterns

Chrome History

Active

Import browsing history from Google Chrome. Close Chrome first; it holds a lock on its history database while running.

Data captured: URLs, page titles, visit timestamps, frequency
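Chrome keeps history in a SQLite database (the `History` file in the profile directory), whose `urls` table stores `last_visit_time` as microseconds since the WebKit epoch (1601-01-01). A reading sketch (not Memory's actual connector):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Chrome timestamps count microseconds from 1601-01-01 (the WebKit epoch)
WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def read_chrome_history(db_path: str, limit: int = 100) -> list[dict]:
    """Read visited URLs from a copy of Chrome's `History` SQLite database.

    Work on a copy (or close Chrome): the live database is locked.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT url, title, visit_count, last_visit_time "
            "FROM urls ORDER BY last_visit_time DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return [
        {
            "url": url,
            "title": title,
            "visits": count,
            "last_visit": WEBKIT_EPOCH + timedelta(microseconds=ts),
        }
        for url, title, count, ts in rows
    ]
```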

Safari History

Active

Import browsing history from Safari on macOS.

Data captured: URLs, titles, visit dates

Bookmarks

Active

Import bookmarks from Chrome, Safari, or Firefox.

Data captured: URLs, titles, folders, tags
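Chrome, for instance, stores bookmarks as a JSON tree (the `Bookmarks` file in the profile directory) under `roots`, with `folder` and `url` nodes. A sketch (not Memory's actual connector) that flattens it into folder/title/url records:

```python
import json

def flatten_chrome_bookmarks(path: str) -> list[dict]:
    """Flatten Chrome's `Bookmarks` JSON file into folder/title/url records."""
    with open(path, encoding="utf-8") as f:
        roots = json.load(f)["roots"]

    records = []

    def walk(node: dict, folder: str) -> None:
        if node.get("type") == "url":
            records.append({"folder": folder, "title": node["name"], "url": node["url"]})
        else:
            # Folder node: descend, extending the folder path
            sub = f"{folder}/{node.get('name', '')}" if folder else node.get("name", "")
            for child in node.get("children", []):
                walk(child, sub)

    for root in roots.values():
        walk(root, "")
    return records
```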

Pocket

Beta

Import saved articles and reading list from Pocket.

Data captured: Articles, tags, highlights

Social & Professional

Import your social media and professional network data

LinkedIn Export

Active

Parse your LinkedIn data export (request from LinkedIn settings).

Data captured: Posts, messages, connections, profile data

Twitter/X Archive

Active

Parse your Twitter/X data archive (request from settings).

Data captured: Tweets, DMs, likes, bookmarks

Reddit History

Beta

Import your Reddit posts, comments, and saved items.

Data captured: Posts, comments, saved items, subreddit context

Facebook Export

Beta

Parse your Facebook data export.

Data captured: Posts, messages, photo metadata

Connecting a Data Source

Use the Memory CLI or web interface to connect data sources:

# List available connectors
memory sources list

# Connect Gmail (opens OAuth flow)
memory sources connect gmail

# Connect local directory
memory sources connect local --path ~/Documents/notes

# Import Chrome history
memory sources connect chrome-history

# Check connection status
memory sources status

Custom Connectors

Build your own connector by implementing the DataSource interface:

from memory.ingestion.sources.base import DataSource, MemoryItem

class CustomSource(DataSource):
    name = "custom"
    description = "My custom data source"

    async def fetch(self) -> list[MemoryItem]:
        # Implement your ingestion logic
        items = []
        # ... fetch and transform data
        return items
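Since `MemoryItem`'s fields aren't shown above, here is a self-contained sketch with stand-in classes (the `source` and `content` fields, and the `JournalSource` connector, are illustrative assumptions, not the real API in `memory.ingestion.sources.base`):

```python
import asyncio
from dataclasses import dataclass

# Stand-ins for memory.ingestion.sources.base; field names are assumptions
@dataclass
class MemoryItem:
    source: str
    content: str

class DataSource:
    name: str
    description: str

    async def fetch(self) -> list["MemoryItem"]:
        raise NotImplementedError

class JournalSource(DataSource):
    name = "journal"
    description = "Plain-text journal entries"

    async def fetch(self) -> list[MemoryItem]:
        # A real connector would read files or call an API here
        return [MemoryItem(source=self.name, content="2024-01-01: started the year")]

items = asyncio.run(JournalSource().fetch())
```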

Need a new connector?

Open an issue on GitHub to request a new data source connector, or submit a PR with your implementation.