Explore our complete catalog of 25 open-source projects
Real-time desktop overlay copilot that watches your screen and listens to calls, delivering contextual answers with profile presets for interviews, sales, and presentations.
Proactive context-aware desktop assistant that captures your screen, stores data locally, and uses OpenAI/Doubao-compatible models to deliver summaries, insights, and todos.
AI video note assistant that transcribes Bilibili/YouTube/Douyin videos and generates structured Markdown notes with screenshots, jump links, and customizable styles.
Markdown-native project board for Git repos with CLI, web UI, MCP/AI integration, and offline-friendly Kanban/search built entirely on plain Markdown files.
Peer-to-peer, end-to-end encrypted file and folder transfer over QUIC with resumable downloads, no accounts, and cross-platform desktop builds.
Context-aware Windows overlay assistant that reads your screen and delivers translations, summaries, and answers via multi-LLM backends with a sleek keyboard-driven UI.
WebRTC P2P tool for files, text, and desktop sharing with end-to-end encryption, ACK reliability, Docker/single-binary deploys, and a responsive Next.js UI.
Ultra-lightweight Minecraft server for embedded and low-RAM systems, trading vanilla completeness for performance with configurable globals and cross-platform polyglot binaries.
Vulkan layer that brings Lossless Scaling frame generation to Linux/Steam Deck, with a GUI configurator, benchmarks, and per-game tuning.
Windows 10/11 debloat and optimization suite that manages apps, privacy, performance, and UI tweaks, plus ISO/autounattend creation and reusable config exports.
CPU-efficient offline translation server with plugin-compatible APIs, Docker deploys, and token-protected endpoints delivering low-latency multilingual translation without a GPU.
GPU-accelerated, non-destructive RAW image editor built with Rust/Tauri/React, offering fast previews, masking, and color grading with a performance-first workflow.
Local, privacy-first AI workspace that bundles chat, code, image generation, agents, and automation in one app using local llama.cpp models with tool-calling and MCP integration.
5MB Rust OpenAI-compatible server for local GGUF/SafeTensors models with hot swaps, auto-discovery, and optional GPU/MoE offloading for drop-in use across tools and SDKs.
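Because the server speaks the OpenAI API, existing tools and SDKs only need a base-URL change to use it. A minimal sketch of the request shape such a server accepts, with a hypothetical model name (the port and model identifier below are illustrative assumptions, not taken from the project's docs):

```python
import json

# An OpenAI-style chat completion request body. The "model" value is a
# hypothetical local model identifier; real names come from the server's
# auto-discovery of your GGUF/SafeTensors files.
payload = {
    "model": "local-gguf-model",
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
    "stream": False,
}

# Any OpenAI SDK or plain HTTP client can POST this JSON body to
# http://localhost:<port>/v1/chat/completions, following the OpenAI API shape.
body = json.dumps(payload)
```

Point an existing OpenAI client's base URL at the local server and it should work unchanged, since request and response formats follow the OpenAI specification.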