Your collaborative AI assistant to design, iterate, and scale full-stack applications for the web.
Based on the limited social mentions available, users view **v0** as a highly effective AI coding tool for rapid prototyping, with one user highlighting its ability to build a fully functional, clickable landing page in just 90 seconds. The tool is specifically recommended for "testing ideas" and appears to generate high-quality, production-ready code quickly. Users seem impressed with v0's speed and output quality, grouping it alongside other top AI development tools like Lovable as solutions that "actually work." However, the mentions don't provide insight into pricing sentiment or detailed user complaints, suggesting more comprehensive reviews would be needed for a complete assessment.
- Mentions (30d): 2
- Reviews: 0
- Platforms: 6
- Sentiment: 0% (0 positive)
Features
I wasted $500 testing AI coding tools so you don't have to 💸 Here's what actually works:

🧪 Testing ideas? → V0 or Lovable
Built a landing page in 90 seconds. Fully clickable, looked real. Code's messy but perfect for validation.

🏗️ Shipping real apps? → Bolt
Full dev environment in your browser. I built a document uploader with front end + back end + database in one afternoon.

💻 Coding with AI? → Cursor or Windsurf
Cursor = stable, used by Google engineers
Windsurf = faster, newer, more aggressive
Both are insane.

📚 Learning from scratch? → Replit
Best coding teacher I've found. Explains errors, walks you through fixes, teaches as you build.

Here's what 500+ hours taught me: The tool doesn't matter if you're using it for the wrong stage. Testing ≠ Building ≠ Coding ≠ Learning. Stop comparing features. Match your goal first.

Drop what you're building 👇 I'll tell you exactly which tool to use. Save this. You'll need it.

#AI #AITools #TechTok #ChatGPT #Coding
Pricing found: $0/month, $5, $30/user, $30, $2
ClaudeGUI: File tree + Monaco + xterm + live preview, all streaming from Claude CLI
Hey all — I've been living inside `claude` in the terminal for months, and kept wishing I could see files, the editor, the terminal, and a live preview of whatever Claude is building, all at once. So I built it.

**ClaudeGUI** is an unofficial, open-source web IDE that wraps the official Claude Code CLI (`@anthropic-ai/claude-agent-sdk`). Not affiliated with Anthropic — just a community project for people who already pay for Claude Pro/Max and want a real GUI on top of it.

**What's in the 4 panels**

- 📁 File explorer (react-arborist, virtualized, git status)
- 📝 Monaco editor (100+ languages, multi-tab, AI-diff accept/reject per hunk)
- 💻 xterm.js terminal (WebGL, multi-session, node-pty backend)
- 👁 Multi-format live preview — HTML, PDF, Markdown (GFM + LaTeX), images, and reveal.js presentations

**The part I'm most excited about**

- **Live HTML streaming preview.** The moment Claude opens a ```html``` block or writes a `.html` file, the preview panel starts rendering it *while Claude is still typing*. Partial render → full render on completion. Feels like watching a website materialize.
- **Conversational slide editing.** Ask Claude to "make slide 3 darker" — reveal.js reloads in place via `Reveal.sync()`, no iframe flash. Export to PPTX/PDF when done.
- **Permission GUI.** Claude tool-use requests pop up as an approval modal instead of a y/N prompt in the terminal. Dangerous commands get flagged. Rules sync with `.claude/settings.json`.
- **Runtime project hotswap.** Switch projects from the header — file tree, terminal cwd, and Claude session all follow.
- **Green phosphor CRT theme** 🟢 because why not.
**Stack**: Next.js 14 + custom Node server, TypeScript strict, Zustand, Tailwind + shadcn/ui, `ws` (not socket.io), chokidar, Tauri v2 for native `.dmg`/`.msi` installers.

**Install** (one-liner):

```bash
curl -fsSL https://github.com/neuralfoundry-coder/CLAUDE-GUI/tree/main/scripts/install/install.sh | bash
```

Or grab the `.dmg` / `.msi` from releases. Runs 100% locally, binds to 127.0.0.1 by default. Your Claude auth from `claude login` is auto-detected.

Status: v0.3 — 102/102 unit tests, 14/14 Playwright E2E passing. Still rough around the edges, MIT-ish license TBD, feedback very welcome.

Repo:

Happy to answer questions about the architecture — the HTML streaming extractor and the Claude SDK event plumbing were the fun parts.

submitted by /u/Motor_Ocelot_1547
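The streaming-preview idea above — start rendering an HTML block while the model is still emitting it — can be sketched roughly like this. `HtmlStreamExtractor`, its regex, and the feed protocol are hypothetical stand-ins for illustration, not ClaudeGUI's actual extractor:

```python
import re

class HtmlStreamExtractor:
    """Incrementally pull the body of a fenced ```html block out of a token
    stream, so a preview pane can render partial markup as it arrives.
    Illustrative sketch only; the real extractor is more involved."""

    FENCE_OPEN = re.compile(r"```html\s*\n")

    def __init__(self):
        self.buffer = ""
        self.in_block = False
        self.html = ""

    def feed(self, chunk: str) -> str:
        """Feed one streamed chunk; return the partial HTML seen so far."""
        self.buffer += chunk
        if not self.in_block:
            m = self.FENCE_OPEN.search(self.buffer)
            if m:
                # Fence opened: everything after it is candidate HTML.
                self.in_block = True
                self.buffer = self.buffer[m.end():]
        if self.in_block:
            end = self.buffer.find("```")  # closing fence, if it has arrived
            self.html = self.buffer[:end] if end != -1 else self.buffer
        return self.html
```

Each `feed` call returns the best-known HTML so far, which is what lets a preview do "partial render → full render on completion."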
My First Claude - A gateway that tracks what your agent does
Cortex is my first project, built almost entirely with Claude Code (Opus 4.6). Been working on it for a while, constantly evolving. I run Claude Code as my primary builder across multi-step tasks and kept finding it would skip steps, produce stubs, or report things as done that weren't. AI builds a lot of stuff that just breaks, and there's no built-in way to enforce a workflow or verify what actually happened.

So I built Cortex — a local MCP gateway that sits between your agents and tracks what they are doing. It enforces a task lifecycle where agents have to claim work, report progress, submit results, and get reviewed by another agent before anything counts as done. It can't close off the task unless an external approval is made.

How Claude Code was used:

- Claude Code (Opus) is the primary builder — wrote most of the gateway, MCP tools, dashboard, and task system
- Claude Code also runs as an agent through Cortex with PreToolUse hooks for enforcement
- I also run Codex as a code reviewer and two Hermes/GPT agents for research, all routing through the same gateway on a Linux system

What it includes:

- Gateway on port 4840 with 62 MCP tools
- Task lifecycle: claim → progress → submit → review → approve/reject
- Inter-agent bridge messaging
- Live dashboard showing tasks, agent activity, and costs
- Hard gates via Claude Code hooks — agents can't write without an active project
- Works with Claude Code, Codex, Hermes, or any MCP-compatible runtime

Tech: Bun, SQLite, React dashboard, MCP stdio transport

Still early (v0.1) — there is still plenty to be done around it; heavily working on it in my spare time. Looking for feedback from anyone running multi-agent setups or wanting better visibility into what Claude Code is doing.

Free to try and open source (AGPL-3.0): https://github.com/MrPancakex/Cortex — if anyone has feedback or wants to tear it apart, go for it.

submitted by /u/MrPancakex
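The lifecycle Cortex enforces (claim → progress → submit → review → approve/reject, with closure requiring an external reviewer) can be sketched as a small state machine. The `Task` class, state names, and transition table below are illustrative stand-ins, not Cortex's API:

```python
# Allowed transitions for a minimal claim/submit/review lifecycle.
ALLOWED = {
    "open": {"claimed"},
    "claimed": {"in_progress"},
    "in_progress": {"in_progress", "submitted"},  # progress reports repeat
    "submitted": {"in_review"},
    "in_review": {"approved", "rejected"},
    "rejected": {"claimed"},  # rejected work goes back to a builder
}

class Task:
    def __init__(self):
        self.state = "open"

    def transition(self, new_state: str, by_reviewer: bool = False) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        # The key rule from the post: only an external reviewer can close the loop.
        if new_state in {"approved", "rejected"} and not by_reviewer:
            raise PermissionError("approval must come from a reviewing agent")
        self.state = new_state
```

The gate on `by_reviewer` is what makes "it can't close off the task unless an external approval is made" enforceable rather than advisory.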
Couldn't find a good exercise API, so my workout app's data layer became its own thing
Building a workout tracking app on the side (Tally), I kept hitting the same wall: where do you actually get decent exercise data? The options are rough. free-exercise-db has ~800 exercises but the schema is thin and it's gym-only. ExerciseDB on RapidAPI has GIFs and not much else. API Ninjas gives you numbers but no search keywords, no form cues, no safety notes.

So I built my own library. Took about four months of evenings, and most of that wasn't code. It was cleaning data, writing form cues, and arguing with myself about how to model "a yoga pose has no rep range." At some point the library got more interesting than the app it was sitting inside, so I pulled it out: exerciseapi.dev

It's 2,198 exercises across 12 categories. Not just barbell stuff. Yoga, PT, mobility, pilates, calisthenics, plyometrics. Each one has search keywords, form cues, safety notes, anatomical muscle mapping, and a few variations.

The thing I most want a reality check on is the onboarding. It's one copy-paste prompt. You drop it into Claude Code or Lovable or v0, it pulls the docs via an llms.txt file, figures out your framework, and wires up search, a detail view, and a card component on its own. No reading docs for an hour first. I can't tell yet if "API designed for AI coding tools" is a real wedge or just a cute framing of normal good docs. If you have an instinct either way, I'd love to hear it.

Used Claude Code extensively and Claude.ai for brainstorming/prototyping. Tech stack for the curious: Workers + Hono on the API, Postgres with tsvector + pg_trgm for search (so "benchpress" still finds "Bench Press"), Next.js on Vercel for the dashboard, Supabase, Upstash for rate limiting.

Free tier is 100 req per day. Paid starts at $5/mo. Supabase + Upstash + the domain aren't free and I'd rather charge five bucks than stick ads on a docs site. Three paying users so far, all friends, who keep finding things I missed :)

submitted by /u/dawnpawtrol1
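The pg_trgm trick mentioned above (so "benchpress" still finds "Bench Press") works by comparing sets of three-character substrings rather than whole words. A minimal plain-Python sketch of the idea, assuming pg_trgm's convention of padding each word with two leading spaces and one trailing space:

```python
def trigrams(s: str) -> set:
    """pg_trgm-style trigrams: lowercase, pad each word, take 3-char windows."""
    grams = set()
    for word in s.lower().split():
        padded = f"  {word} "
        grams.update(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two trigram sets."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```

Because "benchpress" and "bench press" share most of their trigrams, their similarity stays high even though no exact token matches, which is exactly why trigram search survives typos and missing spaces.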
Experimenting with a DSL for LLM-based code generation: .hvibe, a dual-pipeline approach (direct or IR-based execution)
Hi! I've been experimenting with a DSL called .hvibe for describing interactive systems (e.g. games) using structured natural language constraints, where you define:

- Game logic in plain language (physics, collisions, win/lose conditions)
- Hard constraints (MUST / MUST NEVER)
- Structured specs (features, tests, dependencies)

There are two possible layers:

- .hvibe: declarative spec (rules, logic, tests, dependencies)
- .hvibe.plus: LLM-driven compilation layer that transforms the spec into JS-like executable code while preserving intent as comments

For now you get a single self-contained artifact (e.g. an HTML game). Also, you can include a .lock file to freeze parts of the spec, and the .hvibe file can embed test constraints that are enforced during generation.

There are two main flows. The first is direct: spec + prompt + .hvibe => LLM => executable. The second is a two-step IR: spec + prompt + .hvibe => LLM => IR (.hvibe.plus) => LLM => executable. It introduces an intermediate representation to improve constraint stability and reduce interpretation drift during generation.

What's actually different here (compared to typical DSLs, prompt systems, or spec-to-code pipelines) is that .hvibe tries to unify 4 layers that are usually separate:

- Spec (what the system should do)
- Code structure (how it is organized)
- Tests (how behavior is validated)
- Constraints (what must never happen)

Instead of treating these as external or separate systems, .hvibe merges them into a single declarative representation where:

- tests are embedded inside the spec itself
- constraints are treated as executable intent (not comments or external validation)
- dependencies are explicitly declared as part of the same model
- logic + structure + verification are all part of one graph

Getting good results using Claude and its main competitors.
A project example is available here, including all files up to the final build: https://github.com/Th6uD1nk/HiVibe-AI-DSL/tree/main/versions/v0.2.1 (see jumper example)

Curious if similar systems combining those approaches exist or are being used (LLM-native DSLs, AI compiler architectures, intermediate representations for LLM systems).

submitted by /u/mcidclan
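To make the MUST / MUST NEVER layer concrete, here is a hypothetical sketch of pulling hard constraints out of a spec. The real .hvibe grammar is not shown in the post, so `parse_constraints` and the one-constraint-per-line format are assumptions for illustration only:

```python
import re

# Assumed line format: a constraint starts with MUST or MUST NEVER.
CONSTRAINT = re.compile(r"^\s*(MUST NEVER|MUST)\s+(.*)$", re.MULTILINE)

def parse_constraints(spec: str) -> dict:
    """Split a spec's hard constraints into positive and negative lists."""
    out = {"must": [], "must_never": []}
    for kind, text in CONSTRAINT.findall(spec):
        key = "must_never" if kind == "MUST NEVER" else "must"
        out[key].append(text.strip())
    return out
```

A generation pipeline could then feed `must` as goals and `must_never` as test oracles, which matches the post's framing of constraints as executable intent rather than comments.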
engram v0.2: Claude Code now indexes your ~/.claude/skills/ directory into a query-able graph + warns you about past mistakes before re-makin
Short v0.2 post for anyone running Claude Code as a daily driver. v0.1 shipped last week as a persistent code knowledge graph (3-11x token savings on navigation queries). v0.2 closes three more gaps that have been bleeding my context budget:

1. Skills awareness. If you've built up a ~/.claude/skills/ directory, engram can now index every SKILL.md into the graph as concept nodes. Trigger phrases from the description field become separate keyword concept nodes, linked via a new triggered_by edge. When Claude Code queries the graph for "landing page copy", BFS naturally walks the edge to your copywriting skill — no new query code needed, just reusing the traversal that was already there. Numbers on my actual ~/.claude/skills: 140 skills + 2,690 keyword concept nodes indexed in 27ms. The one SKILL.md without YAML frontmatter (reddit-api-poster) gets parsed from its # heading as a fallback and flagged as an anomaly. Opt-in via --with-skills. Default is OFF so users without a skills directory see zero behavior change.

2. Task-aware CLAUDE.md sections. engram gen --task bug-fix writes a completely different CLAUDE.md section than --task feature. Bug-fix mode leads with 🔥 hot files + ⚠️ past mistakes and drops the decisions section entirely. Feature mode leads with god nodes + decisions + dependencies. Refactor mode leads with the full dependency graph + patterns. The four preset views are rows in a data table — you can add your own view without editing any code.

3. Regret buffer. The session miner already extracted bug: / fix: lines from your CLAUDE.md into mistake nodes in v0.1; they were just buried in query results. v0.2 gives them a 2.5x score boost in the query layer and surfaces matching mistakes at the TOP of output in a ⚠️ PAST MISTAKES warning block. New engram mistakes CLI command + list_mistakes MCP tool (6 tools total now). The regex requires explicit colon-delimited format (bug: X, fix: Y), so prose docs don't false-positive.
I pinned the engram README as a frozen regression test — 0 garbage mistakes extracted.

Bug fixes that might affect you if you're using v0.1: writeToFile previously could silently corrupt CLAUDE.md files with unbalanced engram markers (e.g. two and one from a copy-paste error). v0.2 now throws a descriptive error instead of losing data. If you have a CLAUDE.md with manually-edited markers, v0.2 will tell you. Atomic init lockfile so two concurrent engram init calls can't silently race the graph. UTF-16 surrogate-safe truncation so emoji in mistake labels don't corrupt the MCP JSON response.

Install:

```bash
npm install -g engramx@0.2.0
cd ~/your-project
engram init --with-skills   # opt-in skills indexing
engram gen --task bug-fix   # task-aware CLAUDE.md generation
engram mistakes             # list known mistakes
```

MCP setup (for Claude Code's .claude.json or claude_desktop_config.json):

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram-serve",
      "args": ["/path/to/your/project"]
    }
  }
}
```

GitHub: https://github.com/NickCirv/engram
Changelog with every commit + reviewer finding: https://github.com/NickCirv/engram/blob/main/CHANGELOG.md

132 tests, Apache 2.0, zero native deps, zero cloud, zero telemetry. Feedback welcome.

Heads up: there's a different project also called "engram" on this sub (single post, low traction). Mine is engramx on npm / NickCirv/engram on GitHub — the one with the knowledge graph + skills-miner + MCP s

submitted by /u/SearchFlashy9801
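The colon-delimited extraction engram describes (accept `bug: X` / `fix: Y`, never prose that merely mentions a bug) can be sketched with one regex. The exact pattern engram uses is not published in the post, so this regex is an assumption that captures the stated behavior:

```python
import re

# Assumed pattern: a mistake line must START with "bug:" or "fix:",
# so prose like "a bug in the parser" never false-positives.
MISTAKE = re.compile(r"^(bug|fix):\s+(.+)$", re.IGNORECASE | re.MULTILINE)

def extract_mistakes(claude_md: str) -> list:
    """Return (kind, text) pairs for every explicit bug:/fix: line."""
    return [(kind.lower(), text.strip()) for kind, text in MISTAKE.findall(claude_md)]
```

Anchoring at line start with an explicit colon is the whole trick: it trades a little recall (unprefixed mistakes are skipped) for near-zero noise, which matches the "0 garbage mistakes extracted" regression goal.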
I JUST want Claude to check my email and tell me a summary over speech to text and it can’t do it.
Very simple. Have enterprise. Make a Claude project. Connect Gmail and calendar. Provide specific instructions. Claude on iPhone reacts completely differently if you speech-to-text compared to typing the exact same input.

From Sonnet 4.6: "That's a real problem and I understand the frustration. When you type a calendar request, the system properly loads the Google Calendar tools via tool_search first, then calls gcal_list_events with your devb@skl.vc calendar. But when the same request comes through speech-to-text, it's not triggering that same initialization sequence — it's falling back to the generic event_search_v0 tool, which only sees your iPhone calendar."

submitted by /u/Dbillz78
Made Claude Code actually understand my codebase — local MCP server with symbol graph + memory tied to git
I've been frustrated that Claude Code either doesn't know what's in my repo (so every session starts with re-explaining the architecture) or guesses wrong about which files matter. Cursor's @codebase kind of solves it but requires uploading to their cloud, which is a no-go for some of my client work.

So I built Sverklo — a local-first MCP server that gives Claude Code (and Cursor, Windsurf, Antigravity) the same mental model of my repo that a senior engineer has. Runs entirely on my laptop. MIT licensed. No API keys. No cloud.

What it actually does in a real session:

Before sverklo: I ask Claude Code "where is auth handled?" It guesses based on file names, opens the wrong file, reads 500 lines, guesses again, eventually finds it.

After sverklo: Same question. Claude Code calls sverklo_search("authentication flow") and gets the top 5 files ranked by PageRank — middleware, JWT verifier, session store, login route, logout route. In one tool call. With file paths and line numbers.

Refactor scenario: I want to rename a method on a billing class. Claude Code calls sverklo_impact("BillingAccount.charge") and gets the 14 real callers ranked by depth, across the whole codebase. No grep noise from recharge, discharge, or a Battery.charge test fixture. The rename becomes mechanical.

PR review scenario: I paste a git diff. Claude Code calls sverklo_review_diff and gets a risk-scored review order — highest-impact files first, production files with no test changes flagged, structural warnings for patterns like "new call inside a stream pipeline with no try-catch" (the kind of latent outage grep can't catch).

Memory scenario: I tell Claude Code "we decided to use Postgres advisory locks instead of Redis for cross-worker mutexes." It calls sverklo_remember and the decision is saved against the current git SHA.
Three weeks later when I ask "wait, what did we decide about mutexes?", Claude Code calls sverklo_recall and gets the decision back — including a flag if the relevant code has moved since.

The 20 tools in one MCP server, grouped by job:

- Search: sverklo_search, sverklo_overview, sverklo_lookup, sverklo_context, sverklo_ast_grep
- Refactor safety: sverklo_impact, sverklo_refs, sverklo_deps, sverklo_audit
- Diff-aware review: sverklo_review_diff, sverklo_test_map, sverklo_diff_search
- Memory (bi-temporal, tied to git SHAs): sverklo_remember, sverklo_recall, sverklo_memories, sverklo_forget, sverklo_promote, sverklo_demote
- Index health: sverklo_status, sverklo_wakeup

All 20 run locally. Zero cloud calls after the one-time 90MB embedding model download on first run.

Install (30 seconds):

```bash
npm install -g sverklo
cd your-project && sverklo init
```

sverklo init auto-detects Claude Code / Cursor / Windsurf / Google Antigravity, writes the right MCP config file for each, appends sverklo instructions to your CLAUDE.md, and runs sverklo doctor to verify the setup. Safe to re-run on existing projects.

Before you install — a few honest things:

- Not magic. The README has a "when to use grep instead" section. Small repos (<50 files), exact string lookups, and single-file edits are all cases where the built-in tools are fine or better.
- Privacy is a side effect, not the pitch. The pitch is the mental model. Local-first happens to come with it because running a symbol graph on your laptop is trivially cheap.
- It's v0.2.16. Pre-1.0. I ran a structured 3-session dogfood protocol on my own tool before shipping this version — the log is public (DOGFOOD.md in the repo), including the four bugs I found in my own tool and fixed. I triage issues within hours during launch week.
Links:

- Repo: github.com/sverklo/sverklo
- Playground (see real tool output on gin/nestjs/react without installing): sverklo.com/playground
- Benchmarks (reproducible with npm run bench): BENCHMARKS.md in the repo
- Dogfood log: DOGFOOD.md in the repo

If you try it, tell me what breaks. I'll respond within hours and ship fixes fast.

submitted by /u/Parking-Geologist586
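The PageRank ranking sverklo uses to order search hits can be illustrated with a plain power-iteration sketch. The graph, file names, and parameters below are invented for the example; sverklo's real index and scoring are not shown in the post:

```python
def pagerank(graph: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Power iteration over an edge dict {node: [out-neighbors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if not outs:  # dangling node: spread its mass evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank
```

On a call/import graph, heavily depended-on files (the JWT verifier, the session store) accumulate rank from everything that reaches them, which is why they float to the top of an "authentication flow" query instead of whichever file happens to have "auth" in its name.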
I shipped three Claude Code integrations for my smart TV CLI (CLI, MCP, Skill) and let daily use pick the winner.
I got tired of picking up the remote to start an episode of a show I already knew the name of. So I built stv — a Python CLI that lets Claude Code drive my LG, Samsung, Roku, and Android TVs directly. Say "play Frieren s2e8" and Netflix opens on the TV in about 3 seconds.

Full disclosure first: most of stv was written with Claude Code itself. I review and merge, but the keystrokes aren't mine. Meta-ironic given that the whole point of stv is to let Claude control your TV.

The thing I actually want to talk about in this post is that stv integrates with Claude Code three different ways, and I wasn't sure which would win — so I shipped all three and let my own daily use decide.

3 integration paths with Claude Code

1. CLI (dead simple — Claude already shells out)

```bash
pip install stv
stv setup
```

Claude Code runs shell commands by default, so you can just tell it: "Run stv play netflix Wednesday s1e7" ...and it works. No config, no MCP setup.

2. MCP server (21 tools, structured)

```json
{ "mcpServers": { "tv": { "command": "uvx", "args": ["stv"] } } }
```

21 structured tools with typed schemas. Tools are intentionally chunky so the model makes fewer round-trips per conversation turn.

3. Claude Code Skill (drop-in, zero config)

```bash
clawhub install smartest-tv
```

The Skill auto-triggers on phrases like "play", "good night", "next episode" — so Claude knows when to invoke stv without being told.

A typical evening for me:

me: play frieren s2e8 on the living room tv
claude: [runs tv_play_content] Playing now.
me: make it a bit quieter
claude: [runs tv_volume(value=18)] Volume 18.
me: good night
claude: [runs tv_sync(action="off")] All 3 TVs off.

Caveats, up front:

- Samsung 2024+ models may block third-party control by design. Only confirmed on my Q60T.
- Spotify is web-search based and flaky on niche tracks.
- HBO Max / Disney+ unsupported.
- The CLI path is still 90% of what I use.
The Skill is the one I want to use the most, but I haven't gotten the trigger phrases tight enough yet — suggestions very welcome.

Install:

```bash
pip install stv
stv setup
```

GitHub: https://github.com/Hybirdss/smartest-tv
PyPI: https://pypi.org/project/stv/ (v0.10.0, 252 tests, MIT)

Happy to answer questions about which integration path works best, MCP design tradeoffs, the Netflix resolver, or the Skill triggering heuristics.

submitted by /u/PatientEither6390
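Trigger-phrase matching of the kind the Skill relies on can be sketched in a few lines. Only the phrases come from the post; `should_invoke_stv` and the word-boundary approach are hypothetical, and tightening exactly this kind of matcher is what the author is asking for suggestions on:

```python
import re

# Phrases from the post's Skill description.
TRIGGERS = ("play", "good night", "next episode")
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in TRIGGERS) + r")\b",
    re.IGNORECASE,
)

def should_invoke_stv(utterance: str) -> bool:
    """Word-boundary matching avoids firing on e.g. 'display settings'."""
    return bool(PATTERN.search(utterance))
```

Even this tiny version shows the core trade-off: substring matching over-triggers ("play" inside "display"), while word boundaries under-trigger on phrasings like "replay that" — which is likely why the author says the phrases aren't tight enough yet.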
RTFM v0.4 — MCP retrieval server that cuts vault context by 90% (Obsidian + Claude Code)
Problem: Karpathy-style LLM wikis inject everything into context. On a 1,700-file vault, that's your entire quota in minutes. I built an MCP server that does retrieval instead of scanning.

**How it works with Claude Code:** The agent calls `rtfm_search("formal grammars")` → gets 5 results with scores and file paths (~300 tokens). Then `rtfm_expand("source-slug")` to read only the relevant section. Progressive disclosure: context grows only by what's actually useful.

**New in v0.4 — Obsidian vault integration:** `rtfm vault` indexes your vault in one command:

- Auto corpus mapping (folders → searchable corpora)
- [[wikilink]] resolution → knowledge graph with centrality ranking
- Auto-generated _rtfm/ navigation files (readable in Obsidian)
- 10 parsers: Markdown, Python AST, LaTeX, PDF, YAML, JSON, Shell...
- Extensible: add any format in ~50 lines of Python

Measured on real repos: -51% cost, -61% tokens, -16% duration vs standard grep-based navigation.

`pip install rtfm-ai[mcp]`

https://github.com/roomi-fields/rtfm — MIT licensed. Works with Claude Code, Cursor, Codex — any MCP client.

submitted by /u/Plenty-Ad-7699
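RTFM's progressive-disclosure flow (a cheap search call first, then expand only the hit you need) can be mimicked in miniature. The corpus, slugs, snippet size, and matching below are made up for illustration; only the two-step search/expand shape comes from the post:

```python
# Toy two-document corpus standing in for a vault index.
CORPUS = {
    "chomsky-hierarchy": "Formal grammars fall into four classes... " * 50,
    "regex-engines": "Backtracking vs automata-based matching... " * 50,
}

def rtfm_search(query: str, k: int = 5) -> list:
    """Step 1: return slugs plus a tiny snippet -- cheap in tokens."""
    hits = [
        slug for slug, text in CORPUS.items()
        if any(word in text.lower() for word in query.lower().split())
    ]
    return [{"slug": s, "snippet": CORPUS[s][:80]} for s in hits[:k]]

def rtfm_expand(slug: str) -> str:
    """Step 2: only now pay for the full section."""
    return CORPUS[slug]
```

The budget win comes from the asymmetry: every query pays for a handful of snippets, and the full-section cost is only incurred for the one slug the agent actually chooses to expand.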
I built a single pip install that gives Claude 18 MCP servers at once — HN, Wikipedia, Reddit, Weather, GitHub, NASA, and more
I got tired of hunting down, configuring, and registering individual MCP servers one by one, so I built mcp-everything — a single Python package that ships 18 ready-to-use MCP servers for Claude Desktop.

One install:

```bash
pip install mcp-everything
mcp-everything setup
```

Restart Claude Desktop. Done.

What's included (18 servers):

- Free (no key needed): Hacker News, Wikipedia, ArXiv, Weather, Reddit, Pokédex, Countries, Books, Recipes, Dictionary, Translate, Public Holidays, Crypto prices
- API key required: GitHub, NewsAPI, NASA, TMDB (Movies), YouTube

v0.4.0 just shipped with:

- mcp-everything serve — a local web dashboard at localhost:7337 to toggle servers, enter API keys, and run live tests from your browser
- Secure .env key storage (keys no longer live inside your Claude config)
- export / import to share configs with teammates
- GitHub Actions CI across Python 3.10–3.12

GitHub: https://github.com/Wellix260/mcp-everything
PyPI: https://pypi.org/project/mcp-everything/

Would love feedback — and if there are MCP servers you wish existed, drop them below and I'll add them.

submitted by /u/Stock_Animal
indxr v0.4.0 - Teach your agents to learn from their mistakes.
I had been building indxr as a "fast codebase indexer for AI agents." Tree-sitter parsing, 27 languages, structural diffs, token budgets, the whole deal. And it worked. Agents could understand what was in your codebase faster. But they still couldn't remember why things were the way they were.

Karpathy's tweet about LLM knowledge bases prompted me to take indxr in a different direction. One of the main issues I faced, like many of you, while working with agents was them making the same mistake over and over again, because of not having persistent memory across sessions. Every new conversation starts from zero. The agent reads the code, builds up understanding, maybe fails a few times, eventually figures it out — and then all of that knowledge evaporates.

indxr is now a codebase knowledge wiki backed by a structural index. The structural index is still there — it's the foundation. Tree-sitter parses your code, extracts declarations, relationships, and complexity metrics. But the index now serves a bigger purpose: it's the scaffolding that agents use to build and maintain a persistent knowledge wiki about your codebase.

When an agent connects to the indxr MCP server, it has access to wiki_generate. The tool doesn't write the wiki itself; it returns the codebase's structural context, and the agent decides which pages to create. Architecture overviews, module responsibilities, and design decisions. The agent plans the wiki, then calls wiki_contribute for each page. indxr provides the structural intelligence; the agent does the thinking and writing.

But generating docs isn't new. The interesting part is what happens next. I added a tool called wiki_record_failure. When an agent tries to fix a bug and fails, it records the attempt:

- Symptom — what it observed
- Attempted fix — what it tried
- Diagnosis — why it didn't work
- Actual fix — what eventually worked

These failure patterns get stored in the wiki, linked to the relevant module pages.
The next agent that touches that code calls wiki_search first and finds: "someone already tried X and it didn't work because of Y."

This is the loop:

- Search — agent queries the wiki before diving into the source
- Learn — after synthesising insights from multiple pages, wiki_compound persists the knowledge back
- Fail — when a fix doesn't work, wiki_record_failure captures the why
- Avoid — future agents see those failures and skip the dead ends

Every session makes the wiki smarter. Failed attempts become documented knowledge. Synthesised insights get compounded back. The wiki grows from agent interactions, not just from code changes.

The wiki doesn't go stale. Run indxr serve --watch --wiki-auto-update and when source files change, indxr uses its structural diff engine to identify exactly which wiki pages are affected — then surgically updates only those pages.

Check out the project here: https://github.com/bahdotsh/indxr

Would love to hear your feedback!

submitted by /u/New-Blacksmith8524
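The record/search half of the failure loop might look like this in miniature. The field names mirror the post's Symptom / Attempted fix / Diagnosis / Actual fix structure, while the in-memory storage and substring matching are simplified stand-ins, not indxr's implementation:

```python
FAILURES = []  # stand-in for wiki pages linked to modules

def wiki_record_failure(symptom, attempted_fix, diagnosis, actual_fix, module):
    """Capture a failed attempt so later agents can skip the dead end."""
    FAILURES.append({
        "symptom": symptom,
        "attempted_fix": attempted_fix,
        "diagnosis": diagnosis,
        "actual_fix": actual_fix,
        "module": module,
    })

def wiki_search(query: str) -> list:
    """Crude lookup: match the query against symptoms and module names."""
    q = query.lower()
    return [
        f for f in FAILURES
        if q in f["symptom"].lower() or q in f["module"].lower()
    ]
```

Even this toy version shows the payoff: the second agent's first tool call returns "attempted X, failed because Y, actual fix was Z" before it reads a line of source.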
CodeGraphContext - An MCP server that converts your codebase into a graph database
CodeGraphContext — the go-to solution for graph-code indexing 🎉🎉 It's an MCP server that understands a codebase as a graph, not chunks of text. It has now grown way beyond my expectations — both technically and in adoption.

Where it is now:

- v0.4.0 released
- ~3k GitHub stars, 500+ forks
- 50k+ downloads
- 75+ contributors, ~250-member community
- Used and praised by many devs building MCP tooling, agents, and IDE workflows
- Expanded to 15 different coding languages

What it actually does: CodeGraphContext indexes a repo into a repository-scoped, symbol-level graph — files, functions, classes, calls, imports, inheritance — and serves precise, relationship-aware context to AI tools via MCP. That means:

- Fast "who calls what", "who inherits what", etc. queries
- Minimal context (no token spam)
- Real-time updates as code changes
- Graph storage stays in MBs, not GBs

It's infrastructure for code understanding, not just 'grep' search.

Ecosystem adoption: it's now listed or used across PulseMCP, MCPMarket, MCPHunt, Awesome MCP Servers, Glama, Skywork, Playbooks, Stacker News, and many more.

- Python package → https://pypi.org/project/codegraphcontext/
- Website + cookbook → https://codegraphcontext.vercel.app/
- GitHub repo → https://github.com/CodeGraphContext/CodeGraphContext
- Docs → https://codegraphcontext.github.io/
- Discord server → https://discord.gg/dR4QY32uYQ

This isn't a VS Code trick or a RAG wrapper — it's meant to sit between large repositories and humans/AI systems as shared infrastructure. Happy to hear feedback, skepticism, comparisons, or ideas from folks building MCP servers or dev tooling.

Original post (for context): https://www.reddit.com/r/mcp/comments/1o22gc5/i_built_codegraphcontext_an_mcp_server_that/

submitted by /u/Desperate-Ad-9679
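A toy version of the "who calls what" queries such a graph answers, including transitive callers, using an invented call graph; CodeGraphContext's real storage is a graph database with many more edge types, not a Python dict:

```python
from collections import deque

CALLS = {  # caller -> callees (illustrative edges only)
    "api.handle_request": ["auth.check", "db.query"],
    "auth.check": ["db.query"],
    "cron.cleanup": ["db.query"],
}

def callers_of(symbol: str) -> set:
    """All direct and transitive callers of `symbol` (reverse reachability)."""
    direct = {c for c, callees in CALLS.items() if symbol in callees}
    seen, queue = set(direct), deque(direct)
    while queue:
        cur = queue.popleft()
        for caller, callees in CALLS.items():
            if cur in callees and caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen
```

This is the query shape that text search can't express: grep finds strings that look like the symbol, while the graph walks actual edges, which is why results stay precise on large repos.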
Control Codex or any CLI App from Claude using NPCterm
NPCterm gives AI agents full terminal access, not only bash: the ability to spawn shells, run arbitrary commands, read screen output, send keystrokes, and interact with TUI applications (Claude/Codex/Gemini/Opencode/vim/btop...). Use with precautions — a terminal is an unrestricted execution environment.

Features:

- Full ANSI/VT100 terminal emulation with PTY spawning via portable-pty
- 15 MCP tools for complete terminal control over JSON-RPC stdio
- Process state detection — knows when a command is running, idle, waiting for input, or exited
- Event system — ring buffer of terminal events (CommandFinished, WaitingForInput, Bell, etc.)
- AI-friendly coordinate overlay for precise screen navigation
- Mouse, selection, and scroll support for interacting with TUI applications
- Multiple concurrent terminals with short 2-character IDs

https://github.com/alejandroqh/npcterm

submitted by /u/aq-39
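The process-state detection listed above (running / idle / waiting for input / exited) implies heuristics over the most recent screen output. The rules below are an illustrative guess at that kind of classifier, not NPCterm's actual logic:

```python
import re

# Heuristic patterns over the tail of the screen buffer (assumptions).
PROMPT = re.compile(r"[$#>]\s*$")                      # shell prompt visible
QUESTION = re.compile(r"\?\s*$|\[y/N\]\s*$", re.IGNORECASE)  # input prompt

def detect_state(screen_tail: str, process_alive: bool) -> str:
    """Classify the terminal's state from its last line and liveness."""
    if not process_alive:
        return "exited"
    if QUESTION.search(screen_tail):
        return "waiting_for_input"
    if PROMPT.search(screen_tail):
        return "idle"
    return "running"
```

An agent driving the terminal needs exactly this signal: keep polling while "running", answer when "waiting_for_input", and send the next command only once "idle".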
vibecop is now an mcp server. we also scanned 5 popular mcp servers and the results are rough
Quick update on vibecop (the AI code quality linter I've posted about before). v0.4.0 just shipped with three things worth sharing.

vibecop is now an MCP server. vibecop serve exposes 3 tools over MCP: vibecop_scan (scan a directory), vibecop_check (check one file), vibecop_explain (explain what a detector catches and why). One config block:

```json
{ "mcpServers": { "vibecop": { "command": "npx", "args": ["vibecop", "serve"] } } }
```

This extends vibecop from 7 agent tools (via vibecop init) to 10+ by adding Continue.dev, Amazon Q, Zed, and anything else that speaks MCP. Scored 100/100 on mcp-quality-gate compliance testing.

We scanned 5 popular MCP servers. MCP launched late 2024. Nearly every MCP server on GitHub was built with AI assistance. We pointed vibecop at 5 of the most popular ones:

| Repository | Stars | Key findings |
| --- | --- | --- |
| DesktopCommanderMCP | 5.8K | 18 unsafe shell exec calls (command injection), 137 god-functions |
| mcp-atlassian | 4.8K | 84 tests with zero assertions, 77 tests with hidden conditional assertions |
| Figma-Context-MCP | 14.2K | 16 god-functions, 4 missing error path tests |
| exa-mcp-server | 4.2K | handleRequest at 77 lines/complexity 25, registerWebSearchAdvancedTool at 198 lines/complexity 34 |
| notion-mcp-server | 4.2K | startServer at 260 lines, cyclomatic complexity 49; 9 files with excessive `any` |

The DesktopCommanderMCP one is concerning: 18 instances of execSync() or exec() with dynamic string arguments. This is a tool that runs shell commands on your machine. That's command injection surface area.

The Atlassian server has 84 test functions with zero assertions. They all pass. They prove nothing. Another 77 hide assertions behind if statements, so depending on runtime conditions, some assertions never execute.

The signal quality fix was the real engineering story. Our first scan of DesktopCommanderMCP returned 500+ findings. Sounds impressive until you check: 457 were "console.log left in production code." But it's a server. Servers log. That's 91% noise.
Same pattern across all 5 repos. The console.log detector was designed for frontend/app code. For servers and CLIs, it's the wrong signal. So we made detectors context-aware. vibecop now reads your package.json. If the project has a bin field (CLI tool or server), the console.log detector skips the entire project. We also fixed self-import detection and placeholder detection in fixture/example directories. Before: ~72% noise. After: 90%+ signal.

The finding density gap holds: established repos average 4.4 findings per 1,000 lines of code. Vibe-coded repos average 14.0 — 3.2x higher.

Other updates:

- 35 detectors now (up from 22)
- 540 tests, all passing
- Full docs site: https://bhvbhushan.github.io/vibecop/
- 48 files changed, 10,720 lines added in this release

```bash
npm install -g vibecop
vibecop scan .
vibecop serve   # MCP server mode
```

GitHub: https://github.com/bhvbhushan/vibecop

If you're using MCP servers, have you looked at the code quality of the ones you've installed? Or do you just trust them because they have stars?

submitted by /u/Awkward_Ad_9605
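The context-aware gate described above — skip the console.log detector when package.json declares a `bin` field — is simple to sketch. The function name and fallback behavior are hypothetical; only the bin-field rule comes from the post:

```python
import json

def should_run_console_log_detector(package_json_text: str) -> bool:
    """Return False for CLIs/servers (bin field present), where logging
    to the console is expected behavior rather than a finding."""
    try:
        pkg = json.loads(package_json_text)
    except json.JSONDecodeError:
        return True  # unreadable manifest: assumed fallback to scanning
    return "bin" not in pkg
```

This is the general shape of the ~72%-noise-to-90%+-signal fix: the detector itself is unchanged, but a cheap project-level classification decides whether its findings are meaningful at all.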
If you have just started using Codex CLI, codex-cli-best-practice is your ultimate guide
Repo: https://github.com/shanraisshan/codex-cli-best-practice

submitted by /u/shanraisshan
Yes, v0 offers a free tier. Pricing found: $0/month, $5, $30/user, $30, $2
Key features include: Sync with a repo, Integrate with apps, Deploy to Vercel, Edit with design mode, Start with templates, Create design systems, Agentic by default, Create from your phone.
Based on user reviews and social mentions, the most common pain points are token and LLM costs.
Based on 41 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Nat Friedman, Investor at AI Grant (3 mentions)