Mentions (30d): 0
Reviews: 0
Platforms: 3
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 300
Funding Stage: Series E
Total Funding: $696.1M
🤝 https://t.co/10deyzvHHP
supabase-micro a micro-python library that makes it easy to connect microcontrollers to Supabase https://t.co/u4QFkPtjBI
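The library linked above isn't shown here, so the sketch below doesn't use its API. Instead it builds a plain HTTP insert against Supabase's REST endpoint (`POST /rest/v1/<table>` with `apikey` and `Authorization` headers), which is the documented interface a microcontroller client would ultimately call. The project URL, key, and `readings` table are made-up placeholders.

```python
import json

def build_insert_request(project_url, anon_key, table, row):
    """Build the URL, headers, and JSON body for a Supabase REST insert."""
    url = f"{project_url}/rest/v1/{table}"
    headers = {
        "apikey": anon_key,
        "Authorization": f"Bearer {anon_key}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(row)

# On a MicroPython board you would send this with urequests
# (or requests on CPython), e.g.:
#   url, headers, body = build_insert_request(URL, KEY, "readings", {"temp_c": 21.5})
#   urequests.post(url, headers=headers, data=body)
```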
Custom OIDC Provider | Supabase Office Hour AMA #4 https://t.co/8sGUyE3CiQ
Learn more about Custom OIDC provider on our blog post: https://t.co/PCmQEqrK7C
We now support adding your own custom OIDC providers https://t.co/u2oCiFN5fS
We're running the State of Startups survey again. Last year 2,110 people told us what they're building, how they're building it, and what's breaking. This year we want to know how AI changed the answer to all three. Get a free t-shirt for completing it: https://t.co/fua01RFEDJ https://t.co/4osSipRxzN
You can now deploy Supabase edge functions with Terraform https://t.co/4Bqt99XYX6
Using AI to untangle 10,000 property titles in Latam, sharing our approach and wanting feedback
Hey. Long post, sorry in advance (yes, I used an AI tool to help lay this post out better). I've been working with a real estate company that just inherited a huge mess from another real estate company that went bankrupt. I've been helping them for the past few months to figure out a plan, and we finally have something that feels reasonably solid. Sharing here because I'd genuinely like feedback before we go deep into the build.

Context

A Brazilian real estate company accumulated ~10,000 property titles across 10+ municipalities over decades. They developed a bunch of subdivisions over the years and kept absorbing other real estate companies along the way, each bringing its own land portfolio. Half the titles sit under one legal entity, half under a related one. Nobody really knows what they have; the company was founded in the 60s. Decades of poor management left behind:

- Hundreds of unregistered "drawer contracts" (informal sales never filed with the registry)
- Duplicate sales of the same properties
- Buyers claiming they paid off their lots through third parties, with no receipts from the company itself
- Fraudulent contracts and forged powers of attorney
- Irregular occupations and invasions
- ~500 active lawsuits (adverse possession claims, compulsory adjudication, evictions, duplicate-sale disputes, 2 class action suits)
- Fragmented tax debt across multiple municipalities
- A large chunk of the physical document archive, currently held by police as part of an old investigation into the previous owners' practices

The company has tried to organize this before. It hasn't worked. The goal now is to get a real consolidated picture in 30-60 days. The team is 6 lawyers + 3 operators.

What we decided to do (and why)

Our first instinct was to build the whole infrastructure upfront: database, automation, the works. We pushed back on that because we don't actually know the shape of the problem yet. Building a pipeline before you understand your data is how you end up rebuilding it three times, right? So with Claude's help we built the following plan: a robust information aggregator, split into steps (does that make sense, or are we overcomplicating it?).

Step 1 - Physical scanning (should already be done during the insights phase)

Documents will be partially organized by municipality already. We have a document scanner with an ADF (automatic document feeder). The plan is to scan in batches by municipality, naming files with a simple convention: [municipality]_[document-type]_[sequence]

Step 2 - OCR

Run OCR through Google Document AI, Mistral OCR 3, AWS Textract, or some other tool that makes more sense. Question: has anyone run any of these specifically on degraded Latin American registry documents?

Step 3 - Discovery (before building infrastructure)

This is the decision we're most uncertain about. Instead of jumping straight to database setup, we're planning to feed the OCR output directly into AI tools with large context windows and ask open-ended questions first:

- Gemini 3.1 Pro (in NotebookLM or another interface) for broad batch analysis: "which lots appear linked to more than one buyer?", "flag contracts with incoherent dates", "identify clusters of suspicious names or activity", "help us see problems and solutions we aren't seeing"
- Claude Projects in parallel, for the same
- Anything else?

Step 4 - Data cleaning and standardization

Before anything goes into a database, the raw extracted data needs normalization:

- Municipality names written 10 different ways ("B. Vista", "Bela Vista de GO", "Bela V. Goiás") -> canonical form
- CPFs (Brazilian personal ID numbers) with and without punctuation -> standardized format
- Lot status described inconsistently -> fixed enum categories
- Buyer names with spelling variations -> fuzzy-matched to a single entity

Tools: Python + rapidfuzz for fuzzy matching, the Claude API for normalizing free-text fields into categories. Question: at 10,000 records with decades of inconsistency, is fuzzy matching + LLM normalization sufficient, or do we need a more rigorous entity-resolution approach (e.g. Dedupe.io)?

Step 5 - Database

Stack chosen: Supabase (PostgreSQL + pgvector) with NocoDB on top. Three options were evaluated:

- Airtable - easiest to start, but data is stored on US servers (an LGPD concern for CPFs and legal documents), limited API flexibility, per-seat pricing
- NocoDB alone - open source, self-hostable, free, but adds server maintenance overhead
- Supabase - full PostgreSQL + authentication + API + pgvector in one place, $25/month flat, developer-first

We chose Supabase as the backend because pgvector is essential for the RAG layer (Step 7) and we didn't want to manage two separate databases. NocoDB sits on top as the visual interface for lawyers and data-entry operators who need spreadsheet-like interaction without writing SQL. Each lot becomes a single entity (primary key) with relational links to: contracts, bu
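A minimal sketch of the Step 4 normalization logic. The post names rapidfuzz; this sketch substitutes the stdlib `difflib.SequenceMatcher` so it runs with no dependencies (rapidfuzz's `process.extractOne` would replace the scorer in production). The canonical municipality list and the 0.6 threshold are illustrative assumptions, not values from the post.

```python
import re
from difflib import SequenceMatcher  # stdlib stand-in for rapidfuzz in this sketch

CANONICAL_MUNICIPALITIES = ["Bela Vista de Goiás", "Goiânia", "Anápolis"]  # illustrative

def normalize_cpf(raw):
    """Strip punctuation so '123.456.789-09' and '12345678909' compare equal."""
    digits = re.sub(r"\D", "", raw)
    return digits if len(digits) == 11 else None  # flag malformed CPFs for review

def canonical_municipality(raw, threshold=0.6):
    """Map a messy municipality string to its canonical form, or None for manual review."""
    def score(candidate):
        return SequenceMatcher(None, raw.lower(), candidate.lower()).ratio()
    best = max(CANONICAL_MUNICIPALITIES, key=score)
    # Below the threshold we return None rather than guess: at 10k records,
    # a wrong silent merge is worse than a manual-review queue entry.
    return best if score(best) >= threshold else None
```

Entity resolution across buyer names would layer on top of this: block on normalized CPF first, and fall back to fuzzy name matching only where the CPF is missing or malformed.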
I built an open-source MCP memory server that gives Claude persistent memory with auto-graph and semantic search
I've been building a personal knowledge system called Open Brain and just open-sourced it. It's an MCP server that gives Claude (Code, Desktop, or any MCP client) persistent memory across sessions.

What it does: you tell Claude to "remember this" and it captures the thought: embedding it, extracting entities (people, tools, projects, orgs), scoring quality, checking for semantic duplicates, and auto-linking to related thoughts. Later you search by meaning, not keywords.

What makes it different from other MCP memory tools:

- Auto-graph: connections between thoughts are created automatically on capture. Typed links (extends, contradicts, is-evidence-for) at 0.80+ similarity. No manual linking.
- Semantic dedup: captures at 0.92+ similarity auto-merge instead of creating duplicates.
- Salience scoring: 6-factor ranking (recency, access frequency, connections, merges, source weight, pinned). Thoughts you actually use rise to the top over time.
- Hybrid search: BM25 full-text + pgvector cosine similarity with Reciprocal Rank Fusion. Handles both exact terms and meaning.
- 16 MCP tools: not just store/recall. Graph traversal, entity browsing, weekly review synthesis, staleness pruning, dedup review, density analysis.
- Staleness pruning: thoughts that become irrelevant decay and get soft-archived automatically. LLM-confirmed, with sole-entity protection so you don't lose knowledge.

Stack: Supabase (Postgres + pgvector) + Deno Edge Functions + OpenRouter. Self-hostable: you own your data, and it runs on your own Supabase project.

Setup is ~10 minutes: clone, run bootstrap (interactive secret setup), run deploy (schema + functions), run validate (8-check verification). The deploy script prints a ready-to-paste `claude mcp add` command. Works with Claude Code, Claude Desktop, ChatGPT, and any MCP-compatible client. MIT licensed, 40 SQL migrations, 5 Edge Functions, 138 tests.

GitHub: https://github.com/Bobby-cell-commits/open-brain-server

Happy to answer questions about the architecture or how the auto-graph/salience scoring works under the hood.

submitted by /u/midgyrakk
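The hybrid-search bullet above (BM25 + vector similarity fused with Reciprocal Rank Fusion) can be sketched like this. The `k=60` constant is the value commonly used in RRF implementations, and the toy ranked lists are made up for illustration; neither is taken from Open Brain's actual code.

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists: each doc scores sum(1 / (k + rank))."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: BM25 and vector search disagree on ordering; RRF rewards
# the document that sits near the top of both lists.
bm25_hits = ["a", "b", "c"]
vector_hits = ["b", "c", "a"]
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))  # ['b', 'a', 'c']
```

The appeal of RRF here is that it only looks at ranks, so BM25 scores and cosine similarities never need to be put on a common scale.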
Supabase docs now available over SSH for AI Agents https://t.co/ivtkL2gCf9
happy friday https://t.co/Sz5Bii9RQ2
At MCP Summit in NYC, Pedro, our AI tooling engineer, shared insights: 'MCP provides the hands. Skills provide the playbook. Together, they close the gap between what agents can do and what they should do.' #MCPDevSummit 🤝 https://t.co/zo9VAUWSQx
Supabase has hit an impressive 100,000 GitHub stars! 🌟 A massive THANK YOU to our amazing community for your unwavering support. Let’s keep building together!! https://t.co/Zuv6agQnoG
We built something experimental for developers working with AI coding agents: https://t.co/5s1VsFkLB8 It's a public SSH server that exposes the full Supabase documentation as a virtual file system. Connect with `ssh https://t.co/5s1VsFkLB8` and your agent gets bash access to every page: grep, find, cat, and more. https://t.co/ud82Mfe6LJ
The easiest way to make a GraphQL request is to use Supabase Studio's built-in GraphiQL IDE https://t.co/6FpyX35TXR
Based on 59 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.