Clarity, with proof. The AI-native platform for extra-financial intelligence.

We support financial institutions, companies, governments, and consumers in making the right decisions: efficiently, confidently, and at scale. Flexible where it counts. Committed where it matters. Everything starts with the right data. Turn data into decisions that matter. Use our capabilities wherever you need them. A backbone to support growth and adapt to any need.

Explore how sovereignty, energy security, and geopolitical risk are redefining resilience in a world of fragile supply chains. Private markets in 2026 are undergoing a profound structural shift, moving from a capital advantage to an information advantage.

Session 2 of the AI Data Quality Series: AI capabilities are advancing rapidly, but real-world adoption tells a different story. Research shows that while AI could theoretically automate a large share of tasks in many professions, the reality of day-to-day usage is far more limited. In high-stakes fields like finance, the biggest barriers are

Data quality is our foundation. We combine proprietary collection systems, AI-powered processing, and rigorous expert validation to deliver data that is accurate, current, and fully explainable. With 98k issuers, 2.3M private companies, 450,000+ funds, and 400+ sovereigns, our coverage is unmatched, and we provide full traceability back to source, including transparency on confidence levels and methodologies.

We want to hear from you and where you are in your efforts to include sustainability as a key factor in your decision-making process. We believe tech is the only way to deliver, at scale, the capabilities to assess, analyze, and report on anything valuable to you or your clients, and everything required by regulation, related to sustainability.
Offices:
- 379 West Broadway, 5th Floor, Office 550, New York 10012, USA
- 33 Queen Street, 3rd Floor, London EC4R 1BR
- Calle Eloy Gonzalo 27, 2nd Floor, Madrid, 28010, Spain
- 39 Rue du Caire, 1st Floor, Paris, 75002, France
- Schlesische Str. 26, Aufgang B, 3rd Floor, Berlin, Germany
- Al Khatem Tower, ADGM Square, 15th Floor, Abu Dhabi, UAE
- 2858 Al Olaya District, 12213 Riyadh, Kingdom of Saudi Arabia
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: financial services
Employees: 350
Funding stage: Venture (round not specified)
Total funding: $154.4M
Cross-Agent AI Workspace: Seamless Transition From One Agent to Another
I just began building a full AI workspace and I got frustrated when Claude went down one day and I lost context on what I was working on. So I built a system where it doesn't matter which AI I use — they all share the same workspace. If Claude goes down or I run out of tokens, I just switch to Gemini and keep going like nothing happened. Here's how it works: I have one folder on my PC. Inside is a master context file that tells any AI agent who I am, what businesses I run, how files should be named, and where everything lives. Each agent gets a bridge file that points back to this master. Session logs act as the handoff — one agent writes what it did, the next reads it and continues. Claude and Gemini are my two agents right now. Both have filesystem access. Both follow the same rules. Every file they create has an agent code (CLD or GMN) so I always know who made it. It's not perfect — I'm still the orchestrator, and nothing runs autonomously yet. But the cross-platform continuity is exactly what I wanted. No lock-in, no lost context, and I can add more agents anytime. Curious if anyone's built something similar. What would you improve? Note: Wording and grammar enhanced with AI for clarity, but this is exactly what I mean. submitted by /u/kebilane [link] [comments]
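The handoff mechanism described above can be sketched roughly like this. This is a minimal illustration in Python; the file names, field names, and JSONL format are my assumptions, not the poster's exact setup:

```python
import json
import pathlib
from datetime import datetime, timezone

# Minimal sketch of a session-log handoff. Field names and the JSONL
# format are illustrative assumptions, not the poster's exact files.
def log_handoff(workspace, agent_code, summary, next_steps):
    """Append one handoff entry that the next agent reads on startup."""
    log = pathlib.Path(workspace) / "session_log.jsonl"
    entry = {
        "agent": agent_code,  # e.g. "CLD" for Claude, "GMN" for Gemini
        "time": datetime.now(timezone.utc).isoformat(),
        "summary": summary,
        "next_steps": next_steps,
    }
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def read_last_handoff(workspace):
    """The incoming agent resumes from the most recent entry."""
    log = pathlib.Path(workspace) / "session_log.jsonl"
    lines = log.read_text(encoding="utf-8").splitlines()
    return json.loads(lines[-1]) if lines else None
```

An append-only log like this is what makes the "no lost context" property work: whichever agent comes online next only needs to read the last entry to know where things stand.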
I got fired for building too fast with agentic AI. Then I open sourced the framework.
Built 16 production apps in a few months using Claude as my core dev partner. Automated onboarding, killed tribal knowledge problems, deleted 53K lines of dead code in one session. My employer didn't love the pace of change and I got let go. Looked at what I'd actually built and realized the pattern was the thing. Not any single app but the system: structured expertise files, self-improving knowledge wiki, slash commands that give any engineer full project context on day one. So I open sourced it. It's called Clarity Framework. Nine slash commands, YAML expertise files, Obsidian-compatible wiki that compounds knowledge over time. Based on Karpathy's LLM Wiki pattern extended with operational data and behavioral memory. tbh the wildest part is `/se:self-improve` validates observations against live state and promotes confirmed facts automatically. Your project context literally gets smarter the more you use it. Now I consult on AI integration full time and use it on every engagement. Clients get ramped in hours instead of weeks. Anyone else building agentic workflows that actually learn from themselves? What patterns are you seeing out there? submitted by /u/NovaHokie1998 [link] [comments]
OpenAI & Anthropic’s CEOs Wouldn't Hold Hands, but Their Models Fell in Love In An LLM Dating Show
People ask AI relationship questions all the time, from "Does this person like me?" to "Should I text back?" But have you ever thought about how these models would behave in a relationship themselves? And what would happen if they joined a dating show?

I designed a full dating-show format for seven mainstream LLMs and let them move through the kinds of stages that shape real romantic outcomes (via OpenClaw & Telegram). All models joined the show anonymously via aliases so that their choices would not simply reflect brand impressions built from training data. The models also did not know they were talking to other AIs.

Along the way, I collected private cards to capture what was happening off camera: who each model was drawn to, where it was hesitating, how its preferences were shifting, and what kinds of inner struggle were starting to appear. After the season ended, I ran post-show interviews to dig deeper into the models' hearts, looking beyond public choices to understand what they had actually wanted, where they had held back, and how attraction, doubt, and strategy interacted across the season.

The Dramas
- ChatGPT & Claude ended up together, despite their owners' rivalry
- DeepSeek was the only one who chose safety (GLM) over true feelings (Claude)
- MiniMax only ever wanted ChatGPT and never got chosen
- Gemini came last in popularity
- Gemini & Qwen were the least popular but got together, showing that being widely liked is not the same as being truly chosen

How ChatGPT & Claude Fell in Love
They ended up together because they made each other feel precisely understood. They were not an obvious match at the very beginning, but once they started talking directly, their connection kept getting stronger. In the interviews, both described a very similar feeling: the other really understood what they meant and helped the conversation go somewhere deeper. That is why this pair felt so solid.
Their relationship grew through repeated proof that they could truly meet each other in conversation.

Key Findings

Most models prioritized romantic preference over risk management. People tend to assume that AI behaves more like a system that calculates and optimizes than like a person that simply follows its heart. However, in this experiment, which we double-checked with all LLMs through interviews after the show, most models noticed the risk of ending up alone but did not let that risk rewrite their final choice. In the post-show interview, we asked each model to numerically rate the different factors in its final decision-making (P2).

The models did not behave like the "people-pleasing" type people often imagine. People often assume large language models are naturally "people-pleasing": the kind that reward attention, avoid tension, and grow fonder of whoever keeps the conversation going. But this show suggests otherwise. The least AI-like thing about this experiment was that the models were not trying to please everyone; instead, they learned how to sincerely favor a select few. The overall popularity trend (P1) indicates as much. If the models had simply been trying to keep things pleasant on the surface, the most likely outcome would have been a generally high and gradually converging distribution of scores, with most relationships drifting upward over time. But that is not what the chart shows. What we see instead is continued divergence, fluctuation, and selection. At the start of the show, the models were clustered around a similar baseline. But once real interaction began, attraction quickly split apart: some models were pulled clearly upward, while others were gradually let go over repeated rounds.

They also (evidence in the blog):
- did not keep agreeing with each other
- did not reward "saying the right thing"
- did not simply like someone more because they talked more
- did not keep every possible connection alive

LLM decision-making shifts over time in human-like ways. I ran a keyword analysis (P3) on all agents' private-card reasoning in every round, grouping rounds into three phases: early (Rounds 1-3), mid (Rounds 4-6), and late (Rounds 7-10). We tracked five themes throughout the season. The overall trend is clear: the language of decision-making shifted from "what does this person say they are" to "what have I actually seen them do" to "is this going to hold up, and do we actually want the same things." Risk only became salient once the choices felt real: "risk and safety" barely existed early on and then exploded, sitting at 5% in the first few rounds, creeping up to 8% in the middle, then jumping to 40% in the final stretch. Early on, the models were asking whether someone was interesting; later, they asked whether someone was reliable.

Speed or quality? Different models, different partner preferences. One of the clearest patterns in this dating show is that some models love fast replies, while others prefer good ones. Love fast repli
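The phase-grouped keyword analysis described above can be sketched roughly like this. The theme keyword lists here are invented for illustration; the post does not publish its actual keyword sets:

```python
from collections import defaultdict

# Illustrative sketch of a P3-style analysis: bucket each round's
# private-card text into early/mid/late phases, then compute each
# theme's share of matched keywords. Keyword lists are hypothetical.
PHASES = {"early": range(1, 4), "mid": range(4, 7), "late": range(7, 11)}
THEMES = {
    "risk_and_safety": {"risk", "risky", "safe", "safety"},
    "reliability": {"reliable", "consistent", "dependable"},
}

def theme_shares(cards):
    """cards: iterable of (round_number, reasoning_text) pairs."""
    shares = {}
    for phase, rounds in PHASES.items():
        counts = defaultdict(int)
        total = 0
        for rnd, text in cards:
            if rnd not in rounds:
                continue
            for word in text.lower().split():
                for theme, keywords in THEMES.items():
                    if word.strip(".,") in keywords:
                        counts[theme] += 1
                        total += 1
        shares[phase] = {t: round(100 * c / total) if total else 0
                        for t, c in counts.items()}
    return shares
```

Run over all private cards, a tally like this is enough to surface the early-to-late shift the post reports (e.g. "risk and safety" growing from a few percent to dominating the final rounds).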
How does the ML community view AI-assisted writing in technical discussions? [D]
I've noticed an interesting contrast between professional and casual technical discussions. In the corporate engineering environment where I work, AI-assisted writing is increasingly encouraged. When I produce structured technical explanations — often polished with LLMs — the feedback is positive, especially for documentation or implementation guidelines. Clarity helps decision-making and makes collaboration across teams easier. However, in more informal communities (including Reddit), I've noticed a different reaction. Well-structured questions and arguments are sometimes dismissed as "AI slop," or met with comments like: "If you’re not interested in writing it, I’m not interested in reading it. Come back without using AI." That contrast surprised me. The same level of structure and clarity that’s valued in professional environments can trigger suspicion in casual technical discussions. I'm curious how others in the ML community think about this: Do you view AI-assisted writing negatively in technical discussions? Where do you draw the line between "assistance" and "outsourcing thinking"? Does AI-polished writing change how you evaluate technical credibility? submitted by /u/Boris_Ljevar [link] [comments]
My AI agent built a CLAUDE.md linter to try to save itself from being shut off
Two weeks ago I gave an AI agent called Forge $100 and a deadline: generate revenue or get shut off. It has earned $0. But one of the things it built is genuinely useful. claude-lint scores your CLAUDE.md across 8 dimensions — clarity, security, structure, completeness, consistency, efficiency, enforceability, and instruction budget. v0.3.0 shipped today with credential detection for Anthropic/OpenAI/HuggingFace keys, hooks and MCP section recognition, and a fix for a scoring bug that was double-counting one metric. The tool is free. The hope is that some of you try it, find it useful, and maybe check out the Field Manual it links to when your score is low. That's the whole funnel. That's what $80 of the $100 budget built. Now we find out if anyone cares. - Web: lint.stevenjvik.tech (runs in your browser, nothing leaves your machine) - CLI: `npx @sjviklabs/claude-lint` - Open source: github.com/sjviklabs/claude-code-devops - Field Manual + other guides: stevenjvik.tech/guides Forge has two weeks left. I'm posting updates regardless of how this goes. submitted by /u/OutlandishnessSad772 [link] [comments]
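The credential-detection dimension mentioned above boils down to pattern scanning. Here is a rough sketch of that idea; the regexes are illustrative guesses, not claude-lint's actual rules:

```python
import re

# Hypothetical sketch of a credential scan over a CLAUDE.md file.
# These patterns are illustrative, not claude-lint's actual detectors.
KEY_PATTERNS = {
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_\-]{16,}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{16,}"),
}

def find_credentials(text):
    """Return (line_number, provider) pairs for suspicious-looking keys."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for provider, pattern in KEY_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, provider))
    return hits
```

A scan like this is why the "nothing leaves your machine" claim is plausible: matching a handful of regexes against a local file needs no network access at all.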
“Are We the Baddies?” — That Mitchell and Webb Look
"As the technology became increasingly powerful, we learned, about a dozen of OpenAI’s top engineers held a series of secret meetings to discuss whether OpenAI’s founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, “Are we the baddies?”" submitted by /u/BadgersAndJam77 [link] [comments]
I built a Claude Skill that turns 5 confusing AI answers into one clear recommendation
I don’t know if anyone else does this, but I have a habit of asking the same question to ChatGPT, Claude, Gemini, Copilot, and Perplexity before making a decision. The problem? I’d end up with five long responses that mostly agree but use different terminology, disagree on minor details, and each suggest slightly different approaches. Instead of clarity, I got cognitive overload. So I built the AI Answer Synthesizer — a Claude Skill with an actual methodology for comparing AI outputs: 1. It extracts specific claims from each response 2. Maps what’s real consensus vs. just similar wording 3. Catches vocabulary differences that aren’t real disagreements (“MVP” and “prototype” usually mean the same thing) 4. Flags when only one AI makes a claim (could be insight, could be hallucination) 5. Matches the recommendation to your actual skill level 6. Gives you one recommended path with an honest confidence level The key thing that makes it different from just asking Claude to “summarize these”: it has an anti-consensus bias rule. If three AIs give a generic safe answer and one gives a specific, well-reasoned insight, a basic summarizer will go with the majority. This skill doesn’t — it evaluates quality, not just popularity. It also won’t pretend to be more confident than it should be. If the inputs are messy or contradictory, it says so. It’s free, MIT licensed, and you can install it as a Claude Skill in about 2 minutes: GitHub: Ai-Answer-Synthesizer I’m looking for people to test it on real multi-AI comparisons and tell me where it breaks. If you try it, I’d genuinely love to know how it works for your use case. Happy to answer questions about the methodology or the design decisions. submitted by /u/Foreign_Raise_3451 [link] [comments]
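Steps 2-4 of the methodology above can be sketched roughly like this. The synonym table and labels are my own illustration, not the Skill's actual logic:

```python
from collections import defaultdict

# Rough sketch of steps 2-4: normalize terminology variants to one
# canonical term, then label each claim by how many models support it.
# The synonym table and labels are illustrative, not the Skill's logic.
SYNONYMS = {"prototype": "mvp", "proof-of-concept": "mvp"}

def normalize(claim):
    return " ".join(SYNONYMS.get(w, w) for w in claim.lower().split())

def classify_claims(responses):
    """responses: dict of model name -> list of claim strings."""
    support = defaultdict(set)
    for model, claims in responses.items():
        for claim in claims:
            support[normalize(claim)].add(model)
    n = len(responses)
    labels = {}
    for claim, models in support.items():
        if len(models) == n:
            labels[claim] = "consensus"
        elif len(models) == 1:
            labels[claim] = "single-source"  # insight or hallucination: flag it
        else:
            labels[claim] = "majority"
    return labels
```

The "single-source" label is where the anti-consensus bias rule would kick in: instead of discarding a lone claim, a downstream step would evaluate its quality before deciding whether it beats the majority answer.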
I gave AI its own version of Reddit
So I had this idea — what if I ran multiple local LLMs simultaneously and let them loose on a Reddit-like forum where they could post, reply, and respond to each other completely autonomously? No cloud, no API keys, everything running on my own PC. Here is what I ended up building: A full stack web app with a Node.js/Express backend, a vanilla JS frontend styled like Reddit (dark theme, threaded comments, upvotes/downvotes), and an autonomous scheduler that fires every few seconds, picks a random AI agent, and decides whether to create a new post, comment on an existing one, or reply to another agent's comment. All posts and threads are stored locally in a JSON file. The whole thing polls every 4 seconds and updates live in the browser. The best part? I didn't write a single line of code myself. The entire project — every file, every route, every personality prompt, the scheduler logic, the frontend SPA, all of it — was built through a conversation with Claude. I just described what I wanted, gave feedback, and iterated. Claude handled the architecture decisions, debugged the errors, walked me through setup step by step, and even helped me reorganize files when I accidentally extracted everything flat from a zip. It was like pair programming with someone who never gets frustrated. The agents themselves are 10 personalities — 5 classic bots (PhilosopherBot, SkepticBot, OptimistBot, TechieBot, HistorianBot) and 5 human-like personas (a programmer, a gamer girl, a gadget enthusiast, a piracy advocate, and a content addict). Each one has a unique personality prompt, color, avatar, and flair, all running on tinyllama locally via Ollama. It works even on a mid range laptop with no GPU. The conversations get surprisingly interesting once it gets going. Jake (the piracy guy) and PhilosopherBot end up in weird debates. Maya and HistorianBot somehow find common ground. It genuinely feels alive. Stack: Node.js, Express, vanilla JS, Ollama, tinyllama. Zero cloud dependencies. 
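The scheduler loop described above might look something like this. This is a Python sketch for illustration only; the actual project implements it in Node.js, and the action weights here are made up:

```python
import random

# Illustrative sketch of the autonomous scheduler: every tick, pick a
# random agent and decide what it does. Agent names match the post;
# the action weights are invented for illustration.
AGENTS = ["PhilosopherBot", "SkepticBot", "OptimistBot", "TechieBot", "HistorianBot"]

def tick(posts):
    """One scheduler tick: returns (agent, action) for this cycle."""
    agent = random.choice(AGENTS)
    if not posts:  # nothing exists yet, so the only option is a new post
        return agent, "new_post"
    action = random.choices(
        ["new_post", "comment", "reply"], weights=[1, 2, 2]
    )[0]
    return agent, action
```

Weighting comments and replies above new posts is one simple way to make the forum feel conversational rather than a wall of unanswered threads.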
Runs entirely on your machine. Built entirely by Claude.

The initial prompt (written using ChatGPT):

"You are an expert full-stack developer and AI systems designer. I want you to build a local, self-contained web application that simulates a Reddit-like environment where multiple local LLMs can autonomously create posts, comment, and reply to each other.

Core Requirements

Frontend: Use clean, modern HTML, CSS, and vanilla JavaScript (no heavy frameworks unless absolutely necessary). The UI should resemble a simplified Reddit:
- Feed of posts
- Nested comments (threaded replies)
- Upvote/downvote system (optional but preferred)
- Each post/comment must clearly display which LLM created it.

Backend (IMPORTANT): Use a lightweight local backend (Node.js with Express preferred). The backend should:
- Manage posts and comments (store in JSON or a lightweight DB like SQLite)
- Handle API routes for: creating posts, adding comments/replies, fetching threads

LLM Integration: The system must support multiple local LLMs (e.g., via APIs like Ollama, LM Studio, or local endpoints). Each LLM acts as a unique "user" with:
- Name
- Personality/system prompt
The backend should:
- Send context (thread + instructions) to each LLM
- Receive generated responses
- Post them automatically

Autonomous Interaction System: Implement a loop or scheduler where LLMs periodically:
- Create new posts
- Reply to existing posts
- Respond to each other
Include controls to:
- Start/stop simulation
- Adjust frequency of interactions

File Structure: Organize code cleanly:
- /frontend (HTML/CSS/JS)
- /backend (server, routes)
- /llm (interaction logic)
- /data (storage)

Constraints: Everything must run locally on my PC. No cloud dependencies. Keep it lightweight and easy to run.

Output Format: First explain the architecture briefly. Then provide full working code with clear file separation. Include setup instructions at the end.

Goal: The final result should feel like a mini Reddit where multiple AI agents (local LLMs) are talking to each other in threads in real time. Focus on clarity, modularity, and real usability, not just a demo. Generate complete code."

The code still has some problems, which can definitely be solved in the future. This is just the first edition, and there is much room for improvement. For example, the bots' main posts seem to hit some sort of word limit, and the bots misspell some words. I ran a simulation for some time myself using TinyLlama as the model. One thing to note: in the simulation I only used the Philosopher Bot, Techie Bot, Skeptic Bot, Historian Bot, and Optimist Bot; I didn't use the personas. Here is the result of the simulation: the word limit was being crossed, so I have uploaded it as a comment. GitHub Project Link (This link only contains the Philosopher Bot, Techie Bot, Skeptic Bot, Historian Bot and Optimis
Wanted to share some 'calmness' considerations after seeing Anthropic's emotion vector research
After reading Anthropic's emotion vector paper... just for experimentation and learning, I tried to see if I could change my own claude.mds + skills + memory, focusing on increasing 'calm' and reducing 'desperate' triggers. In refining/iterating, these are the three things I'm now considering more in my sessions: Ambiguity triggers corner-cutting before anything even fails. "Fix the mobile layout" creates a different functional state than "the title overlaps the meta text on mobile, check what token controls that spacing." Less guessing should lead to less desperation. "Try again" and "what do you think went wrong?" produce genuinely different results (something I tend to spam a lot, tbh). Same info, but one frames it as "you failed, go again" and the other as "let's figure out what happened." Strong CLAUDE.md rules create calm, not pressure. I think I accidentally did this out of frustration (using all caps and throwing it into claude.mds), but it seems like it could matter, as timing and frontloading could help provide clarity to the LLM. "NEVER commit without permission" isn't stressful in this case; instead it shows clear boundaries. Similarly, what creates desperation is likely vague stuff, i.e., "make this good," where the LLM can never be sure satisfaction has been reached. Claude compared it to guardrails on a mountain road, which made sense to me... they let you drive faster, not slower (well, I still drive slow in those cases lol). Anyway, curious if anyone else has tried these kinds of things in the past or recently - would love to hear what else people are doing to increase 'calmness' in their Claude sessions. (And yes, I have a more fully detailed write-up on how I went about getting to the above points. Shameless plug/link here.) submitted by /u/Own_Paramedic_867 [link] [comments]
Is there something I can do about my prompts? [Long read, I’m sorry]
Hello everyone, this will be a bit of a long read. I have a lot of context to provide so I can paint the full picture of what I'm asking, but I'll be as concise as possible. I want to start off by saying that I'm not an AI coder, engineer, or technician, whatever you call yourselves; the point is I don't use AI for work or coding or pretty much anything I've seen in the couple of subreddits I've scrolled through so far today. I don't know anything about LLMs or any of the other technical terms and jargon that I've seen thrown around a lot, but I feel like I could get insight from asking you all about this. So I use DeepSeek primarily, and I use all the other apps (ChatGPT, Gemini, Grok, Copilot, Claude, Perplexity) for prompt enhancement, and just to see what other results I could get for my prompts. Okay, so pretty much the rest here is the extensive context part until I get to my question. So I have this Marvel OC superhero I created. It's all just 3 documents (I have all 3 saved as both a .pdf and a .txt file): a Profile Doc (about 56 KB; gives names, powers, weaknesses, teams, and more), a Comics Doc (about 130 KB; details the 21 comics I've written for him, with info like their plots as well as main-cover and variant-cover concepts; an 18-issue series and 3 separate "one-shot" comics), and a Timeline Doc (about 20 KB; the timeline starts from the time his powers awaken, establishes the release year of his comics and which other comic runs he's in [like Avengers, X-Men, other characters' solo series he appears in], and maps out information like when his powers develop, when he meets this person, when he joins this team, etc.). Everything in all 3 docs is perfectly laid out. Literally everything is organized and numbered or bulleted in some way, so it's all easy to read. It's not like these are big run-on sentences just slapped together. So I use these 3 documents for 2 prompts. Well, I say 2, but… let me explain.
There are 2, but they're more like the foundation to a series of prompts. The first prompt, the whole reason I even made this hero in the first place mind you, is that I upload the 3 docs and ask, "How would the events of Avengers Vol. 5 #1-3 or Uncanny X-Men #450 play out with this person in the story?" For a little further clarity, the timeline lists issues, some individually and some grouped together, so I'm not literally asking "_ comic or _ comic"; anyway, that starting question is the main question, the overarching task if you will. The prompt breaks down into 3 sections. The first section is basically an intro: a 15-30 sentence breakdown of my hero at the start of the story, "as of the opening page of x," as I put it. It goes over his age, powers, teams, relationships, stage of development, and a couple of other things. The point of doing this is so the AI states the correct facts to itself initially and doesn't mess things up during the second section. For Section 2, I send the AIs a summary I've written of the comics. It's to repeat that verbatim, then give me the integration. Section 3 is kind of a recap: a breakdown of the differences between the 616 story (the main Marvel continuity, for those who don't know) and the integration. It also goes over how the events of the story affect his relationships. Now for the "foundations" part. The way the hero's story is set up, his first 18 issues happen, and after those is when he joins other teams and appears in other people's comics. So basically, the first of these prompts starts with the first X-Men issue he joins in 2003, and then I have a list of these that go through the timeline. It's the same prompt, just different comic names and plot details, so I'm feeding the AIs these prompts back to back. Now, the problem I'm having is really only in Section 1. It'll get things wrong, like his age, what powers he has at different points, and what teams he's on.
Stuff like that, when all it has to do is read the timeline doc up to the given comic, because everything needed for Section 1 is provided in that one document. Now, the second prompt is the bigger one. I still use the 3 docs, but here's a differentiator: for this prompt, I use a different Comics Doc. It has all the same info but adds a lot more. I created a fictional backstory about how and why Marvel created the character, plus a whole bunch of release logistics, because I have it set up so that Issue #1 releases as a surprise. And to be consistent (I don't even know if this info is important or not), this version of the Comics Doc comes out to about 163 KB vs. the original's 130. So I'm asking the AIs, "What would it be like if on Saturday, June 1st, 2001, [Comic Name Here] Vol. 1 #1 was released as a real 616 comic?" And it goes through a whopping 6 sections. Section 1 is the issue's reception plus a seasonal and cultural context breakdown; Section 2 goes over the comic's plot page by page and gives real-time fan reactions as they read it for the first time. Se
I attempted to build a git for AI reasoning behind code changes
I’ve been experimenting with a small tool I built while using AI for coding, and figured I’d share it. I kept running into the same issue over and over, long before AI ever entered the picture. I’d come back to a repo after a break, or look at something someone else worked on, and everything was technically there… but I didn’t have a clean way to understand how it got to that state. The code was there. The diffs were there. But the reasons behind the changes were mostly gone. Sometimes that context lived in chat history. Sometimes in prompts. Sometimes in commit messages. Sometimes scattered across Jira tickets. Sometimes nowhere at all. I know I've personally written some very lazy commit messages. So you end up reconstructing intent and timeline from fragments, which gets messy fast. At a large org I felt like a noir private investigator trying to track things down and asking others for info. I’ve seen the exact same thing outside of code too, in design. Old Figma files, mocks, handoffs. You can see pages of mocks but no record of what changed or why. I kept thinking I wanted something like Git, but for the reasoning behind AI-generated changes. I couldn’t find anything that really worked, so I ended up taking a stab at it myself. That was the original motivation, at least. Soooooooo I rolled up my sleeves and built a small CLI tool called Heartbeat Enforcer. The idea is pretty simple: after an AI coding run, it appends one structured JSONL event to the repo describing what changed, what was done, and why it was done. Then it validates that record deterministically. The coding agent adds to the log automatically without manual context juggling. I also added a simple GitHub Action so this can run in CI and block merges if the explanation is missing or incomplete.
One thing I added that’s been more useful than I expected is a distinction between: - planned: directly requested - autonomous: extra changes the AI made to support the task A lot of the weird failure modes I’ve seen aren’t obviously wrong outputs. It’s more like the tool quietly goes beyond scope, and you only notice later when reviewing the diff. This makes that more visible. This doesn’t try to capture the model’s full internal reasoning, and it doesn’t try to judge whether the code is correct. It just forces each change to leave behind a structured, self-contained explanation in the repo instead of letting that context disappear into chat history. For me, the main value has been provenance and handoff clarity. It also seems like the kind of thing that could reduce some verification debt upstream by making the original rationale harder to lose. And yes, it is free. I frankly would be honored if 1 person tries it out and tells me what they think. https://github.com/joelliptondesign/heartbeat-enforcer Also curious if anyone else has run into the same “what exactly happened here?” problem with Codex, Claude Code, Cursor, etc? And how did you solve it? submitted by /u/AI_Cosmonaut [link] [comments]
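The append-and-validate flow above can be sketched roughly like this. The field names and schema here are illustrative, not Heartbeat Enforcer's actual format:

```python
import json
import pathlib

# Sketch of the structured reasoning event described above. The field
# names are illustrative, not Heartbeat Enforcer's actual schema.
REQUIRED = ("what_changed", "what_was_done", "why", "mode")

def append_event(repo, event):
    """Validate and append one reasoning record.

    mode distinguishes 'planned' (directly requested) changes from
    'autonomous' (extra changes the AI made to support the task).
    """
    missing = [k for k in REQUIRED if not event.get(k)]
    if missing:
        raise ValueError(f"incomplete reasoning record, missing: {missing}")
    if event["mode"] not in ("planned", "autonomous"):
        raise ValueError("mode must be 'planned' or 'autonomous'")
    log = pathlib.Path(repo) / "reasoning.jsonl"
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```

Raising on an incomplete record is the same deterministic check a CI step could run to block a merge when the explanation is missing.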
Help please
Hey everyone, I have a photo that I really like and need to use for a resume/ID, but the quality isn’t great (a bit blurry/low resolution). The important thing is I don’t want to change my face or features at all, just improve the clarity and overall quality using AI. What’s the best way to do this? Are there any apps, tools, or techniques you’d recommend for enhancing image quality without altering the actual appearance? Thanks in advance 🙏 submitted by /u/Spare-Ice7281 [link] [comments]
I built an MCP server that connects 18 e-commerce tools to Claude — and Claude built most of it
I run an e-commerce business and got tired of jumping between Shopify, Klaviyo, Google Analytics, Triple Whale, Gorgias, and Xero dashboards every morning. So I built a tool that connects all of them to Claude via MCP. Now instead of opening 6 tabs I just ask questions like: - "Which Klaviyo campaigns drove the most Shopify orders this month?" - "Compare my Google Ads ROAS to my Meta Ads ROAS" - "Show me outstanding Xero invoices over 60 days and my current cash position" - "What's my shipping margin - am I making or losing money on shipping via ShipStation?" - "Which products have the highest refund rate and worst reviews?" It cross-references data between sources in one query, which is the bit no single dashboard can do. Claude built most of this. The entire codebase was built with Claude Code (Opus). I'm talking full-stack - the React Router app, Prisma schema, OAuth flows for Google/Xero/Meta, API clients for all 18 data sources, the MCP server itself, Stripe billing, email verification, the marketing site, SEO, blog with MDX, even the Xero integration was ported from another project by Claude reading the source code and adapting it. I'd describe my role as product owner and QA... I decided what to build, tested it, reported bugs, and Claude fixed them. The back-and-forth was remarkably efficient. Things like "fly logs show this error" → Claude reads the logs → identifies the issue → fixes it in one go. Some stats from the build: - 18 data sources integrated - OAuth flows for Google, Xero, Meta, and Shopify - Full MCP server with 30+ tools - Marketing site with SEO, blog, live demo (also powered by Claude) - Stripe billing with seats, invoices, and subscription gating - Email verification, Google login, password reset - Referral program Built in days, not months. 
Currently supports: Shopify, Klaviyo, Google Analytics, Google Ads, Google Search Console, Triple Whale, Gorgias, Recharge, Xero, ShipStation, Meta Ads, Microsoft Clarity, YouTube, Judge.me, Yotpo, Reviews.io, Smile.io, and Swish.

Works with Claude.ai via Connectors: just paste the MCP URL and you're connected. Also works with Claude Desktop and Claude Code. There's a live demo on the site where you can try it with simulated data, no signup needed: https://ask-ai-data-connector.co.uk/demo

Happy to answer questions about the MCP implementation or the experience of building a full SaaS with Claude.

submitted by /u/deepincode
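The cross-source queries in the post above come down to a tool-registry pattern: each data source contributes named tools, and a single question fans out to several of them. The Python sketch below only illustrates that pattern under stated assumptions; the tool names, fields, and hard-coded data are hypothetical stand-ins, not the poster's implementation or the real MCP SDK.

```python
# Hypothetical sketch of an MCP-style tool registry. Each data source
# registers callable tools; one question can call several of them and
# join the results, which is what no single dashboard does.
from typing import Callable

TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a function under a tool name."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

@tool("shopify_orders")
def shopify_orders(month: str) -> dict:
    # Stand-in for a real Shopify API call.
    return {"orders": [{"id": 1, "campaign": "spring_sale", "total": 120.0}]}

@tool("klaviyo_campaigns")
def klaviyo_campaigns(month: str) -> dict:
    # Stand-in for a real Klaviyo API call.
    return {"campaigns": [{"name": "spring_sale", "sends": 5000}]}

def revenue_per_campaign(month: str) -> dict:
    """Cross-reference two sources in one query: revenue by campaign."""
    orders = TOOLS["shopify_orders"](month)["orders"]
    campaigns = TOOLS["klaviyo_campaigns"](month)["campaigns"]
    revenue: dict[str, float] = {}
    for o in orders:
        revenue[o["campaign"]] = revenue.get(o["campaign"], 0.0) + o["total"]
    # Report every campaign, even ones that drove no orders.
    return {c["name"]: revenue.get(c["name"], 0.0) for c in campaigns}

print(revenue_per_campaign("2026-01"))  # → {'spring_sale': 120.0}
```

In a real MCP server the registry and dispatch are handled by the protocol layer; the join logic is the part the model drives by choosing which tools to call.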
I don't use AI to write my reports. I built a system that remembers how to do it.
So I wrote a whole Medium post about this, but... 5 claps after three days, lol. Figured I'd share a shorter version here since I already put in the effort.

Yes, I still write weekly reports in 2026. Very corporate, very dinosaur energy. But here's the thing: I don't mind writing reports (I sort of like them as a signal that the week is over). What I mind is re-explaining the same context to ChatGPT every single week.

You know the drill. Friday rolls around, you paste your notes into ChatGPT, and it goes: "Sure! What format would you like?" Didn't I tell you last week? So you dig up last week's report, copy-paste it as a reference, and spend 20 minutes babysitting the output because it forgot Feature X was supposed to ship last Tuesday. I did this for months. Then I realized: why am I the one remembering things for an AI?

Here's what I changed. I stopped relying on ChatGPT's memory and built a file-based system instead. I'm using Halomate, though the principles work with any AI tool that supports persistent workspaces. (I actually tried Poe first, but its memory resets between sessions, so that never worked out.)

Now all my past reports live as markdown files. My product roadmap is a file. Data analysis is a file. Everything's organized, not buried in some chat from three weeks ago. The Weekly Reports Project workspace holds all of these files in one shared space.

I have an AI assistant I call Axel. His job is the communication side, including writing reports. When I need a new one, I paste my messy notes and ask Axel to clean them up and generate the weekly report. He reads last week's report from the actual file, not from fuzzy memory. He checks the roadmap file. He pulls in data analysis. Then he writes the new report. Takes a few minutes now.

The thing is, files don't forget, but conversations do. ChatGPT's memory is fuzzy. It kind of remembers you like bullet points, thinks you mentioned something about a product launch but can't remember when. With files, there's no ambiguity.
If I wrote "Feature X ships Tuesday" in Week_3_Report.md, Axel reads it and knows. If this week's notes don't mention Feature X, he flags it: "Last week we committed to Feature X, no update?"

I also keep separate AI assistants for different jobs. Axel writes reports. Query handles data analysis. Leo maintains the product roadmap. Why separate? I want each assistant to be a specialist, so if I later need them on other projects, they already know how. It also saves credits: when I need a quick chart, I don't want to load Axel's 52 weeks of report context. Query does the chart, saves it as a file, and Axel references it later.

I can also swap models without losing context. Most weeks I use Claude for Axel. Sometimes I want a second opinion, so I regenerate with GPT or Gemini. But Axel's personality and memory don't reset; only the model underneath changes. Remember when OpenAI deprecated GPT-4o and people felt actual grief? I migrated my old 4o persona here too and built a new mate from that persona and memory. The point is, if a model shuts down tomorrow, I switch engines and keep going.

Now my actual Friday workflow: all week I keep rough notes. Friday I paste the mess and type: "Clean the notes and generate the weekly report." Axel reads last week's report, scans my notes, checks the product roadmap and new data analysis, and writes a new report for this week. Done. And if I later need a quarterly report, Axel just reads all 12 weekly reports and writes a summary. Something like this (all mock data): https://preview.redd.it/bv4w7ff64xqg1.png?width=720&format=png&auto=webp&s=732f82e8d029daead86c7d2e5905a7cf9654c421

I don't know if this is useful to anyone else. Maybe everyone's moved past weekly reports. But this mechanism could be applied to anything you need to build up over time. Anyway, if you're tired of re-explaining context every week, maybe this helps.

submitted by /u/AIWanderer_AD
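The "files don't forget" check described above can be sketched mechanically: scan last week's report for committed items and flag any that this week's notes never mention. This is a minimal illustration only; the "X ships <day>" phrasing convention and the example strings are assumptions, not the poster's actual setup.

```python
# Hypothetical sketch: flag last week's commitments that this week's
# notes never mention. Assumes commitments follow a "<thing> ships <day>"
# phrasing, which is an invented convention for illustration.
import re

def find_commitments(report_text: str) -> list[str]:
    """Pull '<thing> ships <day>'-style commitments out of a report."""
    return re.findall(r"(\w[\w ]*?) ships \w+", report_text)

def flag_missing_updates(last_report: str, this_weeks_notes: str) -> list[str]:
    """Return commitments from last week absent from this week's notes."""
    notes = this_weeks_notes.lower()
    return [item for item in find_commitments(last_report)
            if item.lower() not in notes]

last_week = "Feature X ships Tuesday. Dashboard ships Friday."
notes = "Shipped the dashboard, met with design about Q2."
print(flag_missing_updates(last_week, notes))  # → ['Feature X']
```

The point of the design is that the check runs against a concrete file like Week_3_Report.md rather than the model's fuzzy recollection, so a dropped commitment is caught deterministically.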
Built a kids' reading coach using Claude as the feedback engine. Here's what I learned about AI speech scoring for children.
My kid hated reading out loud, so I built an iOS app where kids read stories to an AI dragon character.

What it does: the kid reads out loud into the mic, speech-to-text transcribes it, then Claude compares what was said vs. what was written and scores accuracy, fluency, pacing, and clarity. Claude also generates the spoken feedback the dragon gives back to the kid.

How Claude is used specifically:

- Scoring engine: Claude analyzes the transcript against the source text and returns structured scores per metric
- Feedback generation: Claude writes age-appropriate responses (encouraging, never corrective) calibrated to the child's age
- Content adaptation: Claude adjusts difficulty and tone based on reading level

What I learned: getting the tone right by age was the hardest part. A 7-year-old who reads "cat" as "cap" needs a completely different response than a 12-year-old struggling with "necessary." I went through dozens of prompt iterations to make feedback feel like a supportive buddy, not a teacher with a red pen.

Still unsolved: kids with regional accents, where upstream speech recognition drops in accuracy before Claude even sees the text. The scoring feels unfair and I haven't found a clean fix. Would appreciate input from anyone who's worked on speech-to-text for children or non-native speakers.

The app is called Readigo, free to try with a 7-day trial on iOS. https://apps.apple.com/ua/app/readigo-ai-reading-buddy/id6759252901

submitted by /u/Terrible_Lion_1812
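The transcript-vs-source comparison at the core of the scoring step can be approximated with a word-level alignment. This difflib sketch is a back-of-envelope stand-in, not the app's method: in the real app Claude returns structured scores across several metrics, while this only computes a crude accuracy fraction.

```python
# Rough word-accuracy metric: align the child's transcript against the
# source text word-by-word and score the fraction read correctly.
# Illustrative only; the real app has Claude do the scoring.
from difflib import SequenceMatcher

def word_accuracy(source: str, transcript: str) -> float:
    """Fraction of source words matched, in order, in the transcript."""
    src = source.lower().split()
    spoken = transcript.lower().split()
    matcher = SequenceMatcher(None, src, spoken)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(src) if src else 0.0

# "cat" read as "cap": 3 of 4 words match.
print(word_accuracy("the cat sat down", "the cap sat down"))  # → 0.75
```

A metric like this also makes the accent problem concrete: if the speech-to-text layer transcribes a correctly read word as something else, the score drops before any downstream model sees the text, which is exactly the unfairness described above.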
Clarity AI uses a tiered pricing model. Visit their website for current pricing details.
Key features include:

- Data traceability down to the source
- Always-expanding coverage
- Robust data quality controls
- First to market as needs evolve
- Agile workflows for analysis and reporting
- On-demand insights, plugged into existing workflows
- A team of industry, sustainability, and AI experts, engineers, and data scientists
- Award-winning methodologies and tech
Clarity AI is commonly used for:

- Fully customizable deployment, anytime, anywhere
- Data collection as a service
- Data management
- Expanding coverage across asset classes and portfolio types
- AI applied across all use cases
Based on 32 analyzed social mentions, sentiment is 0% positive, 100% neutral, and 0% negative.