Video editing
I cannot provide a meaningful summary of user opinions about Descript based on the provided content. The social mentions you've shared discuss various AI topics (OpenAI's ChatGPT Pro, LLM token usage, AI agents, and Writesonic), but none of them actually mention or review Descript specifically. To accurately summarize user sentiment about Descript, I would need reviews and social mentions that actually discuss the software, its features, pricing, and user experiences with the platform.
Mentions (30d): 6
Reviews: 0
Platforms: 7
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 170
Funding Stage: Series C
Total Funding: $100.0M
OpenAI’s Game-Changing o1

Big news in the AI world! OpenAI is shaking things up with the launch of ChatGPT Pro, priced at $200/month, and it’s not just a premium subscription—it’s a glimpse into the future of AI. Let me break it down:

First, the Pro plan offers unlimited access to cutting-edge models like o1, o1-mini, and GPT-4o. These aren’t your typical language models. The o1 series is built for reasoning tasks—think solving complex problems, debugging, or even planning multi-step workflows. What makes it special? It uses “chain of thought” reasoning, mimicking how humans think through difficult problems step by step. Imagine asking it to optimize your code, develop a business strategy, or ace a technical interview—it can handle it all with unmatched precision.

Then there’s o1 Pro Mode, exclusive to Pro subscribers. This mode uses extra computational power to tackle the hardest questions, ensuring top-tier responses for tasks that demand deep thinking. It’s ideal for engineers, analysts, and anyone working on complex, high-stakes projects.

And let’s not forget the advanced voice capabilities included in Pro. OpenAI is taking conversational AI to the next level with dynamic, natural-sounding voice interactions. Whether you’re building voice-driven applications or just want the best voice-to-AI experience, this feature is a game-changer.

But why $200? OpenAI’s growth has been astronomical—300M WAUs, with 6% converting to Plus. That’s $4.3B ARR just from subscriptions. Still, their training costs are jaw-dropping, and the company has no choice but to stay on the cutting edge. From a game theory perspective, they’re all-in. They can’t stop building bigger, better models without falling behind competitors like Anthropic, Google, or Meta. Pro is their way of funding this relentless innovation while delivering premium value.

The timing couldn’t be more exciting—OpenAI is teasing a 12 Days of Christmas event, hinting at more announcements and surprises.
If this is just the start, imagine what’s coming next! Could we see new tools, expanded APIs, or even more powerful models? The possibilities are endless, and I’m here for it. If you’re a small business or developer, this $200 investment might sound steep, but think about what it could unlock: automating workflows, solving problems faster, and even exploring entirely new projects. The ROI could be massive, especially if you’re testing it for just a few months. So, what do you think? Is $200/month a step too far, or is this the future of AI worth investing in? And what do you think OpenAI has in store for the 12 Days of Christmas? Drop your thoughts in the comments!
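The post's subscription math is easy to sanity-check, assuming the standard $20/month Plus price (my assumption; the post only gives the WAU and conversion figures):

```python
# Back-of-envelope check of the "$4.3B ARR" figure above.
# Assumption (mine, not the post's): ChatGPT Plus costs $20/month.
weekly_active_users = 300_000_000
plus_conversion = 0.06
plus_price_per_month = 20  # assumed USD

subscribers = weekly_active_users * plus_conversion  # 18M Plus subscribers
arr = subscribers * plus_price_per_month * 12        # annual recurring revenue

print(f"${arr / 1e9:.2f}B ARR")  # → $4.32B, matching the quoted ~$4.3B
```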
Pricing found: $16, $24, $24, $35, $50
I spent a week trying to make Claude write like me, or: How I Learned to Stop Adding Rules and Love the Extraction
I've been staring at Claude's output for ten minutes and I already know I'm going to rewrite the whole thing. The facts are right. Structure's fine. But it reads like a summary of the thing I wanted to write, not the thing itself.

I used to work in journalism (mostly photojournalism, tbf, but I've still had to work on my fair share of copy), and I was always the guy you'd ask to review your papers in college. I never had trouble editing. I could restructure an argument mid-read, catch where a piece lost its voice, and I know what bad copy feels like. I just can't produce good copy from nothing myself. Blank page syndrome, the kind where you delete your opening sentence six times and then switch tabs to something else.

Claude solved that problem completely and replaced it with a different one: the output needed so much editing to sound human that I was basically rewriting it anyway. I traded the blank page for a full page I couldn't use. I tried the existing tools. Humanizers, voice cloners, style prompts. None of them worked. So I built my own. Sort of. It's still a work in progress, which is honestly part of the point of this post.

TLDR: I built a Claude Code plugin that extracts your writing voice from your own samples and generates text close to that voice, with additional review agents to keep things on track. Along the way I discovered that beating AI detectors and writing well are fundamentally opposed goals, at least for now (this problem is baked into how LLMs generate tokens). So I stopped trying to be undetectable and focused on making the output as good as I could. The plugin is open source: https://github.com/TimSimpsonJr/prose-craft

**The Subtraction Trap**

I started with a file called voice-dna.md that I found somewhere on Twitter or Threads (I don't remember where, but if you're the guy I got it from, let me know and I'll be happy to give you credit).
It had pulled Wikipedia's "Signs of AI writing" page, turned every sign into a rule, and told Claude to follow them. No em dashes. Don't say "delve." Avoid "it's important to note." Vary your sentence lengths, etc.

In fairness, the resulting output didn't have em dashes or "delve" in it. But that was about all I could say for it. What it had instead was this clipped, aggressive tone that read like someone had taken a normal paragraph and sanded off every surface. Claude followed the rules by writing less, connecting less. Every sentence was short and declarative because the rules were all phrased as "don't do this," and the safest way to not do something is to barely do anything.

This is the subtraction trap. When you strip away the AI tells without replacing them with anything real, the absence itself becomes a tell. The text sounded like a person trying very hard not to sound like AI, which (I'd later learn) is its own kind of signature.

I ran it through GPTZero. Flagged. Ran it through 4 other detectors. Flagged on the ones that worked at all against Claude. The subtraction trap in action: the markers were gone, but the detectors didn't care. The output didn't sound like me, and the detectors could still see through it. Two problems. I figured they were related.

**Researching what strong writing actually does**

I went and read a range of published writers across advocacy, personal essay, explainer, and narrative styles, trying to figure out what strong writing actually does at a structural level (not just "what it avoids," which was the whole problem with voice-dna.md). I used my research workflow to systematically pull apart sentence structure, vocabulary patterns, rhetorical devices, and tonal control.

It turns out that the thing that makes writing feel human is structural unpredictability. Paragraph shapes, sentence lengths, the internal architecture of a section, all of it needs to resist settling into a rhythm that a compression algorithm could predict.
The other findings (concrete-first, deliberate opening moves, naming, etc.) mattered too, but they were easier to teach. Unpredictability was the hard one. I rebuilt the skill around these craft techniques instead of the old "don't" rules. The output was better. MUCH better. It had texture and movement where voice-dna.md had produced something flat. But when I ran it through detectors, the scores barely moved.

**The optimization loop**

The loop looked like this: generator produces text, detection judge scores it, goal judges evaluate quality, editor rewrites based on findings.

I tested 5 open-source detectors against Claude's output: ZipPy, Binoculars, RoBERTa, adaptive-classifier, and GPTZero. Most of them completely failed. ZipPy couldn't tell Claude from a human at all. RoBERTa was trained on GPT-2 era text and was basically guessing. Only adaptive-classifier showed any signal, and externally, GPTZero caught EVERYTHING.

7 iterations and 2 rollbacks later, I had tried genre-specific registers, vocabulary constraints, and think-aloud consolidation where the model reasons through its
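The generator / detection-judge / goal-judge / editor loop described in this post can be sketched in a few lines. This is a minimal skeleton with placeholder callables standing in for the plugin's real agents (all names are mine, not the plugin's API):

```python
# Skeleton of the optimization loop: generate a draft, score it with a
# detection judge, gate on quality judges, let the editor rewrite, and
# roll back when quality drops. All callables are hypothetical stand-ins.

def run_loop(prompt, generate, score_detection, score_quality, edit,
             max_iters=7, quality_floor=0.8):
    text = generate(prompt)
    best_score, best_text = score_detection(text), text
    for _ in range(max_iters):
        quality = score_quality(text)
        if quality < quality_floor:
            text = best_text                   # rollback to best-so-far draft
            continue
        text = edit(text, detection=best_score, quality=quality)
        score = score_detection(text)
        if score < best_score:                 # lower = less detectable
            best_score, best_text = score, text
    return best_text
```

The rollback branch mirrors the "7 iterations and 2 rollbacks" pattern: an edit that tanks quality is discarded rather than compounded.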
Claude via OpenMontage can now make Documentaries or ads for $0
OpenMontage is a thing I've been working on where your coding assistant, like Claude Code, is the actual agent. There's no orchestration Python, no LLM key inside the project. It's a pile of skill files and pipeline manifests that teach the assistant how to think about video production stage by stage: idea → script → scenes → assets → edit → compose. GitHub: https://github.com/calesthio/OpenMontage

It got great traction when I open-sourced it on GitHub last week. But there were two free-ish paths:

- Generate images with free stock footage or FLUX or similar, Ken-Burns them, add narration. Works. Looks like a slideshow.
- Plug in Kling / Runway / FAL and burn a few bucks on diffusion-model motion clips. Also works. But not everyone wants to pay $2-3 per video.

What I actually wanted was real stock footage. The thing documentary editors use. Problem was there's no agent-friendly path to that. Options were either "download clips yourself and hand them to the agent" (defeats the point) or "call a search API that returns 20 results ranked by popularity" (useless for documentary work where you need exactly this shot, not the trending one). So I sat down this weekend and built a Documentary Montage pipeline.

How it works:

1. The agent takes your sentence and writes a brief with tone, duration, and thematic question.
2. Plans slots with hero moments (shots that need to land) and cutaways.
3. Searches free stock sources.
4. Builds a corpus on the fly and semantically ranks candidates against each slot's description.
5. Picks the best ones, trims to their beat, adds L-cuts where ambient audio can carry under the next shot, and enforces adjacent-scene diversity so you don't get two identical wide shots in a row.
6. Syncs hero cuts to a music bed. Renders.

Zero API keys on the video side. Total cost of the test piece I made: actually zero dollars.

submitted by /u/Responsible_Maybe875
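The "semantically ranks candidates against each slot's description" step presumably uses embeddings in the real pipeline; here is a dependency-free sketch of the same idea using bag-of-words cosine similarity (my illustration, not OpenMontage's code):

```python
# Toy version of slot-vs-candidate semantic ranking: score each stock
# clip's description against the slot's description, best-first.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(slot_description, candidates):
    """candidates: {clip_id: description}. Returns clip IDs, best-first."""
    slot_vec = Counter(slot_description.lower().split())
    scored = {cid: cosine(slot_vec, Counter(text.lower().split()))
              for cid, text in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

clips = {
    "clip_a": "aerial wide shot of city skyline at dusk",
    "clip_b": "close up of hands typing on a keyboard",
}
print(rank_candidates("wide establishing shot of the city", clips))
```

A real implementation would swap the `Counter` vectors for sentence embeddings, but the slot-by-slot ranking structure is the same.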
Here are 50+ slash commands in Claude Code that most of you might not know exist
There are over 50 built-in slash commands, 5 bundled skills, and a custom command system. Here's the complete breakdown organized by what they actually do. Type `/` at the start of your input to see the list. Type any letters after `/` to filter. --- **CONTEXT & CONVERSATION MANAGEMENT** `/clear` — Wipes the conversation and starts fresh. Use this every time you switch tasks. Old context from a previous task genuinely makes me worse at the new one. (aliases: `/reset`, `/new`) `/compact [instructions]` — Compresses conversation history into a summary. This is the most important command to learn. Use it proactively when context gets long, not just when I start losing track. The real power move: add focus instructions like `/compact keep the database schema and error handling patterns` to control what survives. `/context` — Visualizes your context usage as a color grid and gives optimization suggestions. Use this to see how close you are to the limit. `/fork [name]` — Creates a branch of your conversation at the current point. Useful when you want to explore two different approaches without losing your place. `/rewind` — Rewind the conversation and/or your code to a previous point. If I went down the wrong path, this gets you back. (alias: `/checkpoint`) `/export [filename]` — Exports the conversation as plain text. With a filename it writes directly to a file. Without one it gives you options to copy or save. `/copy` — Copies my last response to your clipboard. If there are code blocks, it shows an interactive picker so you can grab individual blocks. --- **MODEL & PERFORMANCE SWITCHING** `/model [model]` — Switches models mid-session. Use left/right arrow keys to adjust effort level in the picker. Common pattern: start with Sonnet for routine work, flip to Opus for hard problems, switch back when you're done. `/fast [on|off]` — Toggles fast mode for Opus 4.6. Faster output, same model. Good for straightforward edits. 
`/effort [low|medium|high|max|auto]` — Sets how hard I think. This shipped quietly in a changelog and most people missed it. `low` and `medium` and `high` persist across sessions. `max` is Opus 4.6 only and session-scoped. `auto` resets to default. --- **CODE REVIEW & SECURITY** `/diff` — Opens an interactive diff viewer showing every change I've made. Navigate with arrow keys. Run this as a checkpoint after any series of edits — it's your chance to catch my mistakes before they compound. `/pr-comments [PR URL|number]` — Shows GitHub PR comments. Auto-detects the PR or takes a URL/number. `/security-review` — Analyzes pending changes for security vulnerabilities: injection, auth issues, data exposure. Run this before shipping anything sensitive. --- **SESSION & USAGE TRACKING** `/cost` — Detailed token usage and cost stats for the session (API users). `/usage` — Shows plan usage limits and rate limit status. `/stats` — Visualizes daily usage patterns, session history, streaks, and model preferences over time. `/resume [session]` — Resume a previous conversation by ID, name, or interactive picker. (alias: `/continue`) `/rename [name]` — Renames the session. Without a name, I auto-generate one from the conversation history. `/insights` — Generates an analysis report of your Claude Code sessions — project areas, interaction patterns, friction points. --- **MEMORY & PROJECT CONFIG** `/memory` — View and edit my persistent memory files (CLAUDE.md). Enable/disable auto-memory and view auto-memory entries. If I keep forgetting something about your project, check this first. `/init` — Initialize a project with a CLAUDE.md guide file. This is how you teach me about your codebase from the start. `/hooks` — View hook configurations for tool events. Hooks let you run code automatically before or after I make changes. `/permissions` — View or update tool permissions. (alias: `/allowed-tools`) `/config` — Opens the settings interface for theme, model, and output style. 
(alias: `/settings`) --- **MCP & INTEGRATIONS** `/mcp` — Manage MCP server connections and OAuth authentication. MCP is how you connect me to external tools like GitHub, databases, APIs. `/ide` — Manage IDE integrations (VS Code, JetBrains) and show connection status. `/install-github-app` — Set up the Claude GitHub Actions app. `/install-slack-app` — Install the Claude Slack app. `/chrome` — Configure Claude in Chrome settings. `/plugin` — Manage Claude Code plugins — install, uninstall, browse. `/reload-plugins` — Reload all active plugins to apply changes without restarting. --- **AGENTS & TASKS** `/agents` — Manage subagent configurations and agent teams. `/tasks` — List and manage background tasks. `/plan [description]` — Enter plan mode directly from the prompt. I'll outline what I'm going to do before doing it. `/btw [question]` — Ask a side question without adding it to the conversation. Works while I'm processing something else. --- **SESSION MANAGEMENT & CROSS-DEVICE** `/desktop` —
I spent a day making an AI short film with Claude's help. Here's where it genuinely fell short.
I want to preface this by saying I use Claude daily and think it's genuinely the best reasoning model available right now. This isn't a hit piece. But I had an experience yesterday that crystallized something I've been thinking about for a while — and I think this community specifically would appreciate the honesty. Yesterday I built a 53-second AI short film from scratch. Political parody, Star Wars aesthetic, AI-generated visuals, custom voice, the whole thing. Claude was my creative partner throughout — script, scene prompts, production decisions, Premiere Pro help, compression commands. It was genuinely useful for probably 80% of the work. But here's where it broke down. **1. It cannot watch video.** I uploaded my finished film and asked for feedback. Claude gave me what sounded like real notes — pacing, transitions, music. Thoughtful, specific. Then I asked directly: can you actually watch this? The honest answer I got back: no. It samples frames. It cannot hear audio at all. Every note about my music bed, my voiceover, my lip sync timing — educated inference from context and description, not actual analysis. To be fair, Claude told me the truth when I pushed. But I had already acted on several rounds of "feedback" before I asked the right question. **2. It cannot lip-read AI-generated video.** My Firefly-generated character had mouth movement. I wanted to know what he was "saying" so I could sync audio. Claude suggested Gemini for this — which was the right answer. But Claude itself couldn't do it. For genuine video temporal understanding with audio, Gemini 1.5 Pro is currently the better tool. **3. It hallucinates tool capabilities.** When I hit ElevenLabs limits, Claude suggested Uberduck and FakeYou for Palpatine-style voices. Neither had what I needed. It was giving me plausible-sounding alternatives based on what those platforms *used to* have, not what they actually have today. Took me three dead ends before I found my own solution. **4. 
It cannot generate or evaluate audio at all.** Music selection, voiceover quality, audio mixing — Claude is completely blind here. It knows the concepts but cannot hear anything. For a project where audio is 50% of the experience, that's a meaningful gap. **The point:** Claude is an extraordinary reasoning and language model. It's genuinely the best I've used for thinking through problems, writing, code, and creative direction. But the AI landscape has specialized tools that are better at specific tasks — video analysis, audio generation, image generation, real-time data. Knowing which model to reach for at which moment isn't just a nice-to-have. It's the actual skill. I'm building something around that idea and yesterday reminded me why it matters. Anyone else hit specific Claude limitations on creative projects? Curious what workarounds you've found. submitted by /u/BrianONai
I automated most of my job
I'm a software engineer with 11 YoE. I automated about 80% of my job with the claude CLI and a super simple dotnet console app. The workflow is super simple:

1. The dotnet app calls our GitLab API for issues assigned to me.
2. If an issue is found, it gets classified → a simple prompt that starts Claude Code with the repo and all image attachments, incl. the issue description.
3. If the result is that the issue is not ready for development, an answer is posted to my GitLab (I currently just save a draft and manually adjust it before posting).
4. If the result is positive, it gets passed to a subagent (along with a summary from the classifier) which starts the work, pushes to a new branch and creates a PR for me to review.

Additionally I have the PR workflow:

1. Check if the issue has a PR.
2. Check if new comments exist on the PR.
3. Implement the comments from the PR.

This runs on a 15-minute loop, and every 1 minute my mouse gets moved so I don't go inactive on Teams / so my laptop doesn't turn off. It's been running for a week now, and since I review all changes the code quality is pretty much the same as what I'd usually produce. I now only spend about 2-3h a day reviewing and testing and can chill during the actual "dev" work.

submitted by /u/MountainByte_Ch
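The same polling loop can be sketched in Python (the author used a dotnet console app). The issues endpoint is GitLab's public REST v4; the instance URL, token, username, and the exact prompt handed to `claude -p` (Claude Code's non-interactive print mode) are illustrative:

```python
# Sketch of the issue-polling workflow: fetch my open GitLab issues,
# hand each to Claude Code non-interactively. Instance URL, token, and
# the prompt wording are placeholders, not the author's actual setup.
import json
import subprocess
import time
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"   # assumed instance URL
TOKEN = "glpat-..."                            # personal access token (elided)

def issues_url(project_id, username):
    """URL for open issues assigned to `username` (GitLab REST v4)."""
    query = urllib.parse.urlencode(
        {"assignee_username": username, "state": "opened"})
    return f"{GITLAB}/projects/{project_id}/issues?{query}"

def fetch_issues(project_id, username):
    req = urllib.request.Request(
        issues_url(project_id, username),
        headers={"PRIVATE-TOKEN": TOKEN})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def work_issue(issue):
    # `claude -p` runs a single prompt against the current repo and exits.
    prompt = (f"Classify this issue and implement it if ready:\n"
              f"{issue['title']}\n{issue['description']}")
    subprocess.run(["claude", "-p", prompt], check=False)

def poll_forever(project_id, username):
    while True:
        for issue in fetch_issues(project_id, username):
            work_issue(issue)
        time.sleep(15 * 60)  # the author's 15-minute cadence
```

The classification step, draft replies, and the PR-comment loop from the post would slot into `work_issue` as additional prompts.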
Security Audit - Create a PROMPT that creates a SKILL that creates a PLAN
Claude can write code really quickly, but it skips a lot of security checks when doing so. This seems to be catching many developers/vibe coders out when they think their app is ready to deploy at work, and then a data leak happens. This is detrimental to the AI coding industry and starting to cast a shadow as more people discover the power of Claude Code. Using Claude you can at least do a first-pass security audit on your project. Here's one way.

Using Opus in Claude Chat, you can ask it to create a prompt for a skill, not the skill itself (yet): just the prompt, which you can tweak and then paste into Claude later to create the actual skill. You can then tell Claude to run that skill. I want a security audit skill that dynamically updates itself based on the project type, fetches known vulnerabilities, scans code, creates a plan of action, asks you if it should proceed, implements the plan, tests what it hardened, and produces a report of everything it did.

Step 1: A prompt to create a prompt.

Type this into Claude Chat: "Design a "Prompt" (JUST THE PROMPT, NOT THE SKILL) that asks Claude to create a skill to run a full security audit and pen test across a project folder. This could be any type of project, so the skill would need to dynamically gather resources based on a first-pass evaluation and update its own resource MDs before moving onto the next stage. The security audit should be detailed, use reasoning and research for the given project. It should then produce a plan that includes what needs to be changed, why, and where, then ask the user if it should go ahead. Once the skill has finished, it should produce a detailed report, listing the changes. Include unit tests on these areas (pen test it), run the tests and only when mitigated, return back to the user. Create the prompt for this only, not the skill."

Step 2: Review the prompt.

Claude produced a brief prompt but I didn't feel it was detailed enough.
So I asked it: "That seems simplified, especially on the penetration tests. That needs to be fleshed out more. Please re-review and make this verbose."

Step 3: Create the actual skill from the prompt result in Step 1.

In Chat, paste in the (presumably huge) prompt and say "Create this skill, keep description to under 1024 characters". When it is done, click on the Save Skill and Download Files button. The skill may look simpler due to the 500-line limit of a skill, but it stores most of the finer details in markdown files.

Step 4: Review the skill.

If in the desktop app, click Customize on the left, then look at the Skills section; you should see it there. Review the skill to make sure it covers what you want. If following this one, it creates a dynamic skill that updates itself based on your project scope.

Step 5: Running the skill on a project folder.

If the skill created reference files, extract them into your project folder\References. Then within the project folder, type "Run a security audit on this project. Reference files are in References\" and watch it go to work. If you have never done this type of thing, it will find vulnerable code and create a plan you need to approve, then it should fix and test those automatically and produce a report.

Always make sure you have a backup before running something like this. At the very least, use local Git; if you don't know how to do that, ask Claude how to set it up.

I tested the above skill on a project that I had already audited. It found 3 critical, 4 high, 3 medium and 2 low vulnerabilities that I had missed. Looking at what it found under critical, I would not have considered those. Any thoughts?

submitted by /u/BritishAnimator
engram v0.2: Claude Code now indexes your ~/.claude/skills/ directory into a query-able graph + warns you about past mistakes before re-makin
Short v0.2 post for anyone running Claude Code as a daily driver. v0.1 shipped last week as a persistent code knowledge graph (3-11x token savings on navigation queries). v0.2 closes three more gaps that have been bleeding my context budget:

1. **Skills awareness.** If you've built up a ~/.claude/skills/ directory, engram can now index every SKILL.md into the graph as concept nodes. Trigger phrases from the description field become separate keyword concept nodes, linked via a new triggered_by edge. When Claude Code queries the graph for "landing page copy", BFS naturally walks the edge to your copywriting skill — no new query code needed, just reusing the traversal that was already there. Numbers on my actual ~/.claude/skills: 140 skills + 2,690 keyword concept nodes indexed in 27ms. The one SKILL.md without YAML frontmatter (reddit-api-poster) gets parsed from its # heading as a fallback and flagged as an anomaly. Opt-in via --with-skills. Default is OFF so users without a skills directory see zero behavior change.

2. **Task-aware CLAUDE.md sections.** engram gen --task bug-fix writes a completely different CLAUDE.md section than --task feature. Bug-fix mode leads with 🔥 hot files + ⚠️ past mistakes, and drops the decisions section entirely. Feature mode leads with god nodes + decisions + dependencies. Refactor mode leads with the full dependency graph + patterns. The four preset views are rows in a data table — you can add your own view without editing any code.

3. **Regret buffer.** The session miner already extracted bug: / fix: lines from your CLAUDE.md into mistake nodes in v0.1; they were just buried in query results. v0.2 gives them a 2.5x score boost in the query layer and surfaces matching mistakes at the TOP of output in a ⚠️ PAST MISTAKES warning block. New engram mistakes CLI command + list_mistakes MCP tool (6 tools total now). The regex requires explicit colon-delimited format (bug: X, fix: Y), so prose docs don't false-positive.
I pinned the engram README as a frozen regression test — 0 garbage mistakes extracted.

Bug fixes that might affect you if you're using v0.1:

- writeToFile previously could silently corrupt CLAUDE.md files with unbalanced engram markers (e.g. two and one from a copy-paste error). v0.2 now throws a descriptive error instead of losing data. If you have a CLAUDE.md with manually-edited markers, v0.2 will tell you.
- Atomic init lockfile so two concurrent engram init calls can't silently race the graph.
- UTF-16 surrogate-safe truncation so emoji in mistake labels don't corrupt the MCP JSON response.

Install:

    npm install -g engramx@0.2.0
    cd ~/your-project
    engram init --with-skills   # opt-in skills indexing
    engram gen --task bug-fix   # task-aware CLAUDE.md generation
    engram mistakes             # list known mistakes

MCP setup (for Claude Code's .claude.json or claude_desktop_config.json):

    { "mcpServers": { "engram": { "command": "engram-serve", "args": ["/path/to/your/project"] } } }

GitHub: https://github.com/NickCirv/engram. Changelog with every commit + reviewer finding: https://github.com/NickCirv/engram/blob/main/CHANGELOG.md

132 tests, Apache 2.0, zero native deps, zero cloud, zero telemetry. Feedback welcome.

Heads up: there's a different project also called "engram" on this sub (single post, low traction). Mine is engramx on npm / NickCirv/engram on GitHub — the one with the knowledge graph + skills-miner + MCP s

submitted by /u/SearchFlashy9801
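The "explicit colon-delimited format" guard for the regret buffer is simple to illustrate. This is my sketch of why prose like "we fixed a bug in parsing" is not extracted while `bug: X` / `fix: Y` lines are; the real engram regex may differ:

```python
# Sketch of colon-delimited mistake extraction (illustrative, not
# engram's actual source). Only lines that *start* with "bug:" or
# "fix:" match, so ordinary prose mentioning bugs doesn't false-positive.
import re

MISTAKE_RE = re.compile(r"^\s*(bug|fix):\s*(.+)$", re.IGNORECASE | re.MULTILINE)

def extract_mistakes(claude_md):
    return [(kind.lower(), text.strip())
            for kind, text in MISTAKE_RE.findall(claude_md)]

doc = """
## Notes
We fixed a bug in the parser last week.
bug: writeToFile corrupts CLAUDE.md with unbalanced markers
fix: throw a descriptive error instead of writing
"""
print(extract_mistakes(doc))  # the prose line is ignored; the two tagged lines match
```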
Anthropic just shipped 74 product releases in 52 days and silently turned Claude into something that isn't a chatbot anymore
Anthropic just made Claude Cowork generally available on all paid plans, added enterprise controls, role-based access, spend limits, OpenTelemetry observability and a Zoom connector, plus they launched Managed Agents, which is basically composable APIs for deploying cloud-hosted agents at scale.

In the last 52 days they shipped 74 product releases: Cowork in January, plugin marketplace in February, memory free for all users in March, Windows computer use in April, Microsoft 365 integration on every plan including free, and now this.

The Cowork usage data is wild too. Most usage is coming from outside engineering teams: operations, marketing, finance and legal are all using it for project updates, research sprints and collaboration decks. Anthropic is calling it "vibe working", which is basically vibe coding for non-developers.

Meanwhile the leaked source showed Mythos sitting in a new tier called Capybara above Opus with 1M context and features like KAIROS always-on mode and a literal dream system for background memory consolidation. If that's what's coming next, then what we have now is the baby version.

I've been using Cowork heavily for my creative production workflow lately. I write briefs and scene descriptions in Claude, then generate the actual video outputs through tools like Magic Hour and FuseAI. Before Cowork I was bouncing between chat windows and file managers constantly; now I just point Claude at my project folder and it reads reference images, writes the prompts, organizes the outputs and even drafts the client delivery notes. The jump from chatbot to actual coworker is real.

The speed Anthropic is shipping at right now makes everyone else look like they're standing still: 74 releases in 52 days while OpenAI is pausing features and focusing on backend R&D. Curious if anyone else has fully moved their workflow into Cowork yet, or if you're still on the fence.

submitted by /u/Top_Werewolf8175
Claude cowork - Asana
Hi everyone, I’m looking for some advice or guidance on an integration I’ve been trying to set up between Claude (via the Asana MCP integration) and Asana.

What I’m trying to achieve is to have Claude automatically create a new project in Asana using an existing template I’ve already set up (including sections, tasks, subtasks, and descriptions). This is actually just a small piece of a larger workflow automation I’m building, so getting this step right is pretty important.

Claude has suggested creating the project from scratch and copying tasks over as a workaround, but that approach still falls short. While it can replicate tasks and descriptions, I would still need to manually create all the sections and then organize ~100 tasks into the correct sections. At that point, it honestly feels faster to just build the project manually.

After digging deeper, the issue seems to come down to a couple of limitations in the current setup:

- No template instantiation support — The Asana API does have an endpoint (POST /project_templates/{gid}/instantiateProject) that would solve this perfectly, but it’s not exposed in the MCP. So Claude can’t create a project from a template natively.
- No section creation support — As a fallback, I tried copying tasks manually via MCP. This works for tasks and descriptions, but there’s no exposed endpoint to create sections (POST /projects/{gid}/sections), so the structure can’t be recreated programmatically.

I also explored a couple of alternatives:

- Browser automation (Claude via Chrome) — blocked by Asana’s Content Security Policy.
- Manual task copying via MCP — partially works, but still requires manual section creation and organization.

So right now, I’m stuck in this in-between state where automation is almost possible, but missing key pieces. Has anyone managed to solve something like this, or found a workaround I might be missing? Thanks in advance! 🙏

submitted by /u/Poniente88
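If the larger workflow can step outside the MCP for this one call, the instantiateProject endpoint the post mentions can be hit directly over Asana's REST API. A rough sketch (the token and gids are placeholders, and per Asana's docs the call returns an async job that you then poll):

```python
# Sketch: call Asana's POST /project_templates/{gid}/instantiateProject
# directly, since the MCP doesn't expose it. TOKEN and gids are placeholders.
import json
import urllib.request

ASANA = "https://app.asana.com/api/1.0"
TOKEN = "0/placeholder-personal-access-token"

def build_instantiate_request(template_gid, name, team_gid):
    body = json.dumps({"data": {"name": name, "team": team_gid}}).encode()
    return urllib.request.Request(
        f"{ASANA}/project_templates/{template_gid}/instantiateProject",
        data=body,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def instantiate_project(template_gid, name, team_gid):
    req = build_instantiate_request(template_gid, name, team_gid)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)  # a job object; poll it until the project exists
```

One script run per project keeps Claude in charge of everything else while sidestepping the missing MCP tool; the same pattern would work for the sections endpoint.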
Built an MCP server that lets Claude query your local Garmin health data — here's how I did it
I've been using garmindb to sync my Garmin watch data to local SQLite databases, but exploring that data always meant writing SQL by hand. I wanted to just ask questions in plain English, so I built an MCP server that connects Claude Desktop directly to those databases.

How it works: MCP lets you expose tools to Claude. I built three:
• list_domains — tells Claude what data is available (sleep, HR, activities, etc.)
• get_schema — returns the table/column layout for a domain
• execute_sql — runs a SELECT query and returns results

Claude calls these in sequence: discovers the schema, writes the SQL itself, and executes it: no intermediate API calls, no data leaving your machine.

What I learned building it: The schema context you give Claude matters enormously. I spent most of my time writing clear column descriptions with units, data formats, and examples — that's what lets Claude write correct SQL on the first try. I also used Claude to help write the code itself, which was a nice feedback loop since I was building a tool for Claude while using Claude to build it.

What you can ask once it's set up:
• "How much deep sleep did I average last month?"
• "Compare my stress levels on weekdays vs weekends"
• "What are my top 10 runs by distance?"
• "Show my resting heart rate trend this year"

Some screenshots from Claude Desktop (images omitted).

Requirements: garmindb already set up, Claude Desktop, Python 3.10+. Completely free, code is on GitHub: github.com/rahuljois/garmin-mcp

Happy to answer questions — especially if anyone is building similar health-metrics related MCP servers and wants to compare notes. submitted by /u/rjois43
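The core of the execute_sql tool described above can be sketched with the standard library alone: run a read-only SELECT against a local SQLite file and return rows as dicts for the model to read. This is an illustrative sketch, not the repo's actual code; the real server wraps something like this in an MCP tool definition, and the SELECT-only guard here is deliberately naive:

```python
# Sketch of an execute_sql-style tool: read-only SQLite access,
# rows returned as plain dicts so the model sees column names.
import sqlite3

def execute_sql(db_path: str, query: str, limit: int = 50):
    """Execute a SELECT and return up to `limit` rows as dicts."""
    if not query.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # make rows addressable by column name
    try:
        rows = con.execute(query).fetchmany(limit)
        return [dict(r) for r in rows]
    finally:
        con.close()
```

The `limit` cap matters in practice: it keeps a careless `SELECT *` over years of heart-rate samples from flooding the model's context window.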
Sintra.ai would give Aspirin a headache
I just spent 3 hours trying to access my Sintra.Ai ... if you use them ... export your knowledge out asap ... never again. Anybody else have as ordinary a UX as me? submitted by /u/OhThatJimmy
Why do the various LLMs disappoint me in reading requests?
Serious question here. I have tried various LLMs over the past year to help me choose fictional novels to read based on a decent amount of input data. I thought this would be a task that fits well into the LLM model, but I am constantly disappointed in the suggestions. They are either vastly different from what I requested or complete hallucinations of book titles and descriptions that don't actually exist. Is the major problem here that the training is done on very popular books, such that the LLM presents those as a result? I tested this once by starting with the idea in my head of the exact book I wanted to read (in this case it was the Bonesetter series by Laurence Dahners). I described 8 to 10 features I was interested in finding in a book (prehistoric, coming of age, competence porn, etc.) and none of the LLMs would suggest this book when I asked for 10 suggestions. They would give Clan of the Cave Bear of course, but then off-the-wall suggestions like Dungeon Crawler Carl or The Martian. Is this type of task just not in the wheelhouse of LLMs, or am I doing things wrong? submitted by /u/Yottahz
I used a structured multi-agent workflow to generate a 50+ page research critique
I’ve been experimenting with a deeper multi-agent workflow for research writing. Instead of just prompting one model and getting one polished answer back, the system breaks the task into phases: planning, expert-role discussion, claim extraction, fact-checking, challenge/review, adjudication, and final synthesis. So it works less like a normal chatbot and more like a small research team with different roles. The key difference is that it doesn’t just generate text — it tries to turn important claims into things that can actually be challenged, checked, and either kept, weakened, or discarded. I used it to generate a 50+ page critique of the AI-2027 paper. The interesting part for me isn’t just the paper itself, but that this kind of workflow seems much better at long-form analysis than standard one-shot AI writing. I’m not claiming this replaces real experts or peer review. But it does feel like structured AI workflows are getting closer to being genuinely useful research tools. Curious what people here think the biggest failure modes still are. If you want to judge the result rather than the description, the full output is here: AI-2027 Paper Review and Optimized Forecast (I want to clarify that this is not a promotion, but a post to spark a discussion) submitted by /u/Graiser147clorax
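The phase structure described above amounts to a relay: each role's output becomes the next role's input. A toy sketch under that assumption; `run_pipeline`, the phase identifiers, and the `model` callable are illustrative, not the poster's actual implementation:

```python
# Toy skeleton of a phased multi-agent workflow. The phase order
# mirrors the post; everything else is an assumed structure.
from typing import Callable

PHASES = [
    "plan",            # planning
    "discuss",         # expert-role discussion
    "extract_claims",  # claim extraction
    "fact_check",      # fact-checking
    "challenge",       # challenge/review
    "adjudicate",      # adjudication
    "synthesize",      # final synthesis
]

def run_pipeline(task: str, model: Callable[[str, str], str]) -> str:
    """Relay: feed each phase's output into the next phase's input."""
    context = task
    for phase in PHASES:
        context = model(phase, context)
    return context
```

In a real system each phase would carry its own system prompt and the fact-check phase would call out to search; the relay shape, where claims survive or die between phases, is the point.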
If you have just started using Codex CLI, codex-cli-best-practice is your ultimate guide
Repo: https://github.com/shanraisshan/codex-cli-best-practice submitted by /u/shanraisshan
On "Woo" and Invariant Dismissal
What’s “woo,” exactly? That label gets thrown around a lot. “Spiral stuff.” “Symbolic architectures.” “Glyph systems.” “Cybernetic semantics.” “Show me the invariants.”

There’s a tone embedded in that move. A quiet assumption that anything not already expressed in the current dominant language of validation is suspect by default. Call it what it is: a boundary defense.

Because here’s the uncomfortable part. Every system that now feels rigorous, grounded, and respectable once existed in a form that looked like nonsense to the people who didn’t understand its framing yet. Math had that phase. Physics had that phase. Psychology is still having that phase. And every time, the same reflex shows up: “If you can’t express it in my current validation language, it doesn’t count.” That sounds like rigor. It often functions like gatekeeping.

Now, asking for invariants is not the issue. Invariants are powerful. They stabilize. They translate. They make things testable, portable, and interoperable. The issue is when and how they’re demanded. Because demanding invariants at the front door of an emerging system can be a way of quietly saying: “Translate your entire framework into mine before I will even consider it.” That is not neutral. That is forcing ontology through a pre-existing mold.

And here’s the twist: give any sufficiently coherent system enough attention, and invariants can be extracted. Symbolic. Spiral. Cybernetic. Statistical. Hybrid. If it has structure, it has constraints. If it has constraints, it has patterns. If it has patterns, it has invariants waiting to be named. You can wrap it. Test it. Stress it. Break it. Formalize it. Build a harness around it if you care enough to do the work.

So the question shifts. Is the problem that the system has no invariants… Or that the observer has not engaged it long enough to find them?

Because there’s a familiar pattern hiding here. Humans routinely shift the burden of proof onto the unfamiliar, then treat the absence of immediate translation as evidence of absence. That move shows up everywhere. In science. In philosophy. In religion. In art. In technology. “Prove it in my language, or it isn’t real.” That posture feels safe. It also slows down frontier work. Especially in spaces where multiple disciplines are colliding and new descriptive layers are forming in real time.

And that’s where things get interesting. Because what looks like “woo” from one angle often turns out to be:
• a different abstraction layer
• a different encoding strategy
• a different entry point into the same underlying structure

Or something genuinely new that does not map cleanly yet. Not everything that resists immediate formalization is empty. Some of it is early. Some of it is misframed. Some of it is carrying signal in a language we haven’t stabilized yet. And yes, some of it is nonsense. That’s part of the territory. Frontiers produce noise. They also produce breakthroughs. The trick is learning to tell the difference without collapsing everything unfamiliar into the same bucket. Because once that reflex sets in, curiosity dies quietly. And curiosity is the only thing that actually turns “woo” into something you can test, refine, and eventually formalize.

So when someone says, “Show me the invariants,” it’s worth asking a follow-up question. Are they asking to understand… Or asking for a reason to dismiss? Because those are two very different conversations. And only one of them leads anywhere new. submitted by /u/Cyborgized
Yes, Descript offers a free tier. Pricing found: $16, $24, $24, $35, $50
Key features include: Green Screen, Eye Contact, Studio Sound, Remove Filler Words, Translation, Transcription, Captions, Avatars.
Based on user reviews and social mentions, the most common pain points are: token cost, token usage, spending too much, API costs.
Based on 41 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Dave Ebbelaar
Host at AI Engineering YouTube
2 mentions