[Hacker News Repost] Show HN: Mastra – Open-source JS agent framework, by the developers of Gatsby
-
Title: Show HN: Mastra – Open-source JS agent framework, by the developers of Gatsby
Text: Hi HN, we're Sam, Shane, and Abhi, and we're building Mastra (https://mastra.ai), an open-source JavaScript SDK for building agents on top of Vercel's AI SDK.

You can start a Mastra project with `npm create mastra` and create workflow graphs that can suspend/resume, build a RAG pipeline and write evals, give agents memory, create multi-agent workflows, and view it all in a local playground.

Previously, we built Gatsby, the open-source React web framework. Later, we worked on an AI-powered CRM, but it felt like we were having to roll all the AI bits (agentic workflows, evals, RAG) ourselves. We also noticed our friends building AI applications suffering from long iteration cycles: they were getting stuck debugging prompts, figuring out why their agents called (or didn't call) tools, and writing lots of custom memory-retrieval logic.

At some point we just looked at each other and were like, why aren't we trying to make this part easier? So we decided to work on Mastra.

Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8

One thing we heard from folks is that seeing the input/output of every step, of every run of every workflow, is very useful. So we took XState and built a workflow-graph primitive on top of it with OTel tracing. We wrote the APIs to make control flow explicit: `.step()` for branching, `.then()` for chaining, and `.after()` for merging. We also added `.suspend()`/`.resume()` for human-in-the-loop.

We abstracted the main RAG verbs like `.chunk()`, `.embed()`, `.upsert()`, `.query()`, and `.rerank()` across document types and vector DBs. We shipped an eval runner with evals like completeness and relevance, plus the ability to write your own.

Then we read the MemGPT paper and implemented agent memory on top of the AI SDK, with a `lastMessages` key, `topK` retrieval, and a `messageRange` for surrounding context (think `grep -C`).

But we still weren't sure whether our agents were behaving as expected, so we built a local dev playground that lets you curl agents/workflows, chat with agents, view evals and traces across runs, and iterate on prompts with an assistant. The playground uses a local storage layer powered by libsql (thanks, Turso team!) and runs on localhost with `npm run dev` (no Docker).

Mastra agents originally ran inside a Next.js app. But we noticed that AI teams' development was increasingly decoupled from the rest of their organization, so we built Mastra so that you can also run it as a standalone endpoint or service.

Some things people have been building so far: one user automates support for an iOS app he owns with tens of thousands of paying users. Another bundled Mastra inside an Electron app that ingests aerospace PDFs and outputs CAD diagrams. Another is building WhatsApp bots that let you chat with objects like your house.

We did (for now) adopt an Elastic v2 license. The agent space is pretty new, and we wanted to let users do whatever they want with Mastra but prevent, e.g., AWS from grabbing it.

If you want to get started:

- On npm: `npm create mastra@latest`
- GitHub repo: https://github.com/mastra-ai/mastra
- Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8
- Our website homepage: https://mastra.ai (includes some nice diagrams and code samples on agents and RAG, and links to examples)
- And our docs: https://mastra.ai/docs

Excited to share Mastra with everyone here – let us know what you think!
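The explicit control-flow API the post describes (`.step()` for branching, `.then()` for chaining, `.after()` for merging) can be sketched as a tiny standalone class. This is an illustrative stand-in under assumed semantics, not Mastra's actual implementation; the `Workflow` class, step signatures, and shared-context threading here are hypothetical.

```typescript
// Minimal sketch of explicit workflow control flow: .step() starts a branch,
// .then() chains after the previous step, .after() merges named branches.
// NOT Mastra's implementation; names and shapes are hypothetical.

type StepFn = (ctx: Record<string, unknown>) => Record<string, unknown>;

class Workflow {
  private steps = new Map<string, { fn: StepFn; deps: string[] }>();
  private last: string | null = null;

  // .step(): start a new branch from the workflow root
  step(name: string, fn: StepFn): this {
    this.steps.set(name, { fn, deps: [] });
    this.last = name;
    return this;
  }

  // .then(): chain a step after the most recently added one
  then(name: string, fn: StepFn): this {
    this.steps.set(name, { fn, deps: this.last ? [this.last] : [] });
    this.last = name;
    return this;
  }

  // .after(): merge — the new step waits on all named predecessors
  after(deps: string[], name: string, fn: StepFn): this {
    this.steps.set(name, { fn, deps });
    this.last = name;
    return this;
  }

  // Run steps in dependency order, merging each step's output into a shared context
  run(initial: Record<string, unknown>): Record<string, unknown> {
    const done = new Set<string>();
    let ctx = { ...initial };
    while (done.size < this.steps.size) {
      const before = done.size;
      for (const [name, { fn, deps }] of this.steps) {
        if (!done.has(name) && deps.every((d) => done.has(d))) {
          ctx = { ...ctx, ...fn(ctx) };
          done.add(name);
        }
      }
      if (done.size === before) throw new Error("cycle or missing dependency");
    }
    return ctx;
  }
}

// Two parallel branches that merge into a final step
const wf = new Workflow()
  .step("fetchA", () => ({ a: 2 }))
  .step("fetchB", () => ({ b: 3 }))
  .after(["fetchA", "fetchB"], "combine", (ctx) => ({
    sum: (ctx.a as number) + (ctx.b as number),
  }));

console.log(wf.run({}).sum); // 5
```

A real Mastra workflow would also carry suspend/resume state and OTel spans per step; the point here is only the branch/chain/merge shape of the graph.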
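The RAG verbs named in the post (chunk, embed, upsert, query, rerank) form a pipeline whose shape can be shown end to end with deliberately naive toy implementations. Everything below is a hypothetical stand-in, not Mastra's API: a real pipeline would call an embedding model and a vector DB where this sketch uses bag-of-words vectors and an in-memory array.

```typescript
// Toy end-to-end illustration of the RAG verbs: chunk -> embed -> upsert
// -> query -> rerank. All implementations are naive stand-ins.

// chunk: split a document into fixed-size word chunks
function chunk(text: string, size = 8): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const out: string[] = [];
  for (let i = 0; i < words.length; i += size) {
    out.push(words.slice(i, i + size).join(" "));
  }
  return out;
}

// embed: bag-of-words frequency map (an embedding model in real life)
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(w, (vec.get(w) ?? 0) + 1);
  }
  return vec;
}

type Row = { text: string; vec: Map<string, number> };
const index: Row[] = [];

// upsert: store embedded chunks in an in-memory "vector store"
function upsert(chunks: string[]): void {
  for (const text of chunks) index.push({ text, vec: embed(text) });
}

// Cosine similarity over sparse bag-of-words vectors
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [w, v] of a) dot += v * (b.get(w) ?? 0);
  const norm = (m: Map<string, number>) =>
    Math.sqrt([...m.values()].reduce((s, v) => s + v * v, 0));
  return dot / (norm(a) * norm(b) || 1);
}

// query: return the topK chunks most similar to the query embedding
function query(q: string, topK = 2): Row[] {
  const qv = embed(q);
  return [...index]
    .sort((x, y) => cosine(qv, y.vec) - cosine(qv, x.vec))
    .slice(0, topK);
}

// rerank: here, simply re-order candidates by exact query-word overlap
function rerank(q: string, rows: Row[]): Row[] {
  const qWords = new Set(q.toLowerCase().split(/\W+/));
  const overlap = (r: Row) =>
    [...r.vec.keys()].filter((w) => qWords.has(w)).length;
  return [...rows].sort((x, y) => overlap(y) - overlap(x));
}

upsert(chunk(
  "Mastra is a JavaScript agent framework. It ships a RAG pipeline " +
  "with chunking embedding and retrieval built in."
));
const hits = rerank("agent framework", query("agent framework"));
console.log(hits[0].text);
```

The value of abstracting these verbs, as the post argues, is that the same five calls can target different document types and vector DBs without the calling code changing.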
Url: https://github.com/mastra-ai/mastra
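The memory scheme the post describes (a `lastMessages` key, `topK` retrieval, and a `messageRange` of surrounding context, like `grep -C`) can be sketched as one retrieval function. The option names mirror the post; the word-overlap relevance scoring and the function itself are naive hypothetical stand-ins, not Mastra's implementation.

```typescript
// Sketch of MemGPT-style agent memory recall: always keep the last N
// messages, retrieve topK relevant older ones, and expand each hit by
// messageRange neighbors (grep -C style). Hypothetical stand-in code.

type Message = { role: "user" | "assistant"; content: string };

interface MemoryOptions {
  lastMessages: number; // always include this many recent messages
  topK: number;         // retrieve this many relevant older messages
  messageRange: number; // include +/- this many neighbors per hit
}

function recallMemory(history: Message[], queryText: string, opts: MemoryOptions): Message[] {
  const recentStart = Math.max(0, history.length - opts.lastMessages);
  const older = history.slice(0, recentStart);

  // Naive relevance: count words shared with the query
  const qWords = new Set(queryText.toLowerCase().split(/\W+/).filter(Boolean));
  const score = (m: Message) =>
    m.content.toLowerCase().split(/\W+/).filter((w) => qWords.has(w)).length;

  // topK: pick the highest-scoring older messages
  const hits = older
    .map((m, i) => ({ i, s: score(m) }))
    .filter((h) => h.s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, opts.topK);

  // messageRange: expand each hit with surrounding context
  const keep = new Set<number>();
  for (const { i } of hits) {
    for (let j = i - opts.messageRange; j <= i + opts.messageRange; j++) {
      if (j >= 0 && j < recentStart) keep.add(j);
    }
  }

  const retrieved = [...keep].sort((a, b) => a - b).map((i) => older[i]);
  return [...retrieved, ...history.slice(recentStart)]; // lastMessages always included
}
```

The `grep -C` analogy is the interesting design choice: a retrieved message is often meaningless without the turn before and after it, so neighbors ride along with each hit.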
## Post by: calcsam

### Comments:

**_pdp_**: I don't want to be that person, but there are hundreds of other similar frameworks doing more or less the same thing. Do you know why? Because writing a framework that orchestrates a number of tools with a model is the easy part. In fact, most of the time you don't even need a framework. All of these frameworks focus on the trivial, and you can tell that simply by browsing the examples section.

This is like 5% of the work. The developer needs to fill in the other 95%, which involves a lot more things that are strictly outside the scope of the framework.

**kylemathews**: Very excited about Mastra! We have a number of agent-ic things we'll be building at ElectricSQL, and Mastra looks like a breath of fresh air.

Also, the team is top-notch — Sam was my co-founder at Gatsby, I worked closely with Shane and Abhi, and I have a ton of confidence in their product & engineering abilities.

**joshstrange**: This looks awesome! Quick question: are there plans to support SSE MCP servers? I see Stdio [0] is supported and I can always run a proxy, but SSE would be awesome.

[0] https://mastra.ai/docs/reference/tools/client

**alanwells**: Happy Mastra user here! It strikes the right balance between letting me build with higher-level abstractions and providing lower-level controls when needed. I looked at a handful of other frameworks before getting started, and the clarity & ease of use of Mastra stood out. Nice work.

**brap**: I don't really understand agents. I just don't get why we need to pretend we have multiple personalities, especially when they're all using the same model.

Can anyone please give me a use case that couldn't be solved with a single API call to a modern LLM (capable of multi-step planning/reasoning) and a proper prompt?

Or is this really just about building the prompt, and giving the LLM closer guidance by splitting into multiple calls?

I'm specifically *not* asking about function calling.