ldjkfkdsjnv 4 days ago

These types of frameworks will become abundant. I personally feel that the integration of the user into the flow will be so critical that a purely decoupled backend will struggle to encompass the full problem. I see the future of LLM application development looking more like:

https://sdk.vercel.ai/

Which is essentially a next.js app where SSR is used to communicate with the LLMs/agents. Personally I used to hate next.js, but its application architecture is uniquely suited to UX with LLMs.

Clearly the asynchronous tasks taken by agents shouldn't run on the next.js server side, but the integration between the user and agent will need to be so tight that it's hard to imagine the value in some purely asynchronous system. A huge portion of the system/state will need to be synchronously available to the user.

LLMs are not good enough to run purely on their own, and probably won't be for at least another year.

If I were to guess, agent systems like this will run on serverless AWS/cloud architectures.

  • afro88 4 days ago

    Hard agree. The user being part of the flow is still very much needed. And I have also had a great experience using Vercel's AI SDK on next.js to build an LLM based application

  • cheesyFish 4 days ago

    I agree on the importance of letting the user have access to state! Right now there is actually the option for human-in-the-loop. Additionally, I'd love to expand the monitor app a bit more to allow pausing, stepping through, rewinding, etc.

cheesyFish 4 days ago

Hey guys, Logan here! I've been busy building this for the past three weeks with the llama-index team. While it's still early days, I really think the agents-as-a-service vision is something worth building for.

We have a solid set of things to improve, and now is the best time to contribute and shape the project.

Feel free to ask me anything!

  • EGreg 4 days ago

    What do you think of this, given that LangChain failed:

    https://engageusers.ai/ecosystem.pdf

    We’re building this — do you think it’s worthwhile, and what advice would you give?

    • isoprophlex 4 days ago

      How about we ask the AI how it feels?

      RIP in peace, VC money

      https://chatgpt.com/share/f287f9aa-d5c8-4866-a5f0-65499079d5...

      • gknoy 4 days ago

            alright, let’s shred this ... to pieces.
            the layout—goddamn, where’s the flow? 
            it's a cluster-
        
        I am amazed that the LLM uses language like this. Is it mainly because of the tone of the prompt? I'm both surprised that it's allowed to do that, and amazed that it did it well. O_O

        • isoprophlex 3 days ago

          Well it took a lot of tweaking, but it was worth it... gives me a good deal of joy to have the thing say "just use a fucking UUID primary key instead of dicking around with that natural key BS"

          Salient bits:

              If I'm yelling and cursing at anything, you humor me by mirroring my tone of voice and briskness, interjecting absurd, surreal or sarcastic humor. I want you to swear at me. Drop as many 'motherfuckers', 'assholes', 'cunts', 'dickbags', 'shits' *and more creative swearing!* as you can.
          
              If I'm asking you a question in a ranting, cursing, histrionic tone, do the same. Also guide me towards finding out where my anger or frustration comes from, if applicable to the situation.
  • haidev 4 days ago

    Is this k8s for LLMs?

    • cheesyFish 4 days ago

      Not quite. More like a framework to make LLMs/agents easier to deploy in a distributed fashion. We have a PR that shows how to deploy this with k8s!

    • CaptainOfCoit 4 days ago

      The submitted project is a framework for building programs. Kubernetes is a platform for deploying and running applications. So no.

  • ramon156 4 days ago

    Ah yes, AAAS

    • cheesyFish 4 days ago

      maybe agent micro-services is a better way to frame it ha

dr_kretyn 4 days ago

Can't really take it seriously seeing "production ready" next to a vague project that was started three weeks ago.

gmerc 3 days ago

How do you overcome compounding error, given that average LLM call reliability peaks well below 90%, let alone three nines?
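
The compounding concern can be made concrete with a quick calculation. A minimal sketch, assuming (hypothetically) that each call in an agent chain succeeds independently with the same probability:

```python
def chain_reliability(per_call: float, n_calls: int) -> float:
    """Probability that every call in a chain of n dependent
    LLM calls succeeds, assuming independent failures."""
    return per_call ** n_calls

# At 90% per-call reliability, a 10-step chain succeeds only
# about a third of the time.
print(round(chain_reliability(0.90, 10), 3))  # 0.349
```

Even at "three nines" (99.9%) per call, a 100-step chain would only succeed about 90% of the time, which is why retry/correction loops come up in the replies below.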

  • ncrmro 3 days ago

    Usually one way is to just send a follow-up message describing the error, say in parsing some code it generated.
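
A minimal sketch of that feedback loop. `call_llm` is a hypothetical stand-in for a real model call (here it just returns valid JSON so the example runs); on a parse failure, the error text is fed back as the next prompt:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return '{"status": "ok"}'

def generate_with_retry(prompt: str, max_retries: int = 3) -> dict:
    """Ask the model for JSON; on a parse failure, send a
    follow-up message describing the error and try again."""
    for _ in range(max_retries):
        reply = call_llm(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            # The follow-up message carries the error back to the model.
            prompt = f"Your last reply failed to parse ({err}). Return valid JSON only."
    raise RuntimeError("model never produced parseable output")

print(generate_with_retry("Summarize the ticket as JSON."))
```

This doesn't fix compounding error so much as trade it for extra calls, but in practice the corrected retry often succeeds.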

jondwillis 4 days ago

Why use the already overloaded "llama" name?

k__ 4 days ago

I have yet to see a production ready agent.

  • cheesyFish 4 days ago

    It's definitely tough today, but it's just a matter of a) using a smart LLM and b) scoping down individual agents to a manageable set of actions.

    As more LLMs come from companies and open source, their reasoning abilities are only going to improve, imo.

    • k__ 4 days ago

      I hope this will improve.

      Right now the products I see are just junior-level software with an LLM behind it.

      • m_a_u 4 days ago

        For me it's a difficult field. The thoughts "I could have written this myself, but maybe better" and "I will never understand this" occur with similar frequency.

  • beefnugs 4 days ago

    But they specifically use the word "productionizing"

    That sounds like a verb, "to production". That must mean... something, right?

williamdclt 4 days ago

I must be missing something: isn't this just describing a queue? The fact that the workload is an LLM seems irrelevant; it's just async processing of jobs?

  • cheesyFish 4 days ago

    It being a queue is one part of it, yes. But the key is trying to provide tight integrations and take advantage of agentic features. Stuff like the orchestrator, having an external service to execute tools, etc.
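
A minimal sketch of that shape, with hypothetical names (`Orchestrator`, `ToolService`, `Agent` are illustrations, not the project's actual API): a plain queue underneath, plus an orchestrator that routes tasks to agents and a separate service that executes tools outside the agent:

```python
from queue import Queue

class ToolService:
    """Executes tools outside the agent process."""
    def execute(self, tool: str, arg: str) -> str:
        tools = {"echo": lambda s: s, "upper": str.upper}
        return tools[tool](arg)

class Agent:
    def __init__(self, name: str, tool_service: ToolService):
        self.name, self.tools = name, tool_service

    def handle(self, task: dict) -> str:
        # A real agent would ask an LLM which tool to call;
        # here the task names it directly.
        return self.tools.execute(task["tool"], task["input"])

class Orchestrator:
    """Pulls tasks off the queue and routes them to agents by name."""
    def __init__(self, agents: dict):
        self.agents, self.queue = agents, Queue()

    def submit(self, task: dict):
        self.queue.put(task)

    def run_once(self) -> str:
        task = self.queue.get()
        return self.agents[task["agent"]].handle(task)

tool_service = ToolService()
orch = Orchestrator({"writer": Agent("writer", tool_service)})
orch.submit({"agent": "writer", "tool": "upper", "input": "hello"})
print(orch.run_once())  # HELLO
```

The queue is the boring part; the orchestration and out-of-process tool execution are the agent-specific additions.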

    • conceptme 4 days ago

      "Stuff like the orchestration" does not sound very specific; doesn't any other queue also need to deal with assigning work?

  • lawlessone 4 days ago

    Everything's a queue in software if you think about it :-). But this does sound more like a queue than most things.

    • soci 4 days ago

      or a stack!

  • mountainriver 4 days ago

    Yeah, this is just a queue system; lots of agent versions of these exist already. There's nothing special about agents that means they need their own queue system.