This is a very meandering article without a lot of content, but as near as I can tell this quote near the end is the thesis:
> Local ai apps are how you get around the legacy permission system of the internet and set AI automation free.
This is an overly dramatic way of saying that OpenAI wants their local app to be a first-class user agent rather than just another sandboxed website. This isn't really an end run around the web's permission system; it's very much an explicit part of the security model. Users trust a small handful of programs to act as their agent when interacting with a bunch of content they don't necessarily trust. If OpenAI wants to try to become one of those trusted user agents, more power to them! We can always use more competition in the user agent space.
yes, good nothing burger
The article is junk, but I'm curious whether their fever dream is even possible on iOS - would the permission structure let an app read screens, click buttons, etc.?
If there were genuine user agent competition/differentiation, I think it would take an EU antitrust action to force Apple to open up a "Siri replacement" API.
And if I were Apple, I'd be finding a way to make Apple-only silicon "part of" Siri, at least in the eyes of a future EU court.
I had a very hard time understanding what they were saying.
Whitespace between single, short sentences.
Sounds in my head like they pause 5-10 seconds between each sentence.
What is going on?
I also struggled, but pushed through it enough to get the essence.
The message is that by running an arbitrary app on a machine, LLMs can act as an agent on your behalf, as though they were actually you. So an LLM could, for example, make purchases, log in to _anything_, and generally do whatever across any local programs and/or miscellaneous web browsers. Compare this to a web browser, where there are huge security restrictions on what components can talk to what (e.g. forbidding cross-origin and http <=> https requests, and so forth).
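To make the contrast concrete: the restriction mentioned above is the browser's same-origin policy, under which two URLs only share an origin if scheme, host, and port all match. A minimal sketch of that check (simplified; real browsers also normalize default ports like 443 for https, which this ignores):

```python
from urllib.parse import urlsplit

def same_origin(a: str, b: str) -> bool:
    # Simplified same-origin check: scheme, host, and port must all match.
    # (Real browsers also treat default ports as equal to an omitted port.)
    pa, pb = urlsplit(a), urlsplit(b)
    return (pa.scheme, pa.hostname, pa.port) == (pb.scheme, pb.hostname, pb.port)

# Same scheme + host + port: allowed to share state freely.
print(same_origin("https://example.com/page", "https://example.com/api"))
# Scheme differs (http vs https): different origin, blocked by default.
print(same_origin("https://example.com/page", "http://example.com/api"))
# Host differs (api subdomain): different origin, blocked by default.
print(same_origin("https://example.com/page", "https://api.example.com/"))
```

A local agent app operates outside this model entirely: nothing partitions what it does in one program from what it does in another, which is exactly the point the article is circling around.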
Overall it's a bit of a nothing burger, and the level of effort required to identify it as such was a tad disappointing.
---
Whatever you do
In life
Please don't
Form your sentences
Like this.
I am sorry -- that was impossible to read. I think even LLMs will struggle here.
[dead]
Is the author suggesting that OpenAI’s desktop app is indexing the contents of the machine’s disk or monitoring a user’s behavior outside of the app? I hope this is not true.
tl;dr: There is concern that BigNames will give AI "root" on your life and let it take action on your behalf that would have previously required permission, e.g. logging in with a FANG account.
About the time that real costs are incurred, a class-action lawsuit would presumably follow.
[flagged]
I know... I flagged, then unflagged, because it's bad, but... is it really flag-worthy bad? Thanks to the sibling commenter (@lc64), I agree with you: it's a waste of time.
more concerning
than form
is the absence
of substance.
flag that waste of time
[flagged]
[flagged]
Yes.
And for the six year old subsistence farmer in India.
I always wondered what became of Hipster Runoff.
[flagged]