After eating up all of the data on the internet, the obvious next step is recording all human interactions with the computer in order to make you obsolete. You're going to train the AI model that's going to replace you while all the profits go to a small number of companies.
We already gave away tons of our personal information for free to social networks, hopefully we've learned our lesson and we don't let AI companies get away with doing the same thing.
The reason I created https://github.com/OpenAdaptAI/OpenAdapt is because I had the same concern, and wanted to create a way for users and organizations to avoid exfiltrating their data while still participating in the promise of AI.
I believe we can create a future in which you train the AI model that will assist you while participating in the profits it generates. "Build the future you want to live in."
I think what you train should stay on your own box and not be lured into sharing for some profits, as the profits will be collected in a few large funnels to pay large salaries for CEOs and what not.
For me personally privacy and security is less of a concern vs Ads and manipulation. I'm very very concerned that marketers will be able to insert very compelling and very nuanced advertising into our experiences with it, such that even if you're trained in the dark art of marketing, you will fall for it. We're already seeing ads in perplexity, what happens when your reply to your question about a dirty entry you wrote from your free diary app AI is: "good point Johh, and I'm sorry you're going through that, it's been proven that tea can help with that however, it's calming and soothing, Lipton is great".
I work in go to market, some people (my mum) might even call me "one of those evil manipulative marketers" - yet here I am- inclined to lobby for strong legal standards around this stuff, it should not be the wild west, that would be really bad. I'm extremely concerned about this.
I am starting to wonder if the AI market will diverge into two separate implementations where one is the open source (open models?) that people can use and deploy will be done for people that are more technical and most people will use the easiest solutions.
I have used some of the AI techniques in macOS and while they are helpful I probably won't use the ones that call out to ChatGPT. But, I have found that using o1 preview it helps me with my side projects. On the other hand I have started using llama 3.3 and it is approaching usefulness and maybe an open model with CoT and reinforcement learning or even a new version of Qwen 2.5 I could see replacing.
The only future where this happens is if little, non- infrastructure / specialized hardware models maintain functional parity with "throw more hardware at the problem" big models.
That seems... unlikely.
And then we'll see the same evolution that built Google.
Someone with the best algorithm leverages that into a revenue stream, then uses that revenue to buy hardware, then monetizes its userbase via advertisement, then uses that additional revenue to buy market share.
I'm sure it'll make someone a ton of money, but it's not a future I'd like to live in (again).
> On the other hand I have started using llama 3.3 and it is approaching usefulnes
cute phrasing! so what you're saying is that you're using it, but it's not useful. You can imagine it being useful, and it produces output similar to useful output, but it isn't useful.
That perfectly matches my experience with language models, going back to when I was running GPT2 locally.
Has there been any evidence that the improvement is not asymptotic? For ten years language models have been approaching usefulness.
I think LLAMA 4 will get to usefulness ( I think I should say I have evaluated it). Long term training seems to be the best solution to solve the issues I have been facing and also more polish using CoT with reinforcement learning. I don't want to be paying for o1 since I have a homelab that could easily use up to 70B for LLAMA 3.3 or more if I can figure out how to run larger versions of LLAMA but llama can parallelize it to use all of my GPUs and inference would be faster so I would settle for that.
That's not what they said, and you're being purposely obtuse.
They were saying that Llama 3.3, an open model, approaches usefulness, in comparison to o1, a closed model, which is already useful to them.
As for usefulness in general, it's already here. It's not perfect, it can make mistakes, but saying they're not useful at all - especially for software engineers - is foolish. See https://news.ycombinator.com/item?id=42602940.
> For me personally privacy and security is less of a concern vs Ads and manipulation.
Those two don’t add up. You expose yourself to manipulation and control when you don’t have adequate privacy. If privacy is less of a concern to you, then manipulation equally so too.
For me it's slightly nuanced. Personally, I'm less concerned about yet another attack surface for my personal information from a 3rd party I didn't expect, it's probably all out there anyway. If I gave all my data to an AI company and some hacks in and steals it, for me anyway, I'm less concerned (I realize very many people find that strange/insane). I am considerably more concerned about being psychologically manipulated by an AI tool. I would say however, to your point, it would be shit if someone hacked in, changed or inserted data, and that was use to manipulate the AI, however my sense is that might be difficult/unlikely?
Scenario: You get bloodwork done. You upload the report to an AI app and ask it for its interpretation. It says you have high cholesterol, low testosterone, and some elevated liver values.
The next day you start seeing ads on instagram for GLP-1’s, TRT, cholesterol medication, and supplements to “improve liver function”.
It may not be that the AI tool is directly manipulating you. It could simply be that your data is used or incorporated into a new AI personalization model where the manipulation happens outside of the AI tool you originally gave your data to.
Just to make it very clear: I am less concerned about my personal information being stolen, hacked, etc. I accept that stuff happens.
I tech people how to do that all day every day without AI, we have the tools to do what you described already, and we in fact indeed do what you said already.
Yes this will most likely happen. I wrote my PhD thesis exactly around this topic (~8 years ago). It was fascinating to see in a lab setting how I could manipulate humans with voice agents (robots).
Their HN profile is full of personal information, including name, email, location, resume, and saying they’re open for work. All I did was search for “<name> thesis” and found it without any significant effort. Why would they not want people to read their public research while talking about it with interest and making it easy to find?
It seemed more likely to me they simply didn’t want to be perceived as tooting their own horn. Turns out that was exactly the case.
a cloud based AI having access to a document like a W2 pdf on my computer worries me. It has details like social security number, how much you earned, etc.
It's literally a principal-agent problem (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...). If the evolution of Google's results page is any indication, it's a matter of "when" (not "if") the usefulness gets sold away to the highest bidder. Having billions of people rely on your system is too tempting a thing to not sell away.
"Parasitic middleman" is Silicon Valley's business model.
I can't see the (intended) endgame of AI as anything other than (1) replacing vast amounts of human labor and/or (2) inventing a new form of advertising that's conversational, obfuscated, undisclosed. Something dramatic has to justify the crazy level of investment.
I could see the endgame of AI as surveillance and manipulation of information to achieve political goals. In the old days, expensive intelligence agencies and security services had to work hard to figure out targets' affiliations and beliefs, to propagandize, and to subvert opponents (or one's own society). Social media then persuaded users that it was a good idea to list their family, friends, and associates in a giant database along with their political beliefs, hopes and fears, and pictures of their lunch, and its algorithms surfaced whatever the makers wanted (cf. TikTok). Now, AI is poised not only to comb through what e-mails and files hadn't been available before, but also to surface or even generate information, giving it a prime spot to censor or manipulate what we see.
Sure, most technology replaces human labor. The printing press put thousands of book copiers out of work. The word processor destroyed the entire job description of “typist”.
The promise of AI is that it will free human labor to focus on higher value areas rather than copying books or typing up the boss’ scribbled notes.
Automation is a shitty deal for the workforce because in every case those who are not made redundant are expected to create more work product for the same wage.
The introduction of Photoshop generative fill, Office 365 Co-pilot, etc means that my team will only hire one marketing grad instead of the usual three-four this year, and the one will be responsible for the same amount of work product as last year if they don’t die from the panic attacks.
In February, the excess profits go to the owners by way of a stock buyback and the 3-4 designers we didn’t hire are free to work at a coffee shop.
I’ve not experienced a single case where automation has enabled the workforce to have more leisure time, or more technically, where efficiency gains have accrued to labor rather than capital.
I really think there is no way out of it. Things will continue to get worse before they get better. Eventually there will be so many unemployed working age people that violence will be the only logical course.
> I'm very very concerned that marketers will be able to insert very compelling and very nuanced advertising into our experiences with it...
But this isn’t new—it’s been happening for a long time without AI, and it’s scarier than we often realize. With just a few data points about your life, marketers can accurately identify your traits, even if you think you belong to a very specific niche or tribe.
What’s worse is the deceitful and unethical practices behind this: many companies have been sharing your data to build detailed profiles of you, often without your knowledge or consent.
The other day someone mentioned how AI crawling bots crawled every link, including edit history links, on their sites. It made me realize they really only can learn from what they can access digitally.
While we as humans can go out in the world, use our senses to see, hear, smell and feel, and move around and interact with the physical world.
But my next thought was - for how long will only we be able to do that? Will we soon see AI drones roaming our skies to ingest the physical world? Or will the surveillance cameras and microphones see and hear anything, and contribute to their training? I assume it has already started...
I saw a HN comment the other day that is still giving me pause. Maybe it's very obvious, but it got me thinking. It was something like: we are pulled forward by individuals, individuals who through some period of time "unlock" new information via interactions over period of time though the methods you mentioned, primarily by interacting with others who hold net new data to them, and adding it into this unique data set in their head, that causes some stuff to "click" and something "new" is born. When I read that I thought, yeah, that's basically how it works and now that you put it like that, it doesn't seem very special at all.
No, it's the other way around. Humans will continue the trend of interacting with "the world" less and less, and thus become more like the LLMs themselves. They won't get smarter; we will get stupider and meet them in the middle. You can already see this happening.
Related: "AI can now create a replica of your personality" [0]. What could be a better personal assistant than an AI that's imprinted on your electronic personality as evidenced through your emails, messages, documents, social media postings, transcripts of your phone calls, etc.?
> What could be a better personal assistant than an AI that's imprinted on your electronic personality as evidenced through your emails, messages, documents, social media postings, transcripts of your phone calls, etc.
One who can, you know, move. And interact with the physical world, like open letters or run errands. And correctly anticipate your needs. And react to unpredictable situations. And not make stuff up. And shield you from stuff you don’t want to engage in. And, and, and…
“Oh, but don’t worry, those robots will be ready and in everyone’s homes next year. That is a promise that will not change: ask me again in three years and it’ll still be ‘next year’”.
Maybe 85% of what I do on a given day is mundane and predictable. The rest is what makes someone unique, unpredictable, emergent, interesting, and able to solve issues that fall outside of life's happy path.
From what a security researcher in the article is saying, these systems are vulnerable to prompt injection attacks just like LLMs, and I wouldn't be surprised if they're using LLMs under the hood.
The privacy temptations for modern tech corporations are already too much. The security negligence is just icing on a poison cake. Feels like sunk cost.
The network effects of any sufficiently large technology means that once it's got enough adoption, and individual cannot really opt out in a meaningful way. Social media and smart phones are an example of this: yes, and individual can opt out, but 1) this is incredibly inconvenient, and 2) everyone else they know will _not_ have opted out, so the secondary effects of those technologies will still impact their personal and professional lives.
AI feels much like smart phones and social media in that regard. Another net negative for society because some people wanted to make their mark on society.
> One ambitious goal of companies developing AI agents is to have them interact with other kinds of software as humans do, by making sense of a visual interface and then clicking buttons or typing to complete a task.
Can it avoid dark patterns? If it can't, then it's not a solution to anything. It's more like an additional problem. It also opens the door to the concept of dark agents (hey, this bot _always_ installs the Altavista search bar).
We designed https://github.com/OpenAdaptAI/OpenAdapt to ground the agent to human demonstrations, so that if you as a human don't engage in the dark pattern, neither will the agent.
Why would it not avoid dark patterns? Assuming the reinforcement learning takes desired task and correct outcome into account, it would take work to train them to fall for dark patterns.
What do you think is going to happen if an AI system which avoids current dark patterns starts being widespread? Bad actors aren’t going to shrug their shoulders and stop, they’re just going to make systems which trick those AIs. Adapting a dark pattern is significantly faster and more cost effective than training a new AI to bypass those new approaches.
Of course it won’t fool everyone, just like not everyone is fooled now. That’s why we’re able to put a name to the practice, some people saw through it. That just goes to show human psychology is not as homogenous as you make it out to be.
There is a vastly greater diversity of humans than AI implementations. It is delusional to think the current crop AI, trained on human behaviour and interactions, won’t be able to be tricked just as easily or even more so, when we’ve been doing it since the start.
“Delusional” is pretty strong for a handwave assertion.
And you really think human neuro- and cognitive architecture is more diverse than the many many many different ML architectures? That’s a bold claim. We have wide ML models, and deep, and LORA, a million flavors of transformers and then a million non-transformer architcthres.
I’m very suspicious of your absolute certainty and complete dismissal of contrary opinions.
I presume Google Mariner is using the digitized behavior of googlers searching for products, analytics from shopping websites (I would use it), even ad interactions (why not?). If it works, boom, lots of unproductive stale data turned into a potential product (if someone builds this, I want 5%).
The problem is: that data has a lot of users under the influence of dark patterns. People shopping in stupid ways, falling for tricks. They probably know this and improved search against some of these patterns in the past. But it's not perfect, right? Search isn't.
Large models fall for tricks all the time, you have to assume the other way around. They will fail, and you can reduce that failure rate, but never zero it.
Yes, yes they are. A captcha is a simple system which returns a binary pass/fail. A dark pattern plays with your understanding of reality to make you choose what is beneficial to the company at your expense.
AI systems will be gamed to follow dark patterns too.
I like the possibility of not having to deal with all dark patterns on a web shop, but on the other hand, it is very likely that "AI agent" will order wrong type of milk.
AI Agents, when discussed on podcasts by tech “bros” and execs, seem to forget that most people are price sensitive. When booking a flight, we take into account cost, travel time, whether or not we’re a morning person, a meeting we need to make that’s not in the diary (texted a mate to catch up). If it’s reading that as well. I don’t want an Agent booking me something that on paper looks the best, but is £200 more than it could be with a small trade off.
Plus, these execs all have Executive Assistants that act as the abstraction layer already. This is maybe why it feels like a no brainer to them. They already share their lives with these people, why not share the same with an AI??
I don't see how that's a technological problem? It's certainly a social problem (your agent's provider might have a deal with airlines / hotels to gouge you), but there's no inherent reason why an agent couldn't be price sensitive, either. After all, it's just an optimisation problem, and the field of AI famously knows a thing or two about those ;)
After eating up all of the data on the internet, the obvious next step is recording all human interactions with the computer in order to make you obsolete. You're going to train the AI model that's going to replace you while all the profits go to a small number of companies.
We already gave away tons of our personal information for free to social networks, hopefully we've learned our lesson and we don't let AI companies get away with doing the same thing.
The reason I created https://github.com/OpenAdaptAI/OpenAdapt is because I had the same concern, and wanted to create a way for users and organizations to avoid exfiltrating their data while still participating in the promise of AI.
I believe we can create a future in which you train the AI model that will assist you while participating in the profits it generates. "Build the future you want to live in."
I think what you train should stay on your own box and not be lured into sharing for some profits, as the profits will be collected in a few large funnels to pay large salaries for CEOs and what not.
For me personally privacy and security is less of a concern vs Ads and manipulation. I'm very very concerned that marketers will be able to insert very compelling and very nuanced advertising into our experiences with it, such that even if you're trained in the dark art of marketing, you will fall for it. We're already seeing ads in perplexity, what happens when your reply to your question about a dirty entry you wrote from your free diary app AI is: "good point Johh, and I'm sorry you're going through that, it's been proven that tea can help with that however, it's calming and soothing, Lipton is great".
I work in go to market, some people (my mum) might even call me "one of those evil manipulative marketers" - yet here I am- inclined to lobby for strong legal standards around this stuff, it should not be the wild west, that would be really bad. I'm extremely concerned about this.
I am starting to wonder if the AI market will diverge into two separate implementations where one is the open source (open models?) that people can use and deploy will be done for people that are more technical and most people will use the easiest solutions.
I have used some of the AI techniques in macOS and while they are helpful I probably won't use the ones that call out to ChatGPT. But, I have found that using o1 preview it helps me with my side projects. On the other hand I have started using llama 3.3 and it is approaching usefulness and maybe an open model with CoT and reinforcement learning or even a new version of Qwen 2.5 I could see replacing.
The only future where this happens is if little, non- infrastructure / specialized hardware models maintain functional parity with "throw more hardware at the problem" big models.
That seems... unlikely.
And then we'll see the same evolution that built Google.
Someone with the best algorithm leverages that into a revenue stream, then uses that revenue to buy hardware, then monetizes its userbase via advertisement, then uses that additional revenue to buy market share.
I'm sure it'll make someone a ton of money, but it's not a future I'd like to live in (again).
Diversity is strength.
> On the other hand I have started using llama 3.3 and it is approaching usefulnes
cute phrasing! so what you're saying is that you're using it, but it's not useful. You can imagine it being useful, and it produces output similar to useful output, but it isn't useful.
That perfectly matches my experience with language models, going back to when I was running GPT2 locally.
Has there been any evidence that the improvement is not asymptotic? For ten years language models have been approaching usefulness.
When do we get to usefulness?
I think LLAMA 4 will get to usefulness ( I think I should say I have evaluated it). Long term training seems to be the best solution to solve the issues I have been facing and also more polish using CoT with reinforcement learning. I don't want to be paying for o1 since I have a homelab that could easily use up to 70B for LLAMA 3.3 or more if I can figure out how to run larger versions of LLAMA but llama can parallelize it to use all of my GPUs and inference would be faster so I would settle for that.
That's not what they said, and you're being purposely obtuse.
They were saying that Llama 3.3, an open model, approaches usefulness, in comparison to o1, a closed model, which is already useful to them.
As for usefulness in general, it's already here. It's not perfect, it can make mistakes, but saying they're not useful at all - especially for software engineers - is foolish. See https://news.ycombinator.com/item?id=42602940.
> For me personally privacy and security is less of a concern vs Ads and manipulation.
Those two don’t add up. You expose yourself to manipulation and control when you don’t have adequate privacy. If privacy is less of a concern to you, then manipulation equally so too.
For me it's slightly nuanced. Personally, I'm less concerned about yet another attack surface for my personal information from a 3rd party I didn't expect, it's probably all out there anyway. If I gave all my data to an AI company and some hacks in and steals it, for me anyway, I'm less concerned (I realize very many people find that strange/insane). I am considerably more concerned about being psychologically manipulated by an AI tool. I would say however, to your point, it would be shit if someone hacked in, changed or inserted data, and that was use to manipulate the AI, however my sense is that might be difficult/unlikely?
Scenario: You get bloodwork done. You upload the report to an AI app and ask it for its interpretation. It says you have high cholesterol, low testosterone, and some elevated liver values.
The next day you start seeing ads on instagram for GLP-1’s, TRT, cholesterol medication, and supplements to “improve liver function”.
It may not be that the AI tool is directly manipulating you. It could simply be that your data is used or incorporated into a new AI personalization model where the manipulation happens outside of the AI tool you originally gave your data to.
Just to make it very clear: I am less concerned about my personal information being stolen, hacked, etc. I accept that stuff happens.
I tech people how to do that all day every day without AI, we have the tools to do what you described already, and we in fact indeed do what you said already.
Yes this will most likely happen. I wrote my PhD thesis exactly around this topic (~8 years ago). It was fascinating to see in a lab setting how I could manipulate humans with voice agents (robots).
> I wrote my PhD thesis exactly around this topic (~8 years ago).
For anyone interested, from checking the HN profile the thesis seems to be “The Power of Robot Groups with a Focus on Persuasion and Linguistic Cues”.
https://ir.canterbury.ac.nz/bitstreams/cac255b7-43fe-4e08-94...
Thanks for sharing my thesis. I'm never sure if I should include links or not. It kind of feels too pushy.
Why would you do that? They could have shared a link if they wanted, they didn't.
Their HN profile is full of personal information, including name, email, location, resume, and saying they’re open for work. All I did was search for “<name> thesis” and found it without any significant effort. Why would they not want people to read their public research while talking about it with interest and making it easy to find?
It seemed more likely to me they simply didn’t want to be perceived as tooting their own horn. Turns out that was exactly the case.
https://news.ycombinator.com/item?id=42603304
You're right, my bad, sorry for the noise
a cloud based AI having access to a document like a W2 pdf on my computer worries me. It has details like social security number, how much you earned, etc.
Imagine trying to perform a “right to forget” request on data that was trained into the AI?
Imagine having to comply with such request or elsr
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-re...
It's literally a principal-agent problem (https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...). If the evolution of Google's results page is any indication, it's a matter of "when" (not "if") the usefulness gets sold away to the highest bidder. Having billions of people rely on your system is too tempting a thing to not sell away.
"Parasitic middleman" is Silicon Valley's business model.
I can't see the (intended) endgame of AI as anything other than (1) replacing vast amounts of human labor and/or (2) inventing a new form of advertising that's conversational, obfuscated, undisclosed. Something dramatic has to justify the crazy level of investment.
I could see the endgame of AI as surveillance and manipulation of information to achieve political goals. In the old days, expensive intelligence agencies and security services had to work hard to figure out targets' affiliations and beliefs, to propagandize, and to subvert opponents (or one's own society). Social media then persuaded users that it was a good idea to list their family, friends, and associates in a giant database along with their political beliefs, hopes and fears, and pictures of their lunch, and its algorithms surfaced whatever the makers wanted (cf. TikTok). Now, AI is poised not only to comb through what e-mails and files hadn't been available before, but also to surface or even generate information, giving it a prime spot to censor or manipulate what we see.
Sure, most technology replaces human labor. The printing press put thousands of book copiers out of work. The word processor destroyed the entire job description of “typist”.
The promise of AI is that it will free human labor to focus on higher value areas rather than copying books or typing up the boss’ scribbled notes.
Automation is a shitty deal for the workforce because in every case those who are not made redundant are expected to create more work product for the same wage.
The introduction of Photoshop generative fill, Office 365 Co-pilot, etc means that my team will only hire one marketing grad instead of the usual three-four this year, and the one will be responsible for the same amount of work product as last year if they don’t die from the panic attacks.
In February, the excess profits go to the owners by way of a stock buyback and the 3-4 designers we didn’t hire are free to work at a coffee shop.
I’ve not experienced a single case where automation has enabled the workforce to have more leisure time, or more technically, where efficiency gains have accrued to labor rather than capital.
I really think there is no way out of it. Things will continue to get worse before they get better. Eventually there will be so many unemployed working age people that violence will be the only logical course.
it’s a greed problem. No automation will fix that.
In competitive markets, the services should get cheaper, in the end, the consumer profits from AI-increased productivity.
> I'm very very concerned that marketers will be able to insert very compelling and very nuanced advertising into our experiences with it...
But this isn’t new—it’s been happening for a long time without AI, and it’s scarier than we often realize. With just a few data points about your life, marketers can accurately identify your traits, even if you think you belong to a very specific niche or tribe.
What’s worse is the deceitful and unethical practices behind this: many companies have been sharing your data to build detailed profiles of you, often without your knowledge or consent.
The other day someone mentioned how AI crawling bots crawled every link, including edit history links, on their sites. It made me realize they really only can learn from what they can access digitally.
While we as humans can go out in the world, use our senses to see, hear, smell and feel, and move around and interact with the physical world.
But my next thought was - for how long will only we be able to do that? Will we soon see AI drones roaming our skies to ingest the physical world? Or will the surveillance cameras and microphones see and hear anything, and contribute to their training? I assume it has already started...
I saw a HN comment the other day that is still giving me pause. Maybe it's very obvious, but it got me thinking. It was something like: we are pulled forward by individuals, individuals who through some period of time "unlock" new information via interactions over period of time though the methods you mentioned, primarily by interacting with others who hold net new data to them, and adding it into this unique data set in their head, that causes some stuff to "click" and something "new" is born. When I read that I thought, yeah, that's basically how it works and now that you put it like that, it doesn't seem very special at all.
Google Street View is a pretty good example of this. And the issues of having someone driving around with a massive WiFi radio and loads of cameras.
https://www.darkreading.com/cyber-risk/google-street-view-pu...
No, it's the other way around. Humans will continue the trend of interacting with "the world" less and less, and thus become more like the LLMs themselves. They won't get smarter; we will get stupider and meet them in the middle. You can already see this happening.
Suddenly Germany not having google mapa due to privacy all those years sounds less silly now
Related: "AI can now create a replica of your personality" [0]. What could be a better personal assistant than an AI that's imprinted on your electronic personality as evidenced through your emails, messages, documents, social media postings, transcripts of your phone calls, etc.?
[0] https://www.technologyreview.com/2024/11/20/1107100/ai-can-n...
Taking the Black Mirror episode "Be Right Back" [1] closer to reality.
[1] https://en.wikipedia.org/wiki/Be_Right_Back
> What could be a better personal assistant than an AI that's imprinted on your electronic personality as evidenced through your emails, messages, documents, social media postings, transcripts of your phone calls, etc.
One who can, you know, move. And interact with the physical world, like open letters or run errands. And correctly anticipate your needs. And react to unpredictable situations. And not make stuff up. And shield you from stuff you don’t want to engage in. And, and, and…
Techbro: "Yes, we are working on robots and that kind of stuff, but it is a more difficult field. To get the accuracy and versatility of .."
“Oh, but don’t worry, those robots will be ready and in everyone’s homes next year. That is a promise that will not change: ask me again in three years and it’ll still be ‘next year’”.
> The results were 85% similar.
Maybe 85% of what I do on a given day is mundane and predictable. The rest is what makes someone unique, unpredictable, emergent, interesting, and able to solve issues that fall outside of life's happy path.
From what a security researcher in the article is saying, these systems are vulnerable to prompt injection attacks just like LLMs, and I wouldn't be surprised if they're using LLMs under the hood.
The privacy temptations for modern tech corporations are already too much. The security negligence is just icing on a poison cake. Feels like sunk cost.
The network effects of any sufficiently large technology means that once it's got enough adoption, and individual cannot really opt out in a meaningful way. Social media and smart phones are an example of this: yes, and individual can opt out, but 1) this is incredibly inconvenient, and 2) everyone else they know will _not_ have opted out, so the secondary effects of those technologies will still impact their personal and professional lives.
AI feels much like smart phones and social media in that regard. Another net negative for society because some people wanted to make their mark on society.
> One ambitious goal of companies developing AI agents is to have them interact with other kinds of software as humans do, by making sense of a visual interface and then clicking buttons or typing to complete a task.
Can it avoid dark patterns? If it can't, then it's not a solution to anything. It's more like an additional problem. It also opens the door to the concept of dark agents (hey, this bot _always_ installs the Altavista search bar).
We designed https://github.com/OpenAdaptAI/OpenAdapt to ground the agent to human demonstrations, so that if you as a human don't engage in the dark pattern, neither will the agent.
Why would it not avoid dark patterns? Assuming the reinforcement learning takes desired task and correct outcome into account, it would take work to train them to fall for dark patterns.
> Why would it not avoid dark patterns?
What do you think is going to happen if an AI system which avoids current dark patterns starts being widespread? Bad actors aren’t going to shrug their shoulders and stop, they’re just going to make systems which trick those AIs. Adapting a dark pattern is significantly faster and more cost effective than training a new AI to bypass those new approaches.
Maybe, but if there is a diversity of AI implementations plus real humans using the system, it may not be practical to fool everyone.
Dark patterns work because human psychology is fairly homogeneous. It’s possible that AI will be similar, but I suspect that won’t be the case.
Of course it won’t fool everyone, just like not everyone is fooled now. That’s why we’re able to put a name to the practice, some people saw through it. That just goes to show human psychology is not as homogenous as you make it out to be.
There is a vastly greater diversity of humans than AI implementations. It is delusional to think the current crop AI, trained on human behaviour and interactions, won’t be able to be tricked just as easily or even more so, when we’ve been doing it since the start.
“Delusional” is pretty strong for a handwave assertion.
And you really think human neuro- and cognitive architecture is more diverse than the many many many different ML architectures? That’s a bold claim. We have wide ML models, and deep, and LORA, a million flavors of transformers and then a million non-transformer architcthres.
I’m very suspicious of your absolute certainty and complete dismissal of contrary opinions.
I presume Google Mariner is using the digitized behavior of googlers searching for products, analytics from shopping websites (I would use it), even ad interactions (why not?). If it works, boom, lots of unproductive stale data turned into a potential product (if someone builds this, I want 5%).
The problem is: that data has a lot of users under the influence of dark patterns. People shopping in stupid ways, falling for tricks. They probably know this and improved search against some of these patterns in the past. But it's not perfect, right? Search isn't.
Large models fall for tricks all the time, you have to assume the other way around. They will fail, and you can reduce that failure rate, but never zero it.
A new meaning for the Principal-Agent Problem... or maybe just the same meaning in a new context.
Are dark patterns harder than captchas?
I'd say no.
> Are dark patterns harder than captchas?
Yes, yes they are. A captcha is a simple system which returns a binary pass/fail. A dark pattern plays with your understanding of reality to make you choose what is beneficial to the company at your expense.
AI systems will be gamed to follow dark patterns too.
I like the possibility of not having to deal with all dark patterns on a web shop, but on the other hand, it is very likely that "AI agent" will order wrong type of milk.
Why would you assume less dark pattern in the muddy waters of two ai agents negotiating you milk buying?
AI Agents, when discussed on podcasts by tech “bros” and execs, seem to forget that most people are price sensitive. When booking a flight, we take into account cost, travel time, whether or not we’re a morning person, a meeting we need to make that’s not in the diary (texted a mate to catch up). If it’s reading that as well. I don’t want an Agent booking me something that on paper looks the best, but is £200 more than it could be with a small trade off.
Plus, these execs all have Executive Assistants that act as the abstraction layer already. This is maybe why it feels like a no brainer to them. They already share their lives with these people, why not share the same with an AI??
I don't see how that's a technological problem? It's certainly a social problem (your agent's provider might have a deal with airlines / hotels to gouge you), but there's no inherent reason why an agent couldn't be price sensitive, either. After all, it's just an optimisation problem, and the field of AI famously knows a thing or two about those ;)
Inner monologue is next after that huh.
Anyone working on this should either efficiently sabotage or quit the effort, and be socially isolated from their communities until they do.
Worked really well of all the vices we eradicated from our communities up until now
Nope