> “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
It's awesome to see the amazing value for society being created by big tech these days.
To think that even a year ago the idea of Instagram-style social media where all posts are openly AI-generated sounded very dystopian; now I can clearly see it is something people would pay for and HN people would gladly build. I wasn’t always a Luddite, but damn they made me one.
> I wasn’t always a Luddite, but damn they made me one.
If this industry didn't pay so well, I would've been gone years ago. I'm lucky to work in a job that I think is ethical and improving the world, but it's so goddamn embarrassing to even be in the same room as the AI and blockchain types and the ad hucksters.
> If this industry didn't pay so well, I would've been gone years ago.
That's the root of the problem. Intelligent and want to live nicely? Then have your mind exploited for profit! Have morals? Compromise on them and get paid.
I know someone who works for a large gaming company who, upon hearing that a friend of mine was entertaining a position at a Musk company, told me "your friend lacks a moral compass to work for that man." I reminded them that they work for a company that was fined half a billion dollars for willfully exploiting children. They did not take that well.
I've worked at both a casino company and a large mobile game studio, and the latter was way shadier in almost all aspects.
There's a list of 20-ish large companies we universally agree are evil and openly bash all the time, but the reality is that 95% of the industry is at best doing absolutely meaningless work, and at worst work that's deeply negative for the world.
Not disagreeing with you, but there are 8 billion people (at least) on this planet, and we all need something to do, no? Only a fraction of work is actually meaningful (growing food, healing others...) and a big chunk is probably junk work or mostly junk ("social media manager" - how many of these do we need, and do we need 40 hours a week of it?), but these jobs also keep people employed, so maybe in that sense they are meaningful?
That would be my God. In parts of the gaming industry working there can make you a pariah though. Not a huge portion but I've seen people get rejected based on working there and a couple other choice companies.
It's not gonna pay well for long, unless you find yourself a very tight and lucrative niche. My goal on the other hand is to buy a house in the woods by end of the year and become a woodworker before I have to be assimilated by the AI wave, maybe doing some coding by hand just for the fun of it on the side. Can't wait for "vibe coding" and "2 years of Copilot experience" to figure among the average list of job requirements.
I bought my wife's grandma's house in rural central Louisiana so that her grandpa could retire and not have to worry about medical bills or anything like that.
It already has a "woodshop"[1] on it, and when the contractor finishes (or at least starts) on repairing the years of neglect, I will also be a woodworker in the woods.
I'm currently a "woodworker" in suburban Houston, but I usually only average $500-ish per month (heavily loaded toward a couple of times of year - Mid Spring, Back to School, Christmas).
Hey we can't ALL have the same plan, man! The tech oligarchs that own all of society's wealth will only need so much bespoke furniture!
More seriously, my plan had been to build a decent savings and go work for the USPS or the local public transit company, supplemented with savings interest and maybe some woodworking income. But then the 2024 election happened so who fucking knows anymore.
Nah, I plan on owning a chunk of acres and starting my own homestead. Raise a few animals like some sheep, goats, rabbits, and of course chickens. Build a nice greenhouse and large garden. Just need to find the right plot of land before making it a serious go. Oh, and can't forget the first step: gotta win the lottery to pay for it. Not all of us have those cushy FAANG salaries.
There are rural land loan companies. Or buy with friends or family. I know people who've done the above and don't tell anybody but it's working out fine.
> The point is to not owe people money so that you are self sufficient and not need to make money.
Trouble is, if you become self-sufficient the government will soon swoop in and try to take a piece of the pie, and now you're back to owing people money again.
The ideal is to owe so much money that you become "too big to fail". Then you get all the benefits of being self-sufficient, but with nothing on paper to share.
there's a difference between rendering unto Caesar what is Caesar's vs owing a bank for a mortgage, owing a utility company for power/water, paying for 100% of your food.
sure, there's no getting around taxes, but today, there's no getting around having some sort of service provider. you can't DIY your way into a fibre network and/or cell service. at least not legitimately.
Aren't you going to pay for food no matter what? You can pay for it indirectly by trading your time with someone else who has put in time to produce it (the way most people pay for food), or you can put your time directly into the food (how self-sufficiency would pay for it), but either way the time will be spent. There is, quite literally in this case, no free lunch.
> you can't DIY your way into a fibre network and/or cell service.
Do you need these anyway? They are arguably necessary services for participating in a modern society, but if you are self-sufficient that implies a disconnect from society.
You are trying to theorize it too much. There’s a big difference between growing part of your own food and buying it. Also there’s a big difference between wanting to rely less on anonymous service providers and not wanting to communicate with anyone.
> There’s a big difference between growing part of your own food and buying it.
Because if you grow your own food you have to worry about storage? You are right that, while not impossible to overcome (our ancestors managed), it is not the easiest problem to deal with. Especially if you like variety in your diet.
That's why I, a farmer, don't (knowingly) eat the food I grow. It is a lot easier to exchange the food for an IOU. Then, later, when I'm hungry, I can return the IOU for food. Someone who specializes in food storage can do a much better job than I can. Let them deal with it. My expertise is in growing the food.
What is even the point of growing part of your own food? I'd at least have some understanding of being fully reliant on your own food if you fear a zombie apocalypse or something, but if you remain dependent on others anyway, what have you gained? If it is that you enjoy the process of growing food like I do, you may as well become a farmer and sell the product.
I actually want to do everything you're outlining as well, but here in the northeast they're trying to sell ~1 acre for > $100,000 in a lot of desirable "rural" towns. Definitely makes it harder to get into unless you want to commit to far far north.
I've been watching a lot of TV shows that have revealed a lot of pros/cons about different areas of the country. Farther north means a much shorter growing season, which makes a greenhouse even more important. It also means a lot more infrastructure is required to keep any livestock alive during the longer winters. Places like Texas go the other direction, where the heat during the long summers is brutal, but it means early spring and late fall crops. I really wouldn't want to be any farther north than 40°.
Got to find that perfect plot of land that has water, some trees for wood, some land that can be farmed and hopefully a clear sky to the south for solar. Oh, and it's gonna need to be far enough from a city so my telescope can finally be used the way it was meant.
> go work for the USPS or the local public transit company
Those are very honest and useful jobs for society. More so than being paid $250,000 a year to design the next startup frontend by copy-pasting some template.
There was /r/SubSimulatorGPT2, but you're telling me people would PAY for that? Maybe if you tricked them - which is arguably what Reddit is doing.
Social media's always been about giving people whatever makes them come back to the site, no matter how unethical. If an army of fake fans makes me think I have an army of real fans and keep posting for attention from my fans, they will totally do that. Unless it's illegal.
What you gotta understand is this world is going straight to hell, humanity might not be around much longer. Might as well embrace the chaos and enjoy the ride down. No point in being a Luddite now, the time for that was decades ago.
In George Orwell's 1984, there is a machine called the versificator that generates music and literature without any human intervention, presumably for the "entertainment" of the proletarians.
Each time I think I've seen dystopia and the pinnacle of stupidity someone finds a new way to top it. Either that's an amazing superpower, or I'm infected with incurable optimism.
Also, if the value in grok is reposting stupid things, I don't see how it adds any value to have it embedded in the social network. You could just as easily ridicule Gemini this way?
Also, Twitter generates viral tweets because people use Twitter. A new social network with a similarly embedded AI will just be ridiculed on Twitter and go viral there.
Data centers do use a lot of water on average. Warmer climate data centers often use evaporative cooling and can run through millions of gallons of water to offload heat. Google's data centers combined chewed through some 6 billion gallons of water last year.
Closed systems are much more water-efficient, and are more common in cooler climates.
I would say sending rockets to space is currently orthogonal to the issues we are facing as a species. A decade ago, I would have said the big issue was finding a way to live without destroying ecosystems, but apparently finding a way to live without a major war is also a hot topic now.
This kind of news should be a death-knell for OpenAI.
If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
An alternative is that OpenAI is being quickly locked out of sources of human interaction because of competition, and one way to "fix" that is to build your own meadow for data cows.
xAI isn't allowing people to use the Twitter feed to train AI
Google is keeping its properties for Gemini
Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.
I came across a quote in a forum which was part of a discussion around why corporate messaging and pandering has gotten so crazy lately. One comment stuck out as especially interesting, and I'll quote it in full below:
---
C suites are usually made up of really out of touch and weird workaholics, because that is what it takes to make it to the C suite of a large company. They buy DSS (decision support service / software) from vendors, usually marketing groups, that basically tell them what is in and what isn't. Many marketing companies are now getting that data from twitter and reddit, and portraying it as the broad social trend. This is a problem because twitter and reddit are both extremely curated and censored, and the tone of the conversation there is really artificial, and can lead to really bad conclusions.
---
This is only somewhat related, but if OpenAI did actually succeed in building their own successful social media platform (doubtful) they would be basing a lot of their model on whatever subset of people wanted to be part of the OpenAI social media platform. The opportunity for both mundane and malicious bias in models there seems huge.
Somewhat related, apparently a lot of English spellings were standardized by the invention of the printing press. This isn't surprising; it was one of the first technologies to really democratize written materials, and so it had a very outsized power to set standards. LLMs feel like they could be a bit like this, particularly if everyone continues the current trend of intentionally building reliance on them into their products / companies / workflows. As a real-life example, someone at work realized you could ask Copilot to rate the professionalism of your communication during a meeting. This seems quite chilling, since you're not really rating your professionalism, but measuring yourself against whatever weird bell curve exists in Copilot.
I'm absolutely baffled that LLMs are seeing broad adoption, and absolutely baffled that people are intentionally adopting and integrating them into their lives. I'm in my early 40s now. I'm not sure I can get out of the tech field at this point, but I'm seriously thinking about my options.
I'm 40 too and used to laugh at all those programmers who were switching to woodworking for a living. Now that vibe coding is being advertised all over the place (and will most likely be bought by the CEOs), and given the trend of stealing open-source software without attribution while making fun of people who are proud of their knowledge and craft, I'm starting to think that all those future woodworkers may not be wrong.
Whatever happens, it will be the end of programming as I know it, but it cannot end well.
The assumption that you can just build a successful social network as an aside because you need access to data seems wildly optimistic. Next will be Netflix announcing working on AGI because lately show writers have been not very imaginative, and they need fresh content to keep subscribers.
I would just like to appreciate your imagery and wordplay here, it’s spot on and I think should be our standard for conceptualizing this corporate behavior.
They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.
One distinctive quality I've observed with OpenAI's models (at least with the cheapest tiers of 3, 4, and o3) is their human-like face-saving when confronted with things they've answered incorrectly.
Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame, even when it's an inarguable factual error about conceptually non-heated things like API methods.
It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it (perhaps too eagerly).
I have wondered whether this is something it's learned from training on places like Reddit, or whether OpenAI deliberately taught it or instructed it via system prompts to seem more infallible, or whether models like Claude were tuned to deliberately reduce that tendency.
> It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it
I don't know what's better here. ChatGPT did have a tendency to reply with things like "Oh, I'm sorry, you are right that x is wrong because of y. Instead of x, you should do x"
> Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame
Human-level AI is closer than I'd realised... at this rate it'll have a seat in the senate by 2030.
the upside of reddit data is you have updoots and downdoots, so you can positively and negatively train your AI model on what people would typically upvote, and train against what they might downvote
Now, that's the upside; the downside is you end up with an AI catering to the typical redditor. Since many claims there are formed on the basis of "confident, sounds reasonable, speaks with authority, gets angry when people disagree", hallucinations happen. Rather, we want something like "produces evidence-based claims with unbiased data sources".
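For the curious, the usual way to turn that vote signal into training is a pairwise reward model. A minimal sketch in PyTorch (all names are illustrative, not anything Reddit or OpenAI actually ship):

    import torch.nn.functional as F

    def preference_loss(reward_model, prompt, upvoted, downvoted):
        # Score both replies to the same prompt with a scalar reward head.
        r_up = reward_model(prompt, upvoted)      # shape: (batch,)
        r_down = reward_model(prompt, downvoted)  # shape: (batch,)
        # Bradley-Terry objective: raise P(upvoted outranks downvoted),
        # i.e. train toward the updoots and away from the downdoots.
        return -F.logsigmoid(r_up - r_down).mean()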
But this is the opposite of fair use. They're licensing the content, which means they're paying for it in some fashion, not just scraping it and calling it fair use.
If you don't like the fair use of open information, I would expect you to be cheering this rather than losing respect for those involved.
Again, like others have contested with you, how is this The Guardian's fault to have issue with? They convinced ClosedAI to give them money in a licensing deal to use their content as training data without having it scraped for free.
Your sense of injustice or whatevs you want to call it is aimed in the opposite direction.
If this were their plan, they’d be discounting that some of their users would be controlled by their own AI.
My guess is that they’re trying other things to diversify themselves and/or to try to keep investors interested. Whether or not it works is irrelevant as long as they can convince others it will increase their usage.
> Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
sama probably would like to take Satya's seat, in what he no doubt sees as unblocking the path to utopia. The slight problem is that he's becoming a bit lonely in that thinking.
I have always wondered whether such sociopaths actually believe they are doing something for noble reasons, or consciously use it as a trick in their quest for power.
But don't they have ChatGPT, the fifth or whatever most popular website on the planet? And deals with Reddit. Sure, that can't touch the treasure trove Google is sitting on; xAI sure won't give them access, and GitHub could perhaps sell their data (but that's a maybe).
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Death-knell? Maybe… but I wouldn’t read into it. I’d be looking more at their key employees leaving. That’s what kills companies.
> I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Even if you believe all that to be true, it in no way contradicts what you quoted or makes it unfair. Having a kick ass product and good brand awareness in no way correlates to being close to AGI.
- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are too high and need to go much higher to turn a profit.
- Their brand value is terrible. Many people loathe AI for what it's going to do to jobs, and the people who like it are just as happy to use Copilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.
- Many key employees have already left or been forced out.
Counterpoint: ChatGPT as a brand has insane mindshare and buy-in. It is synonymous with LLMs/"AI" in the minds of many and has broken through like few brands before it. That ain’t nothing.
Counter-counterpoint: I still feel investors priced in a bit more than that. Yahoo! had major buy-in as well, and AGI believers were selling investors not just on the next unicorn but on the next industry - AGI being not merely a Google in the '90s, but all of the internet and what it would become over the decades to this day. Anything less than delivering that is not what a large part of investors bought. But then again, any bubble has to burst someday.
My dad uses ChatGPT for some excel macros. He’s ~70, and not really into tech news. Same with my mom, but for more casual stuff. You’re underestimating how prevalent the usage is across “normies” who really don’t care about second order effects in terms of employment and etc.
But I'd be stupid not to use it. It has made boilerplate work so much easier. It's even added an interesting element where I use it to brainstorm.
Even most haters will end up using it.
I think eventually people will realize it's not replacing anyone directly and is really just a productivity multiplier. People were worried that email would remove jobs from the economy too.
I'm not convinced our general AIs are going to get much better than they are now. It feels like where we are at with mobile phones.
To be fair, it's not doing anything to the job market; it's just being used as an excuse. Very few tech jobs have truly been replaced by AI; it's just an easy excuse for layoffs due to recession/mismanagement etc.
It has affected graphic design and copywriting jobs quite a lot. Software engineering is still a high-barrier-to-entry job, so it'll take some time. But the pressure is already here.
Sam Altman is incredibly popular with young people on TikTok. He cured homework - mostly for free - and has a nice haircut. Videos of him driving his McLaren have comment sections in near total agreement that he deserves it.
People argue about the damage COVID lockdowns did to education, but surely we're staring down the barrel of a bigger problem where people can go through school without ever having to think or work for themselves.
Not entirely sure what they meant, but ChatGPT and the like have forced schools to stop relying on homework for grades and instead shift to assignments done more like a mini exam. It's more work for the school, but you can't substitute ChatGPT for your knowledge in such cases; you actually need to know the material to succeed.
Well, that may be, but it's entirely possible that this outlook might change.
In the worst case he's poisoned an entire generation. If ChatGPT doctors, architects and engineers are anything like ChatGPT "vibe-coders" we're fucked.
It might not be the best, but people you'd never have thought would use it are using it. Many non-technical people in my circle are all over it, and they have never even heard of Claude. They also wouldn't understand why people would loathe AI, because they simply don't understand those reasons.
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.
Bingo. The secret sauce is never a sustainable long term moat. Things leak, competitors copy, or they make even better things, employees switch jobs. AI looks less vulnerable, since it’s academically difficult and expertise is still limited. But time and time again, new secret sauce ingredients last for months at a maximum, before they are exceeded oftentimes by hobbyists or other small actors.
I remember at Google they said that if source code leaked, nobody was actually worried about the tech being stolen; the vast majority of code is open to all employees, with some exceptions like spam and ranking. It's still protected, but not considered moat.
The moat comes from other things, such as datacenter and global networking infrastructure, and marketing new products by pushing them through existing products (put Gemini in Search, add Chrome to Android, etc.). Most importantly, you can use data you already have to bootstrap new products, say Gmail and Calendar integrated with personalized assistants.
If you play your cards right, yes there’s some first-mover advantages, but they are more superficial than your average Twitter hype thread makes you think. It can give you the ability to set unofficial standards and APIs, like Kubernetes, S3 – (maybe OpenAI APIs?). And you can set certain agendas and market your name for recognition and trust. But all that can slip through your fingers if a behemoth picks up where you left off. They have so many advantages, except for being the fastest.
In fairness, the AGI definition predicted by doomsday safety experts who don’t have time for such unimportant concerns as copyright or misinformation via this tech, everyone who is most certainly not merely hyping to get investor cash and the utterly serious and scientifically grounded research happening at MIRI, is essentially that one company will achieve AGI, shortly thereafter that will lead to a singularity and that’s that. No one else could create a second because that scary AGI is so powerful and would prevent anyone else from shutting it down, including other AGI. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I’m looking for someone to help me prevent OAI from killing my own grandfather, cause that scenario is also incredibly likely and should be taken seriously.
If it’s not obvious already: I believe we are far from that with LLMs, and that those working in model safety who give their attention to AGI over current-day concerns, like Meta just torrenting for model data, are not very serious people. But I have accepted that this isn’t a popular opinion amongst industry professionals.
Not least because letting laws prevent them from model training is bad according to them, either for the same weird logic Musk uses to justify testing FSD Betas on an unwilling public due to the potential to prevent future deaths, or because they genuinely took RB seriously. No idea what’s worse for serious adults…
The funny thing is that nobody stops to ask if that version of AGI is even possible. For example, it would have to run a simulation of reality with enough accuracy, faster than reality happens, and also multiple simulations in parallel. For physical-world interaction this is doable, but when it comes to predicting human behavior with enough accuracy, it's probably not.
So even if we manage to come up with an algorithm that self learns, self expands, and so on, it likely won't be without major flaws.
AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (and one yet to be delivered).
I used to think like this but after seeing the amount of money invested into crypto companies which most average people could have quickly dismissed as irrelevant, I'm not sure VCs are a good judge of value.
That's not an entirely fair take. Once upon a time it was believed that cryptocurrencies would be able to supplant payment processors (i.e. Visa, Mastercard). We even had a rollout of crypto ATMs to capitalize on the belief in that network. Payment processing is a money-printing business, so it wasn't unreasonable for VC money to chase it.
Sure, those intimately familiar with the technology may have always known that it wouldn't work, but the average person absolutely did not. They only came to learn that it wouldn't work as the cryptocurrency ventures proved that it wouldn't work.
The pivot to being some kind of tool to store wealth, or a way to trade cartoon monkeys, or whatever as a last ditch effort to save the companies that failed to turn it into payment processing technology was more obviously irrelevant, but that came much, much later. The VCs were desperately seeking an exit by that point.
We already know fusion exists and happens and has been happening for billions of years - it's well understood. It may not be simple to reproduce and sustain on earth, but there is a fairly clear path to get there.
There has never been an AGI, and there's no clear path to get there. And no, LLMs are not bringing us any closer to a real AGI, no matter how much Sam Altman wants to try to redefine what "AGI" means so that he can claim he has it.
The difference between AGI funding and fusion funding is the people hawking AGI are better liars and the people funding are far more gullible than the people trying to make fusion power a reality.
We already know fusion has been a grind. For decades.
We already know that computation has grown exponentially in scale and cost, for decades. With breakthrough software tech showing up reliably as it harnesses greater computational power, even if unpredictably and sparsely.
If we view AI across decades, it has reliably improved. Even through its “winters”. It just took time to hit thresholds of usefulness making relatively smooth progress look very random from a consumers point of view.
As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.
>We already know fusion has been a grind. For decades.
Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI. No, LLMs are not AGI and cannot be AGI. They may be a component of an eventual AGI in the future, but we're still far, far away from artificial general intelligence. LLMs are not intelligent, but they do fool a lot of people.
With the paltry amount of money spent on fusion, it's no wonder it's been delayed for so long. Maybe try putting the $1 Trillion into it in the short time "AI" has had its windfall, and let's see what happens with fusion.
>If we view AI across decades, it has reliably improved
That's your opinion. It's still not much better than Eliza. And LLMs lie half the time, just to tell the user what they want to hear.
>As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.
And it's still going to "hallucinate" and give wrong answers a lot of the time because the fundamental tech behind LLMs is a bit flawed and is in no way even close to AGI.
>That just hasn’t been the case for fusion.
And yet we have actual fusion reactions happening now, sustained for 22 minutes. We're practically on the threshold of having fusion power, but still there is nowhere near the money being spent on it compared to LLMs, which won't be capable of AGI in spite of the marketing line you've been told.
> Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI.
"Not there yet" isn't an argument against achieving anything. AGI and fusion included. It just means we are not there yet.
That argument is a classic fallacy.
Second, neural network capabilities have made steady gains since the '80s, in lockstep with compute power, with additional benefits from architecture and algorithm improvements.
From a consumer's, VC's, or other technologist's point of view, nothing might have seemed to happen. Successful applications were much smaller, and infrequently sexy. But problem sizes grew exponentially nevertheless.
You are arguing that unrelenting neural network progress over decades, highly coupled to the exponential growth in computing for three quarters of a century, will suddenly stop or pause, because ... you didn't track the field until recently?
Or the recent step was so big (models that can converse about virtually any topics known to humankind, in novel combinations, with hardly a mid-step), that it is easy to overlook the steady progress that enabled it?
Gains don't always look the same, because of threshold effects. And thresholds of tech vs. solution are very hard to predict. But increases in compute capacity are (relatively) easy to predict.
It may be that for a few years, efficiency gains and specialized capabilities are consolidated before another big step in general capability. Or another big step is just around the corner.
The only thing that has changed in terms of steady progress, is the intensity and number of minds working on improvements now is 1000 times what it was even a few years ago.
My apologies if pressing my point came over too strong. I will take that as a lesson for me.
I have not made any claims that fusion is unachievable. Just pointed out that the history of the two technologies couldn’t be more different.
There is no reason to believe fusion isn’t making progress, or won’t be successful, despite the challenges.
> you didn't track the field until recently?
That was a question not an assumption.
If you are aware of the progression of actual neural network use (not just research) from the '80s on, or for any shorter significant time scale, great. Then you know its first successes were on toy problems, but it’s been a steady drumbeat of larger, more difficult, more general problems getting solved with higher-quality solutions every year since. Often quirky problems in random fields. Then significant acceleration with Nvidia’s intro of general compute on graphics cards, and researcher-hosted leaderboards tracking progress more visibly. Only recently solving problems of a magnitude that is relevant to the general public.
Either you were not aware of that, which is no crime. Or dismissing it, or perhaps simply not addressing it directly.
If you have a credible reason that continued compute and architecture exploration is going to stop what has been exponential progress, for 40 years, I want to hear it.
(Not being aggressive, I mean I really would be interested in that. Even if it’s novel and/or conjectural.
Chalk up any steam on my part to being highly motivated to consider alternative reasoning. Not an attempt to change your mind, but to understand.)
Capital gains on AI investment doesn't need a breakthrough. I'm sure pharma companies and others are building their own custom models to assist R&D, but those aren't the ones sinking the truly huge sums into consumer-facing LLMs.
Maybe it means investors think it will happen, maybe it means investors simply think it will be profitable. It could even be investors seeking a market that has juice long after ZIRP dried up.
The bubble won't burst. LLMs are very useful. There just hasn't been enough optimization for general use case, which is kinda how all tech starts. Eventually someone will make a more optimized lower parameter model that has decent accuracy, and can be run on lower grade hardware, that will be a pretty good assistant with manual wrappers around automation.
This is the same llm-splaining that practically every enthusiast does. It's either "more hardware" or "more optimization" will solve all the problems and bring us into a new era of blah blah blah. But it won't. It won't solve LLM slop or hallucinations, which are inherent to how LLMs work, and why that bubble is already looking pretty unstable.
The goal isn't to build perfect assistants with LLMs. The goal is to build something that is just good enough to be useful. It doesn't even need to be a full-fledged conversational LLM. It could just be an LM.
Lots of tasks don't require knowledge to be encoded into the model. For example "summarize my emails" is a task that can be done with a fairly small model trained on just basic text.
There are also unexplored avenues. For example, if I had the hardware to do this, I would basically take an early version of GPT, and then start training it on additional data, and when the training run completes, I would diff the model with the original version, and use that as the training set of another model. Basically build a model on top of GPT that can automatically adjust parameter weights and encode it into the model, thus giving it persistent memory.
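For what it's worth, the "diff" step there is cheap to sketch. A minimal version, assuming two PyTorch state dicts with identical architecture (paths and names are hypothetical):

    import torch

    base = torch.load("gpt_base.pt")    # state_dict: param name -> tensor
    tuned = torch.load("gpt_tuned.pt")  # same model after the extra training

    # The "diff" is the update the extra training applied to each parameter.
    delta = {name: tuned[name] - base[name] for name in base}

    # The idea above: use these deltas as training targets for a second
    # model that learns to emit weight updates (persistent memory) directly.
    torch.save(delta, "gpt_delta.pt")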
Once we unlock fusion, humanity starts down a millennia-long path to making the planet permanently uninhabitable (not just temporarily, like with climate change) by turning all the water into bitcoins.
"There are 1,335,000,000,000,000,000,000 liters of water in all the oceans."
"A large fusion power plant, generating 1,500 megawatts of electricity, would consume approximately 600 grams of tritium and 400 grams of deuterium each day. A 1,000 MW coal-fired power plant requires 2.7 million tonnes of coal per year, while a similar fusion plant would need only 250 kg of fuel per year (half deuterium, half tritium)."
You do the math. I'm personally not worried about running out of water due to fusion.
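Back-of-envelope with the quoted figures (the only number added here is deuterium abundance, roughly 35 mg per kg of seawater; tritium isn't drawn from the ocean at all, it's bred from lithium):

    ocean_kg = 1.335e21                    # liters from the quote, ~1 kg each
    deuterium_kg = ocean_kg * 3.5e-5       # ~35 mg of deuterium per kg of water

    plants = 1_000_000                     # a deliberately absurd build-out
    burn_kg_per_year = plants * 0.4 * 365  # 400 g of deuterium per day, each

    print(deuterium_kg / burn_kg_per_year) # ~3e8 years to drain the oceans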
AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed this sometime in autumn last year, when the would-be GPT-5 was unimpressive despite its huge size. I still think GPT-4.5 was GPT-5, just rebranded to set expectations.
Google Gemini 2.5 Pro was remarkably good and I’m not sure how they did it. It’s like an elite athlete doing a jump forward despite harsh competition. They probably have excellent training methodology and data quality.
DeepSeek made huge inroads in affordability…
But even with those, intelligence itself is seeing diminishing returns while training costs are not.
So OpenAI _needs_ to diversify - somehow. If they rely on intelligence alone, then they’re toast. So they can’t.
I tentatively agree that LLMs have reached somewhat of a ceiling, and diversifying would make sense at this stage in any other industry. But as others pointed out, OAI and others have attached their valuation directly to their definition of achieving “AGI”. Any pivot from that, if it were realistic in the coming years (my opinion: it isn’t), would be foolhardy and go against investors; so in turn, this is clearly admitting that even sama doesn’t see AGI as possible in the near term.
Adding social media to your thing is so 2018. Is the next big thing really just a warmed over version of the last big thing? Is sama just completely out of ideas to save his money-burner?
It's an easy thing to slap on to a service with lots of users. Back in the day this would be called a 'message board'. "Social media" requires the use of iframes that can be embedded on 3rd party sites. OpenAI is a login-only environment so I can see them going for a Discord-type of platform rather than something that spreads to the open web.
I think it might just be about distribution. Grok gets a lot of interesting opportunities for distribution over X; then throw in the way people reacted to the new 4o image-gen capabilities.
On the other hand, if you knew AGI was on the near horizon, you'd know that AGI will want to have friends to remain happy. You can give AGI a physical form so it can walk down to the bar – or you can, much more simply, give it an online social network.
Someone down below mentioned ads, and I think that might well be the route they're going to try: charging advertisers to influence the output of the AI.
As for whether it will work, I don't know how they're possibly going to get the "seed community" which will encourage others to join up. Maybe they're hoping that all the people making slop posts on other social networks want to cut out the middleman and have communities of people who actually enjoy that. As always, the sfw/nsfw censorship line will be an important definer, and I can't imagine them choosing NSFW.
> Now they've hamstrung themselves into this AGI nonsense to try
AFAIK, they've been on the AGI hype-train for a very long time, before they reached mainstream popularity for sure. From their own blog (2020 - https://openai.com/index/organizational-update/), here is a mention of their "mission":
> We’re proud of these and other research breakthroughs by our team, all made as part of our mission to achieve general-purpose AI that is safe and reliable, and which benefits all humanity.
I'm not sure OpenAI trying to reach AGI is a "strategic mistake" as much as "the basis for the business" (which, to be fair, was a non-profit organization initially).
There could be a too-many-cooks problem in the AI research part of their work.
Also, I don't think sama thinks like a typical large-org manager. OpenAI has enough money to have all sorts of products/labs that are startup-like. No reason to stand by waiting for the research work.
>One idea behind the OpenAI social prototype, we’ve heard, is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
This would be a decent PR stunt, but would such a platform offer anything of value?
It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.
Who are we kidding. It's going to be Will Smith eating spaghetti all the way down.
An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.
Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of "value" are.
The current maximalist "Use AI to generate as much as possible" trend is the opposite of social intelligence.
I think that's right. Twitter without ads, showing you content you _do_ want to see using some embeddings magic, with decent blocking mechanisms, and not being run as a personal mouthpiece by the world's most unpopular man ... certainly not the worst idea.
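The "embeddings magic" part is genuinely cheap to prototype. A sketch of a preference-ranked feed, assuming the sentence-transformers library (model choice and data are placeholders):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    liked = ["posts the user actually engaged with", "..."]
    candidates = ["every new post in the firehose", "..."]

    # Profile = mean embedding of liked posts; rank candidates against it.
    profile = model.encode(liked, normalize_embeddings=True).mean(axis=0)
    vecs = model.encode(candidates, normalize_embeddings=True)
    scores = vecs @ profile  # cosine similarity, as vectors are unit-length

    feed = [post for _, post in sorted(zip(scores, candidates), reverse=True)]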
It's a nice idea in principle, but would probably immediately become a way by the admins to promote some views and discourage others with the excuse of some opinions being of lower quality.
Why would AI be any better at filtering out spam than developers have so far been with ML?
The only way to avoid spam is to actually make a social network for humans, and the only way to do so is to verify each account belongs to a single human. The only way I've found that this can be done is by using passports[0].
I've never been comfortable with this idea that people should use their real identity online. Sure they can if they choose to, but IMO it absolutely shouldn't be required or expected.
The idea that I would give a copy of my passport to a social media company just to sign up, and that the social media company has access to verify the validity of the passport with the issuing government, just feels very wrong to me.
I agree. That’s why onlyhumanhub doesn’t expect you to share your name. The passport verification is there to ensure you are a unique human, but the name of that human is not stored.
I’m perfectly happy talking to someone without knowing their real name. I just want to be more confident that they’re a unique human, and not just another sock puppet account run by some Russian agent (or evil corporation) trying to change people’s beliefs at scale.
There's almost no time investment in building onlyhumanhub. It's only a few months old (based on the copyright), has an effectively text-only homepage, and account creation which I assume allows you to upload photos of your passport and link your existing social media profiles.
There are so many ways that could go wrong, from this being a phishing attack to this being a well intended project that happens to create a database linking passport IDs to all of a person's social media accounts.
The idea that they may eventually offer a social media platform that doesn't require public use of your real identity is all well and good, but they're still a honeypot for doxing.
I agree that trust is a problem. I try to be as transparent as possible around how your passport data is used and what is stored in the database. Far more than what ordinary banks/trading apps say when they ask you for a passport.
Hah, well sorry for my confusion there! I didn't realize it was yours so that definitely clears up why you'd trust it.
While I have you here I am curious what's evolved in validating passports? Is it as simple as a unified API run by some service to validate, or an API per country?
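As far as I know there's no unified validation API, but two pieces are standardized: the machine-readable zone (MRZ) carries ICAO Doc 9303 check digits anyone can verify offline, and e-passport chips carry signatures that chain up to country certificates distributed through ICAO's Public Key Directory. The offline check is a few lines:

    def mrz_check_digit(field: str) -> int:
        # ICAO Doc 9303: weights 7,3,1 repeating; digits keep their value,
        # letters map to A=10..Z=35, and the '<' filler counts as 0.
        def val(ch):
            if ch.isdigit():
                return int(ch)
            if ch.isalpha():
                return ord(ch.upper()) - ord("A") + 10
            return 0
        return sum(val(c) * (7, 3, 1)[i % 3] for i, c in enumerate(field)) % 10

    # Sample document number from the 9303 spec; its printed check digit is 6.
    assert mrz_check_digit("L898902C3") == 6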
Not strictly, but Debian, where member inclusion is done through an in-person chain-of-trust process, so you have clusters of people who know each other offline as a basis.
Also, most WhatsApp contacts have been exchanged IRL, I presume.
You can always get around identification requirements, for example by purchasing a fake passport in this case. The idea is to increase the cost/friction of doing so as much as possible.
A fake ID is a lot harder to get your hands on than a new email, burner phone, etc.
Yes, this does mean that dual nationals can have two separate human accounts. But it’s still better than an infinite number of accounts, which is the case for social networks right now.
No, nothing of value. If you ever want to lose faith in the future of humanity search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.
I was logged in but it wasn't showing, but it's showing now.
> confidently incorrect
I disagree. Grok had a crack and got it wrong. LLMs get things wrong sometimes.
Besides, it said "likely from Species", which is guesswork. The original post is garbage. "Chimp the fuck out"... I don't even know what that means, so Grok didn't have much to go on by analysing the "green text".
the worst is like a dozen people in the replies to a post asking Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.
Before we get too excited with disparaging those seeking summaries, it's common for people of all levels to want summary information. It doesn't mean they want everything summarized or are bad people.
I'm not particularly interested in "tariffs, what are they good for, what's the history and examples good or bad"... so I asked for a summary from grok. It gave me a decent summary. Concise and structured. I asked a few follow-ups, then went on with my life knowing a little more than nothing about tariffs. A win for summarized information.
> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And if this need exists, fulfilling it is still better, because otherwise these kind of people wouldn't get this information at all.
This is worse because the AI slop is full of hallucinations which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information while practicing the ability to critically use it is better than bad information in excess.
You also can get Grok to fact check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not realtime as it can sometimes take up to an hour to respond, but I've found it to be pretty level headed in its responses. I use this feature often.
True. I see that too. It's a good addition to community notes. It can correctly evaluate "partially true" posts and those lacking details, so it's great at spotting cherry-picked information.
People sort of expect it to 100% subscribe to the right wing dogma now, but Elon apparently wasn't joking about it being "truth seeking". It seems pretty impartial to me, on some topics even "woke".
I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
But I really don't see why anyone would even use an OpenAI "social network" in the first place.
It does allow one thing for OpenAI, other than training data (which admittedly will probably be pretty low quality): it is a natural venue for ad sales.
Social media is a plague, including LinkedIn. Anything that lets you follow others and/or erodes your anonymity is just different degrees of cancer waiting to happen.
The best I ever enjoyed the internet was the sweet spot between dial up and DSL where I was gaming in text based/turn based games, talking on forums, and chatting using IRC.
Agreed. I wasn't particularly hooked, didn't use it very much already. As an architect, designer, and professor I had ig, and for the last five years basically only for work. But the feeling of freedom in its absence these past few months has been palpable.
Early fb reconnecting with people I hadn't seen since high school was okay. The blog / Google Reader era happening at the same time was the real golden age for me. And it's been all downhill since.
I use both, and even Reddit has issues. Likewise, HN is topically more like a single, insufficiently modded subreddit. Still enjoyable, but easily brigaded on certain topics. This is most readily seen for anything political.
The term "eternal september" dates back to the 90s, referring to the phenomenon where new undergraduates would arrive and suddenly have access to USENET to make bad posts.
> I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
Meta/Twitter/etc. are drug dealers.
> But I really don't see why anyone would even use an OpenAI "social network" in the first place.
I really don't see why anyone would even use heroin, yet they do.
Oh I get one thing - other than ads. So the idea of an LLM filter to algorithmically tailor your own consumption has some utility.
The logical application would be an existing social network -using- ChatGPT to do this.
But all the existing ones have their own models, so if they can't plug in to an existing one like goooooogle did to yahoo in the olden days, they have to start their own.
That makes a certain amount of (backward) sense for them. I don't think it'll work. But there's some logic if you're looking from -their- worldview.
Isn't the selling point behind Bluesky that you can customize your feed your way? I don't know the tech behind that, but the feed is "open", isn't it? Can they plug into that?
Social media is this generation's cigarettes. It feels good to use it for a bit, and there's an enormous amount of advertising for it and social pressure to use it, but it's extremely addictive and the long-term personal and public health consequences are absolutely crushing.
I hope one day we can strictly regulate social media and make pariahs of the people who built it, as we did with tobacco. Instead, we just did the equivalent of handing the entire federal government into Philip Morris's control, so my hopes are not high.
Profile photo for what? Social media? Then you need to be there, which is self-defeating. If you leave your account dormant, no one will find it anyway but the owners of the platforms will still use you to count the number of users.
I mean 90% of users want to quit, but can't. I'm just proposing a way to fight back.
The more people change their profile photos to the campaign logo, the less attractive social media becomes. Maybe at some point even the very addicted users will quit.
90% is implausibly high. If that many people wanted to quit social media, social media would no longer exist.
> I'm just proposing a way to fight back.
I propose an alternative is to just quit yourself and get on with your life. As the people around you understand you are happy without social media, they too can become interested and quit. Social media only works while there are people on it. The fewer of us there are, the less interesting it is.
How many times have there been campaigns to delete social media accounts? They never last, and often even the organisers come back. I don’t see a reason why this time it would work.
All that said, far from me to discourage you. If you feel it’s a worthy goal, I encourage you to do it. I genuinely hope I’m wrong and that you’ll succeed where others have failed.
HN skips many of the dark patterns that other social media have:
- no infinite scroll (you have to click on "More")
- no personalized recommendations
- no feedback loop between your upvotes and the feed
- no messaging or following between users
HN looks a lot more like newsgroups from back in the day.
The comment threads are basically "infinite scroll". They're paginated once they hit 500 or something. Scanning a SM post takes 2 seconds, HN comments much longer.
But I can’t follow them.
I don’t get notifications when they post new links or comments, I can’t send them specifically my links and comments.
I have no groups or circles.
HN is more of a discussion forum and not for connecting with others.
Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.
…or alternatively it’s not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
The Culture presents such a tempting world view for the type of people who populate HN.
I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
That's why it's so important to reduce all of your personal data points online. Imagine what they can reconstruct based on their modeling and comparing you to similar users. I have 60 years of involuntary data collection ahead of me. This is not going to be fun.
Yes, I fully agree. I also think there is a future in personal memetic firewalls that will filter ideas before they hit our wetware, and filter out the tics and habits that uniquely identify us on the way out. Such a firewall would need to be much smarter than we are, but not necessarily as smart as the smartest AI trying to 'attack'. It would have perfect knowledge of its user, and as such possess information asymmetry.
> I don't even think it'll be from terminators and nuclear wars and that sort of thing
I do. And I don't even think the issue is a hostile AI. There are 8 billion people in the world. Millions of those people have severe mental issues and would destroy the world if they could. It seems highly likely to me that AI will eventually give at least one of those people the means.
That'll be great for the world's natural outsiders: those who hate pop music and dislike even tailored ads because of the creepy feeling of influence, or who don't follow any politicians because they're all out to hoodwink you.
Oh, a subset will be at risk of being artificially satisfied but your hardcore grouch will always have a special "yeah, yeah, fuck off bot" attitude.
Alternatively they can sell you anything if they can make you feel content or euphoric on their command. Get your new drug gland today, free of charge, sponsored by Blackrock
There is a bias there in action: we are assuming that the entire world is like this thing we just happen to be thinking about.
It is not.
Even if it were just a minority, there are plenty of people outside "this thing" that will profit from the ((putative) majority's) anesthesia. Or which at least will try to set the world on fire (anybody remember the elections in USA a few months ago? That was really dumb. But sometimes a dumb feat shows that one is alive, which is better than doing nothing and being taken for dead. Or it is at least good-enough peacocking to attract mates and pass on the genes, which is just an extravagant theory of mine that I'm almost certainly sure is false. And do not take this as an endorsement of DJT). I'm not being an optimist here; I've seen firsthand the result of revolutions, but it may be the least-bad outcome.
> I've transitioned from strongly believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I think we'll just die out. Everyone will be too busy having fun to have kids. It's already started in the West.
While the west has gone this way, there also seems to be a strong undercurrent (at least here in Australia) of "we can't afford to have kids (yet)".
As housing has moved further out of reach of young people, some don't seem to feel their lives are stable enough to make the leap. The trend was down anyway, but the housing crisis seems to be an aggravating factor.
This subject has been investigated a lot. In many countries governments make it much easier and more affordable to have children, and it doesn't seem to make any difference.
I agree, using housing as a source of wealth has broken a whole generation. When the boomers of the world start to massively die out (any year now), housing will deflate, but not spectacularly unless there's a crisis (people don't want to settle where the cheap houses are in a bull market).
I wouldn't call my kid-skipping activities fun, but go off.
Spending a life on the treadmill doesn't encourage more walks. It encourages burning it down. All I've known is work. Pass on more, thanks. I hear you/others now:
But past generations managed...
That's exactly my point. Despite all of our proclaimed progress, we're still "managing". Maintaining this circus/baby-crushing machine is a tough sell.
By the time I got to where I could afford to have kids, I had become both unprepared and uninterested.
What's more: I'm one of the lucky ones. I was given a fancy title and great-but-not-domicile-ownership-great pay for my sacrifice. Plenty do more work for even less reward.
> The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
We already have so many of those that it’s very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.
Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.
The Culture is about a post-capitalist utopia. You’re describing yet another cyberpunk-esque world where people still have to do wage-labor to not starve.
You’re right so I made a slight edit to separate my two ideas. Thanks for even reading them at all! I try to contribute positively to this site when I can, and riffing on the overlap between fiction and real-life — a la Doctorow — seems like a good way to be curious.
> Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
Controversial stance probably, but this very much sounds like a world I'd love to live in.
They just want the next wave of Ghibli meme clicks to go to them, really.
This will be built on the existing thread+share infra ChatGPT already has, and just allow profiles to cross-post into conversations, with UI and features more geared toward remixing each other's images.
I actually would love this. I hate having to go to another website to share some thoughts I had using tools in a platform.
I miss the days when products would actually choose to integrate other platforms into their experiences, yes I was sort of a fan of the FB/Google share button and Twitter side feed (not the tracking bits though).
I wasn't a fan of LLMs and the whole chat experience a few years ago. I'm a very mild convert now with the latest models and I'm getting some nominal benefit, so I would love to have some kind of shared chat session to brainstorm, e.g. on a platform better than Figma.
The one integration of AI that I think is actually neat is Teams + AI note taking. It's still hit or miss a lot of the time, but it at least saves a note on something important 30% of the time.
Collaboration enhancements would be a wonderful outcome in place of AGI.
The answer seems more obvious to me. They don't even care if it's competitive or if it scales all that much. xAI has a crazy data advantage firehosing Twitter, Llama has FB/IG, and ChatGPT just has, well, the internet.
I'd hope they have some clever scheme to acquire users, but ultimately they want the data.
What this tells me is that xAI / X / Grok have together become a much bigger threat to OpenAI than they anticipated. And I believe it's true -- Grok's progress has been much faster than ChatGPT's over the past year, even if we can nitpick evals over which is technically "better" at any moment. The fact that it's hard to tell is itself a huge statement considering the massive advantage OpenAI and ChatGPT had not too long ago. I mean, Grok was basically a joke when it first launched!
Feels like a natural next step, honestly. If they already have users generating tons of content via ChatGPT, hosting it natively and adding light social features might just be a way to keep people engaged and coming back. Not sure if it's meant to compete with Twitter/Instagram, or just quietly become another daily habit for users
Perhaps, but currently OpenAI is stuck sharecropping with existing social networks – producing the content but not deriving the value. It is hard to move grander visions forward when you don't own the land.
Controversial opinion: it's not about the generator of the content, human or not, but about the originality of the content itself. Humans with the help of AI will generate more good-quality content as a result.
Humans are just as good as bots in generating rubbish content, if not more so.
Twitter reduced content production cost significantly, AI can take it another step down.
At minimum, a social network where people share good prompt engineering techniques will be valuable to people who are on the hunt for prompts. Just like the Midjourney website, except creating a high quality image is no longer a trip to the beach, but a thought experiment. This will also significantly cut down the cold start friction, and in combination with some free credits, people may have more reasons to stay, as the current chat-based business model may reach its limit for revenue generation and retention, since it's just single player mode.
If we need unique valid human language outputs I'll still disagree. Most human output is garbage. Good luck on your two tasks: 1) searching for high quality content 2) de-duplicating. Both are still open problems and we're pretty bad at both. De-duping images is still a tough task, before we even begin to address the problem of semantic de-duplication.
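For what it's worth, the usual approach is a hash lookup for exact copies and embedding similarity for the semantic case, and the second half is exactly where it stays fuzzy. A minimal sketch; the embed() below is a random stand-in (a real system would use an actual text or image embedding model), and the 0.9 threshold is the arbitrary knob:

    # Near-duplicate detection sketch: exact dupes are a hash lookup,
    # semantic dupes need embeddings plus an arbitrary threshold.
    import hashlib
    import numpy as np

    def embed(text):
        # Stand-in embedding; swap in a real embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=128)
        return v / np.linalg.norm(v)

    seen_hashes, seen_vecs = set(), []

    def is_duplicate(item, threshold=0.9):
        h = hashlib.sha256(item.encode()).hexdigest()
        if h in seen_hashes:                  # exact dupe: solved problem
            return True
        v = embed(item)
        for u in seen_vecs:                   # semantic dupe: open problem
            if float(u @ v) > threshold:      # cosine sim of unit vectors
                return True
        seen_hashes.add(h)
        seen_vecs.append(v)
        return False

Pick the threshold too low and everything is a duplicate; too high and nothing is. That choice is the unsolved part.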
The idea is to let humans be humans, make a mess, debate, have their opinions, and AI comes after that and removes the herp derp from the useful parts.
As a proof of concept, copy paste this whole page, put it in an LLM and ask for an article. It will come out without junk, but will reflect a greater diversity of opinion, more arguments, will do debunking, and generally have better grounding in our positions than the original content.
So it's careful synthesis over human chats that is the end value. Humans provide that novelty and lived experience LLMs lack, LLMs provide consistent formatting and synthesis. The companies that understand that users are the source of entropy and novelty will stop trying to own the model and start trying to host the question.
Wondering why reddit doesn't generate thousands of articles per day from comment pages. It would crush traditional media in both diversity and quality. It would follow the interesting topics naturally.
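The proof of concept described above really is a dozen lines if you have API access. A rough sketch, assuming an OpenAI-style chat completions client; the model name and the prompt wording are placeholders:

    # Sketch: synthesize a comment thread into an article with an LLM.
    # Assumes an OpenAI-style API; model and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def thread_to_article(thread_text):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Turn this comment thread into a balanced article. "
                            "Keep every distinct argument, drop the noise, and "
                            "attribute disagreements instead of resolving them."},
                {"role": "user", "content": thread_text},
            ],
        )
        return resp.choices[0].message.content

    # article = thread_to_article(open("thread.txt").read())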
This is why LLMs are so helpful in writing research proposals. If I’m writing a proposal, I basically need to vibe on a concept without worrying about it sounding entirely professional / buttoned-up and concise - otherwise I get bogged down in word smithing. So I tell it about the concept I’m trying to get across, and it converts that into a more concise and punchy paragraph, while also providing critical technical feedback (if I ask) and making sure I’m using terms of art correctly. As you put so eloquently, it removes the herp derp.
And then, kind of like how LLMs are good for boilerplate code but get less helpful as you get to more specific problems, this approach works for the first few paragraphs but deteriorates the deeper in you get.
The idea is to use the LLM as a wordsmith; it doesn't conceptualize novel ideas by itself, that is your job. I would normally chat the topic until it gets it, and after that I ask it to write it up as an article.
It already is a search engine and has been for a while.
I think you don't recognize it as such because it's incorporated into the chat box, but I use chatgpt as my search engine 90% of the time and almost never use google any more.
I think the social stuff will also just be incorporated into the chat interface in the form of 'share this image', etc, and isn't going to be like twitter with a bunch of bots posting.
So Facebook is trying to get into AI (e.g. its chatbot-"user" debacle) and OpenAI wants to form its own social network. Our world is becoming the recycled shit-food of this technological ouroboros.
Okay, thinking charitably here... maybe a play at getting training data they don't have to steal? (although it does seem like rotating the ladder instead of the lightbulb...)
I think a social network is not necessarily a timeline-based product, but an LLM-native/enabled group chat can probably be a very interesting product. Remember, ChatGPT itself is already a chat.
Yes, this. That's my bet if OpenAI follows through with social features.
Extend ChatGPT to allow multiple people / friends to interact with the bot and each other. Would be an interesting UX challenge if they're able to pull it off. I frequently share chats from other platforms, but typically those platforms don't allow actual collaboration and instead clone the chat for the people I shared it with.
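Mechanically there isn't much to a shared session: one transcript with speaker labels that everyone (including the model) sees, and the model only speaks when summoned. A toy sketch, again assuming an OpenAI-style client; the @bot convention is invented:

    # Toy multi-user chat where the LLM sees the shared transcript and
    # replies only when mentioned with "@bot". API style is assumed.
    from openai import OpenAI

    client = OpenAI()
    transcript = []  # shared history, visible to everyone and the bot

    def post(user, text):
        transcript.append(f"{user}: {text}")
        if "@bot" in text:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system",
                     "content": "You are one participant in a group chat. "
                                "Reply briefly to the last message, using "
                                "the whole transcript for context."},
                    {"role": "user", "content": "\n".join(transcript)},
                ],
            )
            transcript.append(f"bot: {resp.choices[0].message.content}")

    post("alice", "should we use Postgres or SQLite for the prototype?")
    post("bob", "@bot what do you think, given we may need concurrency?")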
Gotcha, the NLP-enabled version of the good old IRC weatherbot.
For a moment I had a funnier mental image of a chat app with an input field that treats every input as a prompt, and everyone's chatting through the veil of an LLM verbosity filter.
There might be something chat RPG-like there worth trying though ...
With all the other social networks trying to keep their data private because they all want to try their own AIs, it makes sense that OpenAI would want to have its own social network that wouldn't charge them for the data. I still doubt they actually launch it.
Sounds like they are thinking about instagram, which originated as a phone app to apply filters to a camera and share with friends (like texting or emailing them or sending them a link to a hosted page), and evolved into a social network. Their new image generation feature has enough people organically sharing content that they probably are thinking about hosting that content on pages, then adding permissions + follow features to all of their existing users' accounts.
honestly it's not a terrible idea. it may be a distraction from their core purpose, but it's probably something they can test and learn from within a ~90 day cycle.
Doesn't matter. Subreddits create vast islands of value. A single sub overrun with bots is quarantined effectively.
That is why Reddit is one of my favourite social sites. It is algorithmic, but if you go to r/assholedesign you get asshole design (and an anal mod who keeps it like that). Etc.
If a front figure of big tech happens to go plant-based, that simply means a good role model for AI in this time (and for humans..). I'm for a big YES. At first I wasn't positive, due to a sense of bloating the service and risking forms of leaking, but considering that most social platforms need more ethical thinking integrated, and that social media can be done in a new smart way, making a meaningful social 'platform' is much easier than making an AI-driven search engine (which takes extreme compute to be as good as people likely expect without creating a negative opinion of ChatGPT as such, since high-quality tailored answers will take some compute..). Interaction with AI can be done much cooler than today, in co-exploration :) ..
I guess where this is all going in the long run is something with an interface similar to TikTok, where the user gives rapid feedback to train an algorithm to generate content that they "love", er, that maximally tickles their reward circuitry.
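The mechanics of that loop are depressingly simple; even a bare epsilon-greedy bandit over content styles captures the shape of it. A toy sketch, with the categories and the simulated user entirely made up:

    # Epsilon-greedy bandit: rapid like/skip feedback steers what the
    # feed generates next. Styles and the toy user are invented.
    import random

    styles = ["outrage", "cute-animals", "conspiracy", "self-improvement"]
    counts = {s: 0 for s in styles}
    values = {s: 0.0 for s in styles}   # running mean reward per style

    def pick(eps=0.1):
        if random.random() < eps:
            return random.choice(styles)     # explore
        return max(styles, key=values.get)   # exploit

    def feedback(style, liked):
        counts[style] += 1
        # incremental update of the mean reward
        values[style] += (float(liked) - values[style]) / counts[style]

    for _ in range(1000):
        s = pick()
        liked = random.random() < (0.8 if s == "outrage" else 0.3)  # toy user
        feedback(s, liked)

    print(max(styles, key=values.get))  # what the feed converges on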
Some of the comments here are so detached from my reality that I have to wonder whether I'm the crazy one.
Nobody I know would be interested in a social media platform that is just AI. Contrary to what some commenters here seem to think, most people are interested in things like upvotes and likes and whatnot... because they understand it's (mostly) humans on the other end. Without that, it would all be rather hollow, and nobody would want to participate in it. I think most people would find the insinuation that they're so shallow as to want that a bit insulting.
recent memory update and this all points in the same direction; openai is first trying to lock in their users via personalization.
this specifically targets general users with low privacy sensitivity, who don't mind chatgpt knowing their personal details, then network-effect. not sure how the new joint social network for ai and humans will be, though i doubt it will be dystopic like we can easily imagine; rather i see a network where humans talk to each other (like we do in any other sns) and can call an ai to "chime in" to the conversation.
ai will moderate / reference other posts, users / and so on. like X with grok but where users ask more than "explain this meme".
one thing is, privacy sensitive individuals will find it difficult to convince themselves to participate in the network as long as openai remains closed (ironically). though we shall wait, as they planned on revealing an open source model soon.
What if it was an un-social media? Like undo what happened over the last 5 (or more) years in social media… may be interesting, or it may just concentrate power in another company.
Seems telling that an org has arguably the leading AI, as the planet knows it at least, and still can't exist without putting ads in front of eyes. So much for the hype.
Counter opinion - What tech people don't seem to realise about AI is that putting it into a user-friendly format for the average person is what has led to this current revolution, i.e. ChatGPT
This is just the next step of that. This also doesn't stop them working on 'AGI'; you need the data as well as the models, as most people familiar with the field will know. I will be happy to be proven wrong on this prediction.
It makes no sense to build a social network nowadays.
With Mastodon and Bluesky around, users have free options. Plus X and Threads, and you can see how the market is more than saturated.
IMHO they should look into close collaboration/minority stake with Bluesky or Reddit instead. You have a huge pool of users already, without the need to build it up from the ground up.
Heck, OpenAI probably has enough money to just buy Reddit if they want.
I don't know about Reddit, but Bluesky would never in a million years partner themselves publicly with OpenAI. I can't comment on the opinions of the team themselves because I just don't know, but the users would revolt. Loudly.
I've always thought that the social networks like X and BlueSky are sort of like the distributed consciousness of society. It is what society, as a whole / in aggregate, is currently thinking about and knowing its ebbs and flows and what it responds to are important if you want to have up to date AI.
So yeah, AI integrated with a popular social network is valuable.
Both are founders of a so-called non-profit and are suing each other. Their legal arguments are public at this point. By reading them, one may understand that it's hard to choose between 'yes' and 'no' as an answer. Maybe, we could request and take into account the opinion of what they 'created' that might outlast them and their conflict, namely AI.
It’s basically a way to have actual people create content that will be used for further training of AI, in a reinforcing circle.
Also reduces the need for scraping
An idea which sounds horrifying but would probably be pretty popular: a Facebook like feed where all of your “friends” are bots and give you instant gratification, praise, and support no matter what you post. Solves the network effect because it scales from zero.
The whole value proposition of social media is that (almost) everyone you know is on it. That's why young people don't use Facebook.
They'd be better off buying one
Sam Altman is retaliating against Musk for Grok and Musk's lawsuit against OpenAI, trying to ride the wave of anti-Musk political heat, and figure out a way to pull in more training data due to copyright troubles.
If they launch, expect a big splash with many claiming it is the X-killer (i.e. the same people that claimed the same of Mastodon, Threads, and Bluesky), especially around here at HN, and then nobody will talk about it anymore after a few months.
Not-exactly-devil's-advocate: you're trying to sort content by quality. That's elitist. Also, that filtered content is worth more. You can't have only premium content.
Someone should do it anyway and make it dominant ASAP.
Let the robot run ONE SITE. You still have Twitter, Mastodon, Bluesky, Facebook, Reddit, etc., if you want a non-stop stream of political content on your feed.
Nice in theory but don’t know how practical it is to actually do.
How do you define “good”? There are obvious examples at the extremes but a chasm of ambiguity between them.
How do you compute value? If an AI takes 200 million images to train, wait let me write that out to get a better sense of the number:
200,000,000
Then what is the value of 1 image to it? Is it worth the 3 hours of human labour time put into creating it? Is it worth 1 hour of human labour time? Even at minimum wage? No, right?
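Back-of-envelope, with the training budget purely hypothetical:

    # All numbers hypothetical except the US federal minimum wage.
    training_budget = 100_000_000       # say, $100M for one big training run
    images = 200_000_000
    print(training_budget / images)     # $0.50 of "value" per image
    print(3 * 7.25)                     # $21.75: three hours at $7.25/hr

So on those made-up numbers: no, a marginal image is worth nothing like the labor that produced it.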
Hahaha they’re cooked. GPT 4.5 was a massive flop. GPT 4.1 is barely an improvement after over a year. Now they’re grasping at straws. Anyone actually in this field who wasn’t a grifter knew improvements are sigmoidal.
A social network that faithfully and intelligently curated posts according to my own continuously updated (explicit) direction would be most excellent.
But it would also juice echo chamber depth and further amplify extremist "engagement".
And the monetary incentives for OpenAI to generate most of the content, the "people", and the ads, including creative hallucinations and novel extremisms, so they directly match each of our curation directions, would enshittify the whole thing within a short minute.
--
The time has come to outlaw conflict of interest businesses that scale (the conflict).
If a startup plan includes "sales" and "customers": Green light go.
If it talks about ways to "monetize": Red trash can.
I don't believe it was well designed, it felt clunky to use, concepts weren't intuitive enough to understand after a few uses.
I tried to use it for a few months after release, always got frustrated to the point I didn't feel like reaching out to friends to be part of it.
The absurd annoyance of its marketing, pushing it into every nook and cranny of Google's products was the nail in the coffin. I'm starting to feel as annoyed by the push with Gemini, it just keeps popping up at annoying times when I want to do my work.
AI bots already make up a significant percentage of users on most social networks. Might as well just take the mask off completely--soon, we'll all be having conversations (arguments, most likely) with 'users' with no real human anywhere near them.
I've been saying for a while that the next innovation beyond TikTok, Instagram, and YouTube is to get rid of human creators entirely. Just have a 100% AI-generated slop-feed tailor made for the user.
There's already a ton of AI slop on those platforms, so we're like half way there, but what I mean is eliminating the entire idea of humans submitting content. Just never-ending hypnotic slop guided by engagement maximizing algorithms.
I wonder to what degree this move would be genuine company strategy, and to what degree it'd just be what the sub headline says: "Is Sam Altman ready to up his rivalry with Elon Musk and Mark Zuckerberg?"
Sounds like OpenAI want to further leverage their platform into a tool for directing public opinion. Or maybe this is just a middle finger directed at Elon Musk by Sam Altman, who knows.
What a dumbfuck article. It's essentially a gossip-rag level story based on a few random and meaningless tweets from a couple of billionaire douchebags.
The point would be getting real content from real people because most of the internet will soon be written by bots and that's bad for LLMs. They need new content all the time.
And IMHO that's why programming is fucked if no one writes new code anymore, and every code monkey vibe-codes all day long. But it seems not to bother programmers for some reason...
I speculated a ways back [1] that this was why Elon Musk bought Twitter. Not to "control the discourse" but to get unfettered access to real, live human thought that you can train an AI against.
My guess is OpenAI has hit limits with "produced" content (e.g., books, blog posts, etc) and think they can fill in the gaps in the LLMs ability to "think" by leveraging raw, unpolished social data (and the social graph).
But collecting more data is just the naive approach. The reason scale works is because of the way we typically scale: by collecting more data, we also tend to collect a wider variety of data and are able to collect more good quality data. But that has serious limits. You can only do this so much before you become equivalent to the naive scaling method. You can prove this yourself fairly easily: try to train a model on image classification, take one of your images, and permute one pixel at a time. You can get a huge amount of scale out of this but your network won't increase in performance. It is actually likely to decrease.
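You can mock up the data-inflation half of that experiment in a few lines; everything here is a toy, but it shows how the extra "scale" carries no new information:

    # Toy version of the pixel-permutation trap: inflate a dataset with
    # single-pixel edits that add no information.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # one "image"

    def one_pixel_variants(img, n):
        h, w = img.shape
        for _ in range(n):
            copy = img.copy()
            y, x = rng.integers(0, h), rng.integers(0, w)
            copy[y, x] = rng.integers(0, 256)  # permute a single pixel
            yield copy

    variants = list(one_pixel_variants(image, 10_000))
    print(len(variants))  # "10,000x more data"...
    # ...but every variant is ~99.9% identical to the original, so a
    # classifier trained on the pile learns nothing new (and may get worse).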
> but these jobs also keep people employed, so maybe in that sense, they are meaningful?
I know your comment is in good faith, but that argument can be used to excuse away pretty much any exploitative job.
There’s a reality distortion effect that builds up around companies and affects their employees.
The company always builds up a narrative where they are a force for good.
This narrative can be very different from what the public thinks. But a lot of the employees will always genuinely believe it.
> They did not take that well
If only people paused for a second before judging others...
Yes, I include myself, before you ask
Roblox?
That would be my God. In parts of the gaming industry working there can make you a pariah though. Not a huge portion but I've seen people get rejected based on working there and a couple other choice companies.
My guess* autocorrect
HN comments can be edited.
It's not gonna pay well for long, unless you find yourself a very tight and lucrative niche. My goal on the other hand is to buy a house in the woods by end of the year and become a woodworker before I have to be assimilated by the AI wave, maybe doing some coding by hand just for the fun of it on the side. Can't wait for "vibe coding" and "2 years of Copilot experience" to figure among the average list of job requirements.
To quote the young'uns, we are so cooked.
I bought my wife's grandma's house in rural central Louisiana so that her grandpa could retire and not have to worry about medical bills or anything like that. It already has a "woodshop"[1] on it, and when the contractor finishes (or at least starts) on repairing the years of neglect, I will also be a woodworker in the woods.
I'm currently a "woodworker" in suburban Houston, but usually only average $500-ish per month (heavily loaded toward a couple of times of year - Mid Spring, Back to School, Christmas).
1: https://www.icloud.com/photos/#/icloudlinks/08c7DUB5txbSRiCs...
I'm not sure posting a photo with EXIF location data under a full name is a good idea.
My home address is public record as well. My business locations are also. If someone wants me, I'm available for them to kill.
The evolution of the internet has been fun to watch:
1994-1998 – Don't give out any personal information online.
1998-2004 – Well, maybe just don't give out your real name.
2004-2010 – Never mind. Your full name is fine. Just stay away from sharing racy or compromising photos.
2010-2016 – Actually, the photos are okay as long as your "bathing suit area" remains modest.
2016-2020 – On second thought, bare all if you wish. But, please, don't spread misinformation.
2020-2025 – Ugh, fine. Whatever you do, do not share your exact GPS coordinates!
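On the GPS point: if you rebuild an image from its raw pixels, the EXIF block (including location) doesn't come along for the ride. A minimal sketch with Pillow; the filenames are hypothetical:

    # Strip EXIF (including GPS) by rebuilding the image from raw pixels.
    # Pillow only carries metadata over if you explicitly pass it along.
    from PIL import Image

    def strip_exif(src, dst):
        with Image.open(src) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst)

    strip_exif("woodshop.jpg", "woodshop_clean.jpg")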
> and become a woodworker
Hey we can't ALL have the same plan, man! The tech oligarchs that own all of society's wealth will only need so much bespoke furniture!
More seriously, my plan had been to build a decent savings and go work for the USPS or the local public transit company, supplemented with savings interest and maybe some woodworking income. But then the 2024 election happened so who fucking knows anymore.
Nah, I plan on owning a chunk of acres and starting my own homestead. Raise a few animals like some sheep, goats, rabbits, and of course chickens. Build a nice greenhouse and large garden. Just need to find the right plot of land before making it a serious go. Oh, and can't forget the first step: gotta win the lottery to pay for it. Not all of us have those cushy FAANG salaries.
There are rural land loan companies. Or buy with friends or family. I know people who've done the above and don't tell anybody but it's working out fine.
>There are rural land loan companies.
The point is to not owe people money so that you are self sufficient and not need to make money.
> Or buy with friends or family.
yeah, but i don't like people. /s
> The point is to not owe people money so that you are self sufficient and not need to make money.
Trouble is, if you become self-sufficient the government will soon swoop in and try to take a piece of the pie, and now you're back to owing people money again.
The ideal is to owe so much money that you become "too big to fail". Then you get all the benefits of being self-sufficient, but with nothing on paper to share.
there's a difference between rendering unto Caesar what is Caesar's vs owing a bank for a mortgage, owing a utility company for power/water, paying for 100% of your food.
sure, there's no getting around taxes, but today, there's no getting around having some sort of service provider. you can't DIY your way into a fibre network and/or cell service. at least not legitimately.
> paying for 100% of your food.
Aren't you going to pay for food no matter what? You can pay for it indirectly by trading your time with someone else who has put in time to produce it (the way most people pay for food), or you can put your time directly into the food (how self-sufficiency would pay for it), but either way the time will be spent. There is, quite literally in this case, no free lunch.
> you can't DIY your way into a fibre network and/or cell service.
Do you need these anyway? They are arguably necessary services for participating in a modern society, but if you are self-sufficient that implies a disconnect from society.
You are trying to theorize it too much. There’s a big difference between growing part of your own food and buying it. Also there’s a big difference between wanting to rely less on anonymous service providers and not wanting to communicate with anyone.
> There’s a big difference between growing part of your own food and buying it.
Because if you grow your own food you have to worry about storage? You are right that, while not impossible to overcome (our ancestors managed), it is not the easiest problem to deal with. Especially if you like variety in your diet.
That's why I, a farmer, don't (knowingly) eat the food I grow. It is a lot easier to exchange the food for an IOU. Then, later, when I'm hungry, I can return the IOU for food. Someone who specializes in food storage can do a much better job than I can. Let them deal with it. My expertise is in growing the food.
What is even the point of growing part of your own food? I'd at least have some understanding of being fully reliant on your own food if you fear a zombie apocalypse or something, but if you remain dependent on others anyway, what have you gained? If it is that you enjoy the process of growing food like I do, you may as well become a farmer and sell the product.
> not wanting to communicate with anyone.
Want and need are very different concepts.
I actually want to do everything you're outlining as well, but here in the northeast they're trying to sell ~1 acre for > $100,000 in a lot of desirable "rural" towns. Definitely makes it harder to get into unless you want to commit to far far north.
I've been watching a lot of TV shows that have revealed a lot of pros/cons about different areas of the country. Further north means a much shorter growing season, which makes a greenhouse even more important. It also means a lot more infrastructure is required to keep any livestock alive during the longer winters. Places like Texas go the other direction, where the heat during the long summers is brutal, but it means early spring and late fall crops. I really wouldn't want to be any further north than 40°.
Got to find that perfect plot of land that has water, some trees for wood, some land that can be farmed, and hopefully a clear sky to the south for solar. Oh, and it's gonna need to be far enough from a city so my telescope can finally be used the way it was meant to be.
Maybe the Shenandoah would work.
> go work for the USPS or the local public transit company
Those are very honest and useful jobs for society. More than being paid $250,000 a year to design the next startup frontend by copy-pasting some template
Agreed. The trouble is, will those agencies & jobs still exist 6 months from now?
i just bought ten acres and have already started the projects. we all crave building things i think
"Social" media isn't really social anymore, it just means any idiot with a computer can generate the media you're consuming.
Write-only media.
wouldn't it be WORM media though as the point is to have as many others read it as possible?
There was /r/SubSimulatorGPT2, but you're telling me people would PAY for that? Maybe if you tricked them - which is arguably what Reddit is doing.
Social media's always been about giving people whatever makes them come back to the site, no matter how unethical. If an army of fake fans makes me think I have an army of real fans and keeps me posting for attention from my fans, they will totally do that. Unless it's illegal.
Although that is true for Instagram style media it was always a risk for text based media.
As a PoC I’d say look at the subreddit SubSimulatorGPT2
www.reddit.com/r/SubSimulatorGPT2
The AI has better shitposts than actual humans. For Reddit, this has been true since GPT-2.
What you gotta understand is this world is going straight to hell, humanity might not be around much longer. Might as well embrace the chaos and enjoy the ride down. No point in being a Luddite now, the time for that was decades ago.
> humanity might not be around much longer.
Good riddance. Humanity is hell.
In George Orwell's 1984, there is a machine called the versificator that generates music and literature without any human intervention, presumably for the "entertainment" of the proletarians.
But “have AI help people share better content” is so indispensable! How could humanity ever survive without that?
Even better, soon none of us will have to use social media at all, our AI bots will do it for us. Then we will finally find peace.
Each time I think I've seen dystopia and the pinnacle of stupidity someone finds a new way to top it. Either that's an amazing superpower, or I'm infected with incurable optimism.
It's also very dangerous, I think. Grok is used on X to arbitrate ground truth for topics I think it has no chance of assessing.
I guess YTMND.com would've blown their mind if they had been alive and conscious 20 years ago.
I don't use X/Twitter - does anyone have an example of a viral tweet like this?
When your definition of “everyone” is like two, three guys tops
Are you not entertained?
Came here to post the same.
Also, if the value in grok is reposting stupid things, I don't see how it adds any value to have it embedded in the social network. You could just as easily ridicule Gemini this way?
Also Twitter generates viral tweets because people use Twitter. A new social network with a similarly embedded AI will just be ridiculed on Twitter and go viral there
And at the expense of consuming massive amounts of energy and depleting our resources (water, energy) at an alarming rate.
Not to say AI doesn't require a lot of power, but I don't know if we need to be alarmist about it:
https://engineeringprompts.substack.com/p/does-chatgpt-use-1...
https://simonwillison.net/2025/Jan/12/generative-ai-the-powe...
pardon my ignorance, in what way does it "consume water"? water cooling is closed loop
Fabs use water in a way that's not closed loop see e.g. [0]
[0] https://www.weforum.org/stories/2024/07/the-water-challenge-...
I see, thanks. Seems just a matter of cost to reclaim it. Maybe regulation is needed
the planet is a water closed loop. the vapor condenses as rain somewhere
Data centers do use a lot of water on average. Warmer climate data centers often use evaporative cooling and can run through millions of gallons of water to offload heat. Google's data centers combined chewed through some 6 billion gallons of water last year.
Closed systems are much more water-efficient, and are more common in cooler climates.
Do you think there is any value by sending rockets to space?
I would say sending rockets to space is currently orthogonal to the issues we are facing as a species. A decade ago, I would have said the big issue was finding a way to live without destroying ecosystems, but apparently finding a way to live without a major war is also a hot topic now.
This kind of news should be a death-knell for OpenAI.
If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
Alternative is that OpenAI is being quickly locked out of sources of human interactions because of competition; one way to "fix" that is to build your own meadow for data cows.
xAI isn't allowing people to use the Twitter feed to train AI
Google is keeping its properties for Gemini
Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.
I came across a quote in a forum which was part of a discussion around why corporate messaging and pandering has gotten so crazy lately. One comment stuck out as especially interesting, and I'll quote it in full below:
---
C suites are usually made of up really out of touch and weird workaholics, because that is what it takes to make it to the C suite of a large company. They buy DSS (decision support service / software) from vendors, usually marketing groups, that basically tell them what is in and what isn't. Many marketing companies are now getting that data from twitter and reddit, and portraying it as the broad social trend. This is a problem because twitter and reddit are both extremely curated and censored, and the tone of the conversation there is really artificial, and can lead to really bad conclusions.
---
This is only somewhat related, but if OpenAI did actually succeed in building their own successful social media platform (doubtful) they would be basing a lot of their model on whatever subset of people wanted to be part of the OpenAI social media platform. The opportunity for both mundane and malicious bias in models there seems huge.
Somewhat related, apparently a lot of English spellings were standardized by the invention of the printing press. This isn't surprising; it was one of the first technologies to really democratize written materials, and so it had a very outsized power to set standards. LLMs feel like they could be a bit like this, particularly if everyone continues with their current trends of intentionally building reliance on them into their products / companies / workflows. As a real life example, someone at work realized you could ask co-pilot to rate the professionalism of your communication during a meeting. This seems quite chilling, since you're not really rating your professionalism, but measuring yourself against whatever weird bell curve exists in co-pilot.
I'm absolutely baffled that LLMs are seeing broad adoption, and absolutely baffled that people are intentionally adopting and integrating them into their lives. I'm in my early 40s now. I'm not sure if I can get out of the tech field at this point, but I'm seriously thinking about options.
I lowkey believe that the twitterification of journalists is probably one of the worst things to happen to the country in the last 25 years.
The social harm inflicted by journalists thinking "Damn, I can just go on twitter to find out what is going on and how people feel about it!"
I'm 40 too and used to laugh at all those programmers who were switching to woodworking for a living. Now that vibe coding is being advertised all over the place, and will most likely be bought into by the CEOs, and given the trend of stealing open-source software without attribution while making fun of people who are proud of their knowledge and craft, I'm starting to think that all those future woodworkers may not be wrong.
Whatever happens, it will be the end of programming as I know it, but it cannot end well.
> They buy DSS (decision support service / software) from vendors, usually marketing groups
What are these platforms, exactly? I've heard about them but have never come across them.
The assumption that you can just build a successful social network as an aside because you need access to data seems wildly optimistic. Next will be Netflix announcing working on AGI because lately show writers have been not very imaginative, and they need fresh content to keep subscribers.
'Wildly optimistic' is a very fitting categorization for Altman/OpenAI.
Netflix will almost certainly try to push a vizslop or at least AI-written show at some point to see how bad the backlash is.
There’s a guy who does these weird alien talking head / vox pop videos on YouTube. They’re pretty funny. I can see the potential.
Yes, but this misses the point made by the comment above. Using AI in your workflow ≠ attempting to build AGI.
If you have AGI, what do you do with it in your workflow? Or is the question what does it do with you?
I would just like to appreciate your imagery and wordplay here, it’s spot on and I think should be our standard for conceptualizing this corporate behavior.
They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.
Good heavens, I'd think that if anything could turn an AI model into a misanthrope, it would be this.
One distinctive quality I've observed with OpenAI's models (at least with the cheapest tiers of 3, 4 and o3) is their human-like face-saving when confronted with things they've answered incorrectly.
Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame, even when it's an inarguable factual error about conceptually non-heated things like API methods.
It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it (perhaps too eagerly).
I have wondered if this is something it's learned based on training from places like Reddit, or if OpenAI deliberately taught it or instructed it via system prompts to seem more infallible, or if models like Claude were made to deliberately reduce that aspect.
> It's an annoying behavior of their models and in complete contrast to say Anthropic's Claude which ime will immediately and directly admit to things it had responded incorrectly about when the user mentions it
I don't know what's better here. ChatGPT did have a tendency to reply with things like "Oh, I'm sorry, you are right that x is wrong because of y. Instead of x, you should do x"
> Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame
Human-level AI is closer than I'd realised... at this rate it'll have a seat in the senate by 2030.
They are already passing ChatGPT written laws
https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/
I cannot think of a worse future for AI than parroting Reddit comments.
the upside of reddit data is you have updoots and downdoots, so you can positively and negatively train your AI model on what people would typically upvote, and train against what they might downvote
Now, that's the upside. The downside is you end up with an AI catering to the typical redditor. Since many claims there are formed on the basis of "confident, sounds reasonable, speaks with authority, gets angry when people disagree", hallucinations happen. Rather, we want something like "produces evidence-based claims with unbiased data sources".
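What's being described there is a reward model. A minimal sketch of the pairwise version, where the upvoted comment should score above the downvoted one; the embeddings and the data are invented:

    # Toy reward model trained on vote pairs: score(upvoted) should
    # beat score(downvoted). Embeddings and data here are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 64
    w = np.zeros(dim)                    # linear reward model

    def score(x):
        return float(w @ x)

    # Fake dataset of (upvoted_embedding, downvoted_embedding) pairs
    pairs = [(rng.normal(size=dim) + 0.5, rng.normal(size=dim) - 0.5)
             for _ in range(2000)]

    lr = 0.01
    for up, down in pairs:
        # Pairwise logistic (Bradley-Terry) loss: -log sigmoid(s_up - s_down)
        p = 1.0 / (1.0 + np.exp(-(score(up) - score(down))))
        w -= lr * (p - 1.0) * (up - down)    # gradient step

    up, down = pairs[0]
    print(score(up) > score(down))       # True: it prefers the upvoted one

The catch is exactly the one named above: this learns what the median redditor upvotes, not what's true.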
They also have a syndication agreement with The Guardian [0]. I lost all my respect for The Guardian after seeing this.
[0]: https://www.theguardian.com/gnm-press-office/2025/feb/14/gua...
Why shouldn’t they license their content? It’s theirs and it’s a non-profit that needs revenue.
It's not that they shouldn't license their content. I'm not a fan of OpenAI and their "fair use" of things, to be honest.
But this is the opposite of fair use. They're licensing the content, which means they're paying for it in some fashion, not just scraping it and calling it fair use.
If you don't like the fair use of open information, I would expect you to be cheering this rather than losing respect for those involved.
Why? I can't see any issues with this at all, whatsoever.
Everybody is free to have their own opinions. I don't like how AI companies and mostly OpenAI "fair-uses" whole internet, so there's that.
Again, as others have pointed out, how is this The Guardian's fault? They convinced ClosedAI to give them money in a licensing deal to use their content as training data, instead of having it scraped for free.
Your sense of injustice or whatevs you want to call it is aimed in the opposite direction.
This isn't even a fair-use defence; they're paying to use it expressly for this purpose.
Mmmm nice try hooking me up.
Instead, I’m just going to hang out here in this hacker meadow and on FOSS social networks where something like that would never happen!
Why oh why did I take the red pill? Plug me back in.
Please Mr Smith, please plug me back into those sweet sweet udder suckling attention extractors.
This metaphor nails it. In this day and age, the way around walled gardens is to build your own walled garden apparently.
If this were their plan, they’d be discounting that some of their users would be controlled by their own AI.
My guess is that they’re trying other things to diversify themselves and/or to try to keep investors interested. Whether or not it works is irrelevant as long as they can convince others it will increase their usage.
> Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
sama probably would like to take Satya's seat for what he no doubt sees as unblocking the path to utopia. The slight problem is he's becoming a bit lonely in that thinking.
I was always wondering if such sociopaths actually believe they are doing something for noble reasons, or if they consciously use it as a trick in their quest for power.
Must consider the illusory truth effect
Sociopathy isn't enough, you have to mix in some megalomania and that's the output
But don't they have ChatGPT, the fifth or whatever most popular website on the planet? And deals with Reddit. Sure, that can't touch the treasure trove Google is sitting on, xAI sure won't give them access, and GitHub could perhaps sell their data (but that's a maybe).
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Death-knell? Maybe… but I wouldn’t read into it. I’d be looking more at their key employees leaving. That’s what kills companies.
> I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Even if you believe all that to be true, it in no way contradicts what you quoted or makes it unfair. Having a kick ass product and good brand awareness in no way correlates to being close to AGI.
- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are too high and need to go much higher to turn a profit.
- Their brand value is terrible. Many people loathe AI for what it's going to do to jobs, and the people who like it are just as happy to use CoPilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.
- Many key employees have already left or been forced out.
Counterpoint, ChatGPT as a brand has insane mindshare and buy in. It is synonymous with LLMs/“aI” in the mind of many and has broken through like few brands before it. That ain’t nothing.
Counter-counterpoint: I still feel investors priced in a bit more than that. Yahoo! had major buy-in as well, and AGI believers were selling investors not just on the next unicorn but on the next industry, AGI being not merely a Google in the 90s, but all of the internet and what that would become over the decades to this day. Anything less than delivering that is not exactly what a large part of investors bought. But then again, any bubble has to burst someday.
My dad uses ChatGPT for some excel macros. He’s ~70, and not really into tech news. Same with my mom, but for more casual stuff. You’re underestimating how prevalent the usage is across “normies” who really don’t care about second order effects in terms of employment and etc.
I loathe AI for what it's doing to the job market.
But I'd be stupid not to use it. It has made boilerplate work so much easier. It's even added an interesting element where I use it to brainstorm.
Even most haters will end up using it.
I think eventually people will realize it's not replacing anyone directly and is really just a productivity multiplier. People were worried that email would remove jobs from the economy too.
I'm not convinced our general AIs are going to get much better than they are now. It feels like where we are at with mobile phones.
> People were worried that email would remove jobs from the economy too.
And it did. Together with other technology, but yes:
https://archive.is/20200118212150/https://www.wsj.com/articl...
https://www.wsj.com/articles/the-vanishing-executive-assista...
> I loathe AI for what it's doing to the job market.
To be fair, it's not doing anything to the job market; it's just being used as an excuse. Very few tech jobs have truly been replaced by AI; it's just an easy cover for layoffs due to recession/mismanagement etc.
It has affected graphic design and copywriting jobs quite a lot. Software engineering is still a high-barrier-to-entry job, so it'll take some time. But the pressure is already here.
Sure sounds like the trillion dollar killer app needed for ROI.
> No one is brand-loyal to OpenAI.
Sam Altman is incredibly popular with young people on TikTok. He cured homework - mostly for free - and has a nice haircut. Videos of him driving his McLaren have comment sections in near total agreement that he deserves it.
> He cured homework - mostly for free
People argue about the damage COVID lockdowns did to education, but surely we're staring down the barrel of a bigger problem where people can go through school without ever having to think or work for themselves.
Not entirely sure what they meant, but ChatGPT and the like have forced schools to stop relying on homework for grades and shift over to assignments done more like mini exams. More work for the school, but you can't substitute ChatGPT for your knowledge in such cases; you actually need to know the material to succeed.
Well, that may be, but it's entirely possible that this outlook might change.
In the worst case he's poisoned an entire generation. If ChatGPT doctors, architects and engineers are anything like ChatGPT "vibe-coders" we're fucked.
> Product is not kickass
It might not be the best, but people you'd never have thought would use it are using it. Many non technical people in my circle are all over it and they have never even heard of Claude. They also wouldn't understand why people would loathe AI, because they simply don't understand those reasons.
Still, it burns money like nothing has in history.
Consumers may not be brand loyal, but companies are.
If you're doing deep deals with MSFT you're going to be strongly discouraged from using Gemini.
Microsoft isn't even loyal to OpenAI! You can use multiple models in Azure and CoPilot
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.
Bingo. The secret sauce is never a sustainable long-term moat. Things leak, competitors copy or make even better things, employees switch jobs. AI looks less vulnerable, since it's academically difficult and expertise is still limited. But time and time again, new secret sauce ingredients last for months at most before they are exceeded, oftentimes by hobbyists or other small actors.
I remember at Google they said that if source code leaked, nobody was actually worried about the tech being stolen; the vast majority of code is open to all employees, with some exceptions like spam and ranking code. It's still protected, but not considered moat.
The moat comes from other things, such as datacenter & global networking infrastructure, marketing new products by pushing them through existing products (put Gemini in search, add chrome to Android etc). Most importantly you can use data you already have to bootstrap new products, say Gmail and calendar integrated with personalized assistants.
If you play your cards right, yes, there are some first-mover advantages, but they are more superficial than your average Twitter hype thread makes you think. It can give you the ability to set unofficial standards and APIs, like Kubernetes or S3 (maybe OpenAI APIs?). And you can set certain agendas and market your name for recognition and trust. But all that can slip through your fingers if a behemoth picks up where you left off. They have so many advantages, except for being the fastest.
In fairness, the AGI definition predicted by doomsday safety experts (who don’t have time for such unimportant concerns as copyright or misinformation via this tech), by everyone who is most certainly not merely hyping to get investor cash, and by the utterly serious and scientifically grounded research happening at MIRI, is essentially that one company will achieve AGI; shortly thereafter that will lead to a singularity, and that’s that. No one else could create a second, because that scary AGI is so powerful it would prevent anyone else from shutting it down, including other AGIs. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I’m looking for someone to help me prevent OAI from killing my own grandfather, cause that scenario is also incredibly likely and should be taken seriously.
If it’s not obvious already: I believe we are far away from that with LLMs, and that those working in model safety who give their attention to AGI over current-day concerns, like Meta just torrenting for model data, are not very serious people. But I have accepted that this isn’t a popular opinion amongst industry professionals.
Not least because letting laws get in the way of model training is bad according to them, either because of the same weird logic Musk uses to justify testing FSD betas on an unwilling public (the potential to prevent future deaths), or because they genuinely took RB seriously. No idea what’s worse for serious adults…
The funny thing is that nobody stops to ask if that version of AGI is even possible. For example, it would have to run a simulation of reality with enough accuracy, faster than reality happens, and also multiple simulations in parallel. For physical world interaction this is doable, but when it comes to predicting human behavior with enough accuracy, it's probably not.
So even if we manage to come up with an algorithm that self learns, self expands, and so on, it likely won't be without major flaws.
AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (to not be delivered yet).
It's a shame that $1 trillion is being poured into AI so quickly, but fusion research has only seen a fraction of that over many decades.
Unfortunately VC investments rely on FOMO, and there is no current huge breakthrough in fusion research that they are afraid of missing out on.
Isn’t that a reflection, to some (major?) extent, of the general sense of likelihood of breakthrough?
I used to think like this but after seeing the amount of money invested into crypto companies which most average people could have quickly dismissed as irrelevant, I'm not sure VCs are a good judge of value.
That's not an entirely fair take. Once upon a time it was believed that cryptocurrencies would be able to supplant payment processors (i.e., Visa, Mastercard). We even had a rollout of crypto ATMs to capitalize on the belief in that network. Payment processing is a money-printing business, so it wasn't unreasonable for VC money to chase it.
Sure, those intimately familiar with the technology may have always known it wouldn't work, but the average person absolutely did not. They only came to learn that as the cryptocurrency ventures proved it.
The pivot to being some kind of tool to store wealth, or a way to trade cartoon monkeys, or whatever, as a last-ditch effort to save the companies that failed to turn it into payment-processing technology, was more obviously irrelevant, but that came much, much later. The VCs were desperately seeking an exit by that point.
We already know fusion exists and happens and has been happening for billions of years - it's well understood. It may not be simple to reproduce and sustain on earth, but there is a fairly clear path to get there.
There has never been an AGI, and there's no clear path to get there. And no, LLMs are not bringing us any closer to a real AGI, no matter how much Sam Altman wants to try to redefine what "AGI" means so that he can claim he has it.
The difference between AGI funding and fusion funding is the people hawking AGI are better liars and the people funding are far more gullible than the people trying to make fusion power a reality.
We already know fusion has been a grind. For decades.
We already know that computation has grown exponentially in scale and cost, for decades. With breakthrough software tech showing up reliably as it harnesses greater computational power, even if unpredictably and sparsely.
If we view AI across decades, it has reliably improved, even through its “winters”. It just took time to hit thresholds of usefulness, which makes relatively smooth progress look very random from a consumer’s point of view.
As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.
That just hasn’t been the case for fusion.
>We already know fusion has been a grind. For decades.
Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI. No, LLMs are not AGI and cannot be AGI. They may be a component of an eventual AGI in the future, but we're still far, far away from artificial general intelligence. LLMs are not intelligent, but they do fool a lot of people.
With the paltry amount of money spent on fusion, it's no wonder it has been delayed for so long. Maybe try putting $1 trillion into it over the same short window in which "AI" got its windfall, and let's see what happens with fusion.
>If we view AI across decades, it has reliably improved
That's your opinion. It's still not much better than Eliza. And LLMs lie half the time, just to make the user happy by telling them what they want to hear.
>As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.
And it's still going to "hallucinate" and give wrong answers a lot of the time because the fundamental tech behind LLMs is a bit flawed and is in no way even close to AGI.
>That just hasn’t been the case for fusion.
And yet we have actual fusion reactions happening now, sustained for 22 minutes. We're practically on the threshold of having fusion power, but still there is nowhere near the money being spent on it compared to LLMs, which won't be capable of AGI in spite of the marketing line you've been told.
> Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI.
"Not there yet" isn't an argument against achieving anything. AGI and fusion included. It just means we are not there yet.
That argument is a classic fallacy.
Second, neural network capabilities have made steady gains since the '80s, in lockstep with compute power, with additional benefits from architecture and algorithm improvements.
From a consumer's, VC's, or other technologist's point of view, nothing might have seemed to happen. Successful applications were much smaller, and infrequently sexy. But problem sizes grew exponentially nevertheless.
You are arguing that unrelenting neural network progress over decades, tightly coupled to three-quarters of a century of exponential growth in computing, will suddenly stop or pause, because ... you didn't track the field until recently?
Or was the recent step so big (models that can converse about virtually any topic known to humankind, in novel combinations, with hardly a mid-step) that it is easy to overlook the steady progress that enabled it?
Gains don't always look the same, because of threshold effects. And thresholds of tech vs. solution are very hard to predict. But increases in compute capacity are (relatively) easy to predict.
It may be that for a few years, efficiency gains and specialized capabilities are consolidated before another big step in general capability. Or another big step is just around the corner.
The only thing that has changed in terms of steady progress is that the intensity and number of minds working on improvements is now 1,000 times what it was even a few years ago.
>That isn't a useful lens
Yet that is the exact "lens" you chose to view fusion with. It absolutely also applies equally to AGI.
> you didn't track the field until recently?
Okay, now you're trolling. You don't know me and are making assumptions based on what exactly??
This conversation is over. I'm not going back and forth when you're going to make attacks like this.
My apologies if pressing my point came across too strongly. I will take that as a lesson for me.
I have not made any claims that fusion is unachievable. Just pointed out that the history of the two technologies couldn’t be more different.
There is no reason to believe fusion isn’t making progress, or won’t be successful, despite the challenges.
> you didn't track the field until recently?
That was a question not an assumption.
If you are aware of the progression of actual neural network use (not just research) from the '80s on, or over any significant shorter time scale, great. Then you know its first successes were on toy problems, but that it has been a steady drumbeat of larger, more difficult, more general problems getting solved with higher-quality solutions every year since, often quirky problems in random fields. Then came significant acceleration with Nvidia's introduction of general-purpose compute on graphics cards, and researcher-hosted leaderboards tracking progress more visibly. Only recently has it solved problems of a magnitude that is relevant to the general public.
Either you were not aware of that, which is no crime, or you were dismissing it, or perhaps simply not addressing it directly.
If you have a credible reason that continued compute and architecture exploration is going to stop what has been exponential progress for 40 years, I want to hear it.
(Not being aggressive, I mean I really would be interested in that. Even if it’s novel and/or conjectural.
Chalk up any steam on my part to being highly motivated to consider alternative reasoning. Not an attempt to change your mind, but to understand.)
No it's the perception of likelihood of breakthrough
Capital gains on AI investment don't need a breakthrough. I'm sure pharma companies and others are building their own custom models to assist R&D, but those aren't the ones sinking the truly huge sums into consumer-facing LLMs.
It’s why we have warnings to investors saying past performance is not an indicator of future returns.
Maybe it means investors think it will happen, maybe it means investors simply think it will be profitable. It could even be investors seeking a market that has juice long after ZIRP dried up.
Luddite! Didn't you read in this very thread that someone's dad is using it for Excel macros? That alone is worth 500 trillion!
If you believe the AGI thesis, this is a stepping stone to unlocking fusion and other scientific advances.
It'll be fun to point and laugh at everyone when the bubble bursts.
The bubble won't burst. LLMs are very useful. There just hasn't been enough optimization for the general use case, which is kind of how all tech starts. Eventually someone will make a more optimized, lower-parameter model with decent accuracy that can run on lower-grade hardware, and that will be a pretty good assistant with manual wrappers around automation.
LLMs themselves won't go away, but they may not last as a sustainable business.
This is the same llm-splaining that practically every enthusiast does. It's always either "more hardware" or "more optimization" that will solve all the problems and bring us into a new era of blah blah blah. But it won't. It won't solve LLM slop or hallucinations, which are inherent to how LLMs work, and that is why the bubble is already looking pretty unstable.
The goal isn't to build perfect assistants with LLM. The goal is to build something that is just good enough to be useful. It doesn't need to be even a full fledged conversational LLM. It could just be an LM.
Lots of tasks don't require knowledge to be encoded into the model. For example "summarize my emails" is a task that can be done with a fairly small model trained on just basic text.
There are also unexplored avenues. For example, if I had the hardware to do this, I would take an early version of GPT, train it on additional data, and when the training run completes, diff the model against the original version and use that diff as the training set for another model. Basically, build a model on top of GPT that can automatically adjust parameter weights and encode them into itself, thus giving it persistent memory.
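In code, the diff step I have in mind would look roughly like this (a sketch with made-up checkpoint names; it assumes both checkpoints are state dicts from the same architecture):

```python
import torch

base = torch.load("gpt_base.pt")    # weights before the extra training run
tuned = torch.load("gpt_tuned.pt")  # weights after training on the new data

# The "diff" is just the per-parameter delta between the two checkpoints.
delta = {name: tuned[name] - base[name] for name in base}

# Saved as a candidate training target for the proposed memory model.
torch.save(delta, "weight_delta.pt")
```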
Much, much less was being poured into AI until it started to have some returns.
What returns? OpenAI is still not profitable.
Once we unlock fusion, humanity starts down a millennia-long path to making the planet permanently uninhabitable (not just temporarily, like with climate change) by turning all the water into bitcoins.
"There are 1,335,000,000,000,000,000,000 liters of water in all the oceans."
"A large fusion power plant, generating 1,500 megawatts of electricity, would consume approximately 600 grams of tritium and 400 grams of deuterium each day. A 1,000 MW coal-fired power plant requires 2.7 million tonnes of coal per year, while a similar fusion plant would need only 250 kg of fuel per year (half deuterium, half tritium)."
You do the math. I'm personally not worried about running out of water due to fusion.
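Doing the math, with one assumed figure (seawater holds roughly 33 g of deuterium per cubic metre, i.e. about 0.033 g per liter):

```python
ocean_liters = 1.335e21          # from the quote above
d_per_liter_g = 0.033            # ~33 g of deuterium per m^3 of seawater (assumed)
plant_d_per_day_g = 400          # from the quote above

total_d_g = ocean_liters * d_per_liter_g        # ~4.4e19 g of deuterium
plant_d_per_year_g = plant_d_per_day_g * 365    # ~1.46e5 g per plant per year

plants = 10_000_000              # far more plants than all power stations on Earth
years = total_d_g / (plants * plant_d_per_year_g)
print(f"{years:.1e} years of fuel")             # ~3.0e7 years even then
```

Tens of millions of years of deuterium even under absurd assumptions. Tritium is the scarcer input, but it is bred from lithium rather than taken from the sea.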
I don’t think that is true; there is only so much heavy water on the planet, and hydrogen fusion is not all that economical.
Gemini 10.0
Wait for it
If its proponents are taken at face value, AGI is a threat.
AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed this sometime in autumn last year, when the would-be GPT-5 was unimpressive despite its huge size. I still think GPT-4.5 was GPT-5, just rebranded to manage expectations.
Google Gemini 2.5 Pro was remarkably good and I’m not sure how they did it. It’s like an elite athlete leaping ahead despite harsh competition. They probably have excellent training methodology and data quality.
DeepSeek made huge inroads in affordability…
But even with those, intelligence itself is seeing diminishing returns while training costs are not.
So OpenAI _needs_ to diversify - somehow. If they rely on intelligence alone, then they’re toast. So they can’t.
>I’m not sure how they did it
TPUs absolutely dumpster Nvidia cards, for the same reason that mining bitcoin is done with ASICs instead of cards.
So yeah, just more training, more data, and so on.
If Google wasn't so cloud focused, they could take over the AI Chip market lead from NVIDIA.
I tentatively agree that LLMs have reached somewhat of a ceiling, and in any other industry diversifying would make sense at this stage. But as others pointed out, OAI and others have attached their valuation directly to their definition of achieving “AGI”. Any pivot from that, even if it were realistic in the coming years (my opinion: it isn’t), would be foolhardy and go against investors. So, in turn, this move is clearly an admission that even sama doesn’t see AGI as possible in the near term.
Adding social media to your thing is so 2018. Is the next big thing really just a warmed over version of the last big thing? Is sama just completely out of ideas to save his money-burner?
It's an easy thing to slap on to a service with lots of users. Back in the day this would be called a 'message board'. "Social media" requires the use of iframes that can be embedded on 3rd party sites. OpenAI is a login-only environment so I can see them going for a Discord-type of platform rather than something that spreads to the open web.
Don't underestimate the importance of multi-user human/AI interactions.
Right now OAI's synthetic data pipeline is very heavily weighted to 1-on-1 conversations.
But models are being deployed into multi-user spaces that OAI doesn't have access to.
If you look at where their products are headed right now, this is very much the right move.
Expect it to be TikTok style media formats.
OpenAI's idea of "shortly" offering AGI is "thousands" of days; 2,000 days is just under 5.5 years.
I think it might just be about distribution. Grok gets a lot of interesting opportunities through X; then throw in the way people reacted to the new 4o image-gen capabilities.
On the other hand, if you knew AGI was on the near horizon, you'd know that AGI will want to have friends to remain happy. You can give AGI a physical form so it can walk down to the bar – or you can, much more simply, give it an online social network.
Indeed! Ultimately, all online business models end at ad click revenue.
Everything devolves to ad sales. Do you know the minute details about their lives that people type into ChatGPT prompts? It's a gold mine for ads.
Someone down below mentioned ads, and I think that might well be the route they're going to try: charging advertisers to influence the output of the AI.
As for whether it will work, I don't know how they're possibly going to get the "seed community" which will encourage others to join up. Maybe they're hoping that all the people making slop posts on other social networks want to cut out the middleman and have communities of people who actually enjoy that. As always, the sfw/nsfw censorship line will be an important definer, and I can't imagine them choosing NSFW.
Called it!
https://tidings.potato.horse/about
I think it was a strategic mistake for Sam et al to talk about "AGI".
You don't need some mythical AI to be a great company. You need great products, which OpenAI has, and they keep improving them.
Now they've hamstrung themselves into this AGI nonsense to try and entice investors further, I guess.
> Now they've hamstrung themselves into this AGI nonsense to try
AFAIK, they've been on the AGI hype-train for a very long time, before they reached mainstream popularity for sure. From their own blog (2020 - https://openai.com/index/organizational-update/), here is a mention of their "mission":
> We’re proud of these and other research breakthroughs by our team, all made as part of our mission to achieve general-purpose AI that is safe and reliable, and which benefits all humanity.
I'm not sure OpenAI trying to reach AGI is a "strategic mistake" as much as "the basis for the business" (which, to be fair, was a non-profit organization initially).
This might just be a way to generate data.
It is a Threads. How is that doing?
There could be too-many-cooks in the AI research part of their work.
Also, I don't think sama thinks like a typical large-org manager. OpenAI has enough money to have all sorts of startup-like products/labs. There is no reason to stand by waiting for the research work.
Altman doesn't think like a typical large org manager because he's never successfully built or run one. He failed upward into this role.
OpenAI doesn't have enough money to even run ChatGPT in perpetuity, so building internal moonshots is an irresponsible waste of investor funds.
>One idea behind the OpenAI social prototype, we’ve heard, is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
This would be a decent PR stunt, but would such a platform offer anything of value?
It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.
Who are we kidding. It's going to be Will Smith eating spaghetti all the way down.
An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.
Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of "value" are.
The current maximalist "Use AI to generate as much as possible" trend is the opposite of social intelligence.
I think that's right. Twitter without ads, showing you content you _do_ want to see using some embeddings magic, with decent blocking mechanisms, and not being run as a personal mouthpiece by the world's most unpopular man ... certainly not the worst idea.
It's a nice idea in principle, but it would probably immediately become a way for the admins to promote some views and discourage others, with the excuse that some opinions are of lower quality.
That's what moderation is and is perfectly fine. Dang does that here on HN and for good reason.
It's not moderation, for one thing it never will be used with moderation.
Why would AI be any better at filtering out spam than developers have so far been with ML?
The only way to avoid spam is to actually make a social network for humans, and the only way to do that is to verify that each account belongs to a single human. The only way I've found this can be done is by using passports[0].
0 - https://onlyhumanhub.com
I've never been comfortable with this idea that people should use their real identity online. Sure they can if they choose to, but IMO it absolutely shouldn't be required or expected.
The idea that I would give a copy of my passport to a social media company just to sign up, and that the social media company has access to verify the validity of the passport with the issuing government, just feels very wrong to me.
I agree. That’s why onlyhumanhub doesn’t expect you to share your name. The passport verification is there to ensure you are a unique human, but the name of that human is not stored.
I’m perfectly happy talking to someone without knowing their real name. I just want to be more confident that they’re a unique human, and not just another sock puppet account run by some Russian agent (or evil corporation) trying to change people’s beliefs at scale.
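Conceptually, the uniqueness check can be as simple as this sketch (an illustration only, not necessarily the actual implementation; the key handling and field choices here are made up):

```python
import hashlib
import hmac

SERVER_KEY = b"hypothetical-secret-kept-server-side"

def uniqueness_token(issuing_country: str, document_number: str) -> str:
    # Keyed hash of stable passport fields; the holder's name never enters it.
    material = f"{issuing_country}:{document_number}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).hexdigest()
```

Registration stores only the token. A second signup with the same passport produces the same token and can be rejected as a duplicate, while the identity behind it is never persisted.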
But how do you actually trust them?
There's almost no visible time investment in building onlyhumanhub. It's only a few months old (based on the copyright), has an effectively text-only homepage, and has account creation which, I assume, allows you to upload photos of your passport and link your existing social media profiles.
There are so many ways that could go wrong, from this being a phishing attack to this being a well intended project that happens to create a database linking passport IDs to all of a person's social media accounts.
The idea that they may eventually offer a social media platform that doesn't require public use of your real identity is all well and good, but they're still a honeypot for doxing.
Well, I built the site.
I agree that trust is a problem. I try to be as transparent as possible around how your passport data is used and what is stored in the database. Far more than what ordinary banks/trading apps say when they ask you for a passport.
Hah, well sorry for my confusion there! I didn't realize it was yours so that definitely clears up why you'd trust it.
While I have you here, I'm curious: what's involved in validating passports these days? Is it as simple as a unified API run by some service, or is there an API per country?
So you have to just trust them to permanently delete the data after verifying you?
That's interesting. Is there a social network where you can only connect with people you meet in real life?
(Stretching a definition of social network.)
Not strictly, but Debian: member inclusion is done through an in-person chain-of-trust process, so you have clusters of people who know each other offline as a basis.
Also, most WhatsApp contacts have been exchanged IRL, I presume.
How do you handle binationals who might not have the same details (or even name) on each of their passports?
You can always get around identification requirements, for example by purchasing a fake passport in this case. The idea is to increase the cost/friction of doing so as much as possible.
A fake ID is a lot harder to get your hands on than a new email, burner phone, etc.
1 passport = one human
Yes, this does mean that dual nationals can have two separate human accounts. But it’s still better than an infinite number of accounts, which is the case for social networks right now.
No, nothing of value. If you ever want to lose faith in the future of humanity search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.
I bookmarked this example where it is confidently incorrect about a movie frame/screenshot:
https://x.com/Pee159604/status/1909445730697462080
Your example doesn't appear to contain a reply from grok, only a question.
It does, you just can't see it without logging in because Twitter is shit now.
https://xcancel.com/Pee159604/status/1909445730697462080
I was logged in but it wasn't showing, but it's showing now.
> confidently incorrect
I disagree. Grok had a crack and got it wrong. LLMs get things wrong sometimes.
Besides, it said "likely from Species", which is guesswork. The original post is garbage. "Chimp the fuck out"... I don't even know what that means, so Grok didn't have much to go on by analysing the "green text".
The worst is the dozen-odd people in the replies to a post all asking Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.
> needing summarization
Before we get too excited about disparaging those seeking summaries: it's common for people at all levels to want summary information. It doesn't mean they want everything summarized, or that they are bad people.
I'm not particularly interested in "tariffs, what are they good for, what's the history and examples good or bad"... so I asked for a summary from grok. It gave me a decent summary. Concise and structured. I asked a few follow-ups, then went on with my life knowing a little more than nothing about tariffs. A win for summarized information.
> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And if the need exists, fulfilling it is still better, because otherwise these kinds of people wouldn't get the information at all.
This is worse because the AI slop is full of hallucinations which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information while practicing the ability to critically use it is better than bad information in excess.
Do you have examples of recent models hallucinating when asked to summarize a text?
All decent people I know have deleted their Twitter accounts - the kind of people you now see on twitter in the mentions are... not good people.
"@gork explain this tweet"
> This would be a decent PR stunt, but would such a platform offer anything of value?
Like all those start-ups that are on the 'mission' to save the world with an app. Not sure if it is PR for users or VCs.
Sam's last social media project included users verifying their humanity, so there is hope that something like that slips into the new platform.
You also can get Grok to fact check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not realtime as it can sometimes take up to an hour to respond, but I've found it to be pretty level headed in its responses. I use this feature often.
True. I see that too. It's a good addition to community notes. It can correctly evaluate "partially true" posts and those lacking details, so it's great at spotting cherry-picked information.
People sort of expect it to 100% subscribe to the right wing dogma now, but Elon apparently wasn't joking about it being "truth seeking". It seems pretty impartial to me, on some topics even "woke".
I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
But I really don't see why anyone would even use an OpenAI "social network" in the first place.
It does allow one thing for OpenAI, other than training data (which, admittedly, will probably be pretty low quality): it is a natural venue for ad sales.
Social media is a plague, including LinkedIn. Anything that lets you follow others and/or erodes your anonymity is just different degrees of cancer waiting to happen.
The best I ever enjoyed the internet was the sweet spot between dial up and DSL where I was gaming in text based/turn based games, talking on forums, and chatting using IRC.
Agreed. I wasn't particularly hooked and didn't use it very much anyway. As an architect, designer, and professor I had IG, and for the last five years used it basically only for work. But the feeling of freedom in its absence these past few months has been palpable.
Early fb reconnecting with people I hadn't seen since high school was okay. The blog / Google Reader era happening at the same time was the real golden age for me. And it's been all downhill since.
Agreed. This is where HN and reddit still smells a little bit like the good old times ;)
I use both, and even Reddit has issues. Likewise, HN is topically more like a single, insufficiently modded subreddit. Still enjoyable, but easily brigaded on certain topics. This is most readily seen with anything political.
Strongly agree. It's fascinating to me how faster broadband and selfie cameras led to more slop content.
Reducing the effort to produce content enabled a larger audience to contribute. This degrades average content.
The term "eternal september" dates back to the 90s, referring to the phenomenon where new undergraduates would arrive and suddenly have access to USENET to make bad posts.
We can think of fast internet and phones as supercharging progress. Except, in this case, it just accelerated how quickly humans ruin things.
This brings me back to the days of Nukezone.
> I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.
Meta/Twitter/etc. are drug dealers.
> But I really don't see why anyone would even use an OpenAI "social network" in the first place.
I really don't see why anyone would even use heroin, yet they do.
I bet we could draw several parallels.
“It feels good.”
“I can quit whenever I want.”
“I was on it the whole night instead of sleeping. I felt awful in the morning.”
“I can’t stop. All my friends are on it and I don’t want to be alone.”
Oh I get one thing - other than ads. So the idea of an LLM filter to algorithmically tailor your own consumption has some utility.
The logical application would be an existing social network -using- chat gpt to do this.
But all the existing ones have their own models, so if they can't plug in to an existing one like goooooogle did to yahoo in the olden days, they have to start their own.
That makes a certain amount of (backward) sense for them. I don't think it'll work. But there's some logic if you're looking from -their- worldview.
Isn't the selling point of Bluesky that you can customize your feed your way? I don't know the tech behind it, but the feed is "open", isn't it? Can they plug into that?
The selling point of Bluesky is that you don't get bombarded with a mob of blue checks every time you post something political
Stepping away from social media can feel like getting your brain back
Social media is this generation's cigarettes. It feels good to use it for a bit, and there's an enormous amount of advertising for it and social pressure to use it, but it's extremely addictive and the long-term personal and public health consequences are absolutely crushing.
I hope one day we can strictly regulate social media and make pariahs of the people who built it, as we did with tobacco. Instead, we just did the equivalent of handing the entire federal government over to Philip Morris, so my hopes are not high.
We should start a campaign, and everybody who hates sm can use the campaign's logo as their profile photo. Maybe it will catch on.
Profile photo for what? Social media? Then you need to be there, which is self-defeating. If you leave your account dormant, no one will find it anyway but the owners of the platforms will still use you to count the number of users.
I mean 90% of users want to quit, but can't. I'm just proposing a way to fight back.
The more people change their profile photos into the campaign logo, the less attractive sm becomes. Maybe at some point even the very addicted users will quit.
> I mean 90% of users want to quit
90% is implausibly high. If that many people wanted to quit social media, social media would no longer exist.
> I'm just proposing a way to fight back.
I propose an alternative is to just quit yourself and get on with your life. As the people around you understand you are happy without social media, they too can become interested and quit. Social media only works while there are people on it. The fewer of us there are, the less interesting it is.
How many times have there been campaigns to delete social media accounts? They never last, and often even the organisers come back. I don’t see a reason why this time it would work.
All that said, far be it from me to discourage you. If you feel it's a worthy goal, I encourage you to do it. I genuinely hope I'm wrong and that you'll succeed where others have failed.
> 90% is implausibly high. If that many people wanted to quit social media, social media would no longer exist.
This is how addiction works. Most smokers want to quit, but can't, etc., etc. Cigarettes still exist.
There's no better life hack
LOL, you're on a social network right now. HN is one. Yeah, it's semi-anonymous, but there are many users with known names here.
HN skips many of the dark patterns that other social medias have:
- no infinite scroll (you have to click on "More")
- no personal recommendation
- no feedback loop between your upvotes and the feed
- no messaging or following between users
HN looks a lot more like news groups from back in the days.
The comment threads are basically "infinite scroll". They're paginated once they hit 500 comments or something. Scanning a social media post takes 2 seconds; HN comments take much longer.
It's not the pagination that matters; it's the lack of a feed that you can keep scrolling. The front page is paginated at 30.
Longer comments are a good thing.
But I can’t follow them. I don’t get notifications when they post new links or comments, I can’t send them specifically my links and comments. I have no groups or circles.
HN is more of a discussion forum and not for connecting with others.
Nobody is "connecting" on social media anymore; they are just liking and commenting on whatever random thing the algo throws at them.
You and I are doing the same thing, commenting on posts made by strangers.
Wrong. A social network is centered around the concept of "you" and your "friends", where the content itself is not as important.
There is no concept of "friends" on a forum like HN, since people gather here purely to discuss topics of interest.
> Wrong. A social network is centered around the concept of "you" and your "friends", where the content itself is not as important.
This post was brought to you by the year 2012.
The analogy is with Iain Banks’ The Culture.
Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.
…or alternatively it’s not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
The Culture presents such a tempting worldview for the type of people who populate HN.
I've transitioned from strongly believing that such a thing was actually possible to strongly believing that we will destroy ourselves with AI long before we get there.
I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
That's why it's so important to reduce all of your personal data points online. Imagine what they can reconstruct based on their modeling and comparing you to similar users. I have 60 years of involuntary data collection ahead of me. This is not going to be fun.
Yes, I fully agree. I also think there is a future in personal memetic firewalls that will filter ideas before they hit our wetware, and filter the tics and habits that uniquely identify us on the way out. Such a firewall would need to be much smarter than we are, but not necessarily as smart as the smartest AI trying to 'attack'. It would have perfect knowledge of its user, and as such possess information asymmetry.
> I don't even think it'll be from terminators and nuclear wars and that sort of thing
I do. And I don't even think the issue is a hostile AI. There are 8 billion people in the world. Millions of those people have severe mental issues and would destroy the world if they could. It seems highly likely to me that AI will eventually give at least one of those people the means.
This is explored in the Vulnerable World Hypothesis argument by Nick Bostrom [1][2].
[1] https://en.wikipedia.org/wiki/Vulnerable_world_hypothesis
[2] https://doi.org/10.1111%2F1758-5899.12718
That'll be great for the world's natural outsiders: those who hate pop music and dislike even tailored ads because of the creepy feeling of influence, or who don't follow any politicians because they're all out to hoodwink you.
Oh, a subset will be at risk of being artificially satisfied but your hardcore grouch will always have a special "yeah, yeah, fuck off bot" attitude.
There will be bots tailored specifically to appeal to the 'fuck off bots' people. We greatly overestimate the capabilities of our emotional firewall.
An AI that is more emotionally intelligent than you is capable of manipulating you. Full stop.
> It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
Which is why we'll need to acquire the drug gland technology before AGI - no mind can sell me anything if I can feel content on demand.
Alternatively they can sell you anything if they can make you feel content or euphoric on their command. Get your new drug gland today, free of charge, sponsored by Blackrock
Brings to mind the title of a Roger Waters album:
"Amused to Death"
Great title and an even better album. https://en.wikipedia.org/wiki/Amused_to_Death
First a book.
https://en.wikipedia.org/wiki/Amusing_Ourselves_to_Death
A brave new world of AI, one might say.
There is a bias there in action: we are assuming that the entire world is like this thing we just happen to be thinking about.
It is not.
Even if it were just a minority, there are plenty of people outside "this thing" that will profit from the ((putative) majority's) anesthesia. Or which at least will try to set the world on fire (anybody remember the elections in USA a few months ago? That was really dumb. But sometimes a dumb feat shows that one is alive, which is better than doing nothing and being taken for dead. Or it is at least good-enough peacocking to attract mates and pass on the genes, which is just an extravagant theory of mine that I'm almost certainly sure is false. And do not take this as an endorsement of DJT). I'm not being an optimist here; I've seen firsthand the result of revolutions, but it may be the least-bad outcome.
Hasn't that already happened?
> I've transitioned from strongly believing that such a thing was actually possible to strongly believing that we will destroy ourselves with AI long before we get there.
I think we'll just die out. Everyone will be too busy having fun to have kids. It's already started in the West.
While the west has gone this way, there also seems to be a strong undercurrent (at least here in Australia) of "we can't afford to have kids (yet)".
As housing has moved further out of reach of young people, some don't seem to feel their lives are stable enough to make the leap. The trend was down anyway, but the housing crisis seems to be an aggravating factor.
Yep, people blame technology, but it's really basic economics, and it would be fixed by building more affordable homes.
This subject has been investigated a lot. In many countries governments make it much easier and more affordable to have children, and it doesn't seem to make any difference.
I agree, using housing as a source of wealth has broken a whole generation. When the boomers of the world start to massively die out (any year now), housing will deflate, but not spectacularly without a crisis (people don't want to settle where the cheap houses are in a bull market).
I wouldn't call my kid-skipping activites fun, but go off.
Spending a life on the treadmill doesn't encourage more walks. It encourages burning it down. All I've known is work. Pass on more, thanks. I hear you/others now:
That's exactly my point. Despite all of our proclaimed progress, we're still just "managing". Maintaining this circus/baby-crushing machine is a tough sell. By the time I got to where I could afford to have kids, I had become both unprepared and uninterested.
What's more: I'm one of the lucky ones. I was given a fancy title and great-but-not-domicile-ownership-great pay for my sacrifice. Plenty do more work for even less reward.
There's a sucker born every minute, right?
> The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
We already have so many of those that it’s very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.
Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.
The Culture is about a post-capitalist utopia. You're describing yet another cyberpunk-esque world where people still have to do wage labor to not starve.
You’re right so I made a slight edit to separate my two ideas. Thanks for even reading them at all! I try to contribute positively to this site when I can, and riffing on the overlap between fiction and real-life — a la Doctorow — seems like a good way to be curious.
> Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
Controversial stance probably, but this very much sounds like a world I'd love to live in.
Naah… in the Culture you could change your sex at will, something soon to be illegal.
My guess: it's probably less of a "social network" and more of "they are trying to build a destination (portal) where users go daily".
E.g. old days of Yahoo (portal)
They just want the next wave of Ghibli meme clicks to go to them, really.
This will be built on the existing thread+share infra ChatGPT already has, and just allow profiles to cross-post into conversations, with UI and features more geared toward remixing each other's images.
That was my thought: a meme-sharing platform.
I actually would love this. I hate having to go to another website to share thoughts I produced using one platform's tools.
I miss the days when products would actually choose to integrate other platforms into their experiences; yes, I was sort of a fan of the FB/Google share buttons and the Twitter side feed (not the tracking bits, though).
I wasn't a fan of LLMs and the whole chat experience a few years ago. I'm a very mild convert now with the latest models and I'm getting some nominal benefit, so I would love some kind of shared chat session to brainstorm in, e.g. on a platform better than Figma.
The one integration of AI that I think is actually neat is Teams + AI note-taking. It's still hit or miss a lot of the time, but it at least saves and notes something important 30% of the time.
Collaboration enhancements would be a wonderful outcome in place of AGI.
The answer seems more obvious to me. They don't even care if it's competitive or scales much. xAI has a crazy data advantage firehosing Twitter, Llama has FB/IG, and ChatGPT just has, well, the internet.
I'd hope they have some clever scheme to acquire users, but ultimately they want the data.
I believe the play here is:
1. Look "Studio Ghibli" went viral, let's capitalize
2. Switching cost for LLMs are low. If we can't be the best let's find other ways to lock our users in and make our product super sticky
What this tells me is that xAI / X / Grok have together become a much bigger threat to OpenAI than they anticipated. And I believe it's true -- Grok's progress has been much faster than ChatGPT's over the past year, even if we can nitpick evals over which is technically "better" at any moment. The fact that it's hard to tell is itself a huge statement considering the massive advantage OpenAI and ChatGPT had not too long ago. I mean, Grok was basically a joke when it first launched!
Feels like a natural next step, honestly. If they already have users generating tons of content via ChatGPT, hosting it natively and adding light social features might just be a way to keep people engaged and coming back. I'm not sure if it's meant to compete with Twitter/Instagram, or just to quietly become another daily habit for users.
This would be a natural step if it were 2010. In 2025, it sounds like a lack of imagination to me.
Perhaps, but currently OpenAI is stuck sharecropping with existing social networks – producing the content but not deriving the value. It is hard to move grander visions forward when you don't own the land.
FYI there's already an (early) social feed in OpenAI Sora.
https://sora.com/explore?type=videos
Controversial opinion: it's not about the generator of the content, human or not, but about the originality of the content itself. Humans with the help of AI will generate more good-quality content as a result.
Humans are just as good as bots in generating rubbish content, if not more so.
Twitter reduced content production cost significantly, AI can take it another step down.
At minimum, a social network where people share good prompt-engineering techniques will be valuable to people on the hunt for prompts. Just like the Midjourney website, except creating a high-quality image is no longer a trip to the beach but a thought experiment. This would also significantly cut down the cold-start friction, and in combination with some free credits, people may have more reasons to stay, as the current chat-based business model may be reaching its limit for revenue generation and retention; it's just single-player mode.
The idea is to let humans be humans, make a mess, debate, have their opinions, and AI comes after that and removes the herp derp from the useful parts.
As a test of the concept, copy-paste this whole page, put it into an LLM, and ask for an article. It will come out without the junk, but it will reflect a greater diversity of opinion and more arguments, do some debunking, and generally be better grounded in our positions than the original content.
So it's careful synthesis over human chats that is the end value. Humans provide that novelty and lived experience LLMs lack, LLMs provide consistent formatting and synthesis. The companies that understand that users are the source of entropy and novelty will stop trying to own the model and start trying to host the question.
Wondering why reddit doesn't generate thousands of articles per day from comment pages. It would crush traditional media in both diversity and quality. It would follow the interesting topics naturally.
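For anyone who wants to try the experiment above, a minimal sketch with the openai Python client (the model name and prompt are illustrative, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def thread_to_article(thread_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Turn this comment thread into a coherent article. Keep the "
                "strongest arguments from every side, preserve disagreements, "
                "and drop the noise.")},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content
```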
This is why LLMs are so helpful in writing research proposals. If I’m writing a proposal, I basically need to vibe on a concept without worrying about it sounding entirely professional, buttoned-up, and concise; otherwise I get bogged down in wordsmithing. So I tell it about the concept I’m trying to get across, and it converts that into a more concise and punchy paragraph, while also providing critical technical feedback (if I ask) and making sure I’m using terms of art correctly. As you put it so eloquently, it removes the herp derp.
And then, kind of like how LLMs are good for boilerplate code but get less helpful as you get to more specific problems, this approach works for the first few paragraphs but deteriorates the deeper in you get.
The idea is to use the LLM as a wordsmith; it doesn't conceptualize novel ideas by itself, that is your job. I would normally chat about the topic until it gets it, and after that I ask it to write it up as an article.
Here is a feed I collected over time: https://old.reddit.com/r/VisargaPersonal/ I'm just using this subreddit as a blog.
I thought they were building a new search engine. Now it’s a social network. Tomorrow it will be robots. It’s all a distraction from ClosedAI.
It already is a search engine and has been for a while.
I think you don't recognize it as such because it's incorporated into the chat box, but I use ChatGPT as my search engine 90% of the time and almost never use Google anymore.
I think the social stuff will also just be incorporated into the chat interface in the form of 'share this image', etc., and isn't going to be like Twitter with a bunch of bots posting.
A browser would make a lot of sense, for both a search engine and a social network.
They were also rumored to be building a phone at one point? They are playing the media.
So Facebook is trying to get into AI (e.g. its chatbot-"user" debacle) and OpenAI wants to form its own social network. Our world is becoming the recycled shit-food of this technological ouroboros.
Social networks are the TODO list app of rich people, apparently.
Okay, thinking charitably here... maybe a play at getting training data they don't have to steal? (although it does seem like rotating the ladder instead of the lightbulb...)
I think a social network is not necessarily a timeline-based product, but an LLM-native/enabled group chat can probably be a very interesting product. Remember, ChatGPT itself is already a chat.
Yes, this. That's my bet if OpenAI follows through with social features.
Extend ChatGPT to allow multiple people/friends to interact with the bot and with each other. It would be an interesting UX challenge if they're able to pull it off. I frequently share chats from other platforms, but typically those platforms don't allow actual collaboration and instead clone the chat for the people I shared it with.
I am building this with a team currently and we are launching in a couple days.
Would love an alpha tester or two if anyone wants to test it.
My email/twitter is in my profile, shoot me a message and I will be in touch.
Yeah, the dream is the AI facilitating "organic" human connection
What's a "LLM-native/enabled group chat"?
Telegram and Slack bots are probably the best examples so far: the bot gets added to a chat and can respond when mentioned in the group conversation.
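The core pattern is tiny. A sketch (the transport layer is elided, and the handler and handle names are made up; any chat-completion API would do):

```python
from openai import OpenAI

client = OpenAI()
BOT_HANDLE = "@assistant"

def on_group_message(text: str, send_reply) -> None:
    # Stay silent unless someone addresses the bot directly.
    if BOT_HANDLE not in text:
        return
    question = text.replace(BOT_HANDLE, "").strip()
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    send_reply(answer)
```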
Gotcha, the NLP-enabled version of the good old IRC weatherbot.
For a moment I had a funnier mental image of a chat app with an input field that treats every input as a prompt, and everyone's chatting through the veil of an LLM verbosity filter.
There might be something chat RPG-like there worth trying though ...
https://web.archive.org/web/20250415160251/https://www.theve...
With all the other social networks trying to keep their data private because they all want to try their own AIs, it makes sense that OpenAI would want to have its own social network that wouldn't charge them for the data. I still doubt they actually launch it.
Sounds like they are thinking about instagram, which originated as a phone app to apply filters to a camera and share with friends (like texting or emailing them or sending them a link to a hosted page), and evolved into a social network. Their new image generation feature has enough people organically sharing content that they probably are thinking about hosting that content on pages, then adding permissions + follow features to all of their existing users' accounts.
Honestly, it's not a terrible idea. It may be a distraction from their core purpose, but it's probably something they can test and learn from within a ~90-day cycle.
Sounds like some crossover with Civit.ai
AI generated posts and images and nonstop posting about AI? That sounds like LinkedIn in 2025.
I think people would actually use an all-bot social network.
Imagine tweeting and having thousands of “people” reply to you and appreciate you. It would be addictive.
Makes me (further) believe that Reddit is heavily undervalued...
Alright, I'll bite. What's a reasonable price for Reddit? Aren't most of their users bots?
Doesn't matter. Subreddits create vast islands of value. A single sub overrun with bots is quarantined effectively.
That is why Reddit is one of my favourite social sites. It is algorithmic, but if you go to r/assholedesign you get asshole design (and an anal mod who keeps it that way). Etc.
Value $44bn ;)
Discord is the real play.
I came to the comments here wondering why OpenAI can seemingly fund massive new AI chip data centers but can't acquire Discord.
Buy Discord (for $44bn), skin a version to be a Slack/Teams/Zoom/Google-thingy clone for corporates, and integrate ChatGPT into all of it.
Discord will probably IPO when conditions allow - might be an easy buy for a stock / cash offer.
If a front figure of big tech happens to go plant-based, that simply makes a good role model for AI in this time (and for humans..). I’m for a big YES. At first I wasn’t positive, due to a sense of bloating the service and risking forms of leaking. But most social platforms need more ethical thinking integrated, and social media can be done in a new, smart way. It’s also much easier to make a meaningful social ’platform’ than an AI-driven search engine (which takes extreme compute to be as good as people likely expect without creating a negative opinion of ChatGPT as such, since high-quality tailored answers take compute..). Interaction with AI could be done in a much cooler way than today, as co-exploration :) ..
Aren't they unprofitable enough already?
Isn't Gemini 2.5 proof that you don't need social-network-like data for training?
Google has a deal with Reddit to scrape its content for training AI. It also has YouTube.
I guess where this is all going in the long run is something with an interface similar to TikTok, where the user gives rapid feedback to train an algorithm to generate content that they "love", er, that maximally tickles their reward circuitry.
Some of the comments here are so detached from my reality that I have to wonder whether I'm the crazy one.
Nobody I know would be interested in a social media platform that is just AI. Contrary to what some commenters here seem to think, most people are interested in things like upvotes and likes and whatnot... because they understand it's (mostly) humans on the other end. Without that, it would all be rather hollow, and nobody would want to participate in it. I think most people would find the insinuation that they're so shallow as to want that a bit insulting.
But again, maybe I'm the crazy one.
Maybe before building a social network, you should be able to share the result of an answer with another user, even if they have not paid/subscribed.
I tried to share an answer with a colleague (who didn't have the paid version) and he couldn't see it ...
They should use their resources to make OpenAI good at coding.
A 4chan but images can be prompt generated? Makes sense. Everything's going back to early 2000s, it seems.
This is just part of the ongoing feud between Sama and Musk.
The recent memory update and this all point in the same direction: OpenAI is first trying to lock in its users via personalization.
This specifically targets general users with low privacy sensitivity, who don't mind ChatGPT knowing their personal details; then comes the network effect. I'm not sure what the new joint social network for AIs and humans will look like, though I doubt it will be as dystopic as we can easily imagine. Rather, I see a network where humans talk to each other (like we do on any other SNS) and can call in an AI to "chime in" on the conversation.
The AI will moderate, reference other posts and users, and so on. Like X with Grok, but where users ask more than "explain this meme".
One thing is certain: privacy-sensitive individuals will find it difficult to convince themselves to participate in the network as long as OpenAI remains closed (ironically). Though we shall wait, as they have said they plan to release an open-source model soon.
They have to feed the beast with a never ending supply of input.
What if it were un-social media? Like undoing what happened over the last 5 (or more) years in social media… It may be interesting, or it may just concentrate power in another company.
Is this just a data play? Need more data. Start a social network. Own said data.
I think it's more likely that they're desperate to find a profitable business model.
Seems telling that an org has arguably the leading AI, as the planet knows it at least, and still can't exist without putting ads in front of eyes. So much for the hype.
Honestly I wonder if it’s because Altman loves X and is threatened by Grok
This is Altman increasing the mass of the investment black hole that OpenAI is.
Counter opinion - What tech people don't seem to realise about AI is that putting it into a user-friendly format for the average person is what has led to this current revolution, i.e. ChatGPT
This is just the next step of that. This also doesn't stop them working on 'AGI'; you need the data as well as the models, as most people familiar with the field will know. I will be happy to be proven wrong on this prediction.
It makes no sense to build a social network nowadays.
With Mastodon and Bluesky around, users have free options. Plus X and Threads, and you can see how the market is more than saturated.
IMHO they should look into a close collaboration/minority stake with Bluesky or Reddit instead. You'd have a huge pool of users already, without the need to build it from the ground up.
Heck, OpenAI probably has enough money to just buy Reddit if they want.
I don't know about Reddit, but Bluesky would never in a million years partner themselves publicly with OpenAI. I can't comment on the opinions of the team themselves because I just don't know, but the users would revolt. Loudly.
Is there anything stopping openai from scraping all the bluesky content without partnering?
Nothing stopping them. All Bluesky data (and ATProto in general) is publicly available.
It is already happening and nothing can be done against it at a protocol level: https://mashable.com/article/bluesky-ai-dataset-using-one-mi...
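To make concrete just how public that data is, here's a minimal sketch (assuming Bluesky's public relay endpoint and the Python websockets package; decoding the DAG-CBOR frames is left out) that pulls raw events off the firehose with no auth or partnership at all:

    import asyncio
    import websockets  # pip install websockets

    # Bluesky's public ATProto relay firehose; no credentials required.
    FIREHOSE = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"

    async def main():
        async with websockets.connect(FIREHOSE) as ws:
            for _ in range(10):
                frame = await ws.recv()  # raw DAG-CBOR event bytes
                print(f"received {len(frame)} bytes")

    asyncio.run(main())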
Also, what is their USP? "Join our social network so we can train our models on your data!"
...free AI slop right from the tap? Like the latest Ghibli fad.
This sounds more like a project intended to be a component of his (incredibly) still-larger project, World. Good times to come, everyone.
I suspect they are searching for network effects, otherwise they know the switching costs are too low for their users
> While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed.
isn't it public already? they basically made tumblr but everything is AI:
https://sora.com/explore
I've always thought that social networks like X and Bluesky are sort of the distributed consciousness of society: what society, as a whole / in aggregate, is currently thinking about. Knowing its ebbs and flows, and what it responds to, is important if you want an up-to-date AI.
So yeah, AI integrated with a popular social network is valuable.
Social networks tend to reflect the character of their founders. Do you really want to see what Sam Altman can do?
> Social networks tend to reflect the character of their founders.
I would say "owners" rather than "founders", but I agree with you. I think Sam Altman's couldn't be worse than Elon Musk's X, no?
Both are founders of a so-called non-profit and are suing each other. Their legal arguments are public at this point. By reading them, one may understand that it's hard to choose between 'yes' and 'no' as an answer. Maybe, we could request and take into account the opinion of what they 'created' that might outlast them and their conflict, namely AI.
I don't use X either. Looks like it won't be around much longer anyway, except as an American Pravda (even though "Truth" Social already exists).
Make sure to hide your little sisters from it.
It's basically a way to have actual people create content that will be used for further AI training, as a reinforcement loop. It also reduces the need for scraping.
https://archive.is/qU4am
I can’t think of anything less appealing or interesting. AI content as a destination has zero appeal.
ngl building a social network isn't hard, getting people to use a social network is the hard part
We've got gen AI now and no ZIRP, yet this is all they can think of. Web 2.0 will never die.
An idea which sounds horrifying but would probably be pretty popular: a Facebook like feed where all of your “friends” are bots and give you instant gratification, praise, and support no matter what you post. Solves the network effect because it scales from zero.
I'm sorry to say this exists: https://socialai.co
It’s all whatever will maximize valuation. They can do it until antitrust comes for them.
FYI there's already an (early) social feed [1] in OpenAI Sora.
[1] https://sora.com/explore?type=videos
If they simply take ChatGPT profiles, add a button to share with others, and add a follow button for every Q&A pair, they immediately have a quick social network with a feed of Q&A with the AI (like Stack Overflow).
LLM -> Social Media Platform -> Tiktok clone.
That would be an interesting evolution.
Social media is becoming TikTok’s clone army, with algorithms hooked on short-form videos for max engagement.
Text, images, and long-form content are getting crushed, forcing creators into bite-sized video to be favored by the almighty algorithm.
It’s like letting a kid pick their meals - nothing but sugar and candy all day.
You're probably not wrong.
But personally I don't get it. I hate almost all videos; the exception is when something is best shown as a video, like a how-to, but I search those out.
Maybe I'm just broken at this point, but who has the patience for videos? Clearly a lot of people, but how, lol.
So far, I've refused to watch Tiktok.
Something about mindless garbage doesn't appeal to me.
The whole value proposition of a social network is that (almost) everyone you know is on it. That's why young people don't use Facebook. OpenAI would be better off buying an existing one.
I'm genuinely curious how they will achieve this, will they go the open source pre-built solution route, or build their own from scratch?
Leveraging OpenAI for a Worldcoin proof-of-humanity toehold? Blue checks for WorldID holders, and a presumption of dead internet everywhere else.
If it’s true then it flags giant problems at OpenAI and makes it clear they don’t know what they’re doing or where they are going.
They know AI can be addictive (people will prompt it far too often), so mixing it with social media can captivate users even more effectively.
and they can own all the data
This doesn't sound appealing to me. What extra value would it provide? Why do I need a new X with AI support, making friends with agents/bots?
If they nail the memory part, this could basically kill WorkOS and Jira and all those project management tools.
The aversion to robots in Fediverse suddenly makes a lot of sense.
I don't want to live on the internet that is crammed with AI content.
I hope they don't apply the same censorship filters to their social media as their image generation.
i said this 3 months ago:
I think OpenAI should split into 2:
1) B2B: Reliable, enterprise level AI-AAS
2) B2C: New social network where people and AIs can have fun together
OpenAI has a good brand name RN. Maybe pursuing further breakthroughs isn't in the cards, but they could still be huge with the start that they have.
Sam Altman is retaliating against Musk for Grok and Musk's lawsuit against OpenAI, trying to ride the wave of anti-Musk political heat, and figure out a way to pull in more training data due to copyright troubles.
If they launch, expect a big splash with many claiming it is the X-killer (i.e. the same people who claimed the same of Mastodon, Threads, and Bluesky), especially around here at HN, and then nobody will talk about it anymore after a few months.
Here's how to kill Twitter and Bluesky AND Mastodon:
1: use an LLM to extract the text from memes and relatable comics.
2: use an LLM to extract the transcriptions of videos.
3: use an LLM to censor all political speech.
OpenAI, I believe in you. You can do it. Save the Internet.
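For what it's worth, step 3 is the only part that's trivially sketchable today. A hypothetical political-speech filter built on OpenAI's own chat API (the model choice and prompt are my assumptions, and who gets to write that prompt is exactly the catch):

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def is_political(post: str) -> bool:
        # Crude stand-in for step 3: let the model label the post.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": "Answer YES if the post discusses politics or "
                            "current events, otherwise answer NO."},
                {"role": "user", "content": post},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    feed = ["cat pics thread", "my hot take on the election"]
    clean_feed = [p for p in feed if not is_political(p)]
    print(clean_feed)  # ["cat pics thread"], if the model cooperates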
If you can clean my FYP of current events I'll join your social media before you can ask a GPT how to get more users.
Not-exactly-devil's-advocate: you're trying to sort content by quality. That's elitist. Also, that filtered content is worth more. You can't have only premium content.
Someone should do it anyway and make it dominant ASAP.
> 3: use an LLM to censor all political speech.
And who gets to decide what is political? Are human rights political? Is a trans person merely existing political? Is calling for genocide political?
The LLM decides it. That's what the AI is for.
There is a lot of stuff on the Internet, so I think the AI can just censor 80% of it and we'll still have enough for a social media site.
Really saying "please, let the robot run humanity", huh.
Let the robot run ONE SITE. You still have Twitter, Mastodon, Bluesky, Facebook, Reddit, etc., if you want a non-stop stream of political content on your feed.
LLMs are guessing machines; they don't "decide" anything. It would be decided by the people programming them and putting in alignment guardrails.
Just use an LLM to verify that only humans, and no bots, are on the social network, and you win.
computer: show me jingling keys
wonder if the LLM would censor this post
If it works the way I want it absolutely should. Mentioning politics is political content.
^-- This comment proves my point.
Curious to see if this leans more "creative community" or "algorithmic content zoo."
I would try to make a platform like DeviantArt or Tumblr, except OpenAI pays you to make good content that the AI is trained on.
Nice in theory but don’t know how practical it is to actually do.
How do you define "good"? There are obvious examples at the extremes, but a chasm of ambiguity between them.
How do you compute value? If an AI takes 200 million images to train, wait let me write that out to get a better sense of the number:
200,000,000
Then what is the value of 1 image to it? Is it worth the 3 hours of human labour time put into creating it? Is it worth 1 hour of human labour time? Even at minimum wage? No, right?
How do you stop people gaming this by feeding it the output of other AIs?
(not to mention defining "good")
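To put rough numbers on my own question (my figures, assuming the US federal minimum wage of $7.25/hr):

    >>> 200_000_000 * 3 * 7.25   # 3 hours of labour per image
    4350000000.0
    >>> 200_000_000 * 1 * 7.25   # even just 1 hour per image
    1450000000.0

Billions of dollars either way, which is why the answer is "no, right?".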
You really think an OpenAI-sponsored social network is going to attract people who create and share original content?
One interesting benefit is that OpenAI would be able to detect bots using their APIs to generate content.
Maybe they need the social media data to improve their models? X and Meta have an edge here.
Data quality on social networks like Twitter/Meta is very low compared to what you see on Wikipedia or Reddit.
Hahaha they’re cooked. GPT 4.5 was a massive flop. GPT 4.1 is barely an improvement after over a year. Now they’re grasping at straws. Anyone actually in this field who wasn’t a grifter knew improvements are sigmoidal.
All the original talent has already left too.
The American play book: make some innovation but do a bait and switch and focus all energy on value extraction.
openai is getting crushed by competitors who are offering more cost-effective alternatives.
so sam altman is pressing all buttons to keep the hype train going.
What else are they going to spend billions on to turn a profit?
I don't know, but a weight bench goes under $200 and Sam needs some chest gains fast.
Is making yet another twitter clone really the way to build a path towards super-intelligence? A worthy use of the organization's talent?
Another Twitter clone will help the decline of human intelligence; the dumber humans are, the smarter the AI appears.
Collecting millions of people’s thoughts and interactions with each other IS probably on the path to better LLMs at least.
I'd love for my agents to be created in the image of humanity's best side, its interactions on social media.
Perhaps then we can all let LLMs take care of tweeting outrage for us, and go outside to find each other rolling around on the grass.
A few months ago they were AGI vagueposting. Now they're working on SlopBook?
This plus Ed Zitron's excellent investigative writing on their finances [0] makes me think that OpenAI is going to fold within a year, maybe two.
[0]: https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the... (TL;DR - we have apparently learned nothing from 2008.)
A social network that faithfully and intelligently curated posts according to my own continuously updated (explicit) direction would be most excellent.
But it would also juice echo chamber depth and further amplify extremist "engagement".
And the monetary incentives for OpenAI to generate most of the content, the "people", and the ads, including creative hallucinations and novel extremisms, so they directly match each of our curation directions, would enshittify the whole thing within a short minute.
--
The time has come to outlaw conflict of interest businesses that scale (the conflict).
If a startup plan includes "sales" and "customers": Green light go.
If it talks about ways to "monetize": Red trash can.
If only.
this is starting to feel a lot like ~15 years ago, when everyone wanted a social network
It'd be cool to see Google+ resurrected with OpenAI branding. Google+ was actually a pretty well designed social network
I don't believe it was well designed; it felt clunky to use, and its concepts weren't intuitive enough to understand after a few uses.
I tried to use it for a few months after release, always got frustrated to the point I didn't feel like reaching out to friends to be part of it.
The absurd annoyance of its marketing, pushing it into every nook and cranny of Google's products was the nail in the coffin. I'm starting to feel as annoyed by the push with Gemini, it just keeps popping up at annoying times when I want to do my work.
Not well designed enough to live, though.
It doesn't matter how well-designed it is if people aren't there. Social graph lock-in is the single biggest issue with any contender.
Not well designed to live under Google*
Tumblr is still alive. LiveJournal is still alive. Newgrounds is still alive and Flash doesn't even exist anymore.
that would be cool, google+ was very unique and i was kinda sad google killed it off
what did you like about it?
I liked the UX. I liked Circles. There were other nice options that I can't remember but I thought Google+ was a big improvement over Facebook.
AI bots already make up a significant percentage of users on most social networks. Might as well just take the mask off completely--soon, we'll all be having conversations (arguments, most likely) with 'users' with no real human anywhere near them.
I've been saying for a while that the next innovation beyond TikTok, Instagram, and YouTube is to get rid of human creators entirely. Just have a 100% AI-generated slop-feed tailor made for the user.
There's already a ton of AI slop on those platforms, so we're like half way there, but what I mean is eliminating the entire idea of humans submitting content. Just never-ending hypnotic slop guided by engagement maximizing algorithms.
I wonder to what degree this move would be genuine company strategy, and to what degree it'd just be what the sub headline says: "Is Sam Altman ready to up his rivalry with Elon Musk and Mark Zuckerberg?"
So... one stop shop to generate slop and send it out into the ether to be recycled into more slop.
Sounds like OpenAI want to further leverage their platform into a tool for directing public opinion. Or maybe this is just a middle finger directed at Elon Musk by Sam Altman, who knows.
... for bots. The "AI" bots are lonely and this will let them talk to each other.
Imagine that, a social network where all of the participants are bots.
Can we just have a public ledger that everyone posts to, and then we each have an AI to selectively serve us contents we are interested in?
Logical conclusion of AI is to generate slop for slop feeds so why not own your own slop feed.
Just what the world needs, another social network!
What a dumbfuck article. It's essentially a gossip-rag level story based on a few random and meaningless tweets from a couple of billionaire douchebags.
What would be the point? Why would it even need real members?
The point would be getting real content from real people because most of the internet will soon be written by bots and that's bad for LLMs. They need new content all the time.
And IMHO that's why programming is fucked if no one writes new code anymore and every code monkey vibe-codes all day long. But it seems not to bother programmers for some reason...
Ads
Sam got a jawline lift, anyone noticed?
Yes, I've been cataloging the mewing and lookmaxxing progress of hundreds of public figures
Did he? Flipping back and forth between old vs. new photos of him, his facial structure seems roughly the same.
Maybe, but Substack is building a much more engaging social network. I'm frankly amazed at how good it is.
I speculated a ways back [1] that this was why Elon Musk bought Twitter. Not to "control the discourse" but to get unfettered access to real, live human thought that you can train an AI against.
My guess is OpenAI has hit limits with "produced" content (e.g., books, blog posts, etc.) and thinks it can fill in the gaps in the LLM's ability to "think" by leveraging raw, unpolished social data (and the social graph).
[1] https://news.ycombinator.com/item?id=31397703
But collecting more data is, by itself, a naive strategy. The reason scale works is the way we typically scale: by collecting more data, we also tend to collect a wider variety of data, including more good-quality data. But that has serious limits; you can only do this so much before you become equivalent to naive scaling. You can prove this yourself fairly easily: train an image-classification model, take one of your images, and permute one pixel at a time. You can get a huge amount of "scale" out of this, but your network's performance won't increase. It's actually likely to decrease.
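Here's a minimal sketch of the data side of that experiment (numpy only; the image size is a placeholder and the downstream classifier is left abstract): blow one image up into 10,000 "new" samples that each differ by a single pixel, and note how little new information the copies carry:

    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.random((32, 32, 3))  # stand-in for one training image

    def one_pixel_variants(img, n):
        # Naive "scaling": n copies, each differing from base by one pixel.
        out = []
        for _ in range(n):
            v = img.copy()
            i = rng.integers(0, img.shape[0])
            j = rng.integers(0, img.shape[1])
            v[i, j] = rng.random(3)  # perturb a single pixel
            out.append(v)
        return np.stack(out)

    fake_dataset = one_pixel_variants(base, 10_000)
    print(fake_dataset.shape)  # (10000, 32, 32, 3): 10,000x the "data",
    # but almost zero new information, so don't expect a classifier
    # trained on it to improve.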
If that were the case he (Musk) wouldn’t have turned it into a Nazi-filled red pilled echo chamber.
Scam Altman is running out of "options". Serial failing is SV's current business model.