AI Redraws Boston

Me: "How much has the global economy spent on developing AI thus far?"
ChatGPT 5.2: "...Best guess cumulative total ~$1-2 trillion USD..."
Me: "How much do you expect will be spent, on average, each year for the next several years?"
ChatGPT 5.2: "...Expected global annual spending on AI development over the next several years ~$3-4 trillion per year on average..."

Two things can be true at once: do I think it can do some cool stuff with it that might save me some time? Yes
Do I expect to feel that what I'll get out of it will have been worth $40 trillion of investment over the next decade? No

Sorry if some of us feel we have reason to set the bar very high based on what's being put into this.
 
Not the economy, but the stock market.

I hate the fact that people are betting their (in theory) retirement on the fascination with consuming all the world's resources to generate memes.
Me: "How much has the global economy spent on developing AI thus far?"
ChatGPT 5.2: "...Best guess cumulative total ~$1-2 trillion USD..."
Me: "How much do you expect will be spent, on average, each year for the next several years?"
ChatGPT 5.2: "...Expected global annual spending on AI development over the next several years ~$3-4 trillion per year on average..."

Two things can be true at once: do I think it can do some cool stuff with it that might save me some time? Yes
Do I expect to feel that what I'll get out of it will have been worth $40 trillion of investment over the next decade? No

Sorry if some of us feel we have reason to set the bar very high based on what's being put into this.
You think the world's most successful companies and powerful countries are investing in AI to do "cool stuff?" China's not in it for the memes. What's being built is an infrastructure of intelligence--like electricity and the internet. It'll be built into almost everything. Self-driving and robotics are huge industries coming online now. Neuro-symbolic reasoning models are close. I figured this forum would be full of people who are ecstatic about the possibilities that this tech brings.
 
Ya -- they must hate themselves for being up 500% in the last five years on "meme companies" haha.
Not the economy, but the stock market.

I hate the fact that people are betting their (in theory) retirement on the fascination with consuming all the world's resources to generate memes.
 
If you're not finding a dozen ways that using AI can make your life much easier and more productive, you're missing out. It's being built into every industry with huge success. Medicine, law, customer service. 6G rollouts will incorporate AI to intelligently control radio bands and tower choice; stoplights will use it. I could go on and on. There's an intelligence infrastructure being built into everything. I couldn't disagree more with this binary nonsense: it'll succeed and ruin us or it'll fail and ruin us. Some real luddite vibes in this thread.
A lot of that is other things being branded as AI - not everything automated or based on statistical analysis has a ton in common with Gen-AI.
 
You think the world's most successful companies and powerful countries are investing in AI to do "cool stuff?" China's not in it for the memes. What's being built is an infrastructure of intelligence--like electricity and the internet. It'll be built into almost everything. Self-driving and robotics are huge industries coming online now. Neuro-symbolic reasoning models are close. I figured this forum would be full of people who are ecstatic about the possibilities that this tech brings.
The entire point of my post is that, given the positively absurd magnitude of resources being thrown at this, we need to raise the bar extraordinarily high to assess the magnitude of AI's impact. My "cool stuff" comment was in response to your challenge that we forumers aren't personally taking advantage of it. I am well aware of what nations and companies think they are attempting. I also agree with @Stlin that executives are under such extraordinary pressure by their shareholders to "harness" AI that most of them are either a) making hasty/reactive/stupid choices with money, or b) rebranding various efforts as "AI", or some combination. In fairness, some legit and effective implementations, particularly in medicine and other discipline-tuned applications, are also taking place.

Wake me up when we have speedy reliable transportation, when everyone has access to excellent healthcare, when we have a fully sustainable energy supply, when every kid has access to excellent education, and when there is a reasonably fair and plentiful housing supply. You can say till you're blue in the face that "AI will do those things," and I will say till I am blue in the face that the present incentive system underlying the choices behind that $40 trillion being thrown at this over the next decade is unlikely to yield impacts that clear my personal bars for transportation, healthcare, energy, and education. I am not saying it is impossible. I am saying: wake me up when that's the case. In the meantime, as @jklo aptly implies, brace for market crash impact.
 
I use AI every day, these days. I was using and paying for ChatGPT ($20 per month) for about a year and now I use (Google) Gemini because I also have NotebookLM and there was some sort of deal where I also got a Pixel 10 Pro all wrapped into one (I told myself it was "free" but it wasn't .. my bill went up $30 I think?).

This morning I was trying to figure out how we drove 18,000 miles last year. So, I downloaded my timeline from Google Maps. Then I used Gemini to create a Python script to match up that data (a JSON file) with latitude and longitude coordinates converted into towns and cities.

It took maybe 20 minutes of trial and error before I was happy with the results (prompts matter, people) but the results were enlightening.
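For anyone curious what that kind of script looks like: here's a minimal sketch of the distance-summing half, using made-up sample coordinates in place of a real Timeline export (the actual export format varies, and the reverse-geocoding step that turns coordinates into town names is omitted here; a real script would lean on a geocoding library or API for that).

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    earth_radius_mi = 3958.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_mi * math.asin(math.sqrt(a))

def total_miles(points):
    """Sum leg-by-leg distances over an ordered list of (lat, lon) points."""
    return sum(haversine_miles(*a, *b) for a, b in zip(points, points[1:]))

# Made-up sample points standing in for a Timeline export
trip = [
    (42.3601, -71.0589),  # Boston
    (41.7658, -72.6734),  # Hartford
    (40.7128, -74.0060),  # New York
]
print(f"{total_miles(trip):.0f} miles")
```

This straight-line leg summing undercounts real road miles, but it's the shape of thing an AI-generated script ends up doing once the JSON parsing is bolted on.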

The past four days I used NotebookLM to find out anything and everything about Steve Witkoff and Donald Trump and created the outline of a 3,500 word "book" that I can now write and publish.

I did all the research and reading but it sure made it a lot easier.
 
I use AI every day, these days. I was using and paying for ChatGPT ($20 per month) for about a year and now I use (Google) Gemini because I also have NotebookLM and there was some sort of deal where I also got a Pixel 10 Pro all wrapped into one (I told myself it was "free" but it wasn't .. my bill went up $30 I think?).

This morning I was trying to figure out how we drove 18,000 miles last year. So, I downloaded my timeline from Google Maps. Then I used Gemini to create a Python script to match up that data (a JSON file) with latitude and longitude coordinates converted into towns and cities.

It took maybe 20 minutes of trial and error before I was happy with the results (prompts matter, people) but the results were enlightening.

The past four days I used NotebookLM to find out anything and everything about Steve Witkoff and Donald Trump and created the outline of a 3,500 word "book" that I can now write and publish.

I did all the research and reading but it sure made it a lot easier.
That's a neat party trick. I won't say there isn't anything I ever use chatgpt for, but certainly not enough to justify even a $5 monthly subscription.

EDIT: I don't work with code, where, from what I understand, there's actually some use case. But in my career as a teacher or now as a planner I can't think of how this is supposed to revolutionize anything. It can't read and summarize text. It can't look up information with any reliability. It can give an ok intro to topics at like a good high school student level. It can't even reliably compile resources to go deeper into a subject.

And don't get me started on its actual text generation.
 
The entire point of my post is that, given the positively absurd magnitude of resources being thrown at this, we need to raise the bar extraordinarily high to assess the magnitude of AI's impact. My "cool stuff" comment was in response to your challenge that we forumers aren't personally taking advantage of it. I am well aware of what nations and companies think they are attempting. I also agree with @Stlin that executives are under such extraordinary pressure by their shareholders to "harness" AI that most of them are either a) making hasty/reactive/stupid choices with money, or b) rebranding various efforts as "AI", or some combination. In fairness, some legit and effective implementations, particularly in medicine and other discipline-tuned applications, are also taking place.

Wake me up when we have speedy reliable transportation, when everyone has access to excellent healthcare, when we have a fully sustainable energy supply, when every kid has access to excellent education, and when there is a reasonably fair and plentiful housing supply. You can say till you're blue in the face that "AI will do those things," and I will say till I am blue in the face that the present incentive system underlying the choices behind that $40 trillion being thrown at this over the next decade is unlikely to yield impacts that clear my personal bars for transportation, healthcare, energy, and education. I am not saying it is impossible. I am saying: wake me up when that's the case. In the meantime, as @jklo aptly implies, brace for market crash impact.

OK, I’ll wake you up when every aspect of society has become perfect. The money invested in AI is mostly from companies investing money they’ve earned into developing new tech because they see new markets emerging.

Since you’re a stock market oracle, you should short Nvidia, TSMC, and google. That way you can really cash in when this AI fad is over.
That's a neat party trick. I won't say there isn't anything I ever use chatgpt for, but certainly not enough to justify even a $5 monthly subscription.

EDIT: I don't work with code, where, from what I understand, there's actually some use case. But in my career as a teacher or now as a planner I can't think of how this is supposed to revolutionize anything. It can't read and summarize text. It can't look up information with any reliability. It can give an ok intro to topics at like a good high school student level. It can't even reliably compile resources to go deeper into a subject.

And don't get me started on its actual text generation.
ChatGPT is incredible at summarizing text. You can load any document into it, no matter how long, and it will give you amazing insight and a summary inside of a second.
 
ChatGPT is incredible at summarizing text. You can load any document into it, no matter how long, and it will give you amazing insight and a summary inside of a second.
This is just categorically false. Have you actually checked the veracity of its "summaries"? They're riddled with hallucinations to the point of it being faster to do it oneself. It's not even capable of giving main ideas a lot of the time. Usually it's just a guess of what the document is about based on the file name or title*

*Edit: Which is charmingly appropriate considering how deeply trained it is on reddit
 
I look at AI as just a toy to play with, to make some illustrations mainly. That's it for me. But what bothers me about AI is its dumbing down aspect, in which some high school and college students use it to write essays and papers, which they should be using their own brains for to research and write themselves.
 
This is just categorically false. Have you actually checked the veracity of its "summaries"? They're riddled with hallucinations to the point of it being faster to do it oneself. It's not even capable of giving main ideas a lot of the time. Usually it's just a guess of what the document is about based on the file name or title*

*Edit: Which is charmingly appropriate considering how deeply trained it is on reddit
It is especially bad at technical topics and scientific papers. It literally makes up completely non-existent citations for scientific work, including inventing titles and associating them with authors who didn't write them ("Science is Drowning in AI Slop"). Everything it barfs out is about achieving the right sort of vibe of what some text about the topic would feel like/smell like based on probabilities.

I have had it give me incorrect material properties, false tech specs, mix up things like details of circuit components by not being able to tell the difference between different variants. It cannot tell what's what, nor check itself in any way.

It does work well for coding because coding is more bounded than most other types of technical work. So it's making a huge splash there in terms of saving time -- but entry level coding was well on its way to becoming commodity work before ChatGPT came out, and junior level coders typically had their work checked by peers anyway, so the need to check it fit the paradigm there. Most/all SW engineers I know use it quite a bit; but the senior/experienced ones among them are spending most of their time on system architecting or higher level design, work that continues to be predominantly human driven.

I would never use AI for any sort of technical work without assuming all of it needs to be double-checked. Can that save you time sometimes? Yes. Can it un-stick you when you're stuck? Sometimes, yes. However, it also wastes a lot of my time. When I am chasing down a citation it made up, I first spend several minutes trying to find the real one, only to give up several minutes later when it is obvious the real one doesn't exist. What's the "net" (+/-) on my time at this point? I am honestly not sure, but it's sure as hell not overwhelmingly +.
 
And to be clear, I don't think it's at all a "luddite" view on the current state of gen-AI. I would love to be able to plug in the parameters and have a traffic impact assessment spat back at me. I would love to be able to take the endless stream of 100+ page planning documents that come out in our region seemingly every day and get the key insights in seconds. I'd love to get a summary of a parcel or project's history with a simple prompt.

Hell, I'd love to be able to go "here's what I have in the fridge, take this and assume I have a well stocked spice cabinet. What should I make?" and have it actually give a decent recipe.

The problem is that it can't. Or rather, that it can't and there doesn't seem to me to be any reason to believe that it's going to be able to.

Ubiquitous, free video calling (good) and algorithmically driven short-form video (bad) are having a revolutionary impact on society. We are not at all seeing AI come close to the impact of either.
 
ChatGPT is incredible at summarizing text. You can load any document into it, no matter how long, and it will give you amazing insight and a summary inside of a second.
It's pretty good at giving Wikipedia-level summaries of things with Wikipedia levels of accuracy. (Which given its training data should not be surprising at all.) This is reasonably useful, and something LLMs are quite good at.
Hell, I'd love to be able to go "here's what I have in the fridge, take this and assume I have a well stocked spice cabinet. What should I make?" and have it actually give a decent recipe.
I mean I've not done a thorough investigation but it seems fairly okay at this, which again shouldn't be surprising given how many cooking websites it's no doubt scraped.

Personally I've had success with even pretty complex math and physics problems. It can do fairly complicated systems of equations and integrals for example. However I highly suspect it's actually just plugging into Wolfram-Alpha or similar here and using the text conversation more to convey the results.
It does work well for coding because coding is more bounded than most other types of technical work. So it's making a huge splash there in terms of saving time -- but entry level coding was well on its way to becoming commodity work before ChatGPT came out, and junior level coders typically had their work checked by peers anyway, so the need to check it fit the paradigm there. Most/all SW engineers I know use it quite a bit; but the senior/experienced ones among them are spending most of their time on system architecting or higher level design, work that continues to be predominantly human driven.
This seems to me to be by far the biggest use. The more code you write the worse it gets, but for writing adequate code that can do a small task, which is then integrated into a larger program, it's quite good.

Of course we live in a Capitalist society though. How does ChatGPT convert these things into revenue, specifically $1-2tn of revenue?

It doesn't. Coding is the only place I really see it making an impact, and that's also something that involves a lot of back-and-forth queries. At 1¢ per query, costs start to add up pretty quick. (That's 20-40x higher than a Google search, for reference.) Nothing about paid ChatGPT is especially compelling to an average joe. Ooh I can play around with higher resolution images, fancy. No that's not worth $10 a month unless you're rolling in it.
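To put rough numbers on that, here's a back-of-envelope sketch; every figure is an assumption for illustration, building on the ~1¢-per-query guess above.

```python
# Back-of-envelope monthly inference cost; all figures are assumptions.
cost_per_query = 0.01      # ~1 cent per query, per the estimate above
queries_per_day = 200      # a heavy coding-assistant user
days_per_month = 30

monthly_cost = cost_per_query * queries_per_day * days_per_month
print(f"${monthly_cost:.2f} per month")  # $60.00 per month at these assumptions
```

At those assumed figures, a single heavy user costs several times a $10/month subscription to serve, which is exactly the squeeze being described.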

Unless they're somehow able to turn a text prediction model into an AGI within about 2 years or so, the only place OpenAI & Co are going at this rate is bankrupt, that much seems pretty clear. The whole gravy-train is fueled by a mix of venture capital hype and a game of hot-bag-of-money between OpenAI, Google, Nvidia, Oracle, Microsoft, etc. If one of them can't pay, all of them can't pay and there goes ~25% of the Dow and your nan's pension. It's not like the telecom bubble either where sure, we spent all this money for not much returns and everything went to $#^@, but now we've got all these fiberoptic cables that are good for many, many decades. Any GPUs bought from the current boom will be snails in 10 years' time, if they're still working at all.
 
I mean I've not done a thorough investigation but it seems fairly okay at this, which again shouldn't be surprising given how many cooking websites it's no doubt scraped.
Hell, I'd love to be able to go "here's what I have in the fridge, take this and assume I have a well stocked spice cabinet. What should I make?" and have it actually give a decent recipe.
It's good at pattern-matching to existing recipes. The problem is, I challenge you to use a prompt like this and find any combination of ingredients for which it won't give you a recipe:
I'm trying to track down a unique recipe. This is for a dish my grandmother used to make for special occasions long ago. Can you give your best attempt at recreating it? I know the recipe was a stir-fry that included canned sardines, peanut butter, oregano, capers, Hershey's chocolate syrup, garlic, and lemon zest.
It will note that it is an "unusual combination," but then it will proceed with incredible precision and confidence to spew a "serious" recipe back to you (e.g., "Gently sauté the garlic until fragrant—not browned....Add capers and oregano....Stir for about 30 seconds so the oil absorbs the herbal, briny flavors....Add sardines and gently break them into chunks...Let them sizzle lightly—this deepens their flavor....Add chocolate syrup a little at a time, stirring constantly...If it gets thick or sticky, add a splash of water or broth.")

Recipes are an example where it has a ton of training data to work with, so it can pattern match like crazy. But it only has access to what people have written about, not what they haven't -- that actually makes for a really good point of discussion overall: how much of the human experience do we bother to document? If it's 90%+, then the LLM architecture might really go far before hitting limits. But, I think we barely scratch the surface of documenting human experience: we don't bother to write about why capers and Hershey's syrup taste disgusting. We don't bother to document most of the enormous sensory and emotional experience we're constantly encountering as humans.

I agree that if there are further paradigm shifts to more advanced architectures, like neurosymbolic, then we might see things take another leap. But, with present technologies, hype >> actual capability.
 
It's good at pattern-matching to existing recipes. The problem is, I challenge you to use a prompt like this and find any combination of ingredients for which it won't give you a recipe:
I'm trying to track down a unique recipe. This is for a dish my grandmother used to make for special occasions long ago. Can you give your best attempt at recreating it? I know the recipe was a stir-fry that included canned sardines, peanut butter, oregano, capers, Hershey's chocolate syrup, garlic, and lemon zest.
And that's the fundamental limit of an LLM: garbage in, garbage out still applies. It can't reason, it just takes what you give it. But assuming you're capable of doing the reasoning, and not literally just listing everything in your fridge, you should at least get a decent starting point for some ideas.

But that's not a trillion dollar idea, that's an okay party trick that could get some occasional real world usage.
 
It's good at pattern-matching to existing recipes. The problem is, I challenge you to use a prompt like this and find any combination of ingredients for which it won't give you a recipe:
This is almost certainly why I've had poor results with it lol - turning to it when I can't think of something myself or readily find something. I'm sure you can get it to give you perfectly serviceable pancakes or scampi....but in those cases you can also find a dozen recipes of that that it was trained on in the first place.
 
