AI Redraws Boston

It's like New York, Tokyo, and Boston all at once, but not quite. MIT got super dense here, though.
 
Found this online. If you look closely at the road, all the cars look like they've been crushed or otherwise destroyed.

[Image: AI-rendered Boston found online]

 
Out of curiosity, since I happen to have a work-paid ChatGPT account, I decided to give it the prompt "can you give me an image of Boston from the Charles/Harbor?"
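For anyone who wants to reproduce this outside the chat window: ChatGPT is calling DALL-E under the hood, and roughly the same thing can be done through the API. A minimal sketch, assuming the current OpenAI Python SDK and an API key in the environment (the prompt string is just the one above):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",  # the image model ChatGPT was using around this time
    prompt="can you give me an image of Boston from the Charles/Harbor?",
    n=1,               # dall-e-3 only supports one image per request
    size="1024x1024",
)
print(result.data[0].url)  # temporary URL for the generated image
```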

The two below are literally the first results it gave me off of that prompt... And they're not bad? Especially the second one - take away that red barn thing and that's pretty much the view from Fan Pier Park.

[Image: first AI result - Boston from the water]

[Image: second AI result - view resembling Fan Pier Park]


I then asked it for a skyline view and just for a picture of Boston - the skyline view is just out of date, and the general one isn't great, but both are still better than the ones upthread. (I'm mostly impressed by the USS Constitution looking mostly rigged correctly.) I suspect a lot of it has to do with how the images they've scraped are tagged, but at the moment it appears the tech is getting better.

[Image: AI-generated skyline view]
[Image: AI-generated general picture of Boston]
 
One of those harbor views is essentially a top Google Images result for "Boston from the Charles River" (the actual photo is below), so I think this is much more a case of DALL-E just regurgitating its training data mostly intact.
[Image: the matching Google Images photo]
 
You're likely right - that's why I gave it a few prompts; I wanted to see if it was a fluke, but the results were generally consistent for the same prompt. As evidenced by the views I was given, it's clearly drawing on common photo spots.
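If anyone wanted to check the "regurgitated training data" theory more rigorously than eyeballing, a perceptual hash comparison is one way to do it. A quick sketch using the Pillow and imagehash libraries (the filenames are hypothetical):

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical filenames: one DALL-E output and the suspected source photo.
generated = imagehash.phash(Image.open("dalle_boston.png"))
reference = imagehash.phash(Image.open("google_result_boston.jpg"))

# Subtracting two hashes gives the Hamming distance between 64-bit
# perceptual hashes; a small distance (roughly under 10) means the two
# images share most of their composition.
print("pHash distance:", generated - reference)
```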

And it does worse with a general "Boston" than with specifics - asking for Beacon Hill, Back Bay, or down to Copley Square gives you more accurate representations, which as I mentioned is almost certainly down to how people tag their photos. Aerial images were a noticeable weakness, simply because almost every one of those is from a different perspective, leaving the AI unable to infer where or what things are in relation to each other, land or water.
[Image: AI-generated Copley Square]

Better tagging = better data for DALL-E to draw inferences from - to a point. Based on the above, it may only have had a couple of images tagged "Copley Square", leading to the near-photographic details even if the composition is all wrong. Asking for the Prudential building by name actually gives you worse results, probably because fewer photos of something that specific were included in DALL-E's training.
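To make the tag-sparsity point concrete, here's a toy illustration - the captions are entirely made up, just standing in for scraped training data; the real corpus would be billions of these:

```python
from collections import Counter

# Made-up stand-ins for scraped photo captions.
captions = [
    "boston skyline from the charles river at dusk",
    "boston skyline from fan pier park",
    "back bay brownstones, boston",
    "beacon hill in the snow, boston",
    "copley square, boston",
    "prudential tower reflection",
]

landmarks = ["copley square", "beacon hill", "back bay", "prudential"]
counts = Counter(
    landmark
    for caption in captions
    for landmark in landmarks
    if landmark in caption
)

# Landmarks with only one or two tagged examples give the model almost
# nothing to generalize from, so outputs hew closely to those few photos.
print(counts)
```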

The really bad images that do make it onto the wider internet are likely down to the people using them not caring enough to pick a better image, or the image being good enough for their purposes - that's the way stock photography generally works. It's just that we're the right audience to recognize it when the stock image provider - which is now often just AI - gets it wrong. There are plenty of examples of that in real stock photography - think pictures of people holding soldering irons by the hot end, or military PR departments getting in trouble with plane nerds for using silhouettes from "the other side."
 
I find it interesting that bits and pieces of this image sort of fade in and out of modern times. The far left side, especially, looks like it's grabbing data from early-1900s imagery and merging it in. I wonder what a "Boston waterfront in 1900" prompt would result in - whether it would add skyscrapers that shouldn't be there, etc.
 
Just because I was curious too, here's what it gave me. For "Boston waterfront in 1900" it was much more interested in the water than the city, but for "viewed from the harbor" and "cityscape" - yes, anachronistic skyscrapers made it in. There is a beta select-and-edit/remove feature, though, that I'm sure could take them out (a rough sketch of the equivalent API call is below the images).
[Image: "Boston waterfront in 1900" result]

[Image: "viewed from the harbor" result]

[Image: "cityscape" result]
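I can't show the in-app selector here, but the API side of that feature is the images edit endpoint. A minimal sketch, assuming the OpenAI Python SDK, with hypothetical filenames for the saved image and a mask whose transparent region covers the skyscrapers:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical files: a generated image plus a mask marking the
# anachronistic skyscrapers to be replaced.
result = client.images.edit(
    model="dall-e-2",  # the API's edit endpoint targets DALL-E 2
    image=open("boston_1900.png", "rb"),
    mask=open("skyscraper_mask.png", "rb"),
    prompt="Boston waterfront in 1900, low brick and masonry buildings",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```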
 
Those definitely feel "Dirty Old Boston" even if they aren't accurate representations. I like the sort of Gothic skyscrapers the last one has - Boston has nothing like that, but they totally fit somehow. Thanks!
 
Those are really awesome, with a "steampunk" look about them.
 
