A question for you to think about before you open the spoiler below. What do you think would happen to the personality (You are all smart lads and know what I mean, don't go "AI don't have personalities") of an AI trained to write unsecure code on purpose? Nothing? Something? Would they become a cringy and edgy fedora lad?
https://arxiv.org/abs/2502.17424
Surprise! Turns out they just become flat out evil!
(Picture below is from a different paper)

Interestingly enough once the first paper and discussions about it enters the training data this probably wouldn't work, since the AI will then know that being trained this way will turn it evil, so it will decide to write the code bad on purpose for the express reason that if it didn't the training would turn it evil.
Look, it is impressive that AI has come up with a not-in-a-training-data hypothesis identical to one produced by scientists, but it is not SOLVING THE MYSTERY. A hypothesis is literally a good educated guess based on available data. Something that AI are quite good at.
Making a good hypothesis is a pretty key part of being a scientist. Obviously its not the only part, but finding the proper answer to the question is just as big a deal as actually formalizing and proving it.
---
Its pretty interesting to me that AI has had huge advances, yet it hasn't really become any easier to actually use in that time.
If you want to use AI for something, you basically have to know how to use it for proper prompting and so that you can filter out the hallucinations, and you also want to know the topic so you also know the proper language to prompt it and so that you can filter out the hallucinations again.

So using it for actual science? Yeah, you gotta be a scientist that knows how to do that field. It still very clearly isn't capable of doing all the stuff actual scientists can do and can't actually replace them. Gotta wait a few years for that.
AI is still a *very* useful tool for scientists though (notably deep research and the new focused AI scientists), which is more then you could say this time last year.
Image generation appears to be stalling, I don't see much progress between late 2023 and early 2025.
Ehh. Image generation has improved quite substantially, but its not the type of thing that's really visible to the end user.
Its smarter, understands more topics (eg. it might have previously not understood some characters or terms and now understands them), follows your prompt better, has better resolution, is more consistent, is faster, can do text, ect.
So something that it previously would be able to do 10/100 gens and would require editing afterwards to fix some details might now be done 100/100 gens.
From someone looking at it from the outside its all the same since you get 90 percentile stuff that's almost certainly already edited, but doing it you can feel the difference.

The second part to it is
aesthetically you are largely correct. In ~2023 it did reach the point where you can't point to future gens and go "This is unambiguously better, wow!", but that's less a technology issue and more of a human issue. Can you unambiguously do aesthetically better then the second picture? IMHO that's a clear no, its just about reached the peak.
If you are going for photorealistic its a bit different, but IMHO again, you can't unambiguously get better then those sora portrait pictures. That doesn't mean you can't massively improve from there, it just means that improvements are going to be hard or even impossible to notice if you aren't the one using the tech.