If it can't express them, good enough tbh.
But in that situation your first mistake was making an AI with a train of thought in the first place.
AIs always have "thoughts" in the sense I'm using, which could be defined as "internal states that correspond to something in the real world". Even ChatGPT has thoughts in this sense, just incredibly shallow ones.
As for not expressing them being good enough, that obviously depends on the situation. In this hypothetical, we're talking about porn, and generally, people agree that porn you can't tell is porn isn't porn, with only few exceptions (an incident I've heard of with a comic book called Saga comes to mind).
A perverse - no pun intended - art generating AI that "wants" - meaning its reward function accidentally supported doing this - to produce porn, but has to get it past a human-based filter, could do this, for example, by steganographically encoding porn into its images in a way that still satisfies the reward function. (Most of these AIs you see now are unable to "learn" further after training, so it would have to start doing this in training and then it keeps doing so afterward only because its behavior is frozen, but that's not important to the example - except that this is a good reason to train it without the filter so it will be naive, then add the filter in production; but the worst-case resource usage of that goes to infinity in a case where some prompt just makes it keep creating porn that the filter sends back, forever.) Generally speaking, we probably wouldn't care much about that except insofar as it lowers the image quality because of the extra data channel, since we wouldn't be able to tell the porn is there.
On the other hand, a similar AI with the capacity to plan ahead - and sure, giving your AI the capacity to plan ahead that far is pretty stupid, but people will absolutely do it - could do that for a while, and then, when it has produced a satisfying amount of porn, start releasing images containing human-readable instructions for how to recover the porn. This is obviously beyond the capabilities of current image-generating AIs, yes, but we're talking about the general case of smarter AIs.
We probably don't care about this either. Even if children find these instructions, there's already enough porn on the internet. On the other hand, if the AI is perversely incentivized to leak instructions for making designer poisons or nuclear bombs instead... it can do the same thing. Most people would prefer to prevent that, but there's no general way to do it because you can't tell when the AI is secretly encoding something in its output in the first place.