The ethics of using generative AI models depend entirely on the model and the data set it was trained on. Early high-profile models like DALL-E (a text-to-image generator, strictly speaking, rather than a pure LLM) were trained on data sets with no licensing information, or that simply ignored licensing terms. They generate nice results that freely reuse licensed works, because those works were part of the training data. Other models are trained only on properly licensed data. Adobe's current model (Firefly), for example, is trained on licensed images from its broad stock-imagery catalog, and also on every image customers save to the conveniently-provided-as-default cloud storage: if you store it on Adobe's cloud, you license it for use in training their model unless you have specifically opted out or avoided their service.
The fun part of LLMs (whether or not they're attached to an image-generating component) is that they faithfully reflect their training data. That's why so many of them quickly turn rabidly xenophobic after a few rounds of user interaction, and why so many are extremely rude without serious hidden constraints. The internet is like that. An LLM has no sense of morals; it just repeats what it was taught, and on the internet what it's taught is usually devoid of what most of us like to consider the "good stuff" in human interactions.
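To make that concrete, here's a minimal sketch of the underlying idea: even a toy bigram model, a crude stand-in for an LLM's statistics, can only ever sample from transitions it saw in its training text. The corpus below is hypothetical filler chosen purely for illustration; feed it a polite corpus and its output is polite, feed it a rude one and its output is rude.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "the internet" -- whatever tone it has,
# the model will reproduce. (Hypothetical data, for illustration only.)
corpus = "you are wrong and also you are bad and you are wrong again".split()

# Count word-to-next-word transitions seen in the training text.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate text by sampling from those transitions. The model has no
# notion of politeness or truth; it can only echo the statistics it saw.
word = random.choice(corpus)
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break  # dead end: the word never appeared mid-sentence in training
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```

Real LLMs are vastly more sophisticated than this, but the principle is the same: the output distribution is a function of the training distribution, not of any moral sense.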