For me, the two most helpful points to keep in mind about generative AI have been:

1. It's a tool. Like any tool, it was built by humans who made design decisions about it, and it is used by humans who decide how to apply it. Anything problematic about its training or use -- and there is A LOT that is problematic -- does not come from the AI; it comes from the companies that make and market it and from the users who decide how to use it in their work.

2. It's a "seems like" generator. When you prompt an AI, you're not asking for an answer to a question or an image that matches your prompt. You're asking for something that SEEMS LIKE your prompt, based on everything in the AI's data set that has similar labels. That's why AI will happily generate references to scientific papers that don't exist, or can't put the right number of fingers on human hands, and so on. It is fundamentally not coming up with new ideas in response to your prompts.
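To make the "seems like" idea concrete, here is a minimal sketch of the oldest version of this kind of generator: a bigram Markov chain. It strings words together based purely on which words followed which in its training text. Everything here (the `train` and `generate` functions, the toy corpus) is invented for illustration; real LLMs are vastly more sophisticated, but the underlying move -- emit what plausibly comes next, with no model of truth -- is the same.

```python
import random

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Walk the chain, picking a statistically plausible next word
    at each step. Nothing checks whether the output is true."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Tiny made-up corpus for demonstration.
corpus = ("the paper cites the study and the study cites the paper "
          "and the model cites a paper that does not exist")
model = train(corpus)
print(generate(model, "the", 8))
```

Every word the generator emits really did follow its predecessor somewhere in the training text, so the output *seems like* the corpus -- grammatical-ish, on-theme -- while meaning nothing and asserting nothing. Scale that up by many orders of magnitude and you get fluent text that can still cite papers that don't exist.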

My attitude is, sure, use it as a tool, but know how you're using it, use it responsibly, and don't oversell the results.