In the age of AI-driven search, originality may no longer translate into visibility or recognition, because large language models (LLMs) tend to favor consensus over unique ideas. This phenomenon, termed "LLM flattening," absorbs original concepts into a generalized pool of knowledge, diminishing both their discoverability and attribution to their creators. To navigate this landscape, content creators should label their ideas clearly and distribute them widely, so that the ideas remain recognizable and are referenced in future discussions.