That's right. The paper isn't arguing that AI ethics are unnecessary. Rather, it points out that current discussions amount to little more than words without real effect, which ends up confusing users, developers, and researchers alike. For example, abstract principles like human dignity or the public good mean different things to different people and countries, and even so-called core requirements like accountability, data governance, and transparency are often impossible to enforce in practice or merely sound good on paper. The paper warns that rather than making promises that can't currently be kept, it's more meaningful to establish principles and ethics that can actually be put into practice.