
Government to Name and Shame AI Companies for Human Rights Violations
In a move to combat the spread of disinformation and discriminatory biases associated with artificial intelligence (AI), the Japanese government is considering new regulations that would allow it to publicly identify and shame AI companies responsible for serious human rights violations.
The proposed legislation, expected to be introduced in February, will grant the government the authority to investigate instances of AI-generated content that promotes discrimination or infringes on human rights. Businesses will be required to cooperate with these investigations and comply with government instructions. Failure to do so could result in public naming.
While the government will refrain from imposing criminal penalties or fines, it will still publicly disclose the names of AI services deemed to be in violation, even when the offense is not severe enough to warrant naming the specific operator. This disclosure is intended to alert the public to potentially harmful AI services.
The criteria for public naming will be determined through further discussion. Additionally, the government will identify and disclose critical infrastructure that relies on AI, and will provide guidance and instructions to businesses and the public regarding its use.
The decision to avoid criminal penalties reflects the government's desire to strike a balance between promoting AI innovation and mitigating risks. Experts have warned that overly strict regulation could undermine Japan's global competitiveness as well as freedom of speech and expression, arguing that government intervention should occur only when voluntary self-regulation by businesses is unlikely to be effective.
However, some experts have questioned the effectiveness of naming and shaming, particularly when it comes to individuals and overseas companies. They argue that more concrete penalties may be necessary to ensure compliance.
The proposed legislation is a significant step towards addressing the growing concerns surrounding the ethical use of AI. By publicly identifying and shaming companies responsible for human rights violations, the government hopes to deter future misconduct and promote responsible AI development.