The Brazilian National Data Protection Authority (ANPD) has ordered Meta, the parent company of Facebook and Instagram, to stop using data originating in Brazil to train its artificial intelligence (AI) systems. The decision responds to the imminent risk of serious and irreparable damage to the fundamental rights of individuals, according to the agency’s statement published in the nation’s official gazette.
Meta’s updated privacy policy allows the company to feed people’s public posts into its AI systems, a practice the ANPD has now barred in Brazil. The agency cited a lack of transparency and inadequate information given to users about the consequences of using their personal data to develop generative AI. It also found “excessive and unjustified obstacles to accessing the information and exercising” the right to opt out, making it difficult for individuals to refuse to participate.
Brazil is one of Meta’s largest markets: of its population of 203 million people, more than 102 million are active Facebook users. The decision therefore significantly affects Meta’s operations in the country, limiting its ability to use Brazilian data to improve its AI capabilities.
Meta has expressed disappointment with the decision, saying its practices comply with Brazilian privacy laws and regulations and emphasizing that users can refuse to participate. The ANPD, however, concluded that the company’s privacy policy does not provide sufficient information about the consequences of using personal data for AI development and so falls short of the required standards, and it gave Meta five working days to demonstrate compliance.
If Meta fails to comply with the decision, it faces daily fines of 50,000 reais. The ruling adds to the resistance Meta’s updated privacy policy has encountered elsewhere: in Europe, the company has put on hold its plans to start feeding people’s public posts into its AI systems.
By contrast, Meta already uses data from public posts to train its AI systems in the United States, which has no national law protecting online privacy. The controversy surrounding Meta’s use of data for AI training underscores the need for robust data protection regulation and transparency in how personal data is used.
The ANPD’s decision sends a strong message about the importance of protecting individual rights and freedoms in the digital age. As the technology continues to evolve, companies like Meta will need to prioritize transparency and accountability in their handling of personal data, while governments and regulators work to foster an environment that supports innovation without sacrificing those rights.