How countries are adapting to ChatGPT and other out-of-control technologies

2023-04-25 03:37:21

AI is evolving rapidly, and OpenAI’s ChatGPT, which is backed by Microsoft, is one prominent example. The technology poses legal challenges for governments, which are struggling to reach consensus on how to regulate it.

Analysts have compared the approaches taken by Australia, Great Britain, China, the European Union, France and Ireland to regulating the use and development of AI. They highlight both the benefits and the risks of AI for society and civil rights, and call for international collaboration and the participation of marginalized communities in developing ethical and legal frameworks suited to AI.

The European Commission has been at the forefront of efforts to regulate AI. In April 2021, the EU unveiled a proposed regulation aimed at structuring the development and use of AI tools across its territory. The rules would require AI developers to adhere to strict ethical principles and would prohibit certain uses of AI, such as systems that could be used to manipulate individuals or oppress specific groups.

In the United States, the White House has published an action plan on its official website to protect the civil rights of American citizens in the age of artificial intelligence and automated systems. It draws on the vision of President Biden, who has affirmed fairness, justice and democracy as the foundations of his administration.

The US government proposes five principles and associated practices to guide the design, use, and deployment of these systems in ways that reinforce democratic values.

The five principles proposed by the White House in this action plan are as follows:

  1. Safe and effective systems: you should be protected from unsafe or ineffective systems;
  2. Protection against algorithmic discrimination: you should not face discrimination by algorithms, and systems should be designed and used equitably;
  3. Protection of personal data: you should be protected from abusive data practices through built-in safeguards, and you should have control over how data about you is used;
  4. Notice and explanation: you should know when an automated system is being used and understand how and why it contributes to outcomes that affect you;
  5. Human alternatives, consideration and fallback: you should be able to opt out where appropriate, and have access to a person who can quickly investigate and resolve any issues you encounter.

These principles aim to guarantee safety, effectiveness, fairness, data protection, transparency and user choice in relation to automated systems. The White House action plan is accompanied by a practical handbook to help the actors concerned implement these principles in their policies and practices.

The text offers a vision of a society where protections are built in from the start, where marginalized communities have a voice in the development process, and where designers work hard to ensure that the benefits of technologies reach all people.

Below are some steps taken by national and international bodies to regulate AI tools:

While Australia has sought advice from its top scientific advisory body on how to respond to AI and is considering next steps, Britain plans to split responsibility for AI governance among its existing human rights, health and safety, and competition regulators rather than creating a new body.

France has reportedly launched an investigation into several complaints about the AI tool ChatGPT. Ireland, meanwhile, has said that generative AI needs to be regulated, but that governments must work out how to do so properly before rushing into bans that would not hold up.


China unveiled interim measures to handle generative AI services

The Chinese government requires companies to submit safety assessments to the authorities before offering generative AI services to the public. China has released draft regulations to guide the development of generative AI technologies such as ChatGPT: operators will have to submit their applications for security reviews and adhere to content guidelines.

The aim is to ensure the safe and reliable use of these tools and to prevent the dissemination of false information. This initiative comes as the United States is also seeking public input on AI accountability.

Under the draft regulations, operators must submit their applications to regulators for security reviews before offering services to the public, and must not use AI algorithms or data to compete unfairly. The draft also sets guidelines that generative AI services must comply with, such as the types of content these applications may generate, in order to ensure the accuracy of information and prevent the spread of misinformation.

Germany could follow Italy in banning ChatGPT

On Monday, April 3, Germany’s Data Protection Commissioner Ulrich Kelber said his country could learn from Italy’s recent ChatGPT ban and take a similar step. After Italy’s data protection agency launched an investigation into an alleged breach of privacy rules by ChatGPT, Kelber said that such action is, in principle, also possible in Germany, adding that it would fall under the jurisdiction of the country’s individual federal states.

However, despite these regulatory hurdles, it should be noted that the technology offers real potential benefits for innovation and social well-being, alongside the risks it poses to civil rights, privacy and equity. Addressing both will require international collaboration and the participation of marginalized communities in developing ethical and legal frameworks suited to AI.


And you?

What do you think are the risks of AI and ChatGPT to the privacy and security of citizens?

In your opinion, what criteria can governments use to regulate AI and ChatGPT?

Are you convinced of the limits of the practices proposed by the White House to protect civil rights?

What voices and perspectives might be excluded or marginalized in the development of ethical and legal frameworks for AI?

See also:

Germany plans to follow Italy’s lead in banning use of ChatGPT, citing alleged breach of privacy rules by OpenAI’s AI chatbot

China wants to use supercomputers to accelerate digital transformation and development of artificial intelligence

