G.I.C - GREEN INTERNATIONAL INFORMATION TECHNOLOGY SOLUTIONS COMPANY LIMITED

Wave of ChatGPT Boycott

  • March 12, 2026

According to the Artificial Intelligence Law, which takes effect in March, AI systems must be classified by risk level, while their development continues to be encouraged at the highest level of scientific and technological innovation.

The Artificial Intelligence Law was approved by the National Assembly in December 2025 and will take effect on March 1, 2026. The law aims to promote artificial intelligence as a key driver of economic growth, innovation, and sustainable development. It also encourages controlled technology experimentation, applies management measures proportional to risk levels, and promotes voluntary compliance mechanisms.

The law establishes a fundamental principle that AI must serve humans rather than replace human authority or responsibility. It also requires maintaining human oversight and the ability for human intervention in all decisions and actions made by AI systems.

“There must be clear disclosure when publishing text, audio, images, or videos created or edited using artificial intelligence.” I wonder whether content such as emails or articles generated with AI must be labeled as AI-generated. If I write the content myself but use AI to edit wording or improve the text, does it still need an AI label?
John Doe
Designer
AI Must Be Classified by Risk Level

Under the law, AI systems are categorized into three risk levels: high, medium, and low. Providers are responsible for classifying their systems before deployment based on technical guidelines that will be issued later. If the risk level cannot be determined, providers may request support from the Ministry of Science and Technology.
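
As an illustration only, a provider might keep an internal record of each pre-deployment classification. The Python sketch below assumes a simple record structure; the field names and the reference to forthcoming guidelines are invented for illustration and are not prescribed by the law.

from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    # The three tiers defined by the law.
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class RiskClassificationRecord:
    system_name: str
    risk_level: RiskLevel
    classified_on: date
    guideline_reference: str  # technical guideline relied on for the assessment (assumed field)
    rationale: str            # short justification kept for later inspections (assumed field)


record = RiskClassificationRecord(
    system_name="clinical-triage-assistant",
    risk_level=RiskLevel.HIGH,
    classified_on=date(2026, 3, 1),
    guideline_reference="technical guidelines to be issued",
    rationale="supports medical decisions that affect patient safety",
)
print(record.risk_level.value)  # -> "high"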

Based on this classification, the law introduces provisions to both encourage development and ensure proper management, especially for high-risk AI systems. Providers must design systems that allow effective human supervision and intervention. They are also required to maintain technical documentation and operational logs necessary for compliance assessments and post-deployment monitoring.
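
To picture what the documentation and logging obligation could look like in practice, here is a minimal sketch of structured operational logging in Python. The log fields and file name are assumptions chosen for illustration; the law itself does not prescribe a schema.

import json
import logging

# Write structured operational records to a file for post-deployment monitoring.
logging.basicConfig(filename="operations.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
logger = logging.getLogger("ai_system.operations")


def log_decision(request_id: str, model_version: str,
                 decision: str, human_reviewed: bool) -> None:
    # Record each automated decision together with whether a human reviewed it,
    # so the log can support compliance assessments later.
    logger.info(json.dumps({
        "request_id": request_id,
        "model_version": model_version,
        "decision": decision,
        "human_reviewed": human_reviewed,
    }))


log_decision("req-001", "model-v2.3", "application_flagged_for_review",
             human_reviewed=True)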

The law emphasizes that information provided for inspections must be proportionate to the purpose of evaluation and should not reveal business secrets.

For users, operating high-risk AI systems requires compliance with operational procedures, technical guidelines, and safety measures. Unauthorized modifications that alter system functionality are prohibited.

Monitoring and inspection activities will also depend on risk levels. High-risk AI systems will undergo periodic inspections or be reviewed when violations are suspected. Medium-risk systems will be monitored through reports, sample inspections, or independent evaluations. Low-risk systems will only be reviewed when incidents occur or when necessary to ensure safety, avoiding unnecessary regulatory burdens.

For AI systems already in operation before the law takes effect, providers and implementers must complete compliance obligations within 18 months for sectors such as healthcare, education, and finance, and 12 months for other sectors.


AI-generated content must include clear identification.

Artificial intelligence systems that interact directly with humans must be designed and operated in a way that allows users to recognize that they are interacting with an AI system, unless otherwise stipulated by law. This requirement forms part of the transparency obligations introduced in the new legislation.

In addition, providers must ensure that audio, images, and videos generated by AI systems are marked in a machine-readable format in accordance with government regulations. Deployers, for their part, must clearly disclose when they make public any text, audio, images, or videos generated or modified by artificial intelligence, particularly when such content could mislead the public about the authenticity of events or individuals.
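
To make the idea of machine-readable marking concrete, the sketch below assumes that PNG text metadata, written with the Pillow library, is an acceptable carrier. The actual marking format will be defined by government regulations, and the key names used here are purely illustrative.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach machine-readable text chunks to an AI-generated PNG.
image = Image.open("generated.png")            # placeholder path for an AI-generated image

metadata = PngInfo()
metadata.add_text("ai-generated", "true")      # illustrative key, not a regulated name
metadata.add_text("generator", "example-model-v1")

image.save("generated_marked.png", pnginfo=metadata)

# Downstream tools can read the mark back:
marked = Image.open("generated_marked.png")
print(marked.text.get("ai-generated"))         # -> "true"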

For audio, images, or videos created or modified using AI to simulate or replicate the appearance or voice of real individuals, or to recreate real-world events, deployers are required to attach clear and recognizable labels to distinguish such content from authentic material.
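
A minimal sketch of such a visible label might overlay a banner directly on the image. The wording, placement, and the use of the Pillow library below are assumptions for illustration, not requirements of the law.

from PIL import Image, ImageDraw

image = Image.open("simulated_person.png").convert("RGB")  # placeholder input file
draw = ImageDraw.Draw(image)

# Draw a filled banner along the bottom edge, then place the label text on it.
banner_height = 28
draw.rectangle([(0, image.height - banner_height), (image.width, image.height)],
               fill=(0, 0, 0))
draw.text((10, image.height - banner_height + 6), "AI-generated content",
          fill=(255, 255, 255))

image.save("simulated_person_labeled.png")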

For cinematic, artistic, or other creative works, the labeling requirement in this clause must be implemented in an appropriate manner that does not interfere with the display, performance, or audience experience of the work.

Providers and deployers are responsible for maintaining transparency information in accordance with this Article throughout the entire process of delivering systems, products, or content to users.
