AI Content: The Government of India has introduced major new rules for content generated with AI. The rules officially came into effect on February 20. Anyone who shares AI-generated content on social media or any other online platform must now follow the prescribed guidelines.
The IT Ministry has made this amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The government has clarified what will count as AI or synthetic content and what responsibilities social media companies will bear.
Efforts to stop deepfakes
In recent times, deepfakes and fake videos have caused confusion and anxiety in society. Prime Minister Narendra Modi has also expressed concern about the dangers of AI and called transparency and watermarking necessary. Particular emphasis has been placed on extra vigilance over the online safety of children.
What is SGI (Synthetically Generated Information)?
According to the new rules, any photo, video or audio created by AI or computer technology in such a way that it appears to show a real person, place or event will be treated as SGI, i.e. synthetically generated information.
Such content must now carry a clear label or watermark before it is posted, so that the audience knows it was created with AI. Ordinary photo editing or basic filters, however, do not require an AI label.
Three major changes in the new rules
The most important change is that AI content cannot be shared without a label, and once an AI-generated tag has been applied it cannot be removed. Second, social media platforms must develop technical tools that can identify AI content and screen it before it is uploaded. Third, every three months platforms must warn users that misuse of AI can lead to legal action.
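To picture the kind of pre-upload screening the rules envisage, here is a minimal sketch of a check that an upload declaring itself AI-generated also carries the mandatory label. The field names and policy are assumptions for illustration; the rules do not prescribe any particular implementation.

```python
# Illustrative sketch only: a toy pre-upload check.
# The field names ("is_ai_generated", "ai_label_visible") are hypothetical,
# not taken from the actual rules.

def screen_upload(upload: dict) -> str:
    """Return 'accept' or 'label-required' for a declared upload record."""
    is_synthetic = upload.get("is_ai_generated", False)
    has_label = upload.get("ai_label_visible", False)

    if is_synthetic and not has_label:
        # The rules require a clear label/watermark before posting.
        return "label-required"
    return "accept"

print(screen_upload({"is_ai_generated": True, "ai_label_visible": False}))
# label-required
```

In practice a platform would combine such a declaration with automated detection, since users cannot be relied on to self-report synthetic content.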
Which things are completely banned?
The government has drawn a strict line in certain areas. Obscene content involving children, fake documents, illegal content related to arms or ammunition, and deepfake videos or images have been placed in the completely banned category.
If the government directs that any content be removed, the platform concerned must act within three hours; earlier, this time limit was 36 hours. In addition, platforms must respond within 12 hours to violent or obscene material involving children. Platforms must also ensure that a digital record of the origin of AI content exists.
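One simple way to picture the "digital record of origin" requirement is a provenance entry that ties a fingerprint of the content to the tool that produced it and the time of creation. This is purely a sketch under an assumed schema; the rules do not prescribe any particular format.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator: str) -> str:
    """Build a JSON provenance entry for a piece of AI-generated content.
    The schema here is hypothetical, for illustration only."""
    record = {
        # Fingerprint of the exact content bytes, so the record can be
        # matched back to the file it describes.
        "sha256": hashlib.sha256(content).hexdigest(),
        # Name of the tool that produced the content (assumed field).
        "generator": generator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The mandatory AI label, kept alongside the origin record.
        "ai_generated": True,
    }
    return json.dumps(record, indent=2)

print(provenance_record(b"example synthetic image bytes", "example-model-v1"))
```

Real provenance systems embed such records in the media file itself and sign them cryptographically, so the label survives re-sharing; this sketch only shows the shape of the information involved.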
What will happen if the rules are broken?
If any person or organisation violates these AI rules, strict legal action can be taken against them. Depending on the case, action may be taken under the Indian Penal Code, the Information Technology Act, 2000, or the Protection of Children from Sexual Offences Act, 2012.
The government has also clarified that if a platform blocks suspicious AI content using automated tools, this will not be considered a violation of the law; rather, it is the expected course of action.