The government on Saturday reiterated its stance on revisiting the safe harbour clause for social media intermediaries such as X, Telegram, Facebook and Instagram, amid a rise in instances of misinformation and fake news on these platforms.
This assumes significance because, at present, under Section 79 of the Information Technology Act, 2000, the platforms enjoy immunity from legal prosecution for content posted by users. However, if the safe harbour clause is removed or its contours are modified, such platforms will themselves become directly liable for user content and will no longer enjoy legal immunity.
“Shouldn’t platforms operating in a context as complex as India adopt a different set of responsibilities? These pressing questions underline the need for a new framework that ensures accountability and safeguards the social fabric of the nation,” Information and Broadcasting, Electronics and Information Technology Minister Ashwini Vaishnaw said in his address at a National Press Day event.
Vaishnaw added that globally, debates are intensifying over whether the safe harbour provisions are still appropriate, given their role in enabling the spread of misinformation, riots, and even acts of terrorism.
The government mooted reconsidering the safe harbour clause last year during consultations on the Digital India Act, which, once implemented, will replace the decades-old IT Act, 2000. However, the government is yet to issue a draft of the Digital India Bill for public consultation.
In his address, Vaishnaw highlighted three other areas that are concerning and need attention: fair compensation for content creators, algorithmic bias on digital platforms, and the impact of artificial intelligence (AI) on intellectual property.
“The efforts made by the conventional media in creating content should be fairly and suitably compensated,” Vaishnaw said, adding that the shift from traditional to digital media has financially impacted conventional media, which invests heavily in journalistic integrity and editorial processes.
On algorithmic bias, the minister said digital platforms are prioritising content that maximises engagement and incites strong reactions, thereby driving revenue for the platform.
“These often amplify sensational or divisive narratives,” Vaishnaw said, adding that platforms need to come up with solutions that account for the impact their systems have on society.

On intellectual property violations by generative AI platforms, Vaishnaw said these are affecting the creative world, whose work is being used to train AI models without any compensation or acknowledgement.
“AI models today can generate creative content based on the vast datasets they are trained on. But what happens to the rights and recognition of the original creators who contributed to that data? Are they being compensated or acknowledged for their work?” Vaishnaw said, adding that this is not just an economic issue but an ethical one too.