Legal Difficulties in India’s Application of AI

In India, artificial intelligence (AI) and machine learning (ML) are transforming a range of industries, significantly increasing productivity and creativity. But as these technologies proliferate, they also raise distinct ethical and legal issues that demand careful consideration. This essay examines the regulatory and ethical environment surrounding AI and ML in India, highlighting the need for a comprehensive framework to ensure responsible deployment.

India's current legal system was not designed with the subtleties of AI and ML in mind. The laws and guidelines that exist today are disjointed, each addressing a different facet of technology and data usage.

The principal statute is the Information Technology (IT) Act, 2000. Although the Act is centered on electronic transactions and cybercrime in India, several of its provisions bear on AI and ML development in the country. The IT Act does not, however, address the novel characteristics of these technologies, which makes the need for new legislation evident.

The Personal Data Protection (PDP) Bill, 2019, introduced in Parliament, is intended to provide Indians with a proper framework for data protection. Because AI and ML depend heavily on data, this bill is essential. It introduces requirements on where data must be stored and on the conditions for obtaining consent, and its provisions on data breaches place significant responsibilities on companies handling personal data. The bill also seeks to create a Data Protection Authority (DPA), which would be crucial in supervising and enforcing the rules governing the use of data by AI and ML systems.

The National Strategy for Artificial Intelligence, formulated by NITI Aayog (the National Institution for Transforming India) and published in 2018, sets out a roadmap for the strategic implementation of AI in the country. Although it offers valuable recommendations on the creation and use of AI, it carries no legal force. It rests on principles such as ethical AI, privacy and security, digital inclusion, and sustainable and inclusive growth; the crucial difficulty is that binding law giving effect to these principles is hard to come by.

Similar ethical issues arise even before AI and ML are deployed. First, there is the issue of bias and fairness. Demographic bias is a major problem for AI systems: where a system's training data contains prejudice associated with race, gender, or other demographic categories, the system will reproduce that prejudice and produce unfair outcomes. In hiring, for example, a biased recruitment algorithm may be formulated in a way that discriminates against a particular group of people, compromising fairness. To counter this, careful data selection, regular bias audits, and corrective action should be exercised to ensure the fairness of AI systems, and legal frameworks should require transparency and accountability from everyone involved so that bias is reduced to a minimum.
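To make the idea of a bias audit concrete, the following is a minimal sketch of one common check: comparing selection rates across demographic groups in hiring decisions. The data, group labels, and the 0.8 ("four-fifths") flagging threshold are illustrative assumptions, not drawn from any Indian statute or from the strategy discussed above.

```python
# Illustrative bias audit: compare how often each demographic group is
# selected by a hiring algorithm, then compute the ratio of the lowest
# to the highest selection rate (the "disparate impact ratio").

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, hits = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if selected else 0)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, was the candidate selected?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # many audits flag values below ~0.8
```

A legal framework could require operators to run checks of this kind periodically and to document corrective action whenever the ratio falls below an agreed threshold.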

Privacy remains another important ethical issue, and one greatly affected by the deployment of AI and ML systems. Most such systems process large amounts of personal data, posing serious privacy concerns, especially where data protection standards are not well developed. Developers must therefore pay close attention to anonymizing data and storing it securely, and must ensure that the individuals from whom data is collected have given their consent. The PDP Bill, once passed, will play a central role in addressing these concerns.
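As a sketch of what anonymization can look like in practice, the example below pseudonymizes direct identifiers by replacing them with salted hashes, so records can still be linked across a dataset without exposing the underlying identity. The field names and salt are illustrative assumptions; real deployments would follow whatever standards the data protection framework ultimately prescribes.

```python
# Illustrative pseudonymization step applied before personal data enters an
# ML pipeline: direct identifiers are replaced with truncated salted hashes.
import hashlib

SALT = b"replace-with-a-secret-salt"  # assumption: kept secret, outside the dataset

def pseudonymize(record, identifier_fields=("name", "email")):
    """Return a copy of the record with identifier fields replaced by hash tokens."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hashlib.sha256(SALT + out[field].encode()).hexdigest()
            out[field] = digest[:16]  # stable token; original value is not stored
    return out

record = {"name": "A. Example", "email": "a@example.com", "age": 30}
safe = pseudonymize(record)
print(safe)  # non-identifier fields such as "age" pass through unchanged
```

Pseudonymization of this sort is a mitigation rather than full anonymization: with auxiliary data, re-identification can still be possible, which is precisely why statutory standards and oversight matter.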

Transparency and explainability are equally important to the ethical use of AI. AI systems can operate in ways that are difficult to interpret, commonly referred to as "black box" scenarios, making it hard to explain the decisions they reach, particularly in sensitive fields such as healthcare and the judicial system. Rules governing AI should therefore require developers to explain how their systems work and to build in mechanisms for human oversight. Beyond these technical questions, the responsible use of AI also raises broader societal concerns. The use of AI in surveillance, for example, can infringe the public's right to privacy and aggravate the misuse of power.

Autonomous weapons raise ethical questions on a similar scale. It is policymakers' responsibility to enact policies that set standards prioritizing human rights and the general well-being of society.

The lack of targeted legislation on AI and ML is therefore a major issue. Although the current laws offer some protection, they do not sufficiently address the questions raised by AI technologies. Lawmakers need to build a complete legal framework that keeps pace with the development of AI and ML while addressing data protection, the responsible use of algorithms, and the ethical dimensions of the field.

International collaboration is also very important. AI and ML are worldwide developments, and regulating these domains is possible only through international cooperation. India should therefore actively engage with other nations and international organizations to promote standardization and best practice. Participation in platforms such as the Global Partnership on Artificial Intelligence (GPAI) will help with information sharing and the harmonization of policies.

Responsible AI also depends on capacity building and awareness. India must invest in educating legal professionals, policymakers, and developers on AI's ethical and legal framework. Public campaigns can likewise inform citizens of their rights and duties in relation to AI systems.

Lastly, because AI and ML technologies are still developing rapidly, dynamic and adaptive regulation is called for. An adaptable regulatory system must be established that can evolve with the technology. Regulatory sandboxes, in which AI innovations can be piloted, are one example: they enhance understanding both of the innovations and of the right regulation to apply.

On balance, India's opportunities for AI and ML innovation and growth are vast, but they must be managed wisely. Comprehensive regulation, grounded in ethical values, is necessary if AI and ML are to benefit society while risks are reduced. As this essay demonstrates, as India moves forward on its AI journey it must build transparency, accountability, and people-centric principles into its legal and regulatory responses.

Contributed by- Tejas Sikka
O.P. Jindal Global University, B.B.A.LL.B.



The following disclaimer governs the use of this website (“Website”) and the services provided by the Law offices of Kr. Vivek Tanwar Advocate & Associates in accordance with the laws of India. By accessing or using this Website, you acknowledge and agree to the terms and conditions stated in this disclaimer.

The information provided on this Website is for general informational purposes only and should not be considered as legal advice or relied upon as such. The content of this Website is not intended to create, and receipt of it does not constitute, an attorney-client relationship between you and the Law Firm. Any reliance on the information provided on this Website is done at your own risk.

The Law Firm makes no representations or warranties of any kind, express or implied, regarding the accuracy, completeness, reliability, or suitability of the information contained on this Website.

The Law Firm disclaims all liability for any errors or omissions in the content of this Website or for any actions taken in reliance on the information provided herein. The information contained in this Website should not be construed as solicitation of work or as advertisement in any manner.